From patchwork Thu Sep 19 09:25:55 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 59416
To: Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Date: Thu, 19 Sep 2019 14:55:55 +0530
Message-ID: <20190919092603.5485-2-pbhagavatula@marvell.com>
In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com>
References: <20190919092603.5485-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines
List-Id: DPDK patches and discussions

From: Sunil Kumar Kori

Add the default l2fwd poll mode routines similar to examples/l2fwd.
Signed-off-by: Sunil Kumar Kori --- examples/Makefile | 1 + examples/l2fwd-event/Makefile | 57 +++ examples/l2fwd-event/l2fwd_common.h | 26 + examples/l2fwd-event/main.c | 737 ++++++++++++++++++++++++++++ examples/l2fwd-event/meson.build | 12 + examples/l2fwd/main.c | 10 +- 6 files changed, 838 insertions(+), 5 deletions(-) create mode 100644 examples/l2fwd-event/Makefile create mode 100644 examples/l2fwd-event/l2fwd_common.h create mode 100644 examples/l2fwd-event/main.c create mode 100644 examples/l2fwd-event/meson.build diff --git a/examples/Makefile b/examples/Makefile index de11dd487..d18504bd2 100644 --- a/examples/Makefile +++ b/examples/Makefile @@ -34,6 +34,7 @@ endif DIRS-$(CONFIG_RTE_LIBRTE_HASH) += ipv4_multicast DIRS-$(CONFIG_RTE_LIBRTE_KNI) += kni DIRS-y += l2fwd +DIRS-y += l2fwd-event ifneq ($(PQOS_INSTALL_PATH),) DIRS-y += l2fwd-cat endif diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile new file mode 100644 index 000000000..a156c4162 --- /dev/null +++ b/examples/l2fwd-event/Makefile @@ -0,0 +1,57 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. 
+# + +# binary name +APP = l2fwd-event + +# all source are stored in SRCS-y +SRCS-y := main.c + +# Build using pkg-config variables if possible +ifeq ($(shell pkg-config --exists libdpdk && echo 0),0) + +all: shared +.PHONY: shared static +shared: build/$(APP)-shared + ln -sf $(APP)-shared build/$(APP) +static: build/$(APP)-static + ln -sf $(APP)-static build/$(APP) + +PKGCONF=pkg-config --define-prefix + +PC_FILE := $(shell $(PKGCONF) --path libdpdk) +CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk) +LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk) +LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk) + +build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build + $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED) + +build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build + $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC) + +build: + @mkdir -p $@ + +.PHONY: clean +clean: + rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared + test -d build && rmdir -p build || true + +else # Build using legacy build system + +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, detect a build directory, by looking for a path with a .config +RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config))))) + +include $(RTE_SDK)/mk/rte.vars.mk + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +include $(RTE_SDK)/mk/rte.extapp.mk +endif diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h new file mode 100644 index 000000000..b0ef49144 --- /dev/null +++ b/examples/l2fwd-event/l2fwd_common.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __L2FWD_COMMON_H__ +#define __L2FWD_COMMON_H__ + +#define MAX_PKT_BURST 32 +#define MAX_RX_QUEUE_PER_LCORE 16 +#define MAX_TX_QUEUE_PER_PORT 16 + +#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1 + +#define RTE_TEST_RX_DESC_DEFAULT 1024 +#define RTE_TEST_TX_DESC_DEFAULT 1024 + +/* Per-port statistics struct */ +struct l2fwd_port_statistics { + uint64_t dropped; + uint64_t tx; + uint64_t rx; +} __rte_cache_aligned; + +void print_stats(void); + +#endif /* __L2FWD_EVENTDEV_H__ */ diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c new file mode 100644 index 000000000..cc47fa203 --- /dev/null +++ b/examples/l2fwd-event/main.c @@ -0,0 +1,737 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "l2fwd_common.h" + +static volatile bool force_quit; + +/* MAC updating enabled by default */ +static int mac_updating = 1; + +#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ +#define MEMPOOL_CACHE_SIZE 256 + +/* + * Configurable number of RX/TX ring descriptors + */ +static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT; +static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT; + +/* ethernet addresses of ports */ +static struct rte_ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS]; + +/* mask of enabled ports */ +static uint32_t l2fwd_enabled_port_mask; + +/* list of enabled ports */ +static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS]; + +static unsigned int l2fwd_rx_queue_per_lcore = 1; + +struct lcore_queue_conf { + uint32_t rx_port_list[MAX_RX_QUEUE_PER_LCORE]; + uint32_t n_rx_port; +} __rte_cache_aligned; + +static struct 
lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; + +static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS]; + +static struct rte_eth_conf port_conf = { + .rxmode = { + .split_hdr_size = 0, + }, + .txmode = { + .mq_mode = ETH_MQ_TX_NONE, + }, +}; + +static struct rte_mempool *l2fwd_pktmbuf_pool; + +static struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS]; + +#define MAX_TIMER_PERIOD 86400 /* 1 day max */ +/* A tsc-based timer responsible for triggering statistics printout */ +static uint64_t timer_period = 10; /* default period is 10 seconds */ + +/* Print out statistics on packets dropped */ +void print_stats(void) +{ + uint64_t total_packets_dropped, total_packets_tx, total_packets_rx; + uint32_t portid; + + total_packets_dropped = 0; + total_packets_tx = 0; + total_packets_rx = 0; + + const char clr[] = {27, '[', '2', 'J', '\0' }; + const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' }; + + /* Clear screen and move to top left */ + printf("%s%s", clr, topLeft); + + printf("\nPort statistics ===================================="); + + for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) { + /* skip disabled ports */ + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + printf("\nStatistics for port %u ------------------------------" + "\nPackets sent: %24"PRIu64 + "\nPackets received: %20"PRIu64 + "\nPackets dropped: %21"PRIu64, + portid, + port_statistics[portid].tx, + port_statistics[portid].rx, + port_statistics[portid].dropped); + + total_packets_dropped += port_statistics[portid].dropped; + total_packets_tx += port_statistics[portid].tx; + total_packets_rx += port_statistics[portid].rx; + } + printf("\nAggregate statistics ===============================" + "\nTotal packets sent: %18"PRIu64 + "\nTotal packets received: %14"PRIu64 + "\nTotal packets dropped: %15"PRIu64, + total_packets_tx, + total_packets_rx, + total_packets_dropped); + printf("\n====================================================\n"); +} + +static 
void +l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_portid) +{ + struct rte_ether_hdr *eth; + void *tmp; + + eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *); + + /* 02:00:00:00:00:xx */ + tmp = &eth->d_addr.addr_bytes[0]; + *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40); + + /* src addr */ + rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr); +} + +static void +l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid) +{ + uint32_t dst_port; + int32_t sent; + struct rte_eth_dev_tx_buffer *buffer; + + dst_port = l2fwd_dst_ports[portid]; + + if (mac_updating) + l2fwd_mac_updating(m, dst_port); + + buffer = tx_buffer[dst_port]; + sent = rte_eth_tx_buffer(dst_port, 0, buffer, m); + if (sent) + port_statistics[dst_port].tx += sent; +} + +/* main processing loop */ +static void l2fwd_main_loop(void) +{ + uint64_t prev_tsc, diff_tsc, cur_tsc, timer_tsc, drain_tsc; + struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; + struct rte_eth_dev_tx_buffer *buffer; + struct lcore_queue_conf *qconf; + uint32_t i, j, portid, nb_rx; + struct rte_mbuf *m; + uint32_t lcore_id; + int32_t sent; + + drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S * + BURST_TX_DRAIN_US; + prev_tsc = 0; + timer_tsc = 0; + + lcore_id = rte_lcore_id(); + qconf = &lcore_queue_conf[lcore_id]; + + if (qconf->n_rx_port == 0) { + RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id); + return; + } + + RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id); + + for (i = 0; i < qconf->n_rx_port; i++) { + + portid = qconf->rx_port_list[i]; + RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id, + portid); + + } + + while (!force_quit) { + + cur_tsc = rte_rdtsc(); + + /* + * TX burst queue drain + */ + diff_tsc = cur_tsc - prev_tsc; + if (unlikely(diff_tsc > drain_tsc)) { + for (i = 0; i < qconf->n_rx_port; i++) { + portid = + l2fwd_dst_ports[qconf->rx_port_list[i]]; + buffer = tx_buffer[portid]; + sent = rte_eth_tx_buffer_flush(portid, 0, +
buffer); + if (sent) + port_statistics[portid].tx += sent; + } + + /* if timer is enabled */ + if (timer_period > 0) { + /* advance the timer */ + timer_tsc += diff_tsc; + + /* if timer has reached its timeout */ + if (unlikely(timer_tsc >= timer_period)) { + /* do this only on master core */ + if (lcore_id == + rte_get_master_lcore()) { + print_stats(); + /* reset the timer */ + timer_tsc = 0; + } + } + } + + prev_tsc = cur_tsc; + } + + /* + * Read packet from RX queues + */ + for (i = 0; i < qconf->n_rx_port; i++) { + + portid = qconf->rx_port_list[i]; + nb_rx = rte_eth_rx_burst(portid, 0, + pkts_burst, MAX_PKT_BURST); + + port_statistics[portid].rx += nb_rx; + + for (j = 0; j < nb_rx; j++) { + m = pkts_burst[j]; + rte_prefetch0(rte_pktmbuf_mtod(m, void *)); + l2fwd_simple_forward(m, portid); + } + } + } +} + +static int +l2fwd_launch_one_lcore(void *args) +{ + RTE_SET_USED(args); + l2fwd_main_loop(); + + return 0; +} + +/* display usage */ +static void +l2fwd_usage(const char *prgname) +{ + printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n" + " -p PORTMASK: hexadecimal bitmask of ports to configure\n" + " -q NQ: number of queue (=ports) per lcore (default is 1)\n" + " -T PERIOD: statistics will be refreshed each PERIOD seconds " + " (0 to disable, 10 default, 86400 maximum)\n" + " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n" + " When enabled:\n" + " - The source MAC address is replaced by the TX port MAC address\n" + " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n", + prgname); +} + +static int +l2fwd_parse_portmask(const char *portmask) +{ + char *end = NULL; + unsigned long pm; + + /* parse hexadecimal string */ + pm = strtoul(portmask, &end, 16); + if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0')) + return -1; + + if (pm == 0) + return -1; + + return pm; +} + +static unsigned int +l2fwd_parse_nqueue(const char *q_arg) +{ + char *end = NULL; + unsigned long n; + + /* parse 
decimal string */ + n = strtoul(q_arg, &end, 10); + if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0')) + return 0; + if (n == 0) + return 0; + if (n >= MAX_RX_QUEUE_PER_LCORE) + return 0; + + return n; +} + +static int +l2fwd_parse_timer_period(const char *q_arg) +{ + char *end = NULL; + int n; + + /* parse number string */ + n = strtol(q_arg, &end, 10); + if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0')) + return -1; + if (n >= MAX_TIMER_PERIOD) + return -1; + + return n; +} + +static const char short_options[] = + "p:" /* portmask */ + "q:" /* number of queues */ + "T:" /* timer period */ + ; + +#define CMD_LINE_OPT_MAC_UPDATING "mac-updating" +#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating" + +enum { + /* long options mapped to a short option */ + + /* first long only option value must be >= 256, so that we won't + * conflict with short options + */ + CMD_LINE_OPT_MIN_NUM = 256, +}; + +static const struct option lgopts[] = { + { CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1}, + { CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0}, + {NULL, 0, 0, 0} +}; + +/* Parse the argument given in the command line of the application */ +static int +l2fwd_parse_args(int argc, char **argv) +{ + int opt, ret, timer_secs; + char *prgname = argv[0]; + char **argvopt; + int option_index; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, short_options, + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* portmask */ + case 'p': + l2fwd_enabled_port_mask = l2fwd_parse_portmask(optarg); + if (l2fwd_enabled_port_mask == 0) { + printf("invalid portmask\n"); + l2fwd_usage(prgname); + return -1; + } + break; + + /* nqueue */ + case 'q': + l2fwd_rx_queue_per_lcore = l2fwd_parse_nqueue(optarg); + if (l2fwd_rx_queue_per_lcore == 0) { + printf("invalid queue number\n"); + l2fwd_usage(prgname); + return -1; + } + break; + + /* timer period */ + case 'T': + timer_secs = l2fwd_parse_timer_period(optarg); + if (timer_secs <
0) { + printf("invalid timer period\n"); + l2fwd_usage(prgname); + return -1; + } + timer_period = timer_secs; + break; + + /* long options */ + case 0: + break; + + default: + l2fwd_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* Check the link status of all ports in up to 9s, and print them finally */ +static void +check_all_ports_link_status(uint32_t port_mask) +{ +#define CHECK_INTERVAL 100 /* 100ms */ +#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */ + uint16_t portid; + uint8_t count, all_ports_up, print_flag = 0; + struct rte_eth_link link; + + printf("\nChecking link status..."); + fflush(stdout); + for (count = 0; count <= MAX_CHECK_TIME; count++) { + if (force_quit) + return; + all_ports_up = 1; + RTE_ETH_FOREACH_DEV(portid) { + if (force_quit) + return; + if ((port_mask & (1 << portid)) == 0) + continue; + memset(&link, 0, sizeof(link)); + rte_eth_link_get_nowait(portid, &link); + /* print link status if flag set */ + if (print_flag == 1) { + if (link.link_status) + printf( + "Port%d Link Up. Speed %u Mbps - %s\n", + portid, link.link_speed, + (link.link_duplex == ETH_LINK_FULL_DUPLEX) ? 
+ ("full-duplex") : ("half-duplex\n")); + else + printf("Port %d Link Down\n", portid); + continue; + } + /* clear all_ports_up flag if any link down */ + if (link.link_status == ETH_LINK_DOWN) { + all_ports_up = 0; + break; + } + } + /* after finally printing all link status, get out */ + if (print_flag == 1) + break; + + if (all_ports_up == 0) { + printf("."); + fflush(stdout); + rte_delay_ms(CHECK_INTERVAL); + } + + /* set the print_flag if all ports up or timeout */ + if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) { + print_flag = 1; + printf("done\n"); + } + } +} + +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + } +} + +int +main(int argc, char **argv) +{ + uint16_t nb_ports_available = 0; + struct lcore_queue_conf *qconf; + uint32_t nb_ports_in_mask = 0; + uint16_t portid, last_port; + uint32_t nb_lcores = 0; + uint32_t rx_lcore_id; + uint32_t nb_mbufs; + uint16_t nb_ports; + int ret; + + /* init EAL */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n"); + argc -= ret; + argv += ret; + + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* parse application arguments (after the EAL ones) */ + ret = l2fwd_parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n"); + + printf("MAC updating %s\n", mac_updating ? 
"enabled" : "disabled"); + + /* convert to number of cycles */ + timer_period *= rte_get_timer_hz(); + + nb_ports = rte_eth_dev_count_avail(); + if (nb_ports == 0) + rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n"); + + /* check port mask to possible port mask */ + if (l2fwd_enabled_port_mask & ~((1 << nb_ports) - 1)) + rte_exit(EXIT_FAILURE, "Invalid portmask; possible (0x%x)\n", + (1 << nb_ports) - 1); + + /* reset l2fwd_dst_ports */ + for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) + l2fwd_dst_ports[portid] = 0; + last_port = 0; + + /* + * Each logical core is assigned a dedicated TX queue on each port. + */ + RTE_ETH_FOREACH_DEV(portid) { + /* skip ports that are not enabled */ + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + + if (nb_ports_in_mask % 2) { + l2fwd_dst_ports[portid] = last_port; + l2fwd_dst_ports[last_port] = portid; + } else { + last_port = portid; + } + + nb_ports_in_mask++; + } + if (nb_ports_in_mask % 2) { + printf("Notice: odd number of ports in portmask.\n"); + l2fwd_dst_ports[last_port] = last_port; + } + + + rx_lcore_id = 0; + qconf = NULL; + + nb_mbufs = RTE_MAX(nb_ports * (nb_rxd + nb_txd + MAX_PKT_BURST + + nb_lcores * MEMPOOL_CACHE_SIZE), 8192U); + + /* create the mbuf pool */ + l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs, + MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, + rte_socket_id()); + if (l2fwd_pktmbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n"); + + /* Initialize the port/queue configuration of each logical core */ + RTE_ETH_FOREACH_DEV(portid) { + /* skip ports that are not enabled */ + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + + /* get the lcore_id for this port */ + while (rte_lcore_is_enabled(rx_lcore_id) == 0 || + lcore_queue_conf[rx_lcore_id].n_rx_port == + l2fwd_rx_queue_per_lcore) { + rx_lcore_id++; + if (rx_lcore_id >= RTE_MAX_LCORE) + rte_exit(EXIT_FAILURE, "Not enough cores\n"); + } + + if (qconf != 
&lcore_queue_conf[rx_lcore_id]) { + /* Assigned a new logical core in the loop above. */ + qconf = &lcore_queue_conf[rx_lcore_id]; + nb_lcores++; + } + + qconf->rx_port_list[qconf->n_rx_port] = portid; + qconf->n_rx_port++; + printf("Lcore %u: RX port %u\n", rx_lcore_id, portid); + } + + + /* Initialise each port */ + RTE_ETH_FOREACH_DEV(portid) { + struct rte_eth_rxconf rxq_conf; + struct rte_eth_txconf txq_conf; + struct rte_eth_conf local_port_conf = port_conf; + struct rte_eth_dev_info dev_info; + + /* skip ports that are not enabled */ + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) { + printf("Skipping disabled port %u\n", portid); + continue; + } + nb_ports_available++; + + /* init port */ + printf("Initializing port %u... ", portid); + fflush(stdout); + rte_eth_dev_info_get(portid, &dev_info); + if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE) + local_port_conf.txmode.offloads |= + DEV_TX_OFFLOAD_MBUF_FAST_FREE; + ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n", + ret, portid); + + ret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd, + &nb_txd); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot adjust number of descriptors: err=%d, port=%u\n", + ret, portid); + + rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]); + + /* init one RX queue */ + fflush(stdout); + rxq_conf = dev_info.default_rxconf; + rxq_conf.offloads = local_port_conf.rxmode.offloads; + ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd, + rte_eth_dev_socket_id(portid), + &rxq_conf, + l2fwd_pktmbuf_pool); + if (ret < 0) + rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n", + ret, portid); + + /* init one TX queue on each port */ + fflush(stdout); + txq_conf = dev_info.default_txconf; + txq_conf.offloads = local_port_conf.txmode.offloads; + ret = rte_eth_tx_queue_setup(portid, 0, nb_txd, + rte_eth_dev_socket_id(portid), + &txq_conf); + if (ret < 0) + 
rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n", + ret, portid); + + /* Initialize TX buffers */ + tx_buffer[portid] = rte_zmalloc_socket("tx_buffer", + RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0, + rte_eth_dev_socket_id(portid)); + if (tx_buffer[portid] == NULL) + rte_exit(EXIT_FAILURE, "Cannot allocate buffer for tx on port %u\n", + portid); + + rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST); + + ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid], + rte_eth_tx_buffer_count_callback, + &port_statistics[portid].dropped); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot set error callback for tx buffer on port %u\n", + portid); + + /* Start device */ + ret = rte_eth_dev_start(portid); + if (ret < 0) + rte_exit(EXIT_FAILURE, "rte_eth_dev_start:err=%d, port=%u\n", + ret, portid); + + printf("done:\n"); + + rte_eth_promiscuous_enable(portid); + + printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n", + portid, + l2fwd_ports_eth_addr[portid].addr_bytes[0], + l2fwd_ports_eth_addr[portid].addr_bytes[1], + l2fwd_ports_eth_addr[portid].addr_bytes[2], + l2fwd_ports_eth_addr[portid].addr_bytes[3], + l2fwd_ports_eth_addr[portid].addr_bytes[4], + l2fwd_ports_eth_addr[portid].addr_bytes[5]); + + /* initialize port stats */ + memset(&port_statistics, 0, sizeof(port_statistics)); + } + + if (!nb_ports_available) { + rte_exit(EXIT_FAILURE, + "All available ports are disabled. 
Please set portmask.\n"); + } + + check_all_ports_link_status(l2fwd_enabled_port_mask); + + ret = 0; + /* launch per-lcore init on every lcore */ + rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL, + CALL_MASTER); + rte_eal_mp_wait_lcore(); + + RTE_ETH_FOREACH_DEV(portid) { + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + printf("Closing port %d...", portid); + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } + printf("Bye...\n"); + + return ret; +} diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build new file mode 100644 index 000000000..16eadb0b4 --- /dev/null +++ b/examples/l2fwd-event/meson.build @@ -0,0 +1,12 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. +# + +# meson file, for building this example as part of a main DPDK build. +# +# To build this example as a standalone application with an already-installed +# DPDK instance, use 'make' + +sources = files( + 'main.c' +) diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c index 1e2b14297..f6d3d2cd7 100644 --- a/examples/l2fwd/main.c +++ b/examples/l2fwd/main.c @@ -294,11 +294,11 @@ l2fwd_usage(const char *prgname) printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n" " -p PORTMASK: hexadecimal bitmask of ports to configure\n" " -q NQ: number of queue (=ports) per lcore (default is 1)\n" - " -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default, 86400 maximum)\n" - " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n" - " When enabled:\n" - " - The source MAC address is replaced by the TX port MAC address\n" - " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n", + " -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default, 86400 maximum)\n" + " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n" + " When enabled:\n" + " - The 
source MAC address is replaced by the TX port MAC address\n" + " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n", prgname); }

From patchwork Thu Sep 19 09:25:56 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 59417
To: Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Date: Thu, 19 Sep 2019 14:55:56 +0530
Message-ID: <20190919092603.5485-3-pbhagavatula@marvell.com>
In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com>
References: <20190919092603.5485-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2 02/10] examples/l2fwd-event: add infra for eventdev
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

From: Sunil Kumar Kori

Add infra to select event device as a mode to process packets through command line arguments. Also, allow the user to select the schedule type to be either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
Usage: `--mode="eventdev"` or `--mode="poll"` `--eventq-sync="ordered"` or `--eventq-sync="atomic"` Signed-off-by: Sunil Kumar Kori --- examples/l2fwd-event/Makefile | 1 + examples/l2fwd-event/l2fwd_eventdev.c | 107 ++++++++++++++++++++++++++ examples/l2fwd-event/l2fwd_eventdev.h | 62 +++++++++++++++ examples/l2fwd-event/main.c | 22 +++++- examples/l2fwd-event/meson.build | 3 +- 5 files changed, 192 insertions(+), 3 deletions(-) create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile index a156c4162..bfe0058a2 100644 --- a/examples/l2fwd-event/Makefile +++ b/examples/l2fwd-event/Makefile @@ -7,6 +7,7 @@ APP = l2fwd-event # all source are stored in SRCS-y SRCS-y := main.c +SRCS-y += l2fwd_eventdev.c # Build using pkg-config variables if possible ifeq ($(shell pkg-config --exists libdpdk && echo 0),0) diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c new file mode 100644 index 000000000..19efb6d1e --- /dev/null +++ b/examples/l2fwd-event/l2fwd_eventdev.c @@ -0,0 +1,107 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "l2fwd_common.h" +#include "l2fwd_eventdev.h" + +static void +parse_mode(const char *optarg) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + + if (!strncmp(optarg, "poll", 4)) + eventdev_rsrc->enabled = false; + else if (!strncmp(optarg, "eventdev", 8)) + eventdev_rsrc->enabled = true; +} + +static void +parse_eventq_sync(const char *optarg) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + + if (!strncmp(optarg, "ordered", 7)) + eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ORDERED; + else if (!strncmp(optarg, "atomic", 6)) + eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ATOMIC; +} + +static int +parse_eventdev_args(char **argv, int argc) +{ + const struct option eventdev_lgopts[] = { + {CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM}, + {CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM}, + {NULL, 0, 0, 0} + }; + char **argvopt = argv; + int32_t option_index; + int32_t opt; + + while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts, + &option_index)) != EOF) { + switch (opt) { + case CMD_LINE_OPT_MODE_NUM: + parse_mode(optarg); + break; + + case CMD_LINE_OPT_EVENTQ_SYNC_NUM: + parse_eventq_sync(optarg); + break; + + case '?': + /* skip other parameters except eventdev specific */ + break; + + default: + printf("Invalid eventdev parameter\n"); + return -1; + } + } + + return 0; +} + +void +eventdev_resource_setup(void) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + uint32_t service_id; + int32_t ret; + + /* Parse eventdev command line options */ + ret = parse_eventdev_args(eventdev_rsrc->args, eventdev_rsrc->nb_args); + if (ret < 0) + return; + + if (!rte_event_dev_count()) + rte_exit(EXIT_FAILURE, "No Eventdev found"); + /* Start event device service */ + ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id, + &service_id); + if (ret != -ESRCH && ret 
!= 0) + rte_exit(EXIT_FAILURE, "Error in starting eventdev"); + + rte_service_runstate_set(service_id, 1); + rte_service_set_runstate_mapped_check(service_id, 0); + eventdev_rsrc->service_id = service_id; + + /* Start event device */ + ret = rte_event_dev_start(eventdev_rsrc->event_d_id); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error in starting eventdev"); +} diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h new file mode 100644 index 000000000..f823cf6e9 --- /dev/null +++ b/examples/l2fwd-event/l2fwd_eventdev.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __L2FWD_EVENTDEV_H__ +#define __L2FWD_EVENTDEV_H__ + +#include +#include + +#include "l2fwd_common.h" + +#define CMD_LINE_OPT_MODE "mode" +#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sync" + +enum { + CMD_LINE_OPT_MODE_NUM = 265, + CMD_LINE_OPT_EVENTQ_SYNC_NUM, +}; + +struct eventdev_resources { + struct l2fwd_port_statistics *stats; + struct rte_mempool *pkt_pool; + uint64_t timer_period; + uint32_t *dst_ports; + uint32_t service_id; + uint32_t port_mask; + volatile bool *done; + uint8_t event_d_id; + uint8_t sync_mode; + uint8_t tx_mode_q; + uint8_t mac_updt; + uint8_t enabled; + uint8_t nb_args; + char **args; +}; + +static inline struct eventdev_resources * +get_eventdev_rsrc(void) +{ + const char name[RTE_MEMZONE_NAMESIZE] = "l2fwd_event_rsrc"; + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(name); + + if (mz != NULL) + return mz->addr; + + mz = rte_memzone_reserve(name, sizeof(struct eventdev_resources), 0, 0); + if (mz != NULL) { + memset(mz->addr, 0, sizeof(struct eventdev_resources)); + return mz->addr; + } + + rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n"); + + return NULL; +} + +void eventdev_resource_setup(void); + +#endif /* __L2FWD_EVENTDEV_H__ */ diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c index cc47fa203..661f0833f 
100644 --- a/examples/l2fwd-event/main.c +++ b/examples/l2fwd-event/main.c @@ -42,6 +42,7 @@ #include #include "l2fwd_common.h" +#include "l2fwd_eventdev.h" static volatile bool force_quit; @@ -288,7 +289,12 @@ l2fwd_usage(const char *prgname) " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n" " When enabled:\n" " - The source MAC address is replaced by the TX port MAC address\n" - " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n", + " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n" + " --mode: Packet transfer mode for I/O, poll or eventdev\n" + " Default mode = eventdev\n" + " --eventq-sync:Event queue synchronization method,\n" + " ordered or atomic.\nDefault: atomic\n" + " Valid only if --mode=eventdev\n\n", prgname); } @@ -503,6 +509,7 @@ signal_handler(int signum) int main(int argc, char **argv) { + struct eventdev_resources *eventdev_rsrc; uint16_t nb_ports_available = 0; struct lcore_queue_conf *qconf; uint32_t nb_ports_in_mask = 0; @@ -524,6 +531,7 @@ main(int argc, char **argv) signal(SIGINT, signal_handler); signal(SIGTERM, signal_handler); + eventdev_rsrc = get_eventdev_rsrc(); /* parse application arguments (after the EAL ones) */ ret = l2fwd_parse_args(argc, argv); if (ret < 0) @@ -584,6 +592,17 @@ main(int argc, char **argv) if (l2fwd_pktmbuf_pool == NULL) rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n"); + eventdev_rsrc->port_mask = l2fwd_enabled_port_mask; + eventdev_rsrc->pkt_pool = l2fwd_pktmbuf_pool; + eventdev_rsrc->dst_ports = l2fwd_dst_ports; + eventdev_rsrc->timer_period = timer_period; + eventdev_rsrc->mac_updt = mac_updating; + eventdev_rsrc->stats = port_statistics; + eventdev_rsrc->done = &force_quit; + + /* Configure eventdev parameters if user has requested */ + eventdev_resource_setup(); + /* Initialize the port/queue configuration of each logical core */ RTE_ETH_FOREACH_DEV(portid) { /* skip ports that are not enabled */ @@ -610,7 +629,6 @@ 
main(int argc, char **argv) printf("Lcore %u: RX port %u\n", rx_lcore_id, portid); } - /* Initialise each port */ RTE_ETH_FOREACH_DEV(portid) { struct rte_eth_rxconf rxq_conf; diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build index 16eadb0b4..b1ad48cc5 100644 --- a/examples/l2fwd-event/meson.build +++ b/examples/l2fwd-event/meson.build @@ -8,5 +8,6 @@ # DPDK instance, use 'make' sources = files( - 'main.c' + 'main.c', + 'l2fwd_eventdev.c' ) From patchwork Thu Sep 19 09:25:57 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59418
Date: Thu, 19 Sep 2019 14:55:57 +0530 Message-ID: <20190919092603.5485-4-pbhagavatula@marvell.com> In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v2 03/10] examples/l2fwd-event: add infra to split eventdev framework From: Pavan Nikhilesh Add infra to split the eventdev framework based on the event Tx adapter capability. If the event Tx adapter has the internal port capability, use `rte_event_eth_tx_adapter_enqueue` to transmit packets; otherwise, use a SINGLE_LINK event queue to enqueue packets to a service core, which is responsible for transmitting them.
Signed-off-by: Sunil Kumar Kori Signed-off-by: Pavan Nikhilesh --- examples/l2fwd-event/Makefile | 2 ++ examples/l2fwd-event/l2fwd_eventdev.c | 29 +++++++++++++++++++ examples/l2fwd-event/l2fwd_eventdev.h | 20 +++++++++++++ examples/l2fwd-event/l2fwd_eventdev_generic.c | 24 +++++++++++++++ .../l2fwd_eventdev_internal_port.c | 24 +++++++++++++++ examples/l2fwd-event/meson.build | 4 ++- 6 files changed, 102 insertions(+), 1 deletion(-) create mode 100644 examples/l2fwd-event/l2fwd_eventdev_generic.c create mode 100644 examples/l2fwd-event/l2fwd_eventdev_internal_port.c diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile index bfe0058a2..c1f700a65 100644 --- a/examples/l2fwd-event/Makefile +++ b/examples/l2fwd-event/Makefile @@ -8,6 +8,8 @@ APP = l2fwd-event # all source are stored in SRCS-y SRCS-y := main.c SRCS-y += l2fwd_eventdev.c +SRCS-y += l2fwd_eventdev_internal_port.c +SRCS-y += l2fwd_eventdev_generic.c # Build using pkg-config variables if possible ifeq ($(shell pkg-config --exists libdpdk && echo 0),0) diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c index 19efb6d1e..df76f1c1f 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.c +++ b/examples/l2fwd-event/l2fwd_eventdev.c @@ -76,6 +76,31 @@ parse_eventdev_args(char **argv, int argc) return 0; } +static void +eventdev_capability_setup(void) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + uint32_t caps = 0; + uint16_t i; + int ret; + + RTE_ETH_FOREACH_DEV(i) { + ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps); + if (ret) + rte_exit(EXIT_FAILURE, + "Invalid capability for Tx adptr port %d\n", + i); + + eventdev_rsrc->tx_mode_q |= !(caps & + RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT); + } + + if (eventdev_rsrc->tx_mode_q) + eventdev_set_generic_ops(&eventdev_rsrc->ops); + else + eventdev_set_internal_port_ops(&eventdev_rsrc->ops); +} + void eventdev_resource_setup(void) { @@ -90,6 +115,10 @@ eventdev_resource_setup(void) 
if (!rte_event_dev_count()) rte_exit(EXIT_FAILURE, "No Eventdev found"); + + /* Setup eventdev capability callbacks */ + eventdev_capability_setup(); + /* Start event device service */ ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id, &service_id); diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h index f823cf6e9..717f688ce 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.h +++ b/examples/l2fwd-event/l2fwd_eventdev.h @@ -18,8 +18,26 @@ enum { CMD_LINE_OPT_EVENTQ_SYNC_NUM, }; +typedef void (*event_queue_setup_cb)(uint16_t ethdev_count, + uint32_t event_queue_cfg); +typedef uint32_t (*eventdev_setup_cb)(uint16_t ethdev_count); +typedef void (*adapter_setup_cb)(uint16_t ethdev_count); +typedef void (*event_port_setup_cb)(void); +typedef void (*service_setup_cb)(void); +typedef void (*event_loop_cb)(void); + +struct eventdev_setup_ops { + event_queue_setup_cb event_queue_setup; + event_port_setup_cb event_port_setup; + eventdev_setup_cb eventdev_setup; + adapter_setup_cb adapter_setup; + service_setup_cb service_setup; + event_loop_cb l2fwd_event_loop; +}; + struct eventdev_resources { struct l2fwd_port_statistics *stats; + struct eventdev_setup_ops ops; struct rte_mempool *pkt_pool; uint64_t timer_period; uint32_t *dst_ports; @@ -58,5 +76,7 @@ get_eventdev_rsrc(void) } void eventdev_resource_setup(void); +void eventdev_set_generic_ops(struct eventdev_setup_ops *ops); +void eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops); #endif /* __L2FWD_EVENTDEV_H__ */ diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c new file mode 100644 index 000000000..e3990f8b0 --- /dev/null +++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "l2fwd_common.h" +#include "l2fwd_eventdev.h" + +void +eventdev_set_generic_ops(struct eventdev_setup_ops *ops) +{ + RTE_SET_USED(ops); +} diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c new file mode 100644 index 000000000..a0d2111f9 --- /dev/null +++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "l2fwd_common.h" +#include "l2fwd_eventdev.h" + +void +eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops) +{ + RTE_SET_USED(ops); +} diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build index b1ad48cc5..38560840c 100644 --- a/examples/l2fwd-event/meson.build +++ b/examples/l2fwd-event/meson.build @@ -9,5 +9,7 @@ sources = files( 'main.c', - 'l2fwd_eventdev.c' + 'l2fwd_eventdev.c', + 'l2fwd_eventdev_internal_port.c', + 'l2fwd_eventdev_generic.c' ) From patchwork Thu Sep 19 09:25:58 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59419
Date: Thu, 19 Sep 2019 14:55:58 +0530 Message-ID: <20190919092603.5485-5-pbhagavatula@marvell.com> In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v2 04/10] examples/l2fwd-event: add eth port setup for eventdev
From: Sunil Kumar Kori Add ethernet port Rx/Tx queue setup for the event device; these queues are later used to set up the event eth Rx/Tx adapters. Signed-off-by: Sunil Kumar Kori --- examples/l2fwd-event/l2fwd_eventdev.c | 114 ++++++++++++++++++++++++++ examples/l2fwd-event/l2fwd_eventdev.h | 1 + examples/l2fwd-event/main.c | 17 ++++ 3 files changed, 132 insertions(+) diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c index df76f1c1f..0d0d3b8b9 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.c +++ b/examples/l2fwd-event/l2fwd_eventdev.c @@ -18,6 +18,14 @@ #include "l2fwd_common.h" #include "l2fwd_eventdev.h" +static void +print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr) +{ + char buf[RTE_ETHER_ADDR_FMT_SIZE]; + rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr); + printf("%s%s", name, buf); +} + static void parse_mode(const char *optarg) { @@ -76,6 +84,108 @@ parse_eventdev_args(char **argv, int argc) return 0; } +static void +eth_dev_port_setup(uint16_t ethdev_count __rte_unused) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + static struct rte_eth_conf port_config = { + .rxmode = { + .mq_mode = ETH_MQ_RX_RSS, + .max_rx_pkt_len = RTE_ETHER_MAX_LEN, + .split_hdr_size = 0, + .offloads = DEV_RX_OFFLOAD_CHECKSUM + }, + .rx_adv_conf = { + .rss_conf = { + .rss_key = NULL, + .rss_hf = ETH_RSS_IP, + } + }, + .txmode = { + .mq_mode = ETH_MQ_TX_NONE, + } + }; + struct rte_eth_conf local_port_conf; + struct rte_eth_dev_info dev_info; + struct rte_eth_txconf txconf; + struct rte_eth_rxconf rxconf; + uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT; + uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT; + uint16_t port_id; + int32_t ret; + + /* initialize all ports */ + RTE_ETH_FOREACH_DEV(port_id) {
local_port_conf = port_config; + /* skip ports that are not enabled */ + if ((eventdev_rsrc->port_mask & (1 << port_id)) == 0) { + printf("\nSkipping disabled port %d\n", port_id); + continue; + } + + /* init port */ + printf("Initializing port %d ... ", port_id); + fflush(stdout); + rte_eth_dev_info_get(port_id, &dev_info); + if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE) + local_port_conf.txmode.offloads |= + DEV_TX_OFFLOAD_MBUF_FAST_FREE; + + local_port_conf.rx_adv_conf.rss_conf.rss_hf &= + dev_info.flow_type_rss_offloads; + if (local_port_conf.rx_adv_conf.rss_conf.rss_hf != + port_config.rx_adv_conf.rss_conf.rss_hf) { + printf("Port %u modified RSS hash function " + "based on hardware support," + "requested:%#"PRIx64" configured:%#"PRIx64"\n", + port_id, + port_config.rx_adv_conf.rss_conf.rss_hf, + local_port_conf.rx_adv_conf.rss_conf.rss_hf); + } + + ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot configure device: err=%d, port=%d\n", + ret, port_id); + + ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, + &nb_txd); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot adjust number of descriptors: err=%d, " + "port=%d\n", ret, port_id); + + rte_eth_macaddr_get(port_id, + &eventdev_rsrc->ports_eth_addr[port_id]); + print_ethaddr(" Address:", + &eventdev_rsrc->ports_eth_addr[port_id]); + printf("\n"); + + + /* init one Rx queue per port */ + rxconf = dev_info.default_rxconf; + rxconf.offloads = local_port_conf.rxmode.offloads; + ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0, &rxconf, + eventdev_rsrc->pkt_pool); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_rx_queue_setup: err=%d, " + "port=%d\n", ret, port_id); + + /* init one Tx queue per port */ + txconf = dev_info.default_txconf; + txconf.offloads = local_port_conf.txmode.offloads; + ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_tx_queue_setup: err=%d, 
" + "port=%d\n", ret, port_id); + + rte_eth_promiscuous_enable(port_id); + } +} + static void eventdev_capability_setup(void) { @@ -105,6 +215,7 @@ void eventdev_resource_setup(void) { struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + uint16_t ethdev_count = rte_eth_dev_count_avail(); uint32_t service_id; int32_t ret; @@ -119,6 +230,9 @@ eventdev_resource_setup(void) /* Setup eventdev capability callbacks */ eventdev_capability_setup(); + /* Ethernet device configuration */ + eth_dev_port_setup(ethdev_count); + /* Start event device service */ ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id, &service_id); diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h index 717f688ce..cc0bdd1ad 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.h +++ b/examples/l2fwd-event/l2fwd_eventdev.h @@ -51,6 +51,7 @@ struct eventdev_resources { uint8_t enabled; uint8_t nb_args; char **args; + struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS]; }; static inline struct eventdev_resources * diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c index 661f0833f..f24bdd4a4 100644 --- a/examples/l2fwd-event/main.c +++ b/examples/l2fwd-event/main.c @@ -602,6 +602,22 @@ main(int argc, char **argv) /* Configure eventdev parameters if user has requested */ eventdev_resource_setup(); + if (eventdev_rsrc->enabled) { + /* All settings are done. Now enable eth devices */ + RTE_ETH_FOREACH_DEV(portid) { + /* skip ports that are not enabled */ + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + + ret = rte_eth_dev_start(portid); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_dev_start:err=%d, port=%u\n", + ret, portid); + } + + goto skip_port_config; + } /* Initialize the port/queue configuration of each logical core */ RTE_ETH_FOREACH_DEV(portid) { @@ -733,6 +749,7 @@ main(int argc, char **argv) "All available ports are disabled. 
Please set portmask.\n"); } +skip_port_config: check_all_ports_link_status(l2fwd_enabled_port_mask); ret = 0; From patchwork Thu Sep 19 09:25:59 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59420
Date: Thu, 19 Sep 2019 14:55:59 +0530 Message-ID: <20190919092603.5485-6-pbhagavatula@marvell.com> In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v2 05/10] examples/l2fwd-event: add eventdev queue and port setup From: Pavan Nikhilesh Add event device queue and port setup based on event eth Tx adapter capabilities.
Signed-off-by: Sunil Kumar Kori Signed-off-by: Pavan Nikhilesh --- examples/l2fwd-event/l2fwd_eventdev.c | 10 + examples/l2fwd-event/l2fwd_eventdev.h | 18 ++ examples/l2fwd-event/l2fwd_eventdev_generic.c | 179 +++++++++++++++++- .../l2fwd_eventdev_internal_port.c | 173 ++++++++++++++++- 4 files changed, 378 insertions(+), 2 deletions(-) diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c index 0d0d3b8b9..7a3d077ae 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.c +++ b/examples/l2fwd-event/l2fwd_eventdev.c @@ -216,6 +216,7 @@ eventdev_resource_setup(void) { struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); uint16_t ethdev_count = rte_eth_dev_count_avail(); + uint32_t event_queue_cfg = 0; uint32_t service_id; int32_t ret; @@ -233,6 +234,15 @@ eventdev_resource_setup(void) /* Ethernet device configuration */ eth_dev_port_setup(ethdev_count); + /* Event device configuration */ + event_queue_cfg = eventdev_rsrc->ops.eventdev_setup(ethdev_count); + + /* Event queue configuration */ + eventdev_rsrc->ops.event_queue_setup(ethdev_count, event_queue_cfg); + + /* Event port configuration */ + eventdev_rsrc->ops.event_port_setup(); + /* Start event device service */ ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id, &service_id); diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h index cc0bdd1ad..7646ef29f 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.h +++ b/examples/l2fwd-event/l2fwd_eventdev.h @@ -26,6 +26,17 @@ typedef void (*event_port_setup_cb)(void); typedef void (*service_setup_cb)(void); typedef void (*event_loop_cb)(void); +struct eventdev_queues { + uint8_t *event_q_id; + uint8_t nb_queues; +}; + +struct eventdev_ports { + uint8_t *event_p_id; + uint8_t nb_ports; + rte_spinlock_t lock; +}; + struct eventdev_setup_ops { event_queue_setup_cb event_queue_setup; event_port_setup_cb event_port_setup; @@ -36,9 +47,14 @@ struct eventdev_setup_ops { }; struct 
eventdev_resources { + struct rte_event_port_conf def_p_conf; struct l2fwd_port_statistics *stats; + /* Default port config. */ + uint8_t disable_implicit_release; struct eventdev_setup_ops ops; struct rte_mempool *pkt_pool; + struct eventdev_queues evq; + struct eventdev_ports evp; uint64_t timer_period; uint32_t *dst_ports; uint32_t service_id; @@ -47,6 +63,8 @@ struct eventdev_resources { uint8_t event_d_id; uint8_t sync_mode; uint8_t tx_mode_q; + uint8_t deq_depth; + uint8_t has_burst; uint8_t mac_updt; uint8_t enabled; uint8_t nb_args; diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c index e3990f8b0..65166fded 100644 --- a/examples/l2fwd-event/l2fwd_eventdev_generic.c +++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c @@ -17,8 +17,185 @@ #include "l2fwd_common.h" #include "l2fwd_eventdev.h" +static uint32_t +eventdev_setup_generic(uint16_t ethdev_count) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + struct rte_event_dev_config event_d_conf = { + .nb_events_limit = 4096, + .nb_event_queue_flows = 1024, + .nb_event_port_dequeue_depth = 128, + .nb_event_port_enqueue_depth = 128 + }; + struct rte_event_dev_info dev_info; + const uint8_t event_d_id = 0; /* Always use first event device only */ + uint32_t event_queue_cfg = 0; + uint16_t num_workers = 0; + int ret; + + /* Event device configuration */ + rte_event_dev_info_get(event_d_id, &dev_info); + eventdev_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap & + RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE); + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES) + event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + /* One queue for each ethdev port + one Tx adapter Single link queue.
*/ + event_d_conf.nb_event_queues = ethdev_count + 1; + if (dev_info.max_event_queues < event_d_conf.nb_event_queues) + event_d_conf.nb_event_queues = dev_info.max_event_queues; + + if (dev_info.max_num_events < event_d_conf.nb_events_limit) + event_d_conf.nb_events_limit = dev_info.max_num_events; + + if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows) + event_d_conf.nb_event_queue_flows = + dev_info.max_event_queue_flows; + + if (dev_info.max_event_port_dequeue_depth < + event_d_conf.nb_event_port_dequeue_depth) + event_d_conf.nb_event_port_dequeue_depth = + dev_info.max_event_port_dequeue_depth; + + if (dev_info.max_event_port_enqueue_depth < + event_d_conf.nb_event_port_enqueue_depth) + event_d_conf.nb_event_port_enqueue_depth = + dev_info.max_event_port_enqueue_depth; + + num_workers = rte_lcore_count() - rte_service_lcore_count(); + if (dev_info.max_event_ports < num_workers) + num_workers = dev_info.max_event_ports; + + event_d_conf.nb_event_ports = num_workers; + eventdev_rsrc->evp.nb_ports = num_workers; + + eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap & + RTE_EVENT_DEV_CAP_BURST_MODE); + + ret = rte_event_dev_configure(event_d_id, &event_d_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error in configuring event device"); + + eventdev_rsrc->event_d_id = event_d_id; + return event_queue_cfg; +} + +static void +event_port_setup_generic(void) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + uint8_t event_d_id = eventdev_rsrc->event_d_id; + struct rte_event_port_conf event_p_conf = { + .dequeue_depth = 32, + .enqueue_depth = 32, + .new_event_threshold = 4096 + }; + struct rte_event_port_conf def_p_conf; + uint8_t event_p_id; + int32_t ret; + + /* Service cores are not used to run worker thread */ + eventdev_rsrc->evp.nb_ports = eventdev_rsrc->evp.nb_ports; + eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->evp.nb_ports); + if (!eventdev_rsrc->evp.event_p_id) + 
rte_exit(EXIT_FAILURE, " No space is available"); + + memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf)); + rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf); + + if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold) + event_p_conf.new_event_threshold = + def_p_conf.new_event_threshold; + + if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth) + event_p_conf.dequeue_depth = def_p_conf.dequeue_depth; + + if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth) + event_p_conf.enqueue_depth = def_p_conf.enqueue_depth; + + event_p_conf.disable_implicit_release = + eventdev_rsrc->disable_implicit_release; + eventdev_rsrc->deq_depth = def_p_conf.dequeue_depth; + + for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports; + event_p_id++) { + ret = rte_event_port_setup(event_d_id, event_p_id, + &event_p_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event port %d\n", + event_p_id); + } + + ret = rte_event_port_link(event_d_id, event_p_id, + eventdev_rsrc->evq.event_q_id, + NULL, + eventdev_rsrc->evq.nb_queues - 1); + if (ret != (eventdev_rsrc->evq.nb_queues - 1)) { + rte_exit(EXIT_FAILURE, "Error in linking event port %d " + "to event queue", event_p_id); + } + eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id; + } + /* init spinlock */ + rte_spinlock_init(&eventdev_rsrc->evp.lock); + + eventdev_rsrc->def_p_conf = event_p_conf; +} + +static void +event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + uint8_t event_d_id = eventdev_rsrc->event_d_id; + struct rte_event_queue_conf event_q_conf = { + .nb_atomic_flows = 1024, + .nb_atomic_order_sequences = 1024, + .event_queue_cfg = event_queue_cfg, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL + }; + struct rte_event_queue_conf def_q_conf; + uint8_t event_q_id; + int32_t ret; + + event_q_conf.schedule_type = eventdev_rsrc->sync_mode; + eventdev_rsrc->evq.nb_queues 
= ethdev_count + 1; + eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->evq.nb_queues); + if (!eventdev_rsrc->evq.event_q_id) + rte_exit(EXIT_FAILURE, "Memory allocation failure"); + + rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf); + if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows) + event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows; + + for (event_q_id = 0; event_q_id < (eventdev_rsrc->evq.nb_queues - 1); + event_q_id++) { + ret = rte_event_queue_setup(event_d_id, event_q_id, + &event_q_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event queue"); + } + eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; + } + + event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK; + event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST; + ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event queue for Tx adapter"); + } + eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; +} + void eventdev_set_generic_ops(struct eventdev_setup_ops *ops) { - RTE_SET_USED(ops); + ops->eventdev_setup = eventdev_setup_generic; + ops->event_queue_setup = event_queue_setup_generic; + ops->event_port_setup = event_port_setup_generic; } diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c index a0d2111f9..52cb07707 100644 --- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c +++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c @@ -17,8 +17,179 @@ #include "l2fwd_common.h" #include "l2fwd_eventdev.h" +static uint32_t +eventdev_setup_internal_port(uint16_t ethdev_count) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + struct rte_event_dev_config event_d_conf = { + .nb_events_limit = 4096, + .nb_event_queue_flows = 1024, + .nb_event_port_dequeue_depth = 128, + .nb_event_port_enqueue_depth = 128 + }; + struct
rte_event_dev_info dev_info; + uint8_t disable_implicit_release; + const uint8_t event_d_id = 0; /* Always use first event device only */ + uint32_t event_queue_cfg = 0; + uint16_t num_workers = 0; + int ret; + + /* Event device configuration */ + rte_event_dev_info_get(event_d_id, &dev_info); + + disable_implicit_release = !!(dev_info.event_dev_cap & + RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE); + eventdev_rsrc->disable_implicit_release = + disable_implicit_release; + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES) + event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + event_d_conf.nb_event_queues = ethdev_count; + if (dev_info.max_event_queues < event_d_conf.nb_event_queues) + event_d_conf.nb_event_queues = dev_info.max_event_queues; + + if (dev_info.max_num_events < event_d_conf.nb_events_limit) + event_d_conf.nb_events_limit = dev_info.max_num_events; + + if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows) + event_d_conf.nb_event_queue_flows = + dev_info.max_event_queue_flows; + + if (dev_info.max_event_port_dequeue_depth < + event_d_conf.nb_event_port_dequeue_depth) + event_d_conf.nb_event_port_dequeue_depth = + dev_info.max_event_port_dequeue_depth; + + if (dev_info.max_event_port_enqueue_depth < + event_d_conf.nb_event_port_enqueue_depth) + event_d_conf.nb_event_port_enqueue_depth = + dev_info.max_event_port_enqueue_depth; + + num_workers = rte_lcore_count(); + if (dev_info.max_event_ports < num_workers) + num_workers = dev_info.max_event_ports; + + event_d_conf.nb_event_ports = num_workers; + eventdev_rsrc->evp.nb_ports = num_workers; + eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap & + RTE_EVENT_DEV_CAP_BURST_MODE); + + ret = rte_event_dev_configure(event_d_id, &event_d_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error in configuring event device"); + + eventdev_rsrc->event_d_id = event_d_id; + return event_queue_cfg; +} + +static void +event_port_setup_internal_port(void) +{ + struct eventdev_resources
*eventdev_rsrc = get_eventdev_rsrc(); + uint8_t event_d_id = eventdev_rsrc->event_d_id; + struct rte_event_port_conf event_p_conf = { + .dequeue_depth = 32, + .enqueue_depth = 32, + .new_event_threshold = 4096 + }; + struct rte_event_port_conf def_p_conf; + uint8_t event_p_id; + int32_t ret; + + eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->evp.nb_ports); + if (!eventdev_rsrc->evp.event_p_id) + rte_exit(EXIT_FAILURE, + "Failed to allocate memory for Event Ports"); + + rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf); + if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold) + event_p_conf.new_event_threshold = + def_p_conf.new_event_threshold; + + if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth) + event_p_conf.dequeue_depth = def_p_conf.dequeue_depth; + + if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth) + event_p_conf.enqueue_depth = def_p_conf.enqueue_depth; + + event_p_conf.disable_implicit_release = + eventdev_rsrc->disable_implicit_release; + + for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports; + event_p_id++) { + ret = rte_event_port_setup(event_d_id, event_p_id, + &event_p_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event port %d\n", + event_p_id); + } + + ret = rte_event_port_link(event_d_id, event_p_id, NULL, + NULL, 0); + if (ret < 0) { + rte_exit(EXIT_FAILURE, "Error in linking event port %d " + "to event queue", event_p_id); + } + eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id; + + /* init spinlock */ + rte_spinlock_init(&eventdev_rsrc->evp.lock); + } + + eventdev_rsrc->def_p_conf = event_p_conf; +} + +static void +event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + uint8_t event_d_id = eventdev_rsrc->event_d_id; + struct rte_event_queue_conf event_q_conf = { + .nb_atomic_flows = 1024, + .nb_atomic_order_sequences = 1024, + 
.event_queue_cfg = event_queue_cfg, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL + }; + struct rte_event_queue_conf def_q_conf; + uint8_t event_q_id = 0; + int32_t ret; + + rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf); + + if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows) + event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows; + + if (def_q_conf.nb_atomic_order_sequences < + event_q_conf.nb_atomic_order_sequences) + event_q_conf.nb_atomic_order_sequences = + def_q_conf.nb_atomic_order_sequences; + + event_q_conf.event_queue_cfg = event_queue_cfg; + event_q_conf.schedule_type = eventdev_rsrc->sync_mode; + eventdev_rsrc->evq.nb_queues = ethdev_count; + eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->evq.nb_queues); + if (!eventdev_rsrc->evq.event_q_id) + rte_exit(EXIT_FAILURE, "Memory allocation failure"); + + for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) { + ret = rte_event_queue_setup(event_d_id, event_q_id, + &event_q_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event queue"); + } + eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; + } +} + void eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops) { - RTE_SET_USED(ops); + ops->eventdev_setup = eventdev_setup_internal_port; + ops->event_queue_setup = event_queue_setup_internal_port; + ops->event_port_setup = event_port_setup_internal_port; } From patchwork Thu Sep 19 09:26:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59421 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 638491ED49; Thu, 19 Sep 2019 11:26:35 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP 
id 303351ED46; Thu, 19 Sep 2019 11:26:34 +0200 (CEST) From: To: , , , Marko Kovacevic , Ori Kam , Radu Nicolau , Tomasz Kantecki , Sunil Kumar Kori , "Pavan Nikhilesh" CC: Date: Thu, 19 Sep 2019 14:56:00 +0530 Message-ID: <20190919092603.5485-7-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com> References: <20190919092603.5485-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version:
vendor=fsecure engine=2.50.10434:6.0.70,1.0.8 definitions=2019-09-19_03:2019-09-18,2019-09-19 signatures=0 Subject: [dpdk-dev] [PATCH v2 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add event eth Rx/Tx adapter setup for both generic and internal port event device pipelines. Signed-off-by: Sunil Kumar Kori Signed-off-by: Pavan Nikhilesh --- examples/l2fwd-event/l2fwd_eventdev.c | 3 + examples/l2fwd-event/l2fwd_eventdev.h | 17 +++ examples/l2fwd-event/l2fwd_eventdev_generic.c | 117 ++++++++++++++++++ .../l2fwd_eventdev_internal_port.c | 80 ++++++++++++ 4 files changed, 217 insertions(+) diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c index 7a3d077ae..f964c69d6 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.c +++ b/examples/l2fwd-event/l2fwd_eventdev.c @@ -243,6 +243,9 @@ eventdev_resource_setup(void) /* Event port configuration */ eventdev_rsrc->ops.event_port_setup(); + /* Rx/Tx adapters configuration */ + eventdev_rsrc->ops.adapter_setup(ethdev_count); + /* Start event device service */ ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id, &service_id); diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h index 7646ef29f..95ff37294 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.h +++ b/examples/l2fwd-event/l2fwd_eventdev.h @@ -6,6 +6,9 @@ #define __L2FWD_EVENTDEV_H__ #include +#include +#include +#include #include #include "l2fwd_common.h" @@ -37,6 +40,18 @@ struct eventdev_ports { rte_spinlock_t lock; }; +struct eventdev_rx_adptr { + uint32_t service_id; + uint8_t nb_rx_adptr; + uint8_t *rx_adptr; +}; + +struct eventdev_tx_adptr { + uint32_t service_id; + uint8_t nb_tx_adptr; + uint8_t *tx_adptr; +}; + struct 
eventdev_setup_ops { event_queue_setup_cb event_queue_setup; event_port_setup_cb event_port_setup; @@ -50,6 +65,8 @@ struct eventdev_resources { struct rte_event_port_conf def_p_conf; struct l2fwd_port_statistics *stats; /* Default port config. */ + struct eventdev_rx_adptr rx_adptr; + struct eventdev_tx_adptr tx_adptr; uint8_t disable_implicit_release; struct eventdev_setup_ops ops; struct rte_mempool *pkt_pool; diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c index 65166fded..68b63279a 100644 --- a/examples/l2fwd-event/l2fwd_eventdev_generic.c +++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c @@ -192,10 +192,127 @@ event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg) eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; } +static void +rx_tx_adapter_setup_generic(uint16_t ethdev_count) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = { + .rx_queue_flags = 0, + .ev = { + .queue_id = 0, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + } + }; + uint8_t event_d_id = eventdev_rsrc->event_d_id; + uint8_t rx_adptr_id = 0; + uint8_t tx_adptr_id = 0; + uint8_t tx_port_id = 0; + uint32_t service_id; + int32_t ret, i; + + /* Rx adapter setup */ + eventdev_rsrc->rx_adptr.nb_rx_adptr = 1; + eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->rx_adptr.nb_rx_adptr); + if (!eventdev_rsrc->rx_adptr.rx_adptr) { + free(eventdev_rsrc->evp.event_p_id); + free(eventdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memory for Rx adapter"); + } + + ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id, + &eventdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, "failed to create rx adapter"); + + eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode; + for (i = 0; i < ethdev_count; i++) { + /* Configure user requested sync mode */ + eth_q_conf.ev.queue_id =
eventdev_rsrc->evq.event_q_id[i]; + ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, i, -1, + &eth_q_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "Failed to add queues to Rx adapter"); + } + + ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id); + if (ret != -ESRCH && ret != 0) { + rte_exit(EXIT_FAILURE, + "Error getting the service ID for rx adptr\n"); + } + + rte_service_runstate_set(service_id, 1); + rte_service_set_runstate_mapped_check(service_id, 0); + eventdev_rsrc->rx_adptr.service_id = service_id; + + ret = rte_event_eth_rx_adapter_start(rx_adptr_id); + if (ret) + rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed", + rx_adptr_id); + + eventdev_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id; + + /* Tx adapter setup */ + eventdev_rsrc->tx_adptr.nb_tx_adptr = 1; + eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->tx_adptr.nb_tx_adptr); + if (!eventdev_rsrc->tx_adptr.tx_adptr) { + free(eventdev_rsrc->rx_adptr.rx_adptr); + free(eventdev_rsrc->evp.event_p_id); + free(eventdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memory for Tx adapter"); + } + + ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id, + &eventdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, "failed to create tx adapter"); + + for (i = 0; i < ethdev_count; i++) { + ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, i, -1); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to add queues to Tx adapter"); + } + + ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id); + if (ret != -ESRCH && ret != 0) + rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID"); + + rte_service_runstate_set(service_id, 1); + rte_service_set_runstate_mapped_check(service_id, 0); + eventdev_rsrc->tx_adptr.service_id = service_id; + + ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id); + if (ret) + rte_exit(EXIT_FAILURE, + "Failed to get Tx adapter port id: %d\n", ret); + + ret
= rte_event_port_link(event_d_id, tx_port_id, + &eventdev_rsrc->evq.event_q_id[ + eventdev_rsrc->evq.nb_queues - 1], + NULL, 1); + if (ret != 1) + rte_exit(EXIT_FAILURE, + "Unable to link Tx adapter port to Tx queue:err = %d", + ret); + + ret = rte_event_eth_tx_adapter_start(tx_adptr_id); + if (ret) + rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed", + tx_adptr_id); + + eventdev_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id; +} + void eventdev_set_generic_ops(struct eventdev_setup_ops *ops) { ops->eventdev_setup = eventdev_setup_generic; ops->event_queue_setup = event_queue_setup_generic; ops->event_port_setup = event_port_setup_generic; + ops->adapter_setup = rx_tx_adapter_setup_generic; } diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c index 52cb07707..02663242f 100644 --- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c +++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c @@ -186,10 +186,90 @@ event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg) } } +static void +rx_tx_adapter_setup_internal_port(uint16_t ethdev_count) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = { + .rx_queue_flags = 0, + .ev = { + .queue_id = 0, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + } + }; + uint8_t event_d_id = eventdev_rsrc->event_d_id; + int32_t ret, i; + + eventdev_rsrc->rx_adptr.nb_rx_adptr = ethdev_count; + eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->rx_adptr.nb_rx_adptr); + if (!eventdev_rsrc->rx_adptr.rx_adptr) { + free(eventdev_rsrc->evp.event_p_id); + free(eventdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memery for Rx adapter"); + } + + for (i = 0; i < ethdev_count; i++) { + ret = rte_event_eth_rx_adapter_create(i, event_d_id, + &eventdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to create rx 
adapter[%d]", i); + + /* Configure user requested sync mode */ + eth_q_conf.ev.queue_id = eventdev_rsrc->evq.event_q_id[i]; + eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode; + ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &eth_q_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "Failed to add queues to Rx adapter"); + + ret = rte_event_eth_rx_adapter_start(i); + if (ret) + rte_exit(EXIT_FAILURE, + "Rx adapter[%d] start failed", i); + + eventdev_rsrc->rx_adptr.rx_adptr[i] = i; + } + + eventdev_rsrc->tx_adptr.nb_tx_adptr = ethdev_count; + eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + eventdev_rsrc->tx_adptr.nb_tx_adptr); + if (!eventdev_rsrc->tx_adptr.tx_adptr) { + free(eventdev_rsrc->rx_adptr.rx_adptr); + free(eventdev_rsrc->evp.event_p_id); + free(eventdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memory for Tx adapter"); + } + + for (i = 0; i < ethdev_count; i++) { + ret = rte_event_eth_tx_adapter_create(i, event_d_id, + &eventdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to create tx adapter[%d]", i); + + ret = rte_event_eth_tx_adapter_queue_add(i, i, -1); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to add queues to Tx adapter"); + + ret = rte_event_eth_tx_adapter_start(i); + if (ret) + rte_exit(EXIT_FAILURE, + "Tx adapter[%d] start failed", i); + + eventdev_rsrc->tx_adptr.tx_adptr[i] = i; + } +} + void eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops) { ops->eventdev_setup = eventdev_setup_internal_port; ops->event_queue_setup = event_queue_setup_internal_port; ops->event_port_setup = event_port_setup_internal_port; + ops->adapter_setup = rx_tx_adapter_setup_internal_port; } From patchwork Thu Sep 19 09:26:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59422 Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from
[92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3FE5F1ED56; Thu, 19 Sep 2019 11:26:39 +0200 (CEST) From: To: , , , Marko Kovacevic , Ori Kam , Radu Nicolau , Tomasz Kantecki , Sunil Kumar Kori , "Pavan Nikhilesh" CC: Date: Thu, 19 Sep 2019 14:56:01 +0530 Message-ID:
<20190919092603.5485-8-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com> References: <20190919092603.5485-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.70,1.0.8 definitions=2019-09-19_03:2019-09-18,2019-09-19 signatures=0 Subject: [dpdk-dev] [PATCH v2 07/10] examples/l2fwd-event: add service core setup X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sunil Kumar Kori Add service core setup when eventdev and Rx/Tx adapter don't have internal port capability. Signed-off-by: Sunil Kumar Kori --- examples/l2fwd-event/l2fwd_eventdev_generic.c | 31 +++++++++++++++++++ .../l2fwd_eventdev_internal_port.c | 6 ++++ examples/l2fwd-event/main.c | 2 ++ 3 files changed, 39 insertions(+) diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c index 68b63279a..e1e603052 100644 --- a/examples/l2fwd-event/l2fwd_eventdev_generic.c +++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c @@ -17,6 +17,36 @@ #include "l2fwd_common.h" #include "l2fwd_eventdev.h" +static void +eventdev_service_setup_generic(void) +{ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + uint32_t lcore_id[RTE_MAX_LCORE] = {0}; + int32_t req_service_cores = 3; + int32_t avail_service_cores; + + avail_service_cores = rte_service_lcore_list(lcore_id, RTE_MAX_LCORE); + if (avail_service_cores < req_service_cores) { + rte_exit(EXIT_FAILURE, "Not enough service cores are present." + " Required = %d Available = %d", + req_service_cores, avail_service_cores); + } + + /* Start eventdev scheduler service */ + rte_service_map_lcore_set(eventdev_rsrc->service_id, lcore_id[0], 1); + rte_service_lcore_start(lcore_id[0]); + + /* Start eventdev Rx
adapter service */ + rte_service_map_lcore_set(eventdev_rsrc->rx_adptr.service_id, + lcore_id[1], 1); + rte_service_lcore_start(lcore_id[1]); + + /* Start eventdev Tx adapter service */ + rte_service_map_lcore_set(eventdev_rsrc->tx_adptr.service_id, + lcore_id[2], 1); + rte_service_lcore_start(lcore_id[2]); +} + static uint32_t eventdev_setup_generic(uint16_t ethdev_count) { @@ -315,4 +345,5 @@ eventdev_set_generic_ops(struct eventdev_setup_ops *ops) ops->event_queue_setup = event_queue_setup_generic; ops->event_port_setup = event_port_setup_generic; ops->adapter_setup = rx_tx_adapter_setup_generic; + ops->service_setup = eventdev_service_setup_generic; } diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c index 02663242f..39fcb4326 100644 --- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c +++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c @@ -265,6 +265,11 @@ rx_tx_adapter_setup_internal_port(uint16_t ethdev_count) } } +static void +eventdev_service_setup_internal_port(void) +{ +} + void eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops) { @@ -272,4 +277,5 @@ eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops) ops->event_queue_setup = event_queue_setup_internal_port; ops->event_port_setup = event_port_setup_internal_port; ops->adapter_setup = rx_tx_adapter_setup_internal_port; + ops->service_setup = eventdev_service_setup_internal_port; } diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c index f24bdd4a4..09c86d2cd 100644 --- a/examples/l2fwd-event/main.c +++ b/examples/l2fwd-event/main.c @@ -616,6 +616,8 @@ main(int argc, char **argv) ret, portid); } + /* Now start internal services */ + eventdev_rsrc->ops.service_setup(); goto skip_port_config; } From patchwork Thu Sep 19 09:26:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula 
X-Patchwork-Id: 59423 From: To: , , , Marko Kovacevic , Ori Kam , Radu Nicolau
, Tomasz Kantecki , Sunil Kumar Kori , "Pavan Nikhilesh" CC: Date: Thu, 19 Sep 2019 14:56:02 +0530 Message-ID: <20190919092603.5485-9-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com> References: <20190919092603.5485-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.70,1.0.8 definitions=2019-09-19_03:2019-09-18,2019-09-19 signatures=0 Subject: [dpdk-dev] [PATCH v2 08/10] examples/l2fwd-event: add eventdev main loop X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add event dev main loop based on enabled l2fwd options and eventdev capabilities. Signed-off-by: Pavan Nikhilesh --- examples/l2fwd-event/l2fwd_eventdev.c | 273 ++++++++++++++++++++++++++ examples/l2fwd-event/main.c | 10 +- 2 files changed, 280 insertions(+), 3 deletions(-) diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c index f964c69d6..345d9d15b 100644 --- a/examples/l2fwd-event/l2fwd_eventdev.c +++ b/examples/l2fwd-event/l2fwd_eventdev.c @@ -18,6 +18,12 @@ #include "l2fwd_common.h" #include "l2fwd_eventdev.h" +#define L2FWD_EVENT_SINGLE 0x1 +#define L2FWD_EVENT_BURST 0x2 +#define L2FWD_EVENT_TX_DIRECT 0x4 +#define L2FWD_EVENT_TX_ENQ 0x8 +#define L2FWD_EVENT_UPDT_MAC 0x10 + static void print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr) { @@ -211,10 +217,272 @@ eventdev_capability_setup(void) eventdev_set_internal_port_ops(&eventdev_rsrc->ops); } +static __rte_noinline int +get_free_event_port(struct eventdev_resources *eventdev_rsrc) +{ + static int index; + int port_id; + + rte_spinlock_lock(&eventdev_rsrc->evp.lock); + if (index >= eventdev_rsrc->evp.nb_ports) { + printf("No free event port is available\n"); + 
rte_spinlock_unlock(&eventdev_rsrc->evp.lock); + return -1; + } + + port_id = eventdev_rsrc->evp.event_p_id[index]; + index++; + rte_spinlock_unlock(&eventdev_rsrc->evp.lock); + + return port_id; +} + +static __rte_always_inline void +l2fwd_event_updt_mac(struct rte_mbuf *m, const struct rte_ether_addr *dst_mac, + uint8_t dst_port) +{ + struct rte_ether_hdr *eth; + void *tmp; + + eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *); + + /* 02:00:00:00:00:xx */ + tmp = &eth->d_addr.addr_bytes[0]; + *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40); + + /* src addr */ + rte_ether_addr_copy(dst_mac, &eth->s_addr); +} + +static __rte_always_inline void +l2fwd_event_loop_single(struct eventdev_resources *eventdev_rsrc, + const uint32_t flags) +{ + const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id(); + const uint64_t timer_period = eventdev_rsrc->timer_period; + uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0; + const int port_id = get_free_event_port(eventdev_rsrc); + const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[ + eventdev_rsrc->evq.nb_queues - 1]; + const uint8_t event_d_id = eventdev_rsrc->event_d_id; + volatile bool *done = eventdev_rsrc->done; + struct rte_mbuf *mbuf; + uint16_t dst_port; + struct rte_event ev; + + if (port_id < 0) + return; + + printf("%s(): entering eventdev main loop on lcore %u\n", __func__, + rte_lcore_id()); + + while (!*done) { + /* if timer is enabled */ + if (is_master && timer_period > 0) { + cur_tsc = rte_rdtsc(); + diff_tsc = cur_tsc - prev_tsc; + + /* advance the timer */ + timer_tsc += diff_tsc; + + /* if timer has reached its timeout */ + if (unlikely(timer_tsc >= timer_period)) { + print_stats(); + /* reset the timer */ + timer_tsc = 0; + } + prev_tsc = cur_tsc; + } + + /* Read packet from eventdev */ + if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0)) + continue; + + + mbuf = ev.mbuf; + dst_port = eventdev_rsrc->dst_ports[mbuf->port]; + rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *)); + + if (timer_period > 0) + 
__atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].rx, + 1, __ATOMIC_RELAXED); + + mbuf->port = dst_port; + if (flags & L2FWD_EVENT_UPDT_MAC) + l2fwd_event_updt_mac(mbuf, + &eventdev_rsrc->ports_eth_addr[dst_port], + dst_port); + + if (flags & L2FWD_EVENT_TX_ENQ) { + ev.queue_id = tx_q_id; + ev.op = RTE_EVENT_OP_FORWARD; + while (rte_event_enqueue_burst(event_d_id, port_id, + &ev, 1) && !*done) + ; + } + + if (flags & L2FWD_EVENT_TX_DIRECT) { + rte_event_eth_tx_adapter_txq_set(mbuf, 0); + while (!rte_event_eth_tx_adapter_enqueue(event_d_id, + port_id, + &ev, 1) && + !*done) + ; + } + + if (timer_period > 0) + __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].tx, + 1, __ATOMIC_RELAXED); + } +} + +static __rte_always_inline void +l2fwd_event_loop_burst(struct eventdev_resources *eventdev_rsrc, + const uint32_t flags) +{ + const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id(); + const uint64_t timer_period = eventdev_rsrc->timer_period; + uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0; + const int port_id = get_free_event_port(eventdev_rsrc); + const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[ + eventdev_rsrc->evq.nb_queues - 1]; + const uint8_t event_d_id = eventdev_rsrc->event_d_id; + const uint8_t deq_len = eventdev_rsrc->deq_depth; + volatile bool *done = eventdev_rsrc->done; + struct rte_event ev[MAX_PKT_BURST]; + struct rte_mbuf *mbuf; + uint16_t nb_rx, nb_tx; + uint16_t dst_port; + uint8_t i; + + if (port_id < 0) + return; + + printf("%s(): entering eventdev main loop on lcore %u\n", __func__, + rte_lcore_id()); + + while (!*done) { + /* if timer is enabled */ + if (is_master && timer_period > 0) { + cur_tsc = rte_rdtsc(); + diff_tsc = cur_tsc - prev_tsc; + + /* advance the timer */ + timer_tsc += diff_tsc; + + /* if timer has reached its timeout */ + if (unlikely(timer_tsc >= timer_period)) { + print_stats(); + /* reset the timer */ + timer_tsc = 0; + } + prev_tsc = cur_tsc; + } + + /* Read packet from eventdev */ + nb_rx = 
rte_event_dequeue_burst(event_d_id, port_id, ev, + deq_len, 0); + if (nb_rx == 0) + continue; + + + for (i = 0; i < nb_rx; i++) { + mbuf = ev[i].mbuf; + dst_port = eventdev_rsrc->dst_ports[mbuf->port]; + rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *)); + + if (timer_period > 0) { + __atomic_fetch_add( + &eventdev_rsrc->stats[mbuf->port].rx, + 1, __ATOMIC_RELAXED); + __atomic_fetch_add( + &eventdev_rsrc->stats[mbuf->port].tx, + 1, __ATOMIC_RELAXED); + } + mbuf->port = dst_port; + if (flags & L2FWD_EVENT_UPDT_MAC) + l2fwd_event_updt_mac(mbuf, + &eventdev_rsrc->ports_eth_addr[ + dst_port], + dst_port); + + if (flags & L2FWD_EVENT_TX_ENQ) { + ev[i].queue_id = tx_q_id; + ev[i].op = RTE_EVENT_OP_FORWARD; + } + + if (flags & L2FWD_EVENT_TX_DIRECT) + rte_event_eth_tx_adapter_txq_set(mbuf, 0); + + } + + if (flags & L2FWD_EVENT_TX_ENQ) { + nb_tx = rte_event_enqueue_burst(event_d_id, port_id, + ev, nb_rx); + while (nb_tx < nb_rx && !*done) + nb_tx += rte_event_enqueue_burst(event_d_id, + port_id, ev + nb_tx, + nb_rx - nb_tx); + } + + if (flags & L2FWD_EVENT_TX_DIRECT) { + nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id, + port_id, ev, + nb_rx); + while (nb_tx < nb_rx && !*done) + nb_tx += rte_event_eth_tx_adapter_enqueue( + event_d_id, port_id, + ev + nb_tx, nb_rx - nb_tx); + } + } +} + +static __rte_always_inline void +l2fwd_event_loop(struct eventdev_resources *eventdev_rsrc, + const uint32_t flags) +{ + if (flags & L2FWD_EVENT_SINGLE) + l2fwd_event_loop_single(eventdev_rsrc, flags); + if (flags & L2FWD_EVENT_BURST) + l2fwd_event_loop_burst(eventdev_rsrc, flags); +} + +#define L2FWD_EVENT_MODE \ +FP(tx_d, 0, 0, 0, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE) \ +FP(tx_d_burst, 0, 0, 1, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST) \ +FP(tx_q, 0, 1, 0, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE) \ +FP(tx_q_burst, 0, 1, 1, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST) \ +FP(tx_d_mac, 1, 0, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \ + L2FWD_EVENT_SINGLE) \ +FP(tx_d_brst_mac, 1, 
0, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \ + L2FWD_EVENT_BURST) \ +FP(tx_q_mac, 1, 1, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \ + L2FWD_EVENT_SINGLE) \ +FP(tx_q_brst_mac, 1, 1, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \ + L2FWD_EVENT_BURST) + + +#define FP(_name, _f3, _f2, _f1, flags) \ +static void __rte_noinline \ +l2fwd_event_main_loop_ ## _name(void) \ +{ \ + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); \ + l2fwd_event_loop(eventdev_rsrc, flags); \ +} + +L2FWD_EVENT_MODE +#undef FP + void eventdev_resource_setup(void) { struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); + /* [MAC_UPDT][TX_MODE][BURST] */ + const event_loop_cb event_loop[2][2][2] = { +#define FP(_name, _f3, _f2, _f1, flags) \ + [_f3][_f2][_f1] = l2fwd_event_main_loop_ ## _name, + L2FWD_EVENT_MODE +#undef FP + }; uint16_t ethdev_count = rte_eth_dev_count_avail(); uint32_t event_queue_cfg = 0; uint32_t service_id; @@ -260,4 +528,9 @@ eventdev_resource_setup(void) ret = rte_event_dev_start(eventdev_rsrc->event_d_id); if (ret < 0) rte_exit(EXIT_FAILURE, "Error in starting eventdev"); + + eventdev_rsrc->ops.l2fwd_event_loop = event_loop + [eventdev_rsrc->mac_updt] + [eventdev_rsrc->tx_mode_q] + [eventdev_rsrc->has_burst]; } diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c index 09c86d2cd..56487809b 100644 --- a/examples/l2fwd-event/main.c +++ b/examples/l2fwd-event/main.c @@ -271,8 +271,12 @@ static void l2fwd_main_loop(void) static int l2fwd_launch_one_lcore(void *args) { - RTE_SET_USED(args); - l2fwd_main_loop(); + struct eventdev_resources *eventdev_rsrc = args; + + if (eventdev_rsrc->enabled) + eventdev_rsrc->ops.l2fwd_event_loop(); + else + l2fwd_main_loop(); return 0; } @@ -756,7 +760,7 @@ main(int argc, char **argv) ret = 0; /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL, + rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc, CALL_MASTER); 
rte_eal_mp_wait_lcore();

From patchwork Thu Sep 19 09:26:03 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 59424
Date: Thu, 19 Sep 2019 14:56:03 +0530
Message-ID: <20190919092603.5485-10-pbhagavatula@marvell.com>
In-Reply-To: <20190919092603.5485-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2 09/10] examples/l2fwd-event: add graceful teardown

From: Pavan Nikhilesh

Add graceful teardown that addresses both event mode and poll mode.
Signed-off-by: Pavan Nikhilesh --- examples/l2fwd-event/main.c | 44 +++++++++++++++++++++++++++++-------- 1 file changed, 35 insertions(+), 9 deletions(-) diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c index 56487809b..057381b29 100644 --- a/examples/l2fwd-event/main.c +++ b/examples/l2fwd-event/main.c @@ -522,7 +522,7 @@ main(int argc, char **argv) uint32_t rx_lcore_id; uint32_t nb_mbufs; uint16_t nb_ports; - int ret; + int i, ret; /* init EAL */ ret = rte_eal_init(argc, argv); @@ -762,15 +762,41 @@ main(int argc, char **argv) /* launch per-lcore init on every lcore */ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc, CALL_MASTER); - rte_eal_mp_wait_lcore(); + if (eventdev_rsrc->enabled) { + for (i = 0; i < eventdev_rsrc->rx_adptr.nb_rx_adptr; i++) + rte_event_eth_rx_adapter_stop( + eventdev_rsrc->rx_adptr.rx_adptr[i]); + for (i = 0; i < eventdev_rsrc->tx_adptr.nb_tx_adptr; i++) + rte_event_eth_tx_adapter_stop( + eventdev_rsrc->tx_adptr.tx_adptr[i]); - RTE_ETH_FOREACH_DEV(portid) { - if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) - continue; - printf("Closing port %d...", portid); - rte_eth_dev_stop(portid); - rte_eth_dev_close(portid); - printf(" Done\n"); + RTE_ETH_FOREACH_DEV(portid) { + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + rte_eth_dev_stop(portid); + } + + rte_eal_mp_wait_lcore(); + RTE_ETH_FOREACH_DEV(portid) { + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + rte_eth_dev_close(portid); + } + + rte_event_dev_stop(eventdev_rsrc->event_d_id); + rte_event_dev_close(eventdev_rsrc->event_d_id); + + } else { + rte_eal_mp_wait_lcore(); + + RTE_ETH_FOREACH_DEV(portid) { + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + printf("Closing port %d...", portid); + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } } printf("Bye...\n"); From patchwork Thu Sep 19 09:31:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 59425
Date: Thu, 19 Sep 2019 15:01:05 +0530
Message-ID: <20190919093105.5882-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2 10/10] doc: add application usage guide for l2fwd-event

From: Sunil Kumar Kori

Add documentation for l2fwd-event example. Update MAINTAINERS file claiming
responsibility of l2fwd-event.

Signed-off-by: Sunil Kumar Kori
---
 MAINTAINERS                                        |   5 +
 doc/guides/sample_app_ug/index.rst                 |   1 +
 doc/guides/sample_app_ug/intro.rst                 |   5 +
 .../l2_forward_event_real_virtual.rst              | 799 ++++++++++++++++++
 4 files changed, 810 insertions(+)
 create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index b3d9aaddd..d8e1fa84d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1458,6 +1458,11 @@
 M: Tomasz Kantecki
 F: doc/guides/sample_app_ug/l2_forward_cat.rst
 F: examples/l2fwd-cat/

+M: Sunil Kumar Kori
+M: Pavan Nikhilesh
+F: examples/l2fwd-event/
+F: doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
+
 F: examples/l3fwd/
 F: doc/guides/sample_app_ug/l3_forward.rst

diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index f23f8f59e..83a4f8d5c 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -26,6 +26,7 @@ Sample Applications User Guides
     l2_forward_crypto
     l2_forward_job_stats
     l2_forward_real_virtual
+    l2_forward_event_real_virtual
     l2_forward_cat
     l3_forward
l3_forward_power_man diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst index 90704194a..b33904ed1 100644 --- a/doc/guides/sample_app_ug/intro.rst +++ b/doc/guides/sample_app_ug/intro.rst @@ -87,6 +87,11 @@ examples are highlighted below. forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC addresses like a simple switch. +* :doc:`Network Layer 2 forwarding`: The Network Layer 2 + forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC + addresses like a simple switch. It demonstrates the usage of poll and event mode + Rx/Tx mechanisms. + * :doc:`Network Layer 3 forwarding`: The Network Layer3 forwarding, or ``l3fwd`` application does forwarding based on Internet Protocol, IPv4 or IPv6 like a simple router. diff --git a/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst new file mode 100644 index 000000000..7cea8efaf --- /dev/null +++ b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst @@ -0,0 +1,799 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2010-2014 Intel Corporation. + +.. _l2_fwd_event_app_real_and_virtual: + +L2 Forwarding Eventdev Sample Application (in Real and Virtualized Environments) +================================================================================ + +The L2 Forwarding eventdev sample application is a simple example of packet +processing using the Data Plane Development Kit (DPDK) to demonstrate the usage +of poll and event mode packet I/O mechanisms, which also takes advantage of +Single Root I/O Virtualization (SR-IOV) features in a virtualized environment. + +Overview +-------- + +The L2 Forwarding eventdev sample application, which can operate in real and +virtualized environments, performs L2 forwarding for each packet that is +received on an RX_PORT.
The destination port is the adjacent port from the +enabled portmask, that is, if the first four ports are enabled (portmask=0x0f), +ports 1 and 2 forward into each other, and ports 3 and 4 forward into each +other. Also, if MAC address updating is enabled, the MAC addresses are +affected as follows: + +* The source MAC address is replaced by the TX_PORT MAC address + +* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID + +The application receives packets from RX_PORT using one of the following methods: + +* Poll mode + +* Eventdev mode (default) + +This application can be used to benchmark performance using a traffic-generator, +as shown in the :numref:`figure_l2_fwd_benchmark_setup`, or in a virtualized +environment as shown in :numref:`figure_l2_fwd_virtenv_benchmark_setup`. + +.. _figure_l2_fwd_benchmark_setup: + +.. figure:: img/l2_fwd_benchmark_setup.* + + Performance Benchmark Setup (Basic Environment) + +.. _figure_l2_fwd_virtenv_benchmark_setup: + +.. figure:: img/l2_fwd_virtenv_benchmark_setup.* + + Performance Benchmark Setup (Virtualized Environment) + +This application may be used for basic VM to VM communication as shown +in :numref:`figure_l2_fwd_vm2vm`, when MAC address updating is disabled. + +.. _figure_l2_fwd_vm2vm: + +.. figure:: img/l2_fwd_vm2vm.* + + Virtual Machine to Virtual Machine communication. + +.. _l2_fwd_event_vf_setup: + +Virtual Function Setup Instructions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The application can use the virtual functions available in the system and +therefore can be used in a virtual machine without passing the whole network +device through to the guest in a virtualized scenario. The virtual functions +can be enabled on the host machine or the hypervisor with the respective +physical function driver. + +For example, on a Linux* host machine, it is possible to enable a virtual +function using the following command: + ..
code-block:: console + + modprobe ixgbe max_vfs=2,2 + +This command enables two Virtual Functions on each Physical Function of the +NIC, with two physical ports in the PCI configuration space. + +It is important to note that Virtual Functions 0 and 2 would belong to +Physical Function 0 and Virtual Functions 1 and 3 would belong to Physical +Function 1, in this case enabling a total of four Virtual Functions. + +Compiling the Application +------------------------- + +To compile the sample application see :doc:`compiling`. + +The application is located in the ``l2fwd-event`` sub-directory. + +Running the Application +----------------------- + +The application requires a number of command line options: + +.. code-block:: console + + ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sync=SYNC_MODE + +where, + +* p PORTMASK: A hexadecimal bitmask of the ports to configure + +* q NQ: A number of queues (=ports) per lcore (default is 1) + +* --[no-]mac-updating: Enable or disable MAC address updating (enabled by default). + +* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default. + +* --eventq-sync=SYNC_MODE: Event queue synchronization method, Ordered or Atomic. Atomic by default. + +Sample commands to run the application in different modes are given below. + +To run in poll mode on a Linux environment with 4 lcores, 16 ports, 8 RX queues +per lcore and MAC address updating enabled, issue the command: + +.. code-block:: console + + ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll + +To run in eventdev mode on a Linux environment with 4 lcores, 16 ports, sync +method ordered and MAC address updating enabled, issue the command: + +.. code-block:: console + + ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sync=ordered + +or + ..
code-block:: console + + ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered + +Refer to the *DPDK Getting Started Guide* for general information on running +applications and the Environment Abstraction Layer (EAL) options. + +To run the application with the S/W scheduler, the following DPDK services are +used: + +* Software scheduler +* Rx adapter service function +* Tx adapter service function + +The application needs service cores to run the above mentioned services. The +service cores must be provided as EAL parameters along with ``--vdev=event_sw0`` +to enable the S/W scheduler. A sample command follows: + +.. code-block:: console + + ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered + +Explanation +----------- + +The following sections provide some explanation of the code. + +.. _l2_fwd_event_app_cmd_arguments: + +Command Line Arguments +~~~~~~~~~~~~~~~~~~~~~~ + +The L2 Forwarding eventdev sample application takes specific parameters, +in addition to Environment Abstraction Layer (EAL) arguments. +The preferred way to parse parameters is to use the getopt() function, +since it is part of a well-defined and portable library. + +The parsing of arguments is done in the **l2fwd_parse_args()** function for +non-eventdev parameters and in **parse_eventdev_args()** for eventdev +parameters. The method of argument parsing is not described here. Refer to the +*glibc getopt(3)* man page for details. + +EAL arguments are parsed first, then application-specific arguments. +This is done at the beginning of the main() function and eventdev parameters +are parsed in the eventdev_resource_setup() function during eventdev setup: + ..
code-block:: c + + /* init EAL */ + + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + + ret = l2fwd_parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n"); + . + . + . + + /* Parse eventdev command line options */ + ret = parse_eventdev_args(argc, argv); + if (ret < 0) + return ret; + + + + +.. _l2_fwd_event_app_mbuf_init: + +Mbuf Pool Initialization +~~~~~~~~~~~~~~~~~~~~~~~~ + +Once the arguments are parsed, the mbuf pool is created. +The mbuf pool contains a set of mbuf objects that will be used by the driver +and the application to store network packet data: + +.. code-block:: c + + /* create the mbuf pool */ + + l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, + RTE_MBUF_DEFAULT_BUF_SIZE, + rte_socket_id()); + if (l2fwd_pktmbuf_pool == NULL) + rte_panic("Cannot init mbuf pool\n"); + +The rte_mempool is a generic structure used to handle pools of objects. +In this case, it is necessary to create a pool that will be used by the driver. +The number of allocated pkt mbufs is NB_MBUF, with a data room size of +RTE_MBUF_DEFAULT_BUF_SIZE each. +A per-lcore cache of 32 mbufs is kept. +The memory is allocated in NUMA socket 0, +but it is possible to extend this code to allocate one mbuf pool per socket. + +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init(). +An advanced application may want to use the mempool API to create the +mbuf pool with more control. + +.. _l2_fwd_event_app_dvr_init: + +Driver Initialization +~~~~~~~~~~~~~~~~~~~~~ + +The main part of the code in the main() function relates to the initialization +of the driver. 
To fully understand this code, it is recommended to study the +chapters that related to the Poll Mode and Event mode Driver in the +*DPDK Programmer's Guide* - Rel 1.4 EAR and the *DPDK API Reference*. + +.. code-block:: c + + if (rte_pci_probe() < 0) + rte_exit(EXIT_FAILURE, "Cannot probe PCI\n"); + + /* reset l2fwd_dst_ports */ + + for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) + l2fwd_dst_ports[portid] = 0; + + last_port = 0; + + /* + * Each logical core is assigned a dedicated TX queue on each port. + */ + + RTE_ETH_FOREACH_DEV(portid) { + /* skip ports that are not enabled */ + + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) + continue; + + if (nb_ports_in_mask % 2) { + l2fwd_dst_ports[portid] = last_port; + l2fwd_dst_ports[last_port] = portid; + } + else + last_port = portid; + + nb_ports_in_mask++; + + rte_eth_dev_info_get((uint8_t) portid, &dev_info); + } + +Observe that: + +* rte_igb_pmd_init_all() simultaneously registers the driver as a PCI driver + and as an Ethernet Poll Mode Driver. + +* rte_pci_probe() parses the devices on the PCI bus and initializes recognized + devices. + +The next step is to configure the RX and TX queues. For each port, there is only +one RX queue (only one lcore is able to poll a given port). The number of TX +queues depends on the number of available lcores. The rte_eth_dev_configure() +function is used to configure the number of queues for a port: + +.. code-block:: c + + ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Cannot configure device: " + "err=%d, port=%u\n", + ret, portid); + +.. _l2_fwd_event_app_rx_init: + +RX Queue Initialization +~~~~~~~~~~~~~~~~~~~~~~~ + +The application uses one lcore to poll one or several ports, depending on the -q +option, which specifies the number of queues per lcore. + +For example, if the user specifies -q 4, the application is able to poll four +ports with one lcore. 
If there are 16 ports on the target (and if the portmask +argument is -p ffff ), the application will need four lcores to poll all the +ports. + +.. code-block:: c + + ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0, + &rx_conf, l2fwd_pktmbuf_pool); + if (ret < 0) + + rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: " + "err=%d, port=%u\n", + ret, portid); + +The list of queues that must be polled for a given lcore is stored in a private +structure called struct lcore_queue_conf. + +.. code-block:: c + + struct lcore_queue_conf { + unsigned n_rx_port; + unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE]; + struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS]; + } rte_cache_aligned; + + struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; + +The values n_rx_port and rx_port_list[] are used in the main packet processing +loop (see :ref:`l2_fwd_event_app_rx_tx_packets`). + +.. _l2_fwd_event_app_tx_init: + +TX Queue Initialization +~~~~~~~~~~~~~~~~~~~~~~~ + +Each lcore should be able to transmit on any port. For every port, a single TX +queue is initialized. + +.. code-block:: c + + /* init one TX queue on each port */ + + fflush(stdout); + + ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd, + rte_eth_dev_socket_id(portid), &tx_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n", + ret, (unsigned) portid); + +The global configuration for TX queues is stored in a static structure: + +.. code-block:: c + + static const struct rte_eth_txconf tx_conf = { + .tx_thresh = { + .pthresh = TX_PTHRESH, + .hthresh = TX_HTHRESH, + .wthresh = TX_WTHRESH, + }, + .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */ + }; + +To configure eventdev support, application setups following components: + +* Event dev +* Event queue +* Event Port +* Rx/Tx adapters +* Ethernet ports + +.. 
_l2_fwd_event_app_event_dev_init: + +Event dev Initialization +~~~~~~~~~~~~~~~~~~~~~~~~ +Application can use either H/W or S/W based event device scheduler +implementation and supports single instance of event device. It configures event +device as per below configuration + +.. code-block:: c + + struct rte_event_dev_config event_d_conf = { + .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */ + .nb_event_ports = num_workers, /* Dedicated to each lcore */ + .nb_events_limit = 4096, + .nb_event_queue_flows = 1024, + .nb_event_port_dequeue_depth = 128, + .nb_event_port_enqueue_depth = 128 + }; + + ret = rte_event_dev_configure(event_d_id, &event_d_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error in configuring event device"); + +In case of S/W scheduler, application runs eventdev scheduler service on service +core. Application retrieves service id and later on it starts the same on a +given lcore. + +.. code-block:: c + + /* Start event device service */ + ret = rte_event_dev_service_id_get(eventdev_rsrc.event_d_id, + &service_id); + if (ret != -ESRCH && ret != 0) + rte_exit(EXIT_FAILURE, "Error in starting eventdev"); + + rte_service_runstate_set(service_id, 1); + rte_service_set_runstate_mapped_check(service_id, 0); + eventdev_rsrc.service_id = service_id; + + /* Start eventdev scheduler service */ + rte_service_map_lcore_set(eventdev_rsrc.service_id, lcore_id[0], 1); + rte_service_lcore_start(lcore_id[0]); + +.. _l2_fwd_app_event_queue_init: + +Event queue Initialization +~~~~~~~~~~~~~~~~~~~~~~~~~~ +Each Ethernet device is assigned a dedicated event queue which will be linked +to all available event ports i.e. each lcore can dequeue packets from any of the +Ethernet ports. + +.. 
code-block:: c + + struct rte_event_queue_conf event_q_conf = { + .nb_atomic_flows = 1024, + .nb_atomic_order_sequences = 1024, + .event_queue_cfg = 0, + .schedule_type = RTE_SCHED_TYPE_ATOMIC, + .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST + }; + + /* User requested sync mode */ + event_q_conf.schedule_type = eventq_sync_mode; + for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) { + ret = rte_event_queue_setup(event_d_id, event_q_id, + &event_q_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event queue"); + } + } + +In case of S/W scheduler, an extra event queue is created which will be used for +Tx adapter service function for enqueue operation. + +.. _l2_fwd_app_event_port_init: + +Event port Initialization +~~~~~~~~~~~~~~~~~~~~~~~~~ +Each worker thread is assigned a dedicated event port for enq/deq operations +to/from an event device. All event ports are linked with all available event +queues. + +.. code-block:: c + + struct rte_event_port_conf event_p_conf = { + .dequeue_depth = 32, + .enqueue_depth = 32, + .new_event_threshold = 4096 + }; + + for (event_p_id = 0; event_p_id < num_workers; event_p_id++) { + ret = rte_event_port_setup(event_d_id, event_p_id, + &event_p_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event port %d\n", + event_p_id); + } + + ret = rte_event_port_link(event_d_id, event_p_id, NULL, + NULL, 0); + if (ret < 0) { + rte_exit(EXIT_FAILURE, "Error in linking event port %d " + "to event queue", event_p_id); + } + } + +In case of S/W scheduler, an extra event port is created by DPDK library which +is retrieved by the application and same will be used by Tx adapter service. + +.. 
code-block:: c
+
+    ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+    if (ret)
+        rte_exit(EXIT_FAILURE,
+             "Failed to get Tx adapter port id: %d\n", ret);
+
+    ret = rte_event_port_link(event_d_id, tx_port_id,
+                  &eventdev_rsrc.evq.event_q_id[
+                      eventdev_rsrc.evq.nb_queues - 1],
+                  NULL, 1);
+    if (ret != 1)
+        rte_exit(EXIT_FAILURE,
+             "Unable to link Tx adapter port to Tx queue:err = %d",
+             ret);
+
+.. _l2_fwd_event_app_adapter_init:
+
+Rx/Tx adapter Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each Ethernet port is assigned a dedicated Rx/Tx adapter for the H/W scheduler.
+Each Ethernet port's Rx queues are connected to its respective event queue at
+priority 0 via the Rx adapter configuration, and each Ethernet port's Tx queues
+are connected via the Tx adapter.
+
+.. code-block:: c
+
+    struct rte_event_port_conf event_p_conf = {
+        .dequeue_depth = 32,
+        .enqueue_depth = 32,
+        .new_event_threshold = 4096
+    };
+
+    for (i = 0; i < ethdev_count; i++) {
+        ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+                              &event_p_conf);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                 "failed to create rx adapter[%d]", i);
+
+        /* Configure user requested sync mode */
+        eth_q_conf.ev.queue_id = eventdev_rsrc.evq.event_q_id[i];
+        eth_q_conf.ev.sched_type = eventq_sync_mode;
+        ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &eth_q_conf);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                 "Failed to add queues to Rx adapter");
+
+        ret = rte_event_eth_rx_adapter_start(i);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                 "Rx adapter[%d] start failed", i);
+
+        eventdev_rsrc.rx_adptr.rx_adptr[i] = i;
+    }
+
+    for (i = 0; i < ethdev_count; i++) {
+        ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+                              &event_p_conf);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                 "failed to create tx adapter[%d]", i);
+
+        ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                 "failed to add queues to Tx adapter");
+
+        ret = rte_event_eth_tx_adapter_start(i);
+        if (ret)
+            rte_exit(EXIT_FAILURE,
+                 "Tx adapter[%d] start failed", i);
+
+        eventdev_rsrc.tx_adptr.tx_adptr[i] = i;
+    }
+
+For the S/W scheduler, common Rx/Tx adapters are configured instead of dedicated
+per-port adapters, and they are shared among all the Ethernet ports. In
+addition, the DPDK service library needs service cores to run the internal
+services for the Rx/Tx adapters. The application retrieves the service IDs for
+the Rx/Tx adapters and, after successful setup, runs the services on dedicated
+service cores.
+
+.. code-block:: c
+
+    /* retrieving service Id for Rx adapter */
+    ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+    if (ret != -ESRCH && ret != 0) {
+        rte_exit(EXIT_FAILURE,
+             "Error getting the service ID for rx adptr\n");
+    }
+
+    rte_service_runstate_set(service_id, 1);
+    rte_service_set_runstate_mapped_check(service_id, 0);
+    eventdev_rsrc.rx_adptr.service_id = service_id;
+
+    /* Start eventdev Rx adapter service */
+    rte_service_map_lcore_set(eventdev_rsrc.rx_adptr.service_id,
+                  lcore_id[1], 1);
+    rte_service_lcore_start(lcore_id[1]);
+
+    /* retrieving service Id for Tx adapter */
+    ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+    if (ret != -ESRCH && ret != 0)
+        rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+    rte_service_runstate_set(service_id, 1);
+    rte_service_set_runstate_mapped_check(service_id, 0);
+    eventdev_rsrc.tx_adptr.service_id = service_id;
+
+    /* Start eventdev Tx adapter service */
+    rte_service_map_lcore_set(eventdev_rsrc.tx_adptr.service_id,
+                  lcore_id[2], 1);
+    rte_service_lcore_start(lcore_id[2]);
+
+.. _l2_fwd_event_app_rx_tx_packets:
+
+Receive, Process and Transmit Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
+the RX queues. This is done using the following code:
+
+.. code-block:: c
+
+    /*
+     * Read packet from RX queues
+     */
+
+    for (i = 0; i < qconf->n_rx_port; i++) {
+        portid = qconf->rx_port_list[i];
+        nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
+                     MAX_PKT_BURST);
+
+        for (j = 0; j < nb_rx; j++) {
+            m = pkts_burst[j];
+            rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+            l2fwd_simple_forward(m, portid);
+        }
+    }
+
+Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
+function writes the mbuf pointers into a local table and returns the number of
+available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_simple_forward()
+function. The processing is very simple: determine the TX port from the RX
+port, then replace the source and destination MAC addresses if MAC address
+updating is enabled.
+
+.. note::
+
+   In the following code, one line for getting the output port requires some
+   explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination
+port is assigned that is either the next or previous enabled port from the
+portmask. If the number of ports in the portmask is odd, packets from the last
+port are forwarded to the first port, i.e. if portmask=0x07, forwarding takes
+place as p0--->p1, p1--->p2, p2--->p0.
+
+Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers
+incoming mbufs up to MAX_PKT_BURST. Once that limit is reached, all buffered
+packets are transmitted to the destination ports.
+
+.. code-block:: c
+
+    static void
+    l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+    {
+        uint32_t dst_port;
+        int32_t sent;
+        struct rte_eth_dev_tx_buffer *buffer;
+
+        dst_port = l2fwd_dst_ports[portid];
+
+        if (mac_updating)
+            l2fwd_mac_updating(m, dst_port);
+
+        buffer = tx_buffer[dst_port];
+        sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+        if (sent)
+            port_statistics[dst_port].tx += sent;
+    }
+
+For this test application, the processing is exactly the same for all packets
+arriving on the same RX port. Therefore, it would have been possible to call
+the rte_eth_tx_buffer() function directly from the main loop to send all the
+received packets on the same TX port, using the burst-oriented send function,
+which is more efficient.
+
+However, in real-life applications (such as L3 routing),
+packet N is not necessarily forwarded on the same port as packet N-1.
+The application is implemented to illustrate that, so the same approach can be
+reused in a more complex application.
+
+To ensure that no packets remain in the tables, each lcore periodically drains
+its TX buffers in its main loop. This technique introduces some latency when
+there are not many packets to send, but it improves performance:
+
+.. code-block:: c
+
+    cur_tsc = rte_rdtsc();
+
+    /*
+     * TX burst queue drain
+     */
+    diff_tsc = cur_tsc - prev_tsc;
+    if (unlikely(diff_tsc > drain_tsc)) {
+        for (i = 0; i < qconf->n_rx_port; i++) {
+            portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
+            buffer = tx_buffer[portid];
+            sent = rte_eth_tx_buffer_flush(portid, 0,
+                               buffer);
+            if (sent)
+                port_statistics[portid].tx += sent;
+        }
+
+        /* if timer is enabled */
+        if (timer_period > 0) {
+            /* advance the timer */
+            timer_tsc += diff_tsc;
+
+            /* if timer has reached its timeout */
+            if (unlikely(timer_tsc >= timer_period)) {
+                /* do this only on master core */
+                if (lcore_id == rte_get_master_lcore()) {
+                    print_stats();
+                    /* reset the timer */
+                    timer_tsc = 0;
+                }
+            }
+        }
+
+        prev_tsc = cur_tsc;
+    }
+
+In the **l2fwd_main_loop_eventdev()** function, the main task is to read ingress
+packets from the event ports. This is done using the following code:
+
+.. code-block:: c
+
+    /* Read packet from eventdev */
+    nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
+                    events, deq_len, 0);
+    if (nb_rx == 0) {
+        rte_pause();
+        continue;
+    }
+
+    for (i = 0; i < nb_rx; i++) {
+        mbuf[i] = events[i].mbuf;
+        rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
+    }
+
+Before reading packets, deq_len is fetched so that the number of events
+dequeued at once does not exceed the dequeue depth allowed by the eventdev.
+The rte_event_dequeue_burst() function writes the mbuf pointers into a local
+table and returns the number of available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
+function. The processing is very simple: determine the TX port from the RX
+port, then replace the source and destination MAC addresses if MAC address
+updating is enabled.
+
+.. note::
+
+   In the following code, one line for getting the output port requires some
+   explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination
+port is assigned that is either the next or previous enabled port from the
+portmask. If the number of ports in the portmask is odd, packets from the last
+port are forwarded to the first port, i.e. if portmask=0x07, forwarding takes
+place as p0--->p1, p1--->p2, p2--->p0.
+
+l2fwd_eventdev_forward() does not buffer incoming mbufs. Packets are forwarded
+to the destination ports via the Tx adapter or the generic eventdev enqueue
+API, depending on whether the H/W or S/W scheduler is used.
+
+.. code-block:: c
+
+    static inline void
+    l2fwd_eventdev_forward(struct rte_mbuf *m[], uint32_t portid,
+                   uint16_t nb_rx, uint16_t event_p_id)
+    {
+        uint32_t dst_port, i;
+
+        dst_port = l2fwd_dst_ports[portid];
+
+        for (i = 0; i < nb_rx; i++) {
+            if (mac_updating)
+                l2fwd_mac_updating(m[i], dst_port);
+
+            m[i]->port = dst_port;
+        }
+
+        if (timer_period > 0) {
+            rte_spinlock_lock(&port_stats_lock);
+            port_statistics[dst_port].tx += nb_rx;
+            rte_spinlock_unlock(&port_stats_lock);
+        }
+        /* Registered callback is invoked for Tx */
+        eventdev_rsrc.send_burst_eventdev(m, nb_rx, event_p_id);
+    }