From patchwork Thu Sep 26 10:05:48 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 59841
X-Patchwork-Delegate: thomas@monjalon.net
To: Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau, Tomasz Kantecki
CC: Sunil Kumar Kori
Date: Thu, 26 Sep 2019 15:35:48 +0530
Message-ID: <20190926100558.24348-2-pbhagavatula@marvell.com>
In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com>
References: <20190926100558.24348-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH 01/11] examples/l3fwd: add framework for event device

From: Sunil Kumar Kori

Add a framework to enable an event device as the producer of packets.
To switch between event mode and poll mode, the following options have
been added:

`--mode="eventdev"` or `--mode="poll"`

Also, allow the user to select the schedule type to be either
RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC through:

`--eventq-sync="ordered"` or `--eventq-sync="atomic"`

Poll mode is still the default operation mode.
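For context, here is a minimal sketch (illustrative only, not part of this
patch) contrasting the receive path of the existing poll mode with the event
mode this series introduces. The helper names and the MAX_PKT_BURST value are
assumptions for the example; rte_eth_rx_burst() and rte_event_dequeue_burst()
are the standard DPDK calls.

/* Illustrative sketch: poll-mode vs. eventdev-mode receive paths. */
#include <rte_ethdev.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

#define MAX_PKT_BURST 32

/* --mode="poll": pull mbufs directly from the ethdev Rx queue. */
static inline uint16_t
rx_poll_mode(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **pkts)
{
	return rte_eth_rx_burst(port_id, queue_id, pkts, MAX_PKT_BURST);
}

/* --mode="eventdev": mbufs arrive wrapped in events, scheduled by the event
 * device with the synchronization requested via --eventq-sync
 * (RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC).
 */
static inline uint16_t
rx_event_mode(uint8_t event_d_id, uint8_t event_p_id, struct rte_event *ev)
{
	return rte_event_dequeue_burst(event_d_id, event_p_id, ev,
				       MAX_PKT_BURST, 0);
}
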
Signed-off-by: Sunil Kumar Kori --- examples/l3fwd/Makefile | 2 +- examples/l3fwd/l3fwd.h | 3 ++ examples/l3fwd/l3fwd_eventdev.c | 79 +++++++++++++++++++++++++++++++++ examples/l3fwd/l3fwd_eventdev.h | 54 ++++++++++++++++++++++ examples/l3fwd/main.c | 38 ++++++++++++++-- examples/l3fwd/meson.build | 4 +- 6 files changed, 173 insertions(+), 7 deletions(-) create mode 100644 examples/l3fwd/l3fwd_eventdev.c create mode 100644 examples/l3fwd/l3fwd_eventdev.h diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile index c55f5c288..4d20f3790 100644 --- a/examples/l3fwd/Makefile +++ b/examples/l3fwd/Makefile @@ -5,7 +5,7 @@ APP = l3fwd # all source are stored in SRCS-y -SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c +SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_eventdev.c # Build using pkg-config variables if possible ifeq ($(shell pkg-config --exists libdpdk && echo 0),0) diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h index 293fb1fa2..838aeed1d 100644 --- a/examples/l3fwd/l3fwd.h +++ b/examples/l3fwd/l3fwd.h @@ -5,6 +5,9 @@ #ifndef __L3_FWD_H__ #define __L3_FWD_H__ +#include + +#include #include #define DO_RFC_1812_CHECKS diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c new file mode 100644 index 000000000..319a20746 --- /dev/null +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include + +#include "l3fwd.h" +#include "l3fwd_eventdev.h" + +static void +parse_mode(const char *optarg) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + + if (!strncmp(optarg, "poll", 4)) + evdev_rsrc->enabled = false; + else if (!strncmp(optarg, "eventdev", 8)) + evdev_rsrc->enabled = true; +} + +static void +parse_eventq_sync(const char *optarg) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + + if (!strncmp(optarg, "ordered", 7)) + evdev_rsrc->sync_mode = RTE_SCHED_TYPE_ORDERED; + else if (!strncmp(optarg, "atomic", 6)) + evdev_rsrc->sync_mode = RTE_SCHED_TYPE_ATOMIC; +} + +static int +l3fwd_parse_eventdev_args(char **argv, int argc) +{ + const struct option eventdev_lgopts[] = { + {CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM}, + {CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM}, + {NULL, 0, 0, 0} + }; + char **argvopt = argv; + int32_t option_index; + int32_t opt; + + while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts, + &option_index)) != EOF) { + switch (opt) { + case CMD_LINE_OPT_MODE_NUM: + parse_mode(optarg); + break; + + case CMD_LINE_OPT_EVENTQ_SYNC_NUM: + parse_eventq_sync(optarg); + break; + + case '?': + /* skip other parameters except eventdev specific */ + break; + + default: + printf("Invalid eventdev parameter\n"); + return -1; + } + } + + return 0; +} + +void +l3fwd_eventdev_resource_setup(void) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + int32_t ret; + + /* Parse eventdev command line options */ + ret = l3fwd_parse_eventdev_args(evdev_rsrc->args, evdev_rsrc->nb_args); + if (ret < 0 || !evdev_rsrc->enabled) + return; +} diff --git a/examples/l3fwd/l3fwd_eventdev.h b/examples/l3fwd/l3fwd_eventdev.h new file mode 100644 index 000000000..64d9207a3 --- /dev/null +++ b/examples/l3fwd/l3fwd_eventdev.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __L3FWD_EVENTDEV_H__ +#define __L3FWD_EVENTDEV_H__ + +#include +#include +#include + +#include "l3fwd.h" + +#define CMD_LINE_OPT_MODE "mode" +#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sync" + +enum { + CMD_LINE_OPT_MODE_NUM = 265, + CMD_LINE_OPT_EVENTQ_SYNC_NUM, +}; + +struct l3fwd_eventdev_resources { + uint8_t sync_mode; + uint8_t enabled; + uint8_t nb_args; + char **args; +}; + +static inline struct l3fwd_eventdev_resources * +l3fwd_get_eventdev_rsrc(void) +{ + static const char name[RTE_MEMZONE_NAMESIZE] = "l3fwd_event_rsrc"; + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(name); + + if (mz != NULL) + return mz->addr; + + mz = rte_memzone_reserve(name, sizeof(struct l3fwd_eventdev_resources), + 0, 0); + if (mz != NULL) { + memset(mz->addr, 0, sizeof(struct l3fwd_eventdev_resources)); + return mz->addr; + } + + rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n"); + + return NULL; +} + +void l3fwd_eventdev_resource_setup(void); + +#endif /* __L3FWD_EVENTDEV_H__ */ diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index 3800bad19..bd88bd4ce 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -13,12 +13,12 @@ #include #include #include -#include #include #include #include #include +#include #include #include #include @@ -33,7 +33,6 @@ #include #include #include -#include #include #include #include @@ -52,6 +51,7 @@ */ #define RTE_TEST_RX_DESC_DEFAULT 1024 #define RTE_TEST_TX_DESC_DEFAULT 1024 +#include "l3fwd_eventdev.h" #define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS #define MAX_RX_QUEUE_PER_PORT 128 @@ -289,7 +289,9 @@ print_usage(const char *prgname) " [--hash-entry-num]" " [--ipv6]" " [--parse-ptype]" - " [--per-port-pool]\n\n" + " [--per-port-pool]" + " [--mode]" + " [--eventq-sync]\n\n" " -p PORTMASK: Hexadecimal bitmask of ports to configure\n" " -P : Enable promiscuous mode\n" @@ -304,7 +306,12 @@ print_usage(const char *prgname) " --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n" " --ipv6: Set if running ipv6 packets\n" " --parse-ptype: Set to use software to analyze packet type\n" - " --per-port-pool: Use separate buffer pool per port\n\n", + " --per-port-pool: Use separate buffer pool per port\n" + " --mode: Packet transfer mode for I/O, poll or eventdev\n" + " Default mode = poll\n" + " --eventq-sync: Event queue synchronization method " + " ordered or atomic.\n\t\tDefault: atomic\n\t\t" + " Valid only if --mode=eventdev\n\n", prgname); } @@ -504,11 +511,19 @@ static const struct option lgopts[] = { static int parse_args(int argc, char **argv) { + struct l3fwd_eventdev_resources *evdev_rsrc; int opt, ret; char **argvopt; int option_index; char *prgname = argv[0]; + evdev_rsrc = l3fwd_get_eventdev_rsrc(); + evdev_rsrc->args = rte_zmalloc("l3fwd_event_args", sizeof(char *), 0); + if (evdev_rsrc->args == NULL) + rte_exit(EXIT_FAILURE, + "Unable to allocate memory for eventdev arg"); + evdev_rsrc->args[0] = argv[0]; + evdev_rsrc->nb_args++; argvopt = argv; /* Error or normal output strings. */ @@ -538,6 +553,15 @@ parse_args(int argc, char **argv) l3fwd_lpm_on = 1; break; + case '?': + /* Eventdev options are encountered skip for + * now and processed later. 
+ */ + evdev_rsrc->args[evdev_rsrc->nb_args] = + argv[optind - 1]; + evdev_rsrc->nb_args++; + break; + /* long options */ case CMD_LINE_OPT_CONFIG_NUM: ret = parse_config(optarg); @@ -803,6 +827,7 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid) int main(int argc, char **argv) { + struct l3fwd_eventdev_resources *evdev_rsrc; struct lcore_conf *qconf; struct rte_eth_dev_info dev_info; struct rte_eth_txconf *txconf; @@ -831,11 +856,16 @@ main(int argc, char **argv) *(uint64_t *)(val_eth + portid) = dest_eth_addr[portid]; } + evdev_rsrc = l3fwd_get_eventdev_rsrc(); + RTE_SET_USED(evdev_rsrc); /* parse application arguments (after the EAL ones) */ ret = parse_args(argc, argv); if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n"); + /* Configure eventdev parameters if user has requested */ + l3fwd_eventdev_resource_setup(); + if (check_lcore_params() < 0) rte_exit(EXIT_FAILURE, "check_lcore_params failed\n"); diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build index 6dd4b9022..c88676817 100644 --- a/examples/l3fwd/meson.build +++ b/examples/l3fwd/meson.build @@ -6,7 +6,7 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['hash', 'lpm'] +deps += ['hash', 'lpm', 'eventdev'] sources = files( - 'l3fwd_em.c', 'l3fwd_lpm.c', 'main.c' + 'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_eventdev.c', 'main.c' ) From patchwork Thu Sep 26 10:05:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59842 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6F4C51BF41; Thu, 26 Sep 2019 12:06:14 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 4DEAD1BF2A for ; Thu, 26 Sep 2019 12:06:12 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA5PsF000795; Thu, 26 Sep 2019 03:06:11 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=vPUbzA3mcZfxzIfEbAjIKbuTvPVGD3iZBQvadLhpwe0=; b=GV36bVBStA26W+LbZ+WnbanyT+Qr2s63Z3PT5PwXwcAOwmXNpVDGiGNPDP/VtC/cW6jE TsX8Ra0Q3LftUi1Hb/+QCnlbf37ZzlIa5bdh+DioO/aTWtrhxlEyWIlwNoeHiiVYP+rA sAdqFa+De2X+5lBjioCmeXiM4cL+pPQ9JV7boZ8DS/bS/9vEwTyO7zPnXIPqx/eJXn1V Wo86R+HGTyJa1zrovtzcmKzxadKSQTTI63Hd82FOsZdIOdaPFf8UyTVMscgkJeKW3yTQ 63Z2mHfgO0mLGtsX1dhl3mSFEbbOid7pHPl71t3DBeWmvk26ptLKgvDzeTcicoIj1SOV dg== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr196-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:11 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:09 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:09 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id A17E03F7041; Thu, 26 Sep 
2019 03:06:06 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Sunil Kumar Kori Date: Thu, 26 Sep 2019 15:35:49 +0530 Message-ID: <20190926100558.24348-3-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 02/11] examples/l3fwd: split pipelines based on capability X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sunil Kumar Kori Add infra to split eventdev framework based on event Tx adapter capability. If event Tx adapter has internal port capability then we use `rte_event_eth_tx_adapter_enqueue` to transmitting packets else we use a SINGLE_LINK event queue to enqueue packets to a service core which is responsible for transmitting packets. Signed-off-by: Sunil Kumar Kori --- examples/l3fwd/Makefile | 1 + examples/l3fwd/l3fwd_eventdev.c | 31 +++++++++++++++++++ examples/l3fwd/l3fwd_eventdev.h | 21 +++++++++++++ examples/l3fwd/l3fwd_eventdev_generic.c | 12 +++++++ examples/l3fwd/l3fwd_eventdev_internal_port.c | 12 +++++++ examples/l3fwd/meson.build | 3 +- 6 files changed, 79 insertions(+), 1 deletion(-) create mode 100644 examples/l3fwd/l3fwd_eventdev_generic.c create mode 100644 examples/l3fwd/l3fwd_eventdev_internal_port.c diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile index 4d20f3790..42d3f0bb6 100644 --- a/examples/l3fwd/Makefile +++ b/examples/l3fwd/Makefile @@ -6,6 +6,7 @@ APP = l3fwd # all source are stored in SRCS-y SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_eventdev.c +SRCS-y += l3fwd_eventdev_generic.c l3fwd_eventdev_internal_port.c # Build using pkg-config variables if possible ifeq ($(shell pkg-config --exists libdpdk && echo 0),0) diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index 319a20746..fa464626d 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -66,6 +66,31 @@ l3fwd_parse_eventdev_args(char **argv, int argc) return 0; } +static void +l3fwd_eventdev_capability_setup(void) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + uint32_t caps = 0; + uint16_t i; + int ret; + + RTE_ETH_FOREACH_DEV(i) { + ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps); + if (ret) + rte_exit(EXIT_FAILURE, + "Invalid capability for Tx adptr port %d\n", + i); + + evdev_rsrc->tx_mode_q |= !(caps & + RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT); + } + + if (evdev_rsrc->tx_mode_q) + l3fwd_eventdev_set_generic_ops(&evdev_rsrc->ops); + else + l3fwd_eventdev_set_internal_port_ops(&evdev_rsrc->ops); +} + void l3fwd_eventdev_resource_setup(void) { @@ -76,4 +101,10 @@ l3fwd_eventdev_resource_setup(void) ret = l3fwd_parse_eventdev_args(evdev_rsrc->args, evdev_rsrc->nb_args); if (ret < 0 || !evdev_rsrc->enabled) return; + + if (!rte_event_dev_count()) + rte_exit(EXIT_FAILURE, "No Eventdev found"); + + /* Setup eventdev capability callbacks */ + l3fwd_eventdev_capability_setup(); } diff --git a/examples/l3fwd/l3fwd_eventdev.h b/examples/l3fwd/l3fwd_eventdev.h index 64d9207a3..61a537864 100644 --- a/examples/l3fwd/l3fwd_eventdev.h +++ 
b/examples/l3fwd/l3fwd_eventdev.h @@ -7,6 +7,7 @@ #include #include +#include #include #include "l3fwd.h" @@ -19,8 +20,26 @@ enum { CMD_LINE_OPT_EVENTQ_SYNC_NUM, }; +typedef void (*event_queue_setup_cb)(uint16_t ethdev_count, + uint32_t event_queue_cfg); +typedef uint32_t (*eventdev_setup_cb)(uint16_t ethdev_count); +typedef void (*adapter_setup_cb)(uint16_t ethdev_count); +typedef void (*event_port_setup_cb)(void); +typedef void (*service_setup_cb)(void); +typedef int (*event_loop_cb)(void *); + +struct l3fwd_eventdev_setup_ops { + event_queue_setup_cb event_queue_setup; + event_port_setup_cb event_port_setup; + adapter_setup_cb adapter_setup; + event_loop_cb lpm_event_loop; + event_loop_cb em_event_loop; +}; + struct l3fwd_eventdev_resources { + struct l3fwd_eventdev_setup_ops ops; uint8_t sync_mode; + uint8_t tx_mode_q; uint8_t enabled; uint8_t nb_args; char **args; @@ -50,5 +69,7 @@ l3fwd_get_eventdev_rsrc(void) } void l3fwd_eventdev_resource_setup(void); +void l3fwd_eventdev_set_generic_ops(struct l3fwd_eventdev_setup_ops *ops); +void l3fwd_eventdev_set_internal_port_ops(struct l3fwd_eventdev_setup_ops *ops); #endif /* __L3FWD_EVENTDEV_H__ */ diff --git a/examples/l3fwd/l3fwd_eventdev_generic.c b/examples/l3fwd/l3fwd_eventdev_generic.c new file mode 100644 index 000000000..35e655fc0 --- /dev/null +++ b/examples/l3fwd/l3fwd_eventdev_generic.c @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "l3fwd.h" +#include "l3fwd_eventdev.h" + +void +l3fwd_eventdev_set_generic_ops(struct l3fwd_eventdev_setup_ops *ops) +{ + RTE_SET_USED(ops); +} diff --git a/examples/l3fwd/l3fwd_eventdev_internal_port.c b/examples/l3fwd/l3fwd_eventdev_internal_port.c new file mode 100644 index 000000000..f698a0ce1 --- /dev/null +++ b/examples/l3fwd/l3fwd_eventdev_internal_port.c @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "l3fwd.h" +#include "l3fwd_eventdev.h" + +void +l3fwd_eventdev_set_internal_port_ops(struct l3fwd_eventdev_setup_ops *ops) +{ + RTE_SET_USED(ops); +} diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build index c88676817..dccea0b02 100644 --- a/examples/l3fwd/meson.build +++ b/examples/l3fwd/meson.build @@ -8,5 +8,6 @@ deps += ['hash', 'lpm', 'eventdev'] sources = files( - 'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_eventdev.c', 'main.c' + 'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_eventdev.c', + 'l3fwd_eventdev_internal_port.c', 'l3fwd_eventdev_generic.c', 'main.c' ) From patchwork Thu Sep 26 10:05:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59850 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BEF7F1BF7B; Thu, 26 Sep 2019 12:07:40 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 9ECE71BF77 for ; Thu, 26 Sep 2019 12:07:38 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA76ca002950; Thu, 26 Sep 2019 03:07:37 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=1zM7hRz2ferX2fSx+ccbf7xrn4e3kYFdZE0jxGnhVT4=; b=fnV5ltoJ/2oxaVFXQFwajkEHbcDmbtoRh3cGn87oV2Z/QOSb8QqGSSRVOOtOVhwbEY/J oumzq1d+8bQHG7k1CdqEFWcOgLUxXbtz1lpuBK6vKE9jVMx3R4+bFiqy7od+MeyAw5cS mBMDi/TVqd5+QYUJkbKsP2GG2pMp73gQh+vE133rYjciVKoVgzB56iPbI9rA/xgOchHh BxdHRKurPIvfhd9nu2vjlnu8YPUqN40tevWEFhOqts0295eNptgQV7qKytKUbQD5Cwa9 NAZy6tRGswnk7COZS6pogUPguf7uLivEBYKkfn3U0qcHSP2iEYz8IUiX4Q8i2uBAYPko Sg== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2v8ua0002c-8 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:07:37 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:13 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:13 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id 7FE6F3F703F; Thu, 26 Sep 2019 03:06:10 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Pavan Nikhilesh Date: Thu, 26 Sep 2019 15:35:50 +0530 Message-ID: <20190926100558.24348-4-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 03/11] examples/l3fwd: add event device configuration X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add event device configuration based on the capabilities of the probed event device. Signed-off-by: Pavan Nikhilesh --- examples/l3fwd/l3fwd_eventdev.c | 67 +++++++++++++++++++ examples/l3fwd/l3fwd_eventdev.h | 4 ++ examples/l3fwd/l3fwd_eventdev_internal_port.c | 1 + 3 files changed, 72 insertions(+) diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index fa464626d..7e2c4c66b 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -91,10 +91,74 @@ l3fwd_eventdev_capability_setup(void) l3fwd_eventdev_set_internal_port_ops(&evdev_rsrc->ops); } + +static uint32_t +l3fwd_eventdev_setup(uint16_t ethdev_count) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + struct rte_event_dev_config event_d_conf = { + .nb_events_limit = 4096, + .nb_event_queue_flows = 1024, + .nb_event_port_dequeue_depth = 128, + .nb_event_port_enqueue_depth = 128 + }; + struct rte_event_dev_info dev_info; + const uint8_t event_d_id = 0; /* Always use first event device only */ + uint32_t event_queue_cfg = 0; + uint16_t num_workers = 0; + int ret; + + /* Event device configurtion */ + rte_event_dev_info_get(event_d_id, &dev_info); + evdev_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap & + RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE); + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES) + event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + event_d_conf.nb_event_queues = ethdev_count + + (evdev_rsrc->tx_mode_q ? 1 : 0); + if (dev_info.max_event_queues < event_d_conf.nb_event_queues) + event_d_conf.nb_event_queues = dev_info.max_event_queues; + + if (dev_info.max_num_events < event_d_conf.nb_events_limit) + event_d_conf.nb_events_limit = dev_info.max_num_events; + + if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows) + event_d_conf.nb_event_queue_flows = + dev_info.max_event_queue_flows; + + if (dev_info.max_event_port_dequeue_depth < + event_d_conf.nb_event_port_dequeue_depth) + event_d_conf.nb_event_port_dequeue_depth = + dev_info.max_event_port_dequeue_depth; + + if (dev_info.max_event_port_enqueue_depth < + event_d_conf.nb_event_port_enqueue_depth) + event_d_conf.nb_event_port_enqueue_depth = + dev_info.max_event_port_enqueue_depth; + + num_workers = rte_lcore_count() - rte_service_lcore_count(); + if (dev_info.max_event_ports < num_workers) + num_workers = dev_info.max_event_ports; + + event_d_conf.nb_event_ports = num_workers; + evdev_rsrc->has_burst = !!(dev_info.event_dev_cap & + RTE_EVENT_DEV_CAP_BURST_MODE); + + ret = rte_event_dev_configure(event_d_id, &event_d_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error in configuring event device"); + + evdev_rsrc->event_d_id = event_d_id; + return event_queue_cfg; +} + void l3fwd_eventdev_resource_setup(void) { struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + uint16_t ethdev_count = rte_eth_dev_count_avail(); int32_t ret; /* Parse eventdev command line options */ @@ -107,4 +171,7 @@ l3fwd_eventdev_resource_setup(void) /* Setup eventdev capability callbacks */ l3fwd_eventdev_capability_setup(); + + /* Event device configuration */ + l3fwd_eventdev_setup(ethdev_count); } diff --git a/examples/l3fwd/l3fwd_eventdev.h b/examples/l3fwd/l3fwd_eventdev.h index 61a537864..ce4e35443 100644 --- a/examples/l3fwd/l3fwd_eventdev.h +++ b/examples/l3fwd/l3fwd_eventdev.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include "l3fwd.h" @@ -37,9 
+38,12 @@ struct l3fwd_eventdev_setup_ops { }; struct l3fwd_eventdev_resources { + uint8_t disable_implicit_release; struct l3fwd_eventdev_setup_ops ops; + uint8_t event_d_id; uint8_t sync_mode; uint8_t tx_mode_q; + uint8_t has_burst; uint8_t enabled; uint8_t nb_args; char **args; diff --git a/examples/l3fwd/l3fwd_eventdev_internal_port.c b/examples/l3fwd/l3fwd_eventdev_internal_port.c index f698a0ce1..d40185862 100644 --- a/examples/l3fwd/l3fwd_eventdev_internal_port.c +++ b/examples/l3fwd/l3fwd_eventdev_internal_port.c @@ -5,6 +5,7 @@ #include "l3fwd.h" #include "l3fwd_eventdev.h" + void l3fwd_eventdev_set_internal_port_ops(struct l3fwd_eventdev_setup_ops *ops) { From patchwork Thu Sep 26 10:05:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59843 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2C25A1BF6D; Thu, 26 Sep 2019 12:06:21 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id C74D21BEBF for ; Thu, 26 Sep 2019 12:06:19 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA4fI7032589; Thu, 26 Sep 2019 03:06:19 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=aLuAmyEK82PdbcLaMIHxcrWwurHELjVNjQhrgVa7nnQ=; b=Mm8fq6hRhxalPegk67ze53bv6HBl9c64BmPvubOLtJ22QvrKsrfH7dj3kBv/HrtkpaDj pgj9tYLgubYKDi27PjKsak+gJRcFizqfYsK2DuA91+RRC15Tx8E6xB4ZMPhhKbJzhNJ3 7oLgzJq6dqFRK7a4UPRWdUkq5oYS4Rup4jt4/XCZvSdh3arZLhhqdgRRXDj4DOYoqI+c SZGDEMasG/OL6sNywqKuyYlMvlKtoJ9ffMV/On6eJAYsk1qOadu9i+4/Kps2LhuSB5z+ tTQA+S5MVHN70pIcoQydRWsYBxApQ+//jD/MRa0GkiPqtR5RgpY30ILmckupgh30bUlv ag== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr1a1-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:19 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:17 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:17 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id 459BA3F703F; Thu, 26 Sep 2019 03:06:14 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Sunil Kumar Kori Date: Thu, 26 Sep 2019 15:35:51 +0530 Message-ID: <20190926100558.24348-5-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 04/11] examples/l3fwd: add ethdev setup based on eventdev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sunil Kumar Kori Add ethernet port Rx/Tx queue setup for event device which are later used for setting up event eth Rx/Tx adapters. Signed-off-by: Sunil Kumar Kori --- examples/l3fwd/l3fwd.h | 10 +++ examples/l3fwd/l3fwd_eventdev.c | 129 +++++++++++++++++++++++++++++++- examples/l3fwd/l3fwd_eventdev.h | 5 +- examples/l3fwd/main.c | 15 ++-- 4 files changed, 147 insertions(+), 12 deletions(-) diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h index 838aeed1d..ef978ae64 100644 --- a/examples/l3fwd/l3fwd.h +++ b/examples/l3fwd/l3fwd.h @@ -18,9 +18,16 @@ #define NO_HASH_MULTI_LOOKUP 1 #endif +/* + * Configurable number of RX/TX ring descriptors + */ +#define RTE_TEST_RX_DESC_DEFAULT 1024 +#define RTE_TEST_TX_DESC_DEFAULT 1024 + #define MAX_PKT_BURST 32 #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ +#define MEMPOOL_CACHE_SIZE 256 #define MAX_RX_QUEUE_PER_LCORE 16 /* @@ -172,6 +179,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len) } #endif /* DO_RFC_1812_CHECKS */ +int +init_mem(uint16_t portid, unsigned int nb_mbuf); + /* Function pointers for LPM or EM functionality. */ void setup_lpm(const int socketid); diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index 7e2c4c66b..f07cd4b31 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -8,6 +8,14 @@ #include "l3fwd.h" #include "l3fwd_eventdev.h" +static void +print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr) +{ + char buf[RTE_ETHER_ADDR_FMT_SIZE]; + rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr); + printf("%s%s", name, buf); +} + static void parse_mode(const char *optarg) { @@ -66,6 +74,122 @@ l3fwd_parse_eventdev_args(char **argv, int argc) return 0; } +static void +l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + uint16_t nb_ports = rte_eth_dev_count_avail(); + uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT; + uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT; + unsigned int nb_lcores = rte_lcore_count(); + struct rte_eth_conf local_port_conf; + struct rte_eth_dev_info dev_info; + struct rte_eth_txconf txconf; + struct rte_eth_rxconf rxconf; + unsigned int nb_mbuf; + uint16_t port_id; + int32_t ret; + + /* initialize all ports */ + RTE_ETH_FOREACH_DEV(port_id) { + local_port_conf = *port_conf; + /* skip ports that are not enabled */ + if ((evdev_rsrc->port_mask & (1 << port_id)) == 0) { + printf("\nSkipping disabled port %d\n", port_id); + continue; + } + + /* init port */ + printf("Initializing port %d ... 
", port_id); + fflush(stdout); + printf("Creating queues: nb_rxq=1 nb_txq=1...\n"); + + rte_eth_dev_info_get(port_id, &dev_info); + if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE) + local_port_conf.txmode.offloads |= + DEV_TX_OFFLOAD_MBUF_FAST_FREE; + + local_port_conf.rx_adv_conf.rss_conf.rss_hf &= + dev_info.flow_type_rss_offloads; + if (local_port_conf.rx_adv_conf.rss_conf.rss_hf != + port_conf->rx_adv_conf.rss_conf.rss_hf) { + printf("Port %u modified RSS hash function " + "based on hardware support," + "requested:%#"PRIx64" configured:%#"PRIx64"\n", + port_id, + port_conf->rx_adv_conf.rss_conf.rss_hf, + local_port_conf.rx_adv_conf.rss_conf.rss_hf); + } + + ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot configure device: err=%d, port=%d\n", + ret, port_id); + + ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, + &nb_txd); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot adjust number of descriptors: err=%d, " + "port=%d\n", ret, port_id); + + rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]); + print_ethaddr(" Address:", &ports_eth_addr[port_id]); + printf(", "); + print_ethaddr("Destination:", + (const struct rte_ether_addr *)&dest_eth_addr[port_id]); + printf(", "); + + /* prepare source MAC for each port. */ + rte_ether_addr_copy(&ports_eth_addr[port_id], + (struct rte_ether_addr *)(val_eth + port_id) + 1); + + /* init memory */ + if (!evdev_rsrc->per_port_pool) { + /* port_id = 0; this is *not* signifying the first port, + * rather, it signifies that port_id is ignored. + */ + nb_mbuf = RTE_MAX(nb_ports * nb_rxd + + nb_ports * nb_txd + + nb_ports * nb_lcores * + MAX_PKT_BURST + + nb_lcores * MEMPOOL_CACHE_SIZE, + 8192u); + ret = init_mem(0, nb_mbuf); + } else { + nb_mbuf = RTE_MAX(nb_rxd + nb_rxd + + nb_lcores * MAX_PKT_BURST + + nb_lcores * MEMPOOL_CACHE_SIZE, + 8192u); + ret = init_mem(port_id, nb_mbuf); + } + /* init one Rx queue per port */ + rxconf = dev_info.default_rxconf; + rxconf.offloads = local_port_conf.rxmode.offloads; + if (!evdev_rsrc->per_port_pool) + ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0, + &rxconf, evdev_rsrc->pkt_pool[0][0]); + else + ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0, + &rxconf, + evdev_rsrc->pkt_pool[port_id][0]); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_rx_queue_setup: err=%d, " + "port=%d\n", ret, port_id); + + /* init one Tx queue per port */ + txconf = dev_info.default_txconf; + txconf.offloads = local_port_conf.txmode.offloads; + ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_tx_queue_setup: err=%d, " + "port=%d\n", ret, port_id); + } +} + static void l3fwd_eventdev_capability_setup(void) { @@ -155,7 +279,7 @@ l3fwd_eventdev_setup(uint16_t ethdev_count) } void -l3fwd_eventdev_resource_setup(void) +l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) { struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); uint16_t ethdev_count = rte_eth_dev_count_avail(); @@ -172,6 +296,9 @@ l3fwd_eventdev_resource_setup(void) /* Setup eventdev capability callbacks */ l3fwd_eventdev_capability_setup(); + /* Ethernet device configuration */ + l3fwd_eth_dev_port_setup(port_conf); + /* Event device configuration */ l3fwd_eventdev_setup(ethdev_count); } diff --git a/examples/l3fwd/l3fwd_eventdev.h b/examples/l3fwd/l3fwd_eventdev.h index ce4e35443..f63f3d4ef 100644 --- a/examples/l3fwd/l3fwd_eventdev.h +++ b/examples/l3fwd/l3fwd_eventdev.h @@ -40,6 +40,9 @@ 
struct l3fwd_eventdev_setup_ops { struct l3fwd_eventdev_resources { uint8_t disable_implicit_release; struct l3fwd_eventdev_setup_ops ops; + struct rte_mempool * (*pkt_pool)[NB_SOCKETS]; + uint32_t port_mask; + uint8_t per_port_pool; uint8_t event_d_id; uint8_t sync_mode; uint8_t tx_mode_q; @@ -72,7 +75,7 @@ l3fwd_get_eventdev_rsrc(void) return NULL; } -void l3fwd_eventdev_resource_setup(void); +void l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf); void l3fwd_eventdev_set_generic_ops(struct l3fwd_eventdev_setup_ops *ops); void l3fwd_eventdev_set_internal_port_ops(struct l3fwd_eventdev_setup_ops *ops); diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index bd88bd4ce..0ecb0ef68 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -45,12 +45,6 @@ #include #include "l3fwd.h" - -/* - * Configurable number of RX/TX ring descriptors - */ -#define RTE_TEST_RX_DESC_DEFAULT 1024 -#define RTE_TEST_TX_DESC_DEFAULT 1024 #include "l3fwd_eventdev.h" #define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS @@ -448,7 +442,6 @@ parse_eth_dest(const char *optarg) } #define MAX_JUMBO_PKT_LEN 9600 -#define MEMPOOL_CACHE_SIZE 256 static const char short_options[] = "p:" /* portmask */ @@ -678,7 +671,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr) printf("%s%s", name, buf); } -static int +int init_mem(uint16_t portid, unsigned int nb_mbuf) { struct lcore_conf *qconf; @@ -857,14 +850,16 @@ main(int argc, char **argv) } evdev_rsrc = l3fwd_get_eventdev_rsrc(); - RTE_SET_USED(evdev_rsrc); /* parse application arguments (after the EAL ones) */ ret = parse_args(argc, argv); if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n"); + evdev_rsrc->per_port_pool = per_port_pool; + evdev_rsrc->pkt_pool = pktmbuf_pool; + evdev_rsrc->port_mask = enabled_port_mask; /* Configure eventdev parameters if user has requested */ - l3fwd_eventdev_resource_setup(); + l3fwd_eventdev_resource_setup(&port_conf); if (check_lcore_params() < 0) rte_exit(EXIT_FAILURE, "check_lcore_params failed\n"); From patchwork Thu Sep 26 10:05:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59851 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 025A01BFA2; Thu, 26 Sep 2019 12:07:42 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 271A41BF77 for ; Thu, 26 Sep 2019 12:07:39 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA72W1002599; Thu, 26 Sep 2019 03:07:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=NvPzi574gtm0MA8pazJVapiR/AmCQZySGCZqBYg0lcQ=; b=iEX2yVgEEVmZSRP1qRFSCm3hhPyhvXbg/AtKFsJf8EGoOrKEL20v5A4rAj97tbTNT0yn opd0YRK89ZUIfF604kjYYfCjsmM910VuM9uXoRqT0IDlSlGD8kXCKHMF0zNBNf1T8YEe bpElkfuy9JzYui6C08BhINoTHDIEOv8q1616p/jCeSnVIB5xlBatsBSrVZ67SNI+yaqg K/LSuXqjdcoBwQhZlNysR0Fgo5r4myvVhGkzjU9kX0oCFq75HNc49vRwahtif0wxFlyt ghOb02UT5juWJTlxXDdYyQzXZHaKZAHnWcKzxv8GPu4McBdjkhIPH8228hdyGBM73NlI rw== Received: from sc-exch03.marvell.com ([199.233.58.183]) by 
mx0a-0016f401.pphosted.com with ESMTP id 2v8ua0002f-5 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:07:38 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:21 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:20 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id 334423F7041; Thu, 26 Sep 2019 03:06:17 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Sunil Kumar Kori Date: Thu, 26 Sep 2019 15:35:52 +0530 Message-ID: <20190926100558.24348-6-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 05/11] examples/l3fwd: add event port and queue setup X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sunil Kumar Kori Add event device queue and port setup based on event eth Tx adapter capabilities. Signed-off-by: Sunil Kumar Kori --- examples/l3fwd/l3fwd_eventdev.c | 30 ++++- examples/l3fwd/l3fwd_eventdev.h | 16 +++ examples/l3fwd/l3fwd_eventdev_generic.c | 113 +++++++++++++++++- examples/l3fwd/l3fwd_eventdev_internal_port.c | 106 +++++++++++++++- 4 files changed, 261 insertions(+), 4 deletions(-) diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index f07cd4b31..f5ac3ccce 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -215,7 +215,6 @@ l3fwd_eventdev_capability_setup(void) l3fwd_eventdev_set_internal_port_ops(&evdev_rsrc->ops); } - static uint32_t l3fwd_eventdev_setup(uint16_t ethdev_count) { @@ -267,6 +266,7 @@ l3fwd_eventdev_setup(uint16_t ethdev_count) num_workers = dev_info.max_event_ports; event_d_conf.nb_event_ports = num_workers; + evdev_rsrc->evp.nb_ports = num_workers; evdev_rsrc->has_burst = !!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE); @@ -278,11 +278,31 @@ l3fwd_eventdev_setup(uint16_t ethdev_count) return event_queue_cfg; } +int +l3fwd_get_free_event_port(struct l3fwd_eventdev_resources *evdev_rsrc) +{ + static int index; + int port_id; + + rte_spinlock_lock(&evdev_rsrc->evp.lock); + if (index >= evdev_rsrc->evp.nb_ports) { + printf("No free event port is available\n"); + return -1; + } + + port_id = evdev_rsrc->evp.event_p_id[index]; + index++; + rte_spinlock_unlock(&evdev_rsrc->evp.lock); + + return port_id; +} + void l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) { struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); uint16_t ethdev_count = rte_eth_dev_count_avail(); + uint32_t event_queue_cfg; int32_t ret; /* Parse eventdev command line options */ @@ -300,5 +320,11 @@ l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) l3fwd_eth_dev_port_setup(port_conf); /* Event device configuration */ - l3fwd_eventdev_setup(ethdev_count); + event_queue_cfg = 
l3fwd_eventdev_setup(ethdev_count); + + /* Event queue configuration */ + evdev_rsrc->ops.event_queue_setup(ethdev_count, event_queue_cfg); + + /* Event port configuration */ + evdev_rsrc->ops.event_port_setup(); } diff --git a/examples/l3fwd/l3fwd_eventdev.h b/examples/l3fwd/l3fwd_eventdev.h index f63f3d4ef..2640d6cec 100644 --- a/examples/l3fwd/l3fwd_eventdev.h +++ b/examples/l3fwd/l3fwd_eventdev.h @@ -29,6 +29,17 @@ typedef void (*event_port_setup_cb)(void); typedef void (*service_setup_cb)(void); typedef int (*event_loop_cb)(void *); +struct l3fwd_eventdev_queues { + uint8_t *event_q_id; + uint8_t nb_queues; +}; + +struct l3fwd_eventdev_ports { + uint8_t *event_p_id; + uint8_t nb_ports; + rte_spinlock_t lock; +}; + struct l3fwd_eventdev_setup_ops { event_queue_setup_cb event_queue_setup; event_port_setup_cb event_port_setup; @@ -38,14 +49,18 @@ struct l3fwd_eventdev_setup_ops { }; struct l3fwd_eventdev_resources { + struct rte_event_port_conf def_p_conf; uint8_t disable_implicit_release; struct l3fwd_eventdev_setup_ops ops; struct rte_mempool * (*pkt_pool)[NB_SOCKETS]; + struct l3fwd_eventdev_queues evq; + struct l3fwd_eventdev_ports evp; uint32_t port_mask; uint8_t per_port_pool; uint8_t event_d_id; uint8_t sync_mode; uint8_t tx_mode_q; + uint8_t deq_depth; uint8_t has_burst; uint8_t enabled; uint8_t nb_args; @@ -76,6 +91,7 @@ l3fwd_get_eventdev_rsrc(void) } void l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf); +int l3fwd_get_free_event_port(struct l3fwd_eventdev_resources *eventdev_rsrc); void l3fwd_eventdev_set_generic_ops(struct l3fwd_eventdev_setup_ops *ops); void l3fwd_eventdev_set_internal_port_ops(struct l3fwd_eventdev_setup_ops *ops); diff --git a/examples/l3fwd/l3fwd_eventdev_generic.c b/examples/l3fwd/l3fwd_eventdev_generic.c index 35e655fc0..4aec0e403 100644 --- a/examples/l3fwd/l3fwd_eventdev_generic.c +++ b/examples/l3fwd/l3fwd_eventdev_generic.c @@ -5,8 +5,119 @@ #include "l3fwd.h" #include "l3fwd_eventdev.h" +static void +l3fwd_event_port_setup_generic(void) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + uint8_t event_d_id = evdev_rsrc->event_d_id; + struct rte_event_port_conf event_p_conf = { + .dequeue_depth = 32, + .enqueue_depth = 32, + .new_event_threshold = 4096 + }; + struct rte_event_port_conf def_p_conf; + uint8_t event_p_id; + int32_t ret; + + evdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->evp.nb_ports); + if (!evdev_rsrc->evp.event_p_id) + rte_exit(EXIT_FAILURE, " No space is available"); + + memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf)); + rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf); + + if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold) + event_p_conf.new_event_threshold = + def_p_conf.new_event_threshold; + + if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth) + event_p_conf.dequeue_depth = def_p_conf.dequeue_depth; + + if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth) + event_p_conf.enqueue_depth = def_p_conf.enqueue_depth; + + event_p_conf.disable_implicit_release = + evdev_rsrc->disable_implicit_release; + evdev_rsrc->deq_depth = def_p_conf.dequeue_depth; + + for (event_p_id = 0; event_p_id < evdev_rsrc->evp.nb_ports; + event_p_id++) { + ret = rte_event_port_setup(event_d_id, event_p_id, + &event_p_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event port %d\n", + event_p_id); + } + + ret = rte_event_port_link(event_d_id, event_p_id, + evdev_rsrc->evq.event_q_id, + NULL, + 
evdev_rsrc->evq.nb_queues - 1); + if (ret != (evdev_rsrc->evq.nb_queues - 1)) { + rte_exit(EXIT_FAILURE, "Error in linking event port %d " + "to event queue", event_p_id); + } + evdev_rsrc->evp.event_p_id[event_p_id] = event_p_id; + } + /* init spinlock */ + rte_spinlock_init(&evdev_rsrc->evp.lock); + + evdev_rsrc->def_p_conf = event_p_conf; +} + +static void +l3fwd_event_queue_setup_generic(uint16_t ethdev_count, + uint32_t event_queue_cfg) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + uint8_t event_d_id = evdev_rsrc->event_d_id; + struct rte_event_queue_conf event_q_conf = { + .nb_atomic_flows = 1024, + .nb_atomic_order_sequences = 1024, + .event_queue_cfg = event_queue_cfg, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL + }; + struct rte_event_queue_conf def_q_conf; + uint8_t event_q_id; + int32_t ret; + + event_q_conf.schedule_type = evdev_rsrc->sync_mode; + evdev_rsrc->evq.nb_queues = ethdev_count + 1; + evdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->evq.nb_queues); + if (!evdev_rsrc->evq.event_q_id) + rte_exit(EXIT_FAILURE, "Memory allocation failure"); + + rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf); + if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows) + event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows; + + for (event_q_id = 0; event_q_id < (evdev_rsrc->evq.nb_queues - 1); + event_q_id++) { + ret = rte_event_queue_setup(event_d_id, event_q_id, + &event_q_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event queue"); + } + evdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; + } + + event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK; + event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST, + ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event queue for Tx adapter"); + } + evdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; +} + void l3fwd_eventdev_set_generic_ops(struct l3fwd_eventdev_setup_ops *ops) { - RTE_SET_USED(ops); + ops->event_queue_setup = l3fwd_event_queue_setup_generic; + ops->event_port_setup = l3fwd_event_port_setup_generic; } diff --git a/examples/l3fwd/l3fwd_eventdev_internal_port.c b/examples/l3fwd/l3fwd_eventdev_internal_port.c index d40185862..363e37899 100644 --- a/examples/l3fwd/l3fwd_eventdev_internal_port.c +++ b/examples/l3fwd/l3fwd_eventdev_internal_port.c @@ -5,9 +5,113 @@ #include "l3fwd.h" #include "l3fwd_eventdev.h" +static void +l3fwd_event_port_setup_internal_port(void) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + uint8_t event_d_id = evdev_rsrc->event_d_id; + struct rte_event_port_conf event_p_conf = { + .dequeue_depth = 32, + .enqueue_depth = 32, + .new_event_threshold = 4096 + }; + struct rte_event_port_conf def_p_conf; + uint8_t event_p_id; + int32_t ret; + + evdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->evp.nb_ports); + if (!evdev_rsrc->evp.event_p_id) + rte_exit(EXIT_FAILURE, + "Failed to allocate memory for Event Ports"); + + rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf); + if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold) + event_p_conf.new_event_threshold = + def_p_conf.new_event_threshold; + + if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth) + event_p_conf.dequeue_depth = def_p_conf.dequeue_depth; + + if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth) + event_p_conf.enqueue_depth = 
def_p_conf.enqueue_depth; + + event_p_conf.disable_implicit_release = + evdev_rsrc->disable_implicit_release; + + for (event_p_id = 0; event_p_id < evdev_rsrc->evp.nb_ports; + event_p_id++) { + ret = rte_event_port_setup(event_d_id, event_p_id, + &event_p_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event port %d\n", + event_p_id); + } + + ret = rte_event_port_link(event_d_id, event_p_id, NULL, + NULL, 0); + if (ret < 0) { + rte_exit(EXIT_FAILURE, "Error in linking event port %d " + "to event queue", event_p_id); + } + evdev_rsrc->evp.event_p_id[event_p_id] = event_p_id; + + /* init spinlock */ + rte_spinlock_init(&evdev_rsrc->evp.lock); + } + + evdev_rsrc->def_p_conf = event_p_conf; +} + +static void +l3fwd_event_queue_setup_internal_port(uint16_t ethdev_count, + uint32_t event_queue_cfg) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + uint8_t event_d_id = evdev_rsrc->event_d_id; + struct rte_event_queue_conf event_q_conf = { + .nb_atomic_flows = 1024, + .nb_atomic_order_sequences = 1024, + .event_queue_cfg = event_queue_cfg, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL + }; + struct rte_event_queue_conf def_q_conf; + uint8_t event_q_id = 0; + int32_t ret; + + rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf); + + if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows) + event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows; + + if (def_q_conf.nb_atomic_order_sequences < + event_q_conf.nb_atomic_order_sequences) + event_q_conf.nb_atomic_order_sequences = + def_q_conf.nb_atomic_order_sequences; + + event_q_conf.event_queue_cfg = event_queue_cfg; + event_q_conf.schedule_type = evdev_rsrc->sync_mode; + evdev_rsrc->evq.nb_queues = ethdev_count; + evdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->evq.nb_queues); + if (!evdev_rsrc->evq.event_q_id) + rte_exit(EXIT_FAILURE, "Memory allocation failure"); + + for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) { + ret = rte_event_queue_setup(event_d_id, event_q_id, + &event_q_conf); + if (ret < 0) { + rte_exit(EXIT_FAILURE, + "Error in configuring event queue"); + } + evdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; + } +} void l3fwd_eventdev_set_internal_port_ops(struct l3fwd_eventdev_setup_ops *ops) { - RTE_SET_USED(ops); + ops->event_queue_setup = l3fwd_event_queue_setup_internal_port; + ops->event_port_setup = l3fwd_event_port_setup_internal_port; } From patchwork Thu Sep 26 10:05:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59846 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9DCC41BF92; Thu, 26 Sep 2019 12:06:39 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id E9B3E1BF8B for ; Thu, 26 Sep 2019 12:06:35 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA4fID032589; Thu, 26 Sep 2019 03:06:35 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=NnQm9UZspBVapoLuHUB5f9rhUSQOMhoJWA15bsZaxls=; 
b=mwepzvKncov7FPGxhSFImhD5h2rz9O4aGfrkmQd6heuB5Itb/bFlnfPg1oIqTEGcAEuy eQoE9ZrcOlx7+4We1KSaRjcuXnWQ0ewOOendicl7KsUw4LqnAtfoOcdr1FQ0d1tr5uSX dimv4e+96gwUjJrYRBmzrVo0nd4kbusWAJPsVRYULafNz6gtU1B6RpNXg4k+RUBMz8jl zUF2C89NSlvk1wQ+SZEtJO+ivjgZhTFo2leD4v69iVonym7uPbn7nhZTKTA1slD+vC8I Bv3tWwe0hp1jHTD+iGgGQugvpTwfVlBFV9KCdwrOa6jMqCeRFdC7TM3Gye+H2Iy6YQP+ 9w== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr1b5-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:35 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:25 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:25 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id F2AA33F7040; Thu, 26 Sep 2019 03:06:21 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Pavan Nikhilesh , "Sunil Kumar Kori" Date: Thu, 26 Sep 2019 15:35:53 +0530 Message-ID: <20190926100558.24348-7-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add event eth Rx/Tx adapter setup for both generic and internal port event device pipelines. 
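As a rough sketch (an assumption, not code from this series), this is how a
worker could hand packets to the Tx side once the adapters below are in place.
With internal-port capability the event is enqueued straight to the Tx adapter
via rte_event_eth_tx_adapter_enqueue() (four-argument form as of DPDK 19.08;
later releases add a flags argument), otherwise it is forwarded to the last,
SINGLE_LINK event queue drained by the Tx adapter's service core. The
worker_tx() helper name is hypothetical.

/* Hypothetical worker-side Tx handoff built on the adapters set up below. */
#include <rte_event_eth_tx_adapter.h>
#include <rte_eventdev.h>

#include "l3fwd_eventdev.h"

static inline void
worker_tx(struct l3fwd_eventdev_resources *rsrc, uint8_t ev_port,
	  struct rte_event *ev)
{
	/* Transmit on Tx queue 0 of the destination ethdev port. */
	rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);

	if (!rsrc->tx_mode_q) {
		/* Internal port: the driver transmits during enqueue. */
		while (!rte_event_eth_tx_adapter_enqueue(rsrc->event_d_id,
							 ev_port, ev, 1))
			;
	} else {
		/* Generic: forward to the SINGLE_LINK queue serviced by the
		 * Tx adapter's service core.
		 */
		ev->queue_id = rsrc->evq.event_q_id[rsrc->evq.nb_queues - 1];
		ev->op = RTE_EVENT_OP_FORWARD;
		while (!rte_event_enqueue_burst(rsrc->event_d_id, ev_port,
						ev, 1))
			;
	}
}
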
Signed-off-by: Sunil Kumar Kori Signed-off-by: Pavan Nikhilesh --- examples/l3fwd/l3fwd_eventdev.c | 3 + examples/l3fwd/l3fwd_eventdev.h | 13 +++ examples/l3fwd/l3fwd_eventdev_generic.c | 99 +++++++++++++++++++ examples/l3fwd/l3fwd_eventdev_internal_port.c | 80 +++++++++++++++ 4 files changed, 195 insertions(+) diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index f5ac3ccce..031705b68 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -327,4 +327,7 @@ l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) /* Event port configuration */ evdev_rsrc->ops.event_port_setup(); + + /* Rx/Tx adapters configuration */ + evdev_rsrc->ops.adapter_setup(ethdev_count); } diff --git a/examples/l3fwd/l3fwd_eventdev.h b/examples/l3fwd/l3fwd_eventdev.h index 2640d6cec..127bb7f42 100644 --- a/examples/l3fwd/l3fwd_eventdev.h +++ b/examples/l3fwd/l3fwd_eventdev.h @@ -7,6 +7,7 @@ #include #include +#include #include #include #include @@ -40,6 +41,16 @@ struct l3fwd_eventdev_ports { rte_spinlock_t lock; }; +struct l3fwd_eventdev_rx_adptr { + uint8_t nb_rx_adptr; + uint8_t *rx_adptr; +}; + +struct l3fwd_eventdev_tx_adptr { + uint8_t nb_tx_adptr; + uint8_t *tx_adptr; +}; + struct l3fwd_eventdev_setup_ops { event_queue_setup_cb event_queue_setup; event_port_setup_cb event_port_setup; @@ -50,6 +61,8 @@ struct l3fwd_eventdev_setup_ops { struct l3fwd_eventdev_resources { struct rte_event_port_conf def_p_conf; + struct l3fwd_eventdev_rx_adptr rx_adptr; + struct l3fwd_eventdev_tx_adptr tx_adptr; uint8_t disable_implicit_release; struct l3fwd_eventdev_setup_ops ops; struct rte_mempool * (*pkt_pool)[NB_SOCKETS]; diff --git a/examples/l3fwd/l3fwd_eventdev_generic.c b/examples/l3fwd/l3fwd_eventdev_generic.c index 4aec0e403..659a152b6 100644 --- a/examples/l3fwd/l3fwd_eventdev_generic.c +++ b/examples/l3fwd/l3fwd_eventdev_generic.c @@ -115,9 +115,108 @@ l3fwd_event_queue_setup_generic(uint16_t ethdev_count, evdev_rsrc->evq.event_q_id[event_q_id] = event_q_id; } +static void +l3fwd_rx_tx_adapter_setup_generic(uint16_t ethdev_count) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = { + .rx_queue_flags = 0, + .ev = { + .queue_id = 0, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + } + }; + uint8_t event_d_id = evdev_rsrc->event_d_id; + uint8_t rx_adptr_id = 0; + uint8_t tx_adptr_id = 0; + uint8_t tx_port_id = 0; + int32_t ret, i; + + /* Rx adapter setup */ + evdev_rsrc->rx_adptr.nb_rx_adptr = 1; + evdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->rx_adptr.nb_rx_adptr); + if (!evdev_rsrc->rx_adptr.rx_adptr) { + free(evdev_rsrc->evp.event_p_id); + free(evdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memery for Rx adapter"); + } + + ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id, + &evdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, "failed to create rx adapter"); + + eth_q_conf.ev.sched_type = evdev_rsrc->sync_mode; + for (i = 0; i < ethdev_count; i++) { + /* Configure user requested sync mode */ + eth_q_conf.ev.queue_id = evdev_rsrc->evq.event_q_id[i]; + ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, i, -1, + ð_q_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "Failed to add queues to Rx adapter"); + } + + ret = rte_event_eth_rx_adapter_start(rx_adptr_id); + if (ret) + rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed", + rx_adptr_id); + + evdev_rsrc->rx_adptr.rx_adptr[0] = 
rx_adptr_id; + + /* Tx adapter setup */ + evdev_rsrc->tx_adptr.nb_tx_adptr = 1; + evdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->tx_adptr.nb_tx_adptr); + if (!evdev_rsrc->tx_adptr.tx_adptr) { + free(evdev_rsrc->rx_adptr.rx_adptr); + free(evdev_rsrc->evp.event_p_id); + free(evdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memery for Rx adapter"); + } + + ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id, + &evdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, "failed to create tx adapter[%d]", + tx_adptr_id); + + for (i = 0; i < ethdev_count; i++) { + ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, i, -1); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to add queues to Tx adapter"); + } + + ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id); + if (ret) + rte_exit(EXIT_FAILURE, + "Failed to get Tx adapter port id: %d\n", ret); + + ret = rte_event_port_link(event_d_id, tx_port_id, + &evdev_rsrc->evq.event_q_id[ + evdev_rsrc->evq.nb_queues - 1], + NULL, 1); + if (ret != 1) + rte_exit(EXIT_FAILURE, + "Unable to link Tx adapter port to Tx queue:err = %d", + ret); + + ret = rte_event_eth_tx_adapter_start(tx_adptr_id); + if (ret) + rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed", + tx_adptr_id); + + evdev_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id; +} + void l3fwd_eventdev_set_generic_ops(struct l3fwd_eventdev_setup_ops *ops) { ops->event_queue_setup = l3fwd_event_queue_setup_generic; ops->event_port_setup = l3fwd_event_port_setup_generic; + ops->adapter_setup = l3fwd_rx_tx_adapter_setup_generic; } diff --git a/examples/l3fwd/l3fwd_eventdev_internal_port.c b/examples/l3fwd/l3fwd_eventdev_internal_port.c index 363e37899..811c99983 100644 --- a/examples/l3fwd/l3fwd_eventdev_internal_port.c +++ b/examples/l3fwd/l3fwd_eventdev_internal_port.c @@ -109,9 +109,89 @@ l3fwd_event_queue_setup_internal_port(uint16_t ethdev_count, } } +static void +l3fwd_rx_tx_adapter_setup_internal_port(uint16_t ethdev_count) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = { + .rx_queue_flags = 0, + .ev = { + .queue_id = 0, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + } + }; + uint8_t event_d_id = evdev_rsrc->event_d_id; + int32_t ret, i; + + evdev_rsrc->rx_adptr.nb_rx_adptr = ethdev_count; + evdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->rx_adptr.nb_rx_adptr); + if (!evdev_rsrc->rx_adptr.rx_adptr) { + free(evdev_rsrc->evp.event_p_id); + free(evdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memery for Rx adapter"); + } + + for (i = 0; i < ethdev_count; i++) { + ret = rte_event_eth_rx_adapter_create(i, event_d_id, + &evdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to create rx adapter[%d]", i); + + /* Configure user requested sync mode */ + eth_q_conf.ev.queue_id = evdev_rsrc->evq.event_q_id[i]; + eth_q_conf.ev.sched_type = evdev_rsrc->sync_mode; + ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, ð_q_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "Failed to add queues to Rx adapter"); + + ret = rte_event_eth_rx_adapter_start(i); + if (ret) + rte_exit(EXIT_FAILURE, + "Rx adapter[%d] start failed", i); + + evdev_rsrc->rx_adptr.rx_adptr[i] = i; + } + + evdev_rsrc->tx_adptr.nb_tx_adptr = ethdev_count; + evdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) * + evdev_rsrc->tx_adptr.nb_tx_adptr); + if (!evdev_rsrc->tx_adptr.tx_adptr) { + 
free(evdev_rsrc->rx_adptr.rx_adptr); + free(evdev_rsrc->evp.event_p_id); + free(evdev_rsrc->evq.event_q_id); + rte_exit(EXIT_FAILURE, + "failed to allocate memery for Rx adapter"); + } + + for (i = 0; i < ethdev_count; i++) { + ret = rte_event_eth_tx_adapter_create(i, event_d_id, + &evdev_rsrc->def_p_conf); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to create tx adapter[%d]", i); + + ret = rte_event_eth_tx_adapter_queue_add(i, i, -1); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to add queues to Tx adapter"); + + ret = rte_event_eth_tx_adapter_start(i); + if (ret) + rte_exit(EXIT_FAILURE, + "Tx adapter[%d] start failed", i); + + evdev_rsrc->tx_adptr.tx_adptr[i] = i; + } +} + void l3fwd_eventdev_set_internal_port_ops(struct l3fwd_eventdev_setup_ops *ops) { ops->event_queue_setup = l3fwd_event_queue_setup_internal_port; ops->event_port_setup = l3fwd_event_port_setup_internal_port; + ops->adapter_setup = l3fwd_rx_tx_adapter_setup_internal_port; } From patchwork Thu Sep 26 10:05:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59844 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2056D1BF7C; Thu, 26 Sep 2019 12:06:33 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 670E31BF34 for ; Thu, 26 Sep 2019 12:06:31 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA4gPc032601; Thu, 26 Sep 2019 03:06:30 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=evXeL3sHJgZvj8utDqwCYuHIfbTP7jvWoLMdl66aowk=; b=bQPHpifu8Ha9CzeB7byh5KEKE2xN3Bo0m4Kj4tSXaTysvqhx5ZqifYX7bGKW/oKfT+bN sO+gOKbVdJBS9PzAzNRt+HQKEtwXHTO5jmiRqzFN6kJo38ezggtFj109bHl4KuFwW9Kk epVn9tx6eIQH2R6kst1jURtn7jtE6DQ/yfMazH1P5dCNVCqsmIXJHdXWfN28wzbNy9bA UvvH7+H2GCdL4lvi/YSJaOOmIfMus3ZX29G8ewcpDp5VuEWw/denH0w/Zs/E18EovCJ8 9QNOhZPw0S1lxhHWKco7glWE701bTych2TPbWXvRQE6jX50ksqh087Z9X3H4YUhPeT0B hQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr1aw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:30 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:28 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:28 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id 043993F7041; Thu, 26 Sep 2019 03:06:25 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Pavan Nikhilesh Date: Thu, 26 Sep 2019 15:35:54 +0530 Message-ID: <20190926100558.24348-8-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: 
vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 07/11] examples/l3fwd: add service core setup based on caps X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add service core setup when eventdev and Rx/Tx adapter don't have internal port capability. Signed-off-by: Pavan Nikhilesh --- examples/l3fwd/l3fwd_eventdev.c | 5 ++ examples/l3fwd/main.c | 93 +++++++++++++++++++++++++++++++++ 2 files changed, 98 insertions(+) diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index 031705b68..4863f0a68 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -330,4 +330,9 @@ l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) /* Rx/Tx adapters configuration */ evdev_rsrc->ops.adapter_setup(ethdev_count); + + /* Start event device */ + ret = rte_event_dev_start(evdev_rsrc->event_d_id); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error in starting eventdev"); } diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index 0ecb0ef68..8fec381ef 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -817,6 +817,93 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid) return 0; } +static inline int +l3fwd_service_enable(uint32_t service_id) +{ + uint8_t min_service_count = UINT8_MAX; + uint32_t slcore_array[RTE_MAX_LCORE]; + unsigned int slcore = 0; + uint8_t service_count; + int32_t slcore_count; + + if (!rte_service_lcore_count()) + return -ENOENT; + + slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE); + if (slcore_count < 0) + return -ENOENT; + /* Get the core which has least number of services running. 
*/ + while (slcore_count--) { + /* Reset default mapping */ + rte_service_map_lcore_set(service_id, + slcore_array[slcore_count], 0); + service_count = rte_service_lcore_count_services( + slcore_array[slcore_count]); + if (service_count < min_service_count) { + slcore = slcore_array[slcore_count]; + min_service_count = service_count; + } + } + if (rte_service_map_lcore_set(service_id, slcore, 1)) + return -ENOENT; + rte_service_lcore_start(slcore); + + return 0; +} + +static void +l3fwd_eventdev_service_setup(void) +{ + struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + struct rte_event_dev_info evdev_info; + uint32_t service_id, caps; + int ret, i; + + rte_event_dev_info_get(evdev_rsrc->event_d_id, &evdev_info); + if (evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) { + ret = rte_event_dev_service_id_get(evdev_rsrc->event_d_id, + &service_id); + if (ret != -ESRCH && ret != 0) + rte_exit(EXIT_FAILURE, + "Error in starting eventdev service\n"); + l3fwd_service_enable(service_id); + } + + for (i = 0; i < evdev_rsrc->rx_adptr.nb_rx_adptr; i++) { + ret = rte_event_eth_rx_adapter_caps_get(evdev_rsrc->event_d_id, + evdev_rsrc->rx_adptr.rx_adptr[i], &caps); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Failed to get Rx adapter[%d] caps\n", + evdev_rsrc->rx_adptr.rx_adptr[i]); + ret = rte_event_eth_rx_adapter_service_id_get( + evdev_rsrc->event_d_id, + &service_id); + if (ret != -ESRCH && ret != 0) + rte_exit(EXIT_FAILURE, + "Error in starting Rx adapter[%d] service\n", + evdev_rsrc->rx_adptr.rx_adptr[i]); + l3fwd_service_enable(service_id); + } + + for (i = 0; i < evdev_rsrc->tx_adptr.nb_tx_adptr; i++) { + ret = rte_event_eth_tx_adapter_caps_get(evdev_rsrc->event_d_id, + evdev_rsrc->tx_adptr.tx_adptr[i], &caps); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Failed to get Rx adapter[%d] caps\n", + evdev_rsrc->tx_adptr.tx_adptr[i]); + ret = rte_event_eth_tx_adapter_service_id_get( + evdev_rsrc->event_d_id, + &service_id); + if (ret != -ESRCH && ret != 0) + rte_exit(EXIT_FAILURE, + "Error in starting Rx adapter[%d] service\n", + evdev_rsrc->tx_adptr.tx_adptr[i]); + l3fwd_service_enable(service_id); + } +} + int main(int argc, char **argv) { @@ -860,6 +947,8 @@ main(int argc, char **argv) evdev_rsrc->port_mask = enabled_port_mask; /* Configure eventdev parameters if user has requested */ l3fwd_eventdev_resource_setup(&port_conf); + if (evdev_rsrc->enabled) + goto skip_port_config; if (check_lcore_params() < 0) rte_exit(EXIT_FAILURE, "check_lcore_params failed\n"); @@ -1030,6 +1119,7 @@ main(int argc, char **argv) } } +skip_port_config: printf("\n"); /* start ports */ @@ -1054,6 +1144,9 @@ main(int argc, char **argv) rte_eth_promiscuous_enable(portid); } + if (evdev_rsrc->enabled) + l3fwd_eventdev_service_setup(); + printf("\n"); for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { From patchwork Thu Sep 26 10:05:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59845 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 399F71BF8B; Thu, 26 Sep 2019 12:06:37 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 65DD91BF8B for ; Thu, 26 Sep 2019 12:06:35 +0200 (CEST) Received: from pps.filterd 
(m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA5PsW000795; Thu, 26 Sep 2019 03:06:34 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=HuQ/S5gtPLLWTPZKCkXilRjCkCrvwJk3zwwFO2V4uMM=; b=IZBK2fZb2YyqnUTB0j5XGuHFvPFrRLPO+6ec8zWfCDWtT0xlFcbYWKQWaSehq1LcWqHR NWm/B2HtAGBfAlu/kYBua4epUCklsI/5HO/feiDq7K5a5QqVjwusBsOtgiZByAXeX35b woVyRqKB74DrD/pjd8M3QgFHnt6Zs6DFXQ+mW/L/kA/pOD5J7FfaE4r3Zu2kBeVqyszC QKvZWdX5UFioeHCo81pSHkWZhRJS7B9Ffk3dR9n7MrlTsW9rWbePHQHjm/jaAqjqzzwA OMOgTv+HQ+3WIftvD7EXMfpSHD6A8EQQNORHqUKLJKMPIPoEj3KSNbCCaVY4S7PwUb28 aw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr1b6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:34 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:32 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:32 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id BBC1F3F7043; Thu, 26 Sep 2019 03:06:29 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Pavan Nikhilesh Date: Thu, 26 Sep 2019 15:35:55 +0530 Message-ID: <20190926100558.24348-9-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 08/11] examples/l3fwd: add event lpm main loop X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add lpm main loop for handling events based on capabilities of the event device. Signed-off-by: Pavan Nikhilesh --- examples/l3fwd/l3fwd.h | 12 ++ examples/l3fwd/l3fwd_eventdev.c | 9 ++ examples/l3fwd/l3fwd_eventdev.h | 5 + examples/l3fwd/l3fwd_lpm.c | 205 ++++++++++++++++++++++++++++++++ examples/l3fwd/main.c | 10 +- 5 files changed, 237 insertions(+), 4 deletions(-) diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h index ef978ae64..2cee544a5 100644 --- a/examples/l3fwd/l3fwd.h +++ b/examples/l3fwd/l3fwd.h @@ -209,6 +209,18 @@ em_main_loop(__attribute__((unused)) void *dummy); int lpm_main_loop(__attribute__((unused)) void *dummy); +#define L3FWD_LPM_EVENT_MODE \ +LPM_FP(tx_d, 0, 0, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE) \ +LPM_FP(tx_d_burst, 0, 1, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST) \ +LPM_FP(tx_q, 1, 0, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE) \ +LPM_FP(tx_q_burst, 1, 1, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST) \ + +#define LPM_FP(_name, _f2, _f1, flags) \ +int \ +lpm_event_main_loop_ ## _name(__attribute__((unused)) void *dummy); +L3FWD_LPM_EVENT_MODE +#undef LPM_FP + /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. 
*/ void * em_get_ipv4_l3fwd_lookup_struct(const int socketid); diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index 4863f0a68..8cb12d661 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -301,6 +301,12 @@ void l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) { struct l3fwd_eventdev_resources *evdev_rsrc = l3fwd_get_eventdev_rsrc(); + const event_loop_cb lpm_event_loop[2][2] = { +#define LPM_FP(_name, _f2, _f1, flags) \ + [_f2][_f1] = lpm_event_main_loop_ ## _name, + L3FWD_LPM_EVENT_MODE +#undef LPM_FP + }; uint16_t ethdev_count = rte_eth_dev_count_avail(); uint32_t event_queue_cfg; int32_t ret; @@ -335,4 +341,7 @@ l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) ret = rte_event_dev_start(evdev_rsrc->event_d_id); if (ret < 0) rte_exit(EXIT_FAILURE, "Error in starting eventdev"); + + evdev_rsrc->ops.lpm_event_loop = lpm_event_loop[evdev_rsrc->tx_mode_q] + [evdev_rsrc->has_burst]; } diff --git a/examples/l3fwd/l3fwd_eventdev.h b/examples/l3fwd/l3fwd_eventdev.h index 127bb7f42..179a01056 100644 --- a/examples/l3fwd/l3fwd_eventdev.h +++ b/examples/l3fwd/l3fwd_eventdev.h @@ -14,6 +14,11 @@ #include "l3fwd.h" +#define L3FWD_EVENT_SINGLE 0x1 +#define L3FWD_EVENT_BURST 0x2 +#define L3FWD_EVENT_TX_DIRECT 0x4 +#define L3FWD_EVENT_TX_ENQ 0x8 + #define CMD_LINE_OPT_MODE "mode" #define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sync" diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c index 4143683cb..7d5ce5864 100644 --- a/examples/l3fwd/l3fwd_lpm.c +++ b/examples/l3fwd/l3fwd_lpm.c @@ -28,6 +28,7 @@ #include #include "l3fwd.h" +#include "l3fwd_eventdev.h" struct ipv4_l3fwd_lpm_route { uint32_t ip; @@ -254,6 +255,210 @@ lpm_main_loop(__attribute__((unused)) void *dummy) return 0; } +static __rte_always_inline void +lpm_event_loop_single(struct l3fwd_eventdev_resources *evdev_rsrc, + const uint8_t flags) +{ + const int event_p_id = l3fwd_get_free_event_port(evdev_rsrc); + const uint8_t tx_q_id = evdev_rsrc->evq.event_q_id[ + evdev_rsrc->evq.nb_queues - 1]; + const uint8_t event_d_id = evdev_rsrc->event_d_id; + struct lcore_conf *lconf; + unsigned int lcore_id; + struct rte_event ev; + + if (event_p_id < 0) + return; + + lcore_id = rte_lcore_id(); + lconf = &lcore_conf[lcore_id]; + + RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id); + while (!force_quit) { + if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0)) + continue; + + struct rte_mbuf *mbuf = ev.mbuf; + mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port); + +#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \ + || defined RTE_ARCH_PPC_64 + process_packet(mbuf, &mbuf->port); +#else + + struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, + struct rte_ether_hdr *); +#ifdef DO_RFC_1812_CHECKS + struct rte_ipv4_hdr *ipv4_hdr; + if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) { + /* Handle IPv4 headers.*/ + ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, + struct rte_ipv4_hdr *, + sizeof(struct rte_ether_hdr)); + + if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len) + < 0) { + mbuf->port = BAD_PORT; + continue; + } + /* Update time to live and header checksum */ + --(ipv4_hdr->time_to_live); + ++(ipv4_hdr->hdr_checksum); + } +#endif + /* dst addr */ + *(uint64_t *)ð_hdr->d_addr = dest_eth_addr[mbuf->port]; + + /* src addr */ + rte_ether_addr_copy(&ports_eth_addr[mbuf->port], + ð_hdr->s_addr); +#endif + if (mbuf->port == BAD_PORT) { + rte_pktmbuf_free(mbuf); + continue; + } + + if (flags & L3FWD_EVENT_TX_ENQ) { + 
ev.queue_id = tx_q_id; + ev.op = RTE_EVENT_OP_FORWARD; + while (rte_event_enqueue_burst(event_d_id, event_p_id, + &ev, 1) && !force_quit) + ; + } + + if (flags & L3FWD_EVENT_TX_DIRECT) { + rte_event_eth_tx_adapter_txq_set(mbuf, 0); + while (!rte_event_eth_tx_adapter_enqueue(event_d_id, + event_p_id, &ev, 1) && + !force_quit) + ; + } + } +} + +static __rte_always_inline void +lpm_event_loop_burst(struct l3fwd_eventdev_resources *evdev_rsrc, + const uint8_t flags) +{ + const int event_p_id = l3fwd_get_free_event_port(evdev_rsrc); + const uint8_t tx_q_id = evdev_rsrc->evq.event_q_id[ + evdev_rsrc->evq.nb_queues - 1]; + const uint8_t event_d_id = evdev_rsrc->event_d_id; + const uint16_t deq_len = evdev_rsrc->deq_depth; + struct rte_event events[MAX_PKT_BURST]; + struct lcore_conf *lconf; + unsigned int lcore_id; + int i, nb_enq, nb_deq; + + if (event_p_id < 0) + return; + + lcore_id = rte_lcore_id(); + + lconf = &lcore_conf[lcore_id]; + + RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id); + + while (!force_quit) { + /* Read events from RX queues */ + nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id, + events, deq_len, 0); + if (nb_deq == 0) { + rte_pause(); + continue; + } + + for (i = 0; i < nb_deq; i++) { + struct rte_mbuf *mbuf = events[i].mbuf; + + mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port); + +#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \ + || defined RTE_ARCH_PPC_64 + process_packet(mbuf, &mbuf->port); +#else + struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf, + struct rte_ether_hdr *); + +#ifdef DO_RFC_1812_CHECKS + struct rte_ipv4_hdr *ipv4_hdr; + if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) { + /* Handle IPv4 headers.*/ + ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, + struct rte_ipv4_hdr *, + sizeof(struct rte_ether_hdr)); + + if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len) + < 0) { + mbuf->port = BAD_PORT; + continue; + } + /* Update time to live and header checksum */ + --(ipv4_hdr->time_to_live); + ++(ipv4_hdr->hdr_checksum); + } +#endif + /* dst addr */ + *(uint64_t *)ð_hdr->d_addr = dest_eth_addr[ + mbuf->port]; + /* src addr */ + rte_ether_addr_copy(&ports_eth_addr[mbuf->port], + ð_hdr->s_addr); +#endif + if (flags & L3FWD_EVENT_TX_ENQ) { + events[i].queue_id = tx_q_id; + events[i].op = RTE_EVENT_OP_FORWARD; + } + + if (flags & L3FWD_EVENT_TX_DIRECT) + rte_event_eth_tx_adapter_txq_set(mbuf, 0); + } + + if (flags & L3FWD_EVENT_TX_ENQ) { + nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id, + events, nb_deq); + while (nb_enq < nb_deq && !force_quit) + nb_enq += rte_event_enqueue_burst(event_d_id, + event_p_id, events + nb_enq, + nb_deq - nb_enq); + } + + if (flags & L3FWD_EVENT_TX_DIRECT) { + nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id, + event_p_id, events, nb_deq); + while (nb_enq < nb_deq && !force_quit) + nb_enq += rte_event_eth_tx_adapter_enqueue( + event_d_id, event_p_id, + events + nb_enq, + nb_deq - nb_enq); + } + } +} + +static __rte_always_inline void +lpm_event_loop(struct l3fwd_eventdev_resources *evdev_rsrc, + const uint8_t flags) +{ + if (flags & L3FWD_EVENT_SINGLE) + lpm_event_loop_single(evdev_rsrc, flags); + if (flags & L3FWD_EVENT_BURST) + lpm_event_loop_burst(evdev_rsrc, flags); +} + +#define LPM_FP(_name, _f2, _f1, flags) \ +int __rte_noinline \ +lpm_event_main_loop_ ## _name(__attribute__((unused)) void *dummy) \ +{ \ + struct l3fwd_eventdev_resources *evdev_rsrc = \ + l3fwd_get_eventdev_rsrc(); \ + \ + lpm_event_loop(evdev_rsrc, flags); \ + return 0; \ +} + +L3FWD_LPM_EVENT_MODE +#undef 
LPM_FP + void setup_lpm(const int socketid) { diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index 8fec381ef..dd371b945 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -942,13 +942,18 @@ main(int argc, char **argv) if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n"); + /* Setup function pointers for lookup method. */ + setup_l3fwd_lookup_tables(); + evdev_rsrc->per_port_pool = per_port_pool; evdev_rsrc->pkt_pool = pktmbuf_pool; evdev_rsrc->port_mask = enabled_port_mask; /* Configure eventdev parameters if user has requested */ l3fwd_eventdev_resource_setup(&port_conf); - if (evdev_rsrc->enabled) + if (evdev_rsrc->enabled) { + l3fwd_lkp.main_loop = evdev_rsrc->ops.lpm_event_loop; goto skip_port_config; + } if (check_lcore_params() < 0) rte_exit(EXIT_FAILURE, "check_lcore_params failed\n"); @@ -964,9 +969,6 @@ main(int argc, char **argv) nb_lcores = rte_lcore_count(); - /* Setup function pointers for lookup method. */ - setup_l3fwd_lookup_tables(); - /* initialize all ports */ RTE_ETH_FOREACH_DEV(portid) { struct rte_eth_conf local_port_conf = port_conf; From patchwork Thu Sep 26 10:05:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59847 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 14E811BF91; Thu, 26 Sep 2019 12:06:42 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id B530D1BF91 for ; Thu, 26 Sep 2019 12:06:38 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA5PsY000795; Thu, 26 Sep 2019 03:06:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=yrohkh5q+2uTEvGdmSErnb6c8if/TN+euiYcnHPpB7c=; b=yUXyQIghZhidQixdnhMZa38D/aFLvlPLBOXM3ZadExCbHK8TCHAgPQWFZgqq6Y2Uz9p+ jIluP3nRtYxyshGvz/7O9F5/f8Td+FZofI7LkZLhiq636khkEa2H6hvWMiRtj/bEmfpk 7AHpaEdyo+9V14Bn34qxu3nbDa94bXpUiQbu0DbTGh+aSsBkpNRAdvU8qHWsymVhtb1D pJFEI+KFAB+Ve9so5L3mzZe9BVA5PgrLWwi50ZHl3uExg4NxaDym9QFsamV2T0izSEZb ++oBorb7RmGXa10sbe+tmNloWBKXjbNJqcweP0+Cz9UqRia4/obgzIk0kdarRvuHTAQg lg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr1br-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:38 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:36 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:36 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id AC93F3F7041; Thu, 26 Sep 2019 03:06:33 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Pavan Nikhilesh Date: Thu, 26 Sep 2019 15:35:56 +0530 Message-ID: <20190926100558.24348-10-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: 
<20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 09/11] examples/l3fwd: add event em main loop X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add em main loop for handling events based on capabilities of the event device. Signed-off-by: Pavan Nikhilesh --- examples/l3fwd/l3fwd.h | 13 +++ examples/l3fwd/l3fwd_em.c | 151 +++++++++++++++++++++++++ examples/l3fwd/l3fwd_em.h | 159 +++++++++++++++++++-------- examples/l3fwd/l3fwd_em_hlm.h | 131 ++++++++++++++++++++++ examples/l3fwd/l3fwd_em_sequential.h | 26 +++++ examples/l3fwd/l3fwd_eventdev.c | 9 ++ examples/l3fwd/main.c | 5 +- 7 files changed, 447 insertions(+), 47 deletions(-) diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h index 2cee544a5..ff1f14225 100644 --- a/examples/l3fwd/l3fwd.h +++ b/examples/l3fwd/l3fwd.h @@ -221,6 +221,19 @@ lpm_event_main_loop_ ## _name(__attribute__((unused)) void *dummy); L3FWD_LPM_EVENT_MODE #undef LPM_FP +#define L3FWD_EM_EVENT_MODE \ +EM_FP(tx_d, 0, 0, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE) \ +EM_FP(tx_d_burst, 0, 1, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST) \ +EM_FP(tx_q, 1, 0, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE) \ +EM_FP(tx_q_burst, 1, 1, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST) \ + +#define EM_FP(_name, _f2, _f1, flags) \ +int \ +em_event_main_loop_ ## _name(__attribute__((unused)) void *dummy); +L3FWD_EM_EVENT_MODE +#undef EM_FP + + /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. 
*/ void * em_get_ipv4_l3fwd_lookup_struct(const int socketid); diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c index 74a7c8fa4..e572d5b95 100644 --- a/examples/l3fwd/l3fwd_em.c +++ b/examples/l3fwd/l3fwd_em.c @@ -26,6 +26,7 @@ #include #include "l3fwd.h" +#include "l3fwd_eventdev.h" #if defined(RTE_ARCH_X86) || defined(RTE_MACHINE_CPUFLAG_CRC32) #define EM_HASH_CRC 1 @@ -699,6 +700,156 @@ em_main_loop(__attribute__((unused)) void *dummy) return 0; } +static __rte_always_inline void +em_event_loop_single(struct l3fwd_eventdev_resources *evdev_rsrc, + const uint8_t flags) +{ + const int event_p_id = l3fwd_get_free_event_port(evdev_rsrc); + const uint8_t tx_q_id = evdev_rsrc->evq.event_q_id[ + evdev_rsrc->evq.nb_queues - 1]; + const uint8_t event_d_id = evdev_rsrc->event_d_id; + struct lcore_conf *lconf; + unsigned int lcore_id; + struct rte_event ev; + + if (event_p_id < 0) + return; + + lcore_id = rte_lcore_id(); + lconf = &lcore_conf[lcore_id]; + + RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id); + while (!force_quit) { + if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0)) + continue; + + struct rte_mbuf *mbuf = ev.mbuf; + +#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON + mbuf->port = em_get_dst_port(lconf, mbuf, mbuf->port); + process_packet(mbuf, &mbuf->port); +#else + l3fwd_em_simple_process(mbuf, lconf); +#endif + if (mbuf->port == BAD_PORT) { + rte_pktmbuf_free(mbuf); + continue; + } + + if (flags & L3FWD_EVENT_TX_ENQ) { + ev.queue_id = tx_q_id; + ev.op = RTE_EVENT_OP_FORWARD; + while (rte_event_enqueue_burst(event_d_id, event_p_id, + &ev, 1) && !force_quit) + ; + } + + if (flags & L3FWD_EVENT_TX_DIRECT) { + rte_event_eth_tx_adapter_txq_set(mbuf, 0); + while (!rte_event_eth_tx_adapter_enqueue(event_d_id, + event_p_id, &ev, 1) && + !force_quit) + ; + } + } +} + +static __rte_always_inline void +em_event_loop_burst(struct l3fwd_eventdev_resources *evdev_rsrc, + const uint8_t flags) +{ + const int event_p_id = l3fwd_get_free_event_port(evdev_rsrc); + const uint8_t tx_q_id = evdev_rsrc->evq.event_q_id[ + evdev_rsrc->evq.nb_queues - 1]; + const uint8_t event_d_id = evdev_rsrc->event_d_id; + const uint16_t deq_len = evdev_rsrc->deq_depth; + struct rte_event events[MAX_PKT_BURST]; + struct lcore_conf *lconf; + unsigned int lcore_id; + int i, nb_enq, nb_deq; + + if (event_p_id < 0) + return; + + lcore_id = rte_lcore_id(); + + lconf = &lcore_conf[lcore_id]; + + RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id); + + while (!force_quit) { + /* Read events from RX queues */ + nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id, + events, deq_len, 0); + if (nb_deq == 0) { + rte_pause(); + continue; + } + +#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON + l3fwd_em_process_events(nb_deq, (struct rte_event **)&events, + lconf); +#else + l3fwd_em_no_opt_process_events(nb_deq, + (struct rte_event **)&events, + lconf); +#endif + for (i = 0; i < nb_deq; i++) { + if (flags & L3FWD_EVENT_TX_ENQ) { + events[i].queue_id = tx_q_id; + events[i].op = RTE_EVENT_OP_FORWARD; + } + + if (flags & L3FWD_EVENT_TX_DIRECT) + rte_event_eth_tx_adapter_txq_set(events[i].mbuf, + 0); + } + + if (flags & L3FWD_EVENT_TX_ENQ) { + nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id, + events, nb_deq); + while (nb_enq < nb_deq && !force_quit) + nb_enq += rte_event_enqueue_burst(event_d_id, + event_p_id, events + nb_enq, + nb_deq - nb_enq); + } + + if (flags & L3FWD_EVENT_TX_DIRECT) { + nb_enq = 
rte_event_eth_tx_adapter_enqueue(event_d_id, + event_p_id, events, nb_deq); + while (nb_enq < nb_deq && !force_quit) + nb_enq += rte_event_eth_tx_adapter_enqueue( + event_d_id, event_p_id, + events + nb_enq, + nb_deq - nb_enq); + } + } +} + +static __rte_always_inline void +em_event_loop(struct l3fwd_eventdev_resources *evdev_rsrc, + const uint8_t flags) +{ + if (flags & L3FWD_EVENT_SINGLE) + em_event_loop_single(evdev_rsrc, flags); + if (flags & L3FWD_EVENT_BURST) + em_event_loop_burst(evdev_rsrc, flags); +} + +#define EM_FP(_name, _f2, _f1, flags) \ +int __rte_noinline \ +em_event_main_loop_ ## _name(__attribute__((unused)) void *dummy) \ +{ \ + struct l3fwd_eventdev_resources *evdev_rsrc = \ + l3fwd_get_eventdev_rsrc(); \ + \ + em_event_loop(evdev_rsrc, flags); \ + return 0; \ +} + +L3FWD_EM_EVENT_MODE +#undef EM_FP + /* * Initialize exact match (hash) parameters. */ diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h index 090c1b448..b992a21da 100644 --- a/examples/l3fwd/l3fwd_em.h +++ b/examples/l3fwd/l3fwd_em.h @@ -5,73 +5,92 @@ #ifndef __L3FWD_EM_H__ #define __L3FWD_EM_H__ -static __rte_always_inline void -l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid, - struct lcore_conf *qconf) +static __rte_always_inline uint16_t +l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid, + struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf) { - struct rte_ether_hdr *eth_hdr; struct rte_ipv4_hdr *ipv4_hdr; uint16_t dst_port; - uint32_t tcp_or_udp; - uint32_t l3_ptypes; - - eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *); - tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP); - l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK; - if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) { - /* Handle IPv4 headers.*/ - ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, - sizeof(struct rte_ether_hdr)); + /* Handle IPv4 headers.*/ + ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, + sizeof(struct rte_ether_hdr)); #ifdef DO_RFC_1812_CHECKS - /* Check to make sure the packet is valid (RFC1812) */ - if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) { - rte_pktmbuf_free(m); - return; - } + /* Check to make sure the packet is valid (RFC1812) */ + if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) { + rte_pktmbuf_free(m); + return BAD_PORT; + } #endif - dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid, - qconf->ipv4_lookup_struct); + dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid, + qconf->ipv4_lookup_struct); - if (dst_port >= RTE_MAX_ETHPORTS || + if (dst_port >= RTE_MAX_ETHPORTS || (enabled_port_mask & 1 << dst_port) == 0) - dst_port = portid; + dst_port = portid; #ifdef DO_RFC_1812_CHECKS - /* Update time to live and header checksum */ - --(ipv4_hdr->time_to_live); - ++(ipv4_hdr->hdr_checksum); + /* Update time to live and header checksum */ + --(ipv4_hdr->time_to_live); + ++(ipv4_hdr->hdr_checksum); #endif - /* dst addr */ - *(uint64_t *)ð_hdr->d_addr = dest_eth_addr[dst_port]; + /* dst addr */ + *(uint64_t *)ð_hdr->d_addr = dest_eth_addr[dst_port]; - /* src addr */ - rte_ether_addr_copy(&ports_eth_addr[dst_port], - ð_hdr->s_addr); + /* src addr */ + rte_ether_addr_copy(&ports_eth_addr[dst_port], + ð_hdr->s_addr); - send_single_packet(qconf, m, dst_port); - } else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) { - /* Handle IPv6 headers.*/ - struct rte_ipv6_hdr *ipv6_hdr; + return dst_port; +} - ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *, - sizeof(struct rte_ether_hdr)); +static __rte_always_inline uint16_t 
+l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid, + struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf) +{ + /* Handle IPv6 headers.*/ + struct rte_ipv6_hdr *ipv6_hdr; + uint16_t dst_port; - dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid, - qconf->ipv6_lookup_struct); + ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *, + sizeof(struct rte_ether_hdr)); - if (dst_port >= RTE_MAX_ETHPORTS || + dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid, + qconf->ipv6_lookup_struct); + + if (dst_port >= RTE_MAX_ETHPORTS || (enabled_port_mask & 1 << dst_port) == 0) - dst_port = portid; + dst_port = portid; + + /* dst addr */ + *(uint64_t *)ð_hdr->d_addr = dest_eth_addr[dst_port]; - /* dst addr */ - *(uint64_t *)ð_hdr->d_addr = dest_eth_addr[dst_port]; + /* src addr */ + rte_ether_addr_copy(&ports_eth_addr[dst_port], + ð_hdr->s_addr); - /* src addr */ - rte_ether_addr_copy(&ports_eth_addr[dst_port], - ð_hdr->s_addr); + return dst_port; +} +static __rte_always_inline void +l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid, + struct lcore_conf *qconf) +{ + struct rte_ether_hdr *eth_hdr; + uint16_t dst_port; + uint32_t tcp_or_udp; + uint32_t l3_ptypes; + + eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *); + tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP); + l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK; + + if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) { + dst_port = l3fwd_em_handle_ipv4(m, portid, eth_hdr, qconf); + send_single_packet(qconf, m, dst_port); + } else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) { + dst_port = l3fwd_em_handle_ipv6(m, portid, eth_hdr, qconf); send_single_packet(qconf, m, dst_port); } else { /* Free the mbuf that contains non-IPV4/IPV6 packet */ @@ -79,6 +98,25 @@ l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid, } } +static __rte_always_inline void +l3fwd_em_simple_process(struct rte_mbuf *m, struct lcore_conf *qconf) +{ + struct rte_ether_hdr *eth_hdr; + uint32_t tcp_or_udp; + uint32_t l3_ptypes; + + eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *); + tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP); + l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK; + + if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) + m->port = l3fwd_em_handle_ipv4(m, m->port, eth_hdr, qconf); + else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) + m->port = l3fwd_em_handle_ipv6(m, m->port, eth_hdr, qconf); + else + m->port = BAD_PORT; +} + /* * Buffer non-optimized handling of packets, invoked * from main_loop. @@ -108,4 +146,33 @@ l3fwd_em_no_opt_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, l3fwd_em_simple_forward(pkts_burst[j], portid, qconf); } +/* + * Buffer non-optimized handling of events, invoked + * from main_loop. + */ +static inline void +l3fwd_em_no_opt_process_events(int nb_rx, struct rte_event **events, + struct lcore_conf *qconf) +{ + int32_t j; + + /* Prefetch first packets */ + for (j = 0; j < PREFETCH_OFFSET && j < nb_rx; j++) + rte_prefetch0(rte_pktmbuf_mtod(events[j]->mbuf, void *)); + + /* + * Prefetch and forward already prefetched + * packets. 
+ */ + for (j = 0; j < (nb_rx - PREFETCH_OFFSET); j++) { + rte_prefetch0(rte_pktmbuf_mtod(events[ + j + PREFETCH_OFFSET]->mbuf, void *)); + l3fwd_em_simple_process(events[j]->mbuf, qconf); + } + + /* Forward remaining prefetched packets */ + for (; j < nb_rx; j++) + l3fwd_em_simple_process(events[j]->mbuf, qconf); +} + #endif /* __L3FWD_EM_H__ */ diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h index ad8b9ce87..79812716c 100644 --- a/examples/l3fwd/l3fwd_em_hlm.h +++ b/examples/l3fwd/l3fwd_em_hlm.h @@ -75,6 +75,60 @@ em_get_dst_port_ipv6xN(struct lcore_conf *qconf, struct rte_mbuf *m[], } } +static __rte_always_inline void +em_get_dst_port_ipv4xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[], + uint16_t dst_port[]) +{ + int i; + int32_t ret[EM_HASH_LOOKUP_COUNT]; + union ipv4_5tuple_host key[EM_HASH_LOOKUP_COUNT]; + const void *key_array[EM_HASH_LOOKUP_COUNT]; + + for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) { + get_ipv4_5tuple(m[i], mask0.x, &key[i]); + key_array[i] = &key[i]; + } + + rte_hash_lookup_bulk(qconf->ipv4_lookup_struct, &key_array[0], + EM_HASH_LOOKUP_COUNT, ret); + + for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) { + dst_port[i] = ((ret[i] < 0) ? + m[i]->port : ipv4_l3fwd_out_if[ret[i]]); + + if (dst_port[i] >= RTE_MAX_ETHPORTS || + (enabled_port_mask & 1 << dst_port[i]) == 0) + dst_port[i] = m[i]->port; + } +} + +static __rte_always_inline void +em_get_dst_port_ipv6xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[], + uint16_t dst_port[]) +{ + int i; + int32_t ret[EM_HASH_LOOKUP_COUNT]; + union ipv6_5tuple_host key[EM_HASH_LOOKUP_COUNT]; + const void *key_array[EM_HASH_LOOKUP_COUNT]; + + for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) { + get_ipv6_5tuple(m[i], mask1.x, mask2.x, &key[i]); + key_array[i] = &key[i]; + } + + rte_hash_lookup_bulk(qconf->ipv6_lookup_struct, &key_array[0], + EM_HASH_LOOKUP_COUNT, ret); + + for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) { + dst_port[i] = ((ret[i] < 0) ? + m[i]->port : ipv6_l3fwd_out_if[ret[i]]); + + if (dst_port[i] >= RTE_MAX_ETHPORTS || + (enabled_port_mask & 1 << dst_port[i]) == 0) + dst_port[i] = m[i]->port; + } +} + static __rte_always_inline uint16_t em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt, uint16_t portid) @@ -187,4 +241,81 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, send_packets_multi(qconf, pkts_burst, dst_port, nb_rx); } + +/* + * Buffer optimized handling of events, invoked + * from main_loop. + */ +static inline void +l3fwd_em_process_events(int nb_rx, struct rte_event **ev, + struct lcore_conf *qconf) +{ + int32_t i, j, pos; + uint16_t dst_port[MAX_PKT_BURST]; + struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; + + /* + * Send nb_rx - nb_rx % EM_HASH_LOOKUP_COUNT packets + * in groups of EM_HASH_LOOKUP_COUNT. 
+ */ + int32_t n = RTE_ALIGN_FLOOR(nb_rx, EM_HASH_LOOKUP_COUNT); + + for (j = 0; j < EM_HASH_LOOKUP_COUNT && j < nb_rx; j++) { + pkts_burst[j] = ev[j]->mbuf; + rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j], + struct rte_ether_hdr *) + 1); + } + + for (j = 0; j < n; j += EM_HASH_LOOKUP_COUNT) { + + uint32_t pkt_type = RTE_PTYPE_L3_MASK | + RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP; + uint32_t l3_type, tcp_or_udp; + + for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) + pkt_type &= pkts_burst[j + i]->packet_type; + + l3_type = pkt_type & RTE_PTYPE_L3_MASK; + tcp_or_udp = pkt_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP); + + for (i = 0, pos = j + EM_HASH_LOOKUP_COUNT; + i < EM_HASH_LOOKUP_COUNT && pos < nb_rx; i++, pos++) { + rte_prefetch0(rte_pktmbuf_mtod( + pkts_burst[pos], + struct rte_ether_hdr *) + 1); + } + + if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV4)) { + + em_get_dst_port_ipv4xN_events(qconf, &pkts_burst[j], + &dst_port[j]); + + } else if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV6)) { + + em_get_dst_port_ipv6xN_events(qconf, &pkts_burst[j], + &dst_port[j]); + + } else { + for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) { + pkts_burst[j + i]->port = em_get_dst_port(qconf, + pkts_burst[j + i], + pkts_burst[j + i]->port); + process_packet(pkts_burst[j + i], + &pkts_burst[j + i]->port); + } + continue; + } + processx4_step3(&pkts_burst[j], &dst_port[j]); + + for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) + pkts_burst[j + i]->port = dst_port[j + i]; + + } + + for (; j < nb_rx; j++) { + pkts_burst[j]->port = em_get_dst_port(qconf, pkts_burst[j], + pkts_burst[j]->port); + process_packet(pkts_burst[j], &pkts_burst[j]->port); + } +} #endif /* __L3FWD_EM_HLM_H__ */ diff --git a/examples/l3fwd/l3fwd_em_sequential.h b/examples/l3fwd/l3fwd_em_sequential.h index 23fe9dec8..b231b9994 100644 --- a/examples/l3fwd/l3fwd_em_sequential.h +++ b/examples/l3fwd/l3fwd_em_sequential.h @@ -95,4 +95,30 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, send_packets_multi(qconf, pkts_burst, dst_port, nb_rx); } + +/* + * Buffer optimized handling of events, invoked + * from main_loop. 
+ */ +static inline void +l3fwd_em_process_events(int nb_rx, struct rte_event **events, + struct lcore_conf *qconf) +{ + int32_t i, j; + + rte_prefetch0(rte_pktmbuf_mtod(events[0]->mbuf, + struct rte_ether_hdr *) + 1); + + for (i = 1, j = 0; j < nb_rx; i++, j++) { + struct rte_mbuf *mbuf = events[j]->mbuf; + + if (i < nb_rx) { + rte_prefetch0(rte_pktmbuf_mtod( + events[i]->mbuf, + struct rte_ether_hdr *) + 1); + } + mbuf->port = em_get_dst_port(qconf, mbuf, mbuf->port); + process_packet(mbuf, &mbuf->port); + } +} #endif /* __L3FWD_EM_SEQUENTIAL_H__ */ diff --git a/examples/l3fwd/l3fwd_eventdev.c b/examples/l3fwd/l3fwd_eventdev.c index 8cb12d661..047c04356 100644 --- a/examples/l3fwd/l3fwd_eventdev.c +++ b/examples/l3fwd/l3fwd_eventdev.c @@ -306,6 +306,12 @@ l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) [_f2][_f1] = lpm_event_main_loop_ ## _name, L3FWD_LPM_EVENT_MODE #undef LPM_FP + }; + const event_loop_cb em_event_loop[2][2] = { +#define EM_FP(_name, _f2, _f1, flags) \ + [_f2][_f1] = em_event_main_loop_ ## _name, + L3FWD_EM_EVENT_MODE +#undef EM_FP }; uint16_t ethdev_count = rte_eth_dev_count_avail(); uint32_t event_queue_cfg; @@ -344,4 +350,7 @@ l3fwd_eventdev_resource_setup(struct rte_eth_conf *port_conf) evdev_rsrc->ops.lpm_event_loop = lpm_event_loop[evdev_rsrc->tx_mode_q] [evdev_rsrc->has_burst]; + + evdev_rsrc->ops.em_event_loop = em_event_loop[evdev_rsrc->tx_mode_q] + [evdev_rsrc->has_burst]; } diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index dd371b945..e31afe045 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -951,7 +951,10 @@ main(int argc, char **argv) /* Configure eventdev parameters if user has requested */ l3fwd_eventdev_resource_setup(&port_conf); if (evdev_rsrc->enabled) { - l3fwd_lkp.main_loop = evdev_rsrc->ops.lpm_event_loop; + if (l3fwd_em_on) + l3fwd_lkp.main_loop = evdev_rsrc->ops.em_event_loop; + else + l3fwd_lkp.main_loop = evdev_rsrc->ops.lpm_event_loop; goto skip_port_config; } From patchwork Thu Sep 26 10:05:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59848 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AA0D31BFB4; Thu, 26 Sep 2019 12:06:45 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id CE4221BFAB for ; Thu, 26 Sep 2019 12:06:42 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA4gPh032601; Thu, 26 Sep 2019 03:06:42 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=5IGMOTkw+GFreDBssguXk0sm+1M8SrancnXRuY0a0Eg=; b=jtOxfJQl/6LkIDOg5Tj5Kxm7QuOKS0SrC5LObqStLxlYI4t5Or2fl/Gvv0KMbO9itfvi e/nXPdygPntWOqgDUMtbBAgJrtvc7Mqbjh71EMId5kDbR+KvHOnllK++No1O1pHJnL8R ONb+YYuT9zw2Lhyhs8q4nURULjQ5eW28BMNA9SUD3W17Oew1Lzlm73AudxkWe2cZ5Gif JeyoT9gmB1Q3yyk79kd5prozZtdQwY9Idya9lhiBl0q9g0ZVRVUrwgc8lc45cu32gpBl 4DWcizvpwILdK6CGnHcb0EFmtRAGbFWKrAc2xRr77pbq/oP810diEfnVmsqD5WsbXEfz uQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr1c7-1 (version=TLSv1.2 
cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:42 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:40 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:40 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id 91C0A3F703F; Thu, 26 Sep 2019 03:06:37 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" CC: , Pavan Nikhilesh Date: Thu, 26 Sep 2019 15:35:57 +0530 Message-ID: <20190926100558.24348-11-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 10/11] examples/l3fwd: add graceful teardown for eventdevice X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add graceful teardown that addresses both event mode and poll mode. Signed-off-by: Pavan Nikhilesh --- examples/l3fwd/main.c | 49 ++++++++++++++++++++++++++++++------------- 1 file changed, 34 insertions(+), 15 deletions(-) diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index e31afe045..d7b7fc287 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -911,7 +911,7 @@ main(int argc, char **argv) struct lcore_conf *qconf; struct rte_eth_dev_info dev_info; struct rte_eth_txconf *txconf; - int ret; + int i, ret; unsigned nb_ports; uint16_t queueid, portid; unsigned lcore_id; @@ -1166,27 +1166,46 @@ main(int argc, char **argv) } } - check_all_ports_link_status(enabled_port_mask); ret = 0; /* launch per-lcore init on every lcore */ rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { - if (rte_eal_wait_lcore(lcore_id) < 0) { - ret = -1; - break; + if (evdev_rsrc->enabled) { + for (i = 0; i < evdev_rsrc->rx_adptr.nb_rx_adptr; i++) + rte_event_eth_rx_adapter_stop( + evdev_rsrc->rx_adptr.rx_adptr[i]); + for (i = 0; i < evdev_rsrc->tx_adptr.nb_tx_adptr; i++) + rte_event_eth_tx_adapter_stop( + evdev_rsrc->tx_adptr.tx_adptr[i]); + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + rte_eth_dev_stop(portid); } - } - /* stop ports */ - RTE_ETH_FOREACH_DEV(portid) { - if ((enabled_port_mask & (1 << portid)) == 0) - continue; - printf("Closing port %d...", portid); - rte_eth_dev_stop(portid); - rte_eth_dev_close(portid); - printf(" Done\n"); + rte_eal_mp_wait_lcore(); + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + rte_eth_dev_close(portid); + } + + rte_event_dev_stop(evdev_rsrc->event_d_id); + rte_event_dev_close(evdev_rsrc->event_d_id); + + } else { + rte_eal_mp_wait_lcore(); + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + printf("Closing port %d...", portid); + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } } 
printf("Bye...\n"); From patchwork Thu Sep 26 10:05:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 59849 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 97F1F1BFB8; Thu, 26 Sep 2019 12:06:49 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 512271BFB6 for ; Thu, 26 Sep 2019 12:06:48 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id x8QA4fII032589; Thu, 26 Sep 2019 03:06:47 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=18/E0V+JrXIlXzLS1Wt1Vqe48bkdjLgHnBCjko9M+b8=; b=nfYcPYYfV0gtKc0piAakSU2AncTCVMzMQgv154B7ZWFe0EkDaTZcPPMACtIw2ms4j38L aXkbVlmrtq3eIeETAxbpRjReYzXFtDLnh8ADbgpB2XLnS9WAJMsENHdLUM/m1pYSGQMV C6/vmz9QgzZpupNX608JwtMQrX8IcLtnH6udnRdRcW3HrTQKtF/Msm+KH0K0KPYV6ah4 AwmRJ6vp0ZDpKapl5TpMvi5kmIBifz2VSc0nustB8MxrJharAzh3UZsXchJNjO9BveZe qh7Py3Ax7yODd4m27g3g+jRoOI79JR9eIMxwKPw3aYT0icQTgXadO+j1qm0SkI1SEf7L 4Q== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2v8u5dr1cy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 26 Sep 2019 03:06:47 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Thu, 26 Sep 2019 03:06:44 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Thu, 26 Sep 2019 03:06:44 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.15]) by maili.marvell.com (Postfix) with ESMTP id 4FD313F7040; Thu, 26 Sep 2019 03:06:41 -0700 (PDT) From: To: , , Marko Kovacevic , Ori Kam , Bruce Richardson , Radu Nicolau , "Tomasz Kantecki" , John McNamara CC: , Sunil Kumar Kori Date: Thu, 26 Sep 2019 15:35:58 +0530 Message-ID: <20190926100558.24348-12-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190926100558.24348-1-pbhagavatula@marvell.com> References: <20190926100558.24348-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.95,1.0.8 definitions=2019-09-26_04:2019-09-25,2019-09-26 signatures=0 Subject: [dpdk-dev] [PATCH 11/11] doc: update l3fwd user guide to support eventdev X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sunil Kumar Kori Update l3fwd user guide to include event device related information. 
Signed-off-by: Sunil Kumar Kori --- doc/guides/sample_app_ug/l3_forward.rst | 76 +++++++++++++++++++++++-- 1 file changed, 70 insertions(+), 6 deletions(-) diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst index 4cb4b18da..c4b7d5fab 100644 --- a/doc/guides/sample_app_ug/l3_forward.rst +++ b/doc/guides/sample_app_ug/l3_forward.rst @@ -4,16 +4,23 @@ L3 Forwarding Sample Application ================================ -The L3 Forwarding application is a simple example of packet processing using the DPDK. +The L3 Forwarding application is a simple example of packet processing using +DPDK to demonstrate usage of poll and event mode packet I/O mechanism. The application performs L3 forwarding. Overview -------- -The application demonstrates the use of the hash and LPM libraries in the DPDK to implement packet forwarding. -The initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`. -The main difference from the L2 Forwarding sample application is that the forwarding decision -is made based on information read from the input packet. +The application demonstrates the use of the hash and LPM libraries in the DPDK +to implement packet forwarding using poll or event mode PMDs for packet I/O. +The initialization and run-time paths are very similar to those of the +:doc:`l2_forward_real_virtual` and :doc:`l2_forward_event_real_virtual`. +The main difference from the L2 Forwarding sample application is that optionally +packet can be Rx/Tx from/to eventdev instead of port directly and forwarding +decision is made based on information read from the input packet. + +Eventdev can optionally use S/W or H/W (if supported by platform) scheduler +implementation for packet I/O based on run time parameters. The lookup method is either hash-based or LPM-based and is selected at run time. When the selected lookup method is hash-based, a hash object is used to emulate the flow classification stage. @@ -56,6 +63,8 @@ The application has a number of command line options:: [--ipv6] [--parse-ptype] [--per-port-pool] + [--mode] + [--eventq-sync] Where, @@ -86,6 +95,11 @@ Where, * ``--per-port-pool:`` Optional, set to use independent buffer pools per port. Without this option, single buffer pool is used for all ports. +* ``--mode:`` Optional, Packet transfer mode for I/O, poll or eventdev. + +* ``--eventq-sync:`` Optional, Event queue synchronization method, Ordered or Atomic. Only valid if --mode=eventdev. + + For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7 and 16-23 appear on socket 0, while cores 8-15 and 24-31 appear on socket 1. @@ -116,6 +130,51 @@ In this command: | | | | | +----------+-----------+-----------+-------------------------------------+ +To use eventdev mode with sync method **ordered** on above mentioned environment, +Following is the sample command: + +.. code-block:: console + + ./build/l3fwd -l 0-3 -n 4 -w -- -p 0x3 --eventq-sync=ordered + +or + +.. code-block:: console + + ./build/l3fwd -l 0-3 -n 4 -w -- -p 0x03 --mode=eventdev --eventq-sync=ordered + +In this command: + +* -w option whitelist the event device supported by platform. Way to pass this device may vary based on platform. + +* The --mode option defines PMD to be used for packet I/O. + +* The --eventq-sync option enables synchronization menthod of event queue so that packets will be scheduled accordingly. 
+ +If application uses S/W scheduler, it uses following DPDK services: + +* Software scheduler +* Rx adapter service function +* Tx adapter service function + +Application needs service cores to run above mentioned services. Service cores +must be provided as EAL parameters along with the --vdev=event_sw0 to enable S/W +scheduler. Following is the sample command: + +.. code-block:: console + + ./build/l3fwd -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -p 0x3 --mode=eventdev --eventq-sync=ordered + +In case of eventdev mode, *--config* option is not used for ethernet port +configuration. Instead each ethernet port will be configured with mentioned +setup: + +* Single Rx/Tx queue + +* Each Rx queue will be connected to event queue via Rx adapter. + +* Each Tx queue will be connected via Tx adapter. + Refer to the *DPDK Getting Started Guide* for general information on running applications and the Environment Abstraction Layer (EAL) options. @@ -125,7 +184,7 @@ Explanation ----------- The following sections provide some explanation of the sample application code. As mentioned in the overview section, -the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`. +the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual` and :doc:`l2_forward_event_real_virtual`. The following sections describe aspects that are specific to the L3 Forwarding sample application. Hash Initialization @@ -315,3 +374,8 @@ for LPM-based lookups is done by the get_ipv4_dst_port() function below: return ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct, rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)? next_hop : portid); } + +Eventdev Driver Initialization +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Eventdev driver initialization is same as L2 forwarding eventdev application. +Refer :doc:`l2_forward_event_real_virtual` for more details.
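
For reference, the event-mode worker added by the LPM/EM patches in this series reduces to: dequeue an event, rewrite the mbuf's destination, then either forward the event to the Tx event queue (generic pipeline) or enqueue it straight to the Tx adapter (internal port). The sketch below is a condensed, single-event view; event_worker_sketch and lookup_dst_port are illustrative stand-ins for the real loops and the LPM/EM-specific lookup helpers, and burst handling, RFC1812 checks and drop handling are omitted:

    #include <stdbool.h>
    #include <rte_eventdev.h>
    #include <rte_event_eth_tx_adapter.h>
    #include <rte_mbuf.h>

    /* Hypothetical stand-in for the LPM/EM destination-port lookup. */
    extern uint16_t lookup_dst_port(struct rte_mbuf *mbuf);
    extern volatile bool force_quit;

    /* Condensed single-event worker loop. 'tx_enq' selects forwarding to
     * the Tx service event queue (generic pipeline) versus direct Tx
     * adapter enqueue (internal event port pipeline).
     */
    static void
    event_worker_sketch(uint8_t dev_id, uint8_t port_id, uint8_t tx_q_id,
                        bool tx_enq)
    {
        struct rte_event ev;

        while (!force_quit) {
            if (!rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0))
                continue;

            ev.mbuf->port = lookup_dst_port(ev.mbuf);

            if (tx_enq) {
                /* Generic pipeline: forward to the Tx adapter's queue. */
                ev.queue_id = tx_q_id;
                ev.op = RTE_EVENT_OP_FORWARD;
                while (!rte_event_enqueue_burst(dev_id, port_id, &ev, 1) &&
                       !force_quit)
                    ;
            } else {
                /* Internal port: hand the event to the Tx adapter. */
                rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
                while (!rte_event_eth_tx_adapter_enqueue(dev_id, port_id,
                                                         &ev, 1) &&
                       !force_quit)
                    ;
            }
        }
    }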