From: Anoob Joseph
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi,
    Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru,
    Konstantin Ananyev
Date: Sun, 8 Dec 2019 18:00:44 +0530
Message-ID: <1575808249-31135-10-git-send-email-anoobj@marvell.com>
In-Reply-To: <1575808249-31135-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw

From: Lukasz Bartosik

Add eventmode support to ipsec-secgw. This uses the event helper to set
up and use the eventmode capabilities. Add driver mode inbound worker.
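Condensed, the worker registration that this patch performs looks roughly
like the sketch below. This is illustrative only: register_worker() and
drv_mode_inb_worker() are invented names for the sketch, while the
structures, capability flags and eh_launch_worker() call are the ones
added in this series.

#include "event_helper.h"

/* Illustrative worker body: dequeue events on the single link and hand
 * packets to the Tx adapter (ipsec_worker.c below has the real loop). */
static void
drv_mode_inb_worker(struct eh_event_link_info *links, uint8_t nb_links)
{
    RTE_SET_USED(links);
    RTE_SET_USED(nb_links);
}

/* Advertise the worker and its capabilities to the event helper */
static void
register_worker(struct eh_conf *conf)
{
    struct eh_app_worker_params wrkr = { {{0} }, NULL };

    wrkr.cap.burst = EH_RX_TYPE_NON_BURST;
    wrkr.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT;
    wrkr.cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER;
    wrkr.cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND;
    wrkr.worker_thread = drv_mode_inb_worker;

    /* The helper picks the worker whose capabilities match the event
     * device and the command line options on each worker lcore. */
    eh_launch_worker(conf, &wrkr, 1);
}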
Example command:
./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 --schedule-type 2 --process-mode drv --process-dir in

Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 examples/ipsec-secgw/Makefile       |   1 +
 examples/ipsec-secgw/event_helper.c |   3 +
 examples/ipsec-secgw/event_helper.h |  26 +++
 examples/ipsec-secgw/ipsec-secgw.c  | 344 +++++++++++++++++++++++++++++++++++-
 examples/ipsec-secgw/ipsec.h        |   7 +
 examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++
 examples/ipsec-secgw/meson.build    |   2 +-
 7 files changed, 555 insertions(+), 8 deletions(-)
 create mode 100644 examples/ipsec-secgw/ipsec_worker.c

diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile
index 09e3c5a..f6fd94c 100644
--- a/examples/ipsec-secgw/Makefile
+++ b/examples/ipsec-secgw/Makefile
@@ -15,6 +15,7 @@ SRCS-y += sa.c
 SRCS-y += rt.c
 SRCS-y += ipsec_process.c
 SRCS-y += ipsec-secgw.c
+SRCS-y += ipsec_worker.c
 SRCS-y += event_helper.c
 
 CFLAGS += -gdwarf-2
diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 6549875..44f997d 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf,
 	else
 		curr_conf.cap.burst = EH_RX_TYPE_NON_BURST;
 
+	curr_conf.cap.ipsec_mode = conf->ipsec_mode;
+	curr_conf.cap.ipsec_dir = conf->ipsec_dir;
+
 	/* Parse the passed list and see if we have matching capabilities */
 
 	/* Initialize the pointer used to traverse the list */
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index 2895dfa..07849b0 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -74,6 +74,22 @@ enum eh_tx_types {
 	EH_TX_TYPE_NO_INTERNAL_PORT
 };
 
+/**
+ * Event mode ipsec mode types
+ */
+enum eh_ipsec_mode_types {
+	EH_IPSEC_MODE_TYPE_APP = 0,
+	EH_IPSEC_MODE_TYPE_DRIVER
+};
+
+/**
+ * Event mode ipsec direction types
+ */
+enum eh_ipsec_dir_types {
+	EH_IPSEC_DIR_TYPE_OUTBOUND = 0,
+	EH_IPSEC_DIR_TYPE_INBOUND,
+};
+
 /* Event dev params */
 struct eventdev_params {
 	uint8_t eventdev_id;
@@ -183,6 +199,12 @@ struct eh_conf {
 		 */
 	void *mode_params;
 		/**< Mode specific parameters */
+
+		/** Application specific params */
+	enum eh_ipsec_mode_types ipsec_mode;
+		/**< Mode of ipsec run */
+	enum eh_ipsec_dir_types ipsec_dir;
+		/**< Direction of ipsec processing */
 };
 
 /* Workers registered by the application */
@@ -194,6 +216,10 @@ struct eh_app_worker_params {
 			/**< Specify status of rx type burst */
 			uint64_t tx_internal_port : 1;
 			/**< Specify whether tx internal port is available */
+			uint64_t ipsec_mode : 1;
+			/**< Specify ipsec processing level */
+			uint64_t ipsec_dir : 1;
+			/**< Specify direction of ipsec */
 		};
 		uint64_t u64;
 	} cap;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 7506922..c5d95b9 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2,6 +2,7 @@
  * Copyright(c) 2016 Intel Corporation
  */
 
+#include
 #include
 #include
 #include
@@ -14,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -41,12 +43,17 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 
+#include "event_helper.h"
 #include "ipsec.h"
 #include "parser.h"
 
+volatile bool force_quit;
+
 #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1
 
 #define MAX_JUMBO_PKT_LEN  9600
@@ -133,12 +140,21 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
 #define CMD_LINE_OPT_CONFIG		"config"
 #define CMD_LINE_OPT_SINGLE_SA		"single-sa"
 #define CMD_LINE_OPT_CRYPTODEV_MASK	"cryptodev_mask"
+#define CMD_LINE_OPT_TRANSFER_MODE	"transfer-mode"
+#define CMD_LINE_OPT_SCHEDULE_TYPE	"schedule-type"
+#define CMD_LINE_OPT_IPSEC_MODE		"process-mode"
+#define CMD_LINE_OPT_IPSEC_DIR		"process-dir"
 #define CMD_LINE_OPT_RX_OFFLOAD	"rxoffload"
 #define CMD_LINE_OPT_TX_OFFLOAD	"txoffload"
 #define CMD_LINE_OPT_REASSEMBLE	"reassemble"
 #define CMD_LINE_OPT_MTU		"mtu"
 #define CMD_LINE_OPT_FRAG_TTL	"frag-ttl"
+#define CMD_LINE_ARG_APP	"app"
+#define CMD_LINE_ARG_DRV	"drv"
+#define CMD_LINE_ARG_INB	"in"
+#define CMD_LINE_ARG_OUT	"out"
+
 enum {
 	/* long options mapped to a short option */
@@ -149,7 +165,11 @@ enum {
 	CMD_LINE_OPT_CONFIG_NUM,
 	CMD_LINE_OPT_SINGLE_SA_NUM,
 	CMD_LINE_OPT_CRYPTODEV_MASK_NUM,
+	CMD_LINE_OPT_TRANSFER_MODE_NUM,
+	CMD_LINE_OPT_SCHEDULE_TYPE_NUM,
 	CMD_LINE_OPT_RX_OFFLOAD_NUM,
+	CMD_LINE_OPT_IPSEC_MODE_NUM,
+	CMD_LINE_OPT_IPSEC_DIR_NUM,
 	CMD_LINE_OPT_TX_OFFLOAD_NUM,
 	CMD_LINE_OPT_REASSEMBLE_NUM,
 	CMD_LINE_OPT_MTU_NUM,
@@ -160,6 +180,10 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM},
 	{CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM},
 	{CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM},
+	{CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM},
+	{CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM},
+	{CMD_LINE_OPT_IPSEC_MODE, 1, 0, CMD_LINE_OPT_IPSEC_MODE_NUM},
+	{CMD_LINE_OPT_IPSEC_DIR, 1, 0, CMD_LINE_OPT_IPSEC_DIR_NUM},
 	{CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM},
 	{CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM},
 	{CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM},
@@ -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf,
 }
 
 /* main processing loop */
-static int32_t
-main_loop(__attribute__((unused)) void *dummy)
+void
+ipsec_poll_mode_worker(void)
 {
 	struct rte_mbuf *pkts[MAX_PKT_BURST];
 	uint32_t lcore_id;
@@ -1137,7 +1161,7 @@ main_loop(__attribute__((unused)) void *dummy)
 	if (qconf->nb_rx_queue == 0) {
 		RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n",
 			lcore_id);
-		return 0;
+		return;
 	}
 
 	RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id);
@@ -1150,7 +1174,7 @@ main_loop(__attribute__((unused)) void *dummy)
 			lcore_id, portid, queueid);
 	}
 
-	while (1) {
+	while (!force_quit) {
 		cur_tsc = rte_rdtsc();
 
 		/* TX queue buffer drain */
@@ -1277,6 +1301,10 @@ print_usage(const char *prgname)
 		" --config (port,queue,lcore)[,(port,queue,lcore)]"
 		" [--single-sa SAIDX]"
 		" [--cryptodev_mask MASK]"
+		" [--transfer-mode MODE]"
+		" [--schedule-type TYPE]"
+		" [--process-mode MODE]"
+		" [--process-dir DIR]"
 		" [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]"
 		" [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]"
 		" [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]"
@@ -1298,6 +1326,22 @@ print_usage(const char *prgname)
 		"                     bypassing the SP\n"
 		"  --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n"
 		"                         devices to configure\n"
+		"  --transfer-mode MODE\n"
+		"               0: Packet transfer via polling (default)\n"
+		"               1: Packet transfer via eventdev\n"
+		"  --schedule-type TYPE queue schedule type, used only when\n"
+		"                       transfer mode is set to eventdev\n"
+		"               0: Ordered (default)\n"
+		"               1: Atomic\n"
+		"               2: Parallel\n"
+		"  --process-mode MODE processing mode, used only when\n"
+		"                      transfer mode is set to eventdev\n"
+		"               \"app\" : application mode (default)\n"
+		"               \"drv\" : driver mode\n"
+		"  --process-dir DIR processing direction, used only when\n"
+		"                    transfer mode is set to eventdev\n"
+		"               \"out\" : outbound (default)\n"
+		"               \"in\" : inbound\n"
 		"  --" CMD_LINE_OPT_RX_OFFLOAD
 		": bitmask of the RX HW offload capabilities to enable/use\n"
 		"                         (DEV_RX_OFFLOAD_*)\n"
@@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm *prm)
 }
 
 static int32_t
-parse_args(int32_t argc, char **argv)
+eh_parse_decimal(const char *str)
+{
+	unsigned long num;
+	char *end = NULL;
+
+	num = strtoul(str, &end, 10);
+	if ((str[0] == '\0') || (end == NULL) || (*end != '\0'))
+		return -EINVAL;
+
+	return num;
+}
+
+static int
+parse_transfer_mode(struct eh_conf *conf, const char *optarg)
+{
+	int32_t parsed_dec;
+
+	parsed_dec = eh_parse_decimal(optarg);
+	if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL &&
+	    parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) {
+		printf("Unsupported packet transfer mode");
+		return -EINVAL;
+	}
+	conf->mode = parsed_dec;
+	return 0;
+}
+
+static int
+parse_schedule_type(struct eh_conf *conf, const char *optarg)
+{
+	struct eventmode_conf *em_conf = NULL;
+	int32_t parsed_dec;
+
+	parsed_dec = eh_parse_decimal(optarg);
+	if (parsed_dec != RTE_SCHED_TYPE_ORDERED &&
+	    parsed_dec != RTE_SCHED_TYPE_ATOMIC &&
+	    parsed_dec != RTE_SCHED_TYPE_PARALLEL)
+		return -EINVAL;
+
+	/* Get eventmode conf */
+	em_conf = (struct eventmode_conf *)(conf->mode_params);
+
+	em_conf->ext_params.sched_type = parsed_dec;
+
+	return 0;
+}
+
+static int
+parse_ipsec_mode(struct eh_conf *conf, const char *optarg)
+{
+	if (!strncmp(CMD_LINE_ARG_APP, optarg, strlen(CMD_LINE_ARG_APP)) &&
+	    strlen(optarg) == strlen(CMD_LINE_ARG_APP))
+		conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP;
+	else if (!strncmp(CMD_LINE_ARG_DRV, optarg, strlen(CMD_LINE_ARG_DRV)) &&
+		 strlen(optarg) == strlen(CMD_LINE_ARG_DRV))
+		conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER;
+	else {
+		printf("Unsupported ipsec mode\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_ipsec_dir(struct eh_conf *conf, const char *optarg)
+{
+	if (!strncmp(CMD_LINE_ARG_INB, optarg, strlen(CMD_LINE_ARG_INB)) &&
+	    strlen(optarg) == strlen(CMD_LINE_ARG_INB))
+		conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND;
+	else if (!strncmp(CMD_LINE_ARG_OUT, optarg, strlen(CMD_LINE_ARG_OUT)) &&
+		 strlen(optarg) == strlen(CMD_LINE_ARG_OUT))
+		conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND;
+	else {
+		printf("Unsupported ipsec direction\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int32_t
+parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 {
 	int opt;
 	int64_t ret;
@@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv)
 			/* else */
 			enabled_cryptodev_mask = ret;
 			break;
+
+		case CMD_LINE_OPT_TRANSFER_MODE_NUM:
+			ret = parse_transfer_mode(eh_conf, optarg);
+			if (ret < 0) {
+				printf("Invalid packet transfer mode\n");
+				print_usage(prgname);
+				return -1;
+			}
+			break;
+
+		case CMD_LINE_OPT_SCHEDULE_TYPE_NUM:
+			ret = parse_schedule_type(eh_conf, optarg);
+			if (ret < 0) {
+				printf("Invalid queue schedule type\n");
+				print_usage(prgname);
+				return -1;
+			}
+			break;
+
+		case CMD_LINE_OPT_IPSEC_MODE_NUM:
+			ret = parse_ipsec_mode(eh_conf, optarg);
+			if (ret < 0) {
+				printf("Invalid ipsec mode\n");
+				print_usage(prgname);
+				return -1;
+			}
+			break;
+
+		case CMD_LINE_OPT_IPSEC_DIR_NUM:
+			ret = parse_ipsec_dir(eh_conf, optarg);
+			if (ret < 0) {
+				printf("Invalid ipsec direction\n");
+				print_usage(prgname);
+				return -1;
+			}
+			break;
+
 		case CMD_LINE_OPT_RX_OFFLOAD_NUM:
 			ret = parse_mask(optarg, &dev_rx_offload);
 			if (ret != 0) {
@@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
 	return ret;
 }
 
+static struct eh_conf *
+eh_conf_init(void)
+{
+	struct eventmode_conf *em_conf = NULL;
+	struct eh_conf *conf = NULL;
+	unsigned int eth_core_id;
+	uint32_t nb_bytes;
+	void *mem = NULL;
+
+	/* Allocate memory for config */
+	conf = calloc(1, sizeof(struct eh_conf));
+	if (conf == NULL) {
+		printf("Failed to allocate memory for eventmode helper conf");
+		goto err;
+	}
+
+	/* Set default conf */
+
+	/* Packet transfer mode: poll */
+	conf->mode = EH_PKT_TRANSFER_MODE_POLL;
+	conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP;
+	conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND;
+
+	/* Keep all ethernet ports enabled by default */
+	conf->eth_portmask = -1;
+
+	/* Allocate memory for event mode params */
+	conf->mode_params = calloc(1, sizeof(struct eventmode_conf));
+	if (conf->mode_params == NULL) {
+		printf("Failed to allocate memory for event mode params");
+		goto err;
+	}
+
+	/* Get eventmode conf */
+	em_conf = (struct eventmode_conf *)(conf->mode_params);
+
+	/* Allocate and initialize bitmap for eth cores */
+	nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE);
+	if (!nb_bytes) {
+		printf("Failed to get bitmap footprint");
+		goto err;
+	}
+
+	mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes,
+			  RTE_CACHE_LINE_SIZE);
+	if (!mem) {
+		printf("Failed to allocate memory for eth cores bitmap\n");
+		goto err;
+	}
+
+	em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, mem, nb_bytes);
+	if (!em_conf->eth_core_mask) {
+		printf("Failed to initialize bitmap");
+		goto err;
+	}
+
+	/* Schedule type: ordered */
+	em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED;
+
+	/* Set two cores as eth cores for Rx & Tx */
+
+	/* Use first core other than master core as Rx core */
+	eth_core_id = rte_get_next_lcore(0,	/* curr core */
+					 1,	/* skip master core */
+					 0	/* wrap */);
+
+	rte_bitmap_set(em_conf->eth_core_mask, eth_core_id);
+
+	/* Use next core as Tx core */
+	eth_core_id = rte_get_next_lcore(eth_core_id,	/* curr core */
+					 1,		/* skip master core */
+					 0		/* wrap */);
+
+	rte_bitmap_set(em_conf->eth_core_mask, eth_core_id);
+
+	return conf;
+err:
+	rte_free(mem);
+	free(em_conf);
+	free(conf);
+	return NULL;
+}
+
+static void
+eh_conf_uninit(struct eh_conf *conf)
+{
+	struct eventmode_conf *em_conf = NULL;
+
+	/* Get eventmode conf */
+	em_conf = (struct eventmode_conf *)(conf->mode_params);
+
+	/* Free eventmode configuration memory */
+	rte_free(em_conf->eth_core_mask);
+	free(em_conf);
+	free(conf);
+}
+
+static void
+signal_handler(int signum)
+{
+	if (signum == SIGINT || signum == SIGTERM) {
+		uint16_t port_id;
+		printf("\n\nSignal %d received, preparing to exit...\n",
+			signum);
+		force_quit = true;
+
+		/* Destroy the default ipsec flow */
+		RTE_ETH_FOREACH_DEV(port_id) {
+			if ((enabled_port_mask & (1 << port_id)) == 0)
+				continue;
+			if (flow_info_tbl[port_id].rx_def_flow) {
+				struct rte_flow_error err;
+				int ret;
+				ret = rte_flow_destroy(port_id,
+					flow_info_tbl[port_id].rx_def_flow,
+					&err);
+				if (ret)
+					RTE_LOG(ERR, IPSEC,
+						"Failed to destroy flow for port %u, "
+						"err msg: %s\n", port_id,
+						err.message);
+			}
+		}
+	}
+}
+
 int32_t
 main(int32_t argc, char **argv)
 {
@@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv)
 	uint8_t socket_id;
 	uint16_t portid;
 	uint64_t req_rx_offloads, req_tx_offloads;
+	struct eh_conf *eh_conf = NULL;
 	size_t sess_sz;
 
 	/* init EAL */
@@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv)
 	argc -= ret;
 	argv += ret;
 
+	force_quit = false;
+	signal(SIGINT, signal_handler);
+	signal(SIGTERM, signal_handler);
+
+	/* initialize event helper configuration */
+	eh_conf = eh_conf_init();
+	if (eh_conf == NULL)
+		rte_exit(EXIT_FAILURE, "Failed to init event helper config");
+
 	/* parse application arguments (after the EAL ones) */
-	ret = parse_args(argc, argv);
+	ret = parse_args(argc, argv, eh_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid parameters\n");
@@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv)
 
 	check_all_ports_link_status(enabled_port_mask);
 
+	/*
+	 * Set the enabled port mask in helper config for use by helper
+	 * sub-system. This will be used while initializing devices using
+	 * helper sub-system.
+	 */
+	eh_conf->eth_portmask = enabled_port_mask;
+
+	/* Initialize eventmode components */
+	ret = eh_devs_init(eh_conf);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret);
+
 	/* launch per-lcore init on every lcore */
-	rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER);
+	rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER);
+
 	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
 		if (rte_eal_wait_lcore(lcore_id) < 0)
 			return -1;
 	}
 
+	/* Uninitialize eventmode components */
+	ret = eh_devs_uninit(eh_conf);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret);
+
+	/* Free eventmode configuration memory */
+	eh_conf_uninit(eh_conf);
+
+	RTE_ETH_FOREACH_DEV(portid) {
+		if ((enabled_port_mask & (1 << portid)) == 0)
+			continue;
+		printf("Closing port %d...", portid);
+		rte_eth_dev_stop(portid);
+		rte_eth_dev_close(portid);
+		printf(" Done\n");
+	}
+	printf("Bye...\n");
+
 	return 0;
 }
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 28ff07d..0b9fc04 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -247,6 +247,13 @@ struct ipsec_traffic {
 	struct traffic_type ip6;
 };
 
+
+void
+ipsec_poll_mode_worker(void);
+
+int
+ipsec_launch_one_lcore(void *args);
+
 uint16_t
 ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[],
 		uint16_t nb_pkts, uint16_t len);
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
new file mode 100644
index 0000000..87c657b
--- /dev/null
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -0,0 +1,180 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2016 Intel Corporation
+ * Copyright (C) 2019 Marvell International Ltd.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ipsec.h"
+#include "event_helper.h"
+
+extern volatile bool force_quit;
+
+static inline void
+ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id)
+{
+	/* Save the destination port in the mbuf */
+	m->port = port_id;
+
+	/* Save eth queue for Tx */
+	rte_event_eth_tx_adapter_txq_set(m, 0);
+}
+
+/*
+ * Event mode exposes various operating modes depending on the
+ * capabilities of the event device and the operating mode
+ * selected.
+ */
+
+/* Workers registered */
+#define IPSEC_EVENTMODE_WORKERS		1
+
+/*
+ * Event mode worker
+ * Operating parameters : non-burst - Tx internal port - driver mode - inbound
+ */
+static void
+ipsec_wrkr_non_burst_int_port_drvr_mode_inb(struct eh_event_link_info *links,
+		uint8_t nb_links)
+{
+	unsigned int nb_rx = 0;
+	struct rte_mbuf *pkt;
+	unsigned int port_id;
+	struct rte_event ev;
+	uint32_t lcore_id;
+
+	/* Check if we have links registered for this lcore */
+	if (nb_links == 0) {
+		/* No links registered - exit */
+		goto exit;
+	}
+
+	/* Get core ID */
+	lcore_id = rte_lcore_id();
+
+	RTE_LOG(INFO, IPSEC,
+		"Launching event mode worker (non-burst - Tx internal port - "
+		"driver mode - inbound) on lcore %d\n", lcore_id);
+
+	/* We have valid links */
+
+	/* Check if it's single link */
+	if (nb_links != 1) {
+		RTE_LOG(INFO, IPSEC,
+			"Multiple links not supported. Using first link\n");
+	}
+
+	RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id,
+		links[0].event_port_id);
+	while (!force_quit) {
+		/* Read packet from event queues */
+		nb_rx = rte_event_dequeue_burst(links[0].eventdev_id,
+				links[0].event_port_id,
+				&ev,	/* events */
+				1,	/* nb_events */
+				0	/* timeout_ticks */);
+
+		if (nb_rx == 0)
+			continue;
+
+		port_id = ev.queue_id;
+		pkt = ev.mbuf;
+
+		rte_prefetch0(rte_pktmbuf_mtod(pkt, void *));
+
+		/* Process packet */
+		ipsec_event_pre_forward(pkt, port_id);
+
+		/*
+		 * Since tx internal port is available, events can be
+		 * directly enqueued to the adapter and it would be
+		 * internally submitted to the eth device.
+		 */
+		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+				links[0].event_port_id,
+				&ev,	/* events */
+				1,	/* nb_events */
+				0	/* flags */);
+	}
+
+exit:
+	return;
+}
+
+static uint8_t
+ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs)
+{
+	struct eh_app_worker_params *wrkr;
+	uint8_t nb_wrkr_param = 0;
+
+	/* Save workers */
+	wrkr = wrkrs;
+
+	/* Non-burst - Tx internal port - driver mode - inbound */
+	wrkr->cap.burst = EH_RX_TYPE_NON_BURST;
+	wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT;
+	wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER;
+	wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND;
+	wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drvr_mode_inb;
+
+	nb_wrkr_param++;
+	return nb_wrkr_param;
+}
+
+static void
+ipsec_eventmode_worker(struct eh_conf *conf)
+{
+	struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = {
+					{{{0} }, NULL } };
+	uint8_t nb_wrkr_param;
+
+	/* Populate ipsec_wrkr params */
+	nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr);
+
+	/*
+	 * Launch correct worker after checking
+	 * the event device's capabilities.
+	 */
+	eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param);
+}
+
+int ipsec_launch_one_lcore(void *args)
+{
+	struct eh_conf *conf;
+
+	conf = (struct eh_conf *)args;
+
+	if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) {
+		/* Run in poll mode */
+		ipsec_poll_mode_worker();
+	} else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) {
+		/* Run in event mode */
+		ipsec_eventmode_worker(conf);
+	}
+	return 0;
+}
diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build
index 20f4064..ab40ca5 100644
--- a/examples/ipsec-secgw/meson.build
+++ b/examples/ipsec-secgw/meson.build
@@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev']
 allow_experimental_apis = true
 sources = files(
 	'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c',
-	'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c'
+	'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c', 'ipsec_worker.c'
 )
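
For comparison with the event mode example command in the commit message,
the same ports and SA configuration can still be exercised in the legacy
poll mode, which remains the default. A hypothetical invocation (assuming
the same devices and config file, and selecting poll mode explicitly with
--transfer-mode 0 as documented in the usage text above):

./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 0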