From patchwork Fri Jul 6 06:43:05 2018
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 42454
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro
To: dev@dpdk.org, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, Stephen Hemminger
Cc: Adrien Mazarguil, Mohammad Abdul Awal, Ori Kam
Date: Fri, 6 Jul 2018 08:43:05 +0200
Message-Id: <9a3766b57ac9361f9724f5a1ca189635291c6eec.1530804001.git.nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH v9 1/2] app/testpmd: add VXLAN encap/decap support
List-Id: DPDK patches and discussions

The VXLAN_ENCAP flow action is complex, and testpmd does not allocate memory; this patch therefore adds a new testpmd command that initialises a global structure holding the information needed to build the outer layers of the packet. The flow command line then reads this global structure when the vxlan_encap action is parsed, at which point the conversion into such an action becomes trivial. This global structure is only used for the encap action.
Signed-off-by: Nelio Laranjeiro Acked-by: Ori Kam Acked-by: Adrien Mazarguil Acked-by: Mohammad Abdul Awal Tested-by: Mohammad Abdul Awal --- app/test-pmd/cmdline.c | 185 ++++++++++++++++++++ app/test-pmd/cmdline_flow.c | 142 +++++++++++++++ app/test-pmd/testpmd.c | 17 ++ app/test-pmd/testpmd.h | 17 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 55 ++++++ 5 files changed, 416 insertions(+) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 27e2aa8c8..56bdb023c 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -781,6 +781,17 @@ static void cmd_help_long_parsed(void *parsed_result, "port tm hierarchy commit (port_id) (clean_on_fail)\n" " Commit tm hierarchy.\n\n" + "vxlan ip-version (ipv4|ipv6) vni (vni) udp-src" + " (udp-src) udp-dst (udp-dst) ip-src (ip-src) ip-dst" + " (ip-dst) eth-src (eth-src) eth-dst (eth-dst)\n" + " Configure the VXLAN encapsulation for flows.\n\n" + + "vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni)" + " udp-src (udp-src) udp-dst (udp-dst) ip-src (ip-src)" + " ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src)" + " eth-dst (eth-dst)\n" + " Configure the VXLAN encapsulation for flows.\n\n" + , list_pkt_forwarding_modes() ); } @@ -14838,6 +14849,178 @@ cmdline_parse_inst_t cmd_set_port_tm_hierarchy_default = { }; #endif +/** Set VXLAN encapsulation details */ +struct cmd_set_vxlan_result { + cmdline_fixed_string_t set; + cmdline_fixed_string_t vxlan; + cmdline_fixed_string_t pos_token; + cmdline_fixed_string_t ip_version; + uint32_t vlan_present:1; + uint32_t vni; + uint16_t udp_src; + uint16_t udp_dst; + cmdline_ipaddr_t ip_src; + cmdline_ipaddr_t ip_dst; + uint16_t tci; + struct ether_addr eth_src; + struct ether_addr eth_dst; +}; + +cmdline_parse_token_string_t cmd_set_vxlan_set = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, set, "set"); +cmdline_parse_token_string_t cmd_set_vxlan_vxlan = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, "vxlan"); +cmdline_parse_token_string_t 
cmd_set_vxlan_vxlan_with_vlan = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, vxlan, + "vxlan-with-vlan"); +cmdline_parse_token_string_t cmd_set_vxlan_ip_version = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "ip-version"); +cmdline_parse_token_string_t cmd_set_vxlan_ip_version_value = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, ip_version, + "ipv4#ipv6"); +cmdline_parse_token_string_t cmd_set_vxlan_vni = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "vni"); +cmdline_parse_token_num_t cmd_set_vxlan_vni_value = + TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32); +cmdline_parse_token_string_t cmd_set_vxlan_udp_src = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "udp-src"); +cmdline_parse_token_num_t cmd_set_vxlan_udp_src_value = + TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16); +cmdline_parse_token_string_t cmd_set_vxlan_udp_dst = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "udp-dst"); +cmdline_parse_token_num_t cmd_set_vxlan_udp_dst_value = + TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16); +cmdline_parse_token_string_t cmd_set_vxlan_ip_src = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "ip-src"); +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_src_value = + TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_src); +cmdline_parse_token_string_t cmd_set_vxlan_ip_dst = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "ip-dst"); +cmdline_parse_token_ipaddr_t cmd_set_vxlan_ip_dst_value = + TOKEN_IPADDR_INITIALIZER(struct cmd_set_vxlan_result, ip_dst); +cmdline_parse_token_string_t cmd_set_vxlan_vlan = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "vlan-tci"); +cmdline_parse_token_num_t cmd_set_vxlan_vlan_value = + TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16); +cmdline_parse_token_string_t cmd_set_vxlan_eth_src = 
+ TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "eth-src"); +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_src_value = + TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_src); +cmdline_parse_token_string_t cmd_set_vxlan_eth_dst = + TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token, + "eth-dst"); +cmdline_parse_token_etheraddr_t cmd_set_vxlan_eth_dst_value = + TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vxlan_result, eth_dst); + +static void cmd_set_vxlan_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_set_vxlan_result *res = parsed_result; + union { + uint32_t vxlan_id; + uint8_t vni[4]; + } id = { + .vxlan_id = rte_cpu_to_be_32(res->vni) & RTE_BE32(0x00ffffff), + }; + + if (strcmp(res->vxlan, "vxlan") == 0) + vxlan_encap_conf.select_vlan = 0; + else if (strcmp(res->vxlan, "vxlan-with-vlan") == 0) + vxlan_encap_conf.select_vlan = 1; + if (strcmp(res->ip_version, "ipv4") == 0) + vxlan_encap_conf.select_ipv4 = 1; + else if (strcmp(res->ip_version, "ipv6") == 0) + vxlan_encap_conf.select_ipv4 = 0; + else + return; + rte_memcpy(vxlan_encap_conf.vni, &id.vni[1], 3); + vxlan_encap_conf.udp_src = rte_cpu_to_be_16(res->udp_src); + vxlan_encap_conf.udp_dst = rte_cpu_to_be_16(res->udp_dst); + if (vxlan_encap_conf.select_ipv4) { + IPV4_ADDR_TO_UINT(res->ip_src, vxlan_encap_conf.ipv4_src); + IPV4_ADDR_TO_UINT(res->ip_dst, vxlan_encap_conf.ipv4_dst); + } else { + IPV6_ADDR_TO_ARRAY(res->ip_src, vxlan_encap_conf.ipv6_src); + IPV6_ADDR_TO_ARRAY(res->ip_dst, vxlan_encap_conf.ipv6_dst); + } + if (vxlan_encap_conf.select_vlan) + vxlan_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci); + rte_memcpy(vxlan_encap_conf.eth_src, res->eth_src.addr_bytes, + ETHER_ADDR_LEN); + rte_memcpy(vxlan_encap_conf.eth_dst, res->eth_dst.addr_bytes, + ETHER_ADDR_LEN); +} + +cmdline_parse_inst_t cmd_set_vxlan = { + .f = cmd_set_vxlan_parsed, + .data = NULL, + .help_str = "set 
vxlan ip-version ipv4|ipv6 vni <vni> udp-src" + " <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst <ip-dst>" + " eth-src <eth-src> eth-dst <eth-dst>", + .tokens = { + (void *)&cmd_set_vxlan_set, + (void *)&cmd_set_vxlan_vxlan, + (void *)&cmd_set_vxlan_ip_version, + (void *)&cmd_set_vxlan_ip_version_value, + (void *)&cmd_set_vxlan_vni, + (void *)&cmd_set_vxlan_vni_value, + (void *)&cmd_set_vxlan_udp_src, + (void *)&cmd_set_vxlan_udp_src_value, + (void *)&cmd_set_vxlan_udp_dst, + (void *)&cmd_set_vxlan_udp_dst_value, + (void *)&cmd_set_vxlan_ip_src, + (void *)&cmd_set_vxlan_ip_src_value, + (void *)&cmd_set_vxlan_ip_dst, + (void *)&cmd_set_vxlan_ip_dst_value, + (void *)&cmd_set_vxlan_eth_src, + (void *)&cmd_set_vxlan_eth_src_value, + (void *)&cmd_set_vxlan_eth_dst, + (void *)&cmd_set_vxlan_eth_dst_value, + NULL, + }, +}; + +cmdline_parse_inst_t cmd_set_vxlan_with_vlan = { + .f = cmd_set_vxlan_parsed, + .data = NULL, + .help_str = "set vxlan-with-vlan ip-version ipv4|ipv6 vni <vni>" + " udp-src <udp-src> udp-dst <udp-dst> ip-src <ip-src> ip-dst <ip-dst>" + " vlan-tci <vlan-tci> eth-src <eth-src> eth-dst <eth-dst>", + .tokens = { + (void *)&cmd_set_vxlan_set, + (void *)&cmd_set_vxlan_vxlan_with_vlan, + (void *)&cmd_set_vxlan_ip_version, + (void *)&cmd_set_vxlan_ip_version_value, + (void *)&cmd_set_vxlan_vni, + (void *)&cmd_set_vxlan_vni_value, + (void *)&cmd_set_vxlan_udp_src, + (void *)&cmd_set_vxlan_udp_src_value, + (void *)&cmd_set_vxlan_udp_dst, + (void *)&cmd_set_vxlan_udp_dst_value, + (void *)&cmd_set_vxlan_ip_src, + (void *)&cmd_set_vxlan_ip_src_value, + (void *)&cmd_set_vxlan_ip_dst, + (void *)&cmd_set_vxlan_ip_dst_value, + (void *)&cmd_set_vxlan_vlan, + (void *)&cmd_set_vxlan_vlan_value, + (void *)&cmd_set_vxlan_eth_src, + (void *)&cmd_set_vxlan_eth_src_value, + (void *)&cmd_set_vxlan_eth_dst, + (void *)&cmd_set_vxlan_eth_dst_value, + NULL, + }, +}; + /* Strict link priority scheduling mode setting */ static void cmd_strict_link_prio_parsed( @@ -17462,6 +17645,8 @@ cmdline_parse_ctx_t main_ctx[] = { #if defined RTE_LIBRTE_PMD_SOFTNIC && defined RTE_LIBRTE_SCHED (cmdline_parse_inst_t
*)&cmd_set_port_tm_hierarchy_default, #endif + (cmdline_parse_inst_t *)&cmd_set_vxlan, + (cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan, (cmdline_parse_inst_t *)&cmd_ddp_add, (cmdline_parse_inst_t *)&cmd_ddp_del, (cmdline_parse_inst_t *)&cmd_ddp_get_list, diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 934cf7e90..a99fd0048 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -239,6 +239,8 @@ enum index { ACTION_OF_POP_MPLS_ETHERTYPE, ACTION_OF_PUSH_MPLS, ACTION_OF_PUSH_MPLS_ETHERTYPE, + ACTION_VXLAN_ENCAP, + ACTION_VXLAN_DECAP, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -258,6 +260,23 @@ struct action_rss_data { uint16_t queue[ACTION_RSS_QUEUE_NUM]; }; +/** Maximum number of items in struct rte_flow_action_vxlan_encap. */ +#define ACTION_VXLAN_ENCAP_ITEMS_NUM 6 + +/** Storage for struct rte_flow_action_vxlan_encap including external data. */ +struct action_vxlan_encap_data { + struct rte_flow_action_vxlan_encap conf; + struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM]; + struct rte_flow_item_eth item_eth; + struct rte_flow_item_vlan item_vlan; + union { + struct rte_flow_item_ipv4 item_ipv4; + struct rte_flow_item_ipv6 item_ipv6; + }; + struct rte_flow_item_udp item_udp; + struct rte_flow_item_vxlan item_vxlan; +}; + /** Maximum number of subsequent tokens and arguments on the stack. 
*/ #define CTX_STACK_SIZE 16 @@ -775,6 +794,8 @@ static const enum index next_action[] = { ACTION_OF_SET_VLAN_PCP, ACTION_OF_POP_MPLS, ACTION_OF_PUSH_MPLS, + ACTION_VXLAN_ENCAP, + ACTION_VXLAN_DECAP, ZERO, }; @@ -905,6 +926,9 @@ static int parse_vc_action_rss_type(struct context *, const struct token *, static int parse_vc_action_rss_queue(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_action_vxlan_encap(struct context *, const struct token *, + const char *, unsigned int, void *, + unsigned int); static int parse_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2387,6 +2411,24 @@ static const struct token token_list[] = { ethertype)), .call = parse_vc_conf, }, + [ACTION_VXLAN_ENCAP] = { + .name = "vxlan_encap", + .help = "VXLAN encapsulation, uses configuration set by \"set" + " vxlan\"", + .priv = PRIV_ACTION(VXLAN_ENCAP, + sizeof(struct action_vxlan_encap_data)), + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_vxlan_encap, + }, + [ACTION_VXLAN_DECAP] = { + .name = "vxlan_decap", + .help = "Performs a decapsulation action by stripping all" + " headers of the VXLAN tunnel network overlay from the" + " matched flow.", + .priv = PRIV_ACTION(VXLAN_DECAP, 0), + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc, + }, }; /** Remove and return last entry from argument stack. */ @@ -2951,6 +2993,106 @@ parse_vc_action_rss_queue(struct context *ctx, const struct token *token, return len; } +/** Parse VXLAN encap action. 
*/ +static int +parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_vxlan_encap_data *action_vxlan_encap_data; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Set up default configuration. */ + action_vxlan_encap_data = ctx->object; + *action_vxlan_encap_data = (struct action_vxlan_encap_data){ + .conf = (struct rte_flow_action_vxlan_encap){ + .definition = action_vxlan_encap_data->items, + }, + .items = { + { + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = &action_vxlan_encap_data->item_eth, + .mask = &rte_flow_item_eth_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_VLAN, + .spec = &action_vxlan_encap_data->item_vlan, + .mask = &rte_flow_item_vlan_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_IPV4, + .spec = &action_vxlan_encap_data->item_ipv4, + .mask = &rte_flow_item_ipv4_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_UDP, + .spec = &action_vxlan_encap_data->item_udp, + .mask = &rte_flow_item_udp_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_VXLAN, + .spec = &action_vxlan_encap_data->item_vxlan, + .mask = &rte_flow_item_vxlan_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }, + .item_eth.type = 0, + .item_vlan = { + .tci = vxlan_encap_conf.vlan_tci, + .inner_type = 0, + }, + .item_ipv4.hdr = { + .src_addr = vxlan_encap_conf.ipv4_src, + .dst_addr = vxlan_encap_conf.ipv4_dst, + }, + .item_udp.hdr = { + .src_port = vxlan_encap_conf.udp_src, + .dst_port = vxlan_encap_conf.udp_dst, + }, + .item_vxlan.flags = 0, + }; + memcpy(action_vxlan_encap_data->item_eth.dst.addr_bytes, + 
vxlan_encap_conf.eth_dst, ETHER_ADDR_LEN); + memcpy(action_vxlan_encap_data->item_eth.src.addr_bytes, + vxlan_encap_conf.eth_src, ETHER_ADDR_LEN); + if (!vxlan_encap_conf.select_ipv4) { + memcpy(&action_vxlan_encap_data->item_ipv6.hdr.src_addr, + &vxlan_encap_conf.ipv6_src, + sizeof(vxlan_encap_conf.ipv6_src)); + memcpy(&action_vxlan_encap_data->item_ipv6.hdr.dst_addr, + &vxlan_encap_conf.ipv6_dst, + sizeof(vxlan_encap_conf.ipv6_dst)); + action_vxlan_encap_data->items[2] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_IPV6, + .spec = &action_vxlan_encap_data->item_ipv6, + .mask = &rte_flow_item_ipv6_mask, + }; + } + if (!vxlan_encap_conf.select_vlan) + action_vxlan_encap_data->items[1].type = + RTE_FLOW_ITEM_TYPE_VOID; + memcpy(action_vxlan_encap_data->item_vxlan.vni, vxlan_encap_conf.vni, + RTE_DIM(vxlan_encap_conf.vni)); + action->conf = &action_vxlan_encap_data->conf; + return ret; +} + /** Parse tokens for destroy command. */ static int parse_destroy(struct context *ctx, const struct token *token, diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index dde7d43e3..bf39ac3ff 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -392,6 +392,23 @@ uint8_t bitrate_enabled; struct gro_status gro_ports[RTE_MAX_ETHPORTS]; uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES; +struct vxlan_encap_conf vxlan_encap_conf = { + .select_ipv4 = 1, + .select_vlan = 0, + .vni = "\x00\x00\x00", + .udp_src = 0, + .udp_dst = RTE_BE16(4789), + .ipv4_src = IPv4(127, 0, 0, 1), + .ipv4_dst = IPv4(255, 255, 255, 255), + .ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x01", + .ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x11\x11", + .vlan_tci = 0, + .eth_src = "\x00\x00\x00\x00\x00\x00", + .eth_dst = "\xff\xff\xff\xff\xff\xff", +}; + /* Forward function declarations */ static void map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port); diff --git a/app/test-pmd/testpmd.h 
b/app/test-pmd/testpmd.h index f51cd9dd9..0d6618788 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -479,6 +479,23 @@ struct gso_status { extern struct gso_status gso_ports[RTE_MAX_ETHPORTS]; extern uint16_t gso_max_segment_size; +/* VXLAN encap/decap parameters. */ +struct vxlan_encap_conf { + uint32_t select_ipv4:1; + uint32_t select_vlan:1; + uint8_t vni[3]; + rte_be16_t udp_src; + rte_be16_t udp_dst; + rte_be32_t ipv4_src; + rte_be32_t ipv4_dst; + uint8_t ipv6_src[16]; + uint8_t ipv6_dst[16]; + rte_be16_t vlan_tci; + uint8_t eth_src[ETHER_ADDR_LEN]; + uint8_t eth_dst[ETHER_ADDR_LEN]; +}; +struct vxlan_encap_conf vxlan_encap_conf; + static inline unsigned int lcore_num(void) { diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 0d6fd50ca..3281778d9 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -1534,6 +1534,23 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue:: This command should be run when the port is stopped, or else it will fail. +Config VXLAN Encap outer layers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the outer layer to encapsulate a packet inside a VXLAN tunnel:: + + set vxlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \ + udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) eth-src (eth-src) \ + eth-dst (eth-dst) + + set vxlan-with-vlan ip-version (ipv4|ipv6) vni (vni) udp-src (udp-src) \ + udp-dst (udp-dst) ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci) \ + eth-src (eth-src) eth-dst (eth-dst) + +These commands set an internal configuration inside testpmd; any following flow rule using the vxlan_encap action will use the most recently set configuration. To use a different encapsulation header, one of these commands must be issued before the flow rule is created. Port Functions -------------- @@ -3650,6 +3667,12 @@ This section lists supported actions and their attributes, if any.
- ``ethertype``: Ethertype. +- ``vxlan_encap``: Performs a VXLAN encapsulation, outer layer configuration + is done through `Config VXLAN Encap outer layers`_. + +- ``vxlan_decap``: Performs a decapsulation action by stripping all headers of + the VXLAN tunnel network overlay from the matched flow. + Destroying flow rules ~~~~~~~~~~~~~~~~~~~~~ @@ -3915,6 +3938,38 @@ Validate and create a QinQ rule on port 0 to steer traffic to a queue on the hos 0 0 0 i- ETH VLAN VLAN=>VF QUEUE 1 0 0 i- ETH VLAN VLAN=>PF QUEUE +Sample VXLAN encapsulation rule +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The VXLAN encapsulation outer layers have default values pre-configured in the +testpmd source code; they can be changed with the following commands. + +IPv4 VXLAN outer header:: + + testpmd> set vxlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src 127.0.0.1 + ip-dst 128.0.0.1 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions vxlan_encap / + queue index 0 / end + + testpmd> set vxlan-with-vlan ip-version ipv4 vni 4 udp-src 4 udp-dst 4 ip-src + 127.0.0.1 ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11 + eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions vxlan_encap / + queue index 0 / end + +IPv6 VXLAN outer header:: + + testpmd> set vxlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 ip-src ::1 + ip-dst ::2222 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions vxlan_encap / + queue index 0 / end + + testpmd> set vxlan-with-vlan ip-version ipv6 vni 4 udp-src 4 udp-dst 4 + ip-src ::1 ip-dst ::2222 vlan-tci 34 eth-src 11:11:11:11:11:11 + eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions vxlan_encap / + queue index 0 / end + BPF Functions -------------- From patchwork Fri Jul 6 06:43:06 2018 X-Patchwork-Submitter:
Nélio Laranjeiro
X-Patchwork-Id: 42455
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro
To: dev@dpdk.org, Wenzhuo Lu, Jingjing Wu, Bernard Iremonger, Stephen Hemminger
Cc: Adrien Mazarguil, Mohammad Abdul Awal, Ori Kam
Date: Fri, 6 Jul 2018 08:43:06 +0200
Message-Id: <947ebf03b07e38e9372e93e0f49de41b55ee94b1.1530804001.git.nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH v9 2/2] app/testpmd: add NVGRE encap/decap support
List-Id: DPDK patches and discussions

The NVGRE_ENCAP flow action is complex, and testpmd does not allocate memory; this patch therefore adds a new testpmd command that initialises a global structure holding the information needed to build the outer layers of the packet. The flow command line then reads this global structure when the nvgre_encap action is parsed, at which point the conversion into such an action becomes trivial. This global structure is only used for the encap action.
Signed-off-by: Nelio Laranjeiro Acked-by: Ori Kam Acked-by: Adrien Mazarguil Acked-by: Mohammad Abdul Awal Tested-by: Mohammad Abdul Awal --- app/test-pmd/cmdline.c | 160 ++++++++++++++++++++ app/test-pmd/cmdline_flow.c | 132 ++++++++++++++++ app/test-pmd/testpmd.c | 15 ++ app/test-pmd/testpmd.h | 15 ++ doc/guides/testpmd_app_ug/testpmd_funcs.rst | 52 +++++++ 5 files changed, 374 insertions(+) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 56bdb023c..c3af6f956 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -792,6 +792,16 @@ static void cmd_help_long_parsed(void *parsed_result, " eth-dst (eth-dst)\n" " Configure the VXLAN encapsulation for flows.\n\n" + "nvgre ip-version (ipv4|ipv6) tni (tni) ip-src" + " (ip-src) ip-dst (ip-dst) eth-src (eth-src) eth-dst" + " (eth-dst)\n" + " Configure the NVGRE encapsulation for flows.\n\n" + + "nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni)" + " ip-src (ip-src) ip-dst (ip-dst) vlan-tci (vlan-tci)" + " eth-src (eth-src) eth-dst (eth-dst)\n" + " Configure the NVGRE encapsulation for flows.\n\n" + , list_pkt_forwarding_modes() ); } @@ -15021,6 +15031,154 @@ cmdline_parse_inst_t cmd_set_vxlan_with_vlan = { }, }; +/** Set NVGRE encapsulation details */ +struct cmd_set_nvgre_result { + cmdline_fixed_string_t set; + cmdline_fixed_string_t nvgre; + cmdline_fixed_string_t pos_token; + cmdline_fixed_string_t ip_version; + uint32_t tni; + cmdline_ipaddr_t ip_src; + cmdline_ipaddr_t ip_dst; + uint16_t tci; + struct ether_addr eth_src; + struct ether_addr eth_dst; +}; + +cmdline_parse_token_string_t cmd_set_nvgre_set = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, set, "set"); +cmdline_parse_token_string_t cmd_set_nvgre_nvgre = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, "nvgre"); +cmdline_parse_token_string_t cmd_set_nvgre_nvgre_with_vlan = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, nvgre, + "nvgre-with-vlan"); +cmdline_parse_token_string_t 
cmd_set_nvgre_ip_version = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token, + "ip-version"); +cmdline_parse_token_string_t cmd_set_nvgre_ip_version_value = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, ip_version, + "ipv4#ipv6"); +cmdline_parse_token_string_t cmd_set_nvgre_tni = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token, + "tni"); +cmdline_parse_token_num_t cmd_set_nvgre_tni_value = + TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32); +cmdline_parse_token_string_t cmd_set_nvgre_ip_src = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token, + "ip-src"); +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_src_value = + TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_src); +cmdline_parse_token_string_t cmd_set_nvgre_ip_dst = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token, + "ip-dst"); +cmdline_parse_token_ipaddr_t cmd_set_nvgre_ip_dst_value = + TOKEN_IPADDR_INITIALIZER(struct cmd_set_nvgre_result, ip_dst); +cmdline_parse_token_string_t cmd_set_nvgre_vlan = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token, + "vlan-tci"); +cmdline_parse_token_num_t cmd_set_nvgre_vlan_value = + TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16); +cmdline_parse_token_string_t cmd_set_nvgre_eth_src = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token, + "eth-src"); +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_src_value = + TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_src); +cmdline_parse_token_string_t cmd_set_nvgre_eth_dst = + TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token, + "eth-dst"); +cmdline_parse_token_etheraddr_t cmd_set_nvgre_eth_dst_value = + TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_nvgre_result, eth_dst); + +static void cmd_set_nvgre_parsed(void *parsed_result, + __attribute__((unused)) struct cmdline *cl, + __attribute__((unused)) void *data) +{ + struct cmd_set_nvgre_result *res = parsed_result;
+ union { + uint32_t nvgre_tni; + uint8_t tni[4]; + } id = { + .nvgre_tni = rte_cpu_to_be_32(res->tni) & RTE_BE32(0x00ffffff), + }; + + if (strcmp(res->nvgre, "nvgre") == 0) + nvgre_encap_conf.select_vlan = 0; + else if (strcmp(res->nvgre, "nvgre-with-vlan") == 0) + nvgre_encap_conf.select_vlan = 1; + if (strcmp(res->ip_version, "ipv4") == 0) + nvgre_encap_conf.select_ipv4 = 1; + else if (strcmp(res->ip_version, "ipv6") == 0) + nvgre_encap_conf.select_ipv4 = 0; + else + return; + rte_memcpy(nvgre_encap_conf.tni, &id.tni[1], 3); + if (nvgre_encap_conf.select_ipv4) { + IPV4_ADDR_TO_UINT(res->ip_src, nvgre_encap_conf.ipv4_src); + IPV4_ADDR_TO_UINT(res->ip_dst, nvgre_encap_conf.ipv4_dst); + } else { + IPV6_ADDR_TO_ARRAY(res->ip_src, nvgre_encap_conf.ipv6_src); + IPV6_ADDR_TO_ARRAY(res->ip_dst, nvgre_encap_conf.ipv6_dst); + } + if (nvgre_encap_conf.select_vlan) + nvgre_encap_conf.vlan_tci = rte_cpu_to_be_16(res->tci); + rte_memcpy(nvgre_encap_conf.eth_src, res->eth_src.addr_bytes, + ETHER_ADDR_LEN); + rte_memcpy(nvgre_encap_conf.eth_dst, res->eth_dst.addr_bytes, + ETHER_ADDR_LEN); +} + +cmdline_parse_inst_t cmd_set_nvgre = { + .f = cmd_set_nvgre_parsed, + .data = NULL, + .help_str = "set nvgre ip-version <ipv4|ipv6> tni <tni> ip-src" + " <ip-src> ip-dst <ip-dst> eth-src <eth-src>" + " eth-dst <eth-dst>", + .tokens = { + (void *)&cmd_set_nvgre_set, + (void *)&cmd_set_nvgre_nvgre, + (void *)&cmd_set_nvgre_ip_version, + (void *)&cmd_set_nvgre_ip_version_value, + (void *)&cmd_set_nvgre_tni, + (void *)&cmd_set_nvgre_tni_value, + (void *)&cmd_set_nvgre_ip_src, + (void *)&cmd_set_nvgre_ip_src_value, + (void *)&cmd_set_nvgre_ip_dst, + (void *)&cmd_set_nvgre_ip_dst_value, + (void *)&cmd_set_nvgre_eth_src, + (void *)&cmd_set_nvgre_eth_src_value, + (void *)&cmd_set_nvgre_eth_dst, + (void *)&cmd_set_nvgre_eth_dst_value, + NULL, + }, +}; + +cmdline_parse_inst_t cmd_set_nvgre_with_vlan = { + .f = cmd_set_nvgre_parsed, + .data = NULL, + .help_str = "set nvgre-with-vlan ip-version <ipv4|ipv6> tni <tni>" + " ip-src <ip-src> ip-dst <ip-dst> vlan-tci <vlan-tci>" + " eth-src <eth-src> eth-dst <eth-dst>",
+ .tokens = { + (void *)&cmd_set_nvgre_set, + (void *)&cmd_set_nvgre_nvgre_with_vlan, + (void *)&cmd_set_nvgre_ip_version, + (void *)&cmd_set_nvgre_ip_version_value, + (void *)&cmd_set_nvgre_tni, + (void *)&cmd_set_nvgre_tni_value, + (void *)&cmd_set_nvgre_ip_src, + (void *)&cmd_set_nvgre_ip_src_value, + (void *)&cmd_set_nvgre_ip_dst, + (void *)&cmd_set_nvgre_ip_dst_value, + (void *)&cmd_set_nvgre_vlan, + (void *)&cmd_set_nvgre_vlan_value, + (void *)&cmd_set_nvgre_eth_src, + (void *)&cmd_set_nvgre_eth_src_value, + (void *)&cmd_set_nvgre_eth_dst, + (void *)&cmd_set_nvgre_eth_dst_value, + NULL, + }, +}; + /* Strict link priority scheduling mode setting */ static void cmd_strict_link_prio_parsed( @@ -17647,6 +17805,8 @@ cmdline_parse_ctx_t main_ctx[] = { #endif (cmdline_parse_inst_t *)&cmd_set_vxlan, (cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan, + (cmdline_parse_inst_t *)&cmd_set_nvgre, + (cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan, (cmdline_parse_inst_t *)&cmd_ddp_add, (cmdline_parse_inst_t *)&cmd_ddp_del, (cmdline_parse_inst_t *)&cmd_ddp_get_list, diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index a99fd0048..f9260600e 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -241,6 +241,8 @@ enum index { ACTION_OF_PUSH_MPLS_ETHERTYPE, ACTION_VXLAN_ENCAP, ACTION_VXLAN_DECAP, + ACTION_NVGRE_ENCAP, + ACTION_NVGRE_DECAP, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -277,6 +279,22 @@ struct action_vxlan_encap_data { struct rte_flow_item_vxlan item_vxlan; }; +/** Maximum number of items in struct rte_flow_action_nvgre_encap. */ +#define ACTION_NVGRE_ENCAP_ITEMS_NUM 5 + +/** Storage for struct rte_flow_action_nvgre_encap including external data. 
*/ +struct action_nvgre_encap_data { + struct rte_flow_action_nvgre_encap conf; + struct rte_flow_item items[ACTION_NVGRE_ENCAP_ITEMS_NUM]; + struct rte_flow_item_eth item_eth; + struct rte_flow_item_vlan item_vlan; + union { + struct rte_flow_item_ipv4 item_ipv4; + struct rte_flow_item_ipv6 item_ipv6; + }; + struct rte_flow_item_nvgre item_nvgre; +}; + /** Maximum number of subsequent tokens and arguments on the stack. */ #define CTX_STACK_SIZE 16 @@ -796,6 +814,8 @@ static const enum index next_action[] = { ACTION_OF_PUSH_MPLS, ACTION_VXLAN_ENCAP, ACTION_VXLAN_DECAP, + ACTION_NVGRE_ENCAP, + ACTION_NVGRE_DECAP, ZERO, }; @@ -929,6 +949,9 @@ static int parse_vc_action_rss_queue(struct context *, const struct token *, static int parse_vc_action_vxlan_encap(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_vc_action_nvgre_encap(struct context *, const struct token *, + const char *, unsigned int, void *, + unsigned int); static int parse_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2429,6 +2452,24 @@ static const struct token token_list[] = { .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), .call = parse_vc, }, + [ACTION_NVGRE_ENCAP] = { + .name = "nvgre_encap", + .help = "NVGRE encapsulation, uses configuration set by \"set" + " nvgre\"", + .priv = PRIV_ACTION(NVGRE_ENCAP, + sizeof(struct action_nvgre_encap_data)), + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_nvgre_encap, + }, + [ACTION_NVGRE_DECAP] = { + .name = "nvgre_decap", + .help = "Performs a decapsulation action by stripping all" + " headers of the NVGRE tunnel network overlay from the" + " matched flow.", + .priv = PRIV_ACTION(NVGRE_DECAP, 0), + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc, + }, }; /** Remove and return last entry from argument stack. 
*/ @@ -3093,6 +3134,97 @@ parse_vc_action_vxlan_encap(struct context *ctx, const struct token *token, return ret; } +/** Parse NVGRE encap action. */ +static int +parse_vc_action_nvgre_encap(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_nvgre_encap_data *action_nvgre_encap_data; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Set up default configuration. */ + action_nvgre_encap_data = ctx->object; + *action_nvgre_encap_data = (struct action_nvgre_encap_data){ + .conf = (struct rte_flow_action_nvgre_encap){ + .definition = action_nvgre_encap_data->items, + }, + .items = { + { + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = &action_nvgre_encap_data->item_eth, + .mask = &rte_flow_item_eth_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_VLAN, + .spec = &action_nvgre_encap_data->item_vlan, + .mask = &rte_flow_item_vlan_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_IPV4, + .spec = &action_nvgre_encap_data->item_ipv4, + .mask = &rte_flow_item_ipv4_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_NVGRE, + .spec = &action_nvgre_encap_data->item_nvgre, + .mask = &rte_flow_item_nvgre_mask, + }, + { + .type = RTE_FLOW_ITEM_TYPE_END, + }, + }, + .item_eth.type = 0, + .item_vlan = { + .tci = nvgre_encap_conf.vlan_tci, + .inner_type = 0, + }, + .item_ipv4.hdr = { + .src_addr = nvgre_encap_conf.ipv4_src, + .dst_addr = nvgre_encap_conf.ipv4_dst, + }, + .item_nvgre.flow_id = 0, + }; + memcpy(action_nvgre_encap_data->item_eth.dst.addr_bytes, + nvgre_encap_conf.eth_dst, ETHER_ADDR_LEN); + 
memcpy(action_nvgre_encap_data->item_eth.src.addr_bytes, + nvgre_encap_conf.eth_src, ETHER_ADDR_LEN); + if (!nvgre_encap_conf.select_ipv4) { + memcpy(&action_nvgre_encap_data->item_ipv6.hdr.src_addr, + &nvgre_encap_conf.ipv6_src, + sizeof(nvgre_encap_conf.ipv6_src)); + memcpy(&action_nvgre_encap_data->item_ipv6.hdr.dst_addr, + &nvgre_encap_conf.ipv6_dst, + sizeof(nvgre_encap_conf.ipv6_dst)); + action_nvgre_encap_data->items[2] = (struct rte_flow_item){ + .type = RTE_FLOW_ITEM_TYPE_IPV6, + .spec = &action_nvgre_encap_data->item_ipv6, + .mask = &rte_flow_item_ipv6_mask, + }; + } + if (!nvgre_encap_conf.select_vlan) + action_nvgre_encap_data->items[1].type = + RTE_FLOW_ITEM_TYPE_VOID; + memcpy(action_nvgre_encap_data->item_nvgre.tni, nvgre_encap_conf.tni, + RTE_DIM(nvgre_encap_conf.tni)); + action->conf = &action_nvgre_encap_data->conf; + return ret; +} + /** Parse tokens for destroy command. */ static int parse_destroy(struct context *ctx, const struct token *token, diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index bf39ac3ff..dbba7d253 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -409,6 +409,21 @@ struct vxlan_encap_conf vxlan_encap_conf = { .eth_dst = "\xff\xff\xff\xff\xff\xff", }; +struct nvgre_encap_conf nvgre_encap_conf = { + .select_ipv4 = 1, + .select_vlan = 0, + .tni = "\x00\x00\x00", + .ipv4_src = IPv4(127, 0, 0, 1), + .ipv4_dst = IPv4(255, 255, 255, 255), + .ipv6_src = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x00\x01", + .ipv6_dst = "\x00\x00\x00\x00\x00\x00\x00\x00" + "\x00\x00\x00\x00\x00\x00\x11\x11", + .vlan_tci = 0, + .eth_src = "\x00\x00\x00\x00\x00\x00", + .eth_dst = "\xff\xff\xff\xff\xff\xff", +}; + /* Forward function declarations */ static void map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port); diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 0d6618788..2b1e448b0 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -496,6 +496,21 
@@ struct vxlan_encap_conf { }; struct vxlan_encap_conf vxlan_encap_conf; +/* NVGRE encap/decap parameters. */ +struct nvgre_encap_conf { + uint32_t select_ipv4:1; + uint32_t select_vlan:1; + uint8_t tni[3]; + rte_be32_t ipv4_src; + rte_be32_t ipv4_dst; + uint8_t ipv6_src[16]; + uint8_t ipv6_dst[16]; + rte_be16_t vlan_tci; + uint8_t eth_src[ETHER_ADDR_LEN]; + uint8_t eth_dst[ETHER_ADDR_LEN]; +}; +struct nvgre_encap_conf nvgre_encap_conf; + static inline unsigned int lcore_num(void) { diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index 3281778d9..94d8d38c7 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -1552,6 +1552,21 @@ flow rule using the action vxlan_encap will use the last configuration set. To have a different encapsulation header, one of those commands must be called before the flow rule creation. +Config NVGRE Encap outer layers +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Configure the outer layer to encapsulate a packet inside a NVGRE tunnel:: + + set nvgre ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) ip-dst (ip-dst) \ + eth-src (eth-src) eth-dst (eth-dst) + set nvgre-with-vlan ip-version (ipv4|ipv6) tni (tni) ip-src (ip-src) \ + ip-dst (ip-dst) vlan-tci (vlan-tci) eth-src (eth-src) eth-dst (eth-dst) + +Those command will set an internal configuration inside testpmd, any following +flow rule using the action nvgre_encap will use the last configuration set. +To have a different encapsulation header, one of those commands must be called +before the flow rule creation. + Port Functions -------------- @@ -3673,6 +3688,12 @@ This section lists supported actions and their attributes, if any. - ``vxlan_decap``: Performs a decapsulation action by stripping all headers of the VXLAN tunnel network overlay from the matched flow. 
+- ``nvgre_encap``: Performs a NVGRE encapsulation, outer layer configuration + is done through `Config NVGRE Encap outer layers`_. + +- ``nvgre_decap``: Performs a decapsulation action by stripping all headers of + the NVGRE tunnel network overlay from the matched flow. + Destroying flow rules ~~~~~~~~~~~~~~~~~~~~~ @@ -3970,6 +3991,37 @@ IPv6 VXLAN outer header:: testpmd> flow create 0 ingress pattern end actions vxlan_encap / queue index 0 / end +Sample NVGRE encapsulation rule +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +NVGRE encapsulation outer layer has default value pre-configured in testpmd +source code, those can be changed by using the following commands + +IPv4 NVGRE outer header:: + + testpmd> set nvgre ip-version ipv4 tni 4 ip-src 127.0.0.1 ip-dst 128.0.0.1 + eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions nvgre_encap / + queue index 0 / end + + testpmd> set nvgre-with-vlan ip-version ipv4 tni 4 ip-src 127.0.0.1 + ip-dst 128.0.0.1 vlan-tci 34 eth-src 11:11:11:11:11:11 + eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions nvgre_encap / + queue index 0 / end + +IPv6 NVGRE outer header:: + + testpmd> set nvgre ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222 + eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions nvgre_encap / + queue index 0 / end + + testpmd> set nvgre-with-vlan ip-version ipv6 tni 4 ip-src ::1 ip-dst ::2222 + vlan-tci 34 eth-src 11:11:11:11:11:11 eth-dst 22:22:22:22:22:22 + testpmd> flow create 0 ingress pattern end actions nvgre_encap / + queue index 0 / end + BPF Functions --------------
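---

Editor's note on the TNI handling in cmd_set_nvgre_parsed() above: the 24-bit tunnel ID is byte-swapped to big-endian, masked to its low 24 bits, and the last three bytes of the resulting word are copied into the 3-byte `tni` field. A minimal standalone sketch of the same idea, using `htonl()` in place of DPDK's `rte_cpu_to_be_32()`/`RTE_BE32()` (the helper name `pack_tni` is hypothetical, not from the patch):

```c
#include <arpa/inet.h>	/* htonl(), stand-in for rte_cpu_to_be_32() */
#include <stdint.h>
#include <string.h>

/* Pack a 24-bit tunnel ID into a 3-byte big-endian array, mirroring the
 * union trick in cmd_set_nvgre_parsed(): after the byte swap and 24-bit
 * mask, byte 0 of the big-endian word is always zero, so bytes 1..3 hold
 * the TNI in network order. */
void pack_tni(uint32_t tni, uint8_t out[3])
{
	union {
		uint32_t nvgre_tni;
		uint8_t bytes[4];
	} id;

	/* Equivalent to rte_cpu_to_be_32(tni) & RTE_BE32(0x00ffffff). */
	id.nvgre_tni = htonl(tni) & htonl(0x00ffffffu);
	memcpy(out, &id.bytes[1], 3);
}
```

The byte-wise union access makes the result endian-independent: on both little- and big-endian hosts, `out` ends up as the TNI in network byte order, which is what the `rte_flow_item_nvgre.tni` field expects.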