From patchwork Sat Oct 22 08:24:05 2022
X-Patchwork-Submitter: Chaoyong He <chaoyong.he@corigine.com>
X-Patchwork-Id: 118950
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He <chaoyong.he@corigine.com>
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, niklas.soderlund@corigine.com,
 Chaoyong He <chaoyong.he@corigine.com>
Subject: [PATCH v2 01/25] net/nfp: support IPv4 VXLAN flow item
Date: Sat, 22 Oct 2022 16:24:05 +0800
Message-Id: <1666427069-10553-2-git-send-email-chaoyong.he@corigine.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1666427069-10553-1-git-send-email-chaoyong.he@corigine.com>
References: <1666063359-34283-1-git-send-email-chaoyong.he@corigine.com>
 <1666427069-10553-1-git-send-email-chaoyong.he@corigine.com>

Add the corresponding data structures and logic to support the offload of
the IPv4 VXLAN flow item.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
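Usage note (illustration only, not part of the commit message): with this
item supported, an IPv4 VXLAN match can be requested through the generic
rte_flow API, e.g. from testpmd:

  flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 42 / end
       actions count / end

The C sketch below builds the same ETH / IPV4 / UDP / VXLAN pattern
programmatically. The port id, the VNI value 42 and the helper name are
placeholders and not defined by this patch; fields left unset fall back to
the items' default masks (for VXLAN, rte_flow_item_vxlan_mask).

  #include <rte_byteorder.h>
  #include <rte_flow.h>

  /* Sketch: match outer IPv4/UDP/VXLAN traffic with VNI 42 and count it. */
  static struct rte_flow *
  vxlan_count_flow_create(uint16_t port_id, struct rte_flow_error *error)
  {
          const struct rte_flow_attr attr = { .ingress = 1 };
          const struct rte_flow_item_vxlan vxlan_spec = {
                  .hdr = { .vx_vni = RTE_BE32(42u << 8) }, /* VNI in upper 24 bits */
          };
          const struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                  { .type = RTE_FLOW_ITEM_TYPE_UDP },
                  { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_spec },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          const struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_COUNT },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, error);
  }
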
 doc/guides/nics/features/nfp.ini         |   1 +
 drivers/net/nfp/flower/nfp_flower_cmsg.h |  35 +++++
 drivers/net/nfp/nfp_flow.c               | 243 ++++++++++++++++++++++++++-----
 3 files changed, 246 insertions(+), 33 deletions(-)

diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 0184980..faaa7da 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -35,6 +35,7 @@ sctp                 = Y
 tcp                  = Y
 udp                  = Y
 vlan                 = Y
+vxlan                = Y
 
 [rte_flow actions]
 count                = Y

diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 6bf8ff7..08e2873 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -324,6 +324,41 @@ struct nfp_flower_ipv6 {
 	uint8_t ipv6_dst[16];
 };
 
+struct nfp_flower_tun_ipv4 {
+	rte_be32_t src;
+	rte_be32_t dst;
+};
+
+struct nfp_flower_tun_ip_ext {
+	uint8_t tos;
+	uint8_t ttl;
+};
+
+/*
+ * Flow Frame IPv4 UDP TUNNEL --> Tunnel details (5W/20B)
+ * -----------------------------------------------------------------
+ *    3                   2                   1
+ *  1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |                         ipv4_addr_src                         |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |                         ipv4_addr_dst                         |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |           Reserved            |      tos      |      ttl      |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |                            Reserved                           |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |                     VNI                       |   Reserved    |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv4_udp_tun {
+	struct nfp_flower_tun_ipv4 ipv4;
+	rte_be16_t reserved1;
+	struct nfp_flower_tun_ip_ext ip_ext;
+	rte_be32_t reserved2;
+	rte_be32_t tun_id;
+};
+
 struct nfp_fl_act_head {
 	uint8_t jump_id;
 	uint8_t len_lw;

diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 69fc8be..0e1e5ea 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -38,7 +38,8 @@ struct nfp_flow_item_proc {
 			char **mbuf_off,
 			const struct rte_flow_item *item,
 			const struct nfp_flow_item_proc *proc,
-			bool is_mask);
+			bool is_mask,
+			bool is_outer_layer);
 	/* List of possible subsequent items. */
 	const enum rte_flow_item_type *const next_item;
 };
@@ -491,6 +492,7 @@ struct nfp_mask_id_entry {
 		struct nfp_fl_key_ls *key_ls)
 {
 	struct rte_eth_dev *ethdev;
+	bool outer_ip4_flag = false;
 	const struct rte_flow_item *item;
 	struct nfp_flower_representor *representor;
 	const struct rte_flow_item_port_id *port_id;
@@ -526,6 +528,8 @@ struct nfp_mask_id_entry {
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV4 detected");
 			key_ls->key_layer |= NFP_FLOWER_LAYER_IPV4;
 			key_ls->key_size += sizeof(struct nfp_flower_ipv4);
+			if (!outer_ip4_flag)
+				outer_ip4_flag = true;
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV6 detected");
@@ -547,6 +551,21 @@ struct nfp_mask_id_entry {
 			key_ls->key_layer |= NFP_FLOWER_LAYER_TP;
 			key_ls->key_size += sizeof(struct nfp_flower_tp_ports);
 			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_VXLAN detected");
+			/* Clear IPv4 bits */
+			key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+			key_ls->tun_type = NFP_FL_TUN_VXLAN;
+			key_ls->key_layer |= NFP_FLOWER_LAYER_VXLAN;
+			if (outer_ip4_flag) {
+				key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+				/*
+				 * The outer l3 layer information is
+				 * in `struct nfp_flower_ipv4_udp_tun`
+				 */
+				key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+			}
+			break;
 		default:
 			PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
 			return -ENOTSUP;
@@ -719,12 +738,25 @@ struct nfp_mask_id_entry {
 	return ret;
 }
 
+static bool
+nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
+{
+	struct nfp_flower_meta_tci *meta_tci;
+
+	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+		return true;
+
+	return false;
+}
+
 static int
 nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
 		char **mbuf_off,
 		const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc,
-		bool is_mask)
+		bool is_mask,
+		__rte_unused bool is_outer_layer)
 {
 	struct nfp_flower_mac_mpls *eth;
 	const struct rte_flow_item_eth *spec;
@@ -760,7 +792,8 @@ struct nfp_mask_id_entry {
 		__rte_unused char **mbuf_off,
 		const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc,
-		bool is_mask)
+		bool is_mask,
+		__rte_unused bool is_outer_layer)
 {
 	struct nfp_flower_meta_tci *meta_tci;
 	const struct rte_flow_item_vlan *spec;
@@ -789,41 +822,58 @@ struct nfp_mask_id_entry {
 		char **mbuf_off,
 		const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc,
-		bool is_mask)
+		bool is_mask,
+		bool is_outer_layer)
 {
 	struct nfp_flower_ipv4 *ipv4;
 	const struct rte_ipv4_hdr *hdr;
 	struct nfp_flower_meta_tci *meta_tci;
 	const struct rte_flow_item_ipv4 *spec;
 	const struct rte_flow_item_ipv4 *mask;
+	struct nfp_flower_ipv4_udp_tun *ipv4_udp_tun;
 
 	spec = item->spec;
 	mask = item->mask ? item->mask : proc->mask_default;
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 
-	if (spec == NULL) {
-		PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
-		goto ipv4_end;
-	}
+	if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
+		if (spec == NULL) {
+			PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+			return 0;
+		}
 
-	/*
-	 * reserve space for L4 info.
-	 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
-	 */
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
-		*mbuf_off += sizeof(struct nfp_flower_tp_ports);
+		hdr = is_mask ? &mask->hdr : &spec->hdr;
+		ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
 
-	hdr = is_mask ? &mask->hdr : &spec->hdr;
-	ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
+		ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
+		ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
+		ipv4_udp_tun->ipv4.src = hdr->src_addr;
+		ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+	} else {
+		if (spec == NULL) {
+			PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+			goto ipv4_end;
+		}
+
+		/*
+		 * reserve space for L4 info.
+		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
+		 */
+		if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
+
+		hdr = is_mask ? &mask->hdr : &spec->hdr;
+		ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
 
-	ipv4->ip_ext.tos = hdr->type_of_service;
-	ipv4->ip_ext.proto = hdr->next_proto_id;
-	ipv4->ip_ext.ttl = hdr->time_to_live;
-	ipv4->ipv4_src = hdr->src_addr;
-	ipv4->ipv4_dst = hdr->dst_addr;
+		ipv4->ip_ext.tos = hdr->type_of_service;
+		ipv4->ip_ext.proto = hdr->next_proto_id;
+		ipv4->ip_ext.ttl = hdr->time_to_live;
+		ipv4->ipv4_src = hdr->src_addr;
+		ipv4->ipv4_dst = hdr->dst_addr;
 
 ipv4_end:
-	*mbuf_off += sizeof(struct nfp_flower_ipv4);
+		*mbuf_off += sizeof(struct nfp_flower_ipv4);
+	}
 
 	return 0;
 }
@@ -833,7 +883,8 @@ struct nfp_mask_id_entry {
 		char **mbuf_off,
 		const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc,
-		bool is_mask)
+		bool is_mask,
+		__rte_unused bool is_outer_layer)
 {
 	struct nfp_flower_ipv6 *ipv6;
 	const struct rte_ipv6_hdr *hdr;
@@ -878,7 +929,8 @@ struct nfp_mask_id_entry {
 		char **mbuf_off,
 		const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc,
-		bool is_mask)
+		bool is_mask,
+		__rte_unused bool is_outer_layer)
 {
 	uint8_t tcp_flags;
 	struct nfp_flower_tp_ports *ports;
@@ -950,7 +1002,8 @@ struct nfp_mask_id_entry {
 		char **mbuf_off,
 		const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc,
-		bool is_mask)
+		bool is_mask,
+		bool is_outer_layer)
 {
 	char *ports_off;
 	struct nfp_flower_tp_ports *ports;
@@ -964,6 +1017,12 @@ struct nfp_mask_id_entry {
 		return 0;
 	}
 
+	/* Don't add L4 info if working on a inner layer pattern */
+	if (!is_outer_layer) {
+		PMD_DRV_LOG(INFO, "Detected inner layer UDP, skipping.");
+		return 0;
+	}
+
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
@@ -991,7 +1050,8 @@ struct nfp_mask_id_entry {
 		char **mbuf_off,
 		const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc,
-		bool is_mask)
+		bool is_mask,
+		__rte_unused bool is_outer_layer)
 {
 	char *ports_off;
 	struct nfp_flower_tp_ports *ports;
@@ -1027,10 +1087,42 @@ struct nfp_mask_id_entry {
 	return 0;
 }
 
+static int
+nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
+		char **mbuf_off,
+		const struct rte_flow_item *item,
+		const struct nfp_flow_item_proc *proc,
+		bool is_mask,
+		__rte_unused bool is_outer_layer)
+{
+	const struct rte_vxlan_hdr *hdr;
+	struct nfp_flower_ipv4_udp_tun *tun4;
+	const struct rte_flow_item_vxlan *spec;
+	const struct rte_flow_item_vxlan *mask;
+
+	spec = item->spec;
+	if (spec == NULL) {
+		PMD_DRV_LOG(DEBUG, "nfp flow merge vxlan: no item->spec!");
+		goto vxlan_end;
+	}
+
+	mask = item->mask ? item->mask : proc->mask_default;
+	hdr = is_mask ? &mask->hdr : &spec->hdr;
+
+	tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+	tun4->tun_id = hdr->vx_vni;
+
+vxlan_end:
+	*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+	return 0;
+}
+
 /* Graph of supported items and associated process function */
 static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_END] = {
-		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH,
+			RTE_FLOW_ITEM_TYPE_IPV4),
 	},
 	[RTE_FLOW_ITEM_TYPE_ETH] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
@@ -1113,6 +1205,7 @@ struct nfp_mask_id_entry {
 		.merge = nfp_flow_merge_tcp,
 	},
 	[RTE_FLOW_ITEM_TYPE_UDP] = {
+		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN),
 		.mask_support = &(const struct rte_flow_item_udp){
 			.hdr = {
 				.src_port = RTE_BE16(0xffff),
@@ -1134,6 +1227,17 @@ struct nfp_mask_id_entry {
 		.mask_sz = sizeof(struct rte_flow_item_sctp),
 		.merge = nfp_flow_merge_sctp,
 	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
+		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+		.mask_support = &(const struct rte_flow_item_vxlan){
+			.hdr = {
+				.vx_vni = RTE_BE32(0xffffff00),
+			},
+		},
+		.mask_default = &rte_flow_item_vxlan_mask,
+		.mask_sz = sizeof(struct rte_flow_item_vxlan),
+		.merge = nfp_flow_merge_vxlan,
+	},
 };
 
 static int
@@ -1187,21 +1291,53 @@ struct nfp_mask_id_entry {
 	return ret;
 }
 
+static bool
+nfp_flow_is_tun_item(const struct rte_flow_item *item)
+{
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+		return true;
+
+	return false;
+}
+
+static bool
+nfp_flow_inner_item_get(const struct rte_flow_item items[],
+		const struct rte_flow_item **inner_item)
+{
+	const struct rte_flow_item *item;
+
+	*inner_item = items;
+
+	for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+		if (nfp_flow_is_tun_item(item)) {
+			*inner_item = ++item;
+			return true;
+		}
+	}
+
+	return false;
+}
+
 static int
 nfp_flow_compile_item_proc(const struct rte_flow_item items[],
 		struct rte_flow *nfp_flow,
 		char **mbuf_off_exact,
-		char **mbuf_off_mask)
+		char **mbuf_off_mask,
+		bool is_outer_layer)
 {
 	int i;
 	int ret = 0;
+	bool continue_flag = true;
 	const struct rte_flow_item *item;
 	const struct nfp_flow_item_proc *proc_list;
 
 	proc_list = nfp_flow_item_proc_list;
-	for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+	for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END && continue_flag; ++item) {
 		const struct nfp_flow_item_proc *proc = NULL;
 
+		if (nfp_flow_is_tun_item(item))
+			continue_flag = false;
+
 		for (i = 0; proc_list->next_item && proc_list->next_item[i]; ++i) {
 			if (proc_list->next_item[i] == item->type) {
 				proc = &nfp_flow_item_proc_list[item->type];
@@ -1230,14 +1366,14 @@ struct nfp_mask_id_entry {
 		}
 
 		ret = proc->merge(nfp_flow, mbuf_off_exact, item,
-				proc, false);
+				proc, false, is_outer_layer);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "nfp flow item %d exact merge failed", item->type);
 			break;
 		}
 
 		ret = proc->merge(nfp_flow, mbuf_off_mask, item,
-				proc, true);
+				proc, true, is_outer_layer);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR, "nfp flow item %d mask merge failed", item->type);
 			break;
@@ -1257,6 +1393,9 @@ struct nfp_mask_id_entry {
 	int ret;
 	char *mbuf_off_mask;
 	char *mbuf_off_exact;
+	bool is_tun_flow = false;
+	bool is_outer_layer = true;
+	const struct rte_flow_item *loop_item;
 
 	mbuf_off_exact = nfp_flow->payload.unmasked_data +
 			sizeof(struct nfp_flower_meta_tci) +
@@ -1265,14 +1404,29 @@ struct nfp_mask_id_entry {
 			sizeof(struct nfp_flower_meta_tci) +
 			sizeof(struct nfp_flower_in_port);
 
+	/* Check if this is a tunnel flow and get the inner item*/
+	is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
+	if (is_tun_flow)
+		is_outer_layer = false;
+
 	/* Go over items */
-	ret = nfp_flow_compile_item_proc(items, nfp_flow,
-			&mbuf_off_exact, &mbuf_off_mask);
+	ret = nfp_flow_compile_item_proc(loop_item, nfp_flow,
+			&mbuf_off_exact, &mbuf_off_mask, is_outer_layer);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "nfp flow item compile failed.");
 		return -EINVAL;
 	}
 
+	/* Go over inner items */
+	if (is_tun_flow) {
+		ret = nfp_flow_compile_item_proc(items, nfp_flow,
+				&mbuf_off_exact, &mbuf_off_mask, true);
+		if (ret != 0) {
+			PMD_DRV_LOG(ERR, "nfp flow outer item compile failed.");
+			return -EINVAL;
+		}
+	}
+
 	return 0;
 }
 
@@ -2119,12 +2273,35 @@ struct nfp_mask_id_entry {
 	return 0;
 }
 
+static int
+nfp_flow_tunnel_match(__rte_unused struct rte_eth_dev *dev,
+		__rte_unused struct rte_flow_tunnel *tunnel,
+		__rte_unused struct rte_flow_item **pmd_items,
+		uint32_t *num_of_items,
+		__rte_unused struct rte_flow_error *err)
+{
+	*num_of_items = 0;
+
+	return 0;
+}
+
+static int
+nfp_flow_tunnel_item_release(__rte_unused struct rte_eth_dev *dev,
+		__rte_unused struct rte_flow_item *pmd_items,
+		__rte_unused uint32_t num_of_items,
+		__rte_unused struct rte_flow_error *err)
+{
+	return 0;
+}
+
 static const struct rte_flow_ops nfp_flow_ops = {
 	.validate = nfp_flow_validate,
 	.create = nfp_flow_create,
 	.destroy = nfp_flow_destroy,
 	.flush = nfp_flow_flush,
 	.query = nfp_flow_query,
+	.tunnel_match = nfp_flow_tunnel_match,
+	.tunnel_item_release = nfp_flow_tunnel_item_release,
 };
 
 int