From patchwork Fri Nov 24 20:35:45 2017
X-Patchwork-Submitter: "Mody, Rasesh"
X-Patchwork-Id: 31664
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Rasesh Mody
To: dev@dpdk.org
Cc: Shahed Shaikh, Dept-EngDPDKDev@cavium.com
Date: Fri, 24 Nov 2017 12:35:45 -0800
Message-Id: <1511555745-13793-6-git-send-email-rasesh.mody@cavium.com>
In-Reply-To: <1511555745-13793-1-git-send-email-rasesh.mody@cavium.com>
References: <1511555745-13793-1-git-send-email-rasesh.mody@cavium.com>
Subject: [dpdk-dev] [PATCH 5/5] net/qede: add support for GENEVE tunneling offload
List-Id: DPDK patches and discussions

From: Shahed Shaikh

This patch refactors the existing VXLAN tunneling offload code and enables
the following features for GENEVE:
 - destination UDP port configuration
 - checksum offloads
 - filter configuration

Signed-off-by: Shahed Shaikh
---
 drivers/net/qede/qede_ethdev.c | 518 ++++++++++++++++++++++++++--------------
 drivers/net/qede/qede_ethdev.h |  10 +-
 drivers/net/qede/qede_rxtx.c   |   4 +-
 drivers/net/qede/qede_rxtx.h   |   4 +-
 4 files changed, 350 insertions(+), 186 deletions(-)

diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 0128cec..68e99c5 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -15,7 +15,7 @@ static int64_t timer_period = 1;
 
 /* VXLAN tunnel classification mapping */
-const struct _qede_vxlan_tunn_types {
+const struct _qede_udp_tunn_types {
 	uint16_t rte_filter_type;
 	enum ecore_filter_ucast_type qede_type;
 	enum ecore_tunn_clss qede_tunn_clss;
@@ -612,48 +612,118 @@ static void qede_set_ucast_cmn_params(struct ecore_filter_ucast *ucast)
 }
 
 static int
+qede_tunnel_update(struct qede_dev *qdev,
+		   struct ecore_tunnel_info *tunn_info)
+{
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	enum _ecore_status_t rc = ECORE_INVAL;
+	struct ecore_hwfn *p_hwfn;
+	struct ecore_ptt *p_ptt;
+	int i;
+
+	for_each_hwfn(edev, i) {
+		p_hwfn = &edev->hwfns[i];
+		p_ptt = IS_PF(edev) ? ecore_ptt_acquire(p_hwfn) : NULL;
+		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
+				tunn_info, ECORE_SPQ_MODE_CB, NULL);
+		if (IS_PF(edev))
+			ecore_ptt_release(p_hwfn, p_ptt);
+
+		if (rc != ECORE_SUCCESS)
+			break;
+	}
+
+	return rc;
+}
+
+static int
 qede_vxlan_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
-		  bool enable, bool mask)
+		  bool enable)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	enum _ecore_status_t rc = ECORE_INVAL;
-	struct ecore_ptt *p_ptt;
 	struct ecore_tunnel_info tunn;
-	struct ecore_hwfn *p_hwfn;
-	int i;
+
+	if (qdev->vxlan.enable == enable)
+		return ECORE_SUCCESS;
 
 	memset(&tunn, 0, sizeof(struct ecore_tunnel_info));
-	tunn.vxlan.b_update_mode = enable;
-	tunn.vxlan.b_mode_enabled = mask;
+	tunn.vxlan.b_update_mode = true;
+	tunn.vxlan.b_mode_enabled = enable;
 	tunn.b_update_rx_cls = true;
 	tunn.b_update_tx_cls = true;
 	tunn.vxlan.tun_cls = clss;
+	tunn.vxlan_port.b_update_port = true;
+	tunn.vxlan_port.port = enable ? QEDE_VXLAN_DEF_PORT : 0;
 
-	for_each_hwfn(edev, i) {
-		p_hwfn = &edev->hwfns[i];
-		if (IS_PF(edev)) {
-			p_ptt = ecore_ptt_acquire(p_hwfn);
-			if (!p_ptt)
-				return -EAGAIN;
-		} else {
-			p_ptt = NULL;
-		}
-		rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt,
-				&tunn, ECORE_SPQ_MODE_CB, NULL);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(edev, "Failed to update tunn_clss %u\n",
-					tunn.vxlan.tun_cls);
-			if (IS_PF(edev))
-				ecore_ptt_release(p_hwfn, p_ptt);
-			break;
-		}
-	}
+	rc = qede_tunnel_update(qdev, &tunn);
 	if (rc == ECORE_SUCCESS) {
 		qdev->vxlan.enable = enable;
 		qdev->vxlan.udp_port = (enable) ? QEDE_VXLAN_DEF_PORT : 0;
-		DP_INFO(edev, "vxlan is %s\n", enable ? "enabled" : "disabled");
+		DP_INFO(edev, "vxlan is %s, UDP port = %d\n",
+			enable ? "enabled" : "disabled", qdev->vxlan.udp_port);
+	} else {
+		DP_ERR(edev, "Failed to update tunn_clss %u\n",
+		       tunn.vxlan.tun_cls);
+	}
+
+	return rc;
+}
+
+static int
+qede_geneve_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
+		   bool enable)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	enum _ecore_status_t rc = ECORE_INVAL;
+	struct ecore_tunnel_info tunn;
+
+	memset(&tunn, 0, sizeof(struct ecore_tunnel_info));
+	tunn.l2_geneve.b_update_mode = true;
+	tunn.l2_geneve.b_mode_enabled = enable;
+	tunn.ip_geneve.b_update_mode = true;
+	tunn.ip_geneve.b_mode_enabled = enable;
+	tunn.l2_geneve.tun_cls = clss;
+	tunn.ip_geneve.tun_cls = clss;
+	tunn.b_update_rx_cls = true;
+	tunn.b_update_tx_cls = true;
+
+	tunn.geneve_port.b_update_port = true;
+	tunn.geneve_port.port = enable ? QEDE_GENEVE_DEF_PORT : 0;
+
+	rc = qede_tunnel_update(qdev, &tunn);
+	if (rc == ECORE_SUCCESS) {
+		qdev->geneve.enable = enable;
+		qdev->geneve.udp_port = (enable) ? QEDE_GENEVE_DEF_PORT : 0;
+		DP_INFO(edev, "GENEVE is %s, UDP port = %d\n",
+			enable ? "enabled" : "disabled", qdev->geneve.udp_port);
+	} else {
+		DP_ERR(edev, "Failed to update tunn_clss %u\n",
+		       clss);
+	}
+
+	return rc;
+}
+
+static int
+qede_tunn_enable(struct rte_eth_dev *eth_dev, uint8_t clss,
+		 enum rte_eth_tunnel_type tunn_type, bool enable)
+{
+	int rc = -EINVAL;
+
+	switch (tunn_type) {
+	case RTE_TUNNEL_TYPE_VXLAN:
+		rc = qede_vxlan_enable(eth_dev, clss, enable);
+		break;
+	case RTE_TUNNEL_TYPE_GENEVE:
+		rc = qede_geneve_enable(eth_dev, clss, enable);
+		break;
+	default:
+		rc = -EINVAL;
+		break;
 	}
 
 	return rc;
@@ -1367,7 +1437,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 				     DEV_TX_OFFLOAD_TCP_CKSUM |
 				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
 				     DEV_TX_OFFLOAD_TCP_TSO |
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO);
+				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
 
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
@@ -1873,6 +1944,7 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		RTE_PTYPE_L4_UDP,
 		RTE_PTYPE_TUNNEL_VXLAN,
 		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_TUNNEL_GENEVE,
 		/* Inner */
 		RTE_PTYPE_INNER_L2_ETHER,
 		RTE_PTYPE_INNER_L2_ETHER_VLAN,
@@ -2221,74 +2293,36 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 }
 
 static int
-qede_conf_udp_dst_port(struct rte_eth_dev *eth_dev,
-		       struct rte_eth_udp_tunnel *tunnel_udp,
-		       bool add)
+qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
+		      struct rte_eth_udp_tunnel *tunnel_udp)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct ecore_tunnel_info tunn; /* @DPDK */
-	struct ecore_hwfn *p_hwfn;
-	struct ecore_ptt *p_ptt;
 	uint16_t udp_port;
-	int rc, i;
+	int rc;
 
 	PMD_INIT_FUNC_TRACE(edev);
 
 	memset(&tunn, 0, sizeof(tunn));
-	if (tunnel_udp->prot_type == RTE_TUNNEL_TYPE_VXLAN) {
-		/* Enable VxLAN tunnel if needed before UDP port update using
-		 * default MAC/VLAN classification.
-		 */
-		if (add) {
-			if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
-				DP_INFO(edev,
-					"UDP port %u was already configured\n",
-					tunnel_udp->udp_port);
-				return ECORE_SUCCESS;
-			}
-			/* Enable VXLAN if it was not enabled while adding
-			 * VXLAN filter.
-			 */
-			if (!qdev->vxlan.enable) {
-				rc = qede_vxlan_enable(eth_dev,
-					ECORE_TUNN_CLSS_MAC_VLAN, true, true);
-				if (rc != ECORE_SUCCESS) {
-					DP_ERR(edev, "Failed to enable VXLAN "
-						"prior to updating UDP port\n");
-					return rc;
-				}
-			}
-			udp_port = tunnel_udp->udp_port;
-		} else {
-			if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
-				DP_ERR(edev, "UDP port %u doesn't exist\n",
-					tunnel_udp->udp_port);
-				return ECORE_INVAL;
-			}
-			udp_port = 0;
+
+	switch (tunnel_udp->prot_type) {
+	case RTE_TUNNEL_TYPE_VXLAN:
+		if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
+			DP_ERR(edev, "UDP port %u doesn't exist\n",
+			       tunnel_udp->udp_port);
+			return ECORE_INVAL;
 		}
+		udp_port = 0;
 
 		tunn.vxlan_port.b_update_port = true;
 		tunn.vxlan_port.port = udp_port;
-		for_each_hwfn(edev, i) {
-			p_hwfn = &edev->hwfns[i];
-			if (IS_PF(edev)) {
-				p_ptt = ecore_ptt_acquire(p_hwfn);
-				if (!p_ptt)
-					return -EAGAIN;
-			} else {
-				p_ptt = NULL;
-			}
-			rc = ecore_sp_pf_update_tunn_cfg(p_hwfn, p_ptt, &tunn,
-					ECORE_SPQ_MODE_CB, NULL);
-			if (rc != ECORE_SUCCESS) {
-				DP_ERR(edev, "Unable to config UDP port %u\n",
-				       tunn.vxlan_port.port);
-				if (IS_PF(edev))
-					ecore_ptt_release(p_hwfn, p_ptt);
-				return rc;
-			}
+
+		rc = qede_tunnel_update(qdev, &tunn);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(edev, "Unable to config UDP port %u\n",
+			       tunn.vxlan_port.port);
+			return rc;
 		}
 
 		qdev->vxlan.udp_port = udp_port;
@@ -2296,26 +2330,145 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	 * VXLAN filters have reached 0 then VxLAN offload can be be
 	 * disabled.
 	 */
-	if (!add && qdev->vxlan.enable && qdev->vxlan.num_filters == 0)
+	if (qdev->vxlan.enable && qdev->vxlan.num_filters == 0)
 		return qede_vxlan_enable(eth_dev,
-				ECORE_TUNN_CLSS_MAC_VLAN, false, true);
+				ECORE_TUNN_CLSS_MAC_VLAN, false);
+
+		break;
+
+	case RTE_TUNNEL_TYPE_GENEVE:
+		if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
+			DP_ERR(edev, "UDP port %u doesn't exist\n",
+			       tunnel_udp->udp_port);
+			return ECORE_INVAL;
+		}
+
+		udp_port = 0;
+
+		tunn.geneve_port.b_update_port = true;
+		tunn.geneve_port.port = udp_port;
+
+		rc = qede_tunnel_update(qdev, &tunn);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(edev, "Unable to config UDP port %u\n",
+			       tunn.geneve_port.port);
+			return rc;
+		}
+
+		qdev->geneve.udp_port = udp_port;
+		/* If the request is to delete UDP port and if the number of
+		 * GENEVE filters have reached 0 then GENEVE offload can be
+		 * disabled.
+		 */
+		if (qdev->geneve.enable && qdev->geneve.num_filters == 0)
+			return qede_geneve_enable(eth_dev,
+					ECORE_TUNN_CLSS_MAC_VLAN, false);
+
+		break;
+
+	default:
+		return ECORE_INVAL;
 	}
 
 	return 0;
-}
 
-static int
-qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
-		      struct rte_eth_udp_tunnel *tunnel_udp)
-{
-	return qede_conf_udp_dst_port(eth_dev, tunnel_udp, false);
 }
-
 static int
 qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 		      struct rte_eth_udp_tunnel *tunnel_udp)
 {
-	return qede_conf_udp_dst_port(eth_dev, tunnel_udp, true);
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	struct ecore_tunnel_info tunn; /* @DPDK */
+	uint16_t udp_port;
+	int rc;
+
+	PMD_INIT_FUNC_TRACE(edev);
+
+	memset(&tunn, 0, sizeof(tunn));
+
+	switch (tunnel_udp->prot_type) {
+	case RTE_TUNNEL_TYPE_VXLAN:
+		if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
+			DP_INFO(edev,
+				"UDP port %u for VXLAN was already configured\n",
+				tunnel_udp->udp_port);
+			return ECORE_SUCCESS;
+		}
+
+		/* Enable VxLAN tunnel with default MAC/VLAN classification if
+		 * it was not enabled while adding VXLAN filter before UDP port
+		 * update.
+		 */
+		if (!qdev->vxlan.enable) {
+			rc = qede_vxlan_enable(eth_dev,
+				ECORE_TUNN_CLSS_MAC_VLAN, true);
+			if (rc != ECORE_SUCCESS) {
+				DP_ERR(edev, "Failed to enable VXLAN "
+					"prior to updating UDP port\n");
+				return rc;
+			}
+		}
+		udp_port = tunnel_udp->udp_port;
+
+		tunn.vxlan_port.b_update_port = true;
+		tunn.vxlan_port.port = udp_port;
+
+		rc = qede_tunnel_update(qdev, &tunn);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(edev, "Unable to config UDP port %u for VXLAN\n",
+			       udp_port);
+			return rc;
+		}
+
+		DP_INFO(edev, "Updated UDP port %u for VXLAN\n", udp_port);
+
+		qdev->vxlan.udp_port = udp_port;
+		break;
+
+	case RTE_TUNNEL_TYPE_GENEVE:
+		if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
+			DP_INFO(edev,
+				"UDP port %u for GENEVE was already configured\n",
+				tunnel_udp->udp_port);
+			return ECORE_SUCCESS;
+		}
+
+		/* Enable GENEVE tunnel with default MAC/VLAN classification if
+		 * it was not enabled while adding GENEVE filter before UDP port
+		 * update.
+		 */
+		if (!qdev->geneve.enable) {
+			rc = qede_geneve_enable(eth_dev,
+				ECORE_TUNN_CLSS_MAC_VLAN, true);
+			if (rc != ECORE_SUCCESS) {
+				DP_ERR(edev, "Failed to enable GENEVE "
+					"prior to updating UDP port\n");
+				return rc;
+			}
+		}
+		udp_port = tunnel_udp->udp_port;
+
+		tunn.geneve_port.b_update_port = true;
+		tunn.geneve_port.port = udp_port;
+
+		rc = qede_tunnel_update(qdev, &tunn);
+		if (rc != ECORE_SUCCESS) {
+			DP_ERR(edev, "Unable to config UDP port %u for GENEVE\n",
+			       udp_port);
+			return rc;
+		}
+
+		DP_INFO(edev, "Updated UDP port %u for GENEVE\n", udp_port);
+
+		qdev->geneve.udp_port = udp_port;
+		break;
+
+	default:
+		return ECORE_INVAL;
+	}
+
+	return 0;
 }
 
 static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type,
@@ -2382,113 +2535,116 @@ static void qede_get_ecore_tunn_params(uint32_t filter, uint32_t *type,
 	return ECORE_SUCCESS;
 }
 
-static int qede_vxlan_tunn_config(struct rte_eth_dev *eth_dev,
-				  enum rte_filter_op filter_op,
-				  const struct rte_eth_tunnel_filter_conf *conf)
+static int
+_qede_tunn_filter_config(struct rte_eth_dev *eth_dev,
+			 const struct rte_eth_tunnel_filter_conf *conf,
+			 __attribute__((unused)) enum rte_filter_op filter_op,
+			 enum ecore_tunn_clss *clss,
+			 bool add)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	enum ecore_filter_ucast_type type;
-	enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS;
 	struct ecore_filter_ucast ucast = {0};
-	char str[80];
+	enum ecore_filter_ucast_type type;
 	uint16_t filter_type = 0;
+	char str[80];
 	int rc;
 
-	PMD_INIT_FUNC_TRACE(edev);
+	filter_type = conf->filter_type;
+	/* Determine if the given filter classification is supported */
+	qede_get_ecore_tunn_params(filter_type, &type, clss, str);
+	if (*clss == MAX_ECORE_TUNN_CLSS) {
+		DP_ERR(edev, "Unsupported filter type\n");
+		return -EINVAL;
+	}
+	/* Init tunnel ucast params */
+	rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
+	if (rc != ECORE_SUCCESS) {
+		DP_ERR(edev, "Unsupported Tunnel filter type 0x%x\n",
+		       conf->filter_type);
+		return rc;
+	}
+	DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
+		str, filter_op, ucast.type);
 
-	switch (filter_op) {
-	case RTE_ETH_FILTER_ADD:
-		if (IS_VF(edev))
-			return qede_vxlan_enable(eth_dev,
-					ECORE_TUNN_CLSS_MAC_VLAN, true, true);
+	ucast.opcode = add ? ECORE_FILTER_ADD : ECORE_FILTER_REMOVE;
 
-		filter_type = conf->filter_type;
-		/* Determine if the given filter classification is supported */
-		qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
-		if (clss == MAX_ECORE_TUNN_CLSS) {
-			DP_ERR(edev, "Unsupported filter type\n");
-			return -EINVAL;
-		}
-		/* Init tunnel ucast params */
-		rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n",
-			       conf->filter_type);
-			return rc;
+	/* Skip MAC/VLAN if filter is based on VNI */
+	if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
+		rc = qede_mac_int_ops(eth_dev, &ucast, add);
+		if ((rc == 0) && add) {
+			/* Enable accept anyvlan */
+			qede_config_accept_any_vlan(qdev, true);
 		}
-		DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
-			str, filter_op, ucast.type);
-
-		ucast.opcode = ECORE_FILTER_ADD;
+	} else {
+		rc = qede_ucast_filter(eth_dev, &ucast, add);
+		if (rc == 0)
+			rc = ecore_filter_ucast_cmd(edev, &ucast,
+						    ECORE_SPQ_MODE_CB, NULL);
+	}
 
-		/* Skip MAC/VLAN if filter is based on VNI */
-		if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
-			rc = qede_mac_int_ops(eth_dev, &ucast, 1);
-			if (rc == 0) {
-				/* Enable accept anyvlan */
-				qede_config_accept_any_vlan(qdev, true);
-			}
-		} else {
-			rc = qede_ucast_filter(eth_dev, &ucast, 1);
-			if (rc == 0)
-				rc = ecore_filter_ucast_cmd(edev, &ucast,
-						ECORE_SPQ_MODE_CB, NULL);
-		}
+	return rc;
+}
 
-		if (rc != ECORE_SUCCESS)
-			return rc;
+static int
+qede_tunn_filter_config(struct rte_eth_dev *eth_dev,
+			enum rte_filter_op filter_op,
+			const struct rte_eth_tunnel_filter_conf *conf)
+{
+	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+	enum ecore_tunn_clss clss = MAX_ECORE_TUNN_CLSS;
+	bool add;
+	int rc;
 
-		qdev->vxlan.num_filters++;
-		qdev->vxlan.filter_type = filter_type;
-		if (!qdev->vxlan.enable)
-			return qede_vxlan_enable(eth_dev, clss, true, true);
+	PMD_INIT_FUNC_TRACE(edev);
 
-		break;
+	switch (filter_op) {
+	case RTE_ETH_FILTER_ADD:
+		add = true;
+		break;
 	case RTE_ETH_FILTER_DELETE:
-		if (IS_VF(edev))
-			return qede_vxlan_enable(eth_dev,
-					ECORE_TUNN_CLSS_MAC_VLAN, false, true);
+		add = false;
+		break;
+	default:
+		DP_ERR(edev, "Unsupported operation %d\n", filter_op);
+		return -EINVAL;
+	}
 
-		filter_type = conf->filter_type;
-		/* Determine if the given filter classification is supported */
-		qede_get_ecore_tunn_params(filter_type, &type, &clss, str);
-		if (clss == MAX_ECORE_TUNN_CLSS) {
-			DP_ERR(edev, "Unsupported filter type\n");
-			return -EINVAL;
-		}
-		/* Init tunnel ucast params */
-		rc = qede_set_ucast_tunn_cmn_param(&ucast, conf, type);
-		if (rc != ECORE_SUCCESS) {
-			DP_ERR(edev, "Unsupported VxLAN filter type 0x%x\n",
-			       conf->filter_type);
-			return rc;
-		}
-		DP_INFO(edev, "Rule: \"%s\", op %d, type 0x%x\n",
-			str, filter_op, ucast.type);
+	if (IS_VF(edev))
+		return qede_tunn_enable(eth_dev,
+					ECORE_TUNN_CLSS_MAC_VLAN,
+					conf->tunnel_type, add);
 
-		ucast.opcode = ECORE_FILTER_REMOVE;
+	rc = _qede_tunn_filter_config(eth_dev, conf, filter_op, &clss, add);
+	if (rc != ECORE_SUCCESS)
+		return rc;
 
-		if (!(filter_type & ETH_TUNNEL_FILTER_TENID)) {
-			rc = qede_mac_int_ops(eth_dev, &ucast, 0);
-		} else {
-			rc = qede_ucast_filter(eth_dev, &ucast, 0);
-			if (rc == 0)
-				rc = ecore_filter_ucast_cmd(edev, &ucast,
-						ECORE_SPQ_MODE_CB, NULL);
+	if (add) {
+		if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN) {
+			qdev->vxlan.num_filters++;
+			qdev->vxlan.filter_type = conf->filter_type;
+		} else { /* GENEVE */
+			qdev->geneve.num_filters++;
+			qdev->geneve.filter_type = conf->filter_type;
 		}
-		if (rc != ECORE_SUCCESS)
-			return rc;
 
-		qdev->vxlan.num_filters--;
+		if (!qdev->vxlan.enable || !qdev->geneve.enable)
+			return qede_tunn_enable(eth_dev, clss,
+						conf->tunnel_type,
+						true);
+	} else {
+		if (conf->tunnel_type == RTE_TUNNEL_TYPE_VXLAN)
+			qdev->vxlan.num_filters--;
+		else /* GENEVE */
+			qdev->geneve.num_filters--;
 
 		/* Disable VXLAN if VXLAN filters become 0 */
-		if (qdev->vxlan.num_filters == 0)
-			return qede_vxlan_enable(eth_dev, clss, false, true);
-		break;
-	default:
-		DP_ERR(edev, "Unsupported operation %d\n", filter_op);
-		return -EINVAL;
+		if ((qdev->vxlan.num_filters == 0) ||
+		    (qdev->geneve.num_filters == 0))
+			return qede_tunn_enable(eth_dev, clss,
+						conf->tunnel_type,
+						false);
 	}
 
 	return 0;
@@ -2508,13 +2664,13 @@ int qede_dev_filter_ctrl(struct rte_eth_dev *eth_dev,
 	case RTE_ETH_FILTER_TUNNEL:
 		switch (filter_conf->tunnel_type) {
 		case RTE_TUNNEL_TYPE_VXLAN:
+		case RTE_TUNNEL_TYPE_GENEVE:
 			DP_INFO(edev, "Packet steering to the specified Rx queue"
-				" is not supported with VXLAN tunneling");
-			return(qede_vxlan_tunn_config(eth_dev, filter_op,
+				" is not supported with UDP tunneling");
+			return(qede_tunn_filter_config(eth_dev, filter_op,
 						      filter_conf));
 		/* Place holders for future tunneling support */
-		case RTE_TUNNEL_TYPE_GENEVE:
 		case RTE_TUNNEL_TYPE_TEREDO:
 		case RTE_TUNNEL_TYPE_NVGRE:
 		case RTE_TUNNEL_TYPE_IP_IN_GRE:
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 021de5c..7e55baf 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -166,11 +166,14 @@ struct qede_fdir_info {
 	SLIST_HEAD(fdir_list_head, qede_fdir_entry)fdir_list_head;
 };
 
-struct qede_vxlan_tunn {
+/* IANA assigned default UDP ports for encapsulation protocols */
+#define QEDE_VXLAN_DEF_PORT	(4789)
+#define QEDE_GENEVE_DEF_PORT	(6081)
+
+struct qede_udp_tunn {
 	bool enable;
 	uint16_t num_filters;
 	uint16_t filter_type;
-#define QEDE_VXLAN_DEF_PORT	(4789)
 	uint16_t udp_port;
 };
 
@@ -202,7 +205,8 @@ struct qede_dev {
 	SLIST_HEAD(uc_list_head, qede_ucast_entry) uc_list_head;
 	uint16_t num_uc_addr;
 	bool handle_hw_err;
-	struct qede_vxlan_tunn vxlan;
+	struct qede_udp_tunn vxlan;
+	struct qede_udp_tunn geneve;
 	struct qede_fdir_info fdir_info;
 	bool vlan_strip_flg;
 	char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 01a24e5..184f0e1 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1792,7 +1792,9 @@ static inline uint32_t qede_rx_cqe_to_tunn_pkt_type(uint16_t flags)
 	if (((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
 	     PKT_TX_TUNNEL_VXLAN) ||
 	    ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
-	     PKT_TX_TUNNEL_MPLSINUDP)) {
+	     PKT_TX_TUNNEL_MPLSINUDP) ||
+	    ((tx_ol_flags & PKT_TX_TUNNEL_MASK) ==
+	     PKT_TX_TUNNEL_GENEVE)) {
 		/* Check against max which is Tunnel IPv6 + ext */
 		if (unlikely(txq->nb_tx_avail <
 			     ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT))
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index acf9e47..6214c97 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,7 +73,8 @@
 				ETH_RSS_IPV6 |\
 				ETH_RSS_NONFRAG_IPV6_TCP |\
 				ETH_RSS_NONFRAG_IPV6_UDP |\
-				ETH_RSS_VXLAN)
+				ETH_RSS_VXLAN |\
+				ETH_RSS_GENEVE)
 
 #define QEDE_TXQ_FLAGS	((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS)
 
@@ -151,6 +152,7 @@
 			      PKT_TX_QINQ_PKT | \
 			      PKT_TX_VLAN_PKT | \
 			      PKT_TX_TUNNEL_VXLAN | \
+			      PKT_TX_TUNNEL_GENEVE | \
 			      PKT_TX_TUNNEL_MPLSINUDP)
 
 #define QEDE_TX_OFFLOAD_NOTSUP_MASK \