From patchwork Wed Nov 11 09:28:50 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Bing Zhao <bingz@nvidia.com>
X-Patchwork-Id: 84000
X-Patchwork-Delegate: rasland@nvidia.com
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id E0455A09D2;
	Wed, 11 Nov 2020 10:29:03 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id E2F012C36;
	Wed, 11 Nov 2020 10:29:00 +0100 (CET)
Received: from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130])
	by dpdk.org (Postfix) with ESMTP id E32DB2AB;
	Wed, 11 Nov 2020 10:28:58 +0100 (CET)
From: Bing Zhao <bingz@nvidia.com>
To: viacheslavo@nvidia.com, matan@nvidia.com, ferruh.yigit@intel.com
Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com, stable@dpdk.org
Date: Wed, 11 Nov 2020 17:28:50 +0800
Message-Id: <1605086930-189770-1-git-send-email-bingz@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1604382154-336373-1-git-send-email-bingz@nvidia.com>
References: <1604382154-336373-1-git-send-email-bingz@nvidia.com>
Subject: [dpdk-dev] [PATCH v2] net/mlx5: fix eCPRI previous layer checking
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Based on the specification, an eCPRI layer can only follow an ETH (VLAN)
layer or a UDP layer. When creating a flow with an eCPRI item, this
should be checked and an invalid layer layout should be rejected.

Fixes: c7eca23657b7 ("net/mlx5: add flow validation of eCPRI header")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v2: remove the line breaks from the error log messages.
---
 drivers/net/mlx5/mlx5_flow.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a6e60af..859b7f6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2896,17 +2896,20 @@ struct mlx5_flow_tunnel_info {
 					MLX5_FLOW_LAYER_OUTER_VLAN);
 	struct rte_flow_item_ecpri mask_lo;
 
+	if (!(last_item & outer_l2_vlan) &&
+	    last_item != MLX5_FLOW_LAYER_OUTER_L4_UDP)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "eCPRI can only follow L2/VLAN layer or UDP layer");
 	if ((last_item & outer_l2_vlan) && ether_type &&
 	    ether_type != RTE_ETHER_TYPE_ECPRI)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "eCPRI cannot follow L2/VLAN layer "
-					  "which ether type is not 0xAEFE.");
+					  "eCPRI cannot follow L2/VLAN layer which ether type is not 0xAEFE");
 	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "eCPRI with tunnel is not supported "
-					  "right now.");
+					  "eCPRI with tunnel is not supported right now");
 	if (item_flags & MLX5_FLOW_LAYER_OUTER_L3)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
@@ -2914,13 +2917,12 @@ struct mlx5_flow_tunnel_info {
 	else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_TCP)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "eCPRI cannot follow a TCP layer.");
+					  "eCPRI cannot coexist with a TCP layer");
 	/* In specification, eCPRI could be over UDP layer.  */
 	else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-					  "eCPRI over UDP layer is not yet "
-					  "supported right now.");
+					  "eCPRI over UDP layer is not yet supported right now");
 	/* Mask for type field in common header could be zero. */
 	if (!mask)
 		mask = &rte_flow_item_ecpri_mask;
@@ -2929,13 +2931,11 @@ struct mlx5_flow_tunnel_info {
 	if (mask_lo.hdr.common.type != 0 && mask_lo.hdr.common.type != 0xff)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
-					  "partial mask is not supported "
-					  "for protocol");
+					  "partial mask is not supported for protocol");
 	else if (mask_lo.hdr.common.type == 0 && mask->hdr.dummy[0] != 0)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
-					  "message header mask must be after "
-					  "a type mask");
+					  "message header mask must be after a type mask");
 	return mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					 acc_mask ? (const uint8_t *)acc_mask
 						  : (const uint8_t *)&nic_mask,
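
For reviewers who want to exercise the new previous-layer check from an
application, below is a minimal sketch (not part of the patch) that builds an
ETH / ECPRI pattern and runs it through rte_flow_validate(). The port id,
queue index and the IQ-data message type are illustrative assumptions, and
field names follow the DPDK 20.11 definitions; only the item ordering is
relevant to the check added above.

/*
 * Hedged sketch, not part of the patch: build an ETH / ECPRI pattern that
 * satisfies the new previous-layer check and validate it. Port id, queue
 * index and the IQ-data message type are illustrative assumptions.
 */
#include <stdint.h>

#include <rte_byteorder.h>
#include <rte_ecpri.h>
#include <rte_ether.h>
#include <rte_flow.h>

static int
validate_eth_ecpri_flow(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	/* Outer L2 with ether type 0xAEFE, as the check above requires. */
	struct rte_flow_item_eth eth_spec = {
		.type = rte_cpu_to_be_16(RTE_ETHER_TYPE_ECPRI),
	};
	struct rte_flow_item_eth eth_mask = { .type = RTE_BE16(0xffff) };
	struct rte_flow_item_ecpri ecpri_spec = { .hdr.common.u32 = 0 };
	struct rte_flow_item_ecpri ecpri_mask = { .hdr.common.u32 = 0 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_ECPRI,
		  .spec = &ecpri_spec, .mask = &ecpri_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* Match eCPRI IQ data messages; full mask on the type field only. */
	ecpri_spec.hdr.common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA;
	ecpri_mask.hdr.common.type = 0xff;
	/* The eCPRI common header in the item is in network byte order. */
	ecpri_spec.hdr.common.u32 = rte_cpu_to_be_32(ecpri_spec.hdr.common.u32);
	ecpri_mask.hdr.common.u32 = rte_cpu_to_be_32(ecpri_mask.hdr.common.u32);

	return rte_flow_validate(port_id, &attr, pattern, actions, &error);
}

With this patch applied, reordering the pattern so that the eCPRI item no
longer follows an L2/VLAN or UDP item (for example ETH / IPV4 / ECPRI) makes
the validation fail with the new "eCPRI can only follow L2/VLAN layer or UDP
layer" message instead of slipping through the previous-layer check.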