From patchwork Wed Jun 30 07:19:52 2021
X-Patchwork-Submitter: Gregory Etelson
X-Patchwork-Id: 95044
X-Patchwork-Delegate: rasland@nvidia.com
From: Gregory Etelson
Cc: Viacheslav Ovsiienko, Shahaf Shuler, Dekel Peled, Ferruh Yigit
Date: Wed, 30 Jun 2021 10:19:52 +0300
Message-ID: <20210630071952.6225-1-getelson@nvidia.com>
X-Mailer: git-send-email 2.31.1
Subject: [dpdk-dev] [PATCH] net/mlx5: fix pattern expansion in RSS flow rules
List-Id: DPDK patches and discussions

A flow rule pattern may be implicitly expanded by the PMD if the rule has an RSS flow action. The expansion adds network headers to the original pattern, so that the new pattern lists all network levels that participate in the rule's RSS action.

This patch validates that the buffer for the expanded pattern has enough bytes for the new flow items.
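For intuition, RSS expansion turns one user pattern into several more specific ones, one per hash level covered by the RSS action. A toy illustration (plain strings stand in for the `struct rte_flow_item` arrays the real code builds; the names below are illustrative, not the mlx5 data structures):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A user rule of ETH / IPV4 / END with an RSS action hashing on L3+L4
 * is expanded to variants that also match the possible L4 headers. */
static const char *user_pattern = "ETH/IPV4/END";

static const char *expanded[] = {
	"ETH/IPV4/END",     /* original pattern, kept as entry 0 */
	"ETH/IPV4/UDP/END", /* UDP branch added for L4 RSS */
	"ETH/IPV4/TCP/END", /* TCP branch added for L4 RSS */
};

/* Number of expansion entries the output buffer must have room for;
 * the fix below fails with -EINVAL when it does not. */
static size_t
expanded_count(void)
{
	return sizeof(expanded) / sizeof(expanded[0]);
}
```

Each extra entry consumes buffer space for the copied user pattern plus the added items, which is why every growth of `lsize` in the patch is paired with a size check.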
Fixes: c7870bfe09dc ("ethdev: move RSS expansion code to mlx5 driver")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow.c | 63 +++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 30 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c5d4a95a8f..159e84bfab 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -264,6 +264,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
  * set, the following errors are defined:
  *
  * -E2BIG: graph-depth @p graph is too deep.
+ * -EINVAL: @p size has not enough space for expanded pattern.
  */
 static int
 mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
@@ -290,12 +291,12 @@ mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 	memset(&missed_item, 0, sizeof(missed_item));
 	lsize = offsetof(struct mlx5_flow_expand_rss, entry) +
 		MLX5_RSS_EXP_ELT_N * sizeof(buf->entry[0]);
-	if (lsize <= size) {
-		buf->entry[0].priority = 0;
-		buf->entry[0].pattern = (void *)&buf->entry[MLX5_RSS_EXP_ELT_N];
-		buf->entries = 0;
-		addr = buf->entry[0].pattern;
-	}
+	if (lsize > size)
+		return -EINVAL;
+	buf->entry[0].priority = 0;
+	buf->entry[0].pattern = (void *)&buf->entry[MLX5_RSS_EXP_ELT_N];
+	buf->entries = 0;
+	addr = buf->entry[0].pattern;
 	for (item = pattern; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		if (!mlx5_flow_is_rss_expandable_item(item)) {
 			user_pattern_size += sizeof(*item);
@@ -313,12 +314,12 @@ mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 	}
 	user_pattern_size += sizeof(*item); /* Handle END item. */
 	lsize += user_pattern_size;
+	if (lsize > size)
+		return -EINVAL;
 	/* Copy the user pattern in the first entry of the buffer. */
-	if (lsize <= size) {
-		rte_memcpy(addr, pattern, user_pattern_size);
-		addr = (void *)(((uintptr_t)addr) + user_pattern_size);
-		buf->entries = 1;
-	}
+	rte_memcpy(addr, pattern, user_pattern_size);
+	addr = (void *)(((uintptr_t)addr) + user_pattern_size);
+	buf->entries = 1;
 	/* Start expanding. */
 	memset(flow_items, 0, sizeof(flow_items));
 	user_pattern_size -= sizeof(*item);
@@ -348,7 +349,9 @@ mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 		elt = 2; /* missed item + item end. */
 		node = next;
 		lsize += elt * sizeof(*item) + user_pattern_size;
-		if ((node->rss_types & types) && lsize <= size) {
+		if (lsize > size)
+			return -EINVAL;
+		if (node->rss_types & types) {
 			buf->entry[buf->entries].priority = 1;
 			buf->entry[buf->entries].pattern = addr;
 			buf->entries++;
@@ -367,6 +370,7 @@ mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 	while (node) {
 		flow_items[stack_pos].type = node->type;
 		if (node->rss_types & types) {
+			size_t n;
 			/*
 			 * compute the number of items to copy from the
 			 * expansion and copy it.
@@ -376,24 +380,23 @@ mlx5_flow_expand_rss(struct mlx5_flow_expand_rss *buf, size_t size,
 			elt = stack_pos + 2;
 			flow_items[stack_pos + 1].type = RTE_FLOW_ITEM_TYPE_END;
 			lsize += elt * sizeof(*item) + user_pattern_size;
-			if (lsize <= size) {
-				size_t n = elt * sizeof(*item);
-
-				buf->entry[buf->entries].priority =
-					stack_pos + 1 + missed;
-				buf->entry[buf->entries].pattern = addr;
-				buf->entries++;
-				rte_memcpy(addr, buf->entry[0].pattern,
-					   user_pattern_size);
-				addr = (void *)(((uintptr_t)addr) +
-						user_pattern_size);
-				rte_memcpy(addr, &missed_item,
-					   missed * sizeof(*item));
-				addr = (void *)(((uintptr_t)addr) +
-						missed * sizeof(*item));
-				rte_memcpy(addr, flow_items, n);
-				addr = (void *)(((uintptr_t)addr) + n);
-			}
+			if (lsize > size)
+				return -EINVAL;
+			n = elt * sizeof(*item);
+			buf->entry[buf->entries].priority =
+				stack_pos + 1 + missed;
+			buf->entry[buf->entries].pattern = addr;
+			buf->entries++;
+			rte_memcpy(addr, buf->entry[0].pattern,
+				   user_pattern_size);
+			addr = (void *)(((uintptr_t)addr) +
+					user_pattern_size);
+			rte_memcpy(addr, &missed_item,
+				   missed * sizeof(*item));
+			addr = (void *)(((uintptr_t)addr) +
+					missed * sizeof(*item));
+			rte_memcpy(addr, flow_items, n);
+			addr = (void *)(((uintptr_t)addr) + n);
 		}
 		/* Go deeper. */
 		if (!node->optional && node->next) {
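The recurring shape of the fix is check-then-write: grow the required size, return `-EINVAL` the moment it exceeds the caller's buffer, and only then copy, so the buffer is never left partially or silently truncated. A minimal standalone sketch of that pattern (the function name and signature are illustrative, not the mlx5 API):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Append n_items bytes of pattern data into a caller-supplied buffer.
 * Mirrors the patch's style: compute the size the write would need,
 * fail fast with -EINVAL if the destination cannot hold it, and only
 * then perform the copy. Returns bytes written on success. */
static int
expand_into(char *buf, size_t size, const char *items, size_t n_items)
{
	size_t lsize = n_items; /* bytes required by the expanded pattern */

	if (lsize > size)
		return -EINVAL; /* report the shortfall, never truncate */
	memcpy(buf, items, n_items);
	return (int)n_items;
}
```

Compared with the pre-patch `if (lsize <= size) { ... }` guards, the early return also reports the failure to the caller instead of quietly skipping the write and continuing with a partially filled expansion buffer.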