From patchwork Tue Nov 2 17:01:30 2021
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 103521
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dmitry Kozlyuk
CC: Ori Kam, Andrew Rybchenko, Thomas Monjalon, Ferruh Yigit
Date: Tue, 2 Nov 2021 19:01:30 +0200
Message-ID: <20211102170135.959380-2-dkozlyuk@nvidia.com>
In-Reply-To: <20211102170135.959380-1-dkozlyuk@nvidia.com>
References: <20211102135415.944050-1-dkozlyuk@nvidia.com> <20211102170135.959380-1-dkozlyuk@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 1/6] ethdev: add capability to keep flow rules on restart
List-Id: DPDK patches and discussions

Previously, it was not specified what happens to flow rules when the device
is stopped, possibly reconfigured, and then started. Keeping flow rules
would be convenient for application developers, because they would not need
to save and restore them. However, due to the number of flows and the
possible creation rate, it is impractical to save all flow rules in the DPDK
layer. This means that flow rule persistence really depends on whether the
PMD and HW can implement it efficiently. It can also be limited by the rule
item and action types, and by the rule attributes' transfer bit
(a combination of an item/action type and a value of the transfer bit
is called a rule feature).

Add a device capability bit for PMDs that can keep at least some of the
flow rules across restart. Without this capability, the behavior is still
unspecified and it is declared that the application must flush the rules
before stopping the device. Allow the application to test for persistence
of rules using a particular feature by attempting to create a flow rule
using that feature when the device is stopped and checking for the specific
error. This is logical: if the PMD can create the flow rule when the device
is not started and use it after the start happens, it is natural that it can
move its internal flow rule object to the same state when the device is
stopped and restore the state when the device is started.

Rule persistence across reconfigurations is not required, because tracking
all the rules and the configuration-dependent resources they use may be
infeasible. In case a PMD cannot keep the rules across reconfiguration,
it is allowed to simply report an error. The application must then flush
the rules before attempting it.
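For illustration, the probing sequence described above could look as follows in application code. This is only a sketch against the proposed API: the helper name is hypothetical, and it assumes `port_id` is configured but not yet started, with `attr`/`pattern`/`actions` forming a valid rule that uses the feature of interest.

```c
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/*
 * Hypothetical helper (not part of the patch): probe whether flow rules
 * using a given feature are kept across stop/start.
 * Returns 1 if such rules are kept, 0 if they must be flushed before
 * stopping the device, negative on unrelated errors.
 */
static int
probe_flow_rule_kept(uint16_t port_id,
		     const struct rte_flow_attr *attr,
		     const struct rte_flow_item pattern[],
		     const struct rte_flow_action actions[])
{
	struct rte_eth_dev_info info;
	struct rte_flow_error error;
	struct rte_flow *flow;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return -1;
	/* Without the capability, no persistence at all. */
	if (!(info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP))
		return 0;
	/* Try to create the rule while the device is stopped. */
	flow = rte_flow_create(port_id, attr, pattern, actions, &error);
	if (flow == NULL)
		/* This specific error type means the feature is not kept. */
		return error.type == RTE_FLOW_ERROR_TYPE_STATE ? 0 : -1;
	/* Created while stopped: rules with this feature will be kept. */
	rte_flow_destroy(port_id, flow, &error);
	return 1;
}
```

A return of 1 would mean rules using this feature survive stop/start; 0 would mean the application must flush them before `rte_eth_dev_stop()`.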
Signed-off-by: Dmitry Kozlyuk
Acked-by: Ori Kam
Acked-by: Andrew Rybchenko
---
 doc/guides/prog_guide/rte_flow.rst | 36 ++++++++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h            |  7 ++++++
 lib/ethdev/rte_flow.h              |  1 +
 3 files changed, 44 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2d2d87f1db..e01a079230 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -87,6 +87,42 @@ To avoid resource leaks on the PMD side, handles must be explicitly
 destroyed by the application before releasing associated resources such as
 queues and ports.
 
+.. warning::
+
+   The following description of rule persistence is an experimental behavior
+   that may change without a prior notice.
+
+When the device is stopped, its rules do not process the traffic.
+In particular, transfer rules created using some device
+stop affecting the traffic even if they refer to different ports.
+
+If ``RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP`` is not advertised,
+rules cannot be created until the device is started for the first time
+and cannot be kept when the device is stopped.
+However, PMD also does not flush them automatically on stop,
+so the application must call ``rte_flow_flush()`` or ``rte_flow_destroy()``
+before stopping the device to ensure no rules remain.
+
+If ``RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP`` is advertised, this means
+the PMD can keep at least some rules across the device stop and start.
+However, ``rte_eth_dev_configure()`` may fail if any rules remain,
+so the application must flush them before attempting a reconfiguration.
+Keeping may be unsupported for some types of rule items and actions,
+as well as depending on the value of the flow attributes transfer bit.
+A combination of a single item or action type
+and a value of the transfer bit is called a rule feature.
+For example: a COUNT action with the transfer bit set.
+To test if rules with a particular feature are kept, the application must try
+to create a valid rule using this feature when the device is not started
+(either before the first start or after a stop).
+If it fails with an error of type ``RTE_FLOW_ERROR_TYPE_STATE``,
+all rules using this feature must be flushed by the application
+before stopping the device.
+If it succeeds, such rules will be kept when the device is stopped,
+provided they do not use other features that are not supported.
+Rules that are created when the device is stopped, including the rules
+created for the test, will be kept after the device is started.
+
 The following sections cover:
 
 - **Attributes** (represented by ``struct rte_flow_attr``): properties of a
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 24f30b4b28..a18e6ab887 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -90,6 +90,11 @@
  * - flow director filtering mode (but not filtering rules)
  * - NIC queue statistics mappings
  *
+ * The following configuration may be retained or not
+ * depending on the device capabilities:
+ *
+ * - flow rules
+ *
  * Any other configuration will not be stored and will need to be re-entered
  * before a call to rte_eth_dev_start().
  *
@@ -1691,6 +1696,8 @@ struct rte_eth_conf {
  * mbuf->port field.
  */
 #define RTE_ETH_DEV_CAPA_RXQ_SHARE RTE_BIT64(2)
+/** Device supports keeping flow rules across restart. */
+#define RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP RTE_BIT64(3)
 /**@}*/
 
 /*
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 85ab29b320..ebcd3a3c8e 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -3748,6 +3748,7 @@ enum rte_flow_error_type {
 	RTE_FLOW_ERROR_TYPE_ACTION_NUM, /**< Number of actions. */
 	RTE_FLOW_ERROR_TYPE_ACTION_CONF, /**< Action configuration. */
 	RTE_FLOW_ERROR_TYPE_ACTION, /**< Specific action. */
+	RTE_FLOW_ERROR_TYPE_STATE, /**< Current device state. */
 };
 
 /**

From patchwork Tue Nov 2 17:01:31 2021
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 103522
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dmitry Kozlyuk
CC: Ori Kam, Andrew Rybchenko, Thomas Monjalon, Ferruh Yigit
Date: Tue, 2 Nov 2021 19:01:31 +0200
Message-ID: <20211102170135.959380-3-dkozlyuk@nvidia.com>
In-Reply-To: <20211102170135.959380-1-dkozlyuk@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 2/6] ethdev: add capability to keep shared objects on restart
rte_flow_action_handle_create() did not mention what happens with an
indirect action when a device is stopped and started again. It is natural
for some indirect actions, like counters, to be persistent. Keeping others
at least saves application time and complexity. However, not all PMDs can
support it, or the support may be limited by particular action kinds, that
is, combinations of action type and the value of the transfer bit in its
configuration.

Add a device capability to indicate if at least some indirect actions are
kept across the above sequence. Without this capability, the behavior is
still unspecified, and the application is required to destroy the indirect
actions before stopping the device. In the future, indirect actions may not
be the only type of objects shared between flow rules. The capability bit
intends to cover all possible types of such objects, hence its name.

Declare that the application can test for the persistence of a particular
indirect action kind by attempting to create an indirect action of that kind
when the device is stopped and checking for the specific error type. This is
logical: if the PMD can create an indirect action when the device is not
started and use it after the start happens, it is natural that it can move
its internal flow shared object to the same state when the device is stopped
and restore the state when the device is started.

Indirect action persistence across reconfigurations is not required. In case
a PMD cannot keep the indirect actions across reconfiguration, it is allowed
to simply report an error. The application must then destroy the indirect
actions before attempting it.
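The analogous probe for indirect actions could be sketched as below. As with the flow rule probe, this is only an illustration against the proposed API: the helper name is hypothetical, and it assumes `port_id` is configured but not started, with `conf`/`action` describing a valid indirect action of the kind of interest.

```c
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/*
 * Hypothetical helper (not part of the patch): probe whether indirect
 * actions of a given kind survive device stop/start.
 * Returns 1 if they are kept, 0 if they must be destroyed before stop,
 * negative on unrelated errors.
 */
static int
probe_indirect_action_kept(uint16_t port_id,
			   const struct rte_flow_indir_action_conf *conf,
			   const struct rte_flow_action *action)
{
	struct rte_eth_dev_info info;
	struct rte_flow_error error;
	struct rte_flow_action_handle *handle;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return -1;
	/* Without the capability, no persistence at all. */
	if (!(info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP))
		return 0;
	/* Try to create the indirect action while the device is stopped. */
	handle = rte_flow_action_handle_create(port_id, conf, action, &error);
	if (handle == NULL)
		/* This specific error type means the kind is not kept. */
		return error.type == RTE_FLOW_ERROR_TYPE_STATE ? 0 : -1;
	/* Created while stopped: actions of this kind will be kept. */
	rte_flow_action_handle_destroy(port_id, handle, &error);
	return 1;
}
```

A return of 0 would tell the application to call `rte_flow_action_handle_destroy()` on all indirect actions of this kind before `rte_eth_dev_stop()`.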
Signed-off-by: Dmitry Kozlyuk
Acked-by: Ori Kam
Acked-by: Andrew Rybchenko
---
 doc/guides/prog_guide/rte_flow.rst | 31 ++++++++++++++++++++++++++++++
 lib/ethdev/rte_ethdev.h            |  3 +++
 2 files changed, 34 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e01a079230..77de8da973 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -2995,6 +2995,37 @@ updated depend on the type of the ``action`` and different for every type.
 The indirect action specified data (e.g. counter) can be queried by
 ``rte_flow_action_handle_query()``.
 
+.. warning::
+
+   The following description of indirect action persistence
+   is an experimental behavior that may change without a prior notice.
+
+If ``RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP`` is not advertised,
+indirect actions cannot be created until the device is started
+for the first time and cannot be kept when the device is stopped.
+However, PMD also does not flush them automatically on stop,
+so the application must call ``rte_flow_action_handle_destroy()``
+before stopping the device to ensure no indirect actions remain.
+
+If ``RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP`` is advertised,
+this means that the PMD can keep at least some indirect actions
+across device stop and start.
+However, ``rte_eth_dev_configure()`` may fail if any indirect actions remain,
+so the application must destroy them before attempting a reconfiguration.
+Keeping may be only supported for certain kinds of indirect actions.
+A kind is a combination of an action type and a value of its transfer bit.
+For example: an indirect counter with the transfer bit reset.
+To test if a particular kind of indirect actions is kept,
+the application must try to create a valid indirect action of that kind
+when the device is not started (either before the first start or after a stop).
+If it fails with an error of type ``RTE_FLOW_ERROR_TYPE_STATE``,
+the application must destroy all indirect actions of this kind
+before stopping the device.
+If it succeeds, all indirect actions of the same kind are kept
+when the device is stopped.
+Indirect actions of a kept kind that are created when the device is stopped,
+including the ones created for the test, will be kept after the device start.
+
 .. _table_rte_flow_action_handle:
 
 .. table:: INDIRECT
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index a18e6ab887..5f803ad1e6 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -94,6 +94,7 @@
  * depending on the device capabilities:
  *
  * - flow rules
+ * - flow-related shared objects, e.g. indirect actions
  *
  * Any other configuration will not be stored and will need to be re-entered
  * before a call to rte_eth_dev_start().
@@ -1698,6 +1699,8 @@ struct rte_eth_conf {
 #define RTE_ETH_DEV_CAPA_RXQ_SHARE RTE_BIT64(2)
 /** Device supports keeping flow rules across restart. */
 #define RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP RTE_BIT64(3)
+/** Device supports keeping shared flow objects across restart. */
+#define RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP RTE_BIT64(4)
 /**@}*/
 
 /*

From patchwork Tue Nov 2 17:01:32 2021
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 103523
X-Patchwork-Delegate: ferruh.yigit@amd.com
b=Zl8diJw/XA/BUpdloBnmg+TCC6fNIFCfxns3yFa8Wmge/TMpMc1b1BCnQtZ8CgkAijIxEdvmrwwsinmpHqDZpaBnBrDPxlfALd/URt/ZFC0KvuNQpGqhEYSHjwo7WMvGGKU+au0+poSyiZBuJldhdL/4+NMxpbFGANs+G84ApI0ZP8A1iLqQsDNbd9zZyk/GRWdQ9C5vN17FioarCAIlZb903RxxxxOsEHOW78/TS1d66ARSYfzLsGOwon1bBtDyE6o6aul0gesrUINKBF3EhAYiYzJQRO2m39r7aQ6XxEreGMNZkxRB6428tTaqdHiqkiu+lvYNjjZFWV5NHUKGrw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.34) smtp.rcpttodomain=intel.com smtp.mailfrom=nvidia.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=4NKtaOA4JM9ZzmJp9dPmnsw3f3NON7YxC6/tLpEsjdQ=; b=SvFs1zwPpjkGKWHr+lCcFGRLePlgLuXSusnDyl6lBzcNR1mwvI3GjMSnJ1ghcWeny6MnHtTORVnnPcTRvltZ2sklDot+sOE48XBEZx61MCENhMWxSsH9WwOD44M0skdjk1sCNgpIVAwK7kwQVEKiinlI3d/uZRVvPdOV8iS9gwA8M6Pqhst8zTHYLOPTbfBLzz/EHTA7MSEF+SdwxgLz9vqr5lzyWB1HmurRslM70a0HpfFW/WCp7SPIA8urK7y0eD9uWUQQDpYnz1xFwnhUuXR1eZiioPp2QreadE/j/ca97AT3QF7i5pkLGmfFiAmFAikUThhadqUkhrugu/D8Aw== Received: from BN6PR2001CA0035.namprd20.prod.outlook.com (2603:10b6:405:16::21) by BN8PR12MB3060.namprd12.prod.outlook.com (2603:10b6:408:4a::30) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4649.14; Tue, 2 Nov 2021 17:02:03 +0000 Received: from BN8NAM11FT044.eop-nam11.prod.protection.outlook.com (2603:10b6:405:16:cafe::89) by BN6PR2001CA0035.outlook.office365.com (2603:10b6:405:16::21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4649.15 via Frontend Transport; Tue, 2 Nov 2021 17:02:03 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.34) smtp.mailfrom=nvidia.com; intel.com; dkim=none (message not signed) header.d=none;intel.com; dmarc=pass action=none 
header.from=nvidia.com; Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates 216.228.112.34 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.34; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.34) by BN8NAM11FT044.mail.protection.outlook.com (10.13.177.219) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4649.14 via Frontend Transport; Tue, 2 Nov 2021 17:02:02 +0000 Received: from nvidia.com (172.20.187.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1497.18; Tue, 2 Nov 2021 17:01:54 +0000 From: Dmitry Kozlyuk To: CC: Ferruh Yigit , Ajit Khaparde , Somnath Kotur , Hyong Youb Kim , Nithin Dabilpuram , Kiran Kumar K , "Sunil Kumar Kori" , Satha Rao , "Rahul Lakkireddy" , Hemant Agrawal , Sachin Saxena , "Haiyue Wang" , John Daley , Gaetan Rivet , Ziyang Xuan , Xiaoyun Wang , Guoyang Zhou , "Min Hu (Connor)" , Yisen Zhuang , Lijun Ou , Beilei Xing , "Jingjing Wu" , Qiming Yang , Qi Zhang , Rosen Xu , Liron Himi , Jerin Jacob , Rasesh Mody , Devendra Singh Rawat , "Andrew Rybchenko" , Jasvinder Singh , Cristian Dumitrescu , Keith Wiles , "Jiawen Wu" , Jian Wang Date: Tue, 2 Nov 2021 19:01:32 +0200 Message-ID: <20211102170135.959380-4-dkozlyuk@nvidia.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20211102170135.959380-1-dkozlyuk@nvidia.com> References: <20211102135415.944050-1-dkozlyuk@nvidia.com> <20211102170135.959380-1-dkozlyuk@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.5] X-ClientProxiedBy: HQMAIL105.nvidia.com (172.20.187.12) To HQMAIL107.nvidia.com (172.20.187.13) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: a7cc9d27-aa49-4ffc-90c1-08d99e227e20 X-MS-TrafficTypeDiagnostic: BN8PR12MB3060: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:6108; X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: 
BCL:0; X-Microsoft-Antispam-Message-Info: av6YQk4DRkvEbwijheHycf98u7JfhHPn5bhEEa0gDrYIxvIO6PWG4EmWxfwQip6Le4DT+xA26Ifb+ksfM19w2GjXBBFJ/dIEq/lQqUTimit70xWTWZaq22WzxoO//W30IIndCkfMqfebYqFMLu+7bsYj5b4z5dwx4jw1+EovMZA4SLuGqWaawJCbl8+QGS0Ncs78GTtr/RBBRQzFW13xtfIgD+MjD0hqpxnGWSQh3JKxv2ukmkbaZsEgWxmfIOnLCycsr9ICD7hOLs1Yh/8v5vauNk2NjYzf+0ZH+yIYoBEK1mY+EBoGqirB/QW9KDJEv6VDchU7zlxUPqR8ZLCwy6Iigv82yKKbmVRFlZF7j7y0KOBia021hy1U9qnm+x3UvjfenFhc4M+HNxyYOcPXYpbHZvFyjwONfjtQMUCKVayc212SzZLb36CM3Ky26lh/YqUTtMS/1U7ry9TqtiOXehb19EVTyPKy2BArAezwVy/1Iy/PLg83KWLa0whK7Q+RMRU7ZqQmzTFJWskVHQvPGbq7HiX3UOPha1RFl8gydg4XtADsnwqXmHPi3rVgBnhZRZMoI5ZwNAtosrQLFoqRh+bYTPkA/aBi9EWn9DOzeXp2C5vhKFjly0E7/la+iwTvCvapt3hF8CJPOSMTmCmsuY2PRAkx/aR4FNNCmAjsTvrNWB/zspt1BJK6sjhF/1fUsJaym1id04GlKbsg5iLvSNlUceYJn30V5d4gIEalZ3k= X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; SFS:(4636009)(46966006)(36840700001)(7636003)(7696005)(6916009)(8936002)(30864003)(70586007)(1076003)(2616005)(5660300002)(47076005)(508600001)(107886003)(6286002)(4326008)(186003)(16526019)(426003)(26005)(86362001)(6666004)(36906005)(336012)(316002)(70206006)(83380400001)(356005)(82310400003)(54906003)(36756003)(36860700001)(8676002)(55016002)(2906002)(21314003); DIR:OUT; SFP:1101; X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Nov 2021 17:02:02.8967 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: a7cc9d27-aa49-4ffc-90c1-08d99e227e20 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.34]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT044.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR12MB3060 Subject: [dpdk-dev] [PATCH 
v6 3/6] net: advertise no support for keeping flow rules

When the RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP capability bit is zero, the
specified behavior is the same as it was before this bit was introduced.
Explicitly reset it in all PMDs supporting the rte_flow API in order to
attract the attention of maintainers, who should eventually choose
whether to advertise the new capability. It is already known that mlx4
and mlx5 will not support this capability. No similar action is taken
for RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP, because no PMD except
mlx5 supports indirect actions. Any PMD that starts doing so will have
to consider all relevant API anyway, including this capability.

Suggested-by: Ferruh Yigit
Signed-off-by: Dmitry Kozlyuk
Acked-by: Ajit Khaparde
Acked-by: Somnath Kotur
Acked-by: Hyong Youb Kim
---
 drivers/net/bnxt/bnxt_ethdev.c          | 1 +
 drivers/net/bnxt/bnxt_reps.c            | 1 +
 drivers/net/cnxk/cnxk_ethdev_ops.c      | 1 +
 drivers/net/cxgbe/cxgbe_ethdev.c        | 2 ++
 drivers/net/dpaa2/dpaa2_ethdev.c        | 1 +
 drivers/net/e1000/em_ethdev.c           | 2 ++
 drivers/net/e1000/igb_ethdev.c          | 1 +
 drivers/net/enic/enic_ethdev.c          | 1 +
 drivers/net/failsafe/failsafe_ops.c     | 1 +
 drivers/net/hinic/hinic_pmd_ethdev.c    | 2 ++
 drivers/net/hns3/hns3_ethdev.c          | 1 +
 drivers/net/hns3/hns3_ethdev_vf.c       | 1 +
 drivers/net/i40e/i40e_ethdev.c          | 1 +
 drivers/net/i40e/i40e_vf_representor.c  | 2 ++
 drivers/net/iavf/iavf_ethdev.c          | 1 +
 drivers/net/ice/ice_dcf_ethdev.c        | 1 +
 drivers/net/igc/igc_ethdev.c            | 1 +
 drivers/net/ipn3ke/ipn3ke_representor.c | 1 +
 drivers/net/mvpp2/mrvl_ethdev.c         | 2 ++
 drivers/net/octeontx2/otx2_ethdev_ops.c | 1 +
 drivers/net/qede/qede_ethdev.c          | 1 +
 drivers/net/sfc/sfc_ethdev.c            | 1 +
 drivers/net/softnic/rte_eth_softnic.c   | 1 +
 drivers/net/tap/rte_eth_tap.c           | 1 +
drivers/net/txgbe/txgbe_ethdev.c | 1 + drivers/net/txgbe/txgbe_ethdev_vf.c | 1 + 26 files changed, 31 insertions(+) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index c8dad8a7c5..257e6b0d6a 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -1000,6 +1000,7 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev, dev_info->speed_capa = bnxt_get_speed_capabilities(bp); dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; dev_info->default_rxconf = (struct rte_eth_rxconf) { .rx_thresh = { diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c index 92beea3558..19da24b41d 100644 --- a/drivers/net/bnxt/bnxt_reps.c +++ b/drivers/net/bnxt/bnxt_reps.c @@ -546,6 +546,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev, dev_info->max_tx_queues = max_rx_rings; dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp); dev_info->hash_key_size = 40; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; /* MTU specifics */ dev_info->min_mtu = RTE_ETHER_MIN_MTU; diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index 6746430265..62306b6cd6 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -68,6 +68,7 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) devinfo->speed_capa = dev->speed_capa; devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; return 0; } diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c index 4758321778..e7ea76180f 100644 --- a/drivers/net/cxgbe/cxgbe_ethdev.c +++ b/drivers/net/cxgbe/cxgbe_ethdev.c @@ -131,6 +131,8 @@ int cxgbe_dev_info_get(struct rte_eth_dev *eth_dev, device_info->max_vfs = 
adapter->params.arch.vfcount; device_info->max_vmdq_pools = 0; /* XXX: For now no support for VMDQ */ + device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; + device_info->rx_queue_offload_capa = 0UL; device_info->rx_offload_capa = CXGBE_RX_OFFLOADS; diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 73d17f7b3c..a3706439d5 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -254,6 +254,7 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | RTE_ETH_LINK_SPEED_10G; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; dev_info->max_hash_mac_addrs = 0; dev_info->max_vfs = 0; diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 18fea4e0ac..31c4870086 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -1101,6 +1101,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; + /* Preferred queue parameters */ dev_info->default_rxportconf.nb_queues = 1; dev_info->default_txportconf.nb_queues = 1; diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index ff06575f03..d0e2bc9814 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -2168,6 +2168,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->tx_queue_offload_capa = igb_get_tx_queue_offloads_capa(dev); dev_info->tx_offload_capa = igb_get_tx_port_offloads_capa(dev) | dev_info->tx_queue_offload_capa; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; switch (hw->mac.type) { case e1000_82575: diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c index c8bdaf1a8e..163be09809 100644 --- 
a/drivers/net/enic/enic_ethdev.c +++ b/drivers/net/enic/enic_ethdev.c @@ -469,6 +469,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev, device_info->rx_offload_capa = enic->rx_offload_capa; device_info->tx_offload_capa = enic->tx_offload_capa; device_info->tx_queue_offload_capa = enic->tx_queue_offload_capa; + device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; device_info->default_rxconf = (struct rte_eth_rxconf) { .rx_free_thresh = ENIC_DEFAULT_RX_FREE_THRESH }; diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c index 822883bc2f..55e21d635c 100644 --- a/drivers/net/failsafe/failsafe_ops.c +++ b/drivers/net/failsafe/failsafe_ops.c @@ -1227,6 +1227,7 @@ fs_dev_infos_get(struct rte_eth_dev *dev, infos->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + infos->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) { struct rte_eth_dev_info sub_info; diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c index 9cabd3e0c1..1853511c3b 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.c +++ b/drivers/net/hinic/hinic_pmd_ethdev.c @@ -751,6 +751,8 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS; + info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; + info->hash_key_size = HINIC_RSS_KEY_SIZE; info->reta_size = HINIC_RSS_INDIR_SIZE; info->flow_type_rss_offloads = HINIC_RSS_OFFLOAD_ALL; diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 56eca03833..03447c8d4a 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -2598,6 +2598,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info) if (hns3_dev_get_support(hw, INDEP_TXRX)) info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + 
info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; if (hns3_dev_get_support(hw, PTP)) info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP; diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 675db44e85..4a0d73fc29 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -699,6 +699,7 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info) if (hns3_dev_get_support(hw, INDEP_TXRX)) info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; info->rx_desc_lim = (struct rte_eth_desc_lim) { .nb_max = HNS3_MAX_RING_DESC, diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 62e374d19e..9ea5f303ff 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -3750,6 +3750,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t); diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c index 663c46b91d..7f8e81858e 100644 --- a/drivers/net/i40e/i40e_vf_representor.c +++ b/drivers/net/i40e/i40e_vf_representor.c @@ -35,6 +35,8 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev, /* get dev info for the vdev */ dev_info->device = ethdev->device; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; + dev_info->max_rx_queues = ethdev->data->nb_rx_queues; dev_info->max_tx_queues = ethdev->data->nb_tx_queues; diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 8ae15652cd..7bdf09b199 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -1057,6 +1057,7 @@ 
iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->reta_size = vf->vf_res->rss_lut_size; dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL; dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP | RTE_ETH_RX_OFFLOAD_QINQ_STRIP | diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 4d9484e994..d1e6757641 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -663,6 +663,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev, dev_info->hash_key_size = hw->vf_res->rss_key_size; dev_info->reta_size = hw->vf_res->rss_lut_size; dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP | diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index 8189ad412a..3e2bf14b94 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -1477,6 +1477,7 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. 
*/ dev_info->max_rx_pktlen = MAX_RX_JUMBO_FRAME_SIZE; dev_info->max_mac_addrs = hw->mac.rar_entry_count; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL; dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL; dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP; diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c index 1708858575..de325c7d29 100644 --- a/drivers/net/ipn3ke/ipn3ke_representor.c +++ b/drivers/net/ipn3ke/ipn3ke_representor.c @@ -96,6 +96,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev, dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; dev_info->switch_info.name = ethdev->device->name; dev_info->switch_info.domain_id = rpst->switch_domain_id; diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c index 25f213bda5..9c7fe13f7f 100644 --- a/drivers/net/mvpp2/mrvl_ethdev.c +++ b/drivers/net/mvpp2/mrvl_ethdev.c @@ -1709,6 +1709,8 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev, { struct mrvl_priv *priv = dev->data->dev_private; + info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; + info->speed_capa = RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G | diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index d5caaa326a..48781514c3 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -583,6 +583,7 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; return 0; } diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c index 8ca00e7f6c..3e9aaeecd3 100644 --- a/drivers/net/qede/qede_ethdev.c +++ 
b/drivers/net/qede/qede_ethdev.c @@ -1367,6 +1367,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev, dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN; dev_info->rx_desc_lim = qede_rx_desc_lim; dev_info->tx_desc_lim = qede_tx_desc_lim; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; if (IS_PF(edev)) dev_info->max_rx_queues = (uint16_t)RTE_MIN( diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c index 833d833a04..6b0a7e6b0c 100644 --- a/drivers/net/sfc/sfc_ethdev.c +++ b/drivers/net/sfc/sfc_ethdev.c @@ -186,6 +186,7 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; if (mae->status == SFC_MAE_STATUS_SUPPORTED || mae->status == SFC_MAE_STATUS_ADMIN) { diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c index 3ef33818a9..8c098cad5b 100644 --- a/drivers/net/softnic/rte_eth_softnic.c +++ b/drivers/net/softnic/rte_eth_softnic.c @@ -93,6 +93,7 @@ pmd_dev_infos_get(struct rte_eth_dev *dev __rte_unused, dev_info->max_rx_pktlen = UINT32_MAX; dev_info->max_rx_queues = UINT16_MAX; dev_info->max_tx_queues = UINT16_MAX; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; return 0; } diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index a9a7658147..37ac18f951 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -1006,6 +1006,7 @@ tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) * functions together and not in partial combinations */ dev_info->flow_type_rss_offloads = ~TAP_RSS_HF_MASK; + dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP; return 0; } diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index fde9914e49..5c31ba5358 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ 
b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2597,6 +2597,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = pci_dev->max_vfs;
 	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 4dda55b0c2..67ae69dec3 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -487,6 +487,7 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);

From patchwork Tue Nov 2 17:01:33 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 103524
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dmitry Kozlyuk
To:
CC: , Matan Azrad , Viacheslav Ovsiienko
Date: Tue, 2 Nov 2021 19:01:33 +0200
Message-ID: <20211102170135.959380-5-dkozlyuk@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211102170135.959380-1-dkozlyuk@nvidia.com>
References: <20211102135415.944050-1-dkozlyuk@nvidia.com>
 <20211102170135.959380-1-dkozlyuk@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 4/6] net/mlx5: discover max flow priority
 using DevX

The maximum available flow priority was discovered using the Verbs API
regardless of the selected flow engine. This required some Verbs
objects to be initialized in order to use the DevX engine. Make
priority discovery an engine method and implement it for DevX using
its API.

Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_flow.c       |  98 +++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h       |   4 ++
 drivers/net/mlx5/mlx5_flow_dv.c    | 103 +++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow_verbs.c |  74 +++------------------
 4 files changed, 216 insertions(+), 63 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2385a0b550..3d8dd974ce 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -9700,3 +9700,101 @@ mlx5_flow_expand_rss_adjust_node(const struct rte_flow_item *pattern,
 	}
 	return node;
 }
+
+/* Map of Verbs to Flow priority with 8 Verbs priorities.
*/ +static const uint32_t priority_map_3[][MLX5_PRIORITY_MAP_MAX] = { + { 0, 1, 2 }, { 2, 3, 4 }, { 5, 6, 7 }, +}; + +/* Map of Verbs to Flow priority with 16 Verbs priorities. */ +static const uint32_t priority_map_5[][MLX5_PRIORITY_MAP_MAX] = { + { 0, 1, 2 }, { 3, 4, 5 }, { 6, 7, 8 }, + { 9, 10, 11 }, { 12, 13, 14 }, +}; + +/** + * Discover the number of available flow priorities. + * + * @param dev + * Ethernet device. + * + * @return + * On success, number of available flow priorities. + * On failure, a negative errno-style code and rte_errno is set. + */ +int +mlx5_flow_discover_priorities(struct rte_eth_dev *dev) +{ + static const uint16_t vprio[] = {8, 16}; + const struct mlx5_priv *priv = dev->data->dev_private; + const struct mlx5_flow_driver_ops *fops; + enum mlx5_flow_drv_type type; + int ret; + + type = mlx5_flow_os_get_type(); + if (type == MLX5_FLOW_TYPE_MAX) { + type = MLX5_FLOW_TYPE_VERBS; + if (priv->sh->devx && priv->config.dv_flow_en) + type = MLX5_FLOW_TYPE_DV; + } + fops = flow_get_drv_ops(type); + if (fops->discover_priorities == NULL) { + DRV_LOG(ERR, "Priority discovery not supported"); + rte_errno = ENOTSUP; + return -rte_errno; + } + ret = fops->discover_priorities(dev, vprio, RTE_DIM(vprio)); + if (ret < 0) + return ret; + switch (ret) { + case 8: + ret = RTE_DIM(priority_map_3); + break; + case 16: + ret = RTE_DIM(priority_map_5); + break; + default: + rte_errno = ENOTSUP; + DRV_LOG(ERR, + "port %u maximum priority: %d expected 8/16", + dev->data->port_id, ret); + return -rte_errno; + } + DRV_LOG(INFO, "port %u supported flow priorities:" + " 0-%d for ingress or egress root table," + " 0-%d for non-root table or transfer root table.", + dev->data->port_id, ret - 2, + MLX5_NON_ROOT_FLOW_MAX_PRIO - 1); + return ret; +} + +/** + * Adjust flow priority based on the highest layer and the request priority. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] priority + * The rule base priority. 
+ * @param[in] subpriority + * The priority based on the items. + * + * @return + * The new priority. + */ +uint32_t +mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority, + uint32_t subpriority) +{ + uint32_t res = 0; + struct mlx5_priv *priv = dev->data->dev_private; + + switch (priv->sh->flow_max_priority) { + case RTE_DIM(priority_map_3): + res = priority_map_3[priority][subpriority]; + break; + case RTE_DIM(priority_map_5): + res = priority_map_5[priority][subpriority]; + break; + } + return res; +} diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 5509c28f01..8b83fa6f67 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1232,6 +1232,9 @@ typedef int (*mlx5_flow_create_def_policy_t) (struct rte_eth_dev *dev); typedef void (*mlx5_flow_destroy_def_policy_t) (struct rte_eth_dev *dev); +typedef int (*mlx5_flow_discover_priorities_t) + (struct rte_eth_dev *dev, + const uint16_t *vprio, int vprio_n); struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; @@ -1266,6 +1269,7 @@ struct mlx5_flow_driver_ops { mlx5_flow_action_update_t action_update; mlx5_flow_action_query_t action_query; mlx5_flow_sync_domain_t sync_domain; + mlx5_flow_discover_priorities_t discover_priorities; }; /* mlx5_flow.c */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 8962d26c75..aaf96fc297 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -17932,6 +17932,108 @@ flow_dv_sync_domain(struct rte_eth_dev *dev, uint32_t domains, uint32_t flags) return 0; } +/** + * Discover the number of available flow priorities + * by trying to create a flow with the highest priority value + * for each possible number. + * + * @param[in] dev + * Ethernet device. + * @param[in] vprio + * List of possible number of available priorities. + * @param[in] vprio_n + * Size of @p vprio array. + * @return + * On success, number of available flow priorities. 
+ * On failure, a negative errno-style code and rte_errno is set. + */ +static int +flow_dv_discover_priorities(struct rte_eth_dev *dev, + const uint16_t *vprio, int vprio_n) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_indexed_pool *pool = priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW]; + struct rte_flow_item_eth eth; + struct rte_flow_item item = { + .type = RTE_FLOW_ITEM_TYPE_ETH, + .spec = ð, + .mask = ð, + }; + struct mlx5_flow_dv_matcher matcher = { + .mask = { + .size = sizeof(matcher.mask.buf), + }, + }; + union mlx5_flow_tbl_key tbl_key; + struct mlx5_flow flow; + void *action; + struct rte_flow_error error; + uint8_t misc_mask; + int i, err, ret = -ENOTSUP; + + /* + * Prepare a flow with a catch-all pattern and a drop action. + * Use drop queue, because shared drop action may be unavailable. + */ + action = priv->drop_queue.hrxq->action; + if (action == NULL) { + DRV_LOG(ERR, "Priority discovery requires a drop action"); + rte_errno = ENOTSUP; + return -rte_errno; + } + memset(&flow, 0, sizeof(flow)); + flow.handle = mlx5_ipool_zmalloc(pool, &flow.handle_idx); + if (flow.handle == NULL) { + DRV_LOG(ERR, "Cannot create flow handle"); + rte_errno = ENOMEM; + return -rte_errno; + } + flow.ingress = true; + flow.dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param); + flow.dv.actions[0] = action; + flow.dv.actions_n = 1; + memset(ð, 0, sizeof(eth)); + flow_dv_translate_item_eth(matcher.mask.buf, flow.dv.value.buf, + &item, /* inner */ false, /* group */ 0); + matcher.crc = rte_raw_cksum(matcher.mask.buf, matcher.mask.size); + for (i = 0; i < vprio_n; i++) { + /* Configure the next proposed maximum priority. */ + matcher.priority = vprio[i] - 1; + memset(&tbl_key, 0, sizeof(tbl_key)); + err = flow_dv_matcher_register(dev, &matcher, &tbl_key, &flow, + /* tunnel */ NULL, + /* group */ 0, + &error); + if (err != 0) { + /* This action is pure SW and must always succeed. 
*/ + DRV_LOG(ERR, "Cannot register matcher"); + ret = -rte_errno; + break; + } + /* Try to apply the flow to HW. */ + misc_mask = flow_dv_matcher_enable(flow.dv.value.buf); + __flow_dv_adjust_buf_size(&flow.dv.value.size, misc_mask); + err = mlx5_flow_os_create_flow + (flow.handle->dvh.matcher->matcher_object, + (void *)&flow.dv.value, flow.dv.actions_n, + flow.dv.actions, &flow.handle->drv_flow); + if (err == 0) { + claim_zero(mlx5_flow_os_destroy_flow + (flow.handle->drv_flow)); + flow.handle->drv_flow = NULL; + } + claim_zero(flow_dv_matcher_release(dev, flow.handle)); + if (err != 0) + break; + ret = vprio[i]; + } + mlx5_ipool_free(pool, flow.handle_idx); + /* Set rte_errno if no expected priority value matched. */ + if (ret < 0) + rte_errno = -ret; + return ret; +} + const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = { .validate = flow_dv_validate, .prepare = flow_dv_prepare, @@ -17965,6 +18067,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = { .action_update = flow_dv_action_update, .action_query = flow_dv_action_query, .sync_domain = flow_dv_sync_domain, + .discover_priorities = flow_dv_discover_priorities, }; #endif /* HAVE_IBV_FLOW_DV_SUPPORT */ diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c index 0a89a136a2..29cd694752 100644 --- a/drivers/net/mlx5/mlx5_flow_verbs.c +++ b/drivers/net/mlx5/mlx5_flow_verbs.c @@ -28,17 +28,6 @@ #define VERBS_SPEC_INNER(item_flags) \ (!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ? IBV_FLOW_SPEC_INNER : 0) -/* Map of Verbs to Flow priority with 8 Verbs priorities. */ -static const uint32_t priority_map_3[][MLX5_PRIORITY_MAP_MAX] = { - { 0, 1, 2 }, { 2, 3, 4 }, { 5, 6, 7 }, -}; - -/* Map of Verbs to Flow priority with 16 Verbs priorities. */ -static const uint32_t priority_map_5[][MLX5_PRIORITY_MAP_MAX] = { - { 0, 1, 2 }, { 3, 4, 5 }, { 6, 7, 8 }, - { 9, 10, 11 }, { 12, 13, 14 }, -}; - /* Verbs specification header. 
*/ struct ibv_spec_header { enum ibv_flow_spec_type type; @@ -50,13 +39,17 @@ struct ibv_spec_header { * * @param[in] dev * Pointer to the Ethernet device structure. - * + * @param[in] vprio + * Expected result variants. + * @param[in] vprio_n + * Number of entries in @p vprio array. * @return - * number of supported flow priority on success, a negative errno + * Number of supported flow priority on success, a negative errno * value otherwise and rte_errno is set. */ -int -mlx5_flow_discover_priorities(struct rte_eth_dev *dev) +static int +flow_verbs_discover_priorities(struct rte_eth_dev *dev, + const uint16_t *vprio, int vprio_n) { struct mlx5_priv *priv = dev->data->dev_private; struct { @@ -79,20 +72,19 @@ mlx5_flow_discover_priorities(struct rte_eth_dev *dev) }; struct ibv_flow *flow; struct mlx5_hrxq *drop = priv->drop_queue.hrxq; - uint16_t vprio[] = { 8, 16 }; int i; int priority = 0; #if defined(HAVE_MLX5DV_DR_DEVX_PORT) || defined(HAVE_MLX5DV_DR_DEVX_PORT_V35) /* If DevX supported, driver must support 16 verbs flow priorities. 
*/ - priority = RTE_DIM(priority_map_5); + priority = 16; goto out; #endif if (!drop->qp) { rte_errno = ENOTSUP; return -rte_errno; } - for (i = 0; i != RTE_DIM(vprio); i++) { + for (i = 0; i != vprio_n; i++) { flow_attr.attr.priority = vprio[i] - 1; flow = mlx5_glue->create_flow(drop->qp, &flow_attr.attr); if (!flow) @@ -100,20 +92,6 @@ mlx5_flow_discover_priorities(struct rte_eth_dev *dev) claim_zero(mlx5_glue->destroy_flow(flow)); priority = vprio[i]; } - switch (priority) { - case 8: - priority = RTE_DIM(priority_map_3); - break; - case 16: - priority = RTE_DIM(priority_map_5); - break; - default: - rte_errno = ENOTSUP; - DRV_LOG(ERR, - "port %u verbs maximum priority: %d expected 8/16", - dev->data->port_id, priority); - return -rte_errno; - } #if defined(HAVE_MLX5DV_DR_DEVX_PORT) || defined(HAVE_MLX5DV_DR_DEVX_PORT_V35) out: #endif @@ -125,37 +103,6 @@ mlx5_flow_discover_priorities(struct rte_eth_dev *dev) return priority; } -/** - * Adjust flow priority based on the highest layer and the request priority. - * - * @param[in] dev - * Pointer to the Ethernet device structure. - * @param[in] priority - * The rule base priority. - * @param[in] subpriority - * The priority based on the items. - * - * @return - * The new priority. - */ -uint32_t -mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority, - uint32_t subpriority) -{ - uint32_t res = 0; - struct mlx5_priv *priv = dev->data->dev_private; - - switch (priv->sh->flow_max_priority) { - case RTE_DIM(priority_map_3): - res = priority_map_3[priority][subpriority]; - break; - case RTE_DIM(priority_map_5): - res = priority_map_5[priority][subpriority]; - break; - } - return res; -} - /** * Get Verbs flow counter by index. 
* @@ -2095,4 +2042,5 @@ const struct mlx5_flow_driver_ops mlx5_flow_verbs_drv_ops = { .destroy = flow_verbs_destroy, .query = flow_verbs_query, .sync_domain = flow_verbs_sync_domain, + .discover_priorities = flow_verbs_discover_priorities, };

From patchwork Tue Nov 2 17:01:34 2021
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 103525
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dmitry Kozlyuk
CC: Matan Azrad, Viacheslav Ovsiienko
Date: Tue, 2 Nov 2021 19:01:34 +0200
Message-ID: <20211102170135.959380-6-dkozlyuk@nvidia.com>
In-Reply-To: <20211102170135.959380-1-dkozlyuk@nvidia.com>
References: <20211102135415.944050-1-dkozlyuk@nvidia.com> <20211102170135.959380-1-dkozlyuk@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 5/6] net/mlx5: create drop queue using DevX
Drop queue creation and destruction were not implemented for DevX flow engine and Verbs engine methods were used as a workaround. Implement these methods for DevX so that there is a valid queue ID that can be used regardless of queue configuration via API. Cc: stable@dpdk.org Signed-off-by: Dmitry Kozlyuk Acked-by: Matan Azrad --- drivers/net/mlx5/linux/mlx5_os.c | 4 - drivers/net/mlx5/mlx5_devx.c | 211 ++++++++++++++++++++++++++----- 2 files changed, 180 insertions(+), 35 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index f31f1e96c6..dd4fc0c716 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1690,10 +1690,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, } if (sh->devx && config->dv_flow_en && config->dest_tir) { priv->obj_ops = devx_obj_ops; - priv->obj_ops.drop_action_create = - ibv_obj_ops.drop_action_create; - priv->obj_ops.drop_action_destroy = - ibv_obj_ops.drop_action_destroy; mlx5_queue_counter_id_prepare(eth_dev); priv->obj_ops.lb_dummy_queue_create = mlx5_rxq_ibv_obj_dummy_lb_create; diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 7ed774e804..424f77be79 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -226,18 +226,18 @@ mlx5_rx_devx_get_event(struct mlx5_rxq_obj *rxq_obj) * * @param dev * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq_data + * RX queue data. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, + struct mlx5_rxq_data *rxq_data) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_common_device *cdev = priv->sh->cdev; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; struct mlx5_rxq_ctrl *rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); struct mlx5_devx_create_rq_attr rq_attr = { 0 }; @@ -290,20 +290,20 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx) * * @param dev * Pointer to Ethernet device. - * @param idx - * Queue index in DPDK Rx queue array. + * @param rxq_data + * RX queue data. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, uint16_t idx) +mlx5_rxq_create_devx_cq_resources(struct rte_eth_dev *dev, + struct mlx5_rxq_data *rxq_data) { struct mlx5_devx_cq *cq_obj = 0; struct mlx5_devx_cq_attr cq_attr = { 0 }; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[idx]; struct mlx5_rxq_ctrl *rxq_ctrl = container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); unsigned int cqe_n = mlx5_rxq_cqe_num(rxq_data); @@ -498,13 +498,13 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) tmpl->fd = mlx5_os_get_devx_channel_fd(tmpl->devx_channel); } /* Create CQ using DevX API. */ - ret = mlx5_rxq_create_devx_cq_resources(dev, idx); + ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data); if (ret) { DRV_LOG(ERR, "Failed to create CQ."); goto error; } /* Create RQ using DevX API. 
*/ - ret = mlx5_rxq_create_devx_rq_resources(dev, idx); + ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data); if (ret) { DRV_LOG(ERR, "Port %u Rx queue %u RQ creation failure.", dev->data->port_id, idx); @@ -537,6 +537,11 @@ mlx5_rxq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx) * Pointer to Ethernet device. * @param log_n * Log of number of queues in the array. + * @param queues + * List of RX queue indices or NULL, in which case + * the attribute will be filled by drop queue ID. + * @param queues_n + * Size of @p queues array or 0 if it is NULL. * @param ind_tbl * DevX indirection table object. * @@ -564,6 +569,11 @@ mlx5_devx_ind_table_create_rqt_attr(struct rte_eth_dev *dev, } rqt_attr->rqt_max_size = priv->config.ind_table_max_size; rqt_attr->rqt_actual_size = rqt_n; + if (queues == NULL) { + for (i = 0; i < rqt_n; i++) + rqt_attr->rq_list[i] = priv->drop_queue.rxq->rq->id; + return rqt_attr; + } for (i = 0; i != queues_n; ++i) { struct mlx5_rxq_data *rxq = (*priv->rxqs)[queues[i]]; struct mlx5_rxq_ctrl *rxq_ctrl = @@ -596,11 +606,12 @@ mlx5_devx_ind_table_new(struct rte_eth_dev *dev, const unsigned int log_n, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_devx_rqt_attr *rqt_attr = NULL; + const uint16_t *queues = dev->data->dev_started ? ind_tbl->queues : + NULL; MLX5_ASSERT(ind_tbl); - rqt_attr = mlx5_devx_ind_table_create_rqt_attr(dev, log_n, - ind_tbl->queues, - ind_tbl->queues_n); + rqt_attr = mlx5_devx_ind_table_create_rqt_attr(dev, log_n, queues, + ind_tbl->queues_n); if (!rqt_attr) return -rte_errno; ind_tbl->rqt = mlx5_devx_cmd_create_rqt(priv->sh->cdev->ctx, rqt_attr); @@ -671,7 +682,8 @@ mlx5_devx_ind_table_destroy(struct mlx5_ind_table_obj *ind_tbl) * @param[in] hash_fields * Verbs protocol hash field to make the RSS on. * @param[in] ind_tbl - * Indirection table for TIR. + * Indirection table for TIR. If table queues array is NULL, + * a TIR for drop queue is assumed. * @param[in] tunnel * Tunnel type. 
* @param[out] tir_attr @@ -687,19 +699,27 @@ mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, int tunnel, struct mlx5_devx_tir_attr *tir_attr) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); - enum mlx5_rxq_type rxq_obj_type = rxq_ctrl->type; + enum mlx5_rxq_type rxq_obj_type; bool lro = true; uint32_t i; - /* Enable TIR LRO only if all the queues were configured for. */ - for (i = 0; i < ind_tbl->queues_n; ++i) { - if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) { - lro = false; - break; + /* NULL queues designate drop queue. */ + if (ind_tbl->queues != NULL) { + struct mlx5_rxq_data *rxq_data = + (*priv->rxqs)[ind_tbl->queues[0]]; + struct mlx5_rxq_ctrl *rxq_ctrl = + container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + rxq_obj_type = rxq_ctrl->type; + + /* Enable TIR LRO only if all the queues were configured for. */ + for (i = 0; i < ind_tbl->queues_n; ++i) { + if (!(*priv->rxqs)[ind_tbl->queues[i]]->lro) { + lro = false; + break; + } } + } else { + rxq_obj_type = priv->drop_queue.rxq->rxq_ctrl->type; } memset(tir_attr, 0, sizeof(*tir_attr)); tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT; @@ -858,7 +878,7 @@ mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, } /** - * Create a DevX drop action for Rx Hash queue. + * Create a DevX drop Rx queue. * * @param dev * Pointer to Ethernet device. @@ -867,14 +887,99 @@ mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -mlx5_devx_drop_action_create(struct rte_eth_dev *dev) +mlx5_rxq_devx_obj_drop_create(struct rte_eth_dev *dev) { - (void)dev; - DRV_LOG(ERR, "DevX drop action is not supported yet."); - rte_errno = ENOTSUP; + struct mlx5_priv *priv = dev->data->dev_private; + int socket_id = dev->device->numa_node; + struct mlx5_rxq_ctrl *rxq_ctrl; + struct mlx5_rxq_data *rxq_data; + struct mlx5_rxq_obj *rxq = NULL; + int ret; + + /* + * Initialize dummy control structures. + * They are required to hold pointers for cleanup + * and are only accessible via drop queue DevX objects. + */ + rxq_ctrl = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq_ctrl), + 0, socket_id); + if (rxq_ctrl == NULL) { + DRV_LOG(ERR, "Port %u could not allocate drop queue control", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } + rxq = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*rxq), 0, socket_id); + if (rxq == NULL) { + DRV_LOG(ERR, "Port %u could not allocate drop queue object", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } + rxq->rxq_ctrl = rxq_ctrl; + rxq_ctrl->type = MLX5_RXQ_TYPE_STANDARD; + rxq_ctrl->priv = priv; + rxq_ctrl->obj = rxq; + rxq_data = &rxq_ctrl->rxq; + /* Create CQ using DevX API. */ + ret = mlx5_rxq_create_devx_cq_resources(dev, rxq_data); + if (ret != 0) { + DRV_LOG(ERR, "Port %u drop queue CQ creation failed.", + dev->data->port_id); + goto error; + } + /* Create RQ using DevX API. */ + ret = mlx5_rxq_create_devx_rq_resources(dev, rxq_data); + if (ret != 0) { + DRV_LOG(ERR, "Port %u drop queue RQ creation failed.", + dev->data->port_id); + rte_errno = ENOMEM; + goto error; + } + /* Change queue state to ready. */ + ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RST2RDY); + if (ret != 0) + goto error; + /* Initialize drop queue. */ + priv->drop_queue.rxq = rxq; + return 0; +error: + ret = rte_errno; /* Save rte_errno before cleanup. 
*/ + if (rxq != NULL) { + if (rxq->rq_obj.rq != NULL) + mlx5_devx_rq_destroy(&rxq->rq_obj); + if (rxq->cq_obj.cq != NULL) + mlx5_devx_cq_destroy(&rxq->cq_obj); + if (rxq->devx_channel) + mlx5_os_devx_destroy_event_channel + (rxq->devx_channel); + mlx5_free(rxq); + } + if (rxq_ctrl != NULL) + mlx5_free(rxq_ctrl); + rte_errno = ret; /* Restore rte_errno. */ return -rte_errno; } +/** + * Release drop Rx queue resources. + * + * @param dev + * Pointer to Ethernet device. + */ +static void +mlx5_rxq_devx_obj_drop_release(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_rxq_obj *rxq = priv->drop_queue.rxq; + struct mlx5_rxq_ctrl *rxq_ctrl = rxq->rxq_ctrl; + + mlx5_rxq_devx_obj_release(rxq); + mlx5_free(rxq); + mlx5_free(rxq_ctrl); + priv->drop_queue.rxq = NULL; +} + /** * Release a drop hash Rx queue. * @@ -884,9 +989,53 @@ mlx5_devx_drop_action_create(struct rte_eth_dev *dev) static void mlx5_devx_drop_action_destroy(struct rte_eth_dev *dev) { - (void)dev; - DRV_LOG(ERR, "DevX drop action is not supported yet."); - rte_errno = ENOTSUP; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq; + + if (hrxq->tir != NULL) + mlx5_devx_tir_destroy(hrxq); + if (hrxq->ind_table->ind_table != NULL) + mlx5_devx_ind_table_destroy(hrxq->ind_table); + if (priv->drop_queue.rxq->rq != NULL) + mlx5_rxq_devx_obj_drop_release(dev); +} + +/** + * Create a DevX drop action for Rx Hash queue. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +mlx5_devx_drop_action_create(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq; + int ret; + + ret = mlx5_rxq_devx_obj_drop_create(dev); + if (ret != 0) { + DRV_LOG(ERR, "Cannot create drop RX queue"); + return ret; + } + /* hrxq->ind_table queues are NULL, drop RX queue ID will be used */ + ret = mlx5_devx_ind_table_new(dev, 0, hrxq->ind_table); + if (ret != 0) { + DRV_LOG(ERR, "Cannot create drop hash RX queue indirection table"); + goto error; + } + ret = mlx5_devx_hrxq_new(dev, hrxq, /* tunnel */ false); + if (ret != 0) { + DRV_LOG(ERR, "Cannot create drop hash RX queue"); + goto error; + } + return 0; +error: + mlx5_devx_drop_action_destroy(dev); + return ret; } /**

From patchwork Tue Nov 2 17:01:35 2021
X-Patchwork-Submitter: Dmitry Kozlyuk
X-Patchwork-Id: 103526
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Dmitry Kozlyuk
CC: Matan Azrad, Viacheslav Ovsiienko
Date: Tue, 2 Nov 2021 19:01:35 +0200
Message-ID: <20211102170135.959380-7-dkozlyuk@nvidia.com>
In-Reply-To: <20211102170135.959380-1-dkozlyuk@nvidia.com>
References: <20211102135415.944050-1-dkozlyuk@nvidia.com> <20211102170135.959380-1-dkozlyuk@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 6/6] net/mlx5: preserve indirect actions on restart

MLX5 PMD uses reference counting to manage RX queue resources. After port stop shared RSS actions kept references to RX queues, preventing resource release. As a result, internal PMD mempool for such queues had been exhausted after a number of port restarts. Diagnostic message from rte_eth_dev_start():
    Rx queue allocation failed: Cannot allocate memory
Dereference RX queues used by indirect actions on port stop (detach) and restore references on port start (attach) in order to allow RX queue resource release, but keep indirect RSS across the port restart. Replace queue IDs in HW by drop queue ID on detach and restore actual queue IDs on attach. When the port is stopped, create indirect RSS in the detached state. As a result, MLX5 PMD is able to keep all its indirect actions across port restart. Advertise this capability. 
Fixes: 4b61b8774be9 ("ethdev: introduce indirect flow action") Cc: bingz@nvidia.com Cc: stable@dpdk.org Signed-off-by: Dmitry Kozlyuk Acked-by: Matan Azrad --- drivers/net/mlx5/mlx5_ethdev.c | 1 + drivers/net/mlx5/mlx5_flow.c | 194 ++++++++++++++++++++++++++++---- drivers/net/mlx5/mlx5_flow.h | 2 + drivers/net/mlx5/mlx5_rx.h | 4 + drivers/net/mlx5/mlx5_rxq.c | 99 ++++++++++++++-- drivers/net/mlx5/mlx5_trigger.c | 10 ++ 6 files changed, 276 insertions(+), 34 deletions(-) diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index f2b78c3cc6..81fa8845bb 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -321,6 +321,7 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) info->rx_offload_capa = (mlx5_get_rx_port_offloads() | info->rx_queue_offload_capa); info->tx_offload_capa = mlx5_get_tx_port_offloads(dev); + info->dev_capa = RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP; info->if_index = mlx5_ifindex(dev); info->reta_size = priv->reta_idx_n ? priv->reta_idx_n : config->ind_table_max_size; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 3d8dd974ce..9904bc5863 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1594,6 +1594,58 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, return 0; } +/** + * Validate queue numbers for device RSS. + * + * @param[in] dev + * Configured device. + * @param[in] queues + * Array of queue numbers. + * @param[in] queues_n + * Size of the @p queues array. + * @param[out] error + * On error, filled with a textual error description. + * @param[out] queue + * On error, filled with an offending queue index in @p queues array. + * + * @return + * 0 on success, a negative errno code on error. 
+ */
+static int
+mlx5_validate_rss_queues(const struct rte_eth_dev *dev,
+			 const uint16_t *queues, uint32_t queues_n,
+			 const char **error, uint32_t *queue_idx)
+{
+	const struct mlx5_priv *priv = dev->data->dev_private;
+	enum mlx5_rxq_type rxq_type = MLX5_RXQ_TYPE_UNDEFINED;
+	uint32_t i;
+
+	for (i = 0; i != queues_n; ++i) {
+		struct mlx5_rxq_ctrl *rxq_ctrl;
+
+		if (queues[i] >= priv->rxqs_n) {
+			*error = "queue index out of range";
+			*queue_idx = i;
+			return -EINVAL;
+		}
+		if (!(*priv->rxqs)[queues[i]]) {
+			*error = "queue is not configured";
+			*queue_idx = i;
+			return -EINVAL;
+		}
+		rxq_ctrl = container_of((*priv->rxqs)[queues[i]],
+					struct mlx5_rxq_ctrl, rxq);
+		if (i == 0)
+			rxq_type = rxq_ctrl->type;
+		if (rxq_type != rxq_ctrl->type) {
+			*error = "combining hairpin and regular RSS queues is not supported";
+			*queue_idx = i;
+			return -ENOTSUP;
+		}
+	}
+	return 0;
+}
+
 /*
  * Validate the rss action.
  *
@@ -1614,8 +1666,9 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_action_rss *rss = action->conf;
-	enum mlx5_rxq_type rxq_type = MLX5_RXQ_TYPE_UNDEFINED;
-	unsigned int i;
+	int ret;
+	const char *message;
+	uint32_t queue_idx;
 
 	if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
 	    rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ)
@@ -1679,27 +1732,12 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 					  NULL, "No queues configured");
-	for (i = 0; i != rss->queue_num; ++i) {
-		struct mlx5_rxq_ctrl *rxq_ctrl;
-
-		if (rss->queue[i] >= priv->rxqs_n)
-			return rte_flow_error_set
-				(error, EINVAL,
-				 RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-				 &rss->queue[i], "queue index out of range");
-		if (!(*priv->rxqs)[rss->queue[i]])
-			return rte_flow_error_set
-				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-				 &rss->queue[i], "queue is not configured");
-		rxq_ctrl = container_of((*priv->rxqs)[rss->queue[i]],
-					struct mlx5_rxq_ctrl, rxq);
-		if (i == 0)
-			rxq_type = rxq_ctrl->type;
-		if (rxq_type != rxq_ctrl->type)
-			return rte_flow_error_set
-				(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-				 &rss->queue[i],
-				 "combining hairpin and regular RSS queues is not supported");
+	ret = mlx5_validate_rss_queues(dev, rss->queue, rss->queue_num,
+				       &message, &queue_idx);
+	if (ret != 0) {
+		return rte_flow_error_set(error, -ret,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &rss->queue[queue_idx], message);
 	}
 	return 0;
 }
@@ -8786,6 +8824,116 @@ mlx5_action_handle_flush(struct rte_eth_dev *dev)
 	return ret;
 }
 
+/**
+ * Validate existing indirect actions against current device configuration
+ * and attach them to device resources.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_action_handle_attach(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indexed_pool *ipool =
+			priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS];
+	struct mlx5_shared_action_rss *shared_rss, *shared_rss_last;
+	int ret = 0;
+	uint32_t idx;
+
+	ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) {
+		struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl;
+		const char *message;
+		uint32_t queue_idx;
+
+		ret = mlx5_validate_rss_queues(dev, ind_tbl->queues,
+					       ind_tbl->queues_n,
+					       &message, &queue_idx);
+		if (ret != 0) {
+			DRV_LOG(ERR, "Port %u cannot use queue %u in RSS: %s",
+				dev->data->port_id, ind_tbl->queues[queue_idx],
+				message);
+			break;
+		}
+	}
+	if (ret != 0)
+		return ret;
+	ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) {
+		struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl;
+
+		ret = mlx5_ind_table_obj_attach(dev, ind_tbl);
+		if (ret != 0) {
+			DRV_LOG(ERR, "Port %u could not attach "
+				"indirection table obj %p",
+				dev->data->port_id, (void *)ind_tbl);
+			goto error;
+		}
+	}
+	return 0;
+error:
+	shared_rss_last = shared_rss;
+	ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) {
+		struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl;
+
+		if (shared_rss == shared_rss_last)
+			break;
+		if (mlx5_ind_table_obj_detach(dev, ind_tbl) != 0)
+			DRV_LOG(CRIT, "Port %u could not detach "
+				"indirection table obj %p on rollback",
+				dev->data->port_id, (void *)ind_tbl);
+	}
+	return ret;
+}
+
+/**
+ * Detach indirect actions of the device from its resources.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_action_handle_detach(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_indexed_pool *ipool =
+			priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS];
+	struct mlx5_shared_action_rss *shared_rss, *shared_rss_last;
+	int ret = 0;
+	uint32_t idx;
+
+	ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) {
+		struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl;
+
+		ret = mlx5_ind_table_obj_detach(dev, ind_tbl);
+		if (ret != 0) {
+			DRV_LOG(ERR, "Port %u could not detach "
+				"indirection table obj %p",
+				dev->data->port_id, (void *)ind_tbl);
+			goto error;
+		}
+	}
+	return 0;
+error:
+	shared_rss_last = shared_rss;
+	ILIST_FOREACH(ipool, priv->rss_shared_actions, idx, shared_rss, next) {
+		struct mlx5_ind_table_obj *ind_tbl = shared_rss->ind_tbl;
+
+		if (shared_rss == shared_rss_last)
+			break;
+		if (mlx5_ind_table_obj_attach(dev, ind_tbl) != 0)
+			DRV_LOG(CRIT, "Port %u could not attach "
+				"indirection table obj %p on rollback",
+				dev->data->port_id, (void *)ind_tbl);
+	}
+	return ret;
+}
+
 #ifndef HAVE_MLX5DV_DR
 #define MLX5_DOMAIN_SYNC_FLOW ((1 << 0) | (1 << 1))
 #else
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8b83fa6f67..8fbc37feb7 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1582,6 +1582,8 @@ void mlx5_flow_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
 		struct mlx5_flow_meter_policy *mtr_policy);
 int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
 int mlx5_flow_discover_dr_action_support(struct rte_eth_dev *dev);
+int mlx5_action_handle_attach(struct rte_eth_dev *dev);
+int mlx5_action_handle_detach(struct rte_eth_dev *dev);
 int mlx5_action_handle_flush(struct rte_eth_dev *dev);
 void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id);
 int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh);
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 4952fe1455..69b1263339 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -211,6 +211,10 @@ int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 			      struct mlx5_ind_table_obj *ind_tbl,
 			      uint16_t *queues, const uint32_t queues_n,
 			      bool standalone);
+int mlx5_ind_table_obj_attach(struct rte_eth_dev *dev,
+			      struct mlx5_ind_table_obj *ind_tbl);
+int mlx5_ind_table_obj_detach(struct rte_eth_dev *dev,
+			      struct mlx5_ind_table_obj *ind_tbl);
 struct mlx5_list_entry *mlx5_hrxq_create_cb(void *tool_ctx, void *cb_ctx);
 int mlx5_hrxq_match_cb(void *tool_ctx, struct mlx5_list_entry *entry,
 		       void *cb_ctx);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 4f02fe02b9..9220bb2c15 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2032,6 +2032,26 @@ mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
 	return ind_tbl;
 }
 
+static int
+mlx5_ind_table_obj_check_standalone(struct rte_eth_dev *dev __rte_unused,
+				    struct mlx5_ind_table_obj *ind_tbl)
+{
+	uint32_t refcnt;
+
+	refcnt = __atomic_load_n(&ind_tbl->refcnt, __ATOMIC_RELAXED);
+	if (refcnt <= 1)
+		return 0;
+	/*
+	 * Modification of indirection tables having more than 1
+	 * reference is unsupported.
+	 */
+	DRV_LOG(DEBUG,
+		"Port %u cannot modify indirection table %p (refcnt %u > 1).",
+		dev->data->port_id, (void *)ind_tbl, refcnt);
+	rte_errno = EINVAL;
+	return -rte_errno;
+}
+
 /**
  * Modify an indirection table.
  *
@@ -2064,18 +2084,8 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 
 	MLX5_ASSERT(standalone);
 	RTE_SET_USED(standalone);
-	if (__atomic_load_n(&ind_tbl->refcnt, __ATOMIC_RELAXED) > 1) {
-		/*
-		 * Modification of indirection ntables having more than 1
-		 * reference unsupported. Intended for standalone indirection
-		 * tables only.
-		 */
-		DRV_LOG(DEBUG,
-			"Port %u cannot modify indirection table (refcnt> 1).",
-			dev->data->port_id);
-		rte_errno = EINVAL;
+	if (mlx5_ind_table_obj_check_standalone(dev, ind_tbl) < 0)
 		return -rte_errno;
-	}
 	for (i = 0; i != queues_n; ++i) {
 		if (!mlx5_rxq_get(dev, queues[i])) {
 			ret = -rte_errno;
@@ -2101,6 +2111,73 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/**
+ * Attach an indirection table to its queues.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param ind_table
+ *   Indirection table to attach.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_ind_table_obj_attach(struct rte_eth_dev *dev,
+			  struct mlx5_ind_table_obj *ind_tbl)
+{
+	unsigned int i;
+	int ret;
+
+	ret = mlx5_ind_table_obj_modify(dev, ind_tbl, ind_tbl->queues,
+					ind_tbl->queues_n, true);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Port %u could not modify indirect table obj %p",
+			dev->data->port_id, (void *)ind_tbl);
+		return ret;
+	}
+	for (i = 0; i < ind_tbl->queues_n; i++)
+		mlx5_rxq_get(dev, ind_tbl->queues[i]);
+	return 0;
+}
+
+/**
+ * Detach an indirection table from its queues.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param ind_table
+ *   Indirection table to detach.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_ind_table_obj_detach(struct rte_eth_dev *dev,
+			  struct mlx5_ind_table_obj *ind_tbl)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const unsigned int n = rte_is_power_of_2(ind_tbl->queues_n) ?
+			       log2above(ind_tbl->queues_n) :
+			       log2above(priv->config.ind_table_max_size);
+	unsigned int i;
+	int ret;
+
+	ret = mlx5_ind_table_obj_check_standalone(dev, ind_tbl);
+	if (ret != 0)
+		return ret;
+	MLX5_ASSERT(priv->obj_ops.ind_table_modify);
+	ret = priv->obj_ops.ind_table_modify(dev, n, NULL, 0, ind_tbl);
+	if (ret != 0) {
+		DRV_LOG(ERR, "Port %u could not modify indirect table obj %p",
+			dev->data->port_id, (void *)ind_tbl);
+		return ret;
+	}
+	for (i = 0; i < ind_tbl->queues_n; i++)
+		mlx5_rxq_release(dev, ind_tbl->queues[i]);
+	return ret;
+}
+
 int
 mlx5_hrxq_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry,
 		   void *cb_ctx)
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index d916c8addc..ebeeae279e 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -14,6 +14,7 @@
 #include
 
 #include "mlx5.h"
+#include "mlx5_flow.h"
 #include "mlx5_rx.h"
 #include "mlx5_tx.h"
 #include "mlx5_utils.h"
@@ -1162,6 +1163,14 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	mlx5_rxq_timestamp_set(dev);
 	/* Set a mask and offset of scheduling on timestamp into Tx queues. */
 	mlx5_txq_dynf_timestamp_set(dev);
+	/* Attach indirection table objects detached on port stop. */
+	ret = mlx5_action_handle_attach(dev);
+	if (ret) {
+		DRV_LOG(ERR,
+			"port %u failed to attach indirect actions: %s",
+			dev->data->port_id, rte_strerror(rte_errno));
+		goto error;
+	}
 	/*
 	 * In non-cached mode, it only needs to start the default mreg copy
 	 * action and no flow created by application exists anymore.
@@ -1239,6 +1248,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 	/* All RX queue flags will be cleared in the flush interface. */
 	mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_GEN, true);
 	mlx5_flow_meter_rxq_flush(dev);
+	mlx5_action_handle_detach(dev);
 	mlx5_rx_intr_vec_disable(dev);
 	priv->sh->port[priv->dev_port - 1].ih_port_id = RTE_MAX_ETHPORTS;
 	priv->sh->port[priv->dev_port - 1].devx_ih_port_id = RTE_MAX_ETHPORTS;
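
The error paths in mlx5_action_handle_attach() and mlx5_action_handle_detach() above use a "last element" sentinel to roll back only the entries processed before the failure. The pattern can be sketched outside the driver with a minimal self-contained C example; `struct ind_tbl`, `tbl_attach()`, and `tbl_detach()` below are hypothetical stand-ins for illustration, not the mlx5 API (the driver walks an ILIST rather than an array, but the validate-then-commit-then-rollback logic is the same):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for an indirection table object. */
struct ind_tbl {
	bool attached;
	bool attach_fails; /* simulate a mid-list attach failure */
};

static int tbl_attach(struct ind_tbl *t)
{
	if (t->attach_fails)
		return -1;
	t->attached = true;
	return 0;
}

static void tbl_detach(struct ind_tbl *t)
{
	t->attached = false;
}

/*
 * Attach every table; on failure, detach only the tables attached
 * before the failing one, mirroring the shared_rss_last sentinel in
 * mlx5_action_handle_attach().
 */
static int attach_all(struct ind_tbl *tbls, size_t n)
{
	size_t i, failed;

	for (i = 0; i < n; i++) {
		if (tbl_attach(&tbls[i]) != 0)
			goto error;
	}
	return 0;
error:
	failed = i; /* everything before index `failed` must be undone */
	for (i = 0; i < failed; i++)
		tbl_detach(&tbls[i]);
	return -1;
}
```

Either the whole list ends up attached or none of it does, which is what lets mlx5_dev_start() safely `goto error` when mlx5_action_handle_attach() fails.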