From patchwork Fri Jul 2 06:17:55 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95154
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:17:55 +0300
Message-ID: <20210702061816.10454-2-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 01/22] net/mlx5: allow limiting the index pool maximum index
List-Id: DPDK patches and discussions
Some ipool instances in the driver are used as ID/index allocators
and carry extra logic in order to work with a limited range of index
values.

Add a new configuration to the ipool to specify the maximum index
value. The ipool will ensure that no index bigger than the maximum
value is provided. Use this configuration in the ID allocator cases
instead of the current ad-hoc logic.

This patch makes the maximum ID configurable for the index pool.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_utils.c | 14 ++++++++++++--
 drivers/net/mlx5/mlx5_utils.h |  1 +
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 18fe23e4fb..bf2b2ebc72 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -270,6 +270,9 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg)
 		if (i > 0)
 			pool->grow_tbl[i] += pool->grow_tbl[i - 1];
 	}
+	if (!pool->cfg.max_idx)
+		pool->cfg.max_idx =
+			mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
 	return pool;
 }
 
@@ -282,9 +285,11 @@ mlx5_ipool_grow(struct mlx5_indexed_pool *pool)
 	size_t trunk_size = 0;
 	size_t data_size;
 	size_t bmp_size;
-	uint32_t idx;
+	uint32_t idx, cur_max_idx, i;
 
-	if (pool->n_trunk_valid == TRUNK_MAX_IDX)
+	cur_max_idx = mlx5_trunk_idx_offset_get(pool, pool->n_trunk_valid);
+	if (pool->n_trunk_valid == TRUNK_MAX_IDX ||
+	    cur_max_idx >= pool->cfg.max_idx)
 		return -ENOMEM;
 	if (pool->n_trunk_valid == pool->n_trunk) {
 		/* No free trunk flags, expand trunk list. */
@@ -336,6 +341,11 @@ mlx5_ipool_grow(struct mlx5_indexed_pool *pool)
 	trunk->bmp = rte_bitmap_init_with_all_set(data_size, &trunk->data
 		     [RTE_CACHE_LINE_ROUNDUP(data_size * pool->cfg.size)],
 		     bmp_size);
+	/* Clear the overhead bits in the trunk if it happens. */
+	if (cur_max_idx + data_size > pool->cfg.max_idx) {
+		for (i = pool->cfg.max_idx - cur_max_idx; i < data_size; i++)
+			rte_bitmap_clear(trunk->bmp, i);
+	}
 	MLX5_ASSERT(trunk->bmp);
 	pool->n_trunk_valid++;
 #ifdef POOL_DEBUG
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index b54517c6df..15870e14c2 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -208,6 +208,7 @@ struct mlx5_indexed_pool_config {
 	uint32_t need_lock:1;
 	/* Lock is needed for multiple thread usage. */
 	uint32_t release_mem_en:1; /* Rlease trunk when it is free. */
+	uint32_t max_idx; /* The maximum index can be allocated. */
 	const char *type; /* Memory allocate type name. */
 	void *(*malloc)(uint32_t flags, size_t size, unsigned int align,
 			int socket);
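For illustration, a minimal usage sketch of the new field; the pool
name and the 4096 cap below are assumed values, not taken from the
patch. With max_idx set, the pool never hands out an index above the
cap, and allocation fails once the cap is reached:

	/* Hypothetical pool capped at 4096 indexes (e.g. a HW ID width). */
	struct mlx5_indexed_pool_config id_cfg = {
		.size = sizeof(uint32_t),   /* per-entry payload size */
		.trunk_size = 64,           /* must be a power of two */
		.need_lock = 1,
		.max_idx = 4096,            /* the new hard cap on indexes */
		.type = "example_id_ipool", /* hypothetical name */
	};
	struct mlx5_indexed_pool *pool = mlx5_ipool_create(&id_cfg);
	uint32_t id = 0;
	/* Entry pointer on success; NULL (and id == 0) once exhausted. */
	void *entry = mlx5_ipool_malloc(pool, &id);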
From patchwork Fri Jul 2 06:17:56 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95155
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:17:56 +0300
Message-ID: <20210702061816.10454-3-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 02/22] net/mlx5: add indexed pool local cache
List-Id: DPDK patches and discussions
For objects that need efficient index allocation and release, a local
cache is very helpful.

A two-level cache is introduced to make index allocation and release
more efficient: one level is per-lcore (local) and the other is
global. The global cache is able to hold all the allocated indexes,
which means the allocated indexes are not freed back to the trunks.
Once a local cache is full, the extra indexes are flushed to the
global cache. Once a local cache is empty, it first tries to fetch
more indexes from the global cache; if the global cache is empty as
well, a new trunk is allocated to provide more indexes.

This commit adds the new local cache mechanism for the indexed pool.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_utils.c | 323 ++++++++++++++++++++++++++++++++--
 drivers/net/mlx5/mlx5_utils.h |  64 ++++++-
 2 files changed, 372 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index bf2b2ebc72..215024632d 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -175,14 +175,14 @@ static inline void
 mlx5_ipool_lock(struct mlx5_indexed_pool *pool)
 {
 	if (pool->cfg.need_lock)
-		rte_spinlock_lock(&pool->lock);
+		rte_spinlock_lock(&pool->rsz_lock);
 }
 
 static inline void
 mlx5_ipool_unlock(struct mlx5_indexed_pool *pool)
 {
 	if (pool->cfg.need_lock)
-		rte_spinlock_unlock(&pool->lock);
+		rte_spinlock_unlock(&pool->rsz_lock);
 }
 
 static inline uint32_t
@@ -243,6 +243,7 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg)
 	uint32_t i;
 
 	if (!cfg || (!cfg->malloc ^ !cfg->free) ||
+	    (cfg->per_core_cache && cfg->release_mem_en) ||
 	    (cfg->trunk_size && ((cfg->trunk_size & (cfg->trunk_size - 1)) ||
 	    ((__builtin_ffs(cfg->trunk_size) + TRUNK_IDX_BITS) > 32))))
 		return NULL;
@@ -258,9 +259,8 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg)
 		pool->cfg.malloc = mlx5_malloc;
 		pool->cfg.free = mlx5_free;
 	}
-	pool->free_list = TRUNK_INVALID;
 	if (pool->cfg.need_lock)
-		rte_spinlock_init(&pool->lock);
+		rte_spinlock_init(&pool->rsz_lock);
 	/*
 	 * Initialize the dynamic grow trunk size lookup table to have a quick
 	 * lookup for the trunk entry index offset.
@@ -273,6 +273,8 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg)
 	if (!pool->cfg.max_idx)
 		pool->cfg.max_idx =
 			mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
+	if (!cfg->per_core_cache)
+		pool->free_list = TRUNK_INVALID;
 	return pool;
 }
 
@@ -355,6 +357,274 @@ mlx5_ipool_grow(struct mlx5_indexed_pool *pool)
 	return 0;
 }
 
+static inline struct mlx5_indexed_cache *
+mlx5_ipool_update_global_cache(struct mlx5_indexed_pool *pool, int cidx)
+{
+	struct mlx5_indexed_cache *gc, *lc, *olc = NULL;
+
+	lc = pool->cache[cidx]->lc;
+	gc = __atomic_load_n(&pool->gc, __ATOMIC_RELAXED);
+	if (gc && lc != gc) {
+		mlx5_ipool_lock(pool);
+		if (lc && !(--lc->ref_cnt))
+			olc = lc;
+		lc = pool->gc;
+		lc->ref_cnt++;
+		pool->cache[cidx]->lc = lc;
+		mlx5_ipool_unlock(pool);
+		if (olc)
+			pool->cfg.free(olc);
+	}
+	return lc;
+}
+
+static uint32_t
+mlx5_ipool_allocate_from_global(struct mlx5_indexed_pool *pool, int cidx)
+{
+	struct mlx5_indexed_trunk *trunk;
+	struct mlx5_indexed_cache *p, *lc, *olc = NULL;
+	size_t trunk_size = 0;
+	size_t data_size;
+	uint32_t cur_max_idx, trunk_idx, trunk_n;
+	uint32_t fetch_size, ts_idx, i;
+	int n_grow;
+
+check_again:
+	p = NULL;
+	fetch_size = 0;
+	/*
+	 * Fetch new index from global if possible. First round local
+	 * cache will be NULL.
+	 */
+	lc = pool->cache[cidx]->lc;
+	mlx5_ipool_lock(pool);
+	/* Try to update local cache first. */
+	if (likely(pool->gc)) {
+		if (lc != pool->gc) {
+			if (lc && !(--lc->ref_cnt))
+				olc = lc;
+			lc = pool->gc;
+			lc->ref_cnt++;
+			pool->cache[cidx]->lc = lc;
+		}
+		if (lc->len) {
+			/* Use the updated local cache to fetch index. */
+			fetch_size = pool->cfg.per_core_cache >> 2;
+			if (lc->len < fetch_size)
+				fetch_size = lc->len;
+			lc->len -= fetch_size;
+			memcpy(pool->cache[cidx]->idx, &lc->idx[lc->len],
+			       sizeof(uint32_t) * fetch_size);
+		}
+	}
+	mlx5_ipool_unlock(pool);
+	if (unlikely(olc)) {
+		pool->cfg.free(olc);
+		olc = NULL;
+	}
+	if (fetch_size) {
+		pool->cache[cidx]->len = fetch_size - 1;
+		return pool->cache[cidx]->idx[pool->cache[cidx]->len];
+	}
+	trunk_idx = lc ? __atomic_load_n(&lc->n_trunk_valid,
+			 __ATOMIC_ACQUIRE) : 0;
+	trunk_n = lc ? lc->n_trunk : 0;
+	cur_max_idx = mlx5_trunk_idx_offset_get(pool, trunk_idx);
+	/* Check if index reach maximum. */
+	if (trunk_idx == TRUNK_MAX_IDX ||
+	    cur_max_idx >= pool->cfg.max_idx)
+		return 0;
+	/* No enough space in trunk array, resize the trunks array. */
+	if (trunk_idx == trunk_n) {
+		n_grow = trunk_idx ? trunk_idx :
+			     RTE_CACHE_LINE_SIZE / sizeof(void *);
+		cur_max_idx = mlx5_trunk_idx_offset_get(pool, trunk_n + n_grow);
+		/* Resize the trunk array. */
+		p = pool->cfg.malloc(0, ((trunk_idx + n_grow) *
+			sizeof(struct mlx5_indexed_trunk *)) +
+			(cur_max_idx * sizeof(uint32_t)) + sizeof(*p),
+			RTE_CACHE_LINE_SIZE, rte_socket_id());
+		if (!p)
+			return 0;
+		p->trunks = (struct mlx5_indexed_trunk **)&p->idx[cur_max_idx];
+		if (lc)
+			memcpy(p->trunks, lc->trunks, trunk_idx *
+			       sizeof(struct mlx5_indexed_trunk *));
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+		memset(RTE_PTR_ADD(p->trunks, trunk_idx * sizeof(void *)), 0,
+			n_grow * sizeof(void *));
+#endif
+		p->n_trunk_valid = trunk_idx;
+		p->n_trunk = trunk_n + n_grow;
+		p->len = 0;
+	}
+	/* Prepare the new trunk. */
+	trunk_size = sizeof(*trunk);
+	data_size = mlx5_trunk_size_get(pool, trunk_idx);
+	trunk_size += RTE_CACHE_LINE_ROUNDUP(data_size * pool->cfg.size);
+	trunk = pool->cfg.malloc(0, trunk_size,
+				 RTE_CACHE_LINE_SIZE, rte_socket_id());
+	if (unlikely(!trunk)) {
+		pool->cfg.free(p);
+		return 0;
+	}
+	trunk->idx = trunk_idx;
+	trunk->free = data_size;
+	mlx5_ipool_lock(pool);
+	/*
+	 * Double check if trunks has been updated or have available index.
+	 * During the new trunk allocate, index may still be flushed to the
+	 * global cache. So also need to check the pool->gc->len.
+	 */
+	if (pool->gc && (lc != pool->gc ||
+	    lc->n_trunk_valid != trunk_idx ||
+	    pool->gc->len)) {
+		mlx5_ipool_unlock(pool);
+		if (p)
+			pool->cfg.free(p);
+		pool->cfg.free(trunk);
+		goto check_again;
+	}
+	/* Resize the trunk array and update local cache first. */
+	if (p) {
+		if (lc && !(--lc->ref_cnt))
+			olc = lc;
+		lc = p;
+		lc->ref_cnt = 1;
+		pool->cache[cidx]->lc = lc;
+		__atomic_store_n(&pool->gc, p, __ATOMIC_RELAXED);
+	}
+	/* Add trunk to trunks array. */
+	lc->trunks[trunk_idx] = trunk;
+	__atomic_fetch_add(&lc->n_trunk_valid, 1, __ATOMIC_RELAXED);
+	/* Enqueue half of the index to global. */
+	ts_idx = mlx5_trunk_idx_offset_get(pool, trunk_idx) + 1;
+	fetch_size = trunk->free >> 1;
+	for (i = 0; i < fetch_size; i++)
+		lc->idx[i] = ts_idx + i;
+	lc->len = fetch_size;
+	mlx5_ipool_unlock(pool);
+	/* Copy left half - 1 to local cache index array. */
+	pool->cache[cidx]->len = trunk->free - fetch_size - 1;
+	ts_idx += fetch_size;
+	for (i = 0; i < pool->cache[cidx]->len; i++)
+		pool->cache[cidx]->idx[i] = ts_idx + i;
+	if (olc)
+		pool->cfg.free(olc);
+	return ts_idx + i;
+}
+
+static void *
+mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+{
+	struct mlx5_indexed_trunk *trunk;
+	struct mlx5_indexed_cache *lc;
+	uint32_t trunk_idx;
+	uint32_t entry_idx;
+	int cidx;
+
+	MLX5_ASSERT(idx);
+	cidx = rte_lcore_index(rte_lcore_id());
+	if (unlikely(cidx == -1)) {
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	lc = mlx5_ipool_update_global_cache(pool, cidx);
+	idx -= 1;
+	trunk_idx = mlx5_trunk_idx_get(pool, idx);
+	trunk = lc->trunks[trunk_idx];
+	MLX5_ASSERT(trunk);
+	entry_idx = idx - mlx5_trunk_idx_offset_get(pool, trunk_idx);
+	return &trunk->data[entry_idx * pool->cfg.size];
+}
+
+static void *
+mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx)
+{
+	int cidx;
+
+	cidx = rte_lcore_index(rte_lcore_id());
+	if (unlikely(cidx == -1)) {
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+	if (unlikely(!pool->cache[cidx])) {
+		pool->cache[cidx] = pool->cfg.malloc(MLX5_MEM_ZERO,
+			sizeof(struct mlx5_ipool_per_lcore) +
+			(pool->cfg.per_core_cache * sizeof(uint32_t)),
+			RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+		if (!pool->cache[cidx]) {
+			DRV_LOG(ERR, "Ipool cache%d allocate failed\n", cidx);
+			return NULL;
+		}
+	} else if (pool->cache[cidx]->len) {
+		pool->cache[cidx]->len--;
+		*idx = pool->cache[cidx]->idx[pool->cache[cidx]->len];
+		return mlx5_ipool_get_cache(pool, *idx);
+	}
+	/* Not enough idx in global cache. Keep fetching from global. */
+	*idx = mlx5_ipool_allocate_from_global(pool, cidx);
+	if (unlikely(!(*idx)))
+		return NULL;
+	return mlx5_ipool_get_cache(pool, *idx);
+}
+
+static void
+mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+{
+	int cidx;
+	struct mlx5_ipool_per_lcore *ilc;
+	struct mlx5_indexed_cache *gc, *olc = NULL;
+	uint32_t reclaim_num = 0;
+
+	MLX5_ASSERT(idx);
+	cidx = rte_lcore_index(rte_lcore_id());
+	if (unlikely(cidx == -1)) {
+		rte_errno = ENOTSUP;
+		return;
+	}
+	/*
+	 * When index was allocated on core A but freed on core B. In this
+	 * case check if local cache on core B was allocated before.
+	 */
+	if (unlikely(!pool->cache[cidx])) {
+		pool->cache[cidx] = pool->cfg.malloc(MLX5_MEM_ZERO,
+			sizeof(struct mlx5_ipool_per_lcore) +
+			(pool->cfg.per_core_cache * sizeof(uint32_t)),
+			RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+		if (!pool->cache[cidx]) {
+			DRV_LOG(ERR, "Ipool cache%d allocate failed\n", cidx);
+			return;
+		}
+	}
+	/* Try to enqueue to local index cache. */
+	if (pool->cache[cidx]->len < pool->cfg.per_core_cache) {
+		pool->cache[cidx]->idx[pool->cache[cidx]->len] = idx;
+		pool->cache[cidx]->len++;
+		return;
+	}
+	ilc = pool->cache[cidx];
+	reclaim_num = pool->cfg.per_core_cache >> 2;
+	ilc->len -= reclaim_num;
+	/* Local index cache full, try with global index cache. */
+	mlx5_ipool_lock(pool);
+	gc = pool->gc;
+	if (ilc->lc != gc) {
+		if (!(--ilc->lc->ref_cnt))
+			olc = ilc->lc;
+		gc->ref_cnt++;
+		ilc->lc = gc;
+	}
+	memcpy(&gc->idx[gc->len], &ilc->idx[ilc->len],
+	       reclaim_num * sizeof(uint32_t));
+	gc->len += reclaim_num;
+	mlx5_ipool_unlock(pool);
+	if (olc)
+		pool->cfg.free(olc);
+	pool->cache[cidx]->idx[pool->cache[cidx]->len] = idx;
+	pool->cache[cidx]->len++;
+}
+
 void *
 mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx)
 {
@@ -363,6 +633,8 @@ mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx)
 	uint32_t iidx = 0;
 	void *p;
 
+	if (pool->cfg.per_core_cache)
+		return mlx5_ipool_malloc_cache(pool, idx);
 	mlx5_ipool_lock(pool);
 	if (pool->free_list == TRUNK_INVALID) {
 		/* If no available trunks, grow new. */
@@ -432,6 +704,10 @@ mlx5_ipool_free(struct mlx5_indexed_pool *pool, uint32_t idx)
 
 	if (!idx)
 		return;
+	if (pool->cfg.per_core_cache) {
+		mlx5_ipool_free_cache(pool, idx);
+		return;
+	}
 	idx -= 1;
 	mlx5_ipool_lock(pool);
 	trunk_idx = mlx5_trunk_idx_get(pool, idx);
@@ -497,6 +773,8 @@ mlx5_ipool_get(struct mlx5_indexed_pool *pool, uint32_t idx)
 
 	if (!idx)
 		return NULL;
+	if (pool->cfg.per_core_cache)
+		return mlx5_ipool_get_cache(pool, idx);
 	idx -= 1;
 	mlx5_ipool_lock(pool);
 	trunk_idx = mlx5_trunk_idx_get(pool, idx);
@@ -519,18 +797,43 @@ mlx5_ipool_get(struct mlx5_indexed_pool *pool, uint32_t idx)
 int
 mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)
 {
-	struct mlx5_indexed_trunk **trunks;
-	uint32_t i;
+	struct mlx5_indexed_trunk **trunks = NULL;
+	struct mlx5_indexed_cache *gc = pool->gc;
+	uint32_t i, n_trunk_valid = 0;
 
 	MLX5_ASSERT(pool);
 	mlx5_ipool_lock(pool);
-	trunks = pool->trunks;
-	for (i = 0; i < pool->n_trunk; i++) {
+	if (pool->cfg.per_core_cache) {
+		for (i = 0; i < RTE_MAX_LCORE; i++) {
+			/*
+			 * Free only old global cache. Pool gc will be
+			 * freed at last.
+			 */
+			if (pool->cache[i]) {
+				if (pool->cache[i]->lc &&
+				    pool->cache[i]->lc != pool->gc &&
+				    (!(--pool->cache[i]->lc->ref_cnt)))
+					pool->cfg.free(pool->cache[i]->lc);
+				pool->cfg.free(pool->cache[i]);
+			}
+		}
+		if (gc) {
+			trunks = gc->trunks;
+			n_trunk_valid = gc->n_trunk_valid;
+		}
+	} else {
+		gc = NULL;
+		trunks = pool->trunks;
+		n_trunk_valid = pool->n_trunk_valid;
+	}
+	for (i = 0; i < n_trunk_valid; i++) {
 		if (trunks[i])
 			pool->cfg.free(trunks[i]);
 	}
-	if (!pool->trunks)
-		pool->cfg.free(pool->trunks);
+	if (!gc && trunks)
+		pool->cfg.free(trunks);
+	if (gc)
+		pool->cfg.free(gc);
 	mlx5_ipool_unlock(pool);
 	mlx5_free(pool);
 	return 0;
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 15870e14c2..0469062695 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -209,6 +209,11 @@ struct mlx5_indexed_pool_config {
 	/* Lock is needed for multiple thread usage. */
 	uint32_t release_mem_en:1; /* Rlease trunk when it is free. */
 	uint32_t max_idx; /* The maximum index can be allocated. */
+	uint32_t per_core_cache;
+	/*
+	 * Cache entry number per core for performance. Should not be
+	 * set with release_mem_en.
+	 */
 	const char *type; /* Memory allocate type name. */
 	void *(*malloc)(uint32_t flags, size_t size, unsigned int align,
 			int socket);
@@ -225,14 +230,39 @@ struct mlx5_indexed_trunk {
 	uint8_t data[] __rte_cache_aligned; /* Entry data start. */
 };
 
+struct mlx5_indexed_cache {
+	struct mlx5_indexed_trunk **trunks;
+	volatile uint32_t n_trunk_valid; /* Trunks allocated. */
+	uint32_t n_trunk; /* Trunk pointer array size. */
+	uint32_t ref_cnt;
+	uint32_t len;
+	uint32_t idx[];
+};
+
+struct mlx5_ipool_per_lcore {
+	struct mlx5_indexed_cache *lc;
+	uint32_t len; /**< Current cache count. */
+	uint32_t idx[]; /**< Cache objects. */
+};
+
 struct mlx5_indexed_pool {
 	struct mlx5_indexed_pool_config cfg; /* Indexed pool configuration. */
-	rte_spinlock_t lock; /* Pool lock for multiple thread usage. */
-	uint32_t n_trunk_valid; /* Trunks allocated. */
-	uint32_t n_trunk; /* Trunk pointer array size. */
+	rte_spinlock_t rsz_lock; /* Pool lock for multiple thread usage. */
 	/* Dim of trunk pointer array. */
-	struct mlx5_indexed_trunk **trunks;
-	uint32_t free_list; /* Index to first free trunk. */
+	union {
+		struct {
+			uint32_t n_trunk_valid; /* Trunks allocated. */
+			uint32_t n_trunk; /* Trunk pointer array size. */
+			struct mlx5_indexed_trunk **trunks;
+			uint32_t free_list; /* Index to first free trunk. */
+		};
+		struct {
+			struct mlx5_indexed_cache *gc;
+			/* Global cache. */
+			struct mlx5_ipool_per_lcore *cache[RTE_MAX_LCORE];
+			/* Local cache. */
+		};
+	};
 #ifdef POOL_DEBUG
 	uint32_t n_entry;
 	uint32_t trunk_new;
@@ -542,6 +572,30 @@ int mlx5_ipool_destroy(struct mlx5_indexed_pool *pool);
  */
 void mlx5_ipool_dump(struct mlx5_indexed_pool *pool);
 
+/**
+ * This function flushes all the cache index back to pool trunk.
+ *
+ * @param pool
+ *   Pointer to the index memory pool handler.
+ *
+ */
+
+void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool);
+
+/**
+ * This function gets the available entry from pos.
+ *
+ * @param pool
+ *   Pointer to the index memory pool handler.
+ * @param pos
+ *   Pointer to the index position start from.
+ *
+ * @return
+ *  - Pointer to the next available entry.
+ *
+ */
+void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos);
+
 /**
  * This function allocates new empty Three-level table.
  *
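For illustration, a rough sketch of how a caller enables the
two-level cache; the entry size, cache depth, and pool name are
assumed values, not taken from the patch. With per_core_cache set,
mlx5_ipool_malloc()/mlx5_ipool_free() mostly touch the lcore-local
index array, and rsz_lock is only taken for cache refill/flush and
trunk resize; note that the create-time check above rejects combining
it with release_mem_en:

	/* Hypothetical pool with a 1K-entry per-lcore index cache. */
	struct mlx5_indexed_pool_config cfg = {
		.size = 128,                /* assumed entry size */
		.trunk_size = 64,
		.need_lock = 1,             /* guards resize + global cache */
		.release_mem_en = 0,        /* must stay 0 with the cache */
		.per_core_cache = 1 << 10,  /* enables the two-level mode */
		.type = "example_cached_ipool", /* hypothetical name */
	};
	struct mlx5_indexed_pool *pool = mlx5_ipool_create(&cfg);
	uint32_t idx = 0;
	void *obj = mlx5_ipool_malloc(pool, &idx); /* local cache first */
	mlx5_ipool_free(pool, idx); /* idx returns to this lcore's cache */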
From patchwork Fri Jul 2 06:17:57 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95156
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:17:57 +0300
Message-ID: <20210702061816.10454-4-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 03/22] net/mlx5: add index pool foreach define
List-Id: DPDK patches and discussions
In some cases, the application may want to enumerate all the
allocated indexes in order to apply some operation to each of them.

This commit adds indexed pool functions to support a foreach
operation.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_utils.c | 86 +++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h |  8 ++++
 2 files changed, 94 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 215024632d..0ed279e162 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -839,6 +839,92 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)
 	return 0;
 }
 
+void
+mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool)
+{
+	uint32_t i, j;
+	struct mlx5_indexed_cache *gc;
+	struct rte_bitmap *ibmp;
+	uint32_t bmp_num, mem_size;
+
+	if (!pool->cfg.per_core_cache)
+		return;
+	gc = pool->gc;
+	if (!gc)
+		return;
+	/* Reset bmp. */
+	bmp_num = mlx5_trunk_idx_offset_get(pool, gc->n_trunk_valid);
+	mem_size = rte_bitmap_get_memory_footprint(bmp_num);
+	pool->bmp_mem = pool->cfg.malloc(MLX5_MEM_ZERO, mem_size,
+					 RTE_CACHE_LINE_SIZE, rte_socket_id());
+	if (!pool->bmp_mem) {
+		DRV_LOG(ERR, "Ipool bitmap mem allocate failed.\n");
+		return;
+	}
+	ibmp = rte_bitmap_init_with_all_set(bmp_num, pool->bmp_mem, mem_size);
+	if (!ibmp) {
+		pool->cfg.free(pool->bmp_mem);
+		pool->bmp_mem = NULL;
+		DRV_LOG(ERR, "Ipool bitmap create failed.\n");
+		return;
+	}
+	pool->ibmp = ibmp;
+	/* Clear global cache. */
+	for (i = 0; i < gc->len; i++)
+		rte_bitmap_clear(ibmp, gc->idx[i] - 1);
+	/* Clear core cache. */
+	for (i = 0; i < RTE_MAX_LCORE; i++) {
+		struct mlx5_ipool_per_lcore *ilc = pool->cache[i];
+
+		if (!ilc)
+			continue;
+		for (j = 0; j < ilc->len; j++)
+			rte_bitmap_clear(ibmp, ilc->idx[j] - 1);
+	}
+}
+
+static void *
+mlx5_ipool_get_next_cache(struct mlx5_indexed_pool *pool, uint32_t *pos)
+{
+	struct rte_bitmap *ibmp;
+	uint64_t slab = 0;
+	uint32_t iidx = *pos;
+
+	ibmp = pool->ibmp;
+	if (!ibmp || !rte_bitmap_scan(ibmp, &iidx, &slab)) {
+		if (pool->bmp_mem) {
+			pool->cfg.free(pool->bmp_mem);
+			pool->bmp_mem = NULL;
+			pool->ibmp = NULL;
+		}
+		return NULL;
+	}
+	iidx += __builtin_ctzll(slab);
+	rte_bitmap_clear(ibmp, iidx);
+	iidx++;
+	*pos = iidx;
+	return mlx5_ipool_get_cache(pool, iidx);
+}
+
+void *
+mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos)
+{
+	uint32_t idx = *pos;
+	void *entry;
+
+	if (pool->cfg.per_core_cache)
+		return mlx5_ipool_get_next_cache(pool, pos);
+	while (idx <= mlx5_trunk_idx_offset_get(pool, pool->n_trunk)) {
+		entry = mlx5_ipool_get(pool, idx);
+		if (entry) {
+			*pos = idx;
+			return entry;
+		}
+		idx++;
+	}
+	return NULL;
+}
+
 void
 mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 0469062695..737dd7052d 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -261,6 +261,9 @@ struct mlx5_indexed_pool {
 			/* Global cache. */
 			struct mlx5_ipool_per_lcore *cache[RTE_MAX_LCORE];
 			/* Local cache. */
+			struct rte_bitmap *ibmp;
+			void *bmp_mem;
+			/* Allocate objects bitmap. Use during flush. */
 		};
 	};
 #ifdef POOL_DEBUG
@@ -862,4 +865,9 @@ struct { \
 		(entry); \
 		idx++, (entry) = mlx5_l3t_get_next((tbl), &idx))
 
+#define MLX5_IPOOL_FOREACH(ipool, idx, entry) \
+	for ((idx) = 0, mlx5_ipool_flush_cache((ipool)), \
+	    (entry) = mlx5_ipool_get_next((ipool), &idx); \
+	    (entry); idx++, (entry) = mlx5_ipool_get_next((ipool), &idx))
+
 #endif /* RTE_PMD_MLX5_UTILS_H_ */
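For illustration, the traversal pattern the new macro enables; the
callback below is a placeholder, not an existing function.
MLX5_IPOOL_FOREACH first calls mlx5_ipool_flush_cache() to build a
bitmap of the still-allocated indexes, then walks them one by one
with mlx5_ipool_get_next():

	uint32_t idx;
	void *entry;

	/* Visit every currently allocated entry of the pool. */
	MLX5_IPOOL_FOREACH(pool, idx, entry) {
		/* entry is the object stored under index idx. */
		handle_entry(idx, entry); /* placeholder callback */
	}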
From patchwork Fri Jul 2 06:17:58 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95157
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:17:58 +0300
Message-ID: <20210702061816.10454-5-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 04/22] net/mlx5: replace flow list with index pool
List-Id: DPDK patches and discussions

The flow list is used to save the created flows; it is needed only
when the port closes and all the flows have to be flushed.

This commit takes advantage of the index pool foreach operation to
flush all the allocated flows instead.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c   |  48 +++++++++-
 drivers/net/mlx5/mlx5.c            |   9 +-
 drivers/net/mlx5/mlx5.h            |  14 ++-
 drivers/net/mlx5/mlx5_flow.c       | 149 ++++++++++-------------
 drivers/net/mlx5/mlx5_flow.h       |   2 +-
 drivers/net/mlx5/mlx5_flow_dv.c    |   5 +
 drivers/net/mlx5/mlx5_trigger.c    |   8 +-
 drivers/net/mlx5/windows/mlx5_os.c |   1 -
 8 files changed, 126 insertions(+), 110 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 92b3009786..31cc8d9eb8 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -69,6 +69,44 @@ static rte_spinlock_t mlx5_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
 /* Process local data for secondary processes. */
 static struct mlx5_local_data mlx5_local_data;
 
+/* rte flow indexed pool configuration. */
+static struct mlx5_indexed_pool_config icfg[] = {
+	{
+		.size = sizeof(struct rte_flow),
+		.trunk_size = 64,
+		.need_lock = 1,
+		.release_mem_en = 0,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
+		.per_core_cache = 0,
+		.type = "ctl_flow_ipool",
+	},
+	{
+		.size = sizeof(struct rte_flow),
+		.trunk_size = 64,
+		.grow_trunk = 3,
+		.grow_shift = 2,
+		.need_lock = 1,
+		.release_mem_en = 0,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
+		.per_core_cache = 1 << 14,
+		.type = "rte_flow_ipool",
+	},
+	{
+		.size = sizeof(struct rte_flow),
+		.trunk_size = 64,
+		.grow_trunk = 3,
+		.grow_shift = 2,
+		.need_lock = 1,
+		.release_mem_en = 0,
+		.malloc = mlx5_malloc,
+		.free = mlx5_free,
+		.per_core_cache = 0,
+		.type = "mcp_flow_ipool",
+	},
+};
+
 /**
  * Set the completion channel file descriptor interrupt as non-blocking.
  *
@@ -823,6 +861,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	int own_domain_id = 0;
 	uint16_t port_id;
 	struct mlx5_port_info vport_info = { .query_flags = 0 };
+	int i;
 
 	/* Determine if this port representor is supposed to be spawned. */
 	if (switch_info->representor && dpdk_dev->devargs &&
@@ -1566,7 +1605,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		      mlx5_ifindex(eth_dev),
 		      eth_dev->data->mac_addrs,
 		      MLX5_MAX_MAC_ADDRESSES);
-	priv->flows = 0;
 	priv->ctrl_flows = 0;
 	rte_spinlock_init(&priv->flow_list_lock);
 	TAILQ_INIT(&priv->flow_meters);
@@ -1600,6 +1638,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	mlx5_set_min_inline(spawn, config);
 	/* Store device configuration on private structure. */
 	priv->config = *config;
+	for (i = 0; i < MLX5_FLOW_TYPE_MAXI; i++) {
+		icfg[i].release_mem_en = !!config->reclaim_mode;
+		if (config->reclaim_mode)
+			icfg[i].per_core_cache = 0;
+		priv->flows[i] = mlx5_ipool_create(&icfg[i]);
+		if (!priv->flows[i])
+			goto error;
+	}
 	/* Create context for virtual machine VLAN workaround. */
 	priv->vmwa_context = mlx5_vlan_vmwa_init(eth_dev, spawn->ifindex);
 	if (config->dv_flow_en) {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index cf1815cb74..fcfc3dcdca 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -322,7 +322,8 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = {
 		.grow_trunk = 3,
 		.grow_shift = 2,
 		.need_lock = 1,
-		.release_mem_en = 1,
+		.release_mem_en = 0,
+		.per_core_cache = 1 << 19,
 		.malloc = mlx5_malloc,
 		.free = mlx5_free,
 		.type = "mlx5_flow_handle_ipool",
@@ -792,8 +793,10 @@ mlx5_flow_ipool_create(struct mlx5_dev_ctx_shared *sh,
 			MLX5_FLOW_HANDLE_VERBS_SIZE;
 		break;
 	}
-	if (config->reclaim_mode)
+	if (config->reclaim_mode) {
 		cfg.release_mem_en = 1;
+		cfg.per_core_cache = 0;
+	}
 		sh->ipool[i] = mlx5_ipool_create(&cfg);
 	}
 }
@@ -1528,7 +1531,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	 * If all the flows are already flushed in the device stop stage,
 	 * then this will return directly without any action.
 	 */
-	mlx5_flow_list_flush(dev, &priv->flows, true);
+	mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_GEN, true);
 	mlx5_action_handle_flush(dev);
 	mlx5_flow_meter_flush(dev, NULL);
 	/* Prevent crashes when queues are still in use. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1b2dc8f815..380d35d420 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -71,6 +71,14 @@ enum mlx5_reclaim_mem_mode {
 	MLX5_RCM_AGGR, /* Reclaim PMD and rdma-core level. */
 };
 
+/* The type of flow. */
+enum mlx5_flow_type {
+	MLX5_FLOW_TYPE_CTL, /* Control flow. */
+	MLX5_FLOW_TYPE_GEN, /* General flow. */
+	MLX5_FLOW_TYPE_MCP, /* MCP flow. */
+	MLX5_FLOW_TYPE_MAXI,
+};
+
 /* Hash and cache list callback context. */
 struct mlx5_flow_cb_ctx {
 	struct rte_eth_dev *dev;
@@ -1344,7 +1352,8 @@ struct mlx5_priv {
 	unsigned int (*reta_idx)[]; /* RETA index table. */
 	unsigned int reta_idx_n; /* RETA index size. */
 	struct mlx5_drop drop_queue; /* Flow drop queues. */
-	uint32_t flows; /* RTE Flow rules. */
+	struct mlx5_indexed_pool *flows[MLX5_FLOW_TYPE_MAXI];
+	/* RTE Flow rules. */
 	uint32_t ctrl_flows; /* Control flow rules. */
 	rte_spinlock_t flow_list_lock;
 	struct mlx5_obj_ops obj_ops; /* HW objects operations. */
@@ -1596,7 +1605,8 @@ struct rte_flow *mlx5_flow_create(struct rte_eth_dev *dev,
 				  struct rte_flow_error *error);
 int mlx5_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
 		      struct rte_flow_error *error);
-void mlx5_flow_list_flush(struct rte_eth_dev *dev, uint32_t *list, bool active);
+void mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+			  bool active);
 int mlx5_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error);
 int mlx5_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow,
 		    const struct rte_flow_action *action, void *data,
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3b7c94d92f..450a84a6c5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -3095,31 +3095,6 @@ mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
 					 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 }
 
-/**
- * Release resource related QUEUE/RSS action split.
- *
- * @param dev
- *   Pointer to Ethernet device.
- * @param flow
- *   Flow to release id's from.
- */
-static void
-flow_mreg_split_qrss_release(struct rte_eth_dev *dev,
-			     struct rte_flow *flow)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	uint32_t handle_idx;
-	struct mlx5_flow_handle *dev_handle;
-
-	SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles,
-		       handle_idx, dev_handle, next)
-		if (dev_handle->split_flow_id &&
-		    !dev_handle->is_meter_flow_id)
-			mlx5_ipool_free(priv->sh->ipool
-					[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID],
-					dev_handle->split_flow_id);
-}
-
 static int
 flow_null_validate(struct rte_eth_dev *dev __rte_unused,
 		   const struct rte_flow_attr *attr __rte_unused,
@@ -3415,7 +3390,6 @@ flow_drv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
 	const struct mlx5_flow_driver_ops *fops;
 	enum mlx5_flow_drv_type type = flow->drv_type;
 
-	flow_mreg_split_qrss_release(dev, flow);
 	MLX5_ASSERT(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
 	fops = flow_get_drv_ops(type);
 	fops->destroy(dev, flow);
@@ -3998,14 +3972,14 @@ flow_check_hairpin_split(struct rte_eth_dev *dev,
 
 /* Declare flow create/destroy prototype in advance. */
 static uint32_t
-flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
+flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		 const struct rte_flow_attr *attr,
 		 const struct rte_flow_item items[],
 		 const struct rte_flow_action actions[],
 		 bool external, struct rte_flow_error *error);
 
 static void
-flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list,
+flow_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		  uint32_t flow_idx);
 
 int
@@ -4127,8 +4101,8 @@ flow_dv_mreg_create_cb(struct mlx5_hlist *list, uint64_t key,
 	 * be applied, removed, deleted in ardbitrary order
 	 * by list traversing.
 	 */
-	mcp_res->rix_flow = flow_list_create(dev, NULL, &attr, items,
-					     actions, false, error);
+	mcp_res->rix_flow = flow_list_create(dev, MLX5_FLOW_TYPE_MCP,
+					&attr, items, actions, false, error);
 	if (!mcp_res->rix_flow) {
 		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], idx);
 		return NULL;
@@ -4190,7 +4164,7 @@ flow_dv_mreg_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry)
 	struct mlx5_priv *priv = dev->data->dev_private;
 
 	MLX5_ASSERT(mcp_res->rix_flow);
-	flow_list_destroy(dev, NULL, mcp_res->rix_flow);
+	flow_list_destroy(dev, MLX5_FLOW_TYPE_MCP, mcp_res->rix_flow);
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], mcp_res->idx);
 }
 
@@ -6093,7 +6067,7 @@ flow_rss_workspace_adjust(struct mlx5_flow_workspace *wks,
  *   A flow index on success, 0 otherwise and rte_errno is set.
*/ static uint32_t -flow_list_create(struct rte_eth_dev *dev, uint32_t *list, +flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type, const struct rte_flow_attr *attr, const struct rte_flow_item items[], const struct rte_flow_action original_actions[], @@ -6161,7 +6135,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, external, hairpin_flow, error); if (ret < 0) goto error_before_hairpin_split; - flow = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], &idx); + flow = mlx5_ipool_zmalloc(priv->flows[type], &idx); if (!flow) { rte_errno = ENOMEM; goto error_before_hairpin_split; @@ -6291,12 +6265,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, if (ret < 0) goto error; } - if (list) { - rte_spinlock_lock(&priv->flow_list_lock); - ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx, - flow, next); - rte_spinlock_unlock(&priv->flow_list_lock); - } + flow->type = type; flow_rxq_flags_set(dev, flow); rte_free(translated_actions); tunnel = flow_tunnel_from_rule(wks->flows); @@ -6318,7 +6287,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, mlx5_ipool_get (priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], rss_desc->shared_rss))->refcnt, 1, __ATOMIC_RELAXED); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], idx); + mlx5_ipool_free(priv->flows[type], idx); rte_errno = ret; /* Restore rte_errno. */ ret = rte_errno; rte_errno = ret; @@ -6370,10 +6339,9 @@ mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev) .type = RTE_FLOW_ACTION_TYPE_END, }, }; - struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_error error; - return (void *)(uintptr_t)flow_list_create(dev, &priv->ctrl_flows, + return (void *)(uintptr_t)flow_list_create(dev, MLX5_FLOW_TYPE_CTL, &attr, &pattern, actions, false, &error); } @@ -6425,8 +6393,6 @@ mlx5_flow_create(struct rte_eth_dev *dev, const struct rte_flow_action actions[], struct rte_flow_error *error) { - struct mlx5_priv *priv = dev->data->dev_private; - /* * If the device is not started yet, it is not allowed to created a * flow from application. PMD default flows and traffic control flows @@ -6442,8 +6408,9 @@ mlx5_flow_create(struct rte_eth_dev *dev, return NULL; } - return (void *)(uintptr_t)flow_list_create(dev, &priv->flows, - attr, items, actions, true, error); + return (void *)(uintptr_t)flow_list_create(dev, MLX5_FLOW_TYPE_GEN, + attr, items, actions, + true, error); } /** @@ -6451,24 +6418,19 @@ mlx5_flow_create(struct rte_eth_dev *dev, * * @param dev * Pointer to Ethernet device. - * @param list - * Pointer to the Indexed flow list. If this parameter NULL, - * there is no flow removal from the list. Be noted that as - * flow is add to the indexed list, memory of the indexed - * list points to maybe changed as flow destroyed. * @param[in] flow_idx * Index of flow to destroy. */ static void -flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, +flow_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type, uint32_t flow_idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = mlx5_ipool_get(priv->sh->ipool - [MLX5_IPOOL_RTE_FLOW], flow_idx); + struct rte_flow *flow = mlx5_ipool_get(priv->flows[type], flow_idx); if (!flow) return; + MLX5_ASSERT(flow->type == type); /* * Update RX queue flags only if port is started, otherwise it is * already clean. 
@@ -6476,12 +6438,6 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, if (dev->data->dev_started) flow_rxq_flags_trim(dev, flow); flow_drv_destroy(dev, flow); - if (list) { - rte_spinlock_lock(&priv->flow_list_lock); - ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, - flow_idx, flow, next); - rte_spinlock_unlock(&priv->flow_list_lock); - } if (flow->tunnel) { struct mlx5_flow_tunnel *tunnel; @@ -6491,7 +6447,7 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, mlx5_flow_tunnel_free(dev, tunnel); } flow_mreg_del_copy_action(dev, flow); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], flow_idx); + mlx5_ipool_free(priv->flows[type], flow_idx); } /** @@ -6499,18 +6455,21 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, * * @param dev * Pointer to Ethernet device. - * @param list - * Pointer to the Indexed flow list. + * @param type + * Flow type to be flushed. * @param active * If flushing is called avtively. */ void -mlx5_flow_list_flush(struct rte_eth_dev *dev, uint32_t *list, bool active) +mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type, + bool active) { - uint32_t num_flushed = 0; + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t num_flushed = 0, fidx = 1; + struct rte_flow *flow; - while (*list) { - flow_list_destroy(dev, list, *list); + MLX5_IPOOL_FOREACH(priv->flows[type], fidx, flow) { + flow_list_destroy(dev, type, fidx); num_flushed++; } if (active) { @@ -6682,18 +6641,19 @@ mlx5_flow_pop_thread_workspace(void) * @return the number of flows not released. */ int -mlx5_flow_verify(struct rte_eth_dev *dev) +mlx5_flow_verify(struct rte_eth_dev *dev __rte_unused) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow *flow; - uint32_t idx; - int ret = 0; + uint32_t idx = 0; + int ret = 0, i; - ILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], priv->flows, idx, - flow, next) { - DRV_LOG(DEBUG, "port %u flow %p still referenced", - dev->data->port_id, (void *)flow); - ++ret; + for (i = 0; i < MLX5_FLOW_TYPE_MAXI; i++) { + MLX5_IPOOL_FOREACH(priv->flows[i], idx, flow) { + DRV_LOG(DEBUG, "port %u flow %p still referenced", + dev->data->port_id, (void *)flow); + ret++; + } } return ret; } @@ -6713,7 +6673,6 @@ int mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, uint32_t queue) { - struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_attr attr = { .egress = 1, .priority = 0, @@ -6746,8 +6705,8 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, actions[0].type = RTE_FLOW_ACTION_TYPE_JUMP; actions[0].conf = &jump; actions[1].type = RTE_FLOW_ACTION_TYPE_END; - flow_idx = flow_list_create(dev, &priv->ctrl_flows, - &attr, items, actions, false, &error); + flow_idx = flow_list_create(dev, MLX5_FLOW_TYPE_CTL, + &attr, items, actions, false, &error); if (!flow_idx) { DRV_LOG(DEBUG, "Failed to create ctrl flow: rte_errno(%d)," @@ -6836,8 +6795,8 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev, action_rss.types = 0; for (i = 0; i != priv->reta_idx_n; ++i) queue[i] = (*priv->reta_idx)[i]; - flow_idx = flow_list_create(dev, &priv->ctrl_flows, - &attr, items, actions, false, &error); + flow_idx = flow_list_create(dev, MLX5_FLOW_TYPE_CTL, + &attr, items, actions, false, &error); if (!flow_idx) return -rte_errno; return 0; @@ -6878,7 +6837,6 @@ mlx5_ctrl_flow(struct rte_eth_dev *dev, int mlx5_flow_lacp_miss(struct rte_eth_dev *dev) { - struct mlx5_priv *priv = dev->data->dev_private; /* * The LACP matching is done by only using ether type since using * a multicast dst mac causes 
kernel to give low priority to this flow. @@ -6912,8 +6870,9 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev) }, }; struct rte_flow_error error; - uint32_t flow_idx = flow_list_create(dev, &priv->ctrl_flows, - &attr, items, actions, false, &error); + uint32_t flow_idx = flow_list_create(dev, MLX5_FLOW_TYPE_CTL, + &attr, items, actions, + false, &error); if (!flow_idx) return -rte_errno; @@ -6931,9 +6890,8 @@ mlx5_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow, struct rte_flow_error *error __rte_unused) { - struct mlx5_priv *priv = dev->data->dev_private; - - flow_list_destroy(dev, &priv->flows, (uintptr_t)(void *)flow); + flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN, + (uintptr_t)(void *)flow); return 0; } @@ -6947,9 +6905,7 @@ int mlx5_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error __rte_unused) { - struct mlx5_priv *priv = dev->data->dev_private; - - mlx5_flow_list_flush(dev, &priv->flows, false); + mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_GEN, false); return 0; } @@ -7000,8 +6956,7 @@ flow_drv_query(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct mlx5_flow_driver_ops *fops; - struct rte_flow *flow = mlx5_ipool_get(priv->sh->ipool - [MLX5_IPOOL_RTE_FLOW], + struct rte_flow *flow = mlx5_ipool_get(priv->flows[MLX5_FLOW_TYPE_GEN], flow_idx); enum mlx5_flow_drv_type ftype; @@ -7867,14 +7822,14 @@ mlx5_flow_discover_mreg_c(struct rte_eth_dev *dev) if (!config->dv_flow_en) break; /* Create internal flow, validation skips copy action. */ - flow_idx = flow_list_create(dev, NULL, &attr, items, - actions, false, &error); - flow = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], + flow_idx = flow_list_create(dev, MLX5_FLOW_TYPE_GEN, &attr, + items, actions, false, &error); + flow = mlx5_ipool_get(priv->flows[MLX5_FLOW_TYPE_GEN], flow_idx); if (!flow) continue; config->flow_mreg_c[n++] = idx; - flow_list_destroy(dev, NULL, flow_idx); + flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN, flow_idx); } for (; n < MLX5_MREG_C_NUM; ++n) config->flow_mreg_c[n] = REG_NON; @@ -7918,8 +7873,8 @@ mlx5_flow_dev_dump(struct rte_eth_dev *dev, struct rte_flow *flow_idx, sh->rx_domain, sh->tx_domain, file); /* dump one */ - flow = mlx5_ipool_get(priv->sh->ipool - [MLX5_IPOOL_RTE_FLOW], (uintptr_t)(void *)flow_idx); + flow = mlx5_ipool_get(priv->flows[MLX5_FLOW_TYPE_GEN], + (uintptr_t)(void *)flow_idx); if (!flow) return -ENOENT; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2f2aa962f9..d9b6acaafd 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -997,9 +997,9 @@ flow_items_to_tunnel(const struct rte_flow_item items[]) /* Flow structure. */ struct rte_flow { - ILIST_ENTRY(uint32_t)next; /**< Index to the next flow structure. */ uint32_t dev_handles; /**< Device flow handles that are part of the flow. */ + uint32_t type:2; uint32_t drv_type:2; /**< Driver type. */ uint32_t tunnel:1; uint32_t meter:24; /**< Holds flow meter id. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index a04a3c2bb8..e898c571da 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13844,6 +13844,11 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) dev_handle->split_flow_id) mlx5_ipool_free(fm->flow_ipool, dev_handle->split_flow_id); + else if (dev_handle->split_flow_id && + !dev_handle->is_meter_flow_id) + mlx5_ipool_free(priv->sh->ipool + [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], + dev_handle->split_flow_id); mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], tmp_idx); } diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index ae7fcca229..7cb8920d6b 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1187,7 +1187,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev) /* Control flows for default traffic can be removed firstly. */ mlx5_traffic_disable(dev); /* All RX queue flags will be cleared in the flush interface. */ - mlx5_flow_list_flush(dev, &priv->flows, true); + mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_GEN, true); mlx5_flow_meter_rxq_flush(dev); mlx5_rx_intr_vec_disable(dev); priv->sh->port[priv->dev_port - 1].ih_port_id = RTE_MAX_ETHPORTS; @@ -1370,7 +1370,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) return 0; error: ret = rte_errno; /* Save rte_errno before cleanup. */ - mlx5_flow_list_flush(dev, &priv->ctrl_flows, false); + mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_CTL, false); rte_errno = ret; /* Restore rte_errno. */ return -rte_errno; } @@ -1385,9 +1385,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) void mlx5_traffic_disable(struct rte_eth_dev *dev) { - struct mlx5_priv *priv = dev->data->dev_private; - - mlx5_flow_list_flush(dev, &priv->ctrl_flows, false); + mlx5_flow_list_flush(dev, MLX5_FLOW_TYPE_CTL, false); } /** diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index 3fe3f55f49..7d15c998bb 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -563,7 +563,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, eth_dev->rx_queue_count = mlx5_rx_queue_count; /* Register MAC address. 
*/ claim_zero(mlx5_mac_addr_add(eth_dev, &mac, 0, 0)); - priv->flows = 0; priv->ctrl_flows = 0; TAILQ_INIT(&priv->flow_meters); TAILQ_INIT(&priv->flow_meter_profiles);
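To summarize the pattern the patch above introduces: the single priv->flows ILIST (a linked list threaded through every flow and guarded by flow_list_lock) is replaced by one indexed pool per flow type, so a flush walks the pool itself and no per-flow next link or lock is needed. The following is a minimal, self-contained sketch of that pattern only; toy_pool, pool_alloc, pool_flush and FLOW_TYPE_* are invented stand-ins for mlx5_ipool_zmalloc()/MLX5_IPOOL_FOREACH/mlx5_ipool_free(), not the real mlx5 API.

/*
 * Toy model: one pool per flow type; flush iterates the pool directly,
 * so flows need no list linkage at all. Illustrative names only.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum flow_type { FLOW_TYPE_CTL, FLOW_TYPE_GEN, FLOW_TYPE_MCP, FLOW_TYPE_MAXI };

#define POOL_SLOTS 64

struct toy_pool {
	void *slots[POOL_SLOTS]; /* slot 0 reserved: index 0 means failure */
};

/* Allocate an object and return its pool index (the flow "handle"). */
static uint32_t pool_alloc(struct toy_pool *p)
{
	for (uint32_t i = 1; i < POOL_SLOTS; i++) {
		if (p->slots[i] == NULL) {
			p->slots[i] = calloc(1, 16);
			return p->slots[i] ? i : 0;
		}
	}
	return 0;
}

/* Flush by iterating the pool -- the role MLX5_IPOOL_FOREACH plays above. */
static void pool_flush(struct toy_pool *p)
{
	for (uint32_t i = 1; i < POOL_SLOTS; i++) {
		free(p->slots[i]);
		p->slots[i] = NULL;
	}
}

int main(void)
{
	/* One pool per type, mirroring priv->flows[MLX5_FLOW_TYPE_MAXI]. */
	static struct toy_pool flows[FLOW_TYPE_MAXI];
	uint32_t idx = pool_alloc(&flows[FLOW_TYPE_GEN]);

	printf("created GEN flow with index %u\n", idx);
	pool_flush(&flows[FLOW_TYPE_GEN]); /* like mlx5_flow_list_flush(GEN) */
	return 0;
}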
From patchwork Fri Jul 2 06:17:59 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95158
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: , CC: , ,
Date: Fri, 2 Jul 2021 09:17:59 +0300
Message-ID: <20210702061816.10454-6-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 05/22] net/mlx5: optimize modify header action memory
From: Matan Azrad

Define the types of the modify header action fields with the minimum sizes needed for their value ranges.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
--- drivers/common/mlx5/linux/mlx5_glue.h | 1 + drivers/net/mlx5/linux/mlx5_flow_os.h | 3 ++- drivers/net/mlx5/mlx5_flow.h | 6 +++--- drivers/net/mlx5/mlx5_flow_dv.c | 13 ++++++------- 4 files changed, 12 insertions(+), 11 deletions(-) diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index 840d8cf57f..a186ee577f 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -78,6 +78,7 @@ struct mlx5dv_devx_async_cmd_hdr; enum mlx5dv_dr_domain_type { unused, }; struct mlx5dv_dr_domain; struct mlx5dv_dr_action; +#define MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL 1 #endif #ifndef HAVE_MLX5DV_DR_DEVX_PORT diff --git a/drivers/net/mlx5/linux/mlx5_flow_os.h b/drivers/net/mlx5/linux/mlx5_flow_os.h index cee685015b..1926d26410 100644 --- a/drivers/net/mlx5/linux/mlx5_flow_os.h +++ b/drivers/net/mlx5/linux/mlx5_flow_os.h @@ -225,7 +225,8 @@ mlx5_flow_os_create_flow_action_modify_header(void *ctx, void *domain, (struct mlx5_flow_dv_modify_hdr_resource *)resource; *action = mlx5_glue->dv_create_flow_action_modify_header - (ctx, res->ft_type, domain, res->flags, + (ctx, res->ft_type, domain, res->root ? + MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL : 0, actions_len, (uint64_t *)res->actions); return (*action) ? 0 : -1; } diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index d9b6acaafd..81c95e0beb 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -523,11 +523,11 @@ struct mlx5_flow_dv_modify_hdr_resource { void *action; /**< Modify header action object. */ /* Key area for hash list matching: */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ - uint32_t actions_num; /**< Number of modification actions. */ - uint64_t flags; /**< Flags for RDMA API. */ + uint8_t actions_num; /**< Number of modification actions. */ + bool root; /**< Whether action is in root table. */ struct mlx5_modification_cmd actions[]; /**< Modification actions. */ -}; +} __rte_packed; /* Modify resource key of the hash organization. */ union mlx5_flow_modify_hdr_key { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index e898c571da..a7c1cf05da 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -5000,21 +5000,21 @@ flow_dv_validate_action_port_id(struct rte_eth_dev *dev, * * @param dev * Pointer to rte_eth_dev structure. - * @param flags - * Flags bits to check if root level. + * @param root + * Whether action is on root table. * * @return * Max number of modify header actions device can support. */ static inline unsigned int flow_dv_modify_hdr_action_max(struct rte_eth_dev *dev __rte_unused, - uint64_t flags) + bool root) { /* * There's no way to directly query the max capacity from FW. * The maximal value on root table should be assumed to be supported.
*/ - if (!(flags & MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL)) + if (!root) return MLX5_MAX_MODIFY_NUM; else return MLX5_ROOT_TBL_MODIFY_NUM; @@ -5582,10 +5582,9 @@ flow_dv_modify_hdr_resource_register }; uint64_t key64; - resource->flags = dev_flow->dv.group ? 0 : - MLX5DV_DR_ACTION_FLAGS_ROOT_LEVEL; + resource->root = !dev_flow->dv.group; if (resource->actions_num > flow_dv_modify_hdr_action_max(dev, - resource->flags)) + resource->root)) return rte_flow_error_set(error, EOVERFLOW, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "too many modify header items");
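The memory effect of the field narrowing above can be seen with a stand-alone struct pair. This is a minimal sketch assuming a typical LP64 ABI; mod_hdr_old/mod_hdr_new are illustrative structs, not the driver's, and GCC's packed attribute stands in for __rte_packed (which DPDK defines the same way). The exact byte counts depend on the compiler and ABI.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct mod_hdr_old {              /* key area before the patch */
	uint8_t  ft_type;
	uint32_t actions_num;     /* 32 bits for a small count */
	uint64_t flags;           /* whole flags word for one used bit */
};

struct mod_hdr_new {              /* key area after the patch */
	uint8_t ft_type;
	uint8_t actions_num;      /* the supported maximum fits in 8 bits */
	bool    root;             /* only root/non-root was ever encoded */
} __attribute__((packed));

int main(void)
{
	printf("old: %zu bytes\n", sizeof(struct mod_hdr_old)); /* 16 on LP64 */
	printf("new: %zu bytes\n", sizeof(struct mod_hdr_new)); /* 3 packed */
	return 0;
}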
From patchwork Fri Jul 2 06:18:00 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95160
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: , CC: , ,
Date: Fri, 2 Jul 2021 09:18:00 +0300
Message-ID: <20210702061816.10454-7-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 06/22] net/mlx5: remove cache term from the list utility

From: Matan Azrad

The internal mlx5 list tool is used mainly when the list objects need to be synchronized between multiple threads. The "cache" term is currently used in the internal mlx5 list API, and upcoming enhancements to this tool will use the "cache" term for per-thread cache management. To prevent confusion, remove the current "cache" term from the API's names.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
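To make the renamed API's shape concrete: mlx5_list_create() takes a create/match/remove callback triple, and mlx5_list_register() returns a matching entry if one exists, otherwise it invokes the create callback and links the result. The fragment below is a simplified, self-contained model of that lookup-or-create flow; toy_list, toy_register and the port_* callbacks are invented for illustration, and only the callback triple and the "match callback returns 0 on match" convention follow the driver.

#include <stdio.h>
#include <stdlib.h>

struct toy_entry {
	struct toy_entry *next;
	int port_id;
};

struct toy_list {
	struct toy_entry *head;
	struct toy_entry *(*create)(void *cb_ctx);
	int (*match)(struct toy_entry *entry, void *cb_ctx); /* 0 == match */
	void (*remove)(struct toy_entry *entry);
};

/* Return an existing matching entry, or create and link a new one. */
static struct toy_entry *toy_register(struct toy_list *list, void *cb_ctx)
{
	struct toy_entry *e;

	for (e = list->head; e != NULL; e = e->next)
		if (list->match(e, cb_ctx) == 0)
			return e;            /* reuse the shared entry */
	e = list->create(cb_ctx);
	if (e != NULL) {
		e->next = list->head;
		list->head = e;
	}
	return e;
}

static struct toy_entry *port_create(void *cb_ctx)
{
	struct toy_entry *e = calloc(1, sizeof(*e));

	if (e != NULL)
		e->port_id = *(int *)cb_ctx;
	return e;
}

static int port_match(struct toy_entry *e, void *cb_ctx)
{
	return e->port_id != *(int *)cb_ctx; /* 0 means equal, as above */
}

static void port_remove(struct toy_entry *e)
{
	free(e);
}

int main(void)
{
	struct toy_list list = { NULL, port_create, port_match, port_remove };
	int port = 7;
	struct toy_entry *a = toy_register(&list, &port);
	struct toy_entry *b = toy_register(&list, &port);

	printf("second register reused the entry: %s\n", a == b ? "yes" : "no");
	list.remove(a);
	return 0;
}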
drivers/net/mlx5/linux/mlx5_os.c | 32 +- drivers/net/mlx5/mlx5.c | 2 +- drivers/net/mlx5/mlx5.h | 15 +- drivers/net/mlx5/mlx5_flow.h | 88 ++--- drivers/net/mlx5/mlx5_flow_dv.c | 558 ++++++++++++++--------------- drivers/net/mlx5/mlx5_rx.h | 12 +- drivers/net/mlx5/mlx5_rxq.c | 28 +- drivers/net/mlx5/mlx5_utils.c | 78 ++-- drivers/net/mlx5/mlx5_utils.h | 94 ++--- drivers/net/mlx5/windows/mlx5_os.c | 7 +- 10 files changed, 454 insertions(+), 460 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 31cc8d9eb8..9aa57e38b7 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -272,27 +272,27 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; /* The resources below are only valid with DV support. */ #ifdef HAVE_IBV_FLOW_DV_SUPPORT - /* Init port id action cache list. */ - snprintf(s, sizeof(s), "%s_port_id_action_cache", sh->ibdev_name); - mlx5_cache_list_init(&sh->port_id_action_list, s, 0, sh, + /* Init port id action mlx5 list. */ + snprintf(s, sizeof(s), "%s_port_id_action_list", sh->ibdev_name); + mlx5_list_create(&sh->port_id_action_list, s, 0, sh, flow_dv_port_id_create_cb, flow_dv_port_id_match_cb, flow_dv_port_id_remove_cb); - /* Init push vlan action cache list. */ - snprintf(s, sizeof(s), "%s_push_vlan_action_cache", sh->ibdev_name); - mlx5_cache_list_init(&sh->push_vlan_action_list, s, 0, sh, + /* Init push vlan action mlx5 list. */ + snprintf(s, sizeof(s), "%s_push_vlan_action_list", sh->ibdev_name); + mlx5_list_create(&sh->push_vlan_action_list, s, 0, sh, flow_dv_push_vlan_create_cb, flow_dv_push_vlan_match_cb, flow_dv_push_vlan_remove_cb); - /* Init sample action cache list. */ - snprintf(s, sizeof(s), "%s_sample_action_cache", sh->ibdev_name); - mlx5_cache_list_init(&sh->sample_action_list, s, 0, sh, + /* Init sample action mlx5 list. */ + snprintf(s, sizeof(s), "%s_sample_action_list", sh->ibdev_name); + mlx5_list_create(&sh->sample_action_list, s, 0, sh, flow_dv_sample_create_cb, flow_dv_sample_match_cb, flow_dv_sample_remove_cb); - /* Init dest array action cache list. */ - snprintf(s, sizeof(s), "%s_dest_array_cache", sh->ibdev_name); - mlx5_cache_list_init(&sh->dest_array_list, s, 0, sh, + /* Init dest array action mlx5 list.
*/ + snprintf(s, sizeof(s), "%s_dest_array_list", sh->ibdev_name); + mlx5_list_create(&sh->dest_array_list, s, 0, sh, flow_dv_dest_array_create_cb, flow_dv_dest_array_match_cb, flow_dv_dest_array_remove_cb); @@ -500,8 +500,8 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv) mlx5_release_tunnel_hub(sh, priv->dev_port); sh->tunnel_hub = NULL; } - mlx5_cache_list_destroy(&sh->port_id_action_list); - mlx5_cache_list_destroy(&sh->push_vlan_action_list); + mlx5_list_destroy(&sh->port_id_action_list); + mlx5_list_destroy(&sh->push_vlan_action_list); mlx5_free_table_hash_list(priv); } @@ -1702,7 +1702,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - mlx5_cache_list_init(&priv->hrxqs, "hrxq", 0, eth_dev, + mlx5_list_create(&priv->hrxqs, "hrxq", 0, eth_dev, mlx5_hrxq_create_cb, mlx5_hrxq_match_cb, mlx5_hrxq_remove_cb); @@ -1761,7 +1761,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, mlx5_drop_action_destroy(eth_dev); if (own_domain_id) claim_zero(rte_eth_switch_domain_free(priv->domain_id)); - mlx5_cache_list_destroy(&priv->hrxqs); + mlx5_list_destroy(&priv->hrxqs); mlx5_free(priv); if (eth_dev != NULL) eth_dev->data->dev_private = NULL; diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index fcfc3dcdca..9aade013c5 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1611,7 +1611,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) if (ret) DRV_LOG(WARNING, "port %u some flows still remain", dev->data->port_id); - mlx5_cache_list_destroy(&priv->hrxqs); + mlx5_list_destroy(&priv->hrxqs); /* * Free the shared context in last turn, because the cleanup * routines above may use some shared fields, like diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 380d35d420..58646da331 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -79,7 +79,7 @@ enum mlx5_flow_type { MLX5_FLOW_TYPE_MAXI, }; -/* Hash and cache list callback context. */ +/* Hlist and list callback context. */ struct mlx5_flow_cb_ctx { struct rte_eth_dev *dev; struct rte_flow_error *error; @@ -1114,10 +1114,10 @@ struct mlx5_dev_ctx_shared { struct mlx5_hlist *encaps_decaps; /* Encap/decap action hash list. */ struct mlx5_hlist *modify_cmds; struct mlx5_hlist *tag_table; - struct mlx5_cache_list port_id_action_list; /* Port ID action cache. */ - struct mlx5_cache_list push_vlan_action_list; /* Push VLAN actions. */ - struct mlx5_cache_list sample_action_list; /* List of sample actions. */ - struct mlx5_cache_list dest_array_list; + struct mlx5_list port_id_action_list; /* Port ID action list. */ + struct mlx5_list push_vlan_action_list; /* Push VLAN actions. */ + struct mlx5_list sample_action_list; /* List of sample actions. */ + struct mlx5_list dest_array_list; /* List of destination array actions. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ void *default_miss_action; /* Default miss action. */ @@ -1221,7 +1221,7 @@ struct mlx5_ind_table_obj { /* Hash Rx queue. */ __extension__ struct mlx5_hrxq { - struct mlx5_cache_entry entry; /* Cache entry. */ + struct mlx5_list_entry entry; /* List entry. */ uint32_t standalone:1; /* This object used in shared action. */ struct mlx5_ind_table_obj *ind_table; /* Indirection table. */ RTE_STD_C11 @@ -1359,7 +1359,7 @@ struct mlx5_priv { struct mlx5_obj_ops obj_ops; /* HW objects operations. */ LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */ LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */ - struct mlx5_cache_list hrxqs; /* Hash Rx queues. 
*/ + struct mlx5_list hrxqs; /* Hash Rx queues. */ LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */ LIST_HEAD(txqobj, mlx5_txq_obj) txqsobj; /* Verbs/DevX Tx queues. */ /* Indirection tables. */ @@ -1369,7 +1369,6 @@ struct mlx5_priv { /**< Verbs modify header action object. */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ uint8_t max_lro_msg_size; - /* Tags resources cache. */ uint32_t link_speed_capa; /* Link speed capabilities. */ struct mlx5_xstats_ctrl xstats_ctrl; /* Extended stats control. */ struct mlx5_stats_ctrl stats_ctrl; /* Stats control. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 81c95e0beb..4dec703366 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -467,7 +467,7 @@ struct mlx5_flow_dv_match_params { /* Matcher structure. */ struct mlx5_flow_dv_matcher { - struct mlx5_cache_entry entry; /**< Pointer to the next element. */ + struct mlx5_list_entry entry; /**< Pointer to the next element. */ struct mlx5_flow_tbl_resource *tbl; /**< Pointer to the table(group) the matcher associated with. */ void *matcher_object; /**< Pointer to DV matcher */ @@ -547,7 +547,7 @@ struct mlx5_flow_dv_jump_tbl_resource { /* Port ID resource structure. */ struct mlx5_flow_dv_port_id_action_resource { - struct mlx5_cache_entry entry; + struct mlx5_list_entry entry; void *action; /**< Action object. */ uint32_t port_id; /**< Port ID value. */ uint32_t idx; /**< Indexed pool memory index. */ @@ -555,7 +555,7 @@ struct mlx5_flow_dv_port_id_action_resource { /* Push VLAN action resource structure */ struct mlx5_flow_dv_push_vlan_action_resource { - struct mlx5_cache_entry entry; /* Cache entry. */ + struct mlx5_list_entry entry; /* Cache entry. */ void *action; /**< Action object. */ uint8_t ft_type; /**< Flow table type, Rx, Tx or FDB. */ rte_be32_t vlan_tag; /**< VLAN tag value. */ @@ -590,7 +590,7 @@ struct mlx5_flow_tbl_data_entry { /**< hash list entry, 64-bits key inside. */ struct mlx5_flow_tbl_resource tbl; /**< flow table resource. */ - struct mlx5_cache_list matchers; + struct mlx5_list matchers; /**< matchers' header associated with the flow table. */ struct mlx5_flow_dv_jump_tbl_resource jump; /**< jump resource, at most one for each table created. */ @@ -631,7 +631,7 @@ struct mlx5_flow_sub_actions_idx { /* Sample action resource structure. */ struct mlx5_flow_dv_sample_resource { - struct mlx5_cache_entry entry; /**< Cache entry. */ + struct mlx5_list_entry entry; /**< Cache entry. */ union { void *verbs_action; /**< Verbs sample action object. */ void **sub_actions; /**< Sample sub-action array. */ @@ -653,7 +653,7 @@ struct mlx5_flow_dv_sample_resource { /* Destination array action resource structure. */ struct mlx5_flow_dv_dest_array_resource { - struct mlx5_cache_entry entry; /**< Cache entry. */ + struct mlx5_list_entry entry; /**< Cache entry. */ uint32_t idx; /** Destination array action object index. */ uint8_t ft_type; /** Flow Table Type */ uint8_t num_of_dest; /**< Number of destination actions. 
*/ @@ -1619,43 +1619,45 @@ struct mlx5_hlist_entry *flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, void flow_dv_encap_decap_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry); -int flow_dv_matcher_match_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *ctx); -struct mlx5_cache_entry *flow_dv_matcher_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *ctx); -void flow_dv_matcher_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); - -int flow_dv_port_id_match_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -struct mlx5_cache_entry *flow_dv_port_id_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -void flow_dv_port_id_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); - -int flow_dv_push_vlan_match_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -struct mlx5_cache_entry *flow_dv_push_vlan_create_cb - (struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -void flow_dv_push_vlan_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); - -int flow_dv_sample_match_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -struct mlx5_cache_entry *flow_dv_sample_create_cb - (struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -void flow_dv_sample_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); - -int flow_dv_dest_array_match_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -struct mlx5_cache_entry *flow_dv_dest_array_create_cb - (struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *cb_ctx); -void flow_dv_dest_array_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); +int flow_dv_matcher_match_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *ctx); +struct mlx5_list_entry *flow_dv_matcher_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, + void *ctx); +void flow_dv_matcher_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); + +int flow_dv_port_id_match_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); +struct mlx5_list_entry *flow_dv_port_id_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, + void *cb_ctx); +void flow_dv_port_id_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); + +int flow_dv_push_vlan_match_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); +struct mlx5_list_entry *flow_dv_push_vlan_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, + void *cb_ctx); +void flow_dv_push_vlan_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); + +int flow_dv_sample_match_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); +struct mlx5_list_entry *flow_dv_sample_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, + void *cb_ctx); +void flow_dv_sample_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); + +int flow_dv_dest_array_match_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); +struct mlx5_list_entry *flow_dv_dest_array_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, + void *cb_ctx); +void flow_dv_dest_array_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); struct mlx5_aso_age_action 
*flow_aso_age_get_by_idx(struct rte_eth_dev *dev, uint32_t age_idx); int flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index a7c1cf05da..897bcf52a6 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3601,18 +3601,17 @@ flow_dv_encap_decap_match_cb(struct mlx5_hlist *list __rte_unused, uint64_t key __rte_unused, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; - struct mlx5_flow_dv_encap_decap_resource *resource = ctx->data; - struct mlx5_flow_dv_encap_decap_resource *cache_resource; - - cache_resource = container_of(entry, - struct mlx5_flow_dv_encap_decap_resource, - entry); - if (resource->reformat_type == cache_resource->reformat_type && - resource->ft_type == cache_resource->ft_type && - resource->flags == cache_resource->flags && - resource->size == cache_resource->size && + struct mlx5_flow_dv_encap_decap_resource *ctx_resource = ctx->data; + struct mlx5_flow_dv_encap_decap_resource *resource; + + resource = container_of(entry, struct mlx5_flow_dv_encap_decap_resource, + entry); + if (resource->reformat_type == ctx_resource->reformat_type && + resource->ft_type == ctx_resource->ft_type && + resource->flags == ctx_resource->flags && + resource->size == ctx_resource->size && !memcmp((const void *)resource->buf, - (const void *)cache_resource->buf, + (const void *)ctx_resource->buf, resource->size)) return 0; return -1; @@ -3639,31 +3638,30 @@ flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, struct mlx5_dev_ctx_shared *sh = list->ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5dv_dr_domain *domain; - struct mlx5_flow_dv_encap_decap_resource *resource = ctx->data; - struct mlx5_flow_dv_encap_decap_resource *cache_resource; + struct mlx5_flow_dv_encap_decap_resource *ctx_resource = ctx->data; + struct mlx5_flow_dv_encap_decap_resource *resource; uint32_t idx; int ret; - if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) domain = sh->fdb_domain; - else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) + else if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) domain = sh->rx_domain; else domain = sh->tx_domain; /* Register new encap/decap resource. 
*/ - cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], - &idx); - if (!cache_resource) { + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], &idx); + if (!resource) { rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot allocate resource memory"); return NULL; } - *cache_resource = *resource; - cache_resource->idx = idx; - ret = mlx5_flow_os_create_flow_action_packet_reformat - (sh->ctx, domain, cache_resource, - &cache_resource->action); + *resource = *ctx_resource; + resource->idx = idx; + ret = mlx5_flow_os_create_flow_action_packet_reformat(sh->ctx, domain, + resource, + &resource->action); if (ret) { mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], idx); rte_flow_error_set(ctx->error, ENOMEM, @@ -3672,7 +3670,7 @@ flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, return NULL; } - return &cache_resource->entry; + return &resource->entry; } /** @@ -3776,8 +3774,8 @@ flow_dv_jump_tbl_resource_register } int -flow_dv_port_id_match_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry, void *cb_ctx) +flow_dv_port_id_match_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_port_id_action_resource *ref = ctx->data; @@ -3787,30 +3785,30 @@ flow_dv_port_id_match_cb(struct mlx5_cache_list *list __rte_unused, return ref->port_id != res->port_id; } -struct mlx5_cache_entry * -flow_dv_port_id_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry __rte_unused, +struct mlx5_list_entry * +flow_dv_port_id_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = list->ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_port_id_action_resource *ref = ctx->data; - struct mlx5_flow_dv_port_id_action_resource *cache; + struct mlx5_flow_dv_port_id_action_resource *resource; uint32_t idx; int ret; /* Register new port id action resource. */ - cache = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PORT_ID], &idx); - if (!cache) { + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PORT_ID], &idx); + if (!resource) { rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate port_id action cache memory"); + "cannot allocate port_id action memory"); return NULL; } - *cache = *ref; + *resource = *ref; ret = mlx5_flow_os_create_flow_action_dest_port(sh->fdb_domain, ref->port_id, - &cache->action); + &resource->action); if (ret) { mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PORT_ID], idx); rte_flow_error_set(ctx->error, ENOMEM, @@ -3818,8 +3816,8 @@ flow_dv_port_id_create_cb(struct mlx5_cache_list *list, "cannot create action"); return NULL; } - cache->idx = idx; - return &cache->entry; + resource->idx = idx; + return &resource->entry; } /** @@ -3827,8 +3825,8 @@ flow_dv_port_id_create_cb(struct mlx5_cache_list *list, * * @param[in, out] dev * Pointer to rte_eth_dev structure. - * @param[in, out] resource - * Pointer to port ID action resource. + * @param[in, out] ref + * Pointer to port ID action resource reference. * @parm[in, out] dev_flow * Pointer to the dev_flow. 
* @param[out] error @@ -3840,30 +3838,30 @@ flow_dv_port_id_create_cb(struct mlx5_cache_list *list, static int flow_dv_port_id_action_resource_register (struct rte_eth_dev *dev, - struct mlx5_flow_dv_port_id_action_resource *resource, + struct mlx5_flow_dv_port_id_action_resource *ref, struct mlx5_flow *dev_flow, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_cache_entry *entry; - struct mlx5_flow_dv_port_id_action_resource *cache; + struct mlx5_list_entry *entry; + struct mlx5_flow_dv_port_id_action_resource *resource; struct mlx5_flow_cb_ctx ctx = { .error = error, - .data = resource, + .data = ref, }; - entry = mlx5_cache_register(&priv->sh->port_id_action_list, &ctx); + entry = mlx5_list_register(&priv->sh->port_id_action_list, &ctx); if (!entry) return -rte_errno; - cache = container_of(entry, typeof(*cache), entry); - dev_flow->dv.port_id_action = cache; - dev_flow->handle->rix_port_id_action = cache->idx; + resource = container_of(entry, typeof(*resource), entry); + dev_flow->dv.port_id_action = resource; + dev_flow->handle->rix_port_id_action = resource->idx; return 0; } int -flow_dv_push_vlan_match_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry, void *cb_ctx) +flow_dv_push_vlan_match_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_push_vlan_action_resource *ref = ctx->data; @@ -3873,28 +3871,28 @@ flow_dv_push_vlan_match_cb(struct mlx5_cache_list *list __rte_unused, return ref->vlan_tag != res->vlan_tag || ref->ft_type != res->ft_type; } -struct mlx5_cache_entry * -flow_dv_push_vlan_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry __rte_unused, +struct mlx5_list_entry * +flow_dv_push_vlan_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = list->ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_push_vlan_action_resource *ref = ctx->data; - struct mlx5_flow_dv_push_vlan_action_resource *cache; + struct mlx5_flow_dv_push_vlan_action_resource *resource; struct mlx5dv_dr_domain *domain; uint32_t idx; int ret; /* Register new port id action resource. */ - cache = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PUSH_VLAN], &idx); - if (!cache) { + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PUSH_VLAN], &idx); + if (!resource) { rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate push_vlan action cache memory"); + "cannot allocate push_vlan action memory"); return NULL; } - *cache = *ref; + *resource = *ref; if (ref->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) domain = sh->fdb_domain; else if (ref->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) @@ -3902,7 +3900,7 @@ flow_dv_push_vlan_create_cb(struct mlx5_cache_list *list, else domain = sh->tx_domain; ret = mlx5_flow_os_create_flow_action_push_vlan(domain, ref->vlan_tag, - &cache->action); + &resource->action); if (ret) { mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PUSH_VLAN], idx); rte_flow_error_set(ctx->error, ENOMEM, @@ -3910,8 +3908,8 @@ flow_dv_push_vlan_create_cb(struct mlx5_cache_list *list, "cannot create push vlan action"); return NULL; } - cache->idx = idx; - return &cache->entry; + resource->idx = idx; + return &resource->entry; } /** @@ -3919,8 +3917,8 @@ flow_dv_push_vlan_create_cb(struct mlx5_cache_list *list, * * @param [in, out] dev * Pointer to rte_eth_dev structure. 
- * @param[in, out] resource - * Pointer to port ID action resource. + * @param[in, out] ref + * Pointer to port ID action resource reference. * @parm[in, out] dev_flow * Pointer to the dev_flow. * @param[out] error @@ -3932,25 +3930,25 @@ flow_dv_push_vlan_create_cb(struct mlx5_cache_list *list, static int flow_dv_push_vlan_action_resource_register (struct rte_eth_dev *dev, - struct mlx5_flow_dv_push_vlan_action_resource *resource, + struct mlx5_flow_dv_push_vlan_action_resource *ref, struct mlx5_flow *dev_flow, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_push_vlan_action_resource *cache; - struct mlx5_cache_entry *entry; + struct mlx5_flow_dv_push_vlan_action_resource *resource; + struct mlx5_list_entry *entry; struct mlx5_flow_cb_ctx ctx = { .error = error, - .data = resource, + .data = ref, }; - entry = mlx5_cache_register(&priv->sh->push_vlan_action_list, &ctx); + entry = mlx5_list_register(&priv->sh->push_vlan_action_list, &ctx); if (!entry) return -rte_errno; - cache = container_of(entry, typeof(*cache), entry); + resource = container_of(entry, typeof(*resource), entry); - dev_flow->handle->dvh.rix_push_vlan = cache->idx; - dev_flow->dv.push_vlan_res = cache; + dev_flow->handle->dvh.rix_push_vlan = resource->idx; + dev_flow->dv.push_vlan_res = resource; return 0; } @@ -9913,13 +9911,13 @@ flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) return NULL; } } - MKSTR(matcher_name, "%s_%s_%u_%u_matcher_cache", + MKSTR(matcher_name, "%s_%s_%u_%u_matcher_list", key.is_fdb ? "FDB" : "NIC", key.is_egress ? "egress" : "ingress", key.level, key.id); - mlx5_cache_list_init(&tbl_data->matchers, matcher_name, 0, sh, - flow_dv_matcher_create_cb, - flow_dv_matcher_match_cb, - flow_dv_matcher_remove_cb); + mlx5_list_create(&tbl_data->matchers, matcher_name, 0, sh, + flow_dv_matcher_create_cb, + flow_dv_matcher_match_cb, + flow_dv_matcher_remove_cb); return &tbl_data->entry; } @@ -10047,7 +10045,7 @@ flow_dv_tbl_remove_cb(struct mlx5_hlist *list, tbl_data->tunnel->tunnel_id : 0, tbl_data->group_id); } - mlx5_cache_list_destroy(&tbl_data->matchers); + mlx5_list_destroy(&tbl_data->matchers); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], tbl_data->idx); } @@ -10075,8 +10073,8 @@ flow_dv_tbl_resource_release(struct mlx5_dev_ctx_shared *sh, } int -flow_dv_matcher_match_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry, void *cb_ctx) +flow_dv_matcher_match_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_matcher *ref = ctx->data; @@ -10089,15 +10087,15 @@ flow_dv_matcher_match_cb(struct mlx5_cache_list *list __rte_unused, (const void *)ref->mask.buf, ref->mask.size); } -struct mlx5_cache_entry * -flow_dv_matcher_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry __rte_unused, +struct mlx5_list_entry * +flow_dv_matcher_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, void *cb_ctx) { struct mlx5_dev_ctx_shared *sh = list->ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_matcher *ref = ctx->data; - struct mlx5_flow_dv_matcher *cache; + struct mlx5_flow_dv_matcher *resource; struct mlx5dv_flow_matcher_attr dv_attr = { .type = IBV_FLOW_ATTR_NORMAL, .match_mask = (void *)&ref->mask, @@ -10106,29 +10104,30 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list, typeof(*tbl), tbl); int ret; - cache = mlx5_malloc(MLX5_MEM_ZERO, 
sizeof(*cache), 0, SOCKET_ID_ANY); - if (!cache) { + resource = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*resource), 0, + SOCKET_ID_ANY); + if (!resource) { rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot create matcher"); return NULL; } - *cache = *ref; + *resource = *ref; dv_attr.match_criteria_enable = - flow_dv_matcher_enable(cache->mask.buf); + flow_dv_matcher_enable(resource->mask.buf); dv_attr.priority = ref->priority; if (tbl->is_egress) dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS; ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->tbl.obj, - &cache->matcher_object); + &resource->matcher_object); if (ret) { - mlx5_free(cache); + mlx5_free(resource); rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot create matcher"); return NULL; } - return &cache->entry; + return &resource->entry; } /** @@ -10157,8 +10156,8 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, uint32_t group_id, struct rte_flow_error *error) { - struct mlx5_cache_entry *entry; - struct mlx5_flow_dv_matcher *cache; + struct mlx5_list_entry *entry; + struct mlx5_flow_dv_matcher *resource; struct mlx5_flow_tbl_resource *tbl; struct mlx5_flow_tbl_data_entry *tbl_data; struct mlx5_flow_cb_ctx ctx = { @@ -10178,15 +10177,15 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, return -rte_errno; /* No need to refill the error info */ tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); ref->tbl = tbl; - entry = mlx5_cache_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(&tbl_data->matchers, &ctx); if (!entry) { flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot allocate ref memory"); } - cache = container_of(entry, typeof(*cache), entry); - dev_flow->handle->dvh.matcher = cache; + resource = container_of(entry, typeof(*resource), entry); + dev_flow->handle->dvh.matcher = resource; return 0; } @@ -10254,15 +10253,15 @@ flow_dv_tag_resource_register struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_tag_resource *cache_resource; + struct mlx5_flow_dv_tag_resource *resource; struct mlx5_hlist_entry *entry; entry = mlx5_hlist_register(priv->sh->tag_table, tag_be24, error); if (entry) { - cache_resource = container_of - (entry, struct mlx5_flow_dv_tag_resource, entry); - dev_flow->handle->dvh.rix_tag = cache_resource->idx; - dev_flow->dv.tag_resource = cache_resource; + resource = container_of(entry, struct mlx5_flow_dv_tag_resource, + entry); + dev_flow->handle->dvh.rix_tag = resource->idx; + dev_flow->dv.tag_resource = resource; return 0; } return -rte_errno; @@ -10589,68 +10588,69 @@ flow_dv_sample_sub_actions_release(struct rte_eth_dev *dev, } int -flow_dv_sample_match_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry, void *cb_ctx) +flow_dv_sample_match_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct rte_eth_dev *dev = ctx->dev; - struct mlx5_flow_dv_sample_resource *resource = ctx->data; - struct mlx5_flow_dv_sample_resource *cache_resource = - container_of(entry, typeof(*cache_resource), entry); - - if (resource->ratio == cache_resource->ratio && - resource->ft_type == cache_resource->ft_type && - resource->ft_id == cache_resource->ft_id && - resource->set_action == cache_resource->set_action && - !memcmp((void *)&resource->sample_act, - (void 
*)&cache_resource->sample_act, + struct mlx5_flow_dv_sample_resource *ctx_resource = ctx->data; + struct mlx5_flow_dv_sample_resource *resource = container_of(entry, + typeof(*resource), + entry); + + if (ctx_resource->ratio == resource->ratio && + ctx_resource->ft_type == resource->ft_type && + ctx_resource->ft_id == resource->ft_id && + ctx_resource->set_action == resource->set_action && + !memcmp((void *)&ctx_resource->sample_act, + (void *)&resource->sample_act, sizeof(struct mlx5_flow_sub_actions_list))) { /* * Existing sample action should release the prepared * sub-actions reference counter. */ flow_dv_sample_sub_actions_release(dev, - &resource->sample_idx); + &ctx_resource->sample_idx); return 0; } return 1; } -struct mlx5_cache_entry * -flow_dv_sample_create_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry __rte_unused, +struct mlx5_list_entry * +flow_dv_sample_create_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry __rte_unused, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct rte_eth_dev *dev = ctx->dev; - struct mlx5_flow_dv_sample_resource *resource = ctx->data; - void **sample_dv_actions = resource->sub_actions; - struct mlx5_flow_dv_sample_resource *cache_resource; + struct mlx5_flow_dv_sample_resource *ctx_resource = ctx->data; + void **sample_dv_actions = ctx_resource->sub_actions; + struct mlx5_flow_dv_sample_resource *resource; struct mlx5dv_dr_flow_sampler_attr sampler_attr; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; struct mlx5_flow_tbl_resource *tbl; uint32_t idx = 0; const uint32_t next_ft_step = 1; - uint32_t next_ft_id = resource->ft_id + next_ft_step; + uint32_t next_ft_id = ctx_resource->ft_id + next_ft_step; uint8_t is_egress = 0; uint8_t is_transfer = 0; struct rte_flow_error *error = ctx->error; /* Register new sample resource. 
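 * The resource is taken from the dedicated indexed pool and initialized
 * from the reference carried in the callback context; only the embedded
 * mlx5_list_entry is handed back to the generic list code.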
*/ - cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_SAMPLE], &idx); - if (!cache_resource) { + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_SAMPLE], &idx); + if (!resource) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot allocate resource memory"); return NULL; } - *cache_resource = *resource; + *resource = *ctx_resource; /* Create normal path table level */ - if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) + if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) is_transfer = 1; - else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) + else if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_TX) is_egress = 1; tbl = flow_dv_tbl_resource_get(dev, next_ft_id, is_egress, is_transfer, @@ -10663,8 +10663,8 @@ flow_dv_sample_create_cb(struct mlx5_cache_list *list __rte_unused, "for sample"); goto error; } - cache_resource->normal_path_tbl = tbl; - if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { + resource->normal_path_tbl = tbl; + if (ctx_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { if (!sh->default_miss_action) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -10673,33 +10673,33 @@ flow_dv_sample_create_cb(struct mlx5_cache_list *list __rte_unused, "created"); goto error; } - sample_dv_actions[resource->sample_act.actions_num++] = + sample_dv_actions[ctx_resource->sample_act.actions_num++] = sh->default_miss_action; } /* Create a DR sample action */ - sampler_attr.sample_ratio = cache_resource->ratio; + sampler_attr.sample_ratio = resource->ratio; sampler_attr.default_next_table = tbl->obj; - sampler_attr.num_sample_actions = resource->sample_act.actions_num; + sampler_attr.num_sample_actions = ctx_resource->sample_act.actions_num; sampler_attr.sample_actions = (struct mlx5dv_dr_action **) &sample_dv_actions[0]; - sampler_attr.action = cache_resource->set_action; + sampler_attr.action = resource->set_action; if (mlx5_os_flow_dr_create_flow_action_sampler - (&sampler_attr, &cache_resource->verbs_action)) { + (&sampler_attr, &resource->verbs_action)) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot create sample action"); goto error; } - cache_resource->idx = idx; - cache_resource->dev = dev; - return &cache_resource->entry; + resource->idx = idx; + resource->dev = dev; + return &resource->entry; error: - if (cache_resource->ft_type != MLX5DV_FLOW_TABLE_TYPE_FDB) + if (resource->ft_type != MLX5DV_FLOW_TABLE_TYPE_FDB) flow_dv_sample_sub_actions_release(dev, - &cache_resource->sample_idx); - if (cache_resource->normal_path_tbl) + &resource->sample_idx); + if (resource->normal_path_tbl) flow_dv_tbl_resource_release(MLX5_SH(dev), - cache_resource->normal_path_tbl); + resource->normal_path_tbl); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_SAMPLE], idx); return NULL; @@ -10710,8 +10710,8 @@ flow_dv_sample_create_cb(struct mlx5_cache_list *list __rte_unused, * * @param[in, out] dev * Pointer to rte_eth_dev structure. - * @param[in] resource - * Pointer to sample resource. + * @param[in] ref + * Pointer to sample resource reference. * @parm[in, out] dev_flow * Pointer to the dev_flow. 
* @param[out] error @@ -10722,66 +10722,66 @@ flow_dv_sample_create_cb(struct mlx5_cache_list *list __rte_unused, */ static int flow_dv_sample_resource_register(struct rte_eth_dev *dev, - struct mlx5_flow_dv_sample_resource *resource, + struct mlx5_flow_dv_sample_resource *ref, struct mlx5_flow *dev_flow, struct rte_flow_error *error) { - struct mlx5_flow_dv_sample_resource *cache_resource; - struct mlx5_cache_entry *entry; + struct mlx5_flow_dv_sample_resource *resource; + struct mlx5_list_entry *entry; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_cb_ctx ctx = { .dev = dev, .error = error, - .data = resource, + .data = ref, }; - entry = mlx5_cache_register(&priv->sh->sample_action_list, &ctx); + entry = mlx5_list_register(&priv->sh->sample_action_list, &ctx); if (!entry) return -rte_errno; - cache_resource = container_of(entry, typeof(*cache_resource), entry); - dev_flow->handle->dvh.rix_sample = cache_resource->idx; - dev_flow->dv.sample_res = cache_resource; + resource = container_of(entry, typeof(*resource), entry); + dev_flow->handle->dvh.rix_sample = resource->idx; + dev_flow->dv.sample_res = resource; return 0; } int -flow_dv_dest_array_match_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry, void *cb_ctx) +flow_dv_dest_array_match_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; - struct mlx5_flow_dv_dest_array_resource *resource = ctx->data; + struct mlx5_flow_dv_dest_array_resource *ctx_resource = ctx->data; struct rte_eth_dev *dev = ctx->dev; - struct mlx5_flow_dv_dest_array_resource *cache_resource = - container_of(entry, typeof(*cache_resource), entry); + struct mlx5_flow_dv_dest_array_resource *resource = + container_of(entry, typeof(*resource), entry); uint32_t idx = 0; - if (resource->num_of_dest == cache_resource->num_of_dest && - resource->ft_type == cache_resource->ft_type && - !memcmp((void *)cache_resource->sample_act, - (void *)resource->sample_act, - (resource->num_of_dest * + if (ctx_resource->num_of_dest == resource->num_of_dest && + ctx_resource->ft_type == resource->ft_type && + !memcmp((void *)resource->sample_act, + (void *)ctx_resource->sample_act, + (ctx_resource->num_of_dest * sizeof(struct mlx5_flow_sub_actions_list)))) { /* * Existing sample action should release the prepared * sub-actions reference counter. 
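 * The references were taken while building the lookup key, and the
 * matched entry already holds its own, so the duplicates taken for the
 * key must be dropped here to keep the counters balanced.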
*/ - for (idx = 0; idx < resource->num_of_dest; idx++) + for (idx = 0; idx < ctx_resource->num_of_dest; idx++) flow_dv_sample_sub_actions_release(dev, - &resource->sample_idx[idx]); + &ctx_resource->sample_idx[idx]); return 0; } return 1; } -struct mlx5_cache_entry * -flow_dv_dest_array_create_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry __rte_unused, +struct mlx5_list_entry * +flow_dv_dest_array_create_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry __rte_unused, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct rte_eth_dev *dev = ctx->dev; - struct mlx5_flow_dv_dest_array_resource *cache_resource; - struct mlx5_flow_dv_dest_array_resource *resource = ctx->data; + struct mlx5_flow_dv_dest_array_resource *resource; + struct mlx5_flow_dv_dest_array_resource *ctx_resource = ctx->data; struct mlx5dv_dr_action_dest_attr *dest_attr[MLX5_MAX_DEST_NUM] = { 0 }; struct mlx5dv_dr_action_dest_reformat dest_reformat[MLX5_MAX_DEST_NUM]; struct mlx5_priv *priv = dev->data->dev_private; @@ -10794,23 +10794,23 @@ flow_dv_dest_array_create_cb(struct mlx5_cache_list *list __rte_unused, int ret; /* Register new destination array resource. */ - cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DEST_ARRAY], + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DEST_ARRAY], &res_idx); - if (!cache_resource) { + if (!resource) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot allocate resource memory"); return NULL; } - *cache_resource = *resource; + *resource = *ctx_resource; if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) domain = sh->fdb_domain; else if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_NIC_RX) domain = sh->rx_domain; else domain = sh->tx_domain; - for (idx = 0; idx < resource->num_of_dest; idx++) { + for (idx = 0; idx < ctx_resource->num_of_dest; idx++) { dest_attr[idx] = (struct mlx5dv_dr_action_dest_attr *) mlx5_malloc(MLX5_MEM_ZERO, sizeof(struct mlx5dv_dr_action_dest_attr), @@ -10823,7 +10823,7 @@ flow_dv_dest_array_create_cb(struct mlx5_cache_list *list __rte_unused, goto error; } dest_attr[idx]->type = MLX5DV_DR_ACTION_DEST; - sample_act = &resource->sample_act[idx]; + sample_act = &ctx_resource->sample_act[idx]; action_flags = sample_act->action_flags; switch (action_flags) { case MLX5_FLOW_ACTION_QUEUE: @@ -10854,9 +10854,9 @@ flow_dv_dest_array_create_cb(struct mlx5_cache_list *list __rte_unused, /* create a dest array actioin */ ret = mlx5_os_flow_dr_create_flow_action_dest_array (domain, - cache_resource->num_of_dest, + resource->num_of_dest, dest_attr, - &cache_resource->action); + &resource->action); if (ret) { rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -10864,19 +10864,18 @@ flow_dv_dest_array_create_cb(struct mlx5_cache_list *list __rte_unused, "cannot create destination array action"); goto error; } - cache_resource->idx = res_idx; - cache_resource->dev = dev; - for (idx = 0; idx < resource->num_of_dest; idx++) + resource->idx = res_idx; + resource->dev = dev; + for (idx = 0; idx < ctx_resource->num_of_dest; idx++) mlx5_free(dest_attr[idx]); - return &cache_resource->entry; + return &resource->entry; error: - for (idx = 0; idx < resource->num_of_dest; idx++) { + for (idx = 0; idx < ctx_resource->num_of_dest; idx++) { flow_dv_sample_sub_actions_release(dev, - &cache_resource->sample_idx[idx]); + &resource->sample_idx[idx]); if (dest_attr[idx]) mlx5_free(dest_attr[idx]); } - mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DEST_ARRAY], res_idx); return 
NULL; } @@ -10886,8 +10885,8 @@ flow_dv_dest_array_create_cb(struct mlx5_cache_list *list __rte_unused, * * @param[in, out] dev * Pointer to rte_eth_dev structure. - * @param[in] resource - * Pointer to destination array resource. + * @param[in] ref + * Pointer to destination array resource reference. * @parm[in, out] dev_flow * Pointer to the dev_flow. * @param[out] error @@ -10898,25 +10897,25 @@ flow_dv_dest_array_create_cb(struct mlx5_cache_list *list __rte_unused, */ static int flow_dv_dest_array_resource_register(struct rte_eth_dev *dev, - struct mlx5_flow_dv_dest_array_resource *resource, + struct mlx5_flow_dv_dest_array_resource *ref, struct mlx5_flow *dev_flow, struct rte_flow_error *error) { - struct mlx5_flow_dv_dest_array_resource *cache_resource; + struct mlx5_flow_dv_dest_array_resource *resource; struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_flow_cb_ctx ctx = { .dev = dev, .error = error, - .data = resource, + .data = ref, }; - entry = mlx5_cache_register(&priv->sh->dest_array_list, &ctx); + entry = mlx5_list_register(&priv->sh->dest_array_list, &ctx); if (!entry) return -rte_errno; - cache_resource = container_of(entry, typeof(*cache_resource), entry); - dev_flow->handle->dvh.rix_dest_array = cache_resource->idx; - dev_flow->dv.dest_array_res = cache_resource; + resource = container_of(entry, typeof(*resource), entry); + dev_flow->handle->dvh.rix_dest_array = resource->idx; + dev_flow->dv.dest_array_res = resource; return 0; } @@ -13345,14 +13344,15 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, } void -flow_dv_matcher_remove_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry) +flow_dv_matcher_remove_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry) { - struct mlx5_flow_dv_matcher *cache = container_of(entry, typeof(*cache), - entry); + struct mlx5_flow_dv_matcher *resource = container_of(entry, + typeof(*resource), + entry); - claim_zero(mlx5_flow_os_destroy_flow_matcher(cache->matcher_object)); - mlx5_free(cache); + claim_zero(mlx5_flow_os_destroy_flow_matcher(resource->matcher_object)); + mlx5_free(resource); } /** @@ -13376,7 +13376,7 @@ flow_dv_matcher_release(struct rte_eth_dev *dev, int ret; MLX5_ASSERT(matcher->matcher_object); - ret = mlx5_cache_unregister(&tbl->matchers, &matcher->entry); + ret = mlx5_list_unregister(&tbl->matchers, &matcher->entry); flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl->tbl); return ret; } @@ -13395,7 +13395,7 @@ flow_dv_encap_decap_remove_cb(struct mlx5_hlist *list, { struct mlx5_dev_ctx_shared *sh = list->ctx; struct mlx5_flow_dv_encap_decap_resource *res = - container_of(entry, typeof(*res), entry); + container_of(entry, typeof(*res), entry); claim_zero(mlx5_flow_os_destroy_flow_action(res->action)); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], res->idx); @@ -13417,15 +13417,14 @@ flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, uint32_t encap_decap_idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_encap_decap_resource *cache_resource; + struct mlx5_flow_dv_encap_decap_resource *resource; - cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_DECAP_ENCAP], - encap_decap_idx); - if (!cache_resource) + resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_DECAP_ENCAP], + encap_decap_idx); + if (!resource) return 0; - MLX5_ASSERT(cache_resource->action); - return mlx5_hlist_unregister(priv->sh->encaps_decaps, - &cache_resource->entry); 
+ MLX5_ASSERT(resource->action); + return mlx5_hlist_unregister(priv->sh->encaps_decaps, &resource->entry); } /** @@ -13487,15 +13486,15 @@ flow_dv_modify_hdr_resource_release(struct rte_eth_dev *dev, } void -flow_dv_port_id_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry) +flow_dv_port_id_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = list->ctx; - struct mlx5_flow_dv_port_id_action_resource *cache = - container_of(entry, typeof(*cache), entry); + struct mlx5_flow_dv_port_id_action_resource *resource = + container_of(entry, typeof(*resource), entry); - claim_zero(mlx5_flow_os_destroy_flow_action(cache->action)); - mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PORT_ID], cache->idx); + claim_zero(mlx5_flow_os_destroy_flow_action(resource->action)); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PORT_ID], resource->idx); } /** @@ -13514,14 +13513,14 @@ flow_dv_port_id_action_resource_release(struct rte_eth_dev *dev, uint32_t port_id) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_port_id_action_resource *cache; + struct mlx5_flow_dv_port_id_action_resource *resource; - cache = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PORT_ID], port_id); - if (!cache) + resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PORT_ID], port_id); + if (!resource) return 0; - MLX5_ASSERT(cache->action); - return mlx5_cache_unregister(&priv->sh->port_id_action_list, - &cache->entry); + MLX5_ASSERT(resource->action); + return mlx5_list_unregister(&priv->sh->port_id_action_list, + &resource->entry); } /** @@ -13544,15 +13543,15 @@ flow_dv_shared_rss_action_release(struct rte_eth_dev *dev, uint32_t srss) } void -flow_dv_push_vlan_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry) +flow_dv_push_vlan_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry) { struct mlx5_dev_ctx_shared *sh = list->ctx; - struct mlx5_flow_dv_push_vlan_action_resource *cache = - container_of(entry, typeof(*cache), entry); + struct mlx5_flow_dv_push_vlan_action_resource *resource = + container_of(entry, typeof(*resource), entry); - claim_zero(mlx5_flow_os_destroy_flow_action(cache->action)); - mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PUSH_VLAN], cache->idx); + claim_zero(mlx5_flow_os_destroy_flow_action(resource->action)); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PUSH_VLAN], resource->idx); } /** @@ -13571,15 +13570,15 @@ flow_dv_push_vlan_action_resource_release(struct rte_eth_dev *dev, struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_push_vlan_action_resource *cache; + struct mlx5_flow_dv_push_vlan_action_resource *resource; uint32_t idx = handle->dvh.rix_push_vlan; - cache = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PUSH_VLAN], idx); - if (!cache) + resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PUSH_VLAN], idx); + if (!resource) return 0; - MLX5_ASSERT(cache->action); - return mlx5_cache_unregister(&priv->sh->push_vlan_action_list, - &cache->entry); + MLX5_ASSERT(resource->action); + return mlx5_list_unregister(&priv->sh->push_vlan_action_list, + &resource->entry); } /** @@ -13616,26 +13615,24 @@ flow_dv_fate_resource_release(struct rte_eth_dev *dev, } void -flow_dv_sample_remove_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry) +flow_dv_sample_remove_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry) { - struct mlx5_flow_dv_sample_resource *cache_resource = - container_of(entry, typeof(*cache_resource), 
entry); - struct rte_eth_dev *dev = cache_resource->dev; + struct mlx5_flow_dv_sample_resource *resource = container_of(entry, + typeof(*resource), + entry); + struct rte_eth_dev *dev = resource->dev; struct mlx5_priv *priv = dev->data->dev_private; - if (cache_resource->verbs_action) + if (resource->verbs_action) claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->verbs_action)); - if (cache_resource->normal_path_tbl) + (resource->verbs_action)); + if (resource->normal_path_tbl) flow_dv_tbl_resource_release(MLX5_SH(dev), - cache_resource->normal_path_tbl); - flow_dv_sample_sub_actions_release(dev, - &cache_resource->sample_idx); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_SAMPLE], - cache_resource->idx); - DRV_LOG(DEBUG, "sample resource %p: removed", - (void *)cache_resource); + resource->normal_path_tbl); + flow_dv_sample_sub_actions_release(dev, &resource->sample_idx); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_SAMPLE], resource->idx); + DRV_LOG(DEBUG, "sample resource %p: removed", (void *)resource); } /** @@ -13654,38 +13651,36 @@ flow_dv_sample_resource_release(struct rte_eth_dev *dev, struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_sample_resource *cache_resource; + struct mlx5_flow_dv_sample_resource *resource; - cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_SAMPLE], - handle->dvh.rix_sample); - if (!cache_resource) + resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_SAMPLE], + handle->dvh.rix_sample); + if (!resource) return 0; - MLX5_ASSERT(cache_resource->verbs_action); - return mlx5_cache_unregister(&priv->sh->sample_action_list, - &cache_resource->entry); + MLX5_ASSERT(resource->verbs_action); + return mlx5_list_unregister(&priv->sh->sample_action_list, + &resource->entry); } void -flow_dv_dest_array_remove_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry) +flow_dv_dest_array_remove_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry) { - struct mlx5_flow_dv_dest_array_resource *cache_resource = - container_of(entry, typeof(*cache_resource), entry); - struct rte_eth_dev *dev = cache_resource->dev; + struct mlx5_flow_dv_dest_array_resource *resource = + container_of(entry, typeof(*resource), entry); + struct rte_eth_dev *dev = resource->dev; struct mlx5_priv *priv = dev->data->dev_private; uint32_t i = 0; - MLX5_ASSERT(cache_resource->action); - if (cache_resource->action) - claim_zero(mlx5_flow_os_destroy_flow_action - (cache_resource->action)); - for (; i < cache_resource->num_of_dest; i++) + MLX5_ASSERT(resource->action); + if (resource->action) + claim_zero(mlx5_flow_os_destroy_flow_action(resource->action)); + for (; i < resource->num_of_dest; i++) flow_dv_sample_sub_actions_release(dev, - &cache_resource->sample_idx[i]); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_DEST_ARRAY], - cache_resource->idx); + &resource->sample_idx[i]); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_DEST_ARRAY], resource->idx); DRV_LOG(DEBUG, "destination array resource %p: removed", - (void *)cache_resource); + (void *)resource); } /** @@ -13704,15 +13699,15 @@ flow_dv_dest_array_resource_release(struct rte_eth_dev *dev, struct mlx5_flow_handle *handle) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_flow_dv_dest_array_resource *cache; + struct mlx5_flow_dv_dest_array_resource *resource; - cache = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_DEST_ARRAY], - handle->dvh.rix_dest_array); - if (!cache) + resource = 
mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_DEST_ARRAY], + handle->dvh.rix_dest_array); + if (!resource) return 0; - MLX5_ASSERT(cache->action); - return mlx5_cache_unregister(&priv->sh->dest_array_list, - &cache->entry); + MLX5_ASSERT(resource->action); + return mlx5_list_unregister(&priv->sh->dest_array_list, + &resource->entry); } static void @@ -14555,7 +14550,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, if (sub_policy->color_matcher[i]) { tbl = container_of(sub_policy->color_matcher[i]->tbl, typeof(*tbl), tbl); - mlx5_cache_unregister(&tbl->matchers, + mlx5_list_unregister(&tbl->matchers, &sub_policy->color_matcher[i]->entry); sub_policy->color_matcher[i] = NULL; } @@ -15289,8 +15284,8 @@ flow_dv_destroy_mtr_drop_tbls(struct rte_eth_dev *dev) if (mtrmng->def_matcher[i]) { tbl = container_of(mtrmng->def_matcher[i]->tbl, struct mlx5_flow_tbl_data_entry, tbl); - mlx5_cache_unregister(&tbl->matchers, - &mtrmng->def_matcher[i]->entry); + mlx5_list_unregister(&tbl->matchers, + &mtrmng->def_matcher[i]->entry); mtrmng->def_matcher[i] = NULL; } for (j = 0; j < MLX5_REG_BITS; j++) { @@ -15299,8 +15294,8 @@ flow_dv_destroy_mtr_drop_tbls(struct rte_eth_dev *dev) container_of(mtrmng->drop_matcher[i][j]->tbl, struct mlx5_flow_tbl_data_entry, tbl); - mlx5_cache_unregister(&tbl->matchers, - &mtrmng->drop_matcher[i][j]->entry); + mlx5_list_unregister(&tbl->matchers, + &mtrmng->drop_matcher[i][j]->entry); mtrmng->drop_matcher[i][j] = NULL; } } @@ -15396,7 +15391,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, bool is_default_policy, struct rte_flow_error *error) { - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc; struct mlx5_flow_dv_matcher matcher = { .mask = { @@ -15432,7 +15427,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, matcher.mask.size); - entry = mlx5_cache_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(&tbl_data->matchers, &ctx); if (!entry) { DRV_LOG(ERR, "Failed to register meter drop matcher."); return -1; @@ -15795,7 +15790,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, 0, &error); uint32_t mtr_id_mask = (UINT32_C(1) << mtrmng->max_mtr_bits) - 1; uint8_t mtr_id_offset = priv->mtr_reg_share ? MLX5_MTR_COLOR_BITS : 0; - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_flow_dv_matcher matcher = { .mask = { .size = sizeof(matcher.mask.buf) - @@ -15841,7 +15836,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, matcher.crc = rte_raw_cksum ((const void *)matcher.mask.buf, matcher.mask.size); - entry = mlx5_cache_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(&tbl_data->matchers, &ctx); if (!entry) { DRV_LOG(ERR, "Failed to register meter " "drop default matcher."); @@ -15878,7 +15873,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, matcher.crc = rte_raw_cksum ((const void *)matcher.mask.buf, matcher.mask.size); - entry = mlx5_cache_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(&tbl_data->matchers, &ctx); if (!entry) { DRV_LOG(ERR, "Failed to register meter drop matcher."); @@ -16064,7 +16059,6 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, return NULL; } - /** * Destroy the sub policy table with RX queue. 
* diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 1b264e5994..3dcc71d51d 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -222,13 +222,13 @@ int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, struct mlx5_ind_table_obj *ind_tbl, uint16_t *queues, const uint32_t queues_n, bool standalone); -struct mlx5_cache_entry *mlx5_hrxq_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry __rte_unused, void *cb_ctx); -int mlx5_hrxq_match_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, +struct mlx5_list_entry *mlx5_hrxq_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, void *cb_ctx); +int mlx5_hrxq_match_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); -void mlx5_hrxq_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); +void mlx5_hrxq_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, struct mlx5_flow_rss_desc *rss_desc); int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index bb9a908087..8395332507 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2093,7 +2093,7 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, * Match an Rx Hash queue. * * @param list - * Cache list pointer. + * mlx5 list pointer. * @param entry * Hash queue entry pointer. * @param cb_ctx @@ -2103,8 +2103,8 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, * 0 if match, none zero if not match. */ int -mlx5_hrxq_match_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, +mlx5_hrxq_match_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx) { struct rte_eth_dev *dev = list->ctx; @@ -2242,13 +2242,13 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq) * Index to Hash Rx queue to release. * * @param list - * Cache list pointer. + * mlx5 list pointer. * @param entry * Hash queue entry pointer. */ void -mlx5_hrxq_remove_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry) +mlx5_hrxq_remove_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry) { struct rte_eth_dev *dev = list->ctx; struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry); @@ -2305,7 +2305,7 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev, * Create an Rx Hash queue. * * @param list - * Cache list pointer. + * mlx5 list pointer. * @param entry * Hash queue entry pointer. * @param cb_ctx @@ -2314,9 +2314,9 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev, * @return * queue entry on success, NULL otherwise. 
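 *
 * The callback builds the hash Rx queue from the rss_desc passed via
 * cb_ctx (see __mlx5_hrxq_create() above); the list itself stores only
 * the embedded entry.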
*/ -struct mlx5_cache_entry * -mlx5_hrxq_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry __rte_unused, +struct mlx5_list_entry * +mlx5_hrxq_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, void *cb_ctx) { struct rte_eth_dev *dev = list->ctx; @@ -2344,7 +2344,7 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_hrxq *hrxq; - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_flow_cb_ctx ctx = { .data = rss_desc, }; @@ -2352,7 +2352,7 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, if (rss_desc->shared_rss) { hrxq = __mlx5_hrxq_create(dev, rss_desc); } else { - entry = mlx5_cache_register(&priv->hrxqs, &ctx); + entry = mlx5_list_register(&priv->hrxqs, &ctx); if (!entry) return 0; hrxq = container_of(entry, typeof(*hrxq), entry); @@ -2382,7 +2382,7 @@ int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx) if (!hrxq) return 0; if (!hrxq->standalone) - return mlx5_cache_unregister(&priv->hrxqs, &hrxq->entry); + return mlx5_list_unregister(&priv->hrxqs, &hrxq->entry); __mlx5_hrxq_remove(dev, hrxq); return 0; } @@ -2470,7 +2470,7 @@ mlx5_hrxq_verify(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - return mlx5_cache_list_get_entry_num(&priv->hrxqs); + return mlx5_list_get_entry_num(&priv->hrxqs); } /** diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 0ed279e162..a2b5accb84 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -9,29 +9,29 @@ #include "mlx5_utils.h" -/********************* Cache list ************************/ +/********************* MLX5 list ************************/ -static struct mlx5_cache_entry * -mlx5_clist_default_create_cb(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry __rte_unused, +static struct mlx5_list_entry * +mlx5_list_default_create_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, void *ctx __rte_unused) { return mlx5_malloc(MLX5_MEM_ZERO, list->entry_sz, 0, SOCKET_ID_ANY); } static void -mlx5_clist_default_remove_cb(struct mlx5_cache_list *list __rte_unused, - struct mlx5_cache_entry *entry) +mlx5_list_default_remove_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry) { mlx5_free(entry); } int -mlx5_cache_list_init(struct mlx5_cache_list *list, const char *name, +mlx5_list_create(struct mlx5_list *list, const char *name, uint32_t entry_size, void *ctx, - mlx5_cache_create_cb cb_create, - mlx5_cache_match_cb cb_match, - mlx5_cache_remove_cb cb_remove) + mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove) { MLX5_ASSERT(list); if (!cb_match || (!cb_create ^ !cb_remove)) @@ -40,19 +40,19 @@ mlx5_cache_list_init(struct mlx5_cache_list *list, const char *name, snprintf(list->name, sizeof(list->name), "%s", name); list->entry_sz = entry_size; list->ctx = ctx; - list->cb_create = cb_create ? cb_create : mlx5_clist_default_create_cb; + list->cb_create = cb_create ? cb_create : mlx5_list_default_create_cb; list->cb_match = cb_match; - list->cb_remove = cb_remove ? cb_remove : mlx5_clist_default_remove_cb; + list->cb_remove = cb_remove ? 
cb_remove : mlx5_list_default_remove_cb; rte_rwlock_init(&list->lock); - DRV_LOG(DEBUG, "Cache list %s initialized.", list->name); + DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name); LIST_INIT(&list->head); return 0; } -static struct mlx5_cache_entry * -__cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse) +static struct mlx5_list_entry * +__list_lookup(struct mlx5_list *list, void *ctx, bool reuse) { - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; LIST_FOREACH(entry, &list->head, next) { if (list->cb_match(list, entry, ctx)) @@ -60,7 +60,7 @@ __cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse) if (reuse) { __atomic_add_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED); - DRV_LOG(DEBUG, "Cache list %s entry %p ref++: %u.", + DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.", list->name, (void *)entry, entry->ref_cnt); } break; @@ -68,33 +68,33 @@ __cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse) return entry; } -static struct mlx5_cache_entry * -cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse) +static struct mlx5_list_entry * +list_lookup(struct mlx5_list *list, void *ctx, bool reuse) { - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; rte_rwlock_read_lock(&list->lock); - entry = __cache_lookup(list, ctx, reuse); + entry = __list_lookup(list, ctx, reuse); rte_rwlock_read_unlock(&list->lock); return entry; } -struct mlx5_cache_entry * -mlx5_cache_lookup(struct mlx5_cache_list *list, void *ctx) +struct mlx5_list_entry * +mlx5_list_lookup(struct mlx5_list *list, void *ctx) { - return cache_lookup(list, ctx, false); + return list_lookup(list, ctx, false); } -struct mlx5_cache_entry * -mlx5_cache_register(struct mlx5_cache_list *list, void *ctx) +struct mlx5_list_entry * +mlx5_list_register(struct mlx5_list *list, void *ctx) { - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; uint32_t prev_gen_cnt = 0; MLX5_ASSERT(list); prev_gen_cnt = __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE); /* Lookup with read lock, reuse if found. */ - entry = cache_lookup(list, ctx, true); + entry = list_lookup(list, ctx, true); if (entry) return entry; /* Not found, append with write lock - block read from other threads. */ @@ -102,13 +102,13 @@ mlx5_cache_register(struct mlx5_cache_list *list, void *ctx) /* If list changed by other threads before lock, search again. */ if (prev_gen_cnt != __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE)) { /* Lookup and reuse w/o read lock. 
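 * The generation counter sampled before the first lookup detects
 * insertions that raced in after the read lock was dropped and before
 * the write lock was taken, so a duplicate entry is never created.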
*/ - entry = __cache_lookup(list, ctx, true); + entry = __list_lookup(list, ctx, true); if (entry) goto done; } entry = list->cb_create(list, entry, ctx); if (!entry) { - DRV_LOG(ERR, "Failed to init cache list %s entry %p.", + DRV_LOG(ERR, "Failed to init mlx5 list %s entry %p.", list->name, (void *)entry); goto done; } @@ -116,7 +116,7 @@ mlx5_cache_register(struct mlx5_cache_list *list, void *ctx) LIST_INSERT_HEAD(&list->head, entry, next); __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_RELEASE); __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); - DRV_LOG(DEBUG, "Cache list %s entry %p new: %u.", + DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name, (void *)entry, entry->ref_cnt); done: rte_rwlock_write_unlock(&list->lock); @@ -124,12 +124,12 @@ mlx5_cache_register(struct mlx5_cache_list *list, void *ctx) } int -mlx5_cache_unregister(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry) +mlx5_list_unregister(struct mlx5_list *list, + struct mlx5_list_entry *entry) { rte_rwlock_write_lock(&list->lock); MLX5_ASSERT(entry && entry->next.le_prev); - DRV_LOG(DEBUG, "Cache list %s entry %p ref--: %u.", + DRV_LOG(DEBUG, "mlx5 list %s entry %p ref--: %u.", list->name, (void *)entry, entry->ref_cnt); if (--entry->ref_cnt) { rte_rwlock_write_unlock(&list->lock); @@ -140,15 +140,15 @@ mlx5_cache_unregister(struct mlx5_cache_list *list, LIST_REMOVE(entry, next); list->cb_remove(list, entry); rte_rwlock_write_unlock(&list->lock); - DRV_LOG(DEBUG, "Cache list %s entry %p removed.", + DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", list->name, (void *)entry); return 0; } void -mlx5_cache_list_destroy(struct mlx5_cache_list *list) +mlx5_list_destroy(struct mlx5_list *list) { - struct mlx5_cache_entry *entry; + struct mlx5_list_entry *entry; MLX5_ASSERT(list); /* no LIST_FOREACH_SAFE, using while instead */ @@ -156,14 +156,14 @@ mlx5_cache_list_destroy(struct mlx5_cache_list *list) entry = LIST_FIRST(&list->head); LIST_REMOVE(entry, next); list->cb_remove(list, entry); - DRV_LOG(DEBUG, "Cache list %s entry %p destroyed.", + DRV_LOG(DEBUG, "mlx5 list %s entry %p destroyed.", list->name, (void *)entry); } memset(list, 0, sizeof(*list)); } uint32_t -mlx5_cache_list_get_entry_num(struct mlx5_cache_list *list) +mlx5_list_get_entry_num(struct mlx5_list *list) { MLX5_ASSERT(list); return __atomic_load_n(&list->count, __ATOMIC_RELAXED); diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 737dd7052d..593793345d 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -296,19 +296,19 @@ log2above(unsigned int v) return l + r; } -/************************ cache list *****************************/ +/************************ mlx5 list *****************************/ /** Maximum size of string for naming. */ #define MLX5_NAME_SIZE 32 -struct mlx5_cache_list; +struct mlx5_list; /** - * Structure of the entry in the cache list, user should define its own struct + * Structure of the entry in the mlx5 list, user should define its own struct * that contains this in order to store the data. */ -struct mlx5_cache_entry { - LIST_ENTRY(mlx5_cache_entry) next; /* Entry pointers in the list. */ +struct mlx5_list_entry { + LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */ uint32_t ref_cnt; /* Reference count. */ }; @@ -316,18 +316,18 @@ struct mlx5_cache_entry { * Type of callback function for entry removal. * * @param list - * The cache list. + * The mlx5 list. * @param entry * The entry in the list. 
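 *
 * The callback only releases the entry storage; unlinking and
 * reference counting are handled by mlx5_list_unregister() under the
 * list write lock.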
*/ -typedef void (*mlx5_cache_remove_cb)(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); +typedef void (*mlx5_list_remove_cb)(struct mlx5_list *list, + struct mlx5_list_entry *entry); /** * Type of function for user defined matching. * * @param list - * The cache list. + * The mlx5 list. * @param entry * The entry in the list. * @param ctx @@ -336,14 +336,14 @@ typedef void (*mlx5_cache_remove_cb)(struct mlx5_cache_list *list, * @return * 0 if matching, non-zero number otherwise. */ -typedef int (*mlx5_cache_match_cb)(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, void *ctx); +typedef int (*mlx5_list_match_cb)(struct mlx5_list *list, + struct mlx5_list_entry *entry, void *ctx); /** - * Type of function for user defined cache list entry creation. + * Type of function for user defined mlx5 list entry creation. * * @param list - * The cache list. + * The mlx5 list. * @param entry * The new allocated entry, NULL if list entry size unspecified, * New entry has to be allocated in callback and return. @@ -353,46 +353,46 @@ typedef int (*mlx5_cache_match_cb)(struct mlx5_cache_list *list, * @return * Pointer of entry on success, NULL otherwise. */ -typedef struct mlx5_cache_entry *(*mlx5_cache_create_cb) - (struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry, +typedef struct mlx5_list_entry *(*mlx5_list_create_cb) + (struct mlx5_list *list, + struct mlx5_list_entry *entry, void *ctx); /** - * Linked cache list structure. + * Linked mlx5 list structure. * - * Entry in cache list could be reused if entry already exists, + * Entry in mlx5 list could be reused if entry already exists, * reference count will increase and the existing entry returns. * * When destroy an entry from list, decrease reference count and only * destroy when no further reference. * - * Linked list cache is designed for limited number of entries cache, + * Linked list is designed for limited number of entries, * read mostly, less modification. * - * For huge amount of entries cache, please consider hash list cache. + * For huge amount of entries, please consider hash list. * */ -struct mlx5_cache_list { - char name[MLX5_NAME_SIZE]; /**< Name of the cache list. */ +struct mlx5_list { + char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */ uint32_t entry_sz; /**< Entry size, 0: use create callback. */ rte_rwlock_t lock; /* read/write lock. */ uint32_t gen_cnt; /* List modification will update generation count. */ uint32_t count; /* number of entries in list. */ void *ctx; /* user objects target to callback. */ - mlx5_cache_create_cb cb_create; /**< entry create callback. */ - mlx5_cache_match_cb cb_match; /**< entry match callback. */ - mlx5_cache_remove_cb cb_remove; /**< entry remove callback. */ - LIST_HEAD(mlx5_cache_head, mlx5_cache_entry) head; + mlx5_list_create_cb cb_create; /**< entry create callback. */ + mlx5_list_match_cb cb_match; /**< entry match callback. */ + mlx5_list_remove_cb cb_remove; /**< entry remove callback. */ + LIST_HEAD(mlx5_list_head, mlx5_list_entry) head; }; /** - * Initialize a cache list. + * Create a mlx5 list. * * @param list * Pointer to the hast list table. * @param name - * Name of the cache list. + * Name of the mlx5 list. * @param entry_size * Entry size to allocate, 0 to allocate by creation callback. * @param ctx @@ -406,11 +406,11 @@ struct mlx5_cache_list { * @return * 0 on success, otherwise failure. 
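 *
 * A minimal usage sketch under assumed names (demo_entry, demo_match_cb
 * and demo_list are hypothetical, not part of the driver). A non-zero
 * entry_size together with NULL create/remove callbacks selects the
 * default malloc/free callbacks:
 *
 *   struct demo_entry {
 *       struct mlx5_list_entry entry;   (must be the embedded member)
 *       uint32_t tag;
 *   };
 *
 *   static int
 *   demo_match_cb(struct mlx5_list *list, struct mlx5_list_entry *entry,
 *                 void *ctx)
 *   {
 *       struct demo_entry *e = container_of(entry, struct demo_entry,
 *                                           entry);
 *       return e->tag != *(uint32_t *)ctx;   (0 means match)
 *   }
 *
 *   mlx5_list_create(&demo_list, "demo", sizeof(struct demo_entry), ctx,
 *                    NULL, demo_match_cb, NULL);
 *   entry = mlx5_list_register(&demo_list, &tag);   (reuses or creates)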
*/ -int mlx5_cache_list_init(struct mlx5_cache_list *list, +int mlx5_list_create(struct mlx5_list *list, const char *name, uint32_t entry_size, void *ctx, - mlx5_cache_create_cb cb_create, - mlx5_cache_match_cb cb_match, - mlx5_cache_remove_cb cb_remove); + mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove); /** * Search an entry matching the key. @@ -419,18 +419,18 @@ int mlx5_cache_list_init(struct mlx5_cache_list *list, * this function only in main thread. * * @param list - * Pointer to the cache list. + * Pointer to the mlx5 list. * @param ctx * Common context parameter used by entry callback function. * * @return - * Pointer of the cache entry if found, NULL otherwise. + * Pointer of the list entry if found, NULL otherwise. */ -struct mlx5_cache_entry *mlx5_cache_lookup(struct mlx5_cache_list *list, +struct mlx5_list_entry *mlx5_list_lookup(struct mlx5_list *list, void *ctx); /** - * Reuse or create an entry to the cache list. + * Reuse or create an entry to the mlx5 list. * * @param list * Pointer to the hast list table. @@ -440,42 +440,42 @@ struct mlx5_cache_entry *mlx5_cache_lookup(struct mlx5_cache_list *list, * @return * registered entry on success, NULL otherwise */ -struct mlx5_cache_entry *mlx5_cache_register(struct mlx5_cache_list *list, +struct mlx5_list_entry *mlx5_list_register(struct mlx5_list *list, void *ctx); /** - * Remove an entry from the cache list. + * Remove an entry from the mlx5 list. * * User should guarantee the validity of the entry. * * @param list * Pointer to the hast list. * @param entry - * Entry to be removed from the cache list table. + * Entry to be removed from the mlx5 list table. * @return * 0 on entry removed, 1 on entry still referenced. */ -int mlx5_cache_unregister(struct mlx5_cache_list *list, - struct mlx5_cache_entry *entry); +int mlx5_list_unregister(struct mlx5_list *list, + struct mlx5_list_entry *entry); /** - * Destroy the cache list. + * Destroy the mlx5 list. * * @param list - * Pointer to the cache list. + * Pointer to the mlx5 list. */ -void mlx5_cache_list_destroy(struct mlx5_cache_list *list); +void mlx5_list_destroy(struct mlx5_list *list); /** - * Get entry number from the cache list. + * Get entry number from the mlx5 list. * * @param list * Pointer to the hast list. * @return - * Cache list entry number. + * mlx5 list entry number. */ uint32_t -mlx5_cache_list_get_entry_num(struct mlx5_cache_list *list); +mlx5_list_get_entry_num(struct mlx5_list *list); /********************************* indexed pool *************************/ diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index 7d15c998bb..b10c47fee3 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -608,10 +608,9 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - mlx5_cache_list_init(&priv->hrxqs, "hrxq", 0, eth_dev, - mlx5_hrxq_create_cb, - mlx5_hrxq_match_cb, - mlx5_hrxq_remove_cb); + mlx5_list_create(&priv->hrxqs, "hrxq", 0, eth_dev, + mlx5_hrxq_create_cb, mlx5_hrxq_match_cb, + mlx5_hrxq_remove_cb); /* Query availability of metadata reg_c's. 
*/
	err = mlx5_flow_discover_mreg_c(eth_dev);
	if (err < 0) {

From patchwork Fri Jul 2 06:18:01 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95159
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: ,
CC: , ,
Date: Fri, 2 Jul 2021 09:18:01 +0300
Message-ID: <20210702061816.10454-8-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 07/22] net/mlx5: add per lcore cache to the list utility
List-Id: DPDK patches and discussions

From: Matan Azrad

When an mlx5 list object is accessed by multiple cores, the list lock
counter is written by all of them all the time, which increases cache
misses. In addition, while one thread holds the lock for an
add/remove/lookup operation, every other thread that comes to operate
on the list is blocked on that lock.

Add a per-lcore cache to allow lockless thread manipulations when the
list objects are mostly reused. Synchronization with atomic operations
is still required so that a thread can unregister an entry from another
thread's cache.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/linux/mlx5_os.c   |  58 ++++----
 drivers/net/mlx5/mlx5.h            |   1 +
 drivers/net/mlx5/mlx5_flow.h       |  21 ++-
 drivers/net/mlx5/mlx5_flow_dv.c    | 181 +++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_rx.h         |   5 +
 drivers/net/mlx5/mlx5_rxq.c        |  71 +++++++---
 drivers/net/mlx5/mlx5_utils.c      | 214 ++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_utils.h      |  30 ++--
 drivers/net/mlx5/windows/mlx5_os.c |   5 +-
 9 files changed, 451 insertions(+), 135 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 9aa57e38b7..8a043526da 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -272,30 +272,38 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 		goto error;
 	/* The resources below are only valid with DV support. */
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	/* Init port id action mlx5 list. */
+	/* Init port id action list. */
 	snprintf(s, sizeof(s), "%s_port_id_action_list", sh->ibdev_name);
-	mlx5_list_create(&sh->port_id_action_list, s, 0, sh,
-			 flow_dv_port_id_create_cb,
-			 flow_dv_port_id_match_cb,
-			 flow_dv_port_id_remove_cb);
-	/* Init push vlan action mlx5 list. */
+	mlx5_list_create(&sh->port_id_action_list, s, sh,
+			 flow_dv_port_id_create_cb,
+			 flow_dv_port_id_match_cb,
+			 flow_dv_port_id_remove_cb,
+			 flow_dv_port_id_clone_cb,
+			 flow_dv_port_id_clone_free_cb);
+	/* Init push vlan action list. */
 	snprintf(s, sizeof(s), "%s_push_vlan_action_list", sh->ibdev_name);
-	mlx5_list_create(&sh->push_vlan_action_list, s, 0, sh,
-			 flow_dv_push_vlan_create_cb,
-			 flow_dv_push_vlan_match_cb,
-			 flow_dv_push_vlan_remove_cb);
-	/* Init sample action mlx5 list. */
+	mlx5_list_create(&sh->push_vlan_action_list, s, sh,
+			 flow_dv_push_vlan_create_cb,
+			 flow_dv_push_vlan_match_cb,
+			 flow_dv_push_vlan_remove_cb,
+			 flow_dv_push_vlan_clone_cb,
+			 flow_dv_push_vlan_clone_free_cb);
+	/* Init sample action list. */
 	snprintf(s, sizeof(s), "%s_sample_action_list", sh->ibdev_name);
-	mlx5_list_create(&sh->sample_action_list, s, 0, sh,
-			 flow_dv_sample_create_cb,
-			 flow_dv_sample_match_cb,
-			 flow_dv_sample_remove_cb);
-	/* Init dest array action mlx5 list. */
+	mlx5_list_create(&sh->sample_action_list, s, sh,
+			 flow_dv_sample_create_cb,
+			 flow_dv_sample_match_cb,
+			 flow_dv_sample_remove_cb,
+			 flow_dv_sample_clone_cb,
+			 flow_dv_sample_clone_free_cb);
+	/* Init dest array action list. 
*/ snprintf(s, sizeof(s), "%s_dest_array_list", sh->ibdev_name); - mlx5_list_create(&sh->dest_array_list, s, 0, sh, - flow_dv_dest_array_create_cb, - flow_dv_dest_array_match_cb, - flow_dv_dest_array_remove_cb); + mlx5_list_create(&sh->dest_array_list, s, sh, + flow_dv_dest_array_create_cb, + flow_dv_dest_array_match_cb, + flow_dv_dest_array_remove_cb, + flow_dv_dest_array_clone_cb, + flow_dv_dest_array_clone_free_cb); /* Create tags hash list table. */ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, @@ -1702,10 +1710,12 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - mlx5_list_create(&priv->hrxqs, "hrxq", 0, eth_dev, - mlx5_hrxq_create_cb, - mlx5_hrxq_match_cb, - mlx5_hrxq_remove_cb); + mlx5_list_create(&priv->hrxqs, "hrxq", eth_dev, mlx5_hrxq_create_cb, + mlx5_hrxq_match_cb, + mlx5_hrxq_remove_cb, + mlx5_hrxq_clone_cb, + mlx5_hrxq_clone_free_cb); + rte_rwlock_init(&priv->ind_tbls_lock); /* Query availability of metadata reg_c's. */ err = mlx5_flow_discover_mreg_c(eth_dev); if (err < 0) { diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 58646da331..546bee761e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1365,6 +1365,7 @@ struct mlx5_priv { /* Indirection tables. */ LIST_HEAD(ind_tables, mlx5_ind_table_obj) ind_tbls; /* Pointer to next element. */ + rte_rwlock_t ind_tbls_lock; uint32_t refcnt; /**< Reference counter. */ /**< Verbs modify header action object. */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 4dec703366..ce363355c1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1634,7 +1634,11 @@ struct mlx5_list_entry *flow_dv_port_id_create_cb(struct mlx5_list *list, void *cb_ctx); void flow_dv_port_id_remove_cb(struct mlx5_list *list, struct mlx5_list_entry *entry); - +struct mlx5_list_entry *flow_dv_port_id_clone_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx); +void flow_dv_port_id_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused); int flow_dv_push_vlan_match_cb(struct mlx5_list *list, struct mlx5_list_entry *entry, void *cb_ctx); struct mlx5_list_entry *flow_dv_push_vlan_create_cb(struct mlx5_list *list, @@ -1642,6 +1646,11 @@ struct mlx5_list_entry *flow_dv_push_vlan_create_cb(struct mlx5_list *list, void *cb_ctx); void flow_dv_push_vlan_remove_cb(struct mlx5_list *list, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_push_vlan_clone_cb + (struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); +void flow_dv_push_vlan_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); int flow_dv_sample_match_cb(struct mlx5_list *list, struct mlx5_list_entry *entry, void *cb_ctx); @@ -1650,6 +1659,11 @@ struct mlx5_list_entry *flow_dv_sample_create_cb(struct mlx5_list *list, void *cb_ctx); void flow_dv_sample_remove_cb(struct mlx5_list *list, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_sample_clone_cb + (struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); +void flow_dv_sample_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); int flow_dv_dest_array_match_cb(struct mlx5_list *list, struct mlx5_list_entry *entry, void *cb_ctx); @@ -1658,6 +1672,11 @@ struct mlx5_list_entry *flow_dv_dest_array_create_cb(struct mlx5_list *list, void *cb_ctx); void 
flow_dv_dest_array_remove_cb(struct mlx5_list *list, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_dest_array_clone_cb + (struct mlx5_list *list, + struct mlx5_list_entry *entry, void *cb_ctx); +void flow_dv_dest_array_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); struct mlx5_aso_age_action *flow_aso_age_get_by_idx(struct rte_eth_dev *dev, uint32_t age_idx); int flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 897bcf52a6..68a9e70a98 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3820,6 +3820,39 @@ flow_dv_port_id_create_cb(struct mlx5_list *list, return &resource->entry; } +struct mlx5_list_entry * +flow_dv_port_id_clone_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_port_id_action_resource *resource; + uint32_t idx; + + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PORT_ID], &idx); + if (!resource) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate port_id action memory"); + return NULL; + } + memcpy(resource, entry, sizeof(*resource)); + resource->idx = idx; + return &resource->entry; +} + +void +flow_dv_port_id_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_dv_port_id_action_resource *resource = + container_of(entry, typeof(*resource), entry); + + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PORT_ID], resource->idx); +} + /** * Find existing table port ID resource or create and register a new one. * @@ -3912,6 +3945,39 @@ flow_dv_push_vlan_create_cb(struct mlx5_list *list, return &resource->entry; } +struct mlx5_list_entry * +flow_dv_push_vlan_clone_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_push_vlan_action_resource *resource; + uint32_t idx; + + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_PUSH_VLAN], &idx); + if (!resource) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate push_vlan action memory"); + return NULL; + } + memcpy(resource, entry, sizeof(*resource)); + resource->idx = idx; + return &resource->entry; +} + +void +flow_dv_push_vlan_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_flow_dv_push_vlan_action_resource *resource = + container_of(entry, typeof(*resource), entry); + + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PUSH_VLAN], resource->idx); +} + /** * Find existing push vlan resource or create and register a new one. 
* @@ -9848,6 +9914,36 @@ flow_dv_matcher_enable(uint32_t *match_criteria) return match_criteria_enable; } +static struct mlx5_list_entry * +flow_dv_matcher_clone_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_matcher *ref = ctx->data; + struct mlx5_flow_tbl_data_entry *tbl = container_of(ref->tbl, + typeof(*tbl), tbl); + struct mlx5_flow_dv_matcher *resource = mlx5_malloc(MLX5_MEM_ANY, + sizeof(*resource), + 0, SOCKET_ID_ANY); + + if (!resource) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot create matcher"); + return NULL; + } + memcpy(resource, entry, sizeof(*resource)); + resource->tbl = &tbl->tbl; + return &resource->entry; +} + +static void +flow_dv_matcher_clone_free_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry) +{ + mlx5_free(entry); +} + struct mlx5_hlist_entry * flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) { @@ -9914,10 +10010,12 @@ flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) MKSTR(matcher_name, "%s_%s_%u_%u_matcher_list", key.is_fdb ? "FDB" : "NIC", key.is_egress ? "egress" : "ingress", key.level, key.id); - mlx5_list_create(&tbl_data->matchers, matcher_name, 0, sh, + mlx5_list_create(&tbl_data->matchers, matcher_name, sh, flow_dv_matcher_create_cb, flow_dv_matcher_match_cb, - flow_dv_matcher_remove_cb); + flow_dv_matcher_remove_cb, + flow_dv_matcher_clone_cb, + flow_dv_matcher_clone_free_cb); return &tbl_data->entry; } @@ -10705,6 +10803,45 @@ flow_dv_sample_create_cb(struct mlx5_list *list __rte_unused, } +struct mlx5_list_entry * +flow_dv_sample_clone_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct rte_eth_dev *dev = ctx->dev; + struct mlx5_flow_dv_sample_resource *resource; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + uint32_t idx = 0; + + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_SAMPLE], &idx); + if (!resource) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate resource memory"); + return NULL; + } + memcpy(resource, entry, sizeof(*resource)); + resource->idx = idx; + resource->dev = dev; + return &resource->entry; +} + +void +flow_dv_sample_clone_free_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry) +{ + struct mlx5_flow_dv_sample_resource *resource = + container_of(entry, typeof(*resource), entry); + struct rte_eth_dev *dev = resource->dev; + struct mlx5_priv *priv = dev->data->dev_private; + + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_SAMPLE], + resource->idx); +} + /** * Find existing sample resource or create and register a new one. 
* @@ -10880,6 +11017,46 @@ flow_dv_dest_array_create_cb(struct mlx5_list *list __rte_unused, return NULL; } +struct mlx5_list_entry * +flow_dv_dest_array_clone_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct rte_eth_dev *dev = ctx->dev; + struct mlx5_flow_dv_dest_array_resource *resource; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + uint32_t res_idx = 0; + struct rte_flow_error *error = ctx->error; + + resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_DEST_ARRAY], + &res_idx); + if (!resource) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate dest-array memory"); + return NULL; + } + memcpy(resource, entry, sizeof(*resource)); + resource->idx = res_idx; + resource->dev = dev; + return &resource->entry; +} + +void +flow_dv_dest_array_clone_free_cb(struct mlx5_list *list __rte_unused, + struct mlx5_list_entry *entry) +{ + struct mlx5_flow_dv_dest_array_resource *resource = + container_of(entry, typeof(*resource), entry); + struct rte_eth_dev *dev = resource->dev; + struct mlx5_priv *priv = dev->data->dev_private; + + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_DEST_ARRAY], resource->idx); +} + /** * Find existing destination array resource or create and register a new one. * diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 3dcc71d51d..5450ddd388 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -229,6 +229,11 @@ int mlx5_hrxq_match_cb(struct mlx5_list *list, void *cb_ctx); void mlx5_hrxq_remove_cb(struct mlx5_list *list, struct mlx5_list_entry *entry); +struct mlx5_list_entry *mlx5_hrxq_clone_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, + void *cb_ctx __rte_unused); +void mlx5_hrxq_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry); uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, struct mlx5_flow_rss_desc *rss_desc); int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hxrq_idx); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 8395332507..f8769da8dc 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1857,20 +1857,18 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues, struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_ind_table_obj *ind_tbl; + rte_rwlock_read_lock(&priv->ind_tbls_lock); LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) { if ((ind_tbl->queues_n == queues_n) && (memcmp(ind_tbl->queues, queues, ind_tbl->queues_n * sizeof(ind_tbl->queues[0])) - == 0)) + == 0)) { + __atomic_fetch_add(&ind_tbl->refcnt, 1, + __ATOMIC_RELAXED); break; + } } - if (ind_tbl) { - unsigned int i; - - __atomic_fetch_add(&ind_tbl->refcnt, 1, __ATOMIC_RELAXED); - for (i = 0; i != ind_tbl->queues_n; ++i) - mlx5_rxq_get(dev, ind_tbl->queues[i]); - } + rte_rwlock_read_unlock(&priv->ind_tbls_lock); return ind_tbl; } @@ -1893,19 +1891,20 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev, bool standalone) { struct mlx5_priv *priv = dev->data->dev_private; - unsigned int i; + unsigned int i, ret; - if (__atomic_sub_fetch(&ind_tbl->refcnt, 1, __ATOMIC_RELAXED) == 0) - priv->obj_ops.ind_table_destroy(ind_tbl); + rte_rwlock_write_lock(&priv->ind_tbls_lock); + ret = __atomic_sub_fetch(&ind_tbl->refcnt, 1, __ATOMIC_RELAXED); + if (!ret && !standalone) + LIST_REMOVE(ind_tbl, next); + rte_rwlock_write_unlock(&priv->ind_tbls_lock); + if 
(ret) + return 1; + priv->obj_ops.ind_table_destroy(ind_tbl); for (i = 0; i != ind_tbl->queues_n; ++i) claim_nonzero(mlx5_rxq_release(dev, ind_tbl->queues[i])); - if (__atomic_load_n(&ind_tbl->refcnt, __ATOMIC_RELAXED) == 0) { - if (!standalone) - LIST_REMOVE(ind_tbl, next); - mlx5_free(ind_tbl); - return 0; - } - return 1; + mlx5_free(ind_tbl); + return 0; } /** @@ -1924,12 +1923,14 @@ mlx5_ind_table_obj_verify(struct rte_eth_dev *dev) struct mlx5_ind_table_obj *ind_tbl; int ret = 0; + rte_rwlock_read_lock(&priv->ind_tbls_lock); LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) { DRV_LOG(DEBUG, "port %u indirection table obj %p still referenced", dev->data->port_id, (void *)ind_tbl); ++ret; } + rte_rwlock_read_unlock(&priv->ind_tbls_lock); return ret; } @@ -2015,8 +2016,11 @@ mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues, mlx5_free(ind_tbl); return NULL; } - if (!standalone) + if (!standalone) { + rte_rwlock_write_lock(&priv->ind_tbls_lock); LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next); + rte_rwlock_write_unlock(&priv->ind_tbls_lock); + } return ind_tbl; } @@ -2328,6 +2332,35 @@ mlx5_hrxq_create_cb(struct mlx5_list *list, return hrxq ? &hrxq->entry : NULL; } +struct mlx5_list_entry * +mlx5_hrxq_clone_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry, + void *cb_ctx __rte_unused) +{ + struct rte_eth_dev *dev = list->ctx; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq; + uint32_t hrxq_idx = 0; + + hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx); + if (!hrxq) + return NULL; + memcpy(hrxq, entry, sizeof(*hrxq) + MLX5_RSS_HASH_KEY_LEN); + hrxq->idx = hrxq_idx; + return &hrxq->entry; +} + +void +mlx5_hrxq_clone_free_cb(struct mlx5_list *list, + struct mlx5_list_entry *entry) +{ + struct rte_eth_dev *dev = list->ctx; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry); + + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx); +} + /** * Get an Rx Hash queue. * diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index a2b5accb84..51cca68ea9 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -9,57 +9,68 @@ #include "mlx5_utils.h" -/********************* MLX5 list ************************/ - -static struct mlx5_list_entry * -mlx5_list_default_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, - void *ctx __rte_unused) -{ - return mlx5_malloc(MLX5_MEM_ZERO, list->entry_sz, 0, SOCKET_ID_ANY); -} - -static void -mlx5_list_default_remove_cb(struct mlx5_list *list __rte_unused, - struct mlx5_list_entry *entry) -{ - mlx5_free(entry); -} +/********************* mlx5 list ************************/ int -mlx5_list_create(struct mlx5_list *list, const char *name, - uint32_t entry_size, void *ctx, - mlx5_list_create_cb cb_create, - mlx5_list_match_cb cb_match, - mlx5_list_remove_cb cb_remove) +mlx5_list_create(struct mlx5_list *list, const char *name, void *ctx, + mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free) { + int i; + MLX5_ASSERT(list); - if (!cb_match || (!cb_create ^ !cb_remove)) + if (!cb_match || !cb_create || !cb_remove || !cb_clone || + !cb_clone_free) return -1; if (name) snprintf(list->name, sizeof(list->name), "%s", name); - list->entry_sz = entry_size; list->ctx = ctx; - list->cb_create = cb_create ? 
cb_create : mlx5_list_default_create_cb; + list->cb_create = cb_create; list->cb_match = cb_match; - list->cb_remove = cb_remove ? cb_remove : mlx5_list_default_remove_cb; + list->cb_remove = cb_remove; + list->cb_clone = cb_clone; + list->cb_clone_free = cb_clone_free; rte_rwlock_init(&list->lock); DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name); - LIST_INIT(&list->head); + for (i = 0; i <= RTE_MAX_LCORE; i++) + LIST_INIT(&list->cache[i].h); return 0; } static struct mlx5_list_entry * -__list_lookup(struct mlx5_list *list, void *ctx, bool reuse) +__list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) { - struct mlx5_list_entry *entry; - - LIST_FOREACH(entry, &list->head, next) { - if (list->cb_match(list, entry, ctx)) + struct mlx5_list_entry *entry = LIST_FIRST(&list->cache[lcore_index].h); + uint32_t ret; + + while (entry != NULL) { + struct mlx5_list_entry *nentry = LIST_NEXT(entry, next); + + if (list->cb_match(list, entry, ctx)) { + if (lcore_index < RTE_MAX_LCORE) { + ret = __atomic_load_n(&entry->ref_cnt, + __ATOMIC_ACQUIRE); + if (ret == 0) { + LIST_REMOVE(entry, next); + list->cb_clone_free(list, entry); + } + } + entry = nentry; continue; + } if (reuse) { - __atomic_add_fetch(&entry->ref_cnt, 1, - __ATOMIC_RELAXED); + ret = __atomic_add_fetch(&entry->ref_cnt, 1, + __ATOMIC_ACQUIRE); + if (ret == 1u) { + /* Entry was invalid before, free it. */ + LIST_REMOVE(entry, next); + list->cb_clone_free(list, entry); + entry = nentry; + continue; + } DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.", list->name, (void *)entry, entry->ref_cnt); } @@ -68,96 +79,141 @@ __list_lookup(struct mlx5_list *list, void *ctx, bool reuse) return entry; } -static struct mlx5_list_entry * -list_lookup(struct mlx5_list *list, void *ctx, bool reuse) +struct mlx5_list_entry * +mlx5_list_lookup(struct mlx5_list *list, void *ctx) { - struct mlx5_list_entry *entry; + struct mlx5_list_entry *entry = NULL; + int i; rte_rwlock_read_lock(&list->lock); - entry = __list_lookup(list, ctx, reuse); + for (i = 0; i < RTE_MAX_LCORE; i++) { + entry = __list_lookup(list, i, ctx, false); + if (entry) + break; + } rte_rwlock_read_unlock(&list->lock); return entry; } -struct mlx5_list_entry * -mlx5_list_lookup(struct mlx5_list *list, void *ctx) +static struct mlx5_list_entry * +mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, + struct mlx5_list_entry *gentry, void *ctx) { - return list_lookup(list, ctx, false); + struct mlx5_list_entry *lentry = list->cb_clone(list, gentry, ctx); + + if (!lentry) + return NULL; + lentry->ref_cnt = 1u; + lentry->gentry = gentry; + LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next); + return lentry; } struct mlx5_list_entry * mlx5_list_register(struct mlx5_list *list, void *ctx) { - struct mlx5_list_entry *entry; + struct mlx5_list_entry *entry, *lentry; uint32_t prev_gen_cnt = 0; + int lcore_index = rte_lcore_index(rte_lcore_id()); MLX5_ASSERT(list); - prev_gen_cnt = __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE); + MLX5_ASSERT(lcore_index < RTE_MAX_LCORE); + if (unlikely(lcore_index == -1)) { + rte_errno = ENOTSUP; + return NULL; + } + /* Lookup in local cache. */ + lentry = __list_lookup(list, lcore_index, ctx, true); + if (lentry) + return lentry; /* Lookup with read lock, reuse if found. 
*/ - entry = list_lookup(list, ctx, true); - if (entry) - return entry; + rte_rwlock_read_lock(&list->lock); + entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true); + if (entry == NULL) { + prev_gen_cnt = __atomic_load_n(&list->gen_cnt, + __ATOMIC_ACQUIRE); + rte_rwlock_read_unlock(&list->lock); + } else { + rte_rwlock_read_unlock(&list->lock); + return mlx5_list_cache_insert(list, lcore_index, entry, ctx); + } /* Not found, append with write lock - block read from other threads. */ rte_rwlock_write_lock(&list->lock); /* If list changed by other threads before lock, search again. */ if (prev_gen_cnt != __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE)) { /* Lookup and reuse w/o read lock. */ - entry = __list_lookup(list, ctx, true); - if (entry) - goto done; + entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true); + if (entry) { + rte_rwlock_write_unlock(&list->lock); + return mlx5_list_cache_insert(list, lcore_index, entry, + ctx); + } } entry = list->cb_create(list, entry, ctx); - if (!entry) { - DRV_LOG(ERR, "Failed to init mlx5 list %s entry %p.", - list->name, (void *)entry); - goto done; + if (entry) { + lentry = mlx5_list_cache_insert(list, lcore_index, entry, ctx); + if (!lentry) { + list->cb_remove(list, entry); + } else { + entry->ref_cnt = 1u; + LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, + next); + __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_RELEASE); + __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", + list->name, (void *)entry, entry->ref_cnt); + } + } - entry->ref_cnt = 1; - LIST_INSERT_HEAD(&list->head, entry, next); - __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_RELEASE); - __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); - DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", - list->name, (void *)entry, entry->ref_cnt); -done: rte_rwlock_write_unlock(&list->lock); - return entry; + return lentry; } int mlx5_list_unregister(struct mlx5_list *list, struct mlx5_list_entry *entry) { + struct mlx5_list_entry *gentry = entry->gentry; + + if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0) + return 1; + if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0) + return 1; rte_rwlock_write_lock(&list->lock); - MLX5_ASSERT(entry && entry->next.le_prev); - DRV_LOG(DEBUG, "mlx5 list %s entry %p ref--: %u.", - list->name, (void *)entry, entry->ref_cnt); - if (--entry->ref_cnt) { + if (__atomic_load_n(&gentry->ref_cnt, __ATOMIC_ACQUIRE) == 0) { + __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_ACQUIRE); + __atomic_sub_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + LIST_REMOVE(gentry, next); + list->cb_remove(list, gentry); rte_rwlock_write_unlock(&list->lock); - return 1; + DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", + list->name, (void *)gentry); + return 0; } - __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_ACQUIRE); - __atomic_sub_fetch(&list->count, 1, __ATOMIC_ACQUIRE); - LIST_REMOVE(entry, next); - list->cb_remove(list, entry); rte_rwlock_write_unlock(&list->lock); - DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", - list->name, (void *)entry); - return 0; + return 1; } void mlx5_list_destroy(struct mlx5_list *list) { struct mlx5_list_entry *entry; + int i; MLX5_ASSERT(list); - /* no LIST_FOREACH_SAFE, using while instead */ - while (!LIST_EMPTY(&list->head)) { - entry = LIST_FIRST(&list->head); - LIST_REMOVE(entry, next); - list->cb_remove(list, entry); - DRV_LOG(DEBUG, "mlx5 list %s entry %p destroyed.", - list->name, (void *)entry); + for (i = 0; i <= RTE_MAX_LCORE; i++) { + 
while (!LIST_EMPTY(&list->cache[i].h)) { + entry = LIST_FIRST(&list->cache[i].h); + LIST_REMOVE(entry, next); + if (i == RTE_MAX_LCORE) { + list->cb_remove(list, entry); + DRV_LOG(DEBUG, "mlx5 list %s entry %p " + "destroyed.", list->name, + (void *)entry); + } else { + list->cb_clone_free(list, entry); + } + } } memset(list, 0, sizeof(*list)); } diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 593793345d..24ae2b2ccb 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -309,9 +309,14 @@ struct mlx5_list; */ struct mlx5_list_entry { LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */ - uint32_t ref_cnt; /* Reference count. */ + uint32_t ref_cnt; /* 0 means, entry is invalid. */ + struct mlx5_list_entry *gentry; }; +struct mlx5_list_cache { + LIST_HEAD(mlx5_list_head, mlx5_list_entry) h; +} __rte_cache_aligned; + /** * Type of callback function for entry removal. * @@ -339,6 +344,13 @@ typedef void (*mlx5_list_remove_cb)(struct mlx5_list *list, typedef int (*mlx5_list_match_cb)(struct mlx5_list *list, struct mlx5_list_entry *entry, void *ctx); +typedef struct mlx5_list_entry *(*mlx5_list_clone_cb) + (struct mlx5_list *list, + struct mlx5_list_entry *entry, void *ctx); + +typedef void (*mlx5_list_clone_free_cb)(struct mlx5_list *list, + struct mlx5_list_entry *entry); + /** * Type of function for user defined mlx5 list entry creation. * @@ -375,15 +387,17 @@ typedef struct mlx5_list_entry *(*mlx5_list_create_cb) */ struct mlx5_list { char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */ - uint32_t entry_sz; /**< Entry size, 0: use create callback. */ - rte_rwlock_t lock; /* read/write lock. */ uint32_t gen_cnt; /* List modification will update generation count. */ uint32_t count; /* number of entries in list. */ void *ctx; /* user objects target to callback. */ + rte_rwlock_t lock; /* read/write lock. */ mlx5_list_create_cb cb_create; /**< entry create callback. */ mlx5_list_match_cb cb_match; /**< entry match callback. */ mlx5_list_remove_cb cb_remove; /**< entry remove callback. */ - LIST_HEAD(mlx5_list_head, mlx5_list_entry) head; + mlx5_list_clone_cb cb_clone; /**< entry clone callback. */ + mlx5_list_clone_free_cb cb_clone_free; + struct mlx5_list_cache cache[RTE_MAX_LCORE + 1]; + /* Lcore cache, last index is the global cache. */ }; /** @@ -393,8 +407,6 @@ struct mlx5_list { * Pointer to the hast list table. * @param name * Name of the mlx5 list. - * @param entry_size - * Entry size to allocate, 0 to allocate by creation callback. * @param ctx * Pointer to the list context data. * @param cb_create @@ -407,10 +419,12 @@ struct mlx5_list { * 0 on success, otherwise failure. */ int mlx5_list_create(struct mlx5_list *list, - const char *name, uint32_t entry_size, void *ctx, + const char *name, void *ctx, mlx5_list_create_cb cb_create, mlx5_list_match_cb cb_match, - mlx5_list_remove_cb cb_remove); + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free); /** * Search an entry matching the key. 
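
As an aside, the following is a minimal usage sketch of the reworked list API from this patch. The struct myobj type and the myobj_* callbacks are purely illustrative (they are not part of the patch); the sketch assumes the mlx5_utils.h definitions above, where cb_match returns 0 on a match and mlx5_list_register() must run on an EAL lcore:

#include <string.h>
#include <rte_common.h>
#include <rte_lcore.h>

#include "mlx5_malloc.h"
#include "mlx5_utils.h"

/* Illustrative object; the embedded entry must be the first field so a
 * list entry pointer can be converted back with container_of(). */
struct myobj {
	struct mlx5_list_entry entry;
	uint32_t key;
};

static struct mlx5_list_entry *
myobj_create_cb(struct mlx5_list *list __rte_unused,
		struct mlx5_list_entry *entry __rte_unused, void *cb_ctx)
{
	/* Build the global entry from the lookup context. */
	struct myobj *obj = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*obj), 0,
					SOCKET_ID_ANY);

	if (!obj)
		return NULL;
	obj->key = *(uint32_t *)cb_ctx;
	return &obj->entry;
}

static int
myobj_match_cb(struct mlx5_list *list __rte_unused,
	       struct mlx5_list_entry *entry, void *cb_ctx)
{
	struct myobj *obj = container_of(entry, struct myobj, entry);

	/* Return 0 on match, memcmp()-style. */
	return obj->key != *(uint32_t *)cb_ctx;
}

static void
myobj_remove_cb(struct mlx5_list *list __rte_unused,
		struct mlx5_list_entry *entry)
{
	mlx5_free(container_of(entry, struct myobj, entry));
}

static struct mlx5_list_entry *
myobj_clone_cb(struct mlx5_list *list __rte_unused,
	       struct mlx5_list_entry *entry, void *cb_ctx __rte_unused)
{
	/* The per-lcore cache entry is a plain copy of the global entry. */
	struct myobj *obj = mlx5_malloc(MLX5_MEM_ANY, sizeof(*obj), 0,
					SOCKET_ID_ANY);

	if (!obj)
		return NULL;
	memcpy(obj, entry, sizeof(*obj));
	return &obj->entry;
}

static void
myobj_clone_free_cb(struct mlx5_list *list __rte_unused,
		    struct mlx5_list_entry *entry)
{
	mlx5_free(container_of(entry, struct myobj, entry));
}

/* Must be called from an EAL lcore: mlx5_list_register() sets
 * rte_errno = ENOTSUP when rte_lcore_index() returns -1. */
static void
myobj_usage(void)
{
	struct mlx5_list list;
	uint32_t key = 5;
	struct mlx5_list_entry *e;

	mlx5_list_create(&list, "myobj", NULL, myobj_create_cb,
			 myobj_match_cb, myobj_remove_cb,
			 myobj_clone_cb, myobj_clone_free_cb);
	e = mlx5_list_register(&list, &key);	/* creates or reuses */
	if (e != NULL)
		mlx5_list_unregister(&list, e);	/* drops the reference */
	mlx5_list_destroy(&list);
}
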
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index b10c47fee3..f6cf1928b2 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -608,9 +608,10 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - mlx5_list_create(&priv->hrxqs, "hrxq", 0, eth_dev, + mlx5_list_create(&priv->hrxqs, "hrxq", eth_dev, mlx5_hrxq_create_cb, mlx5_hrxq_match_cb, - mlx5_hrxq_remove_cb); + mlx5_hrxq_remove_cb, mlx5_hrxq_clone_cb, + mlx5_hrxq_clone_free_cb); /* Query availability of metadata reg_c's. */ err = mlx5_flow_discover_mreg_c(eth_dev); if (err < 0) {

From patchwork Fri Jul 2 06:18:02 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95162
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:02 +0300
Message-ID: <20210702061816.10454-9-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 08/22] net/mlx5: minimize list critical sections

From: Matan Azrad

The mlx5 internal list utility is thread safe. In order to synchronize list access between threads, an RW lock is taken for the critical sections. The create/remove/clone/clone_free operations were performed inside these critical sections, which made the sections heavy because those operations do memory and other resource allocations/deallocations.

Move the operations out of the critical sections and use a generation counter in order to detect parallel allocations.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 86 ++++++++++++++++++-----------------
 drivers/net/mlx5/mlx5_utils.h |  5 +-
 2 files changed, 48 insertions(+), 43 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 51cca68ea9..772b352af5 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -101,7 +101,7 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, { struct mlx5_list_entry *lentry = list->cb_clone(list, gentry, ctx); - if (!lentry) + if (unlikely(!lentry)) return NULL; lentry->ref_cnt = 1u; lentry->gentry = gentry;
@@ -112,8 +112,8 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, struct mlx5_list_entry * mlx5_list_register(struct mlx5_list *list, void *ctx) { - struct mlx5_list_entry *entry, *lentry; - uint32_t prev_gen_cnt = 0; + struct mlx5_list_entry *entry, *local_entry; + volatile uint32_t prev_gen_cnt = 0; int lcore_index = rte_lcore_index(rte_lcore_id()); MLX5_ASSERT(list);
@@ -122,51 +122,56 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) rte_errno = ENOTSUP; return NULL; } - /* Lookup in local cache. */ - lentry = __list_lookup(list, lcore_index, ctx, true); - if (lentry) - return lentry; - /* Lookup with read lock, reuse if found. */ + /* 1. Lookup in local cache. */ + local_entry = __list_lookup(list, lcore_index, ctx, true); + if (local_entry) + return local_entry; + /* 2. Lookup with read lock on global list, reuse if found. */ rte_rwlock_read_lock(&list->lock); entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true); - if (entry == NULL) { - prev_gen_cnt = __atomic_load_n(&list->gen_cnt, - __ATOMIC_ACQUIRE); - rte_rwlock_read_unlock(&list->lock); - } else { + if (likely(entry)) { rte_rwlock_read_unlock(&list->lock); return mlx5_list_cache_insert(list, lcore_index, entry, ctx); } - /* Not found, append with write lock - block read from other threads. */ + prev_gen_cnt = list->gen_cnt; + rte_rwlock_read_unlock(&list->lock); + /* 3. Prepare new entry for global list and for cache. 
*/ + entry = list->cb_create(list, entry, ctx); + if (unlikely(!entry)) + return NULL; + local_entry = list->cb_clone(list, entry, ctx); + if (unlikely(!local_entry)) { + list->cb_remove(list, entry); + return NULL; + } + entry->ref_cnt = 1u; + local_entry->ref_cnt = 1u; + local_entry->gentry = entry; rte_rwlock_write_lock(&list->lock); - /* If list changed by other threads before lock, search again. */ - if (prev_gen_cnt != __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE)) { - /* Lookup and reuse w/o read lock. */ - entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true); - if (entry) { + /* 4. Make sure the same entry was not created before the write lock. */ + if (unlikely(prev_gen_cnt != list->gen_cnt)) { + struct mlx5_list_entry *oentry = __list_lookup(list, + RTE_MAX_LCORE, + ctx, true); + + if (unlikely(oentry)) { + /* 4.5. Found real race!!, reuse the old entry. */ rte_rwlock_write_unlock(&list->lock); - return mlx5_list_cache_insert(list, lcore_index, entry, - ctx); - } - } - entry = list->cb_create(list, entry, ctx); - if (entry) { - lentry = mlx5_list_cache_insert(list, lcore_index, entry, ctx); - if (!lentry) { list->cb_remove(list, entry); - } else { - entry->ref_cnt = 1u; - LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, - next); - __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_RELEASE); - __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); - DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", - list->name, (void *)entry, entry->ref_cnt); + list->cb_clone_free(list, local_entry); + return mlx5_list_cache_insert(list, lcore_index, oentry, + ctx); } - } + /* 5. Update lists. */ + LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, next); + list->gen_cnt++; rte_rwlock_write_unlock(&list->lock); - return lentry; + LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next); + __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", + list->name, (void *)entry, entry->ref_cnt); + return local_entry; } int @@ -180,12 +185,11 @@ mlx5_list_unregister(struct mlx5_list *list, if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0) return 1; rte_rwlock_write_lock(&list->lock); - if (__atomic_load_n(&gentry->ref_cnt, __ATOMIC_ACQUIRE) == 0) { - __atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_ACQUIRE); - __atomic_sub_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + if (likely(gentry->ref_cnt == 0)) { LIST_REMOVE(gentry, next); - list->cb_remove(list, gentry); rte_rwlock_write_unlock(&list->lock); + list->cb_remove(list, gentry); + __atomic_sub_fetch(&list->count, 1, __ATOMIC_ACQUIRE); DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", list->name, (void *)gentry); return 0; diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 24ae2b2ccb..684d1e8a2a 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -387,8 +387,9 @@ typedef struct mlx5_list_entry *(*mlx5_list_create_cb) */ struct mlx5_list { char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */ - uint32_t gen_cnt; /* List modification will update generation count. */ - uint32_t count; /* number of entries in list. */ + volatile uint32_t gen_cnt; + /* List modification will update generation count. */ + volatile uint32_t count; /* number of entries in list. */ void *ctx; /* user objects target to callback. */ rte_rwlock_t lock; /* read/write lock. */ mlx5_list_create_cb cb_create; /**< entry create callback. 
*/

From patchwork Fri Jul 2 06:18:03 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95161
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:03 +0300
Message-ID: <20210702061816.10454-10-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 09/22] net/mlx5: manage list cache entries release

From: Matan Azrad

When a cache entry is allocated by lcore A and released by lcore B, lcore B must synchronize its access to the cache list of lcore A. The design decision is to manage a counter per lcore cache that is increased atomically when a non-original lcore decreases the reference counter of a cache entry to 0.

In the list register operation, before the running lcore starts a lookup in its cache, it checks this counter in order to free the invalidated entries in its cache.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 79 +++++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_utils.h |  2 +
 2 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 772b352af5..7cdf44dcf7 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -47,36 +47,25 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) uint32_t ret; while (entry != NULL) { - struct mlx5_list_entry *nentry = LIST_NEXT(entry, next); - - if (list->cb_match(list, entry, ctx)) { - if (lcore_index < RTE_MAX_LCORE) { + if (list->cb_match(list, entry, ctx) == 0) { + if (reuse) { + ret = __atomic_add_fetch(&entry->ref_cnt, 1, + __ATOMIC_ACQUIRE) - 1; + DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.", + list->name, (void *)entry, + entry->ref_cnt); + } else if (lcore_index < RTE_MAX_LCORE) { ret = __atomic_load_n(&entry->ref_cnt, __ATOMIC_ACQUIRE); - if (ret == 0) { - LIST_REMOVE(entry, next); - list->cb_clone_free(list, entry); - } - } - entry = nentry; - continue; - } - if (reuse) { - ret = __atomic_add_fetch(&entry->ref_cnt, 1, - __ATOMIC_ACQUIRE); - if (ret == 1u) { - /* Entry was invalid before, free it. */ - LIST_REMOVE(entry, next); - list->cb_clone_free(list, entry); - entry = nentry; - continue; } - DRV_LOG(DEBUG, "mlx5 list %s entry %p ref++: %u.", - list->name, (void *)entry, entry->ref_cnt); + if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE)) return entry; + if (reuse && ret == 0) + entry->ref_cnt--; /* Invalid entry. */ } - break; + entry = LIST_NEXT(entry, next); } - return entry; + return NULL; }
@@ -105,10 +94,31 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, return NULL; lentry->ref_cnt = 1u; lentry->gentry = gentry; + lentry->lcore_idx = (uint32_t)lcore_index; LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next); return lentry; } +static void +__list_cache_clean(struct mlx5_list *list, int lcore_index) +{ + struct mlx5_list_cache *c = &list->cache[lcore_index]; + struct mlx5_list_entry *entry = LIST_FIRST(&c->h); + uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0, + __ATOMIC_RELAXED); + + while (inv_cnt != 0 && entry != NULL) { + struct mlx5_list_entry *nentry = LIST_NEXT(entry, next); + + if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) { + LIST_REMOVE(entry, next); + list->cb_clone_free(list, entry); + inv_cnt--; + } + entry = nentry; + } +} + struct mlx5_list_entry * mlx5_list_register(struct mlx5_list *list, void *ctx) {
@@ -122,6 +132,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) rte_errno = ENOTSUP; return NULL; } + /* 0. Free entries that was invalidated by other lcores. */ + __list_cache_clean(list, lcore_index); /* 1. Lookup in local cache. 
*/ local_entry = __list_lookup(list, lcore_index, ctx, true); if (local_entry) @@ -147,6 +159,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) entry->ref_cnt = 1u; local_entry->ref_cnt = 1u; local_entry->gentry = entry; + local_entry->lcore_idx = (uint32_t)lcore_index; rte_rwlock_write_lock(&list->lock); /* 4. Make sure the same entry was not created before the write lock. */ if (unlikely(prev_gen_cnt != list->gen_cnt)) { @@ -169,8 +182,8 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) rte_rwlock_write_unlock(&list->lock); LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next); __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); - DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", - list->name, (void *)entry, entry->ref_cnt); + DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name, + (void *)entry, entry->ref_cnt); return local_entry; } @@ -179,9 +192,21 @@ mlx5_list_unregister(struct mlx5_list *list, struct mlx5_list_entry *entry) { struct mlx5_list_entry *gentry = entry->gentry; + int lcore_idx; if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0) return 1; + lcore_idx = rte_lcore_index(rte_lcore_id()); + MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE); + if (entry->lcore_idx == (uint32_t)lcore_idx) { + LIST_REMOVE(entry, next); + list->cb_clone_free(list, entry); + } else if (likely(lcore_idx != -1)) { + __atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1, + __ATOMIC_RELAXED); + } else { + return 0; + } if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0) return 1; rte_rwlock_write_lock(&list->lock); diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 684d1e8a2a..ffa9cd5142 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -310,11 +310,13 @@ struct mlx5_list; struct mlx5_list_entry { LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */ uint32_t ref_cnt; /* 0 means, entry is invalid. */ + uint32_t lcore_idx; struct mlx5_list_entry *gentry; }; struct mlx5_list_cache { LIST_HEAD(mlx5_list_head, mlx5_list_entry) h; + uint32_t inv_cnt; /* Invalid entries counter. 
*/ } __rte_cache_aligned; /** From patchwork Fri Jul 2 06:18:04 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 95164 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4C03BA0A0C; Fri, 2 Jul 2021 08:20:06 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7EFD241385; Fri, 2 Jul 2021 08:18:56 +0200 (CEST) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2061.outbound.protection.outlook.com [40.107.244.61]) by mails.dpdk.org (Postfix) with ESMTP id 5BEDC41346 for ; Fri, 2 Jul 2021 08:18:53 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=dx0Z9aBdFbQB60cLw7jrlo11FXz+xtMolVxo7PDJUZNbfLVEgDogRxFGC6v7n+M1HdbrNU01stz9hB2cfXkPkEPHninVRZ/qGxwLVQwQd5gy6sBmwxImdB9SaNHm0KgnogFijW0wbRYRF54AnZqa4gOcKdTcZRwUFqdr67jyVbWuIiWi7wptWYP5Hf12KC5lH8lBUmkC3MVakP/zYUPdau0/4HxjsFsly/M6AzOE/cxi6gpeMvp7UIMiUjqOPh1o2wDFTXN3a/GBNGn7GjCYmM0E6w2tPHhEs4oA+iySaWORkpVySkiRqA2kWgfmlTmpxbR1u77yGgneicJ76KiWkg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=n0ZO3LzgDLilvO0Hjo8/zxkeJs1+OOB1uCF2gI2tPJM=; b=TzlW3JRHd8yl+gUhCiuGDX+A/kIGZpRxKxJX0600S1alpxAMkVpFuR9tiV1O1NUMqUZ6qqg8CSv3/HlAus0qdVLjTjzZfHyHOzgv+JBPlYr37mITa0IP+DyBpLZeILTqNzSSKadLMWjocujFRcoJb4CEfpF8VVYw8MnU/pEs0BHjR5EzqrYeK+/Eisu4L1YupVIZTkXKDb60QMlwJFaYssHZgZSpzP+TeUTmM3fOaY8f6KgaFV3o2hX7qmhkiS/ZdagWey0Skt0Y923BqGZn7pxG+/WTukc9VsMyylzq/6WNW7CShZWpIL9Y0eYpq4l8QPdFeRHIZOgVLJwt2V+IyA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 216.228.112.34) smtp.rcpttodomain=dpdk.org smtp.mailfrom=nvidia.com; dmarc=pass (p=none sp=none pct=100) action=none header.from=nvidia.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=n0ZO3LzgDLilvO0Hjo8/zxkeJs1+OOB1uCF2gI2tPJM=; b=oKXrbJZgEkw96SJxvMFGhxUQcpLBMYZ+lvNpfLez9waC3Rhs9I8kHbKGQHf3lDeCZ8bl49Je+9h0KAaTy0POyBAkroJTv+1GHEcxjaT5URhm6+uS25RWV0kbrAy5EXsBXPjw/1FgDp3JIMFSrNkfDAPWhalBNBYhdSiImZV7okMVAeZkPU8RtJPuJ3xGE0YaO2BzkTXjRkMAHD8EUV/jSyCH1vcxxkiynrDkfF8QPY2cA0X7DCFIeA/ojybDFZWZg5zD6fkQFtFT6wuPEJc4h/45Xr9ZG1E95fk+9xd2NIFZT/1EIQX669agr8aTwiW/TkMoC3u3tFlEkAg4HyQdrQ== Received: from MW4PR03CA0118.namprd03.prod.outlook.com (2603:10b6:303:b7::33) by BN8PR12MB3297.namprd12.prod.outlook.com (2603:10b6:408:9f::17) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4264.23; Fri, 2 Jul 2021 06:18:50 +0000 Received: from CO1NAM11FT022.eop-nam11.prod.protection.outlook.com (2603:10b6:303:b7:cafe::70) by MW4PR03CA0118.outlook.office365.com (2603:10b6:303:b7::33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4287.22 via Frontend Transport; Fri, 2 Jul 2021 06:18:49 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.112.34) smtp.mailfrom=nvidia.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=nvidia.com; Received-SPF: Pass 
(protection.outlook.com: domain of nvidia.com designates 216.228.112.34 as permitted sender) receiver=protection.outlook.com; client-ip=216.228.112.34; helo=mail.nvidia.com; Received: from mail.nvidia.com (216.228.112.34) by CO1NAM11FT022.mail.protection.outlook.com (10.13.175.199) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.4287.22 via Frontend Transport; Fri, 2 Jul 2021 06:18:49 +0000 Received: from nvidia.com (172.20.187.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Fri, 2 Jul 2021 06:18:47 +0000 From: Suanming Mou To: , CC: , , Date: Fri, 2 Jul 2021 09:18:04 +0300 Message-ID: <20210702061816.10454-11-suanmingm@nvidia.com> X-Mailer: git-send-email 2.18.1 In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com> References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [172.20.187.5] X-ClientProxiedBy: HQMAIL107.nvidia.com (172.20.187.13) To HQMAIL107.nvidia.com (172.20.187.13) X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: b457b40e-d64a-47a7-7bbc-08d93d2141b6 X-MS-TrafficTypeDiagnostic: BN8PR12MB3297: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:249; X-MS-Exchange-SenderADCheck: 1 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 5vCf0rGW/Aq8mqn1EOw3dFxEthSR7ISJj/wwg2Y1s0yUt83Bp7IvtUT6MF1vVbFv+Pkdtbz5YANZ5x9Ce0xzmgwJK0c31YskaYInZ/ZgblcWtOqwUKHTMHlzmQVf4PWxMBMdpv7ik/K4wxZrkxlrnKMDOvMpVxgF5SparGXHeLRFI54/RSX/emWTnO3NRN1zZF1LK+7oGOdO6POUzh99gnCnlMC85isGN12MLHlAUXP6V1YeR1vP8vhhsm+cD6EgJRouYRSgCRS5Pp/KrhgMuE0W5R4I/OszpHQ949x1hT1uLbT9MIeQqzsbyzNRufXQ+mqvU9MZj4YWhVPG9/6Jn6KPWTT2XoTQkJpOoCW9Xw0eAkUkVvChgtJJIS4ltkaMxumDsO2MZ6UMHnT3GrJEzDY1CBdURc4uLhU7tFBMXVP0J0JDeocP24Elx1JfHWp+cxF1PN8JbVcxiHpiDwX6iQSsgmxcIYUysl19AFCgja2zzqA2hNSUYl7/plFYUAW4y+nIrHRKP0aDf8SjZLmXlcVeUptKg+9vKztDJsODFuTSBhVOCPRvfh4qFiNq+SGkHXfQfidCXO5wEur5YESWBWKff/D3lWPLXAPNi7gDi9vRhIn35wUAsrq/PwL4nPfvmT2VpF6ApiN6V1KO5dF+bVp14sPEQhkZZznBS10qux6ezTuU2ckM+SW+1oz7BeInyE712JGGybu41mB2IDB+IA== X-Forefront-Antispam-Report: CIP:216.228.112.34; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:schybrid03.nvidia.com; CAT:NONE; SFS:(4636009)(39860400002)(346002)(136003)(396003)(376002)(46966006)(36840700001)(336012)(47076005)(6286002)(426003)(82310400003)(4326008)(2906002)(70586007)(36906005)(8676002)(70206006)(36756003)(110136005)(1076003)(2616005)(316002)(82740400003)(186003)(7636003)(6666004)(478600001)(8936002)(54906003)(83380400001)(356005)(26005)(16526019)(5660300002)(7696005)(6636002)(55016002)(36860700001)(86362001); DIR:OUT; SFP:1101; X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Jul 2021 06:18:49.4139 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: b457b40e-d64a-47a7-7bbc-08d93d2141b6 X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.112.34]; Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: CO1NAM11FT022.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN8PR12MB3297 Subject: [dpdk-dev] [PATCH v3 10/22] net/mlx5: relax the list utility atomic operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Matan Azrad The atomic operation in the list utility no need a barriers because the critical part are managed by RW lock. Relax them. Signed-off-by: Matan Azrad Acked-by: Suanming Mou --- drivers/net/mlx5/mlx5_utils.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 7cdf44dcf7..29248c80ed 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -50,13 +50,13 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) if (list->cb_match(list, entry, ctx) == 0) { if (reuse) { ret = __atomic_add_fetch(&entry->ref_cnt, 1, - __ATOMIC_ACQUIRE) - 1; + __ATOMIC_RELAXED) - 1; DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.", list->name, (void *)entry, entry->ref_cnt); } else if (lcore_index < RTE_MAX_LCORE) { ret = __atomic_load_n(&entry->ref_cnt, - __ATOMIC_ACQUIRE); + __ATOMIC_RELAXED); } if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE)) return entry; @@ -181,7 +181,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) list->gen_cnt++; rte_rwlock_write_unlock(&list->lock); LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next); - __atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name, (void *)entry, entry->ref_cnt); return local_entry; @@ -194,7 +194,7 @@ mlx5_list_unregister(struct mlx5_list *list, struct mlx5_list_entry *gentry = entry->gentry; int lcore_idx; - if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0) + if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0) return 1; lcore_idx = rte_lcore_index(rte_lcore_id()); MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE); @@ -207,14 +207,14 @@ mlx5_list_unregister(struct mlx5_list *list, } else { return 0; } - if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_ACQUIRE) != 0) + if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0) return 1; rte_rwlock_write_lock(&list->lock); if (likely(gentry->ref_cnt == 0)) { LIST_REMOVE(gentry, next); rte_rwlock_write_unlock(&list->lock); list->cb_remove(list, gentry); - __atomic_sub_fetch(&list->count, 1, __ATOMIC_ACQUIRE); + __atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED); DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", list->name, (void *)gentry); return 0; From patchwork Fri Jul 2 06:18:05 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suanming Mou X-Patchwork-Id: 95163 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 86B98A0A0C; Fri, 2 Jul 2021 08:19:57 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0948741372; Fri, 2 Jul 2021 08:18:55 +0200 (CEST) Received: from NAM02-BN1-obe.outbound.protection.outlook.com (mail-bn1nam07on2085.outbound.protection.outlook.com [40.107.212.85]) by mails.dpdk.org (Postfix) with ESMTP id 5CD5441368 for ; Fri, 2 Jul 2021 08:18:52 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From patchwork Fri Jul 2 06:18:05 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95163
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:05 +0300
Message-ID: <20210702061816.10454-12-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 11/22] net/mlx5: allocate list memory by the create API

From: Matan Azrad

Currently, the list memory is allocated by the caller of the list API. Move the allocation into the create API in order to keep consistency with the hlist utility.
Signed-off-by: Matan Azrad Acked-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 105 ++++++++++++++++++++--------- drivers/net/mlx5/mlx5.c | 3 +- drivers/net/mlx5/mlx5.h | 10 +-- drivers/net/mlx5/mlx5_flow.h | 2 +- drivers/net/mlx5/mlx5_flow_dv.c | 56 ++++++++------- drivers/net/mlx5/mlx5_rxq.c | 6 +- drivers/net/mlx5/mlx5_utils.c | 19 ++++-- drivers/net/mlx5/mlx5_utils.h | 15 ++--- drivers/net/mlx5/windows/mlx5_os.c | 2 +- 9 files changed, 137 insertions(+), 81 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 8a043526da..87b63d852b 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -274,36 +274,44 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) #ifdef HAVE_IBV_FLOW_DV_SUPPORT /* Init port id action list. */ snprintf(s, sizeof(s), "%s_port_id_action_list", sh->ibdev_name); - mlx5_list_create(&sh->port_id_action_list, s, sh, - flow_dv_port_id_create_cb, - flow_dv_port_id_match_cb, - flow_dv_port_id_remove_cb, - flow_dv_port_id_clone_cb, - flow_dv_port_id_clone_free_cb); + sh->port_id_action_list = mlx5_list_create(s, sh, + flow_dv_port_id_create_cb, + flow_dv_port_id_match_cb, + flow_dv_port_id_remove_cb, + flow_dv_port_id_clone_cb, + flow_dv_port_id_clone_free_cb); + if (!sh->port_id_action_list) + goto error; /* Init push vlan action list. */ snprintf(s, sizeof(s), "%s_push_vlan_action_list", sh->ibdev_name); - mlx5_list_create(&sh->push_vlan_action_list, s, sh, - flow_dv_push_vlan_create_cb, - flow_dv_push_vlan_match_cb, - flow_dv_push_vlan_remove_cb, - flow_dv_push_vlan_clone_cb, - flow_dv_push_vlan_clone_free_cb); + sh->push_vlan_action_list = mlx5_list_create(s, sh, + flow_dv_push_vlan_create_cb, + flow_dv_push_vlan_match_cb, + flow_dv_push_vlan_remove_cb, + flow_dv_push_vlan_clone_cb, + flow_dv_push_vlan_clone_free_cb); + if (!sh->push_vlan_action_list) + goto error; /* Init sample action list. */ snprintf(s, sizeof(s), "%s_sample_action_list", sh->ibdev_name); - mlx5_list_create(&sh->sample_action_list, s, sh, - flow_dv_sample_create_cb, - flow_dv_sample_match_cb, - flow_dv_sample_remove_cb, - flow_dv_sample_clone_cb, - flow_dv_sample_clone_free_cb); + sh->sample_action_list = mlx5_list_create(s, sh, + flow_dv_sample_create_cb, + flow_dv_sample_match_cb, + flow_dv_sample_remove_cb, + flow_dv_sample_clone_cb, + flow_dv_sample_clone_free_cb); + if (!sh->sample_action_list) + goto error; /* Init dest array action list. */ snprintf(s, sizeof(s), "%s_dest_array_list", sh->ibdev_name); - mlx5_list_create(&sh->dest_array_list, s, sh, - flow_dv_dest_array_create_cb, - flow_dv_dest_array_match_cb, - flow_dv_dest_array_remove_cb, - flow_dv_dest_array_clone_cb, - flow_dv_dest_array_clone_free_cb); + sh->dest_array_list = mlx5_list_create(s, sh, + flow_dv_dest_array_create_cb, + flow_dv_dest_array_match_cb, + flow_dv_dest_array_remove_cb, + flow_dv_dest_array_clone_cb, + flow_dv_dest_array_clone_free_cb); + if (!sh->dest_array_list) + goto error; /* Create tags hash list table. 
*/ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, @@ -447,6 +455,22 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) sh->tunnel_hub = NULL; } mlx5_free_table_hash_list(priv); + if (sh->port_id_action_list) { + mlx5_list_destroy(sh->port_id_action_list); + sh->port_id_action_list = NULL; + } + if (sh->push_vlan_action_list) { + mlx5_list_destroy(sh->push_vlan_action_list); + sh->push_vlan_action_list = NULL; + } + if (sh->sample_action_list) { + mlx5_list_destroy(sh->sample_action_list); + sh->sample_action_list = NULL; + } + if (sh->dest_array_list) { + mlx5_list_destroy(sh->dest_array_list); + sh->dest_array_list = NULL; + } return err; } @@ -508,9 +532,23 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv) mlx5_release_tunnel_hub(sh, priv->dev_port); sh->tunnel_hub = NULL; } - mlx5_list_destroy(&sh->port_id_action_list); - mlx5_list_destroy(&sh->push_vlan_action_list); mlx5_free_table_hash_list(priv); + if (sh->port_id_action_list) { + mlx5_list_destroy(sh->port_id_action_list); + sh->port_id_action_list = NULL; + } + if (sh->push_vlan_action_list) { + mlx5_list_destroy(sh->push_vlan_action_list); + sh->push_vlan_action_list = NULL; + } + if (sh->sample_action_list) { + mlx5_list_destroy(sh->sample_action_list); + sh->sample_action_list = NULL; + } + if (sh->dest_array_list) { + mlx5_list_destroy(sh->dest_array_list); + sh->dest_array_list = NULL; + } } /** @@ -1710,11 +1748,13 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - mlx5_list_create(&priv->hrxqs, "hrxq", eth_dev, mlx5_hrxq_create_cb, - mlx5_hrxq_match_cb, - mlx5_hrxq_remove_cb, - mlx5_hrxq_clone_cb, - mlx5_hrxq_clone_free_cb); + priv->hrxqs = mlx5_list_create("hrxq", eth_dev, mlx5_hrxq_create_cb, + mlx5_hrxq_match_cb, + mlx5_hrxq_remove_cb, + mlx5_hrxq_clone_cb, + mlx5_hrxq_clone_free_cb); + if (!priv->hrxqs) + goto error; rte_rwlock_init(&priv->ind_tbls_lock); /* Query availability of metadata reg_c's. */ err = mlx5_flow_discover_mreg_c(eth_dev); @@ -1771,7 +1811,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, mlx5_drop_action_destroy(eth_dev); if (own_domain_id) claim_zero(rte_eth_switch_domain_free(priv->domain_id)); - mlx5_list_destroy(&priv->hrxqs); + if (priv->hrxqs) + mlx5_list_destroy(priv->hrxqs); mlx5_free(priv); if (eth_dev != NULL) eth_dev->data->dev_private = NULL; diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 9aade013c5..775ea15adb 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1611,7 +1611,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) if (ret) DRV_LOG(WARNING, "port %u some flows still remain", dev->data->port_id); - mlx5_list_destroy(&priv->hrxqs); + if (priv->hrxqs) + mlx5_list_destroy(priv->hrxqs); /* * Free the shared context in last turn, because the cleanup * routines above may use some shared fields, like diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 546bee761e..df5cba3d45 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1114,10 +1114,10 @@ struct mlx5_dev_ctx_shared { struct mlx5_hlist *encaps_decaps; /* Encap/decap action hash list. */ struct mlx5_hlist *modify_cmds; struct mlx5_hlist *tag_table; - struct mlx5_list port_id_action_list; /* Port ID action list. */ - struct mlx5_list push_vlan_action_list; /* Push VLAN actions. */ - struct mlx5_list sample_action_list; /* List of sample actions. */ - struct mlx5_list dest_array_list; + struct mlx5_list *port_id_action_list; /* Port ID action list. 
*/ + struct mlx5_list *push_vlan_action_list; /* Push VLAN actions. */ + struct mlx5_list *sample_action_list; /* List of sample actions. */ + struct mlx5_list *dest_array_list; /* List of destination array actions. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ void *default_miss_action; /* Default miss action. */ @@ -1359,7 +1359,7 @@ struct mlx5_priv { struct mlx5_obj_ops obj_ops; /* HW objects operations. */ LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */ LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */ - struct mlx5_list hrxqs; /* Hash Rx queues. */ + struct mlx5_list *hrxqs; /* Hash Rx queues. */ LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */ LIST_HEAD(txqobj, mlx5_txq_obj) txqsobj; /* Verbs/DevX Tx queues. */ /* Indirection tables. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index ce363355c1..ce4d205e86 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -590,7 +590,7 @@ struct mlx5_flow_tbl_data_entry { /**< hash list entry, 64-bits key inside. */ struct mlx5_flow_tbl_resource tbl; /**< flow table resource. */ - struct mlx5_list matchers; + struct mlx5_list *matchers; /**< matchers' header associated with the flow table. */ struct mlx5_flow_dv_jump_tbl_resource jump; /**< jump resource, at most one for each table created. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 68a9e70a98..d588e6fd37 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3883,7 +3883,7 @@ flow_dv_port_id_action_resource_register .data = ref, }; - entry = mlx5_list_register(&priv->sh->port_id_action_list, &ctx); + entry = mlx5_list_register(priv->sh->port_id_action_list, &ctx); if (!entry) return -rte_errno; resource = container_of(entry, typeof(*resource), entry); @@ -4008,7 +4008,7 @@ flow_dv_push_vlan_action_resource_register .data = ref, }; - entry = mlx5_list_register(&priv->sh->push_vlan_action_list, &ctx); + entry = mlx5_list_register(priv->sh->push_vlan_action_list, &ctx); if (!entry) return -rte_errno; resource = container_of(entry, typeof(*resource), entry); @@ -10010,12 +10010,22 @@ flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) MKSTR(matcher_name, "%s_%s_%u_%u_matcher_list", key.is_fdb ? "FDB" : "NIC", key.is_egress ? 
"egress" : "ingress", key.level, key.id); - mlx5_list_create(&tbl_data->matchers, matcher_name, sh, - flow_dv_matcher_create_cb, - flow_dv_matcher_match_cb, - flow_dv_matcher_remove_cb, - flow_dv_matcher_clone_cb, - flow_dv_matcher_clone_free_cb); + tbl_data->matchers = mlx5_list_create(matcher_name, sh, + flow_dv_matcher_create_cb, + flow_dv_matcher_match_cb, + flow_dv_matcher_remove_cb, + flow_dv_matcher_clone_cb, + flow_dv_matcher_clone_free_cb); + if (!tbl_data->matchers) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot create tbl matcher list"); + mlx5_flow_os_destroy_flow_action(tbl_data->jump.action); + mlx5_flow_os_destroy_flow_tbl(tbl->obj); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], idx); + return NULL; + } return &tbl_data->entry; } @@ -10143,7 +10153,7 @@ flow_dv_tbl_remove_cb(struct mlx5_hlist *list, tbl_data->tunnel->tunnel_id : 0, tbl_data->group_id); } - mlx5_list_destroy(&tbl_data->matchers); + mlx5_list_destroy(tbl_data->matchers); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], tbl_data->idx); } @@ -10275,7 +10285,7 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, return -rte_errno; /* No need to refill the error info */ tbl_data = container_of(tbl, struct mlx5_flow_tbl_data_entry, tbl); ref->tbl = tbl; - entry = mlx5_list_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(tbl_data->matchers, &ctx); if (!entry) { flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); return rte_flow_error_set(error, ENOMEM, @@ -10872,7 +10882,7 @@ flow_dv_sample_resource_register(struct rte_eth_dev *dev, .data = ref, }; - entry = mlx5_list_register(&priv->sh->sample_action_list, &ctx); + entry = mlx5_list_register(priv->sh->sample_action_list, &ctx); if (!entry) return -rte_errno; resource = container_of(entry, typeof(*resource), entry); @@ -11087,7 +11097,7 @@ flow_dv_dest_array_resource_register(struct rte_eth_dev *dev, .data = ref, }; - entry = mlx5_list_register(&priv->sh->dest_array_list, &ctx); + entry = mlx5_list_register(priv->sh->dest_array_list, &ctx); if (!entry) return -rte_errno; resource = container_of(entry, typeof(*resource), entry); @@ -13553,7 +13563,7 @@ flow_dv_matcher_release(struct rte_eth_dev *dev, int ret; MLX5_ASSERT(matcher->matcher_object); - ret = mlx5_list_unregister(&tbl->matchers, &matcher->entry); + ret = mlx5_list_unregister(tbl->matchers, &matcher->entry); flow_dv_tbl_resource_release(MLX5_SH(dev), &tbl->tbl); return ret; } @@ -13696,7 +13706,7 @@ flow_dv_port_id_action_resource_release(struct rte_eth_dev *dev, if (!resource) return 0; MLX5_ASSERT(resource->action); - return mlx5_list_unregister(&priv->sh->port_id_action_list, + return mlx5_list_unregister(priv->sh->port_id_action_list, &resource->entry); } @@ -13754,7 +13764,7 @@ flow_dv_push_vlan_action_resource_release(struct rte_eth_dev *dev, if (!resource) return 0; MLX5_ASSERT(resource->action); - return mlx5_list_unregister(&priv->sh->push_vlan_action_list, + return mlx5_list_unregister(priv->sh->push_vlan_action_list, &resource->entry); } @@ -13835,7 +13845,7 @@ flow_dv_sample_resource_release(struct rte_eth_dev *dev, if (!resource) return 0; MLX5_ASSERT(resource->verbs_action); - return mlx5_list_unregister(&priv->sh->sample_action_list, + return mlx5_list_unregister(priv->sh->sample_action_list, &resource->entry); } @@ -13883,7 +13893,7 @@ flow_dv_dest_array_resource_release(struct rte_eth_dev *dev, if (!resource) return 0; MLX5_ASSERT(resource->action); - return mlx5_list_unregister(&priv->sh->dest_array_list, + return 
mlx5_list_unregister(priv->sh->dest_array_list, &resource->entry); } @@ -14727,7 +14737,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev, if (sub_policy->color_matcher[i]) { tbl = container_of(sub_policy->color_matcher[i]->tbl, typeof(*tbl), tbl); - mlx5_list_unregister(&tbl->matchers, + mlx5_list_unregister(tbl->matchers, &sub_policy->color_matcher[i]->entry); sub_policy->color_matcher[i] = NULL; } @@ -15461,7 +15471,7 @@ flow_dv_destroy_mtr_drop_tbls(struct rte_eth_dev *dev) if (mtrmng->def_matcher[i]) { tbl = container_of(mtrmng->def_matcher[i]->tbl, struct mlx5_flow_tbl_data_entry, tbl); - mlx5_list_unregister(&tbl->matchers, + mlx5_list_unregister(tbl->matchers, &mtrmng->def_matcher[i]->entry); mtrmng->def_matcher[i] = NULL; } @@ -15471,7 +15481,7 @@ flow_dv_destroy_mtr_drop_tbls(struct rte_eth_dev *dev) container_of(mtrmng->drop_matcher[i][j]->tbl, struct mlx5_flow_tbl_data_entry, tbl); - mlx5_list_unregister(&tbl->matchers, + mlx5_list_unregister(tbl->matchers, &mtrmng->drop_matcher[i][j]->entry); mtrmng->drop_matcher[i][j] = NULL; } @@ -15604,7 +15614,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, matcher.mask.size); - entry = mlx5_list_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(tbl_data->matchers, &ctx); if (!entry) { DRV_LOG(ERR, "Failed to register meter drop matcher."); return -1; @@ -16013,7 +16023,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, matcher.crc = rte_raw_cksum ((const void *)matcher.mask.buf, matcher.mask.size); - entry = mlx5_list_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(tbl_data->matchers, &ctx); if (!entry) { DRV_LOG(ERR, "Failed to register meter " "drop default matcher."); @@ -16050,7 +16060,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, matcher.crc = rte_raw_cksum ((const void *)matcher.mask.buf, matcher.mask.size); - entry = mlx5_list_register(&tbl_data->matchers, &ctx); + entry = mlx5_list_register(tbl_data->matchers, &ctx); if (!entry) { DRV_LOG(ERR, "Failed to register meter drop matcher."); diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index f8769da8dc..aa9e973d10 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2385,7 +2385,7 @@ uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, if (rss_desc->shared_rss) { hrxq = __mlx5_hrxq_create(dev, rss_desc); } else { - entry = mlx5_list_register(&priv->hrxqs, &ctx); + entry = mlx5_list_register(priv->hrxqs, &ctx); if (!entry) return 0; hrxq = container_of(entry, typeof(*hrxq), entry); @@ -2415,7 +2415,7 @@ int mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx) if (!hrxq) return 0; if (!hrxq->standalone) - return mlx5_list_unregister(&priv->hrxqs, &hrxq->entry); + return mlx5_list_unregister(priv->hrxqs, &hrxq->entry); __mlx5_hrxq_remove(dev, hrxq); return 0; } @@ -2503,7 +2503,7 @@ mlx5_hrxq_verify(struct rte_eth_dev *dev) { struct mlx5_priv *priv = dev->data->dev_private; - return mlx5_list_get_entry_num(&priv->hrxqs); + return mlx5_list_get_entry_num(priv->hrxqs); } /** diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 29248c80ed..a4526444f9 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -11,20 +11,25 @@ /********************* mlx5 list ************************/ -int -mlx5_list_create(struct mlx5_list *list, const char *name, void *ctx, +struct mlx5_list * +mlx5_list_create(const char *name, void *ctx, 
mlx5_list_create_cb cb_create, mlx5_list_match_cb cb_match, mlx5_list_remove_cb cb_remove, mlx5_list_clone_cb cb_clone, mlx5_list_clone_free_cb cb_clone_free) { + struct mlx5_list *list; int i; - MLX5_ASSERT(list); if (!cb_match || !cb_create || !cb_remove || !cb_clone || - !cb_clone_free) - return -1; + !cb_clone_free) { + rte_errno = EINVAL; + return NULL; + } + list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list), 0, SOCKET_ID_ANY); + if (!list) + return NULL; if (name) snprintf(list->name, sizeof(list->name), "%s", name); list->ctx = ctx; @@ -37,7 +42,7 @@ mlx5_list_create(struct mlx5_list *list, const char *name, void *ctx, DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name); for (i = 0; i <= RTE_MAX_LCORE; i++) LIST_INIT(&list->cache[i].h); - return 0; + return list; } static struct mlx5_list_entry * @@ -244,7 +249,7 @@ mlx5_list_destroy(struct mlx5_list *list) } } } - memset(list, 0, sizeof(*list)); + mlx5_free(list); } uint32_t diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index ffa9cd5142..0bf2f5f5ca 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -419,15 +419,14 @@ struct mlx5_list { * @param cb_remove * Callback function for entry remove. * @return - * 0 on success, otherwise failure. + * List pointer on success, otherwise NULL. */ -int mlx5_list_create(struct mlx5_list *list, - const char *name, void *ctx, - mlx5_list_create_cb cb_create, - mlx5_list_match_cb cb_match, - mlx5_list_remove_cb cb_remove, - mlx5_list_clone_cb cb_clone, - mlx5_list_clone_free_cb cb_clone_free); +struct mlx5_list *mlx5_list_create(const char *name, void *ctx, + mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free); /** * Search an entry matching the key. 
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index f6cf1928b2..97a8f04e39 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -608,7 +608,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - mlx5_list_create(&priv->hrxqs, "hrxq", eth_dev, + priv->hrxqs = mlx5_list_create("hrxq", eth_dev, mlx5_hrxq_create_cb, mlx5_hrxq_match_cb, mlx5_hrxq_remove_cb, mlx5_hrxq_clone_cb, mlx5_hrxq_clone_free_cb);
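The net effect of this patch on callers is an ownership change: instead of embedding struct mlx5_list in their own objects and initializing it in place, callers now receive a heap-allocated handle from the create API, must check it for NULL, and hand it back to the destroy API. A minimal sketch of the same create/destroy convention, with hypothetical names rather than the driver's API:

#include <stdio.h>
#include <stdlib.h>

/* Opaque handle owned by the create/destroy pair, mirroring the move of
 * the mlx5_list allocation into mlx5_list_create(). */
struct list_handle {
	char name[32];
	void *ctx;
};

static struct list_handle *
list_create(const char *name, void *ctx)
{
	struct list_handle *l = calloc(1, sizeof(*l));

	if (l == NULL)
		return NULL; /* Caller must handle allocation failure. */
	snprintf(l->name, sizeof(l->name), "%s", name);
	l->ctx = ctx;
	return l;
}

static void
list_destroy(struct list_handle *l)
{
	free(l); /* The create API allocated it, so destroy frees it. */
}

int
main(void)
{
	struct list_handle *l = list_create("hrxq", NULL);

	if (l == NULL)
		return 1;
	printf("created list %s\n", l->name);
	list_destroy(l);
	return 0;
}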
From patchwork Fri Jul 2 06:18:06 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95167
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:06 +0300
Message-ID: <20210702061816.10454-13-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 12/22] common/mlx5: add per-lcore cache to hash list utility

From: Matan Azrad

Use the mlx5 list utility object in the hlist buckets. This patch moves the list utility object to the common utility and creates the clone operations for all the hlist instances in the driver. It also adjusts all the utility callbacks to be generic for both the list and the hlist.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 doc/guides/nics/mlx5.rst | 5 + doc/guides/rel_notes/release_21_08.rst | 6 + drivers/common/mlx5/mlx5_common.h | 2 + drivers/common/mlx5/mlx5_common_utils.c | 466 +++++++++++++++++------- drivers/common/mlx5/mlx5_common_utils.h | 267 ++++++++++---- drivers/common/mlx5/version.map | 7 + drivers/net/mlx5/linux/mlx5_os.c | 46 +-- drivers/net/mlx5/mlx5.c | 10 +- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.c | 155 +++++--- drivers/net/mlx5/mlx5_flow.h | 185 +++++----- drivers/net/mlx5/mlx5_flow_dv.c | 407 ++++++++++++--------- drivers/net/mlx5/mlx5_rx.h | 13 +- drivers/net/mlx5/mlx5_rxq.c | 53 +-- drivers/net/mlx5/mlx5_utils.c | 251 ------------- drivers/net/mlx5/mlx5_utils.h | 197 ---------- drivers/net/mlx5/windows/mlx5_os.c | 2 +- 17 files changed, 1029 insertions(+), 1044 deletions(-) diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index eb44a070b1..9bd3846e0d 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -445,6 +445,11 @@ Limitations - 256 ports maximum. - 4M connections maximum. +- Multiple-thread flow insertion: + + - In order to achieve best insertion rate, application should manage the flows on the rte-lcore. + - Better to configure ``reclaim_mem_mode`` as 0 to accelerate the flow object allocate and release with cache. + Statistics ---------- diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index a6ecfdf3ce..f6cd1d137d 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -55,6 +55,12 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Updated Mellanox mlx5 driver.** + + Updated the Mellanox mlx5 driver with new features and improvements, including: + + * Optimize multiple-thread flow insertion rate.
+ Removed Items ------------- diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h index 1fbefe0fa6..1809ff1e95 100644 --- a/drivers/common/mlx5/mlx5_common.h +++ b/drivers/common/mlx5/mlx5_common.h @@ -14,6 +14,8 @@ #include #include #include +#include +#include #include #include "mlx5_prm.h" diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c index ad2011e858..4e385c616a 100644 --- a/drivers/common/mlx5/mlx5_common_utils.c +++ b/drivers/common/mlx5/mlx5_common_utils.c @@ -11,39 +11,324 @@ #include "mlx5_common_utils.h" #include "mlx5_common_log.h" -/********************* Hash List **********************/ +/********************* mlx5 list ************************/ + +static int +mlx5_list_init(struct mlx5_list *list, const char *name, void *ctx, + bool lcores_share, mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free) +{ + int i; + + if (!cb_match || !cb_create || !cb_remove || !cb_clone || + !cb_clone_free) { + rte_errno = EINVAL; + return -EINVAL; + } + if (name) + snprintf(list->name, sizeof(list->name), "%s", name); + list->ctx = ctx; + list->lcores_share = lcores_share; + list->cb_create = cb_create; + list->cb_match = cb_match; + list->cb_remove = cb_remove; + list->cb_clone = cb_clone; + list->cb_clone_free = cb_clone_free; + rte_rwlock_init(&list->lock); + DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name); + for (i = 0; i <= RTE_MAX_LCORE; i++) + LIST_INIT(&list->cache[i].h); + return 0; +} + +struct mlx5_list * +mlx5_list_create(const char *name, void *ctx, bool lcores_share, + mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free) +{ + struct mlx5_list *list; + + list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list), 0, SOCKET_ID_ANY); + if (!list) + return NULL; + if (mlx5_list_init(list, name, ctx, lcores_share, + cb_create, cb_match, cb_remove, cb_clone, + cb_clone_free) != 0) { + mlx5_free(list); + return NULL; + } + return list; +} + +static struct mlx5_list_entry * +__list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) +{ + struct mlx5_list_entry *entry = LIST_FIRST(&list->cache[lcore_index].h); + uint32_t ret; + + while (entry != NULL) { + if (list->cb_match(list->ctx, entry, ctx) == 0) { + if (reuse) { + ret = __atomic_add_fetch(&entry->ref_cnt, 1, + __ATOMIC_RELAXED) - 1; + DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.", + list->name, (void *)entry, + entry->ref_cnt); + } else if (lcore_index < RTE_MAX_LCORE) { + ret = __atomic_load_n(&entry->ref_cnt, + __ATOMIC_RELAXED); + } + if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE)) + return entry; + if (reuse && ret == 0) + entry->ref_cnt--; /* Invalid entry. 
*/ + } + entry = LIST_NEXT(entry, next); + } + return NULL; +} + +struct mlx5_list_entry * +mlx5_list_lookup(struct mlx5_list *list, void *ctx) +{ + struct mlx5_list_entry *entry = NULL; + int i; + + rte_rwlock_read_lock(&list->lock); + for (i = 0; i < RTE_MAX_LCORE; i++) { + entry = __list_lookup(list, i, ctx, false); + if (entry) + break; + } + rte_rwlock_read_unlock(&list->lock); + return entry; +} + +static struct mlx5_list_entry * +mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, + struct mlx5_list_entry *gentry, void *ctx) +{ + struct mlx5_list_entry *lentry = list->cb_clone(list->ctx, gentry, ctx); + + if (unlikely(!lentry)) + return NULL; + lentry->ref_cnt = 1u; + lentry->gentry = gentry; + lentry->lcore_idx = (uint32_t)lcore_index; + LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next); + return lentry; +} + +static void +__list_cache_clean(struct mlx5_list *list, int lcore_index) +{ + struct mlx5_list_cache *c = &list->cache[lcore_index]; + struct mlx5_list_entry *entry = LIST_FIRST(&c->h); + uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0, + __ATOMIC_RELAXED); + + while (inv_cnt != 0 && entry != NULL) { + struct mlx5_list_entry *nentry = LIST_NEXT(entry, next); + + if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) { + LIST_REMOVE(entry, next); + if (list->lcores_share) + list->cb_clone_free(list->ctx, entry); + else + list->cb_remove(list->ctx, entry); + inv_cnt--; + } + entry = nentry; + } +} + +struct mlx5_list_entry * +mlx5_list_register(struct mlx5_list *list, void *ctx) +{ + struct mlx5_list_entry *entry, *local_entry; + volatile uint32_t prev_gen_cnt = 0; + int lcore_index = rte_lcore_index(rte_lcore_id()); + + MLX5_ASSERT(list); + MLX5_ASSERT(lcore_index < RTE_MAX_LCORE); + if (unlikely(lcore_index == -1)) { + rte_errno = ENOTSUP; + return NULL; + } + /* 0. Free entries that was invalidated by other lcores. */ + __list_cache_clean(list, lcore_index); + /* 1. Lookup in local cache. */ + local_entry = __list_lookup(list, lcore_index, ctx, true); + if (local_entry) + return local_entry; + if (list->lcores_share) { + /* 2. Lookup with read lock on global list, reuse if found. */ + rte_rwlock_read_lock(&list->lock); + entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true); + if (likely(entry)) { + rte_rwlock_read_unlock(&list->lock); + return mlx5_list_cache_insert(list, lcore_index, entry, + ctx); + } + prev_gen_cnt = list->gen_cnt; + rte_rwlock_read_unlock(&list->lock); + } + /* 3. Prepare new entry for global list and for cache. */ + entry = list->cb_create(list->ctx, ctx); + if (unlikely(!entry)) + return NULL; + entry->ref_cnt = 1u; + if (!list->lcores_share) { + entry->lcore_idx = (uint32_t)lcore_index; + LIST_INSERT_HEAD(&list->cache[lcore_index].h, entry, next); + __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "MLX5 list %s c%d entry %p new: %u.", + list->name, lcore_index, (void *)entry, entry->ref_cnt); + return entry; + } + local_entry = list->cb_clone(list->ctx, entry, ctx); + if (unlikely(!local_entry)) { + list->cb_remove(list->ctx, entry); + return NULL; + } + local_entry->ref_cnt = 1u; + local_entry->gentry = entry; + local_entry->lcore_idx = (uint32_t)lcore_index; + rte_rwlock_write_lock(&list->lock); + /* 4. Make sure the same entry was not created before the write lock. */ + if (unlikely(prev_gen_cnt != list->gen_cnt)) { + struct mlx5_list_entry *oentry = __list_lookup(list, + RTE_MAX_LCORE, + ctx, true); + + if (unlikely(oentry)) { + /* 4.5. Found real race!!, reuse the old entry. 
*/ + rte_rwlock_write_unlock(&list->lock); + list->cb_remove(list->ctx, entry); + list->cb_clone_free(list->ctx, local_entry); + return mlx5_list_cache_insert(list, lcore_index, oentry, + ctx); + } + } + /* 5. Update lists. */ + LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, next); + list->gen_cnt++; + rte_rwlock_write_unlock(&list->lock); + LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next); + __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name, + (void *)entry, entry->ref_cnt); + return local_entry; +} -static struct mlx5_hlist_entry * -mlx5_hlist_default_create_cb(struct mlx5_hlist *h, uint64_t key __rte_unused, - void *ctx __rte_unused) +int +mlx5_list_unregister(struct mlx5_list *list, + struct mlx5_list_entry *entry) { - return mlx5_malloc(MLX5_MEM_ZERO, h->entry_sz, 0, SOCKET_ID_ANY); + struct mlx5_list_entry *gentry = entry->gentry; + int lcore_idx; + + if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0) + return 1; + lcore_idx = rte_lcore_index(rte_lcore_id()); + MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE); + if (entry->lcore_idx == (uint32_t)lcore_idx) { + LIST_REMOVE(entry, next); + if (list->lcores_share) + list->cb_clone_free(list->ctx, entry); + else + list->cb_remove(list->ctx, entry); + } else if (likely(lcore_idx != -1)) { + __atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1, + __ATOMIC_RELAXED); + } else { + return 0; + } + if (!list->lcores_share) { + __atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", + list->name, (void *)entry); + return 0; + } + if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0) + return 1; + rte_rwlock_write_lock(&list->lock); + if (likely(gentry->ref_cnt == 0)) { + LIST_REMOVE(gentry, next); + rte_rwlock_write_unlock(&list->lock); + list->cb_remove(list->ctx, gentry); + __atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", + list->name, (void *)gentry); + return 0; + } + rte_rwlock_write_unlock(&list->lock); + return 1; } static void -mlx5_hlist_default_remove_cb(struct mlx5_hlist *h __rte_unused, - struct mlx5_hlist_entry *entry) +mlx5_list_uninit(struct mlx5_list *list) +{ + struct mlx5_list_entry *entry; + int i; + + MLX5_ASSERT(list); + for (i = 0; i <= RTE_MAX_LCORE; i++) { + while (!LIST_EMPTY(&list->cache[i].h)) { + entry = LIST_FIRST(&list->cache[i].h); + LIST_REMOVE(entry, next); + if (i == RTE_MAX_LCORE) { + list->cb_remove(list->ctx, entry); + DRV_LOG(DEBUG, "mlx5 list %s entry %p " + "destroyed.", list->name, + (void *)entry); + } else { + list->cb_clone_free(list->ctx, entry); + } + } + } +} + +void +mlx5_list_destroy(struct mlx5_list *list) +{ + mlx5_list_uninit(list); + mlx5_free(list); +} + +uint32_t +mlx5_list_get_entry_num(struct mlx5_list *list) { - mlx5_free(entry); + MLX5_ASSERT(list); + return __atomic_load_n(&list->count, __ATOMIC_RELAXED); } +/********************* Hash List **********************/ + struct mlx5_hlist * -mlx5_hlist_create(const char *name, uint32_t size, uint32_t entry_size, - uint32_t flags, mlx5_hlist_create_cb cb_create, - mlx5_hlist_match_cb cb_match, mlx5_hlist_remove_cb cb_remove) +mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, + bool lcores_share, void *ctx, mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free) { struct mlx5_hlist *h; uint32_t 
act_size; uint32_t alloc_size; uint32_t i; - if (!size || !cb_match || (!cb_create ^ !cb_remove)) - return NULL; /* Align to the next power of 2, 32bits integer is enough now. */ if (!rte_is_power_of_2(size)) { act_size = rte_align32pow2(size); - DRV_LOG(DEBUG, "Size 0x%" PRIX32 " is not power of 2, " - "will be aligned to 0x%" PRIX32 ".", size, act_size); + DRV_LOG(WARNING, "Size 0x%" PRIX32 " is not power of 2, will " + "be aligned to 0x%" PRIX32 ".", size, act_size); } else { act_size = size; } @@ -57,61 +342,24 @@ mlx5_hlist_create(const char *name, uint32_t size, uint32_t entry_size, name ? name : "None"); return NULL; } - if (name) - snprintf(h->name, MLX5_HLIST_NAMESIZE, "%s", name); - h->table_sz = act_size; h->mask = act_size - 1; - h->entry_sz = entry_size; - h->direct_key = !!(flags & MLX5_HLIST_DIRECT_KEY); - h->write_most = !!(flags & MLX5_HLIST_WRITE_MOST); - h->cb_create = cb_create ? cb_create : mlx5_hlist_default_create_cb; - h->cb_match = cb_match; - h->cb_remove = cb_remove ? cb_remove : mlx5_hlist_default_remove_cb; - for (i = 0; i < act_size; i++) - rte_rwlock_init(&h->buckets[i].lock); - DRV_LOG(DEBUG, "Hash list with %s size 0x%" PRIX32 " is created.", - h->name, act_size); - return h; -} - -static struct mlx5_hlist_entry * -__hlist_lookup(struct mlx5_hlist *h, uint64_t key, uint32_t idx, - void *ctx, bool reuse) -{ - struct mlx5_hlist_head *first; - struct mlx5_hlist_entry *node; - - MLX5_ASSERT(h); - first = &h->buckets[idx].head; - LIST_FOREACH(node, first, next) { - if (!h->cb_match(h, node, key, ctx)) { - if (reuse) { - __atomic_add_fetch(&node->ref_cnt, 1, - __ATOMIC_RELAXED); - DRV_LOG(DEBUG, "Hash list %s entry %p " - "reuse: %u.", - h->name, (void *)node, node->ref_cnt); - } - break; + h->lcores_share = lcores_share; + h->direct_key = direct_key; + for (i = 0; i < act_size; i++) { + if (mlx5_list_init(&h->buckets[i].l, name, ctx, lcores_share, + cb_create, cb_match, cb_remove, cb_clone, + cb_clone_free) != 0) { + mlx5_free(h); + return NULL; } } - return node; + DRV_LOG(DEBUG, "Hash list %s with size 0x%" PRIX32 " was created.", + name, act_size); + return h; } -static struct mlx5_hlist_entry * -hlist_lookup(struct mlx5_hlist *h, uint64_t key, uint32_t idx, - void *ctx, bool reuse) -{ - struct mlx5_hlist_entry *node; - - MLX5_ASSERT(h); - rte_rwlock_read_lock(&h->buckets[idx].lock); - node = __hlist_lookup(h, key, idx, ctx, reuse); - rte_rwlock_read_unlock(&h->buckets[idx].lock); - return node; -} -struct mlx5_hlist_entry * +struct mlx5_list_entry * mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx) { uint32_t idx; @@ -120,102 +368,44 @@ mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx) idx = (uint32_t)(key & h->mask); else idx = rte_hash_crc_8byte(key, 0) & h->mask; - return hlist_lookup(h, key, idx, ctx, false); + return mlx5_list_lookup(&h->buckets[idx].l, ctx); } -struct mlx5_hlist_entry* +struct mlx5_list_entry* mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx) { uint32_t idx; - struct mlx5_hlist_head *first; - struct mlx5_hlist_bucket *b; - struct mlx5_hlist_entry *entry; - uint32_t prev_gen_cnt = 0; + struct mlx5_list_entry *entry; if (h->direct_key) idx = (uint32_t)(key & h->mask); else idx = rte_hash_crc_8byte(key, 0) & h->mask; - MLX5_ASSERT(h); - b = &h->buckets[idx]; - /* Use write lock directly for write-most list. 
*/ - if (!h->write_most) { - prev_gen_cnt = __atomic_load_n(&b->gen_cnt, __ATOMIC_ACQUIRE); - entry = hlist_lookup(h, key, idx, ctx, true); - if (entry) - return entry; + entry = mlx5_list_register(&h->buckets[idx].l, ctx); + if (likely(entry)) { + if (h->lcores_share) + entry->gentry->bucket_idx = idx; + else + entry->bucket_idx = idx; } - rte_rwlock_write_lock(&b->lock); - /* Check if the list changed by other threads. */ - if (h->write_most || - prev_gen_cnt != __atomic_load_n(&b->gen_cnt, __ATOMIC_ACQUIRE)) { - entry = __hlist_lookup(h, key, idx, ctx, true); - if (entry) - goto done; - } - first = &b->head; - entry = h->cb_create(h, key, ctx); - if (!entry) { - rte_errno = ENOMEM; - DRV_LOG(DEBUG, "Can't allocate hash list %s entry.", h->name); - goto done; - } - entry->idx = idx; - entry->ref_cnt = 1; - LIST_INSERT_HEAD(first, entry, next); - __atomic_add_fetch(&b->gen_cnt, 1, __ATOMIC_ACQ_REL); - DRV_LOG(DEBUG, "Hash list %s entry %p new: %u.", - h->name, (void *)entry, entry->ref_cnt); -done: - rte_rwlock_write_unlock(&b->lock); return entry; } int -mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry) +mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry) { - uint32_t idx = entry->idx; + uint32_t idx = h->lcores_share ? entry->gentry->bucket_idx : + entry->bucket_idx; - rte_rwlock_write_lock(&h->buckets[idx].lock); - MLX5_ASSERT(entry && entry->ref_cnt && entry->next.le_prev); - DRV_LOG(DEBUG, "Hash list %s entry %p deref: %u.", - h->name, (void *)entry, entry->ref_cnt); - if (--entry->ref_cnt) { - rte_rwlock_write_unlock(&h->buckets[idx].lock); - return 1; - } - LIST_REMOVE(entry, next); - /* Set to NULL to get rid of removing action for more than once. */ - entry->next.le_prev = NULL; - h->cb_remove(h, entry); - rte_rwlock_write_unlock(&h->buckets[idx].lock); - DRV_LOG(DEBUG, "Hash list %s entry %p removed.", - h->name, (void *)entry); - return 0; + return mlx5_list_unregister(&h->buckets[idx].l, entry); } void mlx5_hlist_destroy(struct mlx5_hlist *h) { - uint32_t idx; - struct mlx5_hlist_entry *entry; + uint32_t i; - MLX5_ASSERT(h); - for (idx = 0; idx < h->table_sz; ++idx) { - /* No LIST_FOREACH_SAFE, using while instead. */ - while (!LIST_EMPTY(&h->buckets[idx].head)) { - entry = LIST_FIRST(&h->buckets[idx].head); - LIST_REMOVE(entry, next); - /* - * The owner of whole element which contains data entry - * is the user, so it's the user's duty to do the clean - * up and the free work because someone may not put the - * hlist entry at the beginning(suggested to locate at - * the beginning). Or else the default free function - * will be used. - */ - h->cb_remove(h, entry); - } - } + for (i = 0; i <= h->mask; i++) + mlx5_list_uninit(&h->buckets[i].l); mlx5_free(h); } diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h index ed378ce9bd..61b30a45ca 100644 --- a/drivers/common/mlx5/mlx5_common_utils.h +++ b/drivers/common/mlx5/mlx5_common_utils.h @@ -7,106 +7,227 @@ #include "mlx5_common.h" -#define MLX5_HLIST_DIRECT_KEY 0x0001 /* Use the key directly as hash index. */ -#define MLX5_HLIST_WRITE_MOST 0x0002 /* List mostly used for append new. */ +/************************ mlx5 list *****************************/ -/** Maximum size of string for naming the hlist table. */ -#define MLX5_HLIST_NAMESIZE 32 +/** Maximum size of string for naming. 
*/ +#define MLX5_NAME_SIZE 32 -struct mlx5_hlist; +struct mlx5_list; /** - * Structure of the entry in the hash list, user should define its own struct - * that contains this in order to store the data. The 'key' is 64-bits right - * now and its user's responsibility to guarantee there is no collision. + * Structure of the entry in the mlx5 list, user should define its own struct + * that contains this in order to store the data. */ -struct mlx5_hlist_entry { - LIST_ENTRY(mlx5_hlist_entry) next; /* entry pointers in the list. */ - uint32_t idx; /* Bucket index the entry belongs to. */ - uint32_t ref_cnt; /* Reference count. */ -}; +struct mlx5_list_entry { + LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */ + uint32_t ref_cnt __rte_aligned(8); /* 0 means, entry is invalid. */ + uint32_t lcore_idx; + union { + struct mlx5_list_entry *gentry; + uint32_t bucket_idx; + }; +} __rte_packed; -/** Structure for hash head. */ -LIST_HEAD(mlx5_hlist_head, mlx5_hlist_entry); +struct mlx5_list_cache { + LIST_HEAD(mlx5_list_head, mlx5_list_entry) h; + uint32_t inv_cnt; /* Invalid entries counter. */ +} __rte_cache_aligned; /** * Type of callback function for entry removal. * - * @param list - * The hash list. + * @param tool_ctx + * The tool instance user context. * @param entry * The entry in the list. */ -typedef void (*mlx5_hlist_remove_cb)(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry); +typedef void (*mlx5_list_remove_cb)(void *tool_ctx, + struct mlx5_list_entry *entry); /** * Type of function for user defined matching. * - * @param list - * The hash list. + * @param tool_ctx + * The tool instance context. * @param entry * The entry in the list. - * @param key - * The new entry key. * @param ctx * The pointer to new entry context. * * @return * 0 if matching, non-zero number otherwise. */ -typedef int (*mlx5_hlist_match_cb)(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry, - uint64_t key, void *ctx); +typedef int (*mlx5_list_match_cb)(void *tool_ctx, + struct mlx5_list_entry *entry, void *ctx); + +typedef struct mlx5_list_entry *(*mlx5_list_clone_cb)(void *tool_ctx, + struct mlx5_list_entry *entry, void *ctx); + +typedef void (*mlx5_list_clone_free_cb)(void *tool_ctx, + struct mlx5_list_entry *entry); /** - * Type of function for user defined hash list entry creation. + * Type of function for user defined mlx5 list entry creation. * - * @param list - * The hash list. - * @param key - * The key of the new entry. + * @param tool_ctx + * The mlx5 tool instance context. * @param ctx * The pointer to new entry context. * * @return - * Pointer to allocated entry on success, NULL otherwise. + * Pointer of entry on success, NULL otherwise. + */ +typedef struct mlx5_list_entry *(*mlx5_list_create_cb)(void *tool_ctx, + void *ctx); + +/** + * Linked mlx5 list structure. + * + * Entry in mlx5 list could be reused if entry already exists, + * reference count will increase and the existing entry returns. + * + * When destroy an entry from list, decrease reference count and only + * destroy when no further reference. + * + * Linked list is designed for limited number of entries, + * read mostly, less modification. + * + * For huge amount of entries, please consider hash list. + * + */ +struct mlx5_list { + char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */ + void *ctx; /* user objects target to callback. */ + bool lcores_share; /* Whether to share objects between the lcores. */ + mlx5_list_create_cb cb_create; /**< entry create callback. 
*/ + mlx5_list_match_cb cb_match; /**< entry match callback. */ + mlx5_list_remove_cb cb_remove; /**< entry remove callback. */ + mlx5_list_clone_cb cb_clone; /**< entry clone callback. */ + mlx5_list_clone_free_cb cb_clone_free; + struct mlx5_list_cache cache[RTE_MAX_LCORE + 1]; + /* Lcore cache, last index is the global cache. */ + volatile uint32_t gen_cnt; /* List modification may update it. */ + volatile uint32_t count; /* number of entries in list. */ + rte_rwlock_t lock; /* read/write lock. */ +}; + +/** + * Create a mlx5 list. + * + * @param list + * Pointer to the hast list table. + * @param name + * Name of the mlx5 list. + * @param ctx + * Pointer to the list context data. + * @param lcores_share + * Whether to share objects between the lcores. + * @param cb_create + * Callback function for entry create. + * @param cb_match + * Callback function for entry match. + * @param cb_remove + * Callback function for entry remove. + * @return + * List pointer on success, otherwise NULL. + */ +__rte_internal +struct mlx5_list *mlx5_list_create(const char *name, void *ctx, + bool lcores_share, + mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free); + +/** + * Search an entry matching the key. + * + * Result returned might be destroyed by other thread, must use + * this function only in main thread. + * + * @param list + * Pointer to the mlx5 list. + * @param ctx + * Common context parameter used by entry callback function. + * + * @return + * Pointer of the list entry if found, NULL otherwise. */ -typedef struct mlx5_hlist_entry *(*mlx5_hlist_create_cb) - (struct mlx5_hlist *list, - uint64_t key, void *ctx); +__rte_internal +struct mlx5_list_entry *mlx5_list_lookup(struct mlx5_list *list, + void *ctx); -/* Hash list bucket head. */ +/** + * Reuse or create an entry to the mlx5 list. + * + * @param list + * Pointer to the hast list table. + * @param ctx + * Common context parameter used by callback function. + * + * @return + * registered entry on success, NULL otherwise + */ +__rte_internal +struct mlx5_list_entry *mlx5_list_register(struct mlx5_list *list, + void *ctx); + +/** + * Remove an entry from the mlx5 list. + * + * User should guarantee the validity of the entry. + * + * @param list + * Pointer to the hast list. + * @param entry + * Entry to be removed from the mlx5 list table. + * @return + * 0 on entry removed, 1 on entry still referenced. + */ +__rte_internal +int mlx5_list_unregister(struct mlx5_list *list, + struct mlx5_list_entry *entry); + +/** + * Destroy the mlx5 list. + * + * @param list + * Pointer to the mlx5 list. + */ +__rte_internal +void mlx5_list_destroy(struct mlx5_list *list); + +/** + * Get entry number from the mlx5 list. + * + * @param list + * Pointer to the hast list. + * @return + * mlx5 list entry number. + */ +__rte_internal +uint32_t +mlx5_list_get_entry_num(struct mlx5_list *list); + +/********************* Hash List **********************/ + +/* Hash list bucket. */ struct mlx5_hlist_bucket { - struct mlx5_hlist_head head; /* List head. */ - rte_rwlock_t lock; /* Bucket lock. */ - uint32_t gen_cnt; /* List modification will update generation count. */ + struct mlx5_list l; } __rte_cache_aligned; /** * Hash list table structure * - * Entry in hash list could be reused if entry already exists, reference - * count will increase and the existing entry returns. 
- * - * When destroy an entry from list, decrease reference count and only - * destroy when no further reference. + * Each hash list bucket uses an mlx5_list object to manage its entries. */ struct mlx5_hlist { - char name[MLX5_HLIST_NAMESIZE]; /**< Name of the hash list. */ - /**< number of heads, need to be power of 2. */ - uint32_t table_sz; - uint32_t entry_sz; /**< Size of entry, used to allocate entry. */ - /**< mask to get the index of the list heads. */ - uint32_t mask; - bool direct_key; /* Use the new entry key directly as hash index. */ - bool write_most; /* List mostly used for append new or destroy. */ - void *ctx; - mlx5_hlist_create_cb cb_create; /**< entry create callback. */ - mlx5_hlist_match_cb cb_match; /**< entry match callback. */ - mlx5_hlist_remove_cb cb_remove; /**< entry remove callback. */ + uint32_t mask; /* A mask for the bucket index range. */ + uint8_t flags; + bool direct_key; /* Whether to use the key directly as hash index. */ + bool lcores_share; /* Whether to share objects between the lcores. */ struct mlx5_hlist_bucket buckets[] __rte_cache_aligned; - /**< list bucket arrays. */ }; /** @@ -123,23 +244,33 @@ struct mlx5_hlist { * Heads array size of the hash list. * @param entry_size * Entry size to allocate if cb_create not specified. - * @param flags - * The hash list attribute flags. + * @param direct_key + * Whether to use the key directly as hash index. + * @param lcores_share + * Whether to share objects between the lcores. + * @param ctx + * The hlist instance context. * @param cb_create * Callback function for entry create. * @param cb_match * Callback function for entry match. - * @param cb_destroy - * Callback function for entry destroy. + * @param cb_remove + * Callback function for entry remove. + * @param cb_clone + * Callback function for entry clone. + * @param cb_clone_free + * Callback function for entry clone free. * @return * Pointer of the hash list table created, NULL on failure. */ __rte_internal struct mlx5_hlist *mlx5_hlist_create(const char *name, uint32_t size, - uint32_t entry_size, uint32_t flags, - mlx5_hlist_create_cb cb_create, - mlx5_hlist_match_cb cb_match, - mlx5_hlist_remove_cb cb_destroy); + bool direct_key, bool lcores_share, + void *ctx, mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free); /** * Search an entry matching the key. @@ -158,7 +289,7 @@ struct mlx5_hlist *mlx5_hlist_create(const char *name, uint32_t size, * Pointer of the hlist entry if found, NULL otherwise. */ __rte_internal -struct mlx5_hlist_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, +struct mlx5_list_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx); /** @@ -177,7 +308,7 @@ struct mlx5_hlist_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, * registered entry on success, NULL otherwise */ __rte_internal -struct mlx5_hlist_entry *mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, +struct mlx5_list_entry *mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx); /** @@ -192,7 +323,7 @@ struct mlx5_hlist_entry *mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, * 0 on entry removed, 1 on entry still referenced.
*/ __rte_internal -int mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry); +int mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry); /** * Destroy the hash list table, all the entries already inserted into the lists diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map index b8be73a77b..e6586d6f6f 100644 --- a/drivers/common/mlx5/version.map +++ b/drivers/common/mlx5/version.map @@ -73,6 +73,13 @@ INTERNAL { mlx5_glue; + mlx5_list_create; + mlx5_list_register; + mlx5_list_unregister; + mlx5_list_lookup; + mlx5_list_get_entry_num; + mlx5_list_destroy; + mlx5_hlist_create; mlx5_hlist_lookup; mlx5_hlist_register; diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 87b63d852b..cf573a9a4d 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -261,7 +261,7 @@ static int mlx5_alloc_shared_dr(struct mlx5_priv *priv) { struct mlx5_dev_ctx_shared *sh = priv->sh; - char s[MLX5_HLIST_NAMESIZE] __rte_unused; + char s[MLX5_NAME_SIZE] __rte_unused; int err; MLX5_ASSERT(sh && sh->refcnt); @@ -274,7 +274,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) #ifdef HAVE_IBV_FLOW_DV_SUPPORT /* Init port id action list. */ snprintf(s, sizeof(s), "%s_port_id_action_list", sh->ibdev_name); - sh->port_id_action_list = mlx5_list_create(s, sh, + sh->port_id_action_list = mlx5_list_create(s, sh, true, flow_dv_port_id_create_cb, flow_dv_port_id_match_cb, flow_dv_port_id_remove_cb, @@ -284,7 +284,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; /* Init push vlan action list. */ snprintf(s, sizeof(s), "%s_push_vlan_action_list", sh->ibdev_name); - sh->push_vlan_action_list = mlx5_list_create(s, sh, + sh->push_vlan_action_list = mlx5_list_create(s, sh, true, flow_dv_push_vlan_create_cb, flow_dv_push_vlan_match_cb, flow_dv_push_vlan_remove_cb, @@ -294,7 +294,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; /* Init sample action list. */ snprintf(s, sizeof(s), "%s_sample_action_list", sh->ibdev_name); - sh->sample_action_list = mlx5_list_create(s, sh, + sh->sample_action_list = mlx5_list_create(s, sh, true, flow_dv_sample_create_cb, flow_dv_sample_match_cb, flow_dv_sample_remove_cb, @@ -304,7 +304,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; /* Init dest array action list. */ snprintf(s, sizeof(s), "%s_dest_array_list", sh->ibdev_name); - sh->dest_array_list = mlx5_list_create(s, sh, + sh->dest_array_list = mlx5_list_create(s, sh, true, flow_dv_dest_array_create_cb, flow_dv_dest_array_match_cb, flow_dv_dest_array_remove_cb, @@ -314,44 +314,44 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; /* Create tags hash list table. 
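A note on the call-site conversions in the hunks around this point: mlx5_hlist_lookup() and mlx5_hlist_register() still take the 64-bit key, but the match/create callbacks no longer receive it directly, so every caller now also packs the key into the context it passes down (the driver reuses struct mlx5_flow_cb_ctx for this). A minimal sketch of the convention, assuming a hypothetical wrapper:

static struct mlx5_list_entry *
hlist_lookup_by_key(struct mlx5_hlist *h, uint64_t key)
{
	struct mlx5_flow_cb_ctx ctx = {
		.data = &key, /* cb_match reads the key via ctx->data. */
	};

	/* The explicit key still selects the bucket (it is hashed first
	 * unless direct_key is set); matching within the bucket is
	 * delegated to the cb_match callback, which only sees ctx.
	 */
	return mlx5_hlist_lookup(h, key, &ctx);
}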
*/ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); - sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, - MLX5_HLIST_WRITE_MOST, - flow_dv_tag_create_cb, + sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, false, + false, sh, flow_dv_tag_create_cb, flow_dv_tag_match_cb, - flow_dv_tag_remove_cb); + flow_dv_tag_remove_cb, + flow_dv_tag_clone_cb, + flow_dv_tag_clone_free_cb); if (!sh->tag_table) { DRV_LOG(ERR, "tags with hash creation failed."); err = ENOMEM; goto error; } - sh->tag_table->ctx = sh; snprintf(s, sizeof(s), "%s_hdr_modify", sh->ibdev_name); sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, - 0, MLX5_HLIST_WRITE_MOST | - MLX5_HLIST_DIRECT_KEY, + true, false, sh, flow_dv_modify_create_cb, flow_dv_modify_match_cb, - flow_dv_modify_remove_cb); + flow_dv_modify_remove_cb, + flow_dv_modify_clone_cb, + flow_dv_modify_clone_free_cb); if (!sh->modify_cmds) { DRV_LOG(ERR, "hdr modify hash creation failed"); err = ENOMEM; goto error; } - sh->modify_cmds->ctx = sh; snprintf(s, sizeof(s), "%s_encaps_decaps", sh->ibdev_name); sh->encaps_decaps = mlx5_hlist_create(s, MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, - 0, MLX5_HLIST_DIRECT_KEY | - MLX5_HLIST_WRITE_MOST, + true, true, sh, flow_dv_encap_decap_create_cb, flow_dv_encap_decap_match_cb, - flow_dv_encap_decap_remove_cb); + flow_dv_encap_decap_remove_cb, + flow_dv_encap_decap_clone_cb, + flow_dv_encap_decap_clone_free_cb); if (!sh->encaps_decaps) { DRV_LOG(ERR, "encap decap hash creation failed"); err = ENOMEM; goto error; } - sh->encaps_decaps->ctx = sh; #endif #ifdef HAVE_MLX5DV_DR void *domain; @@ -1748,7 +1748,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, err = ENOTSUP; goto error; } - priv->hrxqs = mlx5_list_create("hrxq", eth_dev, mlx5_hrxq_create_cb, + priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true, + mlx5_hrxq_create_cb, mlx5_hrxq_match_cb, mlx5_hrxq_remove_cb, mlx5_hrxq_clone_cb, @@ -1780,15 +1781,16 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, priv->sh->dv_regc0_mask) { priv->mreg_cp_tbl = mlx5_hlist_create(MLX5_FLOW_MREG_HNAME, MLX5_FLOW_MREG_HTABLE_SZ, - 0, 0, + false, true, eth_dev, flow_dv_mreg_create_cb, flow_dv_mreg_match_cb, - flow_dv_mreg_remove_cb); + flow_dv_mreg_remove_cb, + flow_dv_mreg_clone_cb, + flow_dv_mreg_clone_free_cb); if (!priv->mreg_cp_tbl) { err = ENOMEM; goto error; } - priv->mreg_cp_tbl->ctx = eth_dev; } rte_spinlock_init(&priv->shared_act_sl); mlx5_flow_counter_mode_config(eth_dev); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 775ea15adb..0e80408511 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1358,20 +1358,22 @@ mlx5_alloc_table_hash_list(struct mlx5_priv *priv __rte_unused) /* Tables are only used in DV and DR modes. 
*/ #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) struct mlx5_dev_ctx_shared *sh = priv->sh; - char s[MLX5_HLIST_NAMESIZE]; + char s[MLX5_NAME_SIZE]; MLX5_ASSERT(sh); snprintf(s, sizeof(s), "%s_flow_table", priv->sh->ibdev_name); sh->flow_tbls = mlx5_hlist_create(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE, - 0, 0, flow_dv_tbl_create_cb, + false, true, sh, + flow_dv_tbl_create_cb, flow_dv_tbl_match_cb, - flow_dv_tbl_remove_cb); + flow_dv_tbl_remove_cb, + flow_dv_tbl_clone_cb, + flow_dv_tbl_clone_free_cb); if (!sh->flow_tbls) { DRV_LOG(ERR, "flow tables with hash creation failed."); err = ENOMEM; return err; } - sh->flow_tbls->ctx = sh; #ifndef HAVE_MLX5DV_DR struct rte_flow_error error; struct rte_eth_dev *dev = &rte_eth_devices[priv->dev_data->port_id]; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index df5cba3d45..f3768ee028 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -84,6 +84,7 @@ struct mlx5_flow_cb_ctx { struct rte_eth_dev *dev; struct rte_flow_error *error; void *data; + void *data2; }; /* Device attributes used in mlx5 PMD */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 450a84a6c5..7bd45d3895 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3983,28 +3983,27 @@ flow_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type, uint32_t flow_idx); int -flow_dv_mreg_match_cb(struct mlx5_hlist *list __rte_unused, - struct mlx5_hlist_entry *entry, - uint64_t key, void *cb_ctx __rte_unused) +flow_dv_mreg_match_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { + struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_mreg_copy_resource *mcp_res = - container_of(entry, typeof(*mcp_res), hlist_ent); + container_of(entry, typeof(*mcp_res), hlist_ent); - return mcp_res->mark_id != key; + return mcp_res->mark_id != *(uint32_t *)(ctx->data); } -struct mlx5_hlist_entry * -flow_dv_mreg_create_cb(struct mlx5_hlist *list, uint64_t key, - void *cb_ctx) +struct mlx5_list_entry * +flow_dv_mreg_create_cb(void *tool_ctx, void *cb_ctx) { - struct rte_eth_dev *dev = list->ctx; + struct rte_eth_dev *dev = tool_ctx; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_mreg_copy_resource *mcp_res; struct rte_flow_error *error = ctx->error; uint32_t idx = 0; int ret; - uint32_t mark_id = key; + uint32_t mark_id = *(uint32_t *)(ctx->data); struct rte_flow_attr attr = { .group = MLX5_FLOW_MREG_CP_TABLE_GROUP, .ingress = 1, @@ -4110,6 +4109,36 @@ flow_dv_mreg_create_cb(struct mlx5_hlist *list, uint64_t key, return &mcp_res->hlist_ent; } +struct mlx5_list_entry * +flow_dv_mreg_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, + void *cb_ctx __rte_unused) +{ + struct rte_eth_dev *dev = tool_ctx; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_mreg_copy_resource *mcp_res; + uint32_t idx = 0; + + mcp_res = mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_MCP], &idx); + if (!mcp_res) { + rte_errno = ENOMEM; + return NULL; + } + memcpy(mcp_res, oentry, sizeof(*mcp_res)); + mcp_res->idx = idx; + return &mcp_res->hlist_ent; +} + +void +flow_dv_mreg_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +{ + struct mlx5_flow_mreg_copy_resource *mcp_res = + container_of(entry, typeof(*mcp_res), hlist_ent); + struct rte_eth_dev *dev = tool_ctx; + struct mlx5_priv *priv = dev->data->dev_private; + + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], mcp_res->idx); +} + /** * Add a 
flow of copying flow metadata registers in RX_CP_TBL. * @@ -4140,10 +4169,11 @@ flow_mreg_add_copy_action(struct rte_eth_dev *dev, uint32_t mark_id, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_hlist_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_flow_cb_ctx ctx = { .dev = dev, .error = error, + .data = &mark_id, }; /* Check if already registered. */ @@ -4156,11 +4186,11 @@ flow_mreg_add_copy_action(struct rte_eth_dev *dev, uint32_t mark_id, } void -flow_dv_mreg_remove_cb(struct mlx5_hlist *list, struct mlx5_hlist_entry *entry) +flow_dv_mreg_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { struct mlx5_flow_mreg_copy_resource *mcp_res = - container_of(entry, typeof(*mcp_res), hlist_ent); - struct rte_eth_dev *dev = list->ctx; + container_of(entry, typeof(*mcp_res), hlist_ent); + struct rte_eth_dev *dev = tool_ctx; struct mlx5_priv *priv = dev->data->dev_private; MLX5_ASSERT(mcp_res->rix_flow); @@ -4206,14 +4236,17 @@ flow_mreg_del_copy_action(struct rte_eth_dev *dev, static void flow_mreg_del_default_copy_action(struct rte_eth_dev *dev) { - struct mlx5_hlist_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_cb_ctx ctx; + uint32_t mark_id; /* Check if default flow is registered. */ if (!priv->mreg_cp_tbl) return; - entry = mlx5_hlist_lookup(priv->mreg_cp_tbl, - MLX5_DEFAULT_COPY_ID, NULL); + mark_id = MLX5_DEFAULT_COPY_ID; + ctx.data = &mark_id; + entry = mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id, &ctx); if (!entry) return; mlx5_hlist_unregister(priv->mreg_cp_tbl, entry); @@ -4239,6 +4272,8 @@ flow_mreg_add_default_copy_action(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_mreg_copy_resource *mcp_res; + struct mlx5_flow_cb_ctx ctx; + uint32_t mark_id; /* Check whether extensive metadata feature is engaged. */ if (!priv->config.dv_flow_en || @@ -4250,9 +4285,11 @@ flow_mreg_add_default_copy_action(struct rte_eth_dev *dev, * Add default mreg copy flow may be called multiple time, but * only be called once in stop. Avoid register it twice. */ - if (mlx5_hlist_lookup(priv->mreg_cp_tbl, MLX5_DEFAULT_COPY_ID, NULL)) + mark_id = MLX5_DEFAULT_COPY_ID; + ctx.data = &mark_id; + if (mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id, &ctx)) return 0; - mcp_res = flow_mreg_add_copy_action(dev, MLX5_DEFAULT_COPY_ID, error); + mcp_res = flow_mreg_add_copy_action(dev, mark_id, error); if (!mcp_res) return -rte_errno; return 0; @@ -8350,7 +8387,7 @@ tunnel_mark_decode(struct rte_eth_dev *dev, uint32_t mark) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_hlist_entry *he; + struct mlx5_list_entry *he; union tunnel_offload_mark mbits = { .val = mark }; union mlx5_flow_tbl_key table_key = { { @@ -8362,16 +8399,20 @@ tunnel_mark_decode(struct rte_eth_dev *dev, uint32_t mark) .is_egress = 0, } }; - he = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); + struct mlx5_flow_cb_ctx ctx = { + .data = &table_key.v64, + }; + + he = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, &ctx); return he ? 
container_of(he, struct mlx5_flow_tbl_data_entry, entry) : NULL; } static void -mlx5_flow_tunnel_grp2tbl_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry) +mlx5_flow_tunnel_grp2tbl_remove_cb(void *tool_ctx, + struct mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct tunnel_tbl_entry *tte = container_of(entry, typeof(*tte), hash); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TNL_TBL_ID], @@ -8380,26 +8421,26 @@ mlx5_flow_tunnel_grp2tbl_remove_cb(struct mlx5_hlist *list, } static int -mlx5_flow_tunnel_grp2tbl_match_cb(struct mlx5_hlist *list __rte_unused, - struct mlx5_hlist_entry *entry, - uint64_t key, void *cb_ctx __rte_unused) +mlx5_flow_tunnel_grp2tbl_match_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { + struct mlx5_flow_cb_ctx *ctx = cb_ctx; union tunnel_tbl_key tbl = { - .val = key, + .val = *(uint64_t *)(ctx->data), }; struct tunnel_tbl_entry *tte = container_of(entry, typeof(*tte), hash); return tbl.tunnel_id != tte->tunnel_id || tbl.group != tte->group; } -static struct mlx5_hlist_entry * -mlx5_flow_tunnel_grp2tbl_create_cb(struct mlx5_hlist *list, uint64_t key, - void *ctx __rte_unused) +static struct mlx5_list_entry * +mlx5_flow_tunnel_grp2tbl_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct tunnel_tbl_entry *tte; union tunnel_tbl_key tbl = { - .val = key, + .val = *(uint64_t *)(ctx->data), }; tte = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, @@ -8428,13 +8469,36 @@ mlx5_flow_tunnel_grp2tbl_create_cb(struct mlx5_hlist *list, uint64_t key, return NULL; } +static struct mlx5_list_entry * +mlx5_flow_tunnel_grp2tbl_clone_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *oentry, + void *cb_ctx __rte_unused) +{ + struct tunnel_tbl_entry *tte = mlx5_malloc(MLX5_MEM_SYS, sizeof(*tte), + 0, SOCKET_ID_ANY); + + if (!tte) + return NULL; + memcpy(tte, oentry, sizeof(*tte)); + return &tte->hash; +} + +static void +mlx5_flow_tunnel_grp2tbl_clone_free_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry) +{ + struct tunnel_tbl_entry *tte = container_of(entry, typeof(*tte), hash); + + mlx5_free(tte); +} + static uint32_t tunnel_flow_group_to_flow_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, struct rte_flow_error *error) { - struct mlx5_hlist_entry *he; + struct mlx5_list_entry *he; struct tunnel_tbl_entry *tte; union tunnel_tbl_key key = { .tunnel_id = tunnel ? tunnel->tunnel_id : 0, @@ -8442,9 +8506,12 @@ tunnel_flow_group_to_flow_table(struct rte_eth_dev *dev, }; struct mlx5_flow_tunnel_hub *thub = mlx5_tunnel_hub(dev); struct mlx5_hlist *group_hash; + struct mlx5_flow_cb_ctx ctx = { + .data = &key.val, + }; group_hash = tunnel ? 
tunnel->groups : thub->groups; - he = mlx5_hlist_register(group_hash, key.val, NULL); + he = mlx5_hlist_register(group_hash, key.val, &ctx); if (!he) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR_GROUP, @@ -8558,15 +8625,17 @@ mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev, DRV_LOG(ERR, "Tunnel ID %d exceed max limit.", id); return NULL; } - tunnel->groups = mlx5_hlist_create("tunnel groups", 1024, 0, 0, + tunnel->groups = mlx5_hlist_create("tunnel groups", 1024, false, true, + priv->sh, mlx5_flow_tunnel_grp2tbl_create_cb, mlx5_flow_tunnel_grp2tbl_match_cb, - mlx5_flow_tunnel_grp2tbl_remove_cb); + mlx5_flow_tunnel_grp2tbl_remove_cb, + mlx5_flow_tunnel_grp2tbl_clone_cb, + mlx5_flow_tunnel_grp2tbl_clone_free_cb); if (!tunnel->groups) { mlx5_ipool_free(ipool, id); return NULL; } - tunnel->groups->ctx = priv->sh; /* initiate new PMD tunnel */ memcpy(&tunnel->app_tunnel, app_tunnel, sizeof(*app_tunnel)); tunnel->tunnel_id = id; @@ -8666,15 +8735,17 @@ int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh) LIST_INIT(&thub->tunnels); rte_spinlock_init(&thub->sl); thub->groups = mlx5_hlist_create("flow groups", - rte_align32pow2(MLX5_MAX_TABLES), 0, - 0, mlx5_flow_tunnel_grp2tbl_create_cb, + rte_align32pow2(MLX5_MAX_TABLES), + false, true, sh, + mlx5_flow_tunnel_grp2tbl_create_cb, mlx5_flow_tunnel_grp2tbl_match_cb, - mlx5_flow_tunnel_grp2tbl_remove_cb); + mlx5_flow_tunnel_grp2tbl_remove_cb, + mlx5_flow_tunnel_grp2tbl_clone_cb, + mlx5_flow_tunnel_grp2tbl_clone_free_cb); if (!thub->groups) { err = -rte_errno; goto err; } - thub->groups->ctx = sh; sh->tunnel_hub = thub; return 0; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index ce4d205e86..ab4e8c5c4f 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -480,7 +480,7 @@ struct mlx5_flow_dv_matcher { /* Encap/decap resource structure. */ struct mlx5_flow_dv_encap_decap_resource { - struct mlx5_hlist_entry entry; + struct mlx5_list_entry entry; /* Pointer to next element. */ uint32_t refcnt; /**< Reference counter. */ void *action; @@ -495,7 +495,7 @@ struct mlx5_flow_dv_encap_decap_resource { /* Tag resource structure. */ struct mlx5_flow_dv_tag_resource { - struct mlx5_hlist_entry entry; + struct mlx5_list_entry entry; /**< hash list entry for tag resource, tag value as the key. */ void *action; /**< Tag action object. */ @@ -519,7 +519,7 @@ struct mlx5_flow_dv_tag_resource { /* Modify resource structure */ struct mlx5_flow_dv_modify_hdr_resource { - struct mlx5_hlist_entry entry; + struct mlx5_list_entry entry; void *action; /**< Modify header action object. */ /* Key area for hash list matching: */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ @@ -569,7 +569,7 @@ struct mlx5_flow_mreg_copy_resource { * - Key is 32/64-bit MARK action ID. * - MUST be the first entry. */ - struct mlx5_hlist_entry hlist_ent; + struct mlx5_list_entry hlist_ent; LIST_ENTRY(mlx5_flow_mreg_copy_resource) next; /* List entry for device flows. */ uint32_t idx; @@ -586,7 +586,7 @@ struct mlx5_flow_tbl_tunnel_prm { /* Table data structure of the hash organization. */ struct mlx5_flow_tbl_data_entry { - struct mlx5_hlist_entry entry; + struct mlx5_list_entry entry; /**< hash list entry, 64-bits key inside. */ struct mlx5_flow_tbl_resource tbl; /**< flow table resource. 
*/ @@ -926,7 +926,7 @@ struct mlx5_flow_tunnel_hub { /* convert jump group to flow table ID in tunnel rules */ struct tunnel_tbl_entry { - struct mlx5_hlist_entry hash; + struct mlx5_list_entry hash; uint32_t flow_table; uint32_t tunnel_id; uint32_t group; @@ -1573,110 +1573,105 @@ int mlx5_action_handle_flush(struct rte_eth_dev *dev); void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id); int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh); -/* Hash list callbacks for flow tables: */ -struct mlx5_hlist_entry *flow_dv_tbl_create_cb(struct mlx5_hlist *list, - uint64_t key, void *entry_ctx); -int flow_dv_tbl_match_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry, uint64_t key, +struct mlx5_list_entry *flow_dv_tbl_create_cb(void *tool_ctx, void *entry_ctx); +int flow_dv_tbl_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_tbl_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry); +void flow_dv_tbl_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_tbl_clone_cb(void *tool_ctx, + struct mlx5_list_entry *oentry, + void *entry_ctx); +void flow_dv_tbl_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); struct mlx5_flow_tbl_resource *flow_dv_tbl_resource_get(struct rte_eth_dev *dev, uint32_t table_level, uint8_t egress, uint8_t transfer, bool external, const struct mlx5_flow_tunnel *tunnel, uint32_t group_id, uint8_t dummy, uint32_t table_id, struct rte_flow_error *error); -struct mlx5_hlist_entry *flow_dv_tag_create_cb(struct mlx5_hlist *list, - uint64_t key, void *cb_ctx); -int flow_dv_tag_match_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry, uint64_t key, +struct mlx5_list_entry *flow_dv_tag_create_cb(void *tool_ctx, void *cb_ctx); +int flow_dv_tag_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_tag_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry); - -int flow_dv_modify_match_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry, - uint64_t key, void *cb_ctx); -struct mlx5_hlist_entry *flow_dv_modify_create_cb(struct mlx5_hlist *list, - uint64_t key, void *ctx); -void flow_dv_modify_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry); - -struct mlx5_hlist_entry *flow_dv_mreg_create_cb(struct mlx5_hlist *list, - uint64_t key, void *ctx); -int flow_dv_mreg_match_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry, uint64_t key, +void flow_dv_tag_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_tag_clone_cb(void *tool_ctx, + struct mlx5_list_entry *oentry, + void *cb_ctx); +void flow_dv_tag_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); + +int flow_dv_modify_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, + void *cb_ctx); +struct mlx5_list_entry *flow_dv_modify_create_cb(void *tool_ctx, void *ctx); +void flow_dv_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_modify_clone_cb(void *tool_ctx, + struct mlx5_list_entry *oentry, + void *ctx); +void flow_dv_modify_clone_free_cb(void *tool_ctx, + struct mlx5_list_entry *entry); + +struct mlx5_list_entry *flow_dv_mreg_create_cb(void *tool_ctx, void *ctx); +int flow_dv_mreg_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_mreg_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry); - -int flow_dv_encap_decap_match_cb(struct mlx5_hlist *list, - struct 
mlx5_hlist_entry *entry, - uint64_t key, void *cb_ctx); -struct mlx5_hlist_entry *flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, - uint64_t key, void *cb_ctx); -void flow_dv_encap_decap_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry); - -int flow_dv_matcher_match_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, void *ctx); -struct mlx5_list_entry *flow_dv_matcher_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, - void *ctx); -void flow_dv_matcher_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry); +void flow_dv_mreg_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_mreg_clone_cb(void *tool_ctx, + struct mlx5_list_entry *entry, + void *ctx); +void flow_dv_mreg_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); -int flow_dv_port_id_match_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_port_id_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, - void *cb_ctx); -void flow_dv_port_id_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_port_id_clone_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, +int flow_dv_encap_decap_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_port_id_clone_free_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused); -int flow_dv_push_vlan_match_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_push_vlan_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, - void *cb_ctx); -void flow_dv_push_vlan_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_push_vlan_clone_cb - (struct mlx5_list *list, - struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_push_vlan_clone_free_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry); - -int flow_dv_sample_match_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_sample_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, - void *cb_ctx); -void flow_dv_sample_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_sample_clone_cb - (struct mlx5_list *list, - struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_sample_clone_free_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry); - -int flow_dv_dest_array_match_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, void *cb_ctx); -struct mlx5_list_entry *flow_dv_dest_array_create_cb(struct mlx5_list *list, +struct mlx5_list_entry *flow_dv_encap_decap_create_cb(void *tool_ctx, + void *cb_ctx); +void flow_dv_encap_decap_remove_cb(void *tool_ctx, + struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_dest_array_remove_cb(struct mlx5_list *list, +void flow_dv_encap_decap_clone_free_cb(void *tool_ctx, + struct mlx5_list_entry *entry); + +int flow_dv_matcher_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, + void *ctx); +struct mlx5_list_entry *flow_dv_matcher_create_cb(void *tool_ctx, void *ctx); +void flow_dv_matcher_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); + +int flow_dv_port_id_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, + void *cb_ctx); +struct 
mlx5_list_entry *flow_dv_port_id_create_cb(void *tool_ctx, void *cb_ctx); +void flow_dv_port_id_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_port_id_clone_cb(void *tool_ctx, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx); +void flow_dv_port_id_clone_free_cb(void *tool_ctx, + struct mlx5_list_entry *entry __rte_unused); + +int flow_dv_push_vlan_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, + void *cb_ctx); +struct mlx5_list_entry *flow_dv_push_vlan_create_cb(void *tool_ctx, + void *cb_ctx); +void flow_dv_push_vlan_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_push_vlan_clone_cb(void *tool_ctx, + struct mlx5_list_entry *entry, void *cb_ctx); +void flow_dv_push_vlan_clone_free_cb(void *tool_ctx, + struct mlx5_list_entry *entry); + +int flow_dv_sample_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, + void *cb_ctx); +struct mlx5_list_entry *flow_dv_sample_create_cb(void *tool_ctx, void *cb_ctx); +void flow_dv_sample_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_sample_clone_cb(void *tool_ctx, + struct mlx5_list_entry *entry, void *cb_ctx); +void flow_dv_sample_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry); -struct mlx5_list_entry *flow_dv_dest_array_clone_cb - (struct mlx5_list *list, - struct mlx5_list_entry *entry, void *cb_ctx); -void flow_dv_dest_array_clone_free_cb(struct mlx5_list *list, + +int flow_dv_dest_array_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, + void *cb_ctx); +struct mlx5_list_entry *flow_dv_dest_array_create_cb(void *tool_ctx, + void *cb_ctx); +void flow_dv_dest_array_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *flow_dv_dest_array_clone_cb(void *tool_ctx, + struct mlx5_list_entry *entry, void *cb_ctx); +void flow_dv_dest_array_clone_free_cb(void *tool_ctx, + struct mlx5_list_entry *entry); + struct mlx5_aso_age_action *flow_aso_age_get_by_idx(struct rte_eth_dev *dev, uint32_t age_idx); int flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index d588e6fd37..dbe98823bf 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3580,25 +3580,9 @@ flow_dv_validate_action_aso_ct(struct rte_eth_dev *dev, return 0; } -/** - * Match encap_decap resource. - * - * @param list - * Pointer to the hash list. - * @param entry - * Pointer to exist resource entry object. - * @param key - * Key of the new entry. - * @param ctx_cb - * Pointer to new encap_decap resource. - * - * @return - * 0 on matching, none-zero otherwise. - */ int -flow_dv_encap_decap_match_cb(struct mlx5_hlist *list __rte_unused, - struct mlx5_hlist_entry *entry, - uint64_t key __rte_unused, void *cb_ctx) +flow_dv_encap_decap_match_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_encap_decap_resource *ctx_resource = ctx->data; @@ -3617,25 +3601,10 @@ flow_dv_encap_decap_match_cb(struct mlx5_hlist *list __rte_unused, return -1; } -/** - * Allocate encap_decap resource. - * - * @param list - * Pointer to the hash list. - * @param entry - * Pointer to exist resource entry object. - * @param ctx_cb - * Pointer to new encap_decap resource. - * - * @return - * 0 on matching, none-zero otherwise. 
- */ -struct mlx5_hlist_entry * -flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, - uint64_t key __rte_unused, - void *cb_ctx) +struct mlx5_list_entry * +flow_dv_encap_decap_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5dv_dr_domain *domain; struct mlx5_flow_dv_encap_decap_resource *ctx_resource = ctx->data; @@ -3673,6 +3642,38 @@ flow_dv_encap_decap_create_cb(struct mlx5_hlist *list, return &resource->entry; } +struct mlx5_list_entry * +flow_dv_encap_decap_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = tool_ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_encap_decap_resource *cache_resource; + uint32_t idx; + + cache_resource = mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], + &idx); + if (!cache_resource) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate resource memory"); + return NULL; + } + memcpy(cache_resource, oentry, sizeof(*cache_resource)); + cache_resource->idx = idx; + return &cache_resource->entry; +} + +void +flow_dv_encap_decap_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = tool_ctx; + struct mlx5_flow_dv_encap_decap_resource *res = + container_of(entry, typeof(*res), entry); + + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_DECAP_ENCAP], res->idx); +} + /** * Find existing encap/decap resource or create and register a new one. * @@ -3697,7 +3698,7 @@ flow_dv_encap_decap_resource_register { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_dev_ctx_shared *sh = priv->sh; - struct mlx5_hlist_entry *entry; + struct mlx5_list_entry *entry; union { struct { uint32_t ft_type:8; @@ -3774,23 +3775,21 @@ flow_dv_jump_tbl_resource_register } int -flow_dv_port_id_match_cb(struct mlx5_list *list __rte_unused, +flow_dv_port_id_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_port_id_action_resource *ref = ctx->data; struct mlx5_flow_dv_port_id_action_resource *res = - container_of(entry, typeof(*res), entry); + container_of(entry, typeof(*res), entry); return ref->port_id != res->port_id; } struct mlx5_list_entry * -flow_dv_port_id_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_port_id_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_port_id_action_resource *ref = ctx->data; struct mlx5_flow_dv_port_id_action_resource *resource; @@ -3821,11 +3820,11 @@ flow_dv_port_id_create_cb(struct mlx5_list *list, } struct mlx5_list_entry * -flow_dv_port_id_clone_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_port_id_clone_cb(void *tool_ctx, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_port_id_action_resource *resource; uint32_t idx; @@ -3843,12 +3842,11 @@ flow_dv_port_id_clone_cb(struct mlx5_list *list, } void -flow_dv_port_id_clone_free_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry) +flow_dv_port_id_clone_free_cb(void *tool_ctx, struct 
mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_port_id_action_resource *resource = - container_of(entry, typeof(*resource), entry); + container_of(entry, typeof(*resource), entry); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PORT_ID], resource->idx); } @@ -3893,23 +3891,21 @@ flow_dv_port_id_action_resource_register } int -flow_dv_push_vlan_match_cb(struct mlx5_list *list __rte_unused, - struct mlx5_list_entry *entry, void *cb_ctx) +flow_dv_push_vlan_match_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_push_vlan_action_resource *ref = ctx->data; struct mlx5_flow_dv_push_vlan_action_resource *res = - container_of(entry, typeof(*res), entry); + container_of(entry, typeof(*res), entry); return ref->vlan_tag != res->vlan_tag || ref->ft_type != res->ft_type; } struct mlx5_list_entry * -flow_dv_push_vlan_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_push_vlan_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_push_vlan_action_resource *ref = ctx->data; struct mlx5_flow_dv_push_vlan_action_resource *resource; @@ -3946,11 +3942,11 @@ flow_dv_push_vlan_create_cb(struct mlx5_list *list, } struct mlx5_list_entry * -flow_dv_push_vlan_clone_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_push_vlan_clone_cb(void *tool_ctx, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_push_vlan_action_resource *resource; uint32_t idx; @@ -3968,12 +3964,11 @@ flow_dv_push_vlan_clone_cb(struct mlx5_list *list, } void -flow_dv_push_vlan_clone_free_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry) +flow_dv_push_vlan_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_push_vlan_action_resource *resource = - container_of(entry, typeof(*resource), entry); + container_of(entry, typeof(*resource), entry); mlx5_ipool_free(sh->ipool[MLX5_IPOOL_PUSH_VLAN], resource->idx); } @@ -5294,30 +5289,14 @@ flow_dv_validate_action_modify_ipv6_dscp(const uint64_t action_flags, return ret; } -/** - * Match modify-header resource. - * - * @param list - * Pointer to the hash list. - * @param entry - * Pointer to exist resource entry object. - * @param key - * Key of the new entry. - * @param ctx - * Pointer to new modify-header resource. - * - * @return - * 0 on matching, non-zero otherwise. 
- */ int -flow_dv_modify_match_cb(struct mlx5_hlist *list __rte_unused, - struct mlx5_hlist_entry *entry, - uint64_t key __rte_unused, void *cb_ctx) +flow_dv_modify_match_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data; struct mlx5_flow_dv_modify_hdr_resource *resource = - container_of(entry, typeof(*resource), entry); + container_of(entry, typeof(*resource), entry); uint32_t key_len = sizeof(*ref) - offsetof(typeof(*ref), ft_type); key_len += ref->actions_num * sizeof(ref->actions[0]); @@ -5325,11 +5304,10 @@ flow_dv_modify_match_cb(struct mlx5_hlist *list __rte_unused, memcmp(&ref->ft_type, &resource->ft_type, key_len); } -struct mlx5_hlist_entry * -flow_dv_modify_create_cb(struct mlx5_hlist *list, uint64_t key __rte_unused, - void *cb_ctx) +struct mlx5_list_entry * +flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5dv_dr_domain *ns; struct mlx5_flow_dv_modify_hdr_resource *entry; @@ -5368,6 +5346,33 @@ flow_dv_modify_create_cb(struct mlx5_hlist *list, uint64_t key __rte_unused, return &entry->entry; } +struct mlx5_list_entry * +flow_dv_modify_clone_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *oentry, void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_modify_hdr_resource *entry; + struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data; + uint32_t data_len = ref->actions_num * sizeof(ref->actions[0]); + + entry = mlx5_malloc(0, sizeof(*entry) + data_len, 0, SOCKET_ID_ANY); + if (!entry) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate resource memory"); + return NULL; + } + memcpy(entry, oentry, sizeof(*entry) + data_len); + return &entry->entry; +} + +void +flow_dv_modify_clone_free_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry) +{ + mlx5_free(entry); +} + /** * Validate the sample action. 
* @@ -5639,7 +5644,7 @@ flow_dv_modify_hdr_resource_register uint32_t key_len = sizeof(*resource) - offsetof(typeof(*resource), ft_type) + resource->actions_num * sizeof(resource->actions[0]); - struct mlx5_hlist_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = resource, @@ -9915,7 +9920,7 @@ flow_dv_matcher_enable(uint32_t *match_criteria) } static struct mlx5_list_entry * -flow_dv_matcher_clone_cb(struct mlx5_list *list __rte_unused, +flow_dv_matcher_clone_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -9938,22 +9943,22 @@ flow_dv_matcher_clone_cb(struct mlx5_list *list __rte_unused, } static void -flow_dv_matcher_clone_free_cb(struct mlx5_list *list __rte_unused, +flow_dv_matcher_clone_free_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry) { mlx5_free(entry); } -struct mlx5_hlist_entry * -flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) +struct mlx5_list_entry * +flow_dv_tbl_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct rte_eth_dev *dev = ctx->dev; struct mlx5_flow_tbl_data_entry *tbl_data; - struct mlx5_flow_tbl_tunnel_prm *tt_prm = ctx->data; + struct mlx5_flow_tbl_tunnel_prm *tt_prm = ctx->data2; struct rte_flow_error *error = ctx->error; - union mlx5_flow_tbl_key key = { .v64 = key64 }; + union mlx5_flow_tbl_key key = { .v64 = *(uint64_t *)(ctx->data) }; struct mlx5_flow_tbl_resource *tbl; void *domain; uint32_t idx = 0; @@ -10010,7 +10015,7 @@ flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) MKSTR(matcher_name, "%s_%s_%u_%u_matcher_list", key.is_fdb ? "FDB" : "NIC", key.is_egress ? 
"egress" : "ingress", key.level, key.id); - tbl_data->matchers = mlx5_list_create(matcher_name, sh, + tbl_data->matchers = mlx5_list_create(matcher_name, sh, true, flow_dv_matcher_create_cb, flow_dv_matcher_match_cb, flow_dv_matcher_remove_cb, @@ -10030,13 +10035,13 @@ flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx) } int -flow_dv_tbl_match_cb(struct mlx5_hlist *list __rte_unused, - struct mlx5_hlist_entry *entry, uint64_t key64, - void *cb_ctx __rte_unused) +flow_dv_tbl_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, + void *cb_ctx) { + struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_tbl_data_entry *tbl_data = container_of(entry, struct mlx5_flow_tbl_data_entry, entry); - union mlx5_flow_tbl_key key = { .v64 = key64 }; + union mlx5_flow_tbl_key key = { .v64 = *(uint64_t *)(ctx->data) }; return tbl_data->level != key.level || tbl_data->id != key.id || @@ -10045,6 +10050,39 @@ flow_dv_tbl_match_cb(struct mlx5_hlist *list __rte_unused, tbl_data->is_egress != !!key.is_egress; } +struct mlx5_list_entry * +flow_dv_tbl_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, + void *cb_ctx) +{ + struct mlx5_dev_ctx_shared *sh = tool_ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_tbl_data_entry *tbl_data; + struct rte_flow_error *error = ctx->error; + uint32_t idx = 0; + + tbl_data = mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_JUMP], &idx); + if (!tbl_data) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate flow table data entry"); + return NULL; + } + memcpy(tbl_data, oentry, sizeof(*tbl_data)); + tbl_data->idx = idx; + return &tbl_data->entry; +} + +void +flow_dv_tbl_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = tool_ctx; + struct mlx5_flow_tbl_data_entry *tbl_data = + container_of(entry, struct mlx5_flow_tbl_data_entry, entry); + + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_JUMP], tbl_data->idx); +} + /** * Get a flow table. 
* @@ -10095,9 +10133,10 @@ flow_dv_tbl_resource_get(struct rte_eth_dev *dev, struct mlx5_flow_cb_ctx ctx = { .dev = dev, .error = error, - .data = &tt_prm, + .data = &table_key.v64, + .data2 = &tt_prm, }; - struct mlx5_hlist_entry *entry; + struct mlx5_list_entry *entry; struct mlx5_flow_tbl_data_entry *tbl_data; entry = mlx5_hlist_register(priv->sh->flow_tbls, table_key.v64, &ctx); @@ -10116,12 +10155,11 @@ flow_dv_tbl_resource_get(struct rte_eth_dev *dev, } void -flow_dv_tbl_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry) +flow_dv_tbl_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_tbl_data_entry *tbl_data = - container_of(entry, struct mlx5_flow_tbl_data_entry, entry); + container_of(entry, struct mlx5_flow_tbl_data_entry, entry); MLX5_ASSERT(entry && sh); if (tbl_data->jump.action) @@ -10129,7 +10167,7 @@ flow_dv_tbl_remove_cb(struct mlx5_hlist *list, if (tbl_data->tbl.obj) mlx5_flow_os_destroy_flow_tbl(tbl_data->tbl.obj); if (tbl_data->tunnel_offload && tbl_data->external) { - struct mlx5_hlist_entry *he; + struct mlx5_list_entry *he; struct mlx5_hlist *tunnel_grp_hash; struct mlx5_flow_tunnel_hub *thub = sh->tunnel_hub; union tunnel_tbl_key tunnel_key = { @@ -10138,11 +10176,14 @@ flow_dv_tbl_remove_cb(struct mlx5_hlist *list, .group = tbl_data->group_id }; uint32_t table_level = tbl_data->level; + struct mlx5_flow_cb_ctx ctx = { + .data = (void *)&tunnel_key.val, + }; tunnel_grp_hash = tbl_data->tunnel ? tbl_data->tunnel->groups : thub->groups; - he = mlx5_hlist_lookup(tunnel_grp_hash, tunnel_key.val, NULL); + he = mlx5_hlist_lookup(tunnel_grp_hash, tunnel_key.val, &ctx); if (he) mlx5_hlist_unregister(tunnel_grp_hash, he); DRV_LOG(DEBUG, @@ -10181,7 +10222,7 @@ flow_dv_tbl_resource_release(struct mlx5_dev_ctx_shared *sh, } int -flow_dv_matcher_match_cb(struct mlx5_list *list __rte_unused, +flow_dv_matcher_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -10196,11 +10237,9 @@ flow_dv_matcher_match_cb(struct mlx5_list *list __rte_unused, } struct mlx5_list_entry * -flow_dv_matcher_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_matcher_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_matcher *ref = ctx->data; struct mlx5_flow_dv_matcher *resource; @@ -10297,29 +10336,29 @@ flow_dv_matcher_register(struct rte_eth_dev *dev, return 0; } -struct mlx5_hlist_entry * -flow_dv_tag_create_cb(struct mlx5_hlist *list, uint64_t key, void *ctx) +struct mlx5_list_entry * +flow_dv_tag_create_cb(void *tool_ctx, void *cb_ctx) { - struct mlx5_dev_ctx_shared *sh = list->ctx; - struct rte_flow_error *error = ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_tag_resource *entry; uint32_t idx = 0; int ret; entry = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_TAG], &idx); if (!entry) { - rte_flow_error_set(error, ENOMEM, + rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot allocate resource memory"); return NULL; } entry->idx = idx; - entry->tag_id = key; - ret = mlx5_flow_os_create_flow_action_tag(key, + entry->tag_id = *(uint32_t *)(ctx->data); + ret = mlx5_flow_os_create_flow_action_tag(entry->tag_id, 
&entry->action); if (ret) { mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TAG], idx); - rte_flow_error_set(error, ENOMEM, + rte_flow_error_set(ctx->error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, "cannot create action"); return NULL; @@ -10328,14 +10367,45 @@ flow_dv_tag_create_cb(struct mlx5_hlist *list, uint64_t key, void *ctx) } int -flow_dv_tag_match_cb(struct mlx5_hlist *list __rte_unused, - struct mlx5_hlist_entry *entry, uint64_t key, - void *cb_ctx __rte_unused) +flow_dv_tag_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, + void *cb_ctx) +{ + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_tag_resource *tag = + container_of(entry, struct mlx5_flow_dv_tag_resource, entry); + + return *(uint32_t *)(ctx->data) != tag->tag_id; +} + +struct mlx5_list_entry * +flow_dv_tag_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry, + void *cb_ctx) { + struct mlx5_dev_ctx_shared *sh = tool_ctx; + struct mlx5_flow_cb_ctx *ctx = cb_ctx; + struct mlx5_flow_dv_tag_resource *entry; + uint32_t idx = 0; + + entry = mlx5_ipool_malloc(sh->ipool[MLX5_IPOOL_TAG], &idx); + if (!entry) { + rte_flow_error_set(ctx->error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate tag resource memory"); + return NULL; + } + memcpy(entry, oentry, sizeof(*entry)); + entry->idx = idx; + return &entry->entry; +} + +void +flow_dv_tag_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) +{ + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_tag_resource *tag = - container_of(entry, struct mlx5_flow_dv_tag_resource, entry); + container_of(entry, struct mlx5_flow_dv_tag_resource, entry); - return key != tag->tag_id; + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_TAG], tag->idx); } /** @@ -10362,9 +10432,13 @@ flow_dv_tag_resource_register { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_dv_tag_resource *resource; - struct mlx5_hlist_entry *entry; + struct mlx5_list_entry *entry; + struct mlx5_flow_cb_ctx ctx = { + .error = error, + .data = &tag_be24, + }; - entry = mlx5_hlist_register(priv->sh->tag_table, tag_be24, error); + entry = mlx5_hlist_register(priv->sh->tag_table, tag_be24, &ctx); if (entry) { resource = container_of(entry, struct mlx5_flow_dv_tag_resource, entry); @@ -10376,12 +10450,11 @@ flow_dv_tag_resource_register } void -flow_dv_tag_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry) +flow_dv_tag_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_tag_resource *tag = - container_of(entry, struct mlx5_flow_dv_tag_resource, entry); + container_of(entry, struct mlx5_flow_dv_tag_resource, entry); MLX5_ASSERT(tag && sh && tag->action); claim_zero(mlx5_flow_os_destroy_flow_action(tag->action)); @@ -10696,7 +10769,7 @@ flow_dv_sample_sub_actions_release(struct rte_eth_dev *dev, } int -flow_dv_sample_match_cb(struct mlx5_list *list __rte_unused, +flow_dv_sample_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; @@ -10725,9 +10798,7 @@ flow_dv_sample_match_cb(struct mlx5_list *list __rte_unused, } struct mlx5_list_entry * -flow_dv_sample_create_cb(struct mlx5_list *list __rte_unused, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_sample_create_cb(void *tool_ctx __rte_unused, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct rte_eth_dev *dev = ctx->dev; @@ -10814,7 +10885,7 @@ 
flow_dv_sample_create_cb(struct mlx5_list *list __rte_unused, } struct mlx5_list_entry * -flow_dv_sample_clone_cb(struct mlx5_list *list __rte_unused, +flow_dv_sample_clone_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry __rte_unused, void *cb_ctx) { @@ -10840,16 +10911,15 @@ flow_dv_sample_clone_cb(struct mlx5_list *list __rte_unused, } void -flow_dv_sample_clone_free_cb(struct mlx5_list *list __rte_unused, +flow_dv_sample_clone_free_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry) { struct mlx5_flow_dv_sample_resource *resource = - container_of(entry, typeof(*resource), entry); + container_of(entry, typeof(*resource), entry); struct rte_eth_dev *dev = resource->dev; struct mlx5_priv *priv = dev->data->dev_private; - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_SAMPLE], - resource->idx); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_SAMPLE], resource->idx); } /** @@ -10892,14 +10962,14 @@ flow_dv_sample_resource_register(struct rte_eth_dev *dev, } int -flow_dv_dest_array_match_cb(struct mlx5_list *list __rte_unused, +flow_dv_dest_array_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_dv_dest_array_resource *ctx_resource = ctx->data; struct rte_eth_dev *dev = ctx->dev; struct mlx5_flow_dv_dest_array_resource *resource = - container_of(entry, typeof(*resource), entry); + container_of(entry, typeof(*resource), entry); uint32_t idx = 0; if (ctx_resource->num_of_dest == resource->num_of_dest && @@ -10921,9 +10991,7 @@ flow_dv_dest_array_match_cb(struct mlx5_list *list __rte_unused, } struct mlx5_list_entry * -flow_dv_dest_array_create_cb(struct mlx5_list *list __rte_unused, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_dest_array_create_cb(void *tool_ctx __rte_unused, void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct rte_eth_dev *dev = ctx->dev; @@ -11028,9 +11096,9 @@ flow_dv_dest_array_create_cb(struct mlx5_list *list __rte_unused, } struct mlx5_list_entry * -flow_dv_dest_array_clone_cb(struct mlx5_list *list __rte_unused, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +flow_dv_dest_array_clone_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry __rte_unused, + void *cb_ctx) { struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct rte_eth_dev *dev = ctx->dev; @@ -11056,8 +11124,8 @@ flow_dv_dest_array_clone_cb(struct mlx5_list *list __rte_unused, } void -flow_dv_dest_array_clone_free_cb(struct mlx5_list *list __rte_unused, - struct mlx5_list_entry *entry) +flow_dv_dest_array_clone_free_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry) { struct mlx5_flow_dv_dest_array_resource *resource = container_of(entry, typeof(*resource), entry); @@ -13531,7 +13599,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, } void -flow_dv_matcher_remove_cb(struct mlx5_list *list __rte_unused, +flow_dv_matcher_remove_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry) { struct mlx5_flow_dv_matcher *resource = container_of(entry, @@ -13568,19 +13636,10 @@ flow_dv_matcher_release(struct rte_eth_dev *dev, return ret; } -/** - * Release encap_decap resource. - * - * @param list - * Pointer to the hash list. - * @param entry - * Pointer to exist resource entry object. 
- */ void -flow_dv_encap_decap_remove_cb(struct mlx5_hlist *list, - struct mlx5_hlist_entry *entry) +flow_dv_encap_decap_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_encap_decap_resource *res = container_of(entry, typeof(*res), entry); @@ -13640,8 +13699,8 @@ flow_dv_jump_tbl_resource_release(struct rte_eth_dev *dev, } void -flow_dv_modify_remove_cb(struct mlx5_hlist *list __rte_unused, - struct mlx5_hlist_entry *entry) +flow_dv_modify_remove_cb(void *tool_ctx __rte_unused, + struct mlx5_list_entry *entry) { struct mlx5_flow_dv_modify_hdr_resource *res = container_of(entry, typeof(*res), entry); @@ -13673,10 +13732,9 @@ flow_dv_modify_hdr_resource_release(struct rte_eth_dev *dev, } void -flow_dv_port_id_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry) +flow_dv_port_id_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_port_id_action_resource *resource = container_of(entry, typeof(*resource), entry); @@ -13730,10 +13788,9 @@ flow_dv_shared_rss_action_release(struct rte_eth_dev *dev, uint32_t srss) } void -flow_dv_push_vlan_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry) +flow_dv_push_vlan_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct mlx5_dev_ctx_shared *sh = list->ctx; + struct mlx5_dev_ctx_shared *sh = tool_ctx; struct mlx5_flow_dv_push_vlan_action_resource *resource = container_of(entry, typeof(*resource), entry); @@ -13802,7 +13859,7 @@ flow_dv_fate_resource_release(struct rte_eth_dev *dev, } void -flow_dv_sample_remove_cb(struct mlx5_list *list __rte_unused, +flow_dv_sample_remove_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry) { struct mlx5_flow_dv_sample_resource *resource = container_of(entry, @@ -13850,7 +13907,7 @@ flow_dv_sample_resource_release(struct rte_eth_dev *dev, } void -flow_dv_dest_array_remove_cb(struct mlx5_list *list __rte_unused, +flow_dv_dest_array_remove_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry) { struct mlx5_flow_dv_dest_array_resource *resource = diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h index 5450ddd388..3f2b99fb65 100644 --- a/drivers/net/mlx5/mlx5_rx.h +++ b/drivers/net/mlx5/mlx5_rx.h @@ -222,17 +222,14 @@ int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, struct mlx5_ind_table_obj *ind_tbl, uint16_t *queues, const uint32_t queues_n, bool standalone); -struct mlx5_list_entry *mlx5_hrxq_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, void *cb_ctx); -int mlx5_hrxq_match_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, +struct mlx5_list_entry *mlx5_hrxq_create_cb(void *tool_ctx, void *cb_ctx); +int mlx5_hrxq_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx); -void mlx5_hrxq_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry); -struct mlx5_list_entry *mlx5_hrxq_clone_cb(struct mlx5_list *list, +void mlx5_hrxq_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry); +struct mlx5_list_entry *mlx5_hrxq_clone_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx __rte_unused); -void mlx5_hrxq_clone_free_cb(struct mlx5_list *list, +void mlx5_hrxq_clone_free_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry); uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev, struct mlx5_flow_rss_desc *rss_desc); diff --git 
a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index aa9e973d10..7893b3edd4 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2093,25 +2093,10 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev, return ret; } -/** - * Match an Rx Hash queue. - * - * @param list - * mlx5 list pointer. - * @param entry - * Hash queue entry pointer. - * @param cb_ctx - * Context of the callback function. - * - * @return - * 0 if match, none zero if not match. - */ int -mlx5_hrxq_match_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, - void *cb_ctx) +mlx5_hrxq_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx) { - struct rte_eth_dev *dev = list->ctx; + struct rte_eth_dev *dev = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_rss_desc *rss_desc = ctx->data; struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry); @@ -2251,10 +2236,9 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq) * Hash queue entry pointer. */ void -mlx5_hrxq_remove_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry) +mlx5_hrxq_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct rte_eth_dev *dev = list->ctx; + struct rte_eth_dev *dev = tool_ctx; struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry); __mlx5_hrxq_remove(dev, hrxq); @@ -2305,25 +2289,10 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev, return NULL; } -/** - * Create an Rx Hash queue. - * - * @param list - * mlx5 list pointer. - * @param entry - * Hash queue entry pointer. - * @param cb_ctx - * Context of the callback function. - * - * @return - * queue entry on success, NULL otherwise. - */ struct mlx5_list_entry * -mlx5_hrxq_create_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry __rte_unused, - void *cb_ctx) +mlx5_hrxq_create_cb(void *tool_ctx, void *cb_ctx) { - struct rte_eth_dev *dev = list->ctx; + struct rte_eth_dev *dev = tool_ctx; struct mlx5_flow_cb_ctx *ctx = cb_ctx; struct mlx5_flow_rss_desc *rss_desc = ctx->data; struct mlx5_hrxq *hrxq; @@ -2333,11 +2302,10 @@ mlx5_hrxq_create_cb(struct mlx5_list *list, } struct mlx5_list_entry * -mlx5_hrxq_clone_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry, +mlx5_hrxq_clone_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx __rte_unused) { - struct rte_eth_dev *dev = list->ctx; + struct rte_eth_dev *dev = tool_ctx; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_hrxq *hrxq; uint32_t hrxq_idx = 0; @@ -2351,10 +2319,9 @@ mlx5_hrxq_clone_cb(struct mlx5_list *list, } void -mlx5_hrxq_clone_free_cb(struct mlx5_list *list, - struct mlx5_list_entry *entry) +mlx5_hrxq_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry) { - struct rte_eth_dev *dev = list->ctx; + struct rte_eth_dev *dev = tool_ctx; struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry); diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index a4526444f9..94abe79860 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -8,257 +8,6 @@ #include "mlx5_utils.h" - -/********************* mlx5 list ************************/ - -struct mlx5_list * -mlx5_list_create(const char *name, void *ctx, - mlx5_list_create_cb cb_create, - mlx5_list_match_cb cb_match, - mlx5_list_remove_cb cb_remove, - mlx5_list_clone_cb cb_clone, - mlx5_list_clone_free_cb cb_clone_free) -{ - struct mlx5_list *list; - int i; - - if (!cb_match || !cb_create || !cb_remove || !cb_clone || - 
!cb_clone_free) { - rte_errno = EINVAL; - return NULL; - } - list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list), 0, SOCKET_ID_ANY); - if (!list) - return NULL; - if (name) - snprintf(list->name, sizeof(list->name), "%s", name); - list->ctx = ctx; - list->cb_create = cb_create; - list->cb_match = cb_match; - list->cb_remove = cb_remove; - list->cb_clone = cb_clone; - list->cb_clone_free = cb_clone_free; - rte_rwlock_init(&list->lock); - DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name); - for (i = 0; i <= RTE_MAX_LCORE; i++) - LIST_INIT(&list->cache[i].h); - return list; -} - -static struct mlx5_list_entry * -__list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) -{ - struct mlx5_list_entry *entry = LIST_FIRST(&list->cache[lcore_index].h); - uint32_t ret; - - while (entry != NULL) { - if (list->cb_match(list, entry, ctx) == 0) { - if (reuse) { - ret = __atomic_add_fetch(&entry->ref_cnt, 1, - __ATOMIC_RELAXED) - 1; - DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.", - list->name, (void *)entry, - entry->ref_cnt); - } else if (lcore_index < RTE_MAX_LCORE) { - ret = __atomic_load_n(&entry->ref_cnt, - __ATOMIC_RELAXED); - } - if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE)) - return entry; - if (reuse && ret == 0) - entry->ref_cnt--; /* Invalid entry. */ - } - entry = LIST_NEXT(entry, next); - } - return NULL; -} - -struct mlx5_list_entry * -mlx5_list_lookup(struct mlx5_list *list, void *ctx) -{ - struct mlx5_list_entry *entry = NULL; - int i; - - rte_rwlock_read_lock(&list->lock); - for (i = 0; i < RTE_MAX_LCORE; i++) { - entry = __list_lookup(list, i, ctx, false); - if (entry) - break; - } - rte_rwlock_read_unlock(&list->lock); - return entry; -} - -static struct mlx5_list_entry * -mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, - struct mlx5_list_entry *gentry, void *ctx) -{ - struct mlx5_list_entry *lentry = list->cb_clone(list, gentry, ctx); - - if (unlikely(!lentry)) - return NULL; - lentry->ref_cnt = 1u; - lentry->gentry = gentry; - lentry->lcore_idx = (uint32_t)lcore_index; - LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next); - return lentry; -} - -static void -__list_cache_clean(struct mlx5_list *list, int lcore_index) -{ - struct mlx5_list_cache *c = &list->cache[lcore_index]; - struct mlx5_list_entry *entry = LIST_FIRST(&c->h); - uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0, - __ATOMIC_RELAXED); - - while (inv_cnt != 0 && entry != NULL) { - struct mlx5_list_entry *nentry = LIST_NEXT(entry, next); - - if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) { - LIST_REMOVE(entry, next); - list->cb_clone_free(list, entry); - inv_cnt--; - } - entry = nentry; - } -} - -struct mlx5_list_entry * -mlx5_list_register(struct mlx5_list *list, void *ctx) -{ - struct mlx5_list_entry *entry, *local_entry; - volatile uint32_t prev_gen_cnt = 0; - int lcore_index = rte_lcore_index(rte_lcore_id()); - - MLX5_ASSERT(list); - MLX5_ASSERT(lcore_index < RTE_MAX_LCORE); - if (unlikely(lcore_index == -1)) { - rte_errno = ENOTSUP; - return NULL; - } - /* 0. Free entries that was invalidated by other lcores. */ - __list_cache_clean(list, lcore_index); - /* 1. Lookup in local cache. */ - local_entry = __list_lookup(list, lcore_index, ctx, true); - if (local_entry) - return local_entry; - /* 2. Lookup with read lock on global list, reuse if found. 
*/ - rte_rwlock_read_lock(&list->lock); - entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true); - if (likely(entry)) { - rte_rwlock_read_unlock(&list->lock); - return mlx5_list_cache_insert(list, lcore_index, entry, ctx); - } - prev_gen_cnt = list->gen_cnt; - rte_rwlock_read_unlock(&list->lock); - /* 3. Prepare new entry for global list and for cache. */ - entry = list->cb_create(list, entry, ctx); - if (unlikely(!entry)) - return NULL; - local_entry = list->cb_clone(list, entry, ctx); - if (unlikely(!local_entry)) { - list->cb_remove(list, entry); - return NULL; - } - entry->ref_cnt = 1u; - local_entry->ref_cnt = 1u; - local_entry->gentry = entry; - local_entry->lcore_idx = (uint32_t)lcore_index; - rte_rwlock_write_lock(&list->lock); - /* 4. Make sure the same entry was not created before the write lock. */ - if (unlikely(prev_gen_cnt != list->gen_cnt)) { - struct mlx5_list_entry *oentry = __list_lookup(list, - RTE_MAX_LCORE, - ctx, true); - - if (unlikely(oentry)) { - /* 4.5. Found real race!!, reuse the old entry. */ - rte_rwlock_write_unlock(&list->lock); - list->cb_remove(list, entry); - list->cb_clone_free(list, local_entry); - return mlx5_list_cache_insert(list, lcore_index, oentry, - ctx); - } - } - /* 5. Update lists. */ - LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, next); - list->gen_cnt++; - rte_rwlock_write_unlock(&list->lock); - LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next); - __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); - DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name, - (void *)entry, entry->ref_cnt); - return local_entry; -} - -int -mlx5_list_unregister(struct mlx5_list *list, - struct mlx5_list_entry *entry) -{ - struct mlx5_list_entry *gentry = entry->gentry; - int lcore_idx; - - if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0) - return 1; - lcore_idx = rte_lcore_index(rte_lcore_id()); - MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE); - if (entry->lcore_idx == (uint32_t)lcore_idx) { - LIST_REMOVE(entry, next); - list->cb_clone_free(list, entry); - } else if (likely(lcore_idx != -1)) { - __atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1, - __ATOMIC_RELAXED); - } else { - return 0; - } - if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0) - return 1; - rte_rwlock_write_lock(&list->lock); - if (likely(gentry->ref_cnt == 0)) { - LIST_REMOVE(gentry, next); - rte_rwlock_write_unlock(&list->lock); - list->cb_remove(list, gentry); - __atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED); - DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", - list->name, (void *)gentry); - return 0; - } - rte_rwlock_write_unlock(&list->lock); - return 1; -} - -void -mlx5_list_destroy(struct mlx5_list *list) -{ - struct mlx5_list_entry *entry; - int i; - - MLX5_ASSERT(list); - for (i = 0; i <= RTE_MAX_LCORE; i++) { - while (!LIST_EMPTY(&list->cache[i].h)) { - entry = LIST_FIRST(&list->cache[i].h); - LIST_REMOVE(entry, next); - if (i == RTE_MAX_LCORE) { - list->cb_remove(list, entry); - DRV_LOG(DEBUG, "mlx5 list %s entry %p " - "destroyed.", list->name, - (void *)entry); - } else { - list->cb_clone_free(list, entry); - } - } - } - mlx5_free(list); -} - -uint32_t -mlx5_list_get_entry_num(struct mlx5_list *list) -{ - MLX5_ASSERT(list); - return __atomic_load_n(&list->count, __ATOMIC_RELAXED); -} - /********************* Indexed pool **********************/ static inline void diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 0bf2f5f5ca..7d9b64c877 100644 --- 
a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -296,203 +296,6 @@ log2above(unsigned int v) return l + r; } -/************************ mlx5 list *****************************/ - -/** Maximum size of string for naming. */ -#define MLX5_NAME_SIZE 32 - -struct mlx5_list; - -/** - * Structure of the entry in the mlx5 list, user should define its own struct - * that contains this in order to store the data. - */ -struct mlx5_list_entry { - LIST_ENTRY(mlx5_list_entry) next; /* Entry pointers in the list. */ - uint32_t ref_cnt; /* 0 means, entry is invalid. */ - uint32_t lcore_idx; - struct mlx5_list_entry *gentry; -}; - -struct mlx5_list_cache { - LIST_HEAD(mlx5_list_head, mlx5_list_entry) h; - uint32_t inv_cnt; /* Invalid entries counter. */ -} __rte_cache_aligned; - -/** - * Type of callback function for entry removal. - * - * @param list - * The mlx5 list. - * @param entry - * The entry in the list. - */ -typedef void (*mlx5_list_remove_cb)(struct mlx5_list *list, - struct mlx5_list_entry *entry); - -/** - * Type of function for user defined matching. - * - * @param list - * The mlx5 list. - * @param entry - * The entry in the list. - * @param ctx - * The pointer to new entry context. - * - * @return - * 0 if matching, non-zero number otherwise. - */ -typedef int (*mlx5_list_match_cb)(struct mlx5_list *list, - struct mlx5_list_entry *entry, void *ctx); - -typedef struct mlx5_list_entry *(*mlx5_list_clone_cb) - (struct mlx5_list *list, - struct mlx5_list_entry *entry, void *ctx); - -typedef void (*mlx5_list_clone_free_cb)(struct mlx5_list *list, - struct mlx5_list_entry *entry); - -/** - * Type of function for user defined mlx5 list entry creation. - * - * @param list - * The mlx5 list. - * @param entry - * The new allocated entry, NULL if list entry size unspecified, - * New entry has to be allocated in callback and return. - * @param ctx - * The pointer to new entry context. - * - * @return - * Pointer of entry on success, NULL otherwise. - */ -typedef struct mlx5_list_entry *(*mlx5_list_create_cb) - (struct mlx5_list *list, - struct mlx5_list_entry *entry, - void *ctx); - -/** - * Linked mlx5 list structure. - * - * Entry in mlx5 list could be reused if entry already exists, - * reference count will increase and the existing entry returns. - * - * When destroy an entry from list, decrease reference count and only - * destroy when no further reference. - * - * Linked list is designed for limited number of entries, - * read mostly, less modification. - * - * For huge amount of entries, please consider hash list. - * - */ -struct mlx5_list { - char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */ - volatile uint32_t gen_cnt; - /* List modification will update generation count. */ - volatile uint32_t count; /* number of entries in list. */ - void *ctx; /* user objects target to callback. */ - rte_rwlock_t lock; /* read/write lock. */ - mlx5_list_create_cb cb_create; /**< entry create callback. */ - mlx5_list_match_cb cb_match; /**< entry match callback. */ - mlx5_list_remove_cb cb_remove; /**< entry remove callback. */ - mlx5_list_clone_cb cb_clone; /**< entry clone callback. */ - mlx5_list_clone_free_cb cb_clone_free; - struct mlx5_list_cache cache[RTE_MAX_LCORE + 1]; - /* Lcore cache, last index is the global cache. */ -}; - -/** - * Create a mlx5 list. - * - * @param list - * Pointer to the hast list table. - * @param name - * Name of the mlx5 list. - * @param ctx - * Pointer to the list context data. - * @param cb_create - * Callback function for entry create. 
- * @param cb_match - * Callback function for entry match. - * @param cb_remove - * Callback function for entry remove. - * @return - * List pointer on success, otherwise NULL. - */ -struct mlx5_list *mlx5_list_create(const char *name, void *ctx, - mlx5_list_create_cb cb_create, - mlx5_list_match_cb cb_match, - mlx5_list_remove_cb cb_remove, - mlx5_list_clone_cb cb_clone, - mlx5_list_clone_free_cb cb_clone_free); - -/** - * Search an entry matching the key. - * - * Result returned might be destroyed by other thread, must use - * this function only in main thread. - * - * @param list - * Pointer to the mlx5 list. - * @param ctx - * Common context parameter used by entry callback function. - * - * @return - * Pointer of the list entry if found, NULL otherwise. - */ -struct mlx5_list_entry *mlx5_list_lookup(struct mlx5_list *list, - void *ctx); - -/** - * Reuse or create an entry to the mlx5 list. - * - * @param list - * Pointer to the hast list table. - * @param ctx - * Common context parameter used by callback function. - * - * @return - * registered entry on success, NULL otherwise - */ -struct mlx5_list_entry *mlx5_list_register(struct mlx5_list *list, - void *ctx); - -/** - * Remove an entry from the mlx5 list. - * - * User should guarantee the validity of the entry. - * - * @param list - * Pointer to the hast list. - * @param entry - * Entry to be removed from the mlx5 list table. - * @return - * 0 on entry removed, 1 on entry still referenced. - */ -int mlx5_list_unregister(struct mlx5_list *list, - struct mlx5_list_entry *entry); - -/** - * Destroy the mlx5 list. - * - * @param list - * Pointer to the mlx5 list. - */ -void mlx5_list_destroy(struct mlx5_list *list); - -/** - * Get entry number from the mlx5 list. - * - * @param list - * Pointer to the hast list. - * @return - * mlx5 list entry number. 
- */
-uint32_t
-mlx5_list_get_entry_num(struct mlx5_list *list);
-
 /********************************* indexed pool *************************/
 
 /**
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 97a8f04e39..af2f684648 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -608,7 +608,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 		err = ENOTSUP;
 		goto error;
 	}
-	priv->hrxqs = mlx5_list_create("hrxq", eth_dev,
+	priv->hrxqs = mlx5_list_create("hrxq", eth_dev, true,
 				       mlx5_hrxq_create_cb, mlx5_hrxq_match_cb,
 				       mlx5_hrxq_remove_cb, mlx5_hrxq_clone_cb,
 				       mlx5_hrxq_clone_free_cb);
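For readers following the conversion above: this patch changes every list callback so that its first argument is the opaque "tool context" registered at creation time (the ctx argument of mlx5_list_create(), e.g. eth_dev or the shared device context) instead of the mlx5_list pointer itself, so the same callbacks can serve both mlx5_list and mlx5_hlist. A minimal standalone sketch of the convention; all names here (toy_entry, toy_match_cb) are purely illustrative, not driver code:

/*
 * Hypothetical illustration only. The old-style callback took the list
 * pointer and fetched user data via list->ctx; the new style receives
 * that user context (tool_ctx) directly.
 */
struct toy_entry {
	int key;
};

static int
toy_match_cb(void *tool_ctx, struct toy_entry *entry, void *cb_ctx)
{
	(void)tool_ctx; /* often unused, hence __rte_unused in the driver */
	return entry->key != *(const int *)cb_ctx; /* 0 means "match" */
}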
From patchwork Fri Jul 2 06:18:07 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95165
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:07 +0300
Message-ID: <20210702061816.10454-14-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 13/22] net/mlx5: move modify header allocator to ipool
From: Matan Azrad

Modify header actions are allocated by mlx5_malloc, which has a big
overhead in memory and allocation time. One of the action types under
the modify header object is SET_TAG. The SET_TAG action is commonly not
reused across flows, and each flow has its own value, so mlx5_malloc
becomes a bottleneck for the flow insertion rate in the common SET_TAG
cases.

Use the ipool allocator for the SET_TAG action instead. The ipool
allocator has lower memory overhead, a better insertion rate, and a
better synchronization mechanism in multithreaded cases. A different
ipool is created for each optional size of the modify header handler.

Signed-off-by: Matan Azrad
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5.c         |  4 ++
 drivers/net/mlx5/mlx5.h         | 14 ++++++
 drivers/net/mlx5/mlx5_flow.h    | 14 +-----
 drivers/net/mlx5/mlx5_flow_dv.c | 79 ++++++++++++++++++++++++++++-----
 4 files changed, 86 insertions(+), 25 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0e80408511..713accf675 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -801,6 +801,7 @@ mlx5_flow_ipool_create(struct mlx5_dev_ctx_shared *sh,
 	}
 }
 
+
 /**
  * Release the flow resources' indexed mempool.
  *
@@ -814,6 +815,9 @@ mlx5_flow_ipool_destroy(struct mlx5_dev_ctx_shared *sh)
 
 	for (i = 0; i < MLX5_IPOOL_MAX; ++i)
 		mlx5_ipool_destroy(sh->ipool[i]);
+	for (i = 0; i < MLX5_MAX_MODIFY_NUM; ++i)
+		if (sh->mdh_ipools[i])
+			mlx5_ipool_destroy(sh->mdh_ipools[i]);
 }
 
 /*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f3768ee028..c7239e1137 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -36,6 +36,19 @@
 
 #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh)
 
+/*
+ * Number of modification commands.
+ * The maximal actions amount in FW is some constant, and it is 16 in the
+ * latest releases. In some old releases, it will be limited to 8.
+ * Since there is no interface to query the capacity, the maximal value should
+ * be used to allow PMD to create the flow. The validation will be done in the
+ * lower driver layer or FW. A failure will be returned if exceeds the maximal
+ * supported actions number on the root table.
+ * On non-root tables, there is no limitation, but 32 is enough right now.
+ */
+#define MLX5_MAX_MODIFY_NUM 32
+#define MLX5_ROOT_TBL_MODIFY_NUM 16
+
 enum mlx5_ipool_index {
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 	MLX5_IPOOL_DECAP_ENCAP = 0, /* Pool for encap/decap resource. */
@@ -1123,6 +1136,7 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_flow_counter_mng cmng; /* Counters management structure. */
 	void *default_miss_action; /* Default miss action.
*/ struct mlx5_indexed_pool *ipool[MLX5_IPOOL_MAX]; + struct mlx5_indexed_pool *mdh_ipools[MLX5_MAX_MODIFY_NUM]; /* Memory Pool for mlx5 flow resources. */ struct mlx5_l3t_tbl *cnt_id_tbl; /* Shared counter lookup table. */ /* Shared interrupt handler section. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index ab4e8c5c4f..4552aaa803 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -504,23 +504,11 @@ struct mlx5_flow_dv_tag_resource { uint32_t tag_id; /**< Tag ID. */ }; -/* - * Number of modification commands. - * The maximal actions amount in FW is some constant, and it is 16 in the - * latest releases. In some old releases, it will be limited to 8. - * Since there is no interface to query the capacity, the maximal value should - * be used to allow PMD to create the flow. The validation will be done in the - * lower driver layer or FW. A failure will be returned if exceeds the maximal - * supported actions number on the root table. - * On non-root tables, there is no limitation, but 32 is enough right now. - */ -#define MLX5_MAX_MODIFY_NUM 32 -#define MLX5_ROOT_TBL_MODIFY_NUM 16 - /* Modify resource structure */ struct mlx5_flow_dv_modify_hdr_resource { struct mlx5_list_entry entry; void *action; /**< Modify header action object. */ + uint32_t idx; /* Key area for hash list matching: */ uint8_t ft_type; /**< Flow table type, Rx or Tx. */ uint8_t actions_num; /**< Number of modification actions. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index dbe98823bf..e702b78358 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -5304,6 +5304,45 @@ flow_dv_modify_match_cb(void *tool_ctx __rte_unused, memcmp(&ref->ft_type, &resource->ft_type, key_len); } +static struct mlx5_indexed_pool * +flow_dv_modify_ipool_get(struct mlx5_dev_ctx_shared *sh, uint8_t index) +{ + struct mlx5_indexed_pool *ipool = __atomic_load_n + (&sh->mdh_ipools[index], __ATOMIC_SEQ_CST); + + if (!ipool) { + struct mlx5_indexed_pool *expected = NULL; + struct mlx5_indexed_pool_config cfg = + (struct mlx5_indexed_pool_config) { + .size = sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + (index + 1) * + sizeof(struct mlx5_modification_cmd), + .trunk_size = 64, + .grow_trunk = 3, + .grow_shift = 2, + .need_lock = 1, + .release_mem_en = 1, + .malloc = mlx5_malloc, + .free = mlx5_free, + .type = "mlx5_modify_action_resource", + }; + + cfg.size = RTE_ALIGN(cfg.size, sizeof(ipool)); + ipool = mlx5_ipool_create(&cfg); + if (!ipool) + return NULL; + if (!__atomic_compare_exchange_n(&sh->mdh_ipools[index], + &expected, ipool, false, + __ATOMIC_SEQ_CST, + __ATOMIC_SEQ_CST)) { + mlx5_ipool_destroy(ipool); + ipool = __atomic_load_n(&sh->mdh_ipools[index], + __ATOMIC_SEQ_CST); + } + } + return ipool; +} + struct mlx5_list_entry * flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) { @@ -5312,12 +5351,20 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx) struct mlx5dv_dr_domain *ns; struct mlx5_flow_dv_modify_hdr_resource *entry; struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data; + struct mlx5_indexed_pool *ipool = flow_dv_modify_ipool_get(sh, + ref->actions_num - 1); int ret; uint32_t data_len = ref->actions_num * sizeof(ref->actions[0]); uint32_t key_len = sizeof(*ref) - offsetof(typeof(*ref), ft_type); + uint32_t idx; - entry = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*entry) + data_len, 0, - SOCKET_ID_ANY); + if (unlikely(!ipool)) { + rte_flow_error_set(ctx->error, ENOMEM, + 
				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "cannot allocate modify ipool");
+		return NULL;
+	}
+	entry = mlx5_ipool_zmalloc(ipool, &idx);
 	if (!entry) {
 		rte_flow_error_set(ctx->error, ENOMEM,
 				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -5337,25 +5384,29 @@ flow_dv_modify_create_cb(void *tool_ctx, void *cb_ctx)
 			(sh->ctx, ns, entry, data_len, &entry->action);
 	if (ret) {
-		mlx5_free(entry);
+		mlx5_ipool_free(sh->mdh_ipools[ref->actions_num - 1], idx);
 		rte_flow_error_set(ctx->error, ENOMEM,
 				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 				   "cannot create modification action");
 		return NULL;
 	}
+	entry->idx = idx;
 	return &entry->entry;
 }
 
 struct mlx5_list_entry *
-flow_dv_modify_clone_cb(void *tool_ctx __rte_unused,
-			struct mlx5_list_entry *oentry, void *cb_ctx)
+flow_dv_modify_clone_cb(void *tool_ctx, struct mlx5_list_entry *oentry,
+			void *cb_ctx)
 {
+	struct mlx5_dev_ctx_shared *sh = tool_ctx;
 	struct mlx5_flow_cb_ctx *ctx = cb_ctx;
 	struct mlx5_flow_dv_modify_hdr_resource *entry;
 	struct mlx5_flow_dv_modify_hdr_resource *ref = ctx->data;
 	uint32_t data_len = ref->actions_num * sizeof(ref->actions[0]);
+	uint32_t idx;
 
-	entry = mlx5_malloc(0, sizeof(*entry) + data_len, 0, SOCKET_ID_ANY);
+	entry = mlx5_ipool_malloc(sh->mdh_ipools[ref->actions_num - 1],
+				  &idx);
 	if (!entry) {
 		rte_flow_error_set(ctx->error, ENOMEM,
 				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -5363,14 +5414,18 @@ flow_dv_modify_clone_cb(void *tool_ctx __rte_unused,
 		return NULL;
 	}
 	memcpy(entry, oentry, sizeof(*entry) + data_len);
+	entry->idx = idx;
 	return &entry->entry;
 }
 
 void
-flow_dv_modify_clone_free_cb(void *tool_ctx __rte_unused,
-			     struct mlx5_list_entry *entry)
+flow_dv_modify_clone_free_cb(void *tool_ctx, struct mlx5_list_entry *entry)
 {
-	mlx5_free(entry);
+	struct mlx5_dev_ctx_shared *sh = tool_ctx;
+	struct mlx5_flow_dv_modify_hdr_resource *res =
+		container_of(entry, typeof(*res), entry);
+
+	mlx5_ipool_free(sh->mdh_ipools[res->actions_num - 1], res->idx);
 }
 
 /**
@@ -13699,14 +13754,14 @@ flow_dv_jump_tbl_resource_release(struct rte_eth_dev *dev,
 }
 
 void
-flow_dv_modify_remove_cb(void *tool_ctx __rte_unused,
-			 struct mlx5_list_entry *entry)
+flow_dv_modify_remove_cb(void *tool_ctx, struct mlx5_list_entry *entry)
 {
 	struct mlx5_flow_dv_modify_hdr_resource *res =
 		container_of(entry, typeof(*res), entry);
+	struct mlx5_dev_ctx_shared *sh = tool_ctx;
 
 	claim_zero(mlx5_flow_os_destroy_flow_action(res->action));
-	mlx5_free(entry);
+	mlx5_ipool_free(sh->mdh_ipools[res->actions_num - 1], res->idx);
 }
 
 /**
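The lock-free lazy creation in flow_dv_modify_ipool_get() above boils down to a compare-and-swap publication pattern. A condensed standalone sketch, assuming GCC/Clang __atomic builtins; pool_create()/pool_destroy() are hypothetical stand-ins for mlx5_ipool_create()/mlx5_ipool_destroy():

struct pool;
struct pool *pool_create(unsigned int elt_size);	/* stand-in */
void pool_destroy(struct pool *pool);			/* stand-in */

static struct pool *pools[32];	/* one pool per action-count bucket */

static struct pool *
pool_get(unsigned int idx, unsigned int elt_size)
{
	struct pool *p = __atomic_load_n(&pools[idx], __ATOMIC_SEQ_CST);

	if (p == NULL) {
		struct pool *expected = NULL;

		p = pool_create(elt_size);
		if (p == NULL)
			return NULL;
		/*
		 * Try to publish the new pool. If another thread won the
		 * race, "expected" now holds the winner's pointer: destroy
		 * ours and use that one instead.
		 */
		if (!__atomic_compare_exchange_n(&pools[idx], &expected, p,
						 false, __ATOMIC_SEQ_CST,
						 __ATOMIC_SEQ_CST)) {
			pool_destroy(p);
			p = expected;
		}
	}
	return p;
}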
From patchwork Fri Jul 2 06:18:08 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95166
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:08 +0300
Message-ID: <20210702061816.10454-15-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 14/22] net/mlx5: adjust the hash bucket size
With the new per core optimization to the list, the hash bucket size
can be tuned to a more accurate number. This commit adjusts the hash
bucket size.
Signed-off-by: Suanming Mou --- drivers/net/mlx5/linux/mlx5_os.c | 2 +- drivers/net/mlx5/mlx5.c | 2 +- drivers/net/mlx5/mlx5_defs.h | 6 +++--- drivers/net/mlx5/mlx5_flow.c | 5 ++--- 4 files changed, 7 insertions(+), 8 deletions(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index cf573a9a4d..a82dc4db00 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -50,7 +50,7 @@ #include "mlx5_nl.h" #include "mlx5_devx.h" -#define MLX5_TAGS_HLIST_ARRAY_SIZE 8192 +#define MLX5_TAGS_HLIST_ARRAY_SIZE (1 << 15) #ifndef HAVE_IBV_MLX5_MOD_MPW #define MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED (1 << 2) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 713accf675..8fb7f4442d 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -373,7 +373,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { #define MLX5_FLOW_MIN_ID_POOL_SIZE 512 #define MLX5_ID_GENERATION_ARRAY_FACTOR 16 -#define MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE 4096 +#define MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE 1024 /** * Decide whether representor ID is a HPF(host PF) port on BF2. diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 906aa43c5a..ca67ce8213 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -178,15 +178,15 @@ sizeof(struct rte_ipv4_hdr)) /* Size of the simple hash table for metadata register table. */ -#define MLX5_FLOW_MREG_HTABLE_SZ 4096 +#define MLX5_FLOW_MREG_HTABLE_SZ 64 #define MLX5_FLOW_MREG_HNAME "MARK_COPY_TABLE" #define MLX5_DEFAULT_COPY_ID UINT32_MAX /* Size of the simple hash table for header modify table. */ -#define MLX5_FLOW_HDR_MODIFY_HTABLE_SZ (1 << 16) +#define MLX5_FLOW_HDR_MODIFY_HTABLE_SZ (1 << 15) /* Size of the simple hash table for encap decap table. */ -#define MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ (1 << 16) +#define MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ (1 << 12) /* Hairpin TX/RX queue configuration parameters. 
 */
#define MLX5_HAIRPIN_QUEUE_STRIDE 6
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7bd45d3895..15a895ea48 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8625,7 +8625,7 @@ mlx5_flow_tunnel_allocate(struct rte_eth_dev *dev,
 		DRV_LOG(ERR, "Tunnel ID %d exceed max limit.", id);
 		return NULL;
 	}
-	tunnel->groups = mlx5_hlist_create("tunnel groups", 1024, false, true,
+	tunnel->groups = mlx5_hlist_create("tunnel groups", 64, false, true,
 					   priv->sh,
 					   mlx5_flow_tunnel_grp2tbl_create_cb,
 					   mlx5_flow_tunnel_grp2tbl_match_cb,
@@ -8734,8 +8734,7 @@ int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh)
 		return -ENOMEM;
 	LIST_INIT(&thub->tunnels);
 	rte_spinlock_init(&thub->sl);
-	thub->groups = mlx5_hlist_create("flow groups",
-					 rte_align32pow2(MLX5_MAX_TABLES),
+	thub->groups = mlx5_hlist_create("flow groups", 64,
 					 false, true, sh,
 					 mlx5_flow_tunnel_grp2tbl_create_cb,
 					 mlx5_flow_tunnel_grp2tbl_match_cb,
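The bucket counts above are all powers of two because mlx5_hlist selects a bucket by masking the hash rather than by modulo; the h->mask = act_size - 1 assignment in the hlist code later in this series suggests non-power-of-two requests are rounded up. A minimal sketch of that indexing scheme; toy_align32pow2() is an illustrative helper with the same contract as DPDK's rte_align32pow2():

#include <stdint.h>

/* Next power of two >= v (for v >= 1), like rte_align32pow2(). */
static inline uint32_t
toy_align32pow2(uint32_t v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v + 1;
}

/* Bucket selection by masking: one AND instead of a division. */
static inline uint32_t
toy_bucket_idx(uint64_t hash, uint32_t size)
{
	return (uint32_t)hash & (toy_align32pow2(size) - 1);
}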
From patchwork Fri Jul 2 06:18:09 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95168
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:09 +0300
Message-ID: <20210702061816.10454-16-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 15/22] common/mlx5: allocate cache list memory individually
Currently, the list's local cache instance memory is allocated together
with the list. Since the local cache instance array size is
RTE_MAX_LCORE while most systems only use a limited number of cores,
allocating the instance memory individually per core is more economical.

This commit changes the instance array to a pointer array and allocates
the local cache memory only when a core actually uses the list.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 62 ++++++++++++++++++-------
 drivers/common/mlx5/mlx5_common_utils.h |  2 +-
 2 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 4e385c616a..f75b1cb0da 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -15,14 +15,13 @@
 
 static int
 mlx5_list_init(struct mlx5_list *list, const char *name, void *ctx,
-	       bool lcores_share, mlx5_list_create_cb cb_create,
+	       bool lcores_share, struct mlx5_list_cache *gc,
+	       mlx5_list_create_cb cb_create,
 	       mlx5_list_match_cb cb_match,
 	       mlx5_list_remove_cb cb_remove,
 	       mlx5_list_clone_cb cb_clone,
 	       mlx5_list_clone_free_cb cb_clone_free)
 {
-	int i;
-
 	if (!cb_match || !cb_create || !cb_remove || !cb_clone ||
 	    !cb_clone_free) {
 		rte_errno = EINVAL;
@@ -38,9 +37,11 @@ mlx5_list_init(struct mlx5_list *list, const char *name, void *ctx,
 	list->cb_clone = cb_clone;
 	list->cb_clone_free = cb_clone_free;
 	rte_rwlock_init(&list->lock);
+	if (lcores_share) {
+		list->cache[RTE_MAX_LCORE] = gc;
+		LIST_INIT(&list->cache[RTE_MAX_LCORE]->h);
+	}
 	DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name);
-	for (i = 0; i <= RTE_MAX_LCORE; i++)
-		LIST_INIT(&list->cache[i].h);
 	return 0;
 }
 
@@ -53,11 +54,16 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share,
 		 mlx5_list_clone_free_cb cb_clone_free)
 {
 	struct mlx5_list *list;
+	struct mlx5_list_cache *gc = NULL;
 
-	list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list), 0, SOCKET_ID_ANY);
+	list = mlx5_malloc(MLX5_MEM_ZERO,
+			   sizeof(*list) + (lcores_share ?
sizeof(*gc) : 0), + 0, SOCKET_ID_ANY); if (!list) return NULL; - if (mlx5_list_init(list, name, ctx, lcores_share, + if (lcores_share) + gc = (struct mlx5_list_cache *)(list + 1); + if (mlx5_list_init(list, name, ctx, lcores_share, gc, cb_create, cb_match, cb_remove, cb_clone, cb_clone_free) != 0) { mlx5_free(list); @@ -69,7 +75,8 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share, static struct mlx5_list_entry * __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) { - struct mlx5_list_entry *entry = LIST_FIRST(&list->cache[lcore_index].h); + struct mlx5_list_entry *entry = + LIST_FIRST(&list->cache[lcore_index]->h); uint32_t ret; while (entry != NULL) { @@ -121,14 +128,14 @@ mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, lentry->ref_cnt = 1u; lentry->gentry = gentry; lentry->lcore_idx = (uint32_t)lcore_index; - LIST_INSERT_HEAD(&list->cache[lcore_index].h, lentry, next); + LIST_INSERT_HEAD(&list->cache[lcore_index]->h, lentry, next); return lentry; } static void __list_cache_clean(struct mlx5_list *list, int lcore_index) { - struct mlx5_list_cache *c = &list->cache[lcore_index]; + struct mlx5_list_cache *c = list->cache[lcore_index]; struct mlx5_list_entry *entry = LIST_FIRST(&c->h); uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0, __ATOMIC_RELAXED); @@ -161,6 +168,17 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) rte_errno = ENOTSUP; return NULL; } + if (unlikely(!list->cache[lcore_index])) { + list->cache[lcore_index] = mlx5_malloc(0, + sizeof(struct mlx5_list_cache), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (!list->cache[lcore_index]) { + rte_errno = ENOMEM; + return NULL; + } + list->cache[lcore_index]->inv_cnt = 0; + LIST_INIT(&list->cache[lcore_index]->h); + } /* 0. Free entries that was invalidated by other lcores. */ __list_cache_clean(list, lcore_index); /* 1. Lookup in local cache. */ @@ -186,7 +204,7 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) entry->ref_cnt = 1u; if (!list->lcores_share) { entry->lcore_idx = (uint32_t)lcore_index; - LIST_INSERT_HEAD(&list->cache[lcore_index].h, entry, next); + LIST_INSERT_HEAD(&list->cache[lcore_index]->h, entry, next); __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); DRV_LOG(DEBUG, "MLX5 list %s c%d entry %p new: %u.", list->name, lcore_index, (void *)entry, entry->ref_cnt); @@ -217,10 +235,10 @@ mlx5_list_register(struct mlx5_list *list, void *ctx) } } /* 5. Update lists. 
*/ - LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE].h, entry, next); + LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE]->h, entry, next); list->gen_cnt++; rte_rwlock_write_unlock(&list->lock); - LIST_INSERT_HEAD(&list->cache[lcore_index].h, local_entry, next); + LIST_INSERT_HEAD(&list->cache[lcore_index]->h, local_entry, next); __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name, (void *)entry, entry->ref_cnt); @@ -245,7 +263,7 @@ mlx5_list_unregister(struct mlx5_list *list, else list->cb_remove(list->ctx, entry); } else if (likely(lcore_idx != -1)) { - __atomic_add_fetch(&list->cache[entry->lcore_idx].inv_cnt, 1, + __atomic_add_fetch(&list->cache[entry->lcore_idx]->inv_cnt, 1, __ATOMIC_RELAXED); } else { return 0; @@ -280,8 +298,10 @@ mlx5_list_uninit(struct mlx5_list *list) MLX5_ASSERT(list); for (i = 0; i <= RTE_MAX_LCORE; i++) { - while (!LIST_EMPTY(&list->cache[i].h)) { - entry = LIST_FIRST(&list->cache[i].h); + if (!list->cache[i]) + continue; + while (!LIST_EMPTY(&list->cache[i]->h)) { + entry = LIST_FIRST(&list->cache[i]->h); LIST_REMOVE(entry, next); if (i == RTE_MAX_LCORE) { list->cb_remove(list->ctx, entry); @@ -292,6 +312,8 @@ mlx5_list_uninit(struct mlx5_list *list) list->cb_clone_free(list->ctx, entry); } } + if (i != RTE_MAX_LCORE) + mlx5_free(list->cache[i]); } } @@ -320,6 +342,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, mlx5_list_clone_free_cb cb_clone_free) { struct mlx5_hlist *h; + struct mlx5_list_cache *gc; uint32_t act_size; uint32_t alloc_size; uint32_t i; @@ -333,7 +356,9 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, act_size = size; } alloc_size = sizeof(struct mlx5_hlist) + - sizeof(struct mlx5_hlist_bucket) * act_size; + sizeof(struct mlx5_hlist_bucket) * act_size; + if (lcores_share) + alloc_size += sizeof(struct mlx5_list_cache) * act_size; /* Using zmalloc, then no need to initialize the heads. */ h = mlx5_malloc(MLX5_MEM_ZERO, alloc_size, RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); @@ -345,8 +370,10 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, h->mask = act_size - 1; h->lcores_share = lcores_share; h->direct_key = direct_key; + gc = (struct mlx5_list_cache *)&h->buckets[act_size]; for (i = 0; i < act_size; i++) { if (mlx5_list_init(&h->buckets[i].l, name, ctx, lcores_share, + lcores_share ? &gc[i] : NULL, cb_create, cb_match, cb_remove, cb_clone, cb_clone_free) != 0) { mlx5_free(h); @@ -358,7 +385,6 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, return h; } - struct mlx5_list_entry * mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx) { diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h index 61b30a45ca..979dfafad4 100644 --- a/drivers/common/mlx5/mlx5_common_utils.h +++ b/drivers/common/mlx5/mlx5_common_utils.h @@ -104,7 +104,7 @@ struct mlx5_list { mlx5_list_remove_cb cb_remove; /**< entry remove callback. */ mlx5_list_clone_cb cb_clone; /**< entry clone callback. */ mlx5_list_clone_free_cb cb_clone_free; - struct mlx5_list_cache cache[RTE_MAX_LCORE + 1]; + struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1]; /* Lcore cache, last index is the global cache. */ volatile uint32_t gen_cnt; /* List modification may update it. */ volatile uint32_t count; /* number of entries in list. 
From patchwork Fri Jul 2 06:18:10 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95169
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:10 +0300
Message-ID: <20210702061816.10454-17-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 16/22] net/mlx5: enable index pool per-core cache
This commit enables the per-core cache for the tag and header-modify action index pools in non-reclaim memory mode.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
drivers/net/mlx5/mlx5.c | 4 +++- drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow_dv.c | 3 ++- 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 8fb7f4442d..bf1463c289 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -214,7 +214,8 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .grow_trunk = 3, .grow_shift = 2, .need_lock = 1, - .release_mem_en = 1, + .release_mem_en = 0, + .per_core_cache = (1 << 16), .malloc = mlx5_malloc, .free = mlx5_free, .type = "mlx5_tag_ipool", @@ -1128,6 +1129,7 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn, } sh->refcnt = 1; sh->max_port = spawn->max_port; + sh->reclaim_mode = config->reclaim_mode; strncpy(sh->ibdev_name, mlx5_os_get_ctx_device_name(sh->ctx), sizeof(sh->ibdev_name) - 1); strncpy(sh->ibdev_path, mlx5_os_get_ctx_device_path(sh->ctx), diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index c7239e1137..01198b6cc7 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1093,6 +1093,7 @@ struct mlx5_dev_ctx_shared { uint32_t qp_ts_format:2; /* QP timestamp formats supported. */ uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */ uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */ + uint32_t reclaim_mode:1; /* Reclaim memory. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ void *ctx; /* Verbs/DV/DevX context. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index e702b78358..f84a2b1a5d 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -5321,7 +5321,8 @@ flow_dv_modify_ipool_get(struct mlx5_dev_ctx_shared *sh, uint8_t index) .grow_trunk = 3, .grow_shift = 2, .need_lock = 1, - .release_mem_en = 1, + .release_mem_en = !!sh->reclaim_mode, + .per_core_cache = sh->reclaim_mode ? 0 : (1 << 16), .malloc = mlx5_malloc, .free = mlx5_free, .type = "mlx5_modify_action_resource",
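For context, the per-core cache being switched on here keeps recently freed pool indexes in a small lcore-local store, so the allocation fast path avoids the shared pool lock. A hedged, standalone sketch of the idea (idx_cache, NCORES and CACHE_SZ are hypothetical names; the real mlx5 indexed pool is considerably more elaborate):

#include <stdint.h>

#define NCORES 16
#define CACHE_SZ 64 /* stand-in for the (1 << 16) used in the patch */

struct idx_cache {
	uint32_t n;             /* number of cached free indexes */
	uint32_t idx[CACHE_SZ]; /* the cached indexes themselves */
};

struct ipool {
	struct idx_cache *cache[NCORES]; /* allocated on first use */
};

/* Release an index into the core-local cache when there is room. */
static void
ipool_free_idx(struct ipool *p, int core, uint32_t idx)
{
	struct idx_cache *c = p->cache[core];

	if (c != NULL && c->n < CACHE_SZ) {
		c->idx[c->n++] = idx; /* no lock: core-local data */
		return;
	}
	/* ...otherwise fall back to the locked global pool... */
}

/* Allocate an index, trying the core-local cache first. */
static int
ipool_alloc_idx(struct ipool *p, int core, uint32_t *idx)
{
	struct idx_cache *c = p->cache[core];

	if (c != NULL && c->n > 0) {
		*idx = c->idx[--c->n];
		return 0;
	}
	return -1; /* ...fall back to the global trunk allocator... */
}

This is also why the commit pairs per_core_cache with release_mem_en = 0: cached indexes keep their trunks alive, which only makes sense when memory is not reclaimed eagerly.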
From patchwork Fri Jul 2 06:18:11 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95170
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:11 +0300
Message-ID: <20210702061816.10454-18-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 17/22] net/mlx5: optimize hash list table allocate on demand
Currently, all the hash list tables are allocated during startup. Since different applications may use only a limited set of actions, allocating the hash list tables on demand saves the initial memory. This commit makes the hash list tables allocated on demand.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
drivers/net/mlx5/linux/mlx5_os.c | 44 +---------------- drivers/net/mlx5/mlx5_defs.h | 6 +++ drivers/net/mlx5/mlx5_flow_dv.c | 79 ++++++++++++++++++++++++++++-- drivers/net/mlx5/windows/mlx5_os.c | 2 - 4 files changed, 82 insertions(+), 49 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index a82dc4db00..75324e35d8 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -50,8 +50,6 @@ #include "mlx5_nl.h" #include "mlx5_devx.h" -#define MLX5_TAGS_HLIST_ARRAY_SIZE (1 << 15) - #ifndef HAVE_IBV_MLX5_MOD_MPW #define MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED (1 << 2) #define MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW (1 << 3) @@ -312,46 +310,6 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) flow_dv_dest_array_clone_free_cb); if (!sh->dest_array_list) goto error; - /* Create tags hash list table. */ - snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); - sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, false, - false, sh, flow_dv_tag_create_cb, - flow_dv_tag_match_cb, - flow_dv_tag_remove_cb, - flow_dv_tag_clone_cb, - flow_dv_tag_clone_free_cb); - if (!sh->tag_table) { - DRV_LOG(ERR, "tags with hash creation failed."); - err = ENOMEM; - goto error; - } - snprintf(s, sizeof(s), "%s_hdr_modify", sh->ibdev_name); - sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, - true, false, sh, - flow_dv_modify_create_cb, - flow_dv_modify_match_cb, - flow_dv_modify_remove_cb, - flow_dv_modify_clone_cb, - flow_dv_modify_clone_free_cb); - if (!sh->modify_cmds) { - DRV_LOG(ERR, "hdr modify hash creation failed"); - err = ENOMEM; - goto error; - } - snprintf(s, sizeof(s), "%s_encaps_decaps", sh->ibdev_name); - sh->encaps_decaps = mlx5_hlist_create(s, - MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, - true, true, sh, - flow_dv_encap_decap_create_cb, - flow_dv_encap_decap_match_cb, - flow_dv_encap_decap_remove_cb, - flow_dv_encap_decap_clone_cb, - flow_dv_encap_decap_clone_free_cb); - if (!sh->encaps_decaps) { - DRV_LOG(ERR, "encap decap hash creation failed"); - err = ENOMEM; - goto error; - } #endif #ifdef HAVE_MLX5DV_DR void *domain; @@ -396,7 +354,7 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv) goto error; } #endif - if (!sh->tunnel_hub) + if (!sh->tunnel_hub && priv->config.dv_miss_info) err = mlx5_alloc_tunnel_hub(sh); if (err) { DRV_LOG(ERR, "mlx5_alloc_tunnel_hub failed err=%d", err); diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index ca67ce8213..fe86bb40d3 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -188,6 +188,12 @@ /* Size of the simple hash table for encap decap table. */ #define MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ (1 << 12) +/* Size of the hash table for tag table. */ +#define MLX5_TAGS_HLIST_ARRAY_SIZE (1 << 15) + +/* Size of the hash table for SFT table. */ +#define MLX5_FLOW_SFT_HLIST_ARRAY_SIZE 4096 + /* Hairpin TX/RX queue configuration parameters.
*/ #define MLX5_HAIRPIN_QUEUE_STRIDE 6 #define MLX5_HAIRPIN_JUMBO_LOG_SIZE (14 + 2) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index f84a2b1a5d..bb70e8557f 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -310,6 +310,41 @@ mlx5_flow_tunnel_ip_check(const struct rte_flow_item *item __rte_unused, } } +static inline struct mlx5_hlist * +flow_dv_hlist_prepare(struct mlx5_dev_ctx_shared *sh, struct mlx5_hlist **phl, + const char *name, uint32_t size, bool direct_key, + bool lcores_share, void *ctx, + mlx5_list_create_cb cb_create, + mlx5_list_match_cb cb_match, + mlx5_list_remove_cb cb_remove, + mlx5_list_clone_cb cb_clone, + mlx5_list_clone_free_cb cb_clone_free) +{ + struct mlx5_hlist *hl; + struct mlx5_hlist *expected = NULL; + char s[MLX5_NAME_SIZE]; + + hl = __atomic_load_n(phl, __ATOMIC_SEQ_CST); + if (likely(hl)) + return hl; + snprintf(s, sizeof(s), "%s_%s", sh->ibdev_name, name); + hl = mlx5_hlist_create(s, size, direct_key, lcores_share, + ctx, cb_create, cb_match, cb_remove, cb_clone, + cb_clone_free); + if (!hl) { + DRV_LOG(ERR, "%s hash creation failed", name); + rte_errno = ENOMEM; + return NULL; + } + if (!__atomic_compare_exchange_n(phl, &expected, hl, false, + __ATOMIC_SEQ_CST, + __ATOMIC_SEQ_CST)) { + mlx5_hlist_destroy(hl); + hl = __atomic_load_n(phl, __ATOMIC_SEQ_CST); + } + return hl; +} + /* Update VLAN's VID/PCP based on input rte_flow_action. * * @param[in] action @@ -3724,8 +3759,20 @@ flow_dv_encap_decap_resource_register .error = error, .data = resource, }; + struct mlx5_hlist *encaps_decaps; uint64_t key64; + encaps_decaps = flow_dv_hlist_prepare(sh, &sh->encaps_decaps, + "encaps_decaps", + MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, + true, true, sh, + flow_dv_encap_decap_create_cb, + flow_dv_encap_decap_match_cb, + flow_dv_encap_decap_remove_cb, + flow_dv_encap_decap_clone_cb, + flow_dv_encap_decap_clone_free_cb); + if (unlikely(!encaps_decaps)) + return -rte_errno; resource->flags = dev_flow->dv.group ? 
0 : 1; key64 = __rte_raw_cksum(&encap_decap_key.v32, sizeof(encap_decap_key.v32), 0); @@ -3733,7 +3780,7 @@ flow_dv_encap_decap_resource_register MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TUNNEL_TO_L2 && resource->size) key64 = __rte_raw_cksum(resource->buf, resource->size, key64); - entry = mlx5_hlist_register(sh->encaps_decaps, key64, &ctx); + entry = mlx5_hlist_register(encaps_decaps, key64, &ctx); if (!entry) return -rte_errno; resource = container_of(entry, typeof(*resource), entry); @@ -5705,8 +5752,20 @@ flow_dv_modify_hdr_resource_register .error = error, .data = resource, }; + struct mlx5_hlist *modify_cmds; uint64_t key64; + modify_cmds = flow_dv_hlist_prepare(sh, &sh->modify_cmds, + "hdr_modify", + MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, + true, false, sh, + flow_dv_modify_create_cb, + flow_dv_modify_match_cb, + flow_dv_modify_remove_cb, + flow_dv_modify_clone_cb, + flow_dv_modify_clone_free_cb); + if (unlikely(!modify_cmds)) + return -rte_errno; resource->root = !dev_flow->dv.group; if (resource->actions_num > flow_dv_modify_hdr_action_max(dev, resource->root)) @@ -5714,7 +5773,7 @@ flow_dv_modify_hdr_resource_register RTE_FLOW_ERROR_TYPE_ACTION, NULL, "too many modify header items"); key64 = __rte_raw_cksum(&resource->ft_type, key_len, 0); - entry = mlx5_hlist_register(sh->modify_cmds, key64, &ctx); + entry = mlx5_hlist_register(modify_cmds, key64, &ctx); if (!entry) return -rte_errno; resource = container_of(entry, typeof(*resource), entry); @@ -10493,8 +10552,20 @@ flow_dv_tag_resource_register .error = error, .data = &tag_be24, }; - - entry = mlx5_hlist_register(priv->sh->tag_table, tag_be24, &ctx); + struct mlx5_hlist *tag_table; + + tag_table = flow_dv_hlist_prepare(priv->sh, &priv->sh->tag_table, + "tags", + MLX5_TAGS_HLIST_ARRAY_SIZE, + false, false, priv->sh, + flow_dv_tag_create_cb, + flow_dv_tag_match_cb, + flow_dv_tag_remove_cb, + flow_dv_tag_clone_cb, + flow_dv_tag_clone_free_cb); + if (unlikely(!tag_table)) + return -rte_errno; + entry = mlx5_hlist_register(tag_table, tag_be24, &ctx); if (entry) { resource = container_of(entry, struct mlx5_flow_dv_tag_resource, entry); diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index af2f684648..0e6f7003b0 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -30,8 +30,6 @@ #include "mlx5_flow.h" #include "mlx5_devx.h" -#define MLX5_TAGS_HLIST_ARRAY_SIZE 8192 - static const char *MZ_MLX5_PMD_SHARED_DATA = "mlx5_pmd_shared_data"; /* Spinlock for mlx5_shared_data allocation. */
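The flow_dv_hlist_prepare() helper added in this patch is a classic lock-free lazy-initialization pattern: every caller may build a table, but a compare-and-swap publishes exactly one, and losers destroy their copy and adopt the winner's. A standalone, hedged sketch of the same pattern (table_new, table_prepare and friends are hypothetical names; GCC/Clang __atomic builtins are assumed):

#include <stdlib.h>

struct table { int data; };

static struct table *table_new(void) { return calloc(1, sizeof(struct table)); }
static void table_free(struct table *t) { free(t); }

static struct table *
table_prepare(struct table **slot)
{
	struct table *t = __atomic_load_n(slot, __ATOMIC_SEQ_CST);
	struct table *expected = NULL;

	if (t != NULL)
		return t; /* fast path: already created */
	t = table_new();
	if (t == NULL)
		return NULL;
	if (!__atomic_compare_exchange_n(slot, &expected, t, 0,
					 __ATOMIC_SEQ_CST,
					 __ATOMIC_SEQ_CST)) {
		/* Another thread won the race: drop ours, take theirs. */
		table_free(t);
		t = __atomic_load_n(slot, __ATOMIC_SEQ_CST);
	}
	return t;
}

The destroy-on-loss step is what makes the scheme safe without a lock: the slot is written at most once, so readers never observe a table being torn down.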
From patchwork Fri Jul 2 06:18:12 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95171
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:12 +0300
Message-ID: <20210702061816.10454-19-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Currently, hash list uses the cache list as bucket list. The list in the buckets have the same name, ctx and callbacks. This wastes the memory. This commit abstracts all the name, ctx and callback members in the list to a constant struct and others to the inconstant struct, uses the wrapper functions to satisfy both hash list and cache list can set the list constant and inconstant struct individually. Signed-off-by: Suanming Mou Acked-by: Matan Azrad --- drivers/common/mlx5/mlx5_common_utils.c | 295 ++++++++++++++---------- drivers/common/mlx5/mlx5_common_utils.h | 45 ++-- 2 files changed, 201 insertions(+), 139 deletions(-) diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c index f75b1cb0da..858c8d8164 100644 --- a/drivers/common/mlx5/mlx5_common_utils.c +++ b/drivers/common/mlx5/mlx5_common_utils.c @@ -14,34 +14,16 @@ /********************* mlx5 list ************************/ static int -mlx5_list_init(struct mlx5_list *list, const char *name, void *ctx, - bool lcores_share, struct mlx5_list_cache *gc, - mlx5_list_create_cb cb_create, - mlx5_list_match_cb cb_match, - mlx5_list_remove_cb cb_remove, - mlx5_list_clone_cb cb_clone, - mlx5_list_clone_free_cb cb_clone_free) +mlx5_list_init(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const, + struct mlx5_list_cache *gc) { - if (!cb_match || !cb_create || !cb_remove || !cb_clone || - !cb_clone_free) { - rte_errno = EINVAL; - return -EINVAL; + rte_rwlock_init(&l_inconst->lock); + if (l_const->lcores_share) { + l_inconst->cache[RTE_MAX_LCORE] = gc; + LIST_INIT(&l_inconst->cache[RTE_MAX_LCORE]->h); } - if (name) - snprintf(list->name, sizeof(list->name), "%s", name); - list->ctx = ctx; - list->lcores_share = lcores_share; - list->cb_create = cb_create; - list->cb_match = cb_match; - list->cb_remove = cb_remove; - list->cb_clone = cb_clone; - list->cb_clone_free = cb_clone_free; - rte_rwlock_init(&list->lock); - if (lcores_share) { - list->cache[RTE_MAX_LCORE] = gc; - LIST_INIT(&list->cache[RTE_MAX_LCORE]->h); - } - DRV_LOG(DEBUG, "mlx5 list %s initialized.", list->name); + DRV_LOG(DEBUG, "mlx5 list %s initialized.", l_const->name); return 0; } @@ -56,16 +38,30 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share, struct mlx5_list *list; struct mlx5_list_cache *gc = NULL; + if (!cb_match || !cb_create || !cb_remove || !cb_clone || + !cb_clone_free) { + rte_errno = EINVAL; + return NULL; + } list = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*list) + (lcores_share ? 
sizeof(*gc) : 0), 0, SOCKET_ID_ANY); + if (!list) return NULL; + if (name) + snprintf(list->l_const.name, + sizeof(list->l_const.name), "%s", name); + list->l_const.ctx = ctx; + list->l_const.lcores_share = lcores_share; + list->l_const.cb_create = cb_create; + list->l_const.cb_match = cb_match; + list->l_const.cb_remove = cb_remove; + list->l_const.cb_clone = cb_clone; + list->l_const.cb_clone_free = cb_clone_free; if (lcores_share) gc = (struct mlx5_list_cache *)(list + 1); - if (mlx5_list_init(list, name, ctx, lcores_share, gc, - cb_create, cb_match, cb_remove, cb_clone, - cb_clone_free) != 0) { + if (mlx5_list_init(&list->l_inconst, &list->l_const, gc) != 0) { mlx5_free(list); return NULL; } @@ -73,19 +69,21 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share, } static struct mlx5_list_entry * -__list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) +__list_lookup(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const, + int lcore_index, void *ctx, bool reuse) { struct mlx5_list_entry *entry = - LIST_FIRST(&list->cache[lcore_index]->h); + LIST_FIRST(&l_inconst->cache[lcore_index]->h); uint32_t ret; while (entry != NULL) { - if (list->cb_match(list->ctx, entry, ctx) == 0) { + if (l_const->cb_match(l_const->ctx, entry, ctx) == 0) { if (reuse) { ret = __atomic_add_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) - 1; DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.", - list->name, (void *)entry, + l_const->name, (void *)entry, entry->ref_cnt); } else if (lcore_index < RTE_MAX_LCORE) { ret = __atomic_load_n(&entry->ref_cnt, @@ -101,41 +99,55 @@ __list_lookup(struct mlx5_list *list, int lcore_index, void *ctx, bool reuse) return NULL; } -struct mlx5_list_entry * -mlx5_list_lookup(struct mlx5_list *list, void *ctx) +static inline struct mlx5_list_entry * +_mlx5_list_lookup(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const, void *ctx) { struct mlx5_list_entry *entry = NULL; int i; - rte_rwlock_read_lock(&list->lock); + rte_rwlock_read_lock(&l_inconst->lock); for (i = 0; i < RTE_MAX_LCORE; i++) { - entry = __list_lookup(list, i, ctx, false); + if (!l_inconst->cache[i]) + continue; + entry = __list_lookup(l_inconst, l_const, i, ctx, false); if (entry) break; } - rte_rwlock_read_unlock(&list->lock); + rte_rwlock_read_unlock(&l_inconst->lock); return entry; } +struct mlx5_list_entry * +mlx5_list_lookup(struct mlx5_list *list, void *ctx) +{ + return _mlx5_list_lookup(&list->l_inconst, &list->l_const, ctx); +} + + static struct mlx5_list_entry * -mlx5_list_cache_insert(struct mlx5_list *list, int lcore_index, +mlx5_list_cache_insert(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const, int lcore_index, struct mlx5_list_entry *gentry, void *ctx) { - struct mlx5_list_entry *lentry = list->cb_clone(list->ctx, gentry, ctx); + struct mlx5_list_entry *lentry = + l_const->cb_clone(l_const->ctx, gentry, ctx); if (unlikely(!lentry)) return NULL; lentry->ref_cnt = 1u; lentry->gentry = gentry; lentry->lcore_idx = (uint32_t)lcore_index; - LIST_INSERT_HEAD(&list->cache[lcore_index]->h, lentry, next); + LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, lentry, next); return lentry; } static void -__list_cache_clean(struct mlx5_list *list, int lcore_index) +__list_cache_clean(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const, + int lcore_index) { - struct mlx5_list_cache *c = list->cache[lcore_index]; + struct mlx5_list_cache *c = l_inconst->cache[lcore_index]; struct mlx5_list_entry *entry = 
LIST_FIRST(&c->h); uint32_t inv_cnt = __atomic_exchange_n(&c->inv_cnt, 0, __ATOMIC_RELAXED); @@ -145,108 +157,123 @@ __list_cache_clean(struct mlx5_list *list, int lcore_index) if (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED) == 0) { LIST_REMOVE(entry, next); - if (list->lcores_share) - list->cb_clone_free(list->ctx, entry); + if (l_const->lcores_share) + l_const->cb_clone_free(l_const->ctx, entry); else - list->cb_remove(list->ctx, entry); + l_const->cb_remove(l_const->ctx, entry); inv_cnt--; } entry = nentry; } } -struct mlx5_list_entry * -mlx5_list_register(struct mlx5_list *list, void *ctx) +static inline struct mlx5_list_entry * +_mlx5_list_register(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const, + void *ctx) { struct mlx5_list_entry *entry, *local_entry; volatile uint32_t prev_gen_cnt = 0; int lcore_index = rte_lcore_index(rte_lcore_id()); - MLX5_ASSERT(list); + MLX5_ASSERT(l_inconst); MLX5_ASSERT(lcore_index < RTE_MAX_LCORE); if (unlikely(lcore_index == -1)) { rte_errno = ENOTSUP; return NULL; } - if (unlikely(!list->cache[lcore_index])) { - list->cache[lcore_index] = mlx5_malloc(0, + if (unlikely(!l_inconst->cache[lcore_index])) { + l_inconst->cache[lcore_index] = mlx5_malloc(0, sizeof(struct mlx5_list_cache), RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); - if (!list->cache[lcore_index]) { + if (!l_inconst->cache[lcore_index]) { rte_errno = ENOMEM; return NULL; } - list->cache[lcore_index]->inv_cnt = 0; - LIST_INIT(&list->cache[lcore_index]->h); + l_inconst->cache[lcore_index]->inv_cnt = 0; + LIST_INIT(&l_inconst->cache[lcore_index]->h); } /* 0. Free entries that was invalidated by other lcores. */ - __list_cache_clean(list, lcore_index); + __list_cache_clean(l_inconst, l_const, lcore_index); /* 1. Lookup in local cache. */ - local_entry = __list_lookup(list, lcore_index, ctx, true); + local_entry = __list_lookup(l_inconst, l_const, lcore_index, ctx, true); if (local_entry) return local_entry; - if (list->lcores_share) { + if (l_const->lcores_share) { /* 2. Lookup with read lock on global list, reuse if found. */ - rte_rwlock_read_lock(&list->lock); - entry = __list_lookup(list, RTE_MAX_LCORE, ctx, true); + rte_rwlock_read_lock(&l_inconst->lock); + entry = __list_lookup(l_inconst, l_const, RTE_MAX_LCORE, + ctx, true); if (likely(entry)) { - rte_rwlock_read_unlock(&list->lock); - return mlx5_list_cache_insert(list, lcore_index, entry, - ctx); + rte_rwlock_read_unlock(&l_inconst->lock); + return mlx5_list_cache_insert(l_inconst, l_const, + lcore_index, + entry, ctx); } - prev_gen_cnt = list->gen_cnt; - rte_rwlock_read_unlock(&list->lock); + prev_gen_cnt = l_inconst->gen_cnt; + rte_rwlock_read_unlock(&l_inconst->lock); } /* 3. Prepare new entry for global list and for cache. 
*/ - entry = list->cb_create(list->ctx, ctx); + entry = l_const->cb_create(l_const->ctx, ctx); if (unlikely(!entry)) return NULL; entry->ref_cnt = 1u; - if (!list->lcores_share) { + if (!l_const->lcores_share) { entry->lcore_idx = (uint32_t)lcore_index; - LIST_INSERT_HEAD(&list->cache[lcore_index]->h, entry, next); - __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); + LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, + entry, next); + __atomic_add_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED); DRV_LOG(DEBUG, "MLX5 list %s c%d entry %p new: %u.", - list->name, lcore_index, (void *)entry, entry->ref_cnt); + l_const->name, lcore_index, + (void *)entry, entry->ref_cnt); return entry; } - local_entry = list->cb_clone(list->ctx, entry, ctx); + local_entry = l_const->cb_clone(l_const->ctx, entry, ctx); if (unlikely(!local_entry)) { - list->cb_remove(list->ctx, entry); + l_const->cb_remove(l_const->ctx, entry); return NULL; } local_entry->ref_cnt = 1u; local_entry->gentry = entry; local_entry->lcore_idx = (uint32_t)lcore_index; - rte_rwlock_write_lock(&list->lock); + rte_rwlock_write_lock(&l_inconst->lock); /* 4. Make sure the same entry was not created before the write lock. */ - if (unlikely(prev_gen_cnt != list->gen_cnt)) { - struct mlx5_list_entry *oentry = __list_lookup(list, + if (unlikely(prev_gen_cnt != l_inconst->gen_cnt)) { + struct mlx5_list_entry *oentry = __list_lookup(l_inconst, + l_const, RTE_MAX_LCORE, ctx, true); if (unlikely(oentry)) { /* 4.5. Found real race!!, reuse the old entry. */ - rte_rwlock_write_unlock(&list->lock); - list->cb_remove(list->ctx, entry); - list->cb_clone_free(list->ctx, local_entry); - return mlx5_list_cache_insert(list, lcore_index, oentry, - ctx); + rte_rwlock_write_unlock(&l_inconst->lock); + l_const->cb_remove(l_const->ctx, entry); + l_const->cb_clone_free(l_const->ctx, local_entry); + return mlx5_list_cache_insert(l_inconst, l_const, + lcore_index, + oentry, ctx); } } /* 5. Update lists. 
*/ - LIST_INSERT_HEAD(&list->cache[RTE_MAX_LCORE]->h, entry, next); - list->gen_cnt++; - rte_rwlock_write_unlock(&list->lock); - LIST_INSERT_HEAD(&list->cache[lcore_index]->h, local_entry, next); - __atomic_add_fetch(&list->count, 1, __ATOMIC_RELAXED); - DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", list->name, + LIST_INSERT_HEAD(&l_inconst->cache[RTE_MAX_LCORE]->h, entry, next); + l_inconst->gen_cnt++; + rte_rwlock_write_unlock(&l_inconst->lock); + LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, local_entry, next); + __atomic_add_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "mlx5 list %s entry %p new: %u.", l_const->name, (void *)entry, entry->ref_cnt); return local_entry; } -int -mlx5_list_unregister(struct mlx5_list *list, +struct mlx5_list_entry * +mlx5_list_register(struct mlx5_list *list, void *ctx) +{ + return _mlx5_list_register(&list->l_inconst, &list->l_const, ctx); +} + +static inline int +_mlx5_list_unregister(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const, struct mlx5_list_entry *entry) { struct mlx5_list_entry *gentry = entry->gentry; @@ -258,69 +285,77 @@ mlx5_list_unregister(struct mlx5_list *list, MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE); if (entry->lcore_idx == (uint32_t)lcore_idx) { LIST_REMOVE(entry, next); - if (list->lcores_share) - list->cb_clone_free(list->ctx, entry); + if (l_const->lcores_share) + l_const->cb_clone_free(l_const->ctx, entry); else - list->cb_remove(list->ctx, entry); + l_const->cb_remove(l_const->ctx, entry); } else if (likely(lcore_idx != -1)) { - __atomic_add_fetch(&list->cache[entry->lcore_idx]->inv_cnt, 1, - __ATOMIC_RELAXED); + __atomic_add_fetch(&l_inconst->cache[entry->lcore_idx]->inv_cnt, + 1, __ATOMIC_RELAXED); } else { return 0; } - if (!list->lcores_share) { - __atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED); + if (!l_const->lcores_share) { + __atomic_sub_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED); DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", - list->name, (void *)entry); + l_const->name, (void *)entry); return 0; } if (__atomic_sub_fetch(&gentry->ref_cnt, 1, __ATOMIC_RELAXED) != 0) return 1; - rte_rwlock_write_lock(&list->lock); + rte_rwlock_write_lock(&l_inconst->lock); if (likely(gentry->ref_cnt == 0)) { LIST_REMOVE(gentry, next); - rte_rwlock_write_unlock(&list->lock); - list->cb_remove(list->ctx, gentry); - __atomic_sub_fetch(&list->count, 1, __ATOMIC_RELAXED); + rte_rwlock_write_unlock(&l_inconst->lock); + l_const->cb_remove(l_const->ctx, gentry); + __atomic_sub_fetch(&l_inconst->count, 1, __ATOMIC_RELAXED); DRV_LOG(DEBUG, "mlx5 list %s entry %p removed.", - list->name, (void *)gentry); + l_const->name, (void *)gentry); return 0; } - rte_rwlock_write_unlock(&list->lock); + rte_rwlock_write_unlock(&l_inconst->lock); return 1; } +int +mlx5_list_unregister(struct mlx5_list *list, + struct mlx5_list_entry *entry) +{ + return _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry); +} + static void -mlx5_list_uninit(struct mlx5_list *list) +mlx5_list_uninit(struct mlx5_list_inconst *l_inconst, + struct mlx5_list_const *l_const) { struct mlx5_list_entry *entry; int i; - MLX5_ASSERT(list); + MLX5_ASSERT(l_inconst); for (i = 0; i <= RTE_MAX_LCORE; i++) { - if (!list->cache[i]) + if (!l_inconst->cache[i]) continue; - while (!LIST_EMPTY(&list->cache[i]->h)) { - entry = LIST_FIRST(&list->cache[i]->h); + while (!LIST_EMPTY(&l_inconst->cache[i]->h)) { + entry = LIST_FIRST(&l_inconst->cache[i]->h); LIST_REMOVE(entry, next); if (i == RTE_MAX_LCORE) { - list->cb_remove(list->ctx, 
entry); + l_const->cb_remove(l_const->ctx, entry); DRV_LOG(DEBUG, "mlx5 list %s entry %p " - "destroyed.", list->name, + "destroyed.", l_const->name, (void *)entry); } else { - list->cb_clone_free(list->ctx, entry); + l_const->cb_clone_free(l_const->ctx, entry); } } if (i != RTE_MAX_LCORE) - mlx5_free(list->cache[i]); + mlx5_free(l_inconst->cache[i]); } } void mlx5_list_destroy(struct mlx5_list *list) { - mlx5_list_uninit(list); + mlx5_list_uninit(&list->l_inconst, &list->l_const); mlx5_free(list); } @@ -328,7 +363,7 @@ uint32_t mlx5_list_get_entry_num(struct mlx5_list *list) { MLX5_ASSERT(list); - return __atomic_load_n(&list->count, __ATOMIC_RELAXED); + return __atomic_load_n(&list->l_inconst.count, __ATOMIC_RELAXED); } /********************* Hash List **********************/ @@ -347,6 +382,11 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, uint32_t alloc_size; uint32_t i; + if (!cb_match || !cb_create || !cb_remove || !cb_clone || + !cb_clone_free) { + rte_errno = EINVAL; + return NULL; + } /* Align to the next power of 2, 32bits integer is enough now. */ if (!rte_is_power_of_2(size)) { act_size = rte_align32pow2(size); @@ -356,7 +396,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, act_size = size; } alloc_size = sizeof(struct mlx5_hlist) + - sizeof(struct mlx5_hlist_bucket) * act_size; + sizeof(struct mlx5_hlist_bucket) * act_size; if (lcores_share) alloc_size += sizeof(struct mlx5_list_cache) * act_size; /* Using zmalloc, then no need to initialize the heads. */ @@ -367,15 +407,21 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, name ? name : "None"); return NULL; } + if (name) + snprintf(h->l_const.name, sizeof(h->l_const.name), "%s", name); + h->l_const.ctx = ctx; + h->l_const.lcores_share = lcores_share; + h->l_const.cb_create = cb_create; + h->l_const.cb_match = cb_match; + h->l_const.cb_remove = cb_remove; + h->l_const.cb_clone = cb_clone; + h->l_const.cb_clone_free = cb_clone_free; h->mask = act_size - 1; - h->lcores_share = lcores_share; h->direct_key = direct_key; gc = (struct mlx5_list_cache *)&h->buckets[act_size]; for (i = 0; i < act_size; i++) { - if (mlx5_list_init(&h->buckets[i].l, name, ctx, lcores_share, - lcores_share ? &gc[i] : NULL, - cb_create, cb_match, cb_remove, cb_clone, - cb_clone_free) != 0) { + if (mlx5_list_init(&h->buckets[i].l, &h->l_const, + lcores_share ? 
&gc[i] : NULL) != 0) { mlx5_free(h); return NULL; } @@ -385,6 +431,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key, return h; } + struct mlx5_list_entry * mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx) { @@ -394,7 +441,7 @@ mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx) idx = (uint32_t)(key & h->mask); else idx = rte_hash_crc_8byte(key, 0) & h->mask; - return mlx5_list_lookup(&h->buckets[idx].l, ctx); + return _mlx5_list_lookup(&h->buckets[idx].l, &h->l_const, ctx); } struct mlx5_list_entry* @@ -407,9 +454,9 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx) idx = (uint32_t)(key & h->mask); else idx = rte_hash_crc_8byte(key, 0) & h->mask; - entry = mlx5_list_register(&h->buckets[idx].l, ctx); + entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx); if (likely(entry)) { - if (h->lcores_share) + if (h->l_const.lcores_share) entry->gentry->bucket_idx = idx; else entry->bucket_idx = idx; @@ -420,10 +467,10 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx) int mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry) { - uint32_t idx = h->lcores_share ? entry->gentry->bucket_idx : + uint32_t idx = h->l_const.lcores_share ? entry->gentry->bucket_idx : entry->bucket_idx; - return mlx5_list_unregister(&h->buckets[idx].l, entry); + return _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry); } void @@ -432,6 +479,6 @@ mlx5_hlist_destroy(struct mlx5_hlist *h) uint32_t i; for (i = 0; i <= h->mask; i++) - mlx5_list_uninit(&h->buckets[i].l); + mlx5_list_uninit(&h->buckets[i].l, &h->l_const); mlx5_free(h); } diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h index 979dfafad4..9e8ebe772a 100644 --- a/drivers/common/mlx5/mlx5_common_utils.h +++ b/drivers/common/mlx5/mlx5_common_utils.h @@ -80,6 +80,32 @@ typedef void (*mlx5_list_clone_free_cb)(void *tool_ctx, typedef struct mlx5_list_entry *(*mlx5_list_create_cb)(void *tool_ctx, void *ctx); +/** + * Linked mlx5 list constant object. + */ +struct mlx5_list_const { + char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */ + void *ctx; /* user objects target to callback. */ + bool lcores_share; /* Whether to share objects between the lcores. */ + mlx5_list_create_cb cb_create; /**< entry create callback. */ + mlx5_list_match_cb cb_match; /**< entry match callback. */ + mlx5_list_remove_cb cb_remove; /**< entry remove callback. */ + mlx5_list_clone_cb cb_clone; /**< entry clone callback. */ + mlx5_list_clone_free_cb cb_clone_free; + /**< entry clone free callback. */ +}; + +/** + * Linked mlx5 list inconstant data. + */ +struct mlx5_list_inconst { + rte_rwlock_t lock; /* read/write lock. */ + volatile uint32_t gen_cnt; /* List modification may update it. */ + volatile uint32_t count; /* number of entries in list. */ + struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1]; + /* Lcore cache, last index is the global cache. */ +}; + /** * Linked mlx5 list structure. * @@ -96,19 +122,8 @@ typedef struct mlx5_list_entry *(*mlx5_list_create_cb)(void *tool_ctx, * */ struct mlx5_list { - char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */ - void *ctx; /* user objects target to callback. */ - bool lcores_share; /* Whether to share objects between the lcores. */ - mlx5_list_create_cb cb_create; /**< entry create callback. */ - mlx5_list_match_cb cb_match; /**< entry match callback. */ - mlx5_list_remove_cb cb_remove; /**< entry remove callback. 
*/ - mlx5_list_clone_cb cb_clone; /**< entry clone callback. */ - mlx5_list_clone_free_cb cb_clone_free; - struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1]; - /* Lcore cache, last index is the global cache. */ - volatile uint32_t gen_cnt; /* List modification may update it. */ - volatile uint32_t count; /* number of entries in list. */ - rte_rwlock_t lock; /* read/write lock. */ + struct mlx5_list_const l_const; + struct mlx5_list_inconst l_inconst; }; /** @@ -214,7 +229,7 @@ mlx5_list_get_entry_num(struct mlx5_list *list); /* Hash list bucket. */ struct mlx5_hlist_bucket { - struct mlx5_list l; + struct mlx5_list_inconst l; } __rte_cache_aligned; /** @@ -226,7 +241,7 @@ struct mlx5_hlist { uint32_t mask; /* A mask for the bucket index range. */ uint8_t flags; bool direct_key; /* Whether to use the key directly as hash index. */ - bool lcores_share; /* Whether to share objects between the lcores. */ + struct mlx5_list_const l_const; /* List constant data. */ struct mlx5_hlist_bucket buckets[] __rte_cache_aligned; };
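The memory saving in this patch comes from hoisting everything the bucket lists share into a single constant descriptor that every bucket references, instead of embedding a copy per bucket. A hedged sketch of that layout (list_const, list_state and hashed_list are hypothetical names, not the driver's types):

#include <stdint.h>

typedef int (*match_cb)(void *ctx, void *entry, void *key);

/* Read-mostly part, shared by all buckets: stored once per table. */
struct list_const {
	char name[32];
	void *ctx;
	match_cb cb_match; /* ...and the other callbacks... */
};

/* Mutable part: one instance per bucket. */
struct list_state {
	uint32_t count;
	void *first;
};

struct hashed_list {
	struct list_const l_const;   /* one copy for all buckets */
	uint32_t mask;
	struct list_state buckets[]; /* flexible array member */
};

With N buckets, the table now stores one list_const plus N small list_state objects, where it previously stored N full lists, each repeating the same name, ctx and callback pointers.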
From patchwork Fri Jul 2 06:18:13 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95172
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:13 +0300
Message-ID: <20210702061816.10454-20-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com> <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 19/22] net/mlx5: change memory release configuration

This commit explicitly sets the index pool memory release configuration
(release_mem_en) to 0 when memory reclaim mode is not required.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index bf1463c289..6b7225e55d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -797,6 +797,8 @@ mlx5_flow_ipool_create(struct mlx5_dev_ctx_shared *sh,
         if (config->reclaim_mode) {
             cfg.release_mem_en = 1;
             cfg.per_core_cache = 0;
+        } else {
+            cfg.release_mem_en = 0;
         }
         sh->ipool[i] = mlx5_ipool_create(&cfg);
     }
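As a hedged sketch of the rule this two-line patch completes (struct and field names are taken from the diff; the helper function itself is illustrative, not driver code): reclaim mode releases trunk memory eagerly and disables per-core caching, while the default now explicitly keeps memory for reuse.

#include <stdbool.h>

/* Hypothetical helper restating the decision from the diff above. */
static void
example_set_release_mode(struct mlx5_indexed_pool_config *cfg, bool reclaim)
{
	if (reclaim) {
		cfg->release_mem_en = 1;	/* free trunks back eagerly */
		cfg->per_core_cache = 0;	/* caching defeats reclaim */
	} else {
		cfg->release_mem_en = 0;	/* keep memory, rely on caches */
	}
}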
From patchwork Fri Jul 2 06:18:14 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95173
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:14 +0300
Message-ID: <20210702061816.10454-21-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 20/22] net/mlx5: support index pool none local core operations

This commit supports index pool operations from threads that are not
EAL lcores ("none local core") by adding an extra cache protected by a
spinlock.
Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_utils.c | 75 +++++++++++++++++++++++++----------
 drivers/net/mlx5/mlx5_utils.h |  3 +-
 2 files changed, 56 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 94abe79860..c34d6d62a8 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -114,6 +114,7 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg)
         mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
     if (!cfg->per_core_cache)
         pool->free_list = TRUNK_INVALID;
+    rte_spinlock_init(&pool->nlcore_lock);
     return pool;
 }

@@ -354,20 +355,14 @@ mlx5_ipool_allocate_from_global(struct mlx5_indexed_pool *pool, int cidx)
 }

 static void *
-mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+_mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, int cidx, uint32_t idx)
 {
     struct mlx5_indexed_trunk *trunk;
     struct mlx5_indexed_cache *lc;
     uint32_t trunk_idx;
     uint32_t entry_idx;
-    int cidx;

     MLX5_ASSERT(idx);
-    cidx = rte_lcore_index(rte_lcore_id());
-    if (unlikely(cidx == -1)) {
-        rte_errno = ENOTSUP;
-        return NULL;
-    }
     lc = mlx5_ipool_update_global_cache(pool, cidx);
     idx -= 1;
     trunk_idx = mlx5_trunk_idx_get(pool, idx);
@@ -378,15 +373,27 @@ mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
 }

 static void *
-mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx)
+mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
 {
+    void *entry;
     int cidx;

     cidx = rte_lcore_index(rte_lcore_id());
     if (unlikely(cidx == -1)) {
-        rte_errno = ENOTSUP;
-        return NULL;
+        cidx = RTE_MAX_LCORE;
+        rte_spinlock_lock(&pool->nlcore_lock);
     }
+    entry = _mlx5_ipool_get_cache(pool, cidx, idx);
+    if (unlikely(cidx == RTE_MAX_LCORE))
+        rte_spinlock_unlock(&pool->nlcore_lock);
+    return entry;
+}
+
+
+static void *
+_mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, int cidx,
+             uint32_t *idx)
+{
     if (unlikely(!pool->cache[cidx])) {
         pool->cache[cidx] = pool->cfg.malloc(MLX5_MEM_ZERO,
             sizeof(struct mlx5_ipool_per_lcore) +
@@ -399,29 +406,40 @@ mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx)
     } else if (pool->cache[cidx]->len) {
         pool->cache[cidx]->len--;
         *idx = pool->cache[cidx]->idx[pool->cache[cidx]->len];
-        return mlx5_ipool_get_cache(pool, *idx);
+        return _mlx5_ipool_get_cache(pool, cidx, *idx);
     }
     /* Not enough idx in global cache. Keep fetching from global. */
     *idx = mlx5_ipool_allocate_from_global(pool, cidx);
     if (unlikely(!(*idx)))
         return NULL;
-    return mlx5_ipool_get_cache(pool, *idx);
+    return _mlx5_ipool_get_cache(pool, cidx, *idx);
 }

-static void
-mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+static void *
+mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx)
 {
+    void *entry;
     int cidx;
+
+    cidx = rte_lcore_index(rte_lcore_id());
+    if (unlikely(cidx == -1)) {
+        cidx = RTE_MAX_LCORE;
+        rte_spinlock_lock(&pool->nlcore_lock);
+    }
+    entry = _mlx5_ipool_malloc_cache(pool, cidx, idx);
+    if (unlikely(cidx == RTE_MAX_LCORE))
+        rte_spinlock_unlock(&pool->nlcore_lock);
+    return entry;
+}
+
+static void
+_mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, int cidx, uint32_t idx)
+{
     struct mlx5_ipool_per_lcore *ilc;
     struct mlx5_indexed_cache *gc, *olc = NULL;
     uint32_t reclaim_num = 0;

     MLX5_ASSERT(idx);
-    cidx = rte_lcore_index(rte_lcore_id());
-    if (unlikely(cidx == -1)) {
-        rte_errno = ENOTSUP;
-        return;
-    }
     /*
      * When index was allocated on core A but freed on core B. In this
      * case check if local cache on core B was allocated before.
@@ -464,6 +482,21 @@ mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
     pool->cache[cidx]->len++;
 }

+static void
+mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx)
+{
+    int cidx;
+
+    cidx = rte_lcore_index(rte_lcore_id());
+    if (unlikely(cidx == -1)) {
+        cidx = RTE_MAX_LCORE;
+        rte_spinlock_lock(&pool->nlcore_lock);
+    }
+    _mlx5_ipool_free_cache(pool, cidx, idx);
+    if (unlikely(cidx == RTE_MAX_LCORE))
+        rte_spinlock_unlock(&pool->nlcore_lock);
+}
+
 void *
 mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx)
 {
@@ -643,7 +676,7 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)
     MLX5_ASSERT(pool);
     mlx5_ipool_lock(pool);
     if (pool->cfg.per_core_cache) {
-        for (i = 0; i < RTE_MAX_LCORE; i++) {
+        for (i = 0; i <= RTE_MAX_LCORE; i++) {
             /*
              * Free only old global cache. Pool gc will be
              * freed at last.
@@ -712,7 +745,7 @@ mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool)
     for (i = 0; i < gc->len; i++)
         rte_bitmap_clear(ibmp, gc->idx[i] - 1);
     /* Clear core cache. */
-    for (i = 0; i < RTE_MAX_LCORE; i++) {
+    for (i = 0; i < RTE_MAX_LCORE + 1; i++) {
         struct mlx5_ipool_per_lcore *ilc = pool->cache[i];

         if (!ilc)
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 7d9b64c877..060c52f022 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -248,6 +248,7 @@ struct mlx5_ipool_per_lcore {
 struct mlx5_indexed_pool {
     struct mlx5_indexed_pool_config cfg; /* Indexed pool configuration. */
     rte_spinlock_t rsz_lock; /* Pool lock for multiple thread usage. */
+    rte_spinlock_t nlcore_lock;
     /* Dim of trunk pointer array. */
     union {
         struct {
@@ -259,7 +260,7 @@ struct mlx5_indexed_pool {
         struct {
             struct mlx5_indexed_cache *gc;
             /* Global cache. */
-            struct mlx5_ipool_per_lcore *cache[RTE_MAX_LCORE];
+            struct mlx5_ipool_per_lcore *cache[RTE_MAX_LCORE + 1];
             /* Local cache. */
             struct rte_bitmap *ibmp;
             void *bmp_mem;
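All three public wrappers above (get, malloc, free) share one pattern, shown here as a minimal generic sketch under assumed simplified types (example_pool and example_op_on_slot are illustrative stand-ins, not driver code): an EAL lcore uses its own cache slot lock-free, while any other thread is mapped to the shared slot at index RTE_MAX_LCORE and serialized by the spinlock. The old rte_errno = ENOTSUP failure path disappears; non-lcore threads now succeed, only more slowly.

#include <stdint.h>
#include <rte_branch_prediction.h>
#include <rte_lcore.h>
#include <rte_spinlock.h>

struct example_pool {
	rte_spinlock_t nlcore_lock;	/* serializes non-lcore callers */
	/* caches at [0 .. RTE_MAX_LCORE]; the last slot is shared */
};

/* Illustrative worker operating on one already-chosen cache slot. */
static void *
example_op_on_slot(struct example_pool *pool, int cidx, uint32_t idx)
{
	(void)pool; (void)cidx; (void)idx;
	return NULL;
}

static void *
example_op(struct example_pool *pool, uint32_t idx)
{
	int cidx = rte_lcore_index(rte_lcore_id());
	void *entry;

	if (unlikely(cidx == -1)) {	/* caller is not an EAL lcore */
		cidx = RTE_MAX_LCORE;	/* fall back to the shared slot */
		rte_spinlock_lock(&pool->nlcore_lock);
	}
	entry = example_op_on_slot(pool, cidx, idx);
	if (unlikely(cidx == RTE_MAX_LCORE))
		rte_spinlock_unlock(&pool->nlcore_lock);
	return entry;
}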
From patchwork Fri Jul 2 06:18:15 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95174
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:15 +0300
Message-ID: <20210702061816.10454-22-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 21/22] net/mlx5: support list none local core operations
This commit supports list operations from threads that are not EAL
lcores ("none local core") by adding an extra sub-list protected by a
spinlock.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_utils.c | 92 +++++++++++++++++--------
 drivers/common/mlx5/mlx5_common_utils.h |  9 ++-
 2 files changed, 71 insertions(+), 30 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_utils.c b/drivers/common/mlx5/mlx5_common_utils.c
index 858c8d8164..d58d0d08ab 100644
--- a/drivers/common/mlx5/mlx5_common_utils.c
+++ b/drivers/common/mlx5/mlx5_common_utils.c
@@ -20,8 +20,8 @@ mlx5_list_init(struct mlx5_list_inconst *l_inconst,
 {
     rte_rwlock_init(&l_inconst->lock);
     if (l_const->lcores_share) {
-        l_inconst->cache[RTE_MAX_LCORE] = gc;
-        LIST_INIT(&l_inconst->cache[RTE_MAX_LCORE]->h);
+        l_inconst->cache[MLX5_LIST_GLOBAL] = gc;
+        LIST_INIT(&l_inconst->cache[MLX5_LIST_GLOBAL]->h);
     }
     DRV_LOG(DEBUG, "mlx5 list %s initialized.", l_const->name);
     return 0;
@@ -59,6 +59,7 @@ mlx5_list_create(const char *name, void *ctx, bool lcores_share,
     list->l_const.cb_remove = cb_remove;
     list->l_const.cb_clone = cb_clone;
     list->l_const.cb_clone_free = cb_clone_free;
+    rte_spinlock_init(&list->l_const.nlcore_lock);
     if (lcores_share)
         gc = (struct mlx5_list_cache *)(list + 1);
     if (mlx5_list_init(&list->l_inconst, &list->l_const, gc) != 0) {
@@ -85,11 +86,11 @@ __list_lookup(struct mlx5_list_inconst *l_inconst,
             DRV_LOG(DEBUG, "mlx5 list %s entry %p ref: %u.",
                 l_const->name, (void *)entry,
                 entry->ref_cnt);
-        } else if (lcore_index < RTE_MAX_LCORE) {
+        } else if (lcore_index < MLX5_LIST_GLOBAL) {
             ret = __atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED);
         }
-        if (likely(ret != 0 || lcore_index == RTE_MAX_LCORE))
+        if (likely(ret != 0 || lcore_index == MLX5_LIST_GLOBAL))
             return entry;
         if (reuse && ret == 0)
             entry->ref_cnt--; /* Invalid entry. */
@@ -107,10 +108,11 @@ _mlx5_list_lookup(struct mlx5_list_inconst *l_inconst,
     int i;

     rte_rwlock_read_lock(&l_inconst->lock);
-    for (i = 0; i < RTE_MAX_LCORE; i++) {
+    for (i = 0; i < MLX5_LIST_GLOBAL; i++) {
         if (!l_inconst->cache[i])
             continue;
-        entry = __list_lookup(l_inconst, l_const, i, ctx, false);
+        entry = __list_lookup(l_inconst, l_const, i,
+                      ctx, false);
         if (entry)
             break;
     }
@@ -170,18 +172,11 @@ __list_cache_clean(struct mlx5_list_inconst *l_inconst,
 static inline struct mlx5_list_entry *
 _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
             struct mlx5_list_const *l_const,
-            void *ctx)
+            void *ctx, int lcore_index)
 {
     struct mlx5_list_entry *entry, *local_entry;
     volatile uint32_t prev_gen_cnt = 0;
-    int lcore_index = rte_lcore_index(rte_lcore_id());

     MLX5_ASSERT(l_inconst);
-    MLX5_ASSERT(lcore_index < RTE_MAX_LCORE);
-    if (unlikely(lcore_index == -1)) {
-        rte_errno = ENOTSUP;
-        return NULL;
-    }
     if (unlikely(!l_inconst->cache[lcore_index])) {
         l_inconst->cache[lcore_index] = mlx5_malloc(0,
                     sizeof(struct mlx5_list_cache),
@@ -202,7 +197,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
     if (l_const->lcores_share) {
         /* 2. Lookup with read lock on global list, reuse if found. */
         rte_rwlock_read_lock(&l_inconst->lock);
-        entry = __list_lookup(l_inconst, l_const, RTE_MAX_LCORE,
+        entry = __list_lookup(l_inconst, l_const, MLX5_LIST_GLOBAL,
                       ctx, true);
         if (likely(entry)) {
             rte_rwlock_read_unlock(&l_inconst->lock);
@@ -241,7 +236,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
     if (unlikely(prev_gen_cnt != l_inconst->gen_cnt)) {
         struct mlx5_list_entry *oentry = __list_lookup(l_inconst,
                                l_const,
-                               RTE_MAX_LCORE,
+                               MLX5_LIST_GLOBAL,
                                ctx, true);

         if (unlikely(oentry)) {
@@ -255,7 +250,7 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
         }
     }
     /* 5. Update lists. */
-    LIST_INSERT_HEAD(&l_inconst->cache[RTE_MAX_LCORE]->h, entry, next);
+    LIST_INSERT_HEAD(&l_inconst->cache[MLX5_LIST_GLOBAL]->h, entry, next);
     l_inconst->gen_cnt++;
     rte_rwlock_write_unlock(&l_inconst->lock);
     LIST_INSERT_HEAD(&l_inconst->cache[lcore_index]->h, local_entry, next);
@@ -268,21 +263,30 @@ _mlx5_list_register(struct mlx5_list_inconst *l_inconst,
 struct mlx5_list_entry *
 mlx5_list_register(struct mlx5_list *list, void *ctx)
 {
-    return _mlx5_list_register(&list->l_inconst, &list->l_const, ctx);
+    struct mlx5_list_entry *entry;
+    int lcore_index = rte_lcore_index(rte_lcore_id());
+
+    if (unlikely(lcore_index == -1)) {
+        lcore_index = MLX5_LIST_NLCORE;
+        rte_spinlock_lock(&list->l_const.nlcore_lock);
+    }
+    entry = _mlx5_list_register(&list->l_inconst, &list->l_const, ctx,
+                    lcore_index);
+    if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+        rte_spinlock_unlock(&list->l_const.nlcore_lock);
+    return entry;
 }

 static inline int
 _mlx5_list_unregister(struct mlx5_list_inconst *l_inconst,
               struct mlx5_list_const *l_const,
-              struct mlx5_list_entry *entry)
+              struct mlx5_list_entry *entry,
+              int lcore_idx)
 {
     struct mlx5_list_entry *gentry = entry->gentry;
-    int lcore_idx;

     if (__atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_RELAXED) != 0)
         return 1;
-    lcore_idx = rte_lcore_index(rte_lcore_id());
-    MLX5_ASSERT(lcore_idx < RTE_MAX_LCORE);
     if (entry->lcore_idx == (uint32_t)lcore_idx) {
         LIST_REMOVE(entry, next);
         if (l_const->lcores_share)
@@ -321,7 +325,19 @@ int
 mlx5_list_unregister(struct mlx5_list *list,
              struct mlx5_list_entry *entry)
 {
-    return _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry);
+    int ret;
+    int lcore_index = rte_lcore_index(rte_lcore_id());
+
+    if (unlikely(lcore_index == -1)) {
+        lcore_index = MLX5_LIST_NLCORE;
+        rte_spinlock_lock(&list->l_const.nlcore_lock);
+    }
+    ret = _mlx5_list_unregister(&list->l_inconst, &list->l_const, entry,
+                    lcore_index);
+    if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+        rte_spinlock_unlock(&list->l_const.nlcore_lock);
+    return ret;
+
 }

 static void
@@ -332,13 +348,13 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
     int i;

     MLX5_ASSERT(l_inconst);
-    for (i = 0; i <= RTE_MAX_LCORE; i++) {
+    for (i = 0; i < MLX5_LIST_MAX; i++) {
         if (!l_inconst->cache[i])
             continue;
         while (!LIST_EMPTY(&l_inconst->cache[i]->h)) {
             entry = LIST_FIRST(&l_inconst->cache[i]->h);
             LIST_REMOVE(entry, next);
-            if (i == RTE_MAX_LCORE) {
+            if (i == MLX5_LIST_GLOBAL) {
                 l_const->cb_remove(l_const->ctx, entry);
                 DRV_LOG(DEBUG, "mlx5 list %s entry %p "
                     "destroyed.", l_const->name,
@@ -347,7 +363,7 @@ mlx5_list_uninit(struct mlx5_list_inconst *l_inconst,
                 l_const->cb_clone_free(l_const->ctx, entry);
             }
         }
-        if (i != RTE_MAX_LCORE)
+        if (i != MLX5_LIST_GLOBAL)
             mlx5_free(l_inconst->cache[i]);
     }
 }
@@ -416,6 +432,7 @@ mlx5_hlist_create(const char *name, uint32_t size, bool direct_key,
     h->l_const.cb_remove = cb_remove;
     h->l_const.cb_clone = cb_clone;
     h->l_const.cb_clone_free = cb_clone_free;
+    rte_spinlock_init(&h->l_const.nlcore_lock);
     h->mask = act_size - 1;
     h->direct_key = direct_key;
     gc = (struct mlx5_list_cache *)&h->buckets[act_size];
@@ -449,28 +466,45 @@ mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx)
 {
     uint32_t idx;
     struct mlx5_list_entry *entry;
+    int lcore_index = rte_lcore_index(rte_lcore_id());

     if (h->direct_key)
         idx = (uint32_t)(key & h->mask);
     else
         idx = rte_hash_crc_8byte(key, 0) & h->mask;
-    entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx);
+    if (unlikely(lcore_index == -1)) {
+        lcore_index = MLX5_LIST_NLCORE;
+        rte_spinlock_lock(&h->l_const.nlcore_lock);
+    }
+    entry = _mlx5_list_register(&h->buckets[idx].l, &h->l_const, ctx,
+                    lcore_index);
     if (likely(entry)) {
         if (h->l_const.lcores_share)
             entry->gentry->bucket_idx = idx;
         else
             entry->bucket_idx = idx;
     }
+    if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+        rte_spinlock_unlock(&h->l_const.nlcore_lock);
     return entry;
 }

 int
 mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_list_entry *entry)
 {
+    int lcore_index = rte_lcore_index(rte_lcore_id());
+    int ret;
     uint32_t idx = h->l_const.lcores_share ? entry->gentry->bucket_idx :
                             entry->bucket_idx;
-
-    return _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry);
+    if (unlikely(lcore_index == -1)) {
+        lcore_index = MLX5_LIST_NLCORE;
+        rte_spinlock_lock(&h->l_const.nlcore_lock);
+    }
+    ret = _mlx5_list_unregister(&h->buckets[idx].l, &h->l_const, entry,
+                    lcore_index);
+    if (unlikely(lcore_index == MLX5_LIST_NLCORE))
+        rte_spinlock_unlock(&h->l_const.nlcore_lock);
+    return ret;
 }

 void
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index 9e8ebe772a..95106afc5e 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -11,6 +11,12 @@
 /** Maximum size of string for naming. */
 #define MLX5_NAME_SIZE 32
+/** Maximum size of list. */
+#define MLX5_LIST_MAX (RTE_MAX_LCORE + 2)
+/** Global list index. */
+#define MLX5_LIST_GLOBAL ((MLX5_LIST_MAX) - 1)
+/** None rte core list index. */
+#define MLX5_LIST_NLCORE ((MLX5_LIST_MAX) - 2)

 struct mlx5_list;

@@ -87,6 +93,7 @@ struct mlx5_list_const {
     char name[MLX5_NAME_SIZE]; /**< Name of the mlx5 list. */
     void *ctx; /* user objects target to callback. */
     bool lcores_share; /* Whether to share objects between the lcores. */
+    rte_spinlock_t nlcore_lock; /* Lock for non-lcore list. */
     mlx5_list_create_cb cb_create; /**< entry create callback. */
     mlx5_list_match_cb cb_match; /**< entry match callback. */
     mlx5_list_remove_cb cb_remove; /**< entry remove callback. */
@@ -102,7 +109,7 @@ struct mlx5_list_inconst {
     rte_rwlock_t lock; /* read/write lock. */
     volatile uint32_t gen_cnt; /* List modification may update it. */
     volatile uint32_t count; /* number of entries in list. */
-    struct mlx5_list_cache *cache[RTE_MAX_LCORE + 1];
+    struct mlx5_list_cache *cache[MLX5_LIST_MAX];
     /* Lcore cache, last index is the global cache. */
 };
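A small sketch of the cache-array layout the new constants define (the constant values are copied from the header hunk above; the EXAMPLE_ names, struct, and scan function are illustrative only):

#include <stddef.h>
#include <rte_lcore.h>

/* Layout from mlx5_common_utils.h after this patch:
 *   cache[0 .. RTE_MAX_LCORE - 1]  per-lcore slots
 *   cache[MLX5_LIST_NLCORE]        shared slot for non-EAL threads
 *   cache[MLX5_LIST_GLOBAL]        global cache (always the last index)
 */
#define EXAMPLE_LIST_MAX	(RTE_MAX_LCORE + 2)
#define EXAMPLE_LIST_GLOBAL	((EXAMPLE_LIST_MAX) - 1)
#define EXAMPLE_LIST_NLCORE	((EXAMPLE_LIST_MAX) - 2)

struct example_list_inconst {
	void *cache[EXAMPLE_LIST_MAX];	/* stand-in for mlx5_list_cache * */
};

/* Mirrors _mlx5_list_lookup() above: scan all local slots, including
 * the non-lcore one, but leave the global cache to the caller. */
static void *
example_scan_local(struct example_list_inconst *l)
{
	int i;

	for (i = 0; i < EXAMPLE_LIST_GLOBAL; i++)
		if (l->cache[i] != NULL)
			return l->cache[i];	/* first populated slot */
	return NULL;
}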
From patchwork Fri Jul 2 06:18:16 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 95175
From: Suanming Mou
Date: Fri, 2 Jul 2021 09:18:16 +0300
Message-ID: <20210702061816.10454-23-suanmingm@nvidia.com>
In-Reply-To: <20210702061816.10454-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 22/22] net/mlx5: optimize Rx queue match
Since the hrxq struct already holds the indirect table pointer, matching
an hrxq can compare that table directly instead of searching for it in
the list. This commit optimizes the hrxq indirect table matching
accordingly.

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5_rxq.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 7893b3edd4..23685d7654 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2094,23 +2094,19 @@ mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
 }

 int
-mlx5_hrxq_match_cb(void *tool_ctx, struct mlx5_list_entry *entry, void *cb_ctx)
+mlx5_hrxq_match_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry,
+           void *cb_ctx)
 {
-    struct rte_eth_dev *dev = tool_ctx;
     struct mlx5_flow_cb_ctx *ctx = cb_ctx;
     struct mlx5_flow_rss_desc *rss_desc = ctx->data;
     struct mlx5_hrxq *hrxq = container_of(entry, typeof(*hrxq), entry);
-    struct mlx5_ind_table_obj *ind_tbl;

-    if (hrxq->rss_key_len != rss_desc->key_len ||
+    return (hrxq->rss_key_len != rss_desc->key_len ||
         memcmp(hrxq->rss_key, rss_desc->key, rss_desc->key_len) ||
-        hrxq->hash_fields != rss_desc->hash_fields)
-        return 1;
-    ind_tbl = mlx5_ind_table_obj_get(dev, rss_desc->queue,
-                     rss_desc->queue_num);
-    if (ind_tbl)
-        mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone);
-    return ind_tbl != hrxq->ind_table;
+        hrxq->hash_fields != rss_desc->hash_fields ||
+        hrxq->ind_table->queues_n != rss_desc->queue_num ||
+        memcmp(hrxq->ind_table->queues, rss_desc->queue,
+           rss_desc->queue_num * sizeof(rss_desc->queue[0])));
 }

 /**
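For clarity, the comparison the rewritten callback performs, restated as a standalone sketch with explanatory comments (a hedged paraphrase of the diff, not additional driver code; the helper name is hypothetical, and non-zero means "no match" per the mlx5_list match-callback contract):

#include <string.h>

/* The old code called mlx5_ind_table_obj_get() and then immediately
 * mlx5_ind_table_obj_release() just to compare table pointers; the new
 * code compares the queue list embedded in hrxq->ind_table directly. */
static int
example_hrxq_mismatch(const struct mlx5_hrxq *hrxq,
		      const struct mlx5_flow_rss_desc *rss_desc)
{
	return hrxq->rss_key_len != rss_desc->key_len ||	/* key length */
	       memcmp(hrxq->rss_key, rss_desc->key,
		      rss_desc->key_len) ||			/* key bytes */
	       hrxq->hash_fields != rss_desc->hash_fields ||	/* hash fields */
	       hrxq->ind_table->queues_n !=
		      rss_desc->queue_num ||			/* queue count */
	       memcmp(hrxq->ind_table->queues, rss_desc->queue,	/* queue list */
		      rss_desc->queue_num * sizeof(rss_desc->queue[0]));
}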