From patchwork Thu May 27 09:34:00 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 93475
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Thu, 27 May 2021 12:34:00 +0300
Message-ID: <20210527093403.1153127-2-suanmingm@nvidia.com>
In-Reply-To: <20210527093403.1153127-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH 1/4] net/mlx5: add index allocate with up limit

The index pool can be used as an ID allocator. In the ID allocator case, it is very useful to support an upper limit on the IDs that can be allocated, since some registers only have a limited number of bits for the ID. This patch adds a configurable maximum index to the index pool.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 14 ++++++++++++--
 drivers/net/mlx5/mlx5_utils.h |  1 +
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 18fe23e4fb..bf2b2ebc72 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -270,6 +270,9 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg) if (i > 0) pool->grow_tbl[i] += pool->grow_tbl[i - 1]; } + if (!pool->cfg.max_idx) + pool->cfg.max_idx = + mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1); return pool; } @@ -282,9 +285,11 @@ mlx5_ipool_grow(struct mlx5_indexed_pool *pool) size_t trunk_size = 0; size_t data_size; size_t bmp_size; - uint32_t idx; + uint32_t idx, cur_max_idx, i; - if (pool->n_trunk_valid == TRUNK_MAX_IDX) + cur_max_idx = mlx5_trunk_idx_offset_get(pool, pool->n_trunk_valid); + if (pool->n_trunk_valid == TRUNK_MAX_IDX || + cur_max_idx >= pool->cfg.max_idx) return -ENOMEM; if (pool->n_trunk_valid == pool->n_trunk) { /* No free trunk flags, expand trunk list. */ @@ -336,6 +341,11 @@ mlx5_ipool_grow(struct mlx5_indexed_pool *pool) trunk->bmp = rte_bitmap_init_with_all_set(data_size, &trunk->data [RTE_CACHE_LINE_ROUNDUP(data_size * pool->cfg.size)], bmp_size); + /* Clear the overhead bits in the trunk if it happens. */ + if (cur_max_idx + data_size > pool->cfg.max_idx) { + for (i = pool->cfg.max_idx - cur_max_idx; i < data_size; i++) + rte_bitmap_clear(trunk->bmp, i); + } MLX5_ASSERT(trunk->bmp); pool->n_trunk_valid++; #ifdef POOL_DEBUG
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index b54517c6df..15870e14c2 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -208,6 +208,7 @@ struct mlx5_indexed_pool_config { uint32_t need_lock:1; /* Lock is needed for multiple thread usage. */ uint32_t release_mem_en:1; /* Rlease trunk when it is free. */ + uint32_t max_idx; /* The maximum index can be allocated. */ const char *type; /* Memory allocate type name.
*/ void *(*malloc)(uint32_t flags, size_t size, unsigned int align, int socket);
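
As a usage sketch (not part of the patch), a pool configured with the new max_idx field behaves as a bounded ID allocator. The snippet only assumes the existing mlx5_ipool_create()/mlx5_ipool_malloc()/mlx5_ipool_free() API; the entry size, trunk size and the 22-bit limit are made-up example values:

	/* Hypothetical ID allocator bounded to a 22-bit register field. */
	struct mlx5_indexed_pool_config cfg = {
		.size = sizeof(uint64_t),	/* example entry payload */
		.trunk_size = 64,
		.need_lock = 1,
		.release_mem_en = 0,
		.max_idx = 1 << 22,		/* new field added by this patch */
		.type = "example_id_ipool",
		.malloc = mlx5_malloc,
		.free = mlx5_free,
	};
	struct mlx5_indexed_pool *pool = mlx5_ipool_create(&cfg);
	uint32_t id = 0;
	void *entry = pool ? mlx5_ipool_malloc(pool, &id) : NULL;

	/* mlx5_ipool_malloc() returns NULL once no index below the configured
	 * max_idx is left, so every ID handed out fits the register field. */
	if (entry != NULL)
		mlx5_ipool_free(pool, id);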
From patchwork Thu May 27 09:34:01 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 93476
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Thu, 27 May 2021 12:34:01 +0300
Message-ID: <20210527093403.1153127-3-suanmingm@nvidia.com>
In-Reply-To: <20210527093403.1153127-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH 2/4] net/mlx5: add indexed pool local cache

For objects that need efficient index allocation and release, a local cache is very helpful. A two-level cache is added: one level is local (per core) and the other is global. The global cache is able to hold all the allocated indexes, which means an allocated index will not be freed back to the trunks. Once the local cache is full, the extra indexes are flushed to the global cache. Once the local cache is empty, indexes are first fetched from the global cache; if the global cache is also empty, a new trunk is allocated to provide more indexes. This commit adds this local ring cache mechanism to the indexed pool.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 253 +++++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_utils.h |  55 +++++++-
 2 files changed, 306 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index bf2b2ebc72..900de2a831 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -273,6 +273,22 @@ mlx5_ipool_create(struct mlx5_indexed_pool_config *cfg) if (!pool->cfg.max_idx) pool->cfg.max_idx = mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1); + if (cfg->per_core_cache) { + char cache_name[64] = { 0 }; + + for (i = 0; i < MLX5_IPOOL_MAX_CORES; i++) { + snprintf(cache_name, RTE_DIM(cache_name), + "Ipool_cache-%p-%u", (void *)pool, i); + pool->l_idx_c[i] = rte_ring_create(cache_name, + cfg->per_core_cache, SOCKET_ID_ANY, + RING_F_SC_DEQ | RING_F_SP_ENQ); + if (!pool->l_idx_c[i]) { + printf("Ipool allocate ring failed:%d\n", i); + mlx5_free(pool); + return NULL; + } + } + } return pool; } @@ -355,6 +371,232 @@ mlx5_ipool_grow(struct mlx5_indexed_pool *pool) return 0; } +static struct mlx5_indexed_trunk * +mlx5_ipool_grow_cache(struct mlx5_indexed_pool *pool) +{ + struct mlx5_indexed_trunk *trunk; + struct mlx5_indexed_cache *p, *pi; + size_t trunk_size = 0; + size_t data_size; + uint32_t cur_max_idx, trunk_idx; + int n_grow; + int cidx = 0; + char cache_name[64] = { 0 }; + + cur_max_idx = mlx5_trunk_idx_offset_get(pool, pool->n_trunk_valid); + if (pool->n_trunk_valid == TRUNK_MAX_IDX || + cur_max_idx >= pool->cfg.max_idx) + return NULL; + cidx = rte_lcore_index(rte_lcore_id()); + if (cidx == -1 || cidx > (MLX5_IPOOL_MAX_CORES - 1)) + cidx = 0; + trunk_idx = __atomic_fetch_add(&pool->n_trunk_valid, 1, + __ATOMIC_ACQUIRE); + /* Have enough space in trunk array. Allocate the trunk directly. */ + if (trunk_idx < __atomic_load_n(&pool->n_trunk, __ATOMIC_ACQUIRE)) + goto allocate_trunk; + mlx5_ipool_lock(pool); + /* Double check if trunks array has been resized. */ + if (trunk_idx < __atomic_load_n(&pool->n_trunk, __ATOMIC_ACQUIRE)) { + mlx5_ipool_unlock(pool); + goto allocate_trunk; + } + n_grow = trunk_idx ? pool->n_trunk : + RTE_CACHE_LINE_SIZE / sizeof(void *); + cur_max_idx = mlx5_trunk_idx_offset_get(pool, pool->n_trunk + n_grow); + /* Resize the trunk array. */ + p = pool->cfg.malloc(MLX5_MEM_ZERO, ((trunk_idx + n_grow) * + sizeof(struct mlx5_indexed_trunk *)) + sizeof(*p), + RTE_CACHE_LINE_SIZE, rte_socket_id()); + if (!p) { + mlx5_ipool_unlock(pool); + return NULL; + } + p->trunks = (struct mlx5_indexed_trunk **)(p + 1); + if (pool->trunks_g) + memcpy(p->trunks, pool->trunks_g->trunks, trunk_idx * + sizeof(struct mlx5_indexed_trunk *)); + memset(RTE_PTR_ADD(p->trunks, trunk_idx * sizeof(void *)), 0, + n_grow * sizeof(void *)); + /* Resize the global index cache ring.
*/ + pi = pool->cfg.malloc(MLX5_MEM_ZERO, sizeof(*pi), 0, rte_socket_id()); + if (!pi) { + mlx5_free(p); + mlx5_ipool_unlock(pool); + return NULL; + } + snprintf(cache_name, RTE_DIM(cache_name), "Idxc-%p-%u", + (void *)pool, trunk_idx); + pi->ring = rte_ring_create(cache_name, rte_align32pow2(cur_max_idx), + SOCKET_ID_ANY, RING_F_SC_DEQ | RING_F_SP_ENQ); + if (!pi->ring) { + mlx5_free(p); + mlx5_free(pi); + mlx5_ipool_unlock(pool); + return NULL; + } + p->ref_cnt = 1; + pool->trunks_g = p; + pi->ref_cnt = 1; + pool->idx_g = pi; + /* Check if trunks array is not used any more. */ + if (pool->trunks_c[cidx] && (!(--pool->trunks_c[cidx]->ref_cnt))) + mlx5_free(pool->trunks_c[cidx]); + /* Check if index cache is not used any more. */ + if (pool->idx_c[cidx] && + (!(--pool->idx_c[cidx]->ref_cnt))) { + rte_ring_free(pool->idx_c[cidx]->ring); + mlx5_free(pool->idx_c[cidx]); + } + pool->trunks_c[cidx] = p; + pool->idx_c[cidx] = pi; + __atomic_fetch_add(&pool->n_trunk, n_grow, __ATOMIC_ACQUIRE); + mlx5_ipool_unlock(pool); +allocate_trunk: + /* Initialize the new trunk. */ + trunk_size = sizeof(*trunk); + data_size = mlx5_trunk_size_get(pool, trunk_idx); + trunk_size += RTE_CACHE_LINE_ROUNDUP(data_size * pool->cfg.size); + trunk = pool->cfg.malloc(0, trunk_size, + RTE_CACHE_LINE_SIZE, rte_socket_id()); + if (!trunk) + return NULL; + pool->trunks_g->trunks[trunk_idx] = trunk; + trunk->idx = trunk_idx; + trunk->free = data_size; +#ifdef POOL_DEBUG + pool->trunk_new++; +#endif + return trunk; +} + +static void * +mlx5_ipool_get_cache(struct mlx5_indexed_pool *pool, uint32_t idx) +{ + struct mlx5_indexed_trunk *trunk; + uint32_t trunk_idx; + uint32_t entry_idx; + int cidx = 0; + + if (!idx) + return NULL; + cidx = rte_lcore_index(rte_lcore_id()); + if (cidx == -1) + cidx = 0; + if (pool->trunks_c[cidx] != pool->trunks_g) { + mlx5_ipool_lock(pool); + /* Rlease the old ring if we are the last thread cache it. */ + if (pool->trunks_c[cidx] && + !(--pool->trunks_c[cidx]->ref_cnt)) + mlx5_free(pool->trunks_c[cidx]); + pool->trunks_c[cidx] = pool->trunks_g; + pool->trunks_c[cidx]->ref_cnt++; + mlx5_ipool_unlock(pool); + } + idx -= 1; + trunk_idx = mlx5_trunk_idx_get(pool, idx); + trunk = pool->trunks_c[cidx]->trunks[trunk_idx]; + if (!trunk) + return NULL; + entry_idx = idx - mlx5_trunk_idx_offset_get(pool, trunk->idx); + return &trunk->data[entry_idx * pool->cfg.size]; +} + +static void * +mlx5_ipool_malloc_cache(struct mlx5_indexed_pool *pool, uint32_t *idx) +{ + struct mlx5_indexed_trunk *trunk; + uint32_t i, ts_idx, fetch_size; + int cidx = 0; + union mlx5_indexed_qd qd; + + cidx = rte_lcore_index(rte_lcore_id()); + if (cidx == -1) + cidx = 0; + /* Try local cache firstly. */ + if (!rte_ring_dequeue(pool->l_idx_c[cidx], &qd.ptr)) { + *idx = qd.idx; + return mlx5_ipool_get_cache(pool, qd.idx); + } + if (pool->idx_g) { + /* + * Try fetch from the global cache. Check if global cache ring + * updated first. + */ + if (pool->idx_c[cidx] != pool->idx_g) { + mlx5_ipool_lock(pool); + /* Rlease the old ring as last user. 
*/ + if (pool->idx_c[cidx] && + !(--pool->idx_c[cidx]->ref_cnt)) { + rte_ring_free(pool->idx_c[cidx]->ring); + pool->cfg.free(pool->idx_c[cidx]); + } + pool->idx_c[cidx] = pool->idx_g; + pool->idx_c[cidx]->ref_cnt++; + mlx5_ipool_unlock(pool); + } + fetch_size = pool->cfg.trunk_size; + while (!rte_ring_dequeue(pool->idx_g->ring, &qd.ptr)) { + if (unlikely(!(--fetch_size))) { + *idx = qd.idx; + return mlx5_ipool_get_cache(pool, qd.idx); + } + rte_ring_enqueue(pool->l_idx_c[cidx], qd.ptr); + } + } + /* Not enough idx in global cache. Keep fetching from new trunk. */ + trunk = mlx5_ipool_grow_cache(pool); + if (!trunk) + return NULL; + /* Get trunk start index. */ + ts_idx = mlx5_trunk_idx_offset_get(pool, trunk->idx); + /* Enqueue trunk_size - 1 to local cache ring. */ + for (i = 0; i < trunk->free - 1; i++) { + qd.idx = ts_idx + i + 1; + rte_ring_enqueue(pool->l_idx_c[cidx], qd.ptr); + } + /* Return trunk's final entry. */ + *idx = ts_idx + i + 1; + return &trunk->data[i * pool->cfg.size]; +} + +static void +mlx5_ipool_free_cache(struct mlx5_indexed_pool *pool, uint32_t idx) +{ + int cidx; + uint32_t i, reclaim_num = 0; + union mlx5_indexed_qd qd; + + if (!idx) + return; + cidx = rte_lcore_index(rte_lcore_id()); + if (cidx == -1) + cidx = 0; + qd.idx = idx; + /* Try to enqueue to local index cache. */ + if (!rte_ring_enqueue(pool->l_idx_c[cidx], qd.ptr)) + return; + /* Update the global index cache ring if needed. */ + if (pool->idx_c[cidx] != pool->idx_g) { + mlx5_ipool_lock(pool); + /* Rlease the old ring if we are the last thread cache it. */ + if (!(--pool->idx_c[cidx]->ref_cnt)) + rte_ring_free(pool->idx_c[cidx]->ring); + pool->idx_c[cidx] = pool->idx_g; + pool->idx_c[cidx]->ref_cnt++; + mlx5_ipool_unlock(pool); + } + reclaim_num = pool->cfg.per_core_cache >> 4; + /* Local index cache full, try with global index cache. */ + rte_ring_enqueue(pool->idx_c[cidx]->ring, qd.ptr); + /* Dequeue the index from local cache to global. */ + for (i = 0; i < reclaim_num; i++) { + /* No need to check the return value. */ + rte_ring_dequeue(pool->l_idx_c[cidx], &qd.ptr); + rte_ring_enqueue(pool->idx_c[cidx]->ring, qd.ptr); + } +} + void * mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx) { @@ -363,6 +605,8 @@ mlx5_ipool_malloc(struct mlx5_indexed_pool *pool, uint32_t *idx) uint32_t iidx = 0; void *p; + if (pool->cfg.per_core_cache) + return mlx5_ipool_malloc_cache(pool, idx); mlx5_ipool_lock(pool); if (pool->free_list == TRUNK_INVALID) { /* If no available trunks, grow new. 
*/ @@ -432,6 +676,8 @@ mlx5_ipool_free(struct mlx5_indexed_pool *pool, uint32_t idx) if (!idx) return; + if (pool->cfg.per_core_cache) + return mlx5_ipool_free_cache(pool, idx); idx -= 1; mlx5_ipool_lock(pool); trunk_idx = mlx5_trunk_idx_get(pool, idx); @@ -497,6 +743,8 @@ mlx5_ipool_get(struct mlx5_indexed_pool *pool, uint32_t idx) if (!idx) return NULL; + if (pool->cfg.per_core_cache) + return mlx5_ipool_get_cache(pool, idx); idx -= 1; mlx5_ipool_lock(pool); trunk_idx = mlx5_trunk_idx_get(pool, idx); @@ -524,7 +772,10 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool) MLX5_ASSERT(pool); mlx5_ipool_lock(pool); - trunks = pool->trunks; + if (pool->cfg.per_core_cache) + trunks = pool->trunks_g->trunks; + else + trunks = pool->trunks; for (i = 0; i < pool->n_trunk; i++) { if (trunks[i]) pool->cfg.free(trunks[i]); diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 15870e14c2..4fe82d4a5f 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -181,10 +181,17 @@ typedef int32_t (*mlx5_l3t_alloc_callback_fn)(void *ctx, #define TRUNK_MAX_IDX ((1 << TRUNK_IDX_BITS) - 1) #define TRUNK_INVALID TRUNK_MAX_IDX #define MLX5_IPOOL_DEFAULT_TRUNK_SIZE (1 << (28 - TRUNK_IDX_BITS)) +#define MLX5_IPOOL_MAX_CORES (1 << 4) #ifdef RTE_LIBRTE_MLX5_DEBUG #define POOL_DEBUG 1 #endif +union mlx5_indexed_qd { + RTE_STD_C11 + void *ptr; + uint32_t idx; +}; + struct mlx5_indexed_pool_config { uint32_t size; /* Pool entry size. */ uint32_t trunk_size:22; @@ -209,6 +216,11 @@ struct mlx5_indexed_pool_config { /* Lock is needed for multiple thread usage. */ uint32_t release_mem_en:1; /* Rlease trunk when it is free. */ uint32_t max_idx; /* The maximum index can be allocated. */ + uint32_t per_core_cache; + /* + * Cache entry number per core for performance. Should not be + * set with release_mem_en. + */ const char *type; /* Memory allocate type name. */ void *(*malloc)(uint32_t flags, size_t size, unsigned int align, int socket); @@ -217,6 +229,7 @@ struct mlx5_indexed_pool_config { }; struct mlx5_indexed_trunk { + rte_spinlock_t lock; /* Trunk lock for multiple thread usage. */ uint32_t idx; /* Trunk id. */ uint32_t prev; /* Previous free trunk in free list. */ uint32_t next; /* Next free trunk in free list. */ @@ -225,13 +238,29 @@ struct mlx5_indexed_trunk { uint8_t data[] __rte_cache_aligned; /* Entry data start. */ }; +struct mlx5_indexed_cache { + union { + struct rte_ring *ring; + struct mlx5_indexed_trunk **trunks; + }; + uint32_t ref_cnt; + uint32_t res; +}; + struct mlx5_indexed_pool { struct mlx5_indexed_pool_config cfg; /* Indexed pool configuration. */ rte_spinlock_t lock; /* Pool lock for multiple thread usage. */ uint32_t n_trunk_valid; /* Trunks allocated. */ uint32_t n_trunk; /* Trunk pointer array size. */ /* Dim of trunk pointer array. */ - struct mlx5_indexed_trunk **trunks; + union { + struct mlx5_indexed_trunk **trunks; + struct mlx5_indexed_cache *trunks_g; + }; + struct mlx5_indexed_cache *trunks_c[MLX5_IPOOL_MAX_CORES]; + struct mlx5_indexed_cache *idx_g; + struct mlx5_indexed_cache *idx_c[MLX5_IPOOL_MAX_CORES]; + struct rte_ring *l_idx_c[MLX5_IPOOL_MAX_CORES]; uint32_t free_list; /* Index to first free trunk. */ #ifdef POOL_DEBUG uint32_t n_entry; @@ -542,6 +571,30 @@ int mlx5_ipool_destroy(struct mlx5_indexed_pool *pool); */ void mlx5_ipool_dump(struct mlx5_indexed_pool *pool); +/** + * This function flushes all the cache index back to pool trunk. + * + * @param pool + * Pointer to the index memory pool handler. 
+ * + */ + +void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool); + +/** + * This function gets the available entry from pos. + * + * @param pool + * Pointer to the index memory pool handler. + * @param pos + * Pointer to the index position start from. + * + * @return + * - Pointer to the next available entry. + * + */ +void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos); + /** * This function allocates new empty Three-level table. *
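
For illustration, a minimal sketch of how a caller could enable the new two-level cache (a hypothetical configuration: the entry payload and ring sizes are made-up values, and per the description above per_core_cache is not meant to be combined with release_mem_en):

	struct mlx5_indexed_pool_config cfg = {
		.size = sizeof(uint64_t),	/* example entry payload */
		.trunk_size = 4096,
		.need_lock = 1,
		.release_mem_en = 0,		/* not compatible with caching */
		.per_core_cache = 1 << 23,	/* local ring capacity */
		.type = "example_cached_ipool",
		.malloc = mlx5_malloc,
		.free = mlx5_free,
	};
	struct mlx5_indexed_pool *pool = mlx5_ipool_create(&cfg);
	uint32_t idx = 0;

	/* With per_core_cache set, mlx5_ipool_malloc() first dequeues from
	 * the calling lcore's local ring, then refills the local ring from
	 * the global cache ring, and only grows a new trunk when both are
	 * empty. mlx5_ipool_free() enqueues the index back to the local ring
	 * and spills a batch to the global ring when the local ring is full. */
	void *entry = pool ? mlx5_ipool_malloc(pool, &idx) : NULL;

	if (entry != NULL)
		mlx5_ipool_free(pool, idx);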
From patchwork Thu May 27 09:34:02 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 93477
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Thu, 27 May 2021 12:34:02 +0300
Message-ID: <20210527093403.1153127-4-suanmingm@nvidia.com>
In-Reply-To: <20210527093403.1153127-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH 3/4] net/mlx5: add index pool cache flush

This commit adds a cache flush function for the per-core-cache indexed pool.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_utils.c | 70 +++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h | 13 +++++++
 2 files changed, 83 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 900de2a831..ee6c2e293e 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -452,6 +452,21 @@ mlx5_ipool_grow_cache(struct mlx5_indexed_pool *pool) pool->idx_c[cidx] = pi; __atomic_fetch_add(&pool->n_trunk, n_grow, __ATOMIC_ACQUIRE); mlx5_ipool_unlock(pool); + /* Pre-allocate the bitmap. */ + if (pool->ibmp) + pool->cfg.free(pool->ibmp); + data_size = sizeof(*pool->ibmp); + data_size += rte_bitmap_get_memory_footprint(cur_max_idx); + /* rte_bitmap requires memory cacheline aligned. */ + pool->ibmp = pool->cfg.malloc(MLX5_MEM_ZERO, data_size, + RTE_CACHE_LINE_SIZE, rte_socket_id()); + if (!pool->ibmp) + return NULL; + pool->ibmp->num = cur_max_idx; + pool->ibmp->mem_size = data_size - sizeof(*pool->ibmp); + pool->ibmp->mem = (void *)(pool->ibmp + 1); + pool->ibmp->bmp = rte_bitmap_init(pool->ibmp->num, + pool->ibmp->mem, pool->ibmp->mem_size); allocate_trunk: /* Initialize the new trunk. */ trunk_size = sizeof(*trunk); @@ -787,6 +802,61 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool) return 0; } +void +mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool) +{ + uint32_t i; + struct rte_ring *ring_c; + char cache_name[64]; + union mlx5_indexed_qd qd; + uint32_t bmp_num, mem_size; + uint32_t num = 0; + + /* Create a new ring to save all cached index. */ + snprintf(cache_name, RTE_DIM(cache_name), "Ip_%p", + (void *)pool->idx_g->ring); + ring_c = rte_ring_create(cache_name, pool->ibmp->num, + SOCKET_ID_ANY, RING_F_SC_DEQ | RING_F_SP_ENQ); + if (!ring_c) + return; + /* Reset bmp. */ + bmp_num = mlx5_trunk_idx_offset_get(pool, pool->n_trunk_valid); + mem_size = rte_bitmap_get_memory_footprint(bmp_num); + pool->ibmp->bmp = rte_bitmap_init_with_all_set(bmp_num, + pool->ibmp->mem, mem_size); + /* Flush core cache. */ + for (i = 0; i < MLX5_IPOOL_MAX_CORES; i++) { + while (!rte_ring_dequeue(pool->l_idx_c[i], &qd.ptr)) { + rte_bitmap_clear(pool->ibmp->bmp, (qd.idx - 1)); + rte_ring_enqueue(ring_c, qd.ptr); + num++; + } + } + /* Flush global cache.
*/ + while (!rte_ring_dequeue(pool->idx_g->ring, &qd.ptr)) { + rte_bitmap_clear(pool->ibmp->bmp, (qd.idx - 1)); + rte_ring_enqueue(ring_c, qd.ptr); + num++; + } + rte_ring_free(pool->idx_g->ring); + pool->idx_g->ring = ring_c; +} + +void * +mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos) +{ + uint64_t slab = 0; + uint32_t iidx = *pos; + + if (!pool->ibmp || !rte_bitmap_scan(pool->ibmp->bmp, &iidx, &slab)) + return NULL; + iidx += __builtin_ctzll(slab); + rte_bitmap_clear(pool->ibmp->bmp, iidx); + iidx++; + *pos = iidx; + return mlx5_ipool_get(pool, iidx); +} + void mlx5_ipool_dump(struct mlx5_indexed_pool *pool) { diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index 4fe82d4a5f..03d5164485 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -247,6 +247,13 @@ struct mlx5_indexed_cache { uint32_t res; }; +struct mlx5_indexed_bmp { + struct rte_bitmap *bmp; + void *mem; + uint32_t mem_size; + uint32_t num; +} __rte_cache_aligned; + struct mlx5_indexed_pool { struct mlx5_indexed_pool_config cfg; /* Indexed pool configuration. */ rte_spinlock_t lock; /* Pool lock for multiple thread usage. */ @@ -261,6 +268,7 @@ struct mlx5_indexed_pool { struct mlx5_indexed_cache *idx_g; struct mlx5_indexed_cache *idx_c[MLX5_IPOOL_MAX_CORES]; struct rte_ring *l_idx_c[MLX5_IPOOL_MAX_CORES]; + struct mlx5_indexed_bmp *ibmp; uint32_t free_list; /* Index to first free trunk. */ #ifdef POOL_DEBUG uint32_t n_entry; @@ -861,4 +869,9 @@ struct { \ (entry); \ idx++, (entry) = mlx5_l3t_get_next((tbl), &idx)) +#define MLX5_IPOOL_FOREACH(ipool, idx, entry) \ + for ((idx) = 1, mlx5_ipool_flush_cache((ipool)), \ + entry = mlx5_ipool_get_next((ipool), &idx); \ + (entry); idx++, entry = mlx5_ipool_get_next((ipool), &idx)) + #endif /* RTE_PMD_MLX5_UTILS_H_ */
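
For illustration, a sketch of walking every allocated entry with the new helpers; pool is assumed to be an indexed pool created with per_core_cache enabled, and the printf() is only a placeholder consumer:

	uint32_t idx = 1;
	void *entry;

	/* MLX5_IPOOL_FOREACH() first calls mlx5_ipool_flush_cache(), which
	 * collects all cached (free) indexes into a fresh global ring and
	 * clears their bits in the bitmap, then walks the remaining set
	 * bits - the allocated entries - with mlx5_ipool_get_next(). Each
	 * visited bit is cleared, so the walk is effectively one-shot. */
	MLX5_IPOOL_FOREACH(pool, idx, entry)
		printf("index %u -> entry %p\n", idx, entry);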
From patchwork Thu May 27 09:34:03 2021
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 93478
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
Date: Thu, 27 May 2021 12:34:03 +0300
Message-ID: <20210527093403.1153127-5-suanmingm@nvidia.com>
In-Reply-To: <20210527093403.1153127-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH 4/4] net/mlx5: replace flow list with index pool

The flow list is used to save the created flows and is needed only when the port closes and all the flows must be flushed. This commit adds a new function to get the allocated entries from the index pool, so the flows can be flushed from the index pool directly.

Signed-off-by: Suanming Mou
---
 drivers/net/mlx5/linux/mlx5_os.c |  11 +++
 drivers/net/mlx5/mlx5.c          |   3 +-
 drivers/net/mlx5/mlx5.h          |   4 +-
 drivers/net/mlx5/mlx5_flow.c     | 111 +++++++++++--------------------
 drivers/net/mlx5/mlx5_flow.h     |   1 +
 drivers/net/mlx5/mlx5_flow_dv.c  |   5 ++
 drivers/net/mlx5/mlx5_trigger.c  |   8 +--
 7 files changed, 61 insertions(+), 82 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 534a56a555..c56be18d24 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1608,6 +1608,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, mlx5_set_min_inline(spawn, config); /* Store device configuration on private structure.
*/ priv->config = *config; + struct mlx5_indexed_pool_config icfg = { + .size = sizeof(struct rte_flow), + .trunk_size = 4096, + .need_lock = 1, + .release_mem_en = 0, + .malloc = mlx5_malloc, + .free = mlx5_free, + .per_core_cache = 1 << 23, + .type = "rte_flow_ipool", + }; + priv->flows = mlx5_ipool_create(&icfg); /* Create context for virtual machine VLAN workaround. */ priv->vmwa_context = mlx5_vlan_vmwa_init(eth_dev, spawn->ifindex); if (config->dv_flow_en) { diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index cf1815cb74..b0f9f1a45e 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -323,6 +323,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .grow_shift = 2, .need_lock = 1, .release_mem_en = 1, + .per_core_cache = 1 << 23, .malloc = mlx5_malloc, .free = mlx5_free, .type = "mlx5_flow_handle_ipool", @@ -1528,7 +1529,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) * If all the flows are already flushed in the device stop stage, * then this will return directly without any action. */ - mlx5_flow_list_flush(dev, &priv->flows, true); + mlx5_flow_list_flush(dev, false, true); mlx5_action_handle_flush(dev); mlx5_flow_meter_flush(dev, NULL); /* Prevent crashes when queues are still in use. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 32b2817bf2..f6b72cd352 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1344,7 +1344,7 @@ struct mlx5_priv { unsigned int (*reta_idx)[]; /* RETA index table. */ unsigned int reta_idx_n; /* RETA index size. */ struct mlx5_drop drop_queue; /* Flow drop queues. */ - uint32_t flows; /* RTE Flow rules. */ + struct mlx5_indexed_pool *flows; /* RTE Flow rules. */ uint32_t ctrl_flows; /* Control flow rules. */ rte_spinlock_t flow_list_lock; struct mlx5_obj_ops obj_ops; /* HW objects operations. */ @@ -1596,7 +1596,7 @@ struct rte_flow *mlx5_flow_create(struct rte_eth_dev *dev, struct rte_flow_error *error); int mlx5_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow, struct rte_flow_error *error); -void mlx5_flow_list_flush(struct rte_eth_dev *dev, uint32_t *list, bool active); +void mlx5_flow_list_flush(struct rte_eth_dev *dev, bool control, bool active); int mlx5_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error); int mlx5_flow_query(struct rte_eth_dev *dev, struct rte_flow *flow, const struct rte_flow_action *action, void *data, diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index dbeca571b6..8f9eea2c00 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3068,31 +3068,6 @@ mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item, MLX5_ITEM_RANGE_NOT_ACCEPTED, error); } -/** - * Release resource related QUEUE/RSS action split. - * - * @param dev - * Pointer to Ethernet device. - * @param flow - * Flow to release id's from. 
- */ -static void -flow_mreg_split_qrss_release(struct rte_eth_dev *dev, - struct rte_flow *flow) -{ - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t handle_idx; - struct mlx5_flow_handle *dev_handle; - - SILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], flow->dev_handles, - handle_idx, dev_handle, next) - if (dev_handle->split_flow_id && - !dev_handle->is_meter_flow_id) - mlx5_ipool_free(priv->sh->ipool - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], - dev_handle->split_flow_id); -} - static int flow_null_validate(struct rte_eth_dev *dev __rte_unused, const struct rte_flow_attr *attr __rte_unused, @@ -3388,7 +3363,6 @@ flow_drv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) const struct mlx5_flow_driver_ops *fops; enum mlx5_flow_drv_type type = flow->drv_type; - flow_mreg_split_qrss_release(dev, flow); MLX5_ASSERT(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX); fops = flow_get_drv_ops(type); fops->destroy(dev, flow); @@ -3971,7 +3945,7 @@ flow_check_hairpin_split(struct rte_eth_dev *dev, /* Declare flow create/destroy prototype in advance. */ static uint32_t -flow_list_create(struct rte_eth_dev *dev, uint32_t *list, +flow_list_create(struct rte_eth_dev *dev, bool control, const struct rte_flow_attr *attr, const struct rte_flow_item items[], const struct rte_flow_action actions[], @@ -4100,7 +4074,7 @@ flow_dv_mreg_create_cb(struct mlx5_hlist *list, uint64_t key, * be applied, removed, deleted in ardbitrary order * by list traversing. */ - mcp_res->rix_flow = flow_list_create(dev, NULL, &attr, items, + mcp_res->rix_flow = flow_list_create(dev, false, &attr, items, actions, false, error); if (!mcp_res->rix_flow) { mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MCP], idx); @@ -6062,7 +6036,7 @@ flow_rss_workspace_adjust(struct mlx5_flow_workspace *wks, * A flow index on success, 0 otherwise and rte_errno is set. */ static uint32_t -flow_list_create(struct rte_eth_dev *dev, uint32_t *list, +flow_list_create(struct rte_eth_dev *dev, bool control, const struct rte_flow_attr *attr, const struct rte_flow_item items[], const struct rte_flow_action original_actions[], @@ -6130,7 +6104,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, external, hairpin_flow, error); if (ret < 0) goto error_before_hairpin_split; - flow = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], &idx); + flow = mlx5_ipool_zmalloc(priv->flows, &idx); if (!flow) { rte_errno = ENOMEM; goto error_before_hairpin_split; @@ -6256,12 +6230,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, if (ret < 0) goto error; } - if (list) { - rte_spinlock_lock(&priv->flow_list_lock); - ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx, - flow, next); - rte_spinlock_unlock(&priv->flow_list_lock); - } + flow->control = control; flow_rxq_flags_set(dev, flow); rte_free(translated_actions); tunnel = flow_tunnel_from_rule(wks->flows); @@ -6283,7 +6252,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, mlx5_ipool_get (priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS], rss_desc->shared_rss))->refcnt, 1, __ATOMIC_RELAXED); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], idx); + mlx5_ipool_free(priv->flows, idx); rte_errno = ret; /* Restore rte_errno. 
*/ ret = rte_errno; rte_errno = ret; @@ -6335,10 +6304,9 @@ mlx5_flow_create_esw_table_zero_flow(struct rte_eth_dev *dev) .type = RTE_FLOW_ACTION_TYPE_END, }, }; - struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_error error; - return (void *)(uintptr_t)flow_list_create(dev, &priv->ctrl_flows, + return (void *)(uintptr_t)flow_list_create(dev, true, &attr, &pattern, actions, false, &error); } @@ -6390,8 +6358,6 @@ mlx5_flow_create(struct rte_eth_dev *dev, const struct rte_flow_action actions[], struct rte_flow_error *error) { - struct mlx5_priv *priv = dev->data->dev_private; - /* * If the device is not started yet, it is not allowed to created a * flow from application. PMD default flows and traffic control flows @@ -6407,7 +6373,7 @@ mlx5_flow_create(struct rte_eth_dev *dev, return NULL; } - return (void *)(uintptr_t)flow_list_create(dev, &priv->flows, + return (void *)(uintptr_t)flow_list_create(dev, false, attr, items, actions, true, error); } @@ -6429,8 +6395,7 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, uint32_t flow_idx) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = mlx5_ipool_get(priv->sh->ipool - [MLX5_IPOOL_RTE_FLOW], flow_idx); + struct rte_flow *flow = mlx5_ipool_get(priv->flows, flow_idx); if (!flow) return; @@ -6443,7 +6408,7 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, flow_drv_destroy(dev, flow); if (list) { rte_spinlock_lock(&priv->flow_list_lock); - ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, + ILIST_REMOVE(priv->flows, list, flow_idx, flow, next); rte_spinlock_unlock(&priv->flow_list_lock); } @@ -6456,7 +6421,7 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, mlx5_flow_tunnel_free(dev, tunnel); } flow_mreg_del_copy_action(dev, flow); - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], flow_idx); + mlx5_ipool_free(priv->flows, flow_idx); } /** @@ -6464,19 +6429,23 @@ flow_list_destroy(struct rte_eth_dev *dev, uint32_t *list, * * @param dev * Pointer to Ethernet device. - * @param list - * Pointer to the Indexed flow list. + * @param control + * Flow type to be flushed. * @param active * If flushing is called avtively. */ void -mlx5_flow_list_flush(struct rte_eth_dev *dev, uint32_t *list, bool active) +mlx5_flow_list_flush(struct rte_eth_dev *dev, bool control, bool active) { - uint32_t num_flushed = 0; + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t num_flushed = 0, fidx = 1; + struct rte_flow *flow; - while (*list) { - flow_list_destroy(dev, list, *list); - num_flushed++; + MLX5_IPOOL_FOREACH(priv->flows, fidx, flow) { + if (flow->control == control) { + flow_list_destroy(dev, NULL, fidx); + num_flushed++; + } } if (active) { DRV_LOG(INFO, "port %u: %u flows flushed before stopping", @@ -6647,18 +6616,20 @@ mlx5_flow_pop_thread_workspace(void) * @return the number of flows not released. 
*/ int -mlx5_flow_verify(struct rte_eth_dev *dev) +mlx5_flow_verify(struct rte_eth_dev *dev __rte_unused) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow *flow; - uint32_t idx; + uint32_t idx = 0; int ret = 0; - ILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], priv->flows, idx, - flow, next) { + flow = mlx5_ipool_get_next(priv->flows, &idx); + while (flow) { DRV_LOG(DEBUG, "port %u flow %p still referenced", dev->data->port_id, (void *)flow); - ++ret; + idx++; + ret++; + flow = mlx5_ipool_get_next(priv->flows, &idx); } return ret; } @@ -6678,7 +6649,6 @@ int mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, uint32_t queue) { - struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_attr attr = { .egress = 1, .priority = 0, @@ -6711,7 +6681,7 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, actions[0].type = RTE_FLOW_ACTION_TYPE_JUMP; actions[0].conf = &jump; actions[1].type = RTE_FLOW_ACTION_TYPE_END; - flow_idx = flow_list_create(dev, &priv->ctrl_flows, + flow_idx = flow_list_create(dev, true, &attr, items, actions, false, &error); if (!flow_idx) { DRV_LOG(DEBUG, @@ -6801,7 +6771,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev, action_rss.types = 0; for (i = 0; i != priv->reta_idx_n; ++i) queue[i] = (*priv->reta_idx)[i]; - flow_idx = flow_list_create(dev, &priv->ctrl_flows, + flow_idx = flow_list_create(dev, true, &attr, items, actions, false, &error); if (!flow_idx) return -rte_errno; @@ -6843,7 +6813,6 @@ mlx5_ctrl_flow(struct rte_eth_dev *dev, int mlx5_flow_lacp_miss(struct rte_eth_dev *dev) { - struct mlx5_priv *priv = dev->data->dev_private; /* * The LACP matching is done by only using ether type since using * a multicast dst mac causes kernel to give low priority to this flow. @@ -6877,7 +6846,7 @@ mlx5_flow_lacp_miss(struct rte_eth_dev *dev) }, }; struct rte_flow_error error; - uint32_t flow_idx = flow_list_create(dev, &priv->ctrl_flows, + uint32_t flow_idx = flow_list_create(dev, true, &attr, items, actions, false, &error); if (!flow_idx) @@ -6896,9 +6865,7 @@ mlx5_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow, struct rte_flow_error *error __rte_unused) { - struct mlx5_priv *priv = dev->data->dev_private; - - flow_list_destroy(dev, &priv->flows, (uintptr_t)(void *)flow); + flow_list_destroy(dev, NULL, (uintptr_t)(void *)flow); return 0; } @@ -6912,9 +6879,7 @@ int mlx5_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error __rte_unused) { - struct mlx5_priv *priv = dev->data->dev_private; - - mlx5_flow_list_flush(dev, &priv->flows, false); + mlx5_flow_list_flush(dev, false, false); return 0; } @@ -6965,8 +6930,7 @@ flow_drv_query(struct rte_eth_dev *dev, { struct mlx5_priv *priv = dev->data->dev_private; const struct mlx5_flow_driver_ops *fops; - struct rte_flow *flow = mlx5_ipool_get(priv->sh->ipool - [MLX5_IPOOL_RTE_FLOW], + struct rte_flow *flow = mlx5_ipool_get(priv->flows, flow_idx); enum mlx5_flow_drv_type ftype; @@ -7832,10 +7796,9 @@ mlx5_flow_discover_mreg_c(struct rte_eth_dev *dev) if (!config->dv_flow_en) break; /* Create internal flow, validation skips copy action. 
*/ - flow_idx = flow_list_create(dev, NULL, &attr, items, + flow_idx = flow_list_create(dev, false, &attr, items, actions, false, &error); - flow = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], - flow_idx); + flow = mlx5_ipool_get(priv->flows, flow_idx); if (!flow) continue; config->flow_mreg_c[n++] = idx; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2f2aa962f9..cb0b97a2d4 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1000,6 +1000,7 @@ struct rte_flow { ILIST_ENTRY(uint32_t)next; /**< Index to the next flow structure. */ uint32_t dev_handles; /**< Device flow handles that are part of the flow. */ + uint32_t control:1; uint32_t drv_type:2; /**< Driver type. */ uint32_t tunnel:1; uint32_t meter:24; /**< Holds flow meter id. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index c50649a107..d039794c9a 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13830,6 +13830,11 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) dev_handle->split_flow_id) mlx5_ipool_free(fm->flow_ipool, dev_handle->split_flow_id); + else if (dev_handle->split_flow_id && + !dev_handle->is_meter_flow_id) + mlx5_ipool_free(priv->sh->ipool + [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], + dev_handle->split_flow_id); mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], tmp_idx); } diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c index ae7fcca229..48cf780786 100644 --- a/drivers/net/mlx5/mlx5_trigger.c +++ b/drivers/net/mlx5/mlx5_trigger.c @@ -1187,7 +1187,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev) /* Control flows for default traffic can be removed firstly. */ mlx5_traffic_disable(dev); /* All RX queue flags will be cleared in the flush interface. */ - mlx5_flow_list_flush(dev, &priv->flows, true); + mlx5_flow_list_flush(dev, false, true); mlx5_flow_meter_rxq_flush(dev); mlx5_rx_intr_vec_disable(dev); priv->sh->port[priv->dev_port - 1].ih_port_id = RTE_MAX_ETHPORTS; @@ -1370,7 +1370,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) return 0; error: ret = rte_errno; /* Save rte_errno before cleanup. */ - mlx5_flow_list_flush(dev, &priv->ctrl_flows, false); + mlx5_flow_list_flush(dev, true, false); rte_errno = ret; /* Restore rte_errno. */ return -rte_errno; } @@ -1385,9 +1385,7 @@ mlx5_traffic_enable(struct rte_eth_dev *dev) void mlx5_traffic_disable(struct rte_eth_dev *dev) { - struct mlx5_priv *priv = dev->data->dev_private; - - mlx5_flow_list_flush(dev, &priv->ctrl_flows, false); + mlx5_flow_list_flush(dev, true, false); } /**
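
To illustrate the resulting flush path, here is a simplified sketch (not the literal driver code) of flushing one class of flows once they are stored in the per-port index pool priv->flows and tagged with the new control bit:

	static void
	example_flow_flush(struct rte_eth_dev *dev, bool control)
	{
		struct mlx5_priv *priv = dev->data->dev_private;
		uint32_t fidx = 1;
		struct rte_flow *flow;

		/* Flush the ipool caches, then walk every allocated flow and
		 * destroy only the flows of the requested type. */
		MLX5_IPOOL_FOREACH(priv->flows, fidx, flow) {
			if (flow->control == control)
				flow_list_destroy(dev, NULL, fidx);
		}
	}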