Subject: [dpdk-dev] [PATCH v3 26/27] mempool/octeontx2: add devargs for max pool selection
From: Jerin Jacob Kollanukkaran
To: Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru
Cc: Harman Kalra
Date: Mon, 17 Jun 2019 21:25:36 +0530
Message-ID: <20190617155537.36144-27-jerinj@marvell.com>
In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com>
References: <20190601014905.45531-1-jerinj@marvell.com> <20190617155537.36144-1-jerinj@marvell.com>

From: Jerin Jacob

The maximum number of mempools per application needs to be configured
on the HW during mempool driver initialization. The HW can support up to
1M mempools. Since each mempool costs a set of HW resources, the
max_pools devargs parameter is introduced to configure the number of
mempools required for the application.

For example:

    -w 0002:02:00.0,max_pools=512

With the above configuration, the driver will set up only 512 mempools
for the given application, saving HW resources.
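As a rough standalone sketch of the mapping the diff below implements
(the helper names log2_round_up and max_pools_to_aura_sz are made up for
illustration; the driver itself uses rte_log2_u32() in parse_max_pools()):
a max_pools request is clamped to the 128..1M range and encoded as
log2(value) - 6, which is the aura-size field programmed into HW.

/*
 * Minimal sketch, not the driver code: clamp a max_pools request to
 * [128, 1M] and encode it as log2(value) - 6 (the HW aura-size field).
 */
#include <stdint.h>
#include <stdio.h>

/* Rounded-up log2, standing in for DPDK's rte_log2_u32(). */
static uint32_t log2_round_up(uint32_t v)
{
	uint32_t r = 0;

	while ((1u << r) < v)
		r++;
	return r;
}

static uint8_t max_pools_to_aura_sz(uint32_t max_pools)
{
	if (max_pools < 128)		/* floor: NPA_AURA_SZ_128 */
		max_pools = 128;
	if (max_pools > (1u << 20))	/* ceiling: NPA_AURA_SZ_1M */
		max_pools = 1u << 20;
	return (uint8_t)(log2_round_up(max_pools) - 6);
}

int main(void)
{
	/* max_pools=512 from the example above encodes to aura_sz 3,
	 * i.e. 1 << (3 + 6) = 512 pools provisioned in HW. */
	printf("aura_sz = %u\n", (unsigned int)max_pools_to_aura_sz(512));
	return 0;
}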
Signed-off-by: Jerin Jacob
Signed-off-by: Harman Kalra
---
 drivers/mempool/octeontx2/otx2_mempool.c | 41 +++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index 1bcb86cf4..ff7fcac85 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <rte_kvargs.h>
 #include
 #include
 #include
@@ -142,6 +143,42 @@ otx2_aura_size_to_u32(uint8_t val)
 	return 1 << (val + 6);
 }
 
+static int
+parse_max_pools(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+	if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128))
+		val = 128;
+	if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M))
+		val = BIT_ULL(20);
+
+	*(uint8_t *)extra_args = rte_log2_u32(val) - 6;
+	return 0;
+}
+
+#define OTX2_MAX_POOLS "max_pools"
+
+static uint8_t
+otx2_parse_aura_size(struct rte_devargs *devargs)
+{
+	uint8_t aura_sz = NPA_AURA_SZ_128;
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		goto exit;
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
+	rte_kvargs_free(kvlist);
+exit:
+	return aura_sz;
+}
+
 static inline int
 npa_lf_attach(struct otx2_mbox *mbox)
 {
@@ -234,7 +271,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
 	if (rc)
 		goto npa_detach;
 
-	aura_sz = NPA_AURA_SZ_128;
+	aura_sz = otx2_parse_aura_size(pci_dev->device.devargs);
 	nr_pools = otx2_aura_size_to_u32(aura_sz);
 
 	lf = &dev->npalf;
@@ -397,3 +434,5 @@ static struct rte_pci_driver pci_npa = {
 RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
 RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
 RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
+			OTX2_MAX_POOLS "=<128-1048576>");
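As a rough usage sketch (not part of the patch), an application could pass
the devarg through EAL's PCI whitelist option at init time; the PCI address
and argument layout below are illustrative only and reuse the commit
message example:

#include <rte_eal.h>

int main(int argc, char **argv)
{
	/* Illustrative EAL arguments only: the BDF and pool count come
	 * from the commit message example (-w 0002:02:00.0,max_pools=512). */
	char *eal_argv[] = {
		argv[0],
		"-w", "0002:02:00.0,max_pools=512",
	};

	(void)argc;
	if (rte_eal_init(3, eal_argv) < 0)
		return -1;

	/* Mempool creation proceeds as usual; the OCTEON TX2 NPA now
	 * provisions HW contexts for at most 512 pools. */
	return 0;
}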