From patchwork Fri Mar 17 12:47:18 2017
X-Patchwork-Submitter: Hemant Agrawal <hemant.agrawal@nxp.com>
X-Patchwork-Id: 21860
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Date: Fri, 17 Mar 2017 18:17:18 +0530
Message-ID: <1489754838-1455-2-git-send-email-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1489754838-1455-1-git-send-email-hemant.agrawal@nxp.com>
References: <1489754838-1455-1-git-send-email-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v1] mempool/dpaa2: add DPAA2 hardware offloaded mempool

The DPAA2 hardware mempool handlers allow buffers to be enqueued to and
dequeued from NXP's QBMAN hardware block.

When this pool is enabled, CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS is set
to 'dpaa2'.

This mempool currently supports packet mbuf type blocks only.
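For context, a minimal usage sketch (illustrative only, not part of this
patch; the pool name and sizes below are example values) of how an
application ends up on these handlers through the generic mempool API:

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_lcore.h>

/* With CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa2" the standard helper
 * below already resolves to the hardware-backed handlers registered by
 * this driver; an application could also select them explicitly by
 * calling rte_mempool_set_ops_byname(mp, "dpaa2", NULL) on an empty,
 * not-yet-populated pool.
 */
static struct rte_mempool *
create_dpaa2_pktmbuf_pool(void)
{
        /* 8192 mbufs, per-lcore cache of 256, default data room size */
        return rte_pktmbuf_pool_create("pkt_pool", 8192, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
}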
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 MAINTAINERS                                     |   1 +
 config/common_base                              |   5 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc       |   8 +
 drivers/Makefile                                |   1 +
 drivers/bus/Makefile                            |   2 +
 drivers/mempool/Makefile                        |  40 +++
 drivers/mempool/dpaa2/Makefile                  |  72 ++++
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c        | 374 +++++++++++++++++++++
 drivers/mempool/dpaa2/dpaa2_hw_mempool.h        |  91 +++++
 .../mempool/dpaa2/rte_mempool_dpaa2_version.map |   8 +
 10 files changed, 602 insertions(+)
 create mode 100644 drivers/mempool/Makefile
 create mode 100644 drivers/mempool/dpaa2/Makefile
 create mode 100644 drivers/mempool/dpaa2/dpaa2_hw_mempool.c
 create mode 100644 drivers/mempool/dpaa2/dpaa2_hw_mempool.h
 create mode 100644 drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index e9b1ac1..229b919 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -352,6 +352,7 @@ F: doc/guides/nics/nfp.rst
 NXP dpaa2
 M: Hemant Agrawal
 F: drivers/bus/fslmc/
+F: drivers/mempool/dpaa2/
 
 QLogic bnx2x
 M: Harish Patil
diff --git a/config/common_base b/config/common_base
index dfe5db2..1c3bbe0 100644
--- a/config/common_base
+++ b/config/common_base
@@ -292,6 +292,11 @@ CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_MBOX=n
 
 #
+# Compile Support Libraries for NXP DPAA2
+#
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL=n
+
+#
 # Compile NXP DPAA2 FSL-MC Bus
 #
 CONFIG_RTE_LIBRTE_FSLMC_BUS=n
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 365ae5a..47a5eee 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -42,6 +42,14 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a57+fp+simd"
 CONFIG_RTE_MAX_LCORE=8
 CONFIG_RTE_MAX_NUMA_NODES=1
+CONFIG_RTE_PKTMBUF_HEADROOM=256
+
+#
+# Compile Support Libraries for DPAA2
+#
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa2"
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/drivers/Makefile b/drivers/Makefile
index e937449..88e1005 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,6 +32,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += bus
+DIRS-y += mempool
 DIRS-y += net
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 8f7864b..70fbe79 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -31,7 +31,9 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
 CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+endif
 
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
new file mode 100644
index 0000000..fb19049
--- /dev/null
+++ b/drivers/mempool/Makefile
@@ -0,0 +1,40 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 NXP. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of NXP nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+endif
+
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
+
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/mempool/dpaa2/Makefile b/drivers/mempool/dpaa2/Makefile
new file mode 100644
index 0000000..cc5f068
--- /dev/null
+++ b/drivers/mempool/dpaa2/Makefile
@@ -0,0 +1,72 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 NXP. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of NXP nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa2.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+endif
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa2_version.map
+
+# library version
+LIBABIVER := 1
+
+# all sources are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2_hw_mempool.c
+
+# library dependencies
+DEPDIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += drivers/bus/fslmc
+
+LDLIBS += -lrte_bus_fslmc
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
new file mode 100644
index 0000000..a8a530c
--- /dev/null
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -0,0 +1,374 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include "dpaa2_hw_mempool.h"
+
+struct dpaa2_bp_info rte_dpaa2_bpid_info[MAX_BPID];
+static struct dpaa2_bp_list *h_bp_list;
+
+static int
+rte_hw_mbuf_create_pool(struct rte_mempool *mp)
+{
+        struct dpaa2_bp_list *bp_list;
+        struct dpaa2_dpbp_dev *avail_dpbp;
+        struct dpbp_attr dpbp_attr;
+        uint32_t bpid;
+        int ret, p_ret;
+
+        avail_dpbp = dpaa2_alloc_dpbp_dev();
+
+        if (!avail_dpbp) {
+                PMD_DRV_LOG(ERR, "DPAA2 resources not available");
+                return -ENOENT;
+        }
+
+        if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+                ret = dpaa2_affine_qbman_swp();
+                if (ret) {
+                        RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+                        return ret;
+                }
+        }
+
+        ret = dpbp_enable(&avail_dpbp->dpbp, CMD_PRI_LOW, avail_dpbp->token);
+        if (ret != 0) {
+                PMD_INIT_LOG(ERR, "Resource enable failure with"
+                        " err code: %d\n", ret);
+                return ret;
+        }
+
+        ret = dpbp_get_attributes(&avail_dpbp->dpbp, CMD_PRI_LOW,
+                                  avail_dpbp->token, &dpbp_attr);
+        if (ret != 0) {
+                PMD_INIT_LOG(ERR, "Resource read failure with"
+                        " err code: %d\n", ret);
+                p_ret = ret;
+                ret = dpbp_disable(&avail_dpbp->dpbp, CMD_PRI_LOW,
+                                   avail_dpbp->token);
+                return p_ret;
+        }
+
+        /* Allocate the bp_list which will be added into global_bp_list */
+        bp_list = (struct dpaa2_bp_list *)malloc(sizeof(struct dpaa2_bp_list));
+        if (!bp_list) {
+                PMD_INIT_LOG(ERR, "No heap memory available");
+                return -ENOMEM;
+        }
+
+        /* Set parameters of buffer pool list */
+        bp_list->buf_pool.num_bufs = mp->size;
+        bp_list->buf_pool.size = mp->elt_size
+                        - sizeof(struct rte_mbuf) - rte_pktmbuf_priv_size(mp);
+        bp_list->buf_pool.bpid = dpbp_attr.bpid;
+        bp_list->buf_pool.h_bpool_mem = NULL;
+        bp_list->buf_pool.mp = mp;
+        bp_list->buf_pool.dpbp_node = avail_dpbp;
+        bp_list->next = h_bp_list;
+
+        bpid = dpbp_attr.bpid;
+
+        rte_dpaa2_bpid_info[bpid].meta_data_size = sizeof(struct rte_mbuf)
+                                + rte_pktmbuf_priv_size(mp);
+        rte_dpaa2_bpid_info[bpid].bp_list = bp_list;
+        rte_dpaa2_bpid_info[bpid].bpid = bpid;
+
+        mp->pool_data = (void *)&rte_dpaa2_bpid_info[bpid];
+
+        PMD_INIT_LOG(DEBUG, "BP List created for bpid =%d", dpbp_attr.bpid);
+
+        h_bp_list = bp_list;
+        /* Identification for our offloaded pool_data structure */
+        mp->flags |= MEMPOOL_F_HW_PKT_POOL;
+        return 0;
+}
+
+static void
+rte_hw_mbuf_free_pool(struct rte_mempool *mp)
+{
+        struct dpaa2_bp_info *bpinfo;
+        struct dpaa2_bp_list *bp;
+        struct dpaa2_dpbp_dev *dpbp_node;
+
+        if (!mp->pool_data) {
+                PMD_DRV_LOG(ERR, "Not a valid dpaa2 pool");
+                return;
+        }
+
+        bpinfo = (struct dpaa2_bp_info *)mp->pool_data;
+        bp = bpinfo->bp_list;
+        dpbp_node = bp->buf_pool.dpbp_node;
+
+        dpbp_disable(&(dpbp_node->dpbp), CMD_PRI_LOW, dpbp_node->token);
+
+        if (h_bp_list == bp) {
+                h_bp_list = h_bp_list->next;
+        } else { /* if it is not the first node */
+                struct dpaa2_bp_list *prev = h_bp_list, *temp;
+
+                temp = h_bp_list->next;
+                while (temp) {
+                        if (temp == bp) {
+                                prev->next = temp->next;
+                                free(bp);
+                                break;
+                        }
+                        prev = temp;
+                        temp = temp->next;
+                }
+        }
+
+        dpaa2_free_dpbp_dev(dpbp_node);
+}
+
+static void
+rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
+                        void * const *obj_table,
+                        uint32_t bpid,
+                        uint32_t meta_data_size,
+                        int count)
+{
+        struct qbman_release_desc releasedesc;
+        struct qbman_swp *swp;
+        int ret;
+        int i, n;
+        uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
+
+        if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+                ret = dpaa2_affine_qbman_swp();
+                if (ret != 0) {
+                        RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
+                        return;
+                }
+        }
+        swp = DPAA2_PER_LCORE_PORTAL;
+
+        /* Create a release descriptor required for releasing
+         * buffers into QBMAN
+         */
+        qbman_release_desc_clear(&releasedesc);
+        qbman_release_desc_set_bpid(&releasedesc, bpid);
+
+        n = count % DPAA2_MBUF_MAX_ACQ_REL;
+        if (unlikely(!n))
+                goto aligned;
+
+        /* convert mbufs to buffers for the remainder */
+        for (i = 0; i < n ; i++) {
+#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+                bufs[i] = (uint64_t)rte_mempool_virt2phy(pool, obj_table[i])
+                                + meta_data_size;
+#else
+                bufs[i] = (uint64_t)obj_table[i] + meta_data_size;
+#endif
+        }
+
+        /* feed them to bman */
+        do {
+                ret = qbman_swp_release(swp, &releasedesc, bufs, n);
+        } while (ret == -EBUSY);
+
+aligned:
+        /* if there are more buffers to free */
+        while (n < count) {
+                /* convert mbufs to buffers */
+                for (i = 0; i < DPAA2_MBUF_MAX_ACQ_REL; i++) {
+#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+                        bufs[i] = (uint64_t)
+                                rte_mempool_virt2phy(pool, obj_table[n + i])
+                                + meta_data_size;
+#else
+                        bufs[i] = (uint64_t)obj_table[n + i] + meta_data_size;
+#endif
+                }
+
+                do {
+                        ret = qbman_swp_release(swp, &releasedesc, bufs,
+                                                DPAA2_MBUF_MAX_ACQ_REL);
+                } while (ret == -EBUSY);
+                n += DPAA2_MBUF_MAX_ACQ_REL;
+        }
+}
+
+int
+rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
+                          void **obj_table, unsigned int count)
+{
+#ifdef RTE_LIBRTE_DPAA2_DEBUG_DRIVER
+        static int alloc;
+#endif
+        struct qbman_swp *swp;
+        uint16_t bpid;
+        uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
+        int i, ret;
+        unsigned int n = 0;
+        struct dpaa2_bp_info *bp_info;
+
+        bp_info = mempool_to_bpinfo(pool);
+
+        if (!(bp_info->bp_list)) {
+                RTE_LOG(ERR, PMD, "DPAA2 buffer pool not configured\n");
+                return -ENOENT;
+        }
+
+        bpid = bp_info->bpid;
+
+        if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+                ret = dpaa2_affine_qbman_swp();
+                if (ret != 0) {
+                        RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
+                        return ret;
+                }
+        }
+        swp = DPAA2_PER_LCORE_PORTAL;
+
+        while (n < count) {
+                /* Acquire is all-or-nothing, so we drain in chunks of
+                 * DPAA2_MBUF_MAX_ACQ_REL (7), then the remainder.
+                 */
+                if ((count - n) > DPAA2_MBUF_MAX_ACQ_REL) {
+                        ret = qbman_swp_acquire(swp, bpid, bufs,
+                                                DPAA2_MBUF_MAX_ACQ_REL);
+                } else {
+                        ret = qbman_swp_acquire(swp, bpid, bufs,
+                                                count - n);
+                }
+                /* In case fewer buffers are available in the pool than
+                 * requested, qbman_swp_acquire returns 0
+                 */
+                if (ret <= 0) {
+                        PMD_TX_LOG(ERR, "Buffer acquire failed with"
+                                   " err code: %d", ret);
+                        /* The API expects the exact number of requested bufs */
+                        /* Releasing all buffers allocated */
+                        rte_dpaa2_mbuf_release(pool, obj_table, bpid,
+                                               bp_info->meta_data_size, n);
+                        return ret;
+                }
+                /* assign mbufs from the acquired objects */
+                for (i = 0; (i < ret) && bufs[i]; i++) {
+                        DPAA2_MODIFY_IOVA_TO_VADDR(bufs[i], uint64_t);
+                        obj_table[n] = (struct rte_mbuf *)
+                                       (bufs[i] - bp_info->meta_data_size);
+                        rte_mbuf_refcnt_set((struct rte_mbuf *)obj_table[n], 0);
+                        PMD_TX_LOG(DEBUG, "Acquired %p address %p from BMAN",
+                                   (void *)bufs[i], (void *)obj_table[n]);
+                        n++;
+                }
+        }
+
+#ifdef RTE_LIBRTE_DPAA2_DEBUG_DRIVER
+        alloc += n;
+        PMD_TX_LOG(DEBUG, "Total = %d , req = %d done = %d",
+                   alloc, count, n);
+#endif
+        return 0;
+}
+
+static int
+rte_hw_mbuf_free_bulk(struct rte_mempool *pool,
+                      void * const *obj_table, unsigned int n)
+{
+        struct dpaa2_bp_info *bp_info;
+
+        bp_info = mempool_to_bpinfo(pool);
+        if (!(bp_info->bp_list)) {
+                RTE_LOG(ERR, PMD, "DPAA2 buffer pool not configured");
+                return -ENOENT;
+        }
+        rte_dpaa2_mbuf_release(pool, obj_table, bp_info->bpid,
+                               bp_info->meta_data_size, n);
+
+        return 0;
+}
+
+static unsigned int
+rte_hw_mbuf_get_count(const struct rte_mempool *mp)
+{
+        int ret;
+        unsigned int num_of_bufs = 0;
+        struct dpaa2_bp_info *bp_info;
+        struct dpaa2_dpbp_dev *dpbp_node;
+
+        if (!mp || !mp->pool_data) {
+                RTE_LOG(ERR, PMD, "Invalid mempool provided");
+                return 0;
+        }
+
+        bp_info = (struct dpaa2_bp_info *)mp->pool_data;
+        dpbp_node = bp_info->bp_list->buf_pool.dpbp_node;
+
+        ret = dpbp_get_num_free_bufs(&dpbp_node->dpbp, CMD_PRI_LOW,
+                                     dpbp_node->token, &num_of_bufs);
+        if (ret) {
+                RTE_LOG(ERR, PMD, "Unable to obtain free buf count (err=%d)",
+                        ret);
+                return 0;
+        }
+
+        RTE_LOG(DEBUG, PMD, "Free bufs = %u", num_of_bufs);
+
+        return num_of_bufs;
+}
+
+struct rte_mempool_ops dpaa2_mpool_ops = {
+        .name = "dpaa2",
+        .alloc = rte_hw_mbuf_create_pool,
+        .free = rte_hw_mbuf_free_pool,
+        .enqueue = rte_hw_mbuf_free_bulk,
+        .dequeue = rte_dpaa2_mbuf_alloc_bulk,
+        .get_count = rte_hw_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
new file mode 100644
index 0000000..4f2fcd7
--- /dev/null
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
@@ -0,0 +1,91 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_HW_DPBP_H_
+#define _DPAA2_HW_DPBP_H_
+
+#define DPAA2_MAX_BUF_POOLS	8
+
+struct buf_pool_cfg {
+        void *addr;
+        /**< The address from where DPAA2 will carve out the buffers */
+        phys_addr_t phys_addr;
+        /**< Physical address of the memory provided in addr */
+        uint32_t num;
+        /**< Number of buffers */
+        uint32_t size;
+        /**< Size including headroom for each buffer */
+        uint16_t align;
+        /**< Buffer alignment (in bytes) */
+        uint16_t bpid;
+        /**< Autogenerated buffer pool ID for internal use */
+};
+
+struct buf_pool {
+        uint32_t size;                    /**< Size of the Pool */
+        uint32_t num_bufs;                /**< Number of buffers in Pool */
+        uint16_t bpid;                    /**< Pool ID, from pool configuration */
+        uint8_t *h_bpool_mem;             /**< Internal context data */
+        struct rte_mempool *mp;           /**< DPDK RTE EAL pool reference */
+        struct dpaa2_dpbp_dev *dpbp_node; /**< Hardware context */
+};
+
+/*!
+ * Buffer pool list configuration structure. The user needs to give DPAA2
+ * the valid number of 'num_buf_pools'.
+ */
+struct dpaa2_bp_list_cfg {
+        struct buf_pool_cfg buf_pool; /* Configuration of each buffer pool */
+};
+
+struct dpaa2_bp_list {
+        struct dpaa2_bp_list *next;
+        struct rte_mempool *mp;
+        struct buf_pool buf_pool;
+};
+
+struct dpaa2_bp_info {
+        uint32_t meta_data_size;
+        uint32_t bpid;
+        struct dpaa2_bp_list *bp_list;
+};
+
+#define mempool_to_bpinfo(mp) ((struct dpaa2_bp_info *)(mp)->pool_data)
+#define mempool_to_bpid(mp) ((mempool_to_bpinfo(mp))->bpid)
+
+extern struct dpaa2_bp_info rte_dpaa2_bpid_info[MAX_BPID];
+
+int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
+                              void **obj_table, unsigned int count);
+
+#endif /* _DPAA2_HW_DPBP_H_ */
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
new file mode 100644
index 0000000..a8aa685
--- /dev/null
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -0,0 +1,8 @@
+DPDK_17.05 {
+	global:
+
+	rte_dpaa2_bpid_info;
+	rte_dpaa2_mbuf_alloc_bulk;
+
+	local: *;
+};
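Usage note (illustrative, not part of the patch): because the ops table
above wires .get_count to rte_hw_mbuf_get_count(), the generic accessor
reports the free-buffer count held by the hardware DPBP (plus any objects
in per-lcore caches) rather than a software ring, e.g.:

#include <stdio.h>
#include <rte_mempool.h>

/* 'mp' is assumed to be a pool created with the "dpaa2" ops, as in the
 * sketch near the top of this mail.
 */
static void
print_hw_free_bufs(const struct rte_mempool *mp)
{
        printf("pool %s: %u buffers free\n",
               mp->name, rte_mempool_avail_count(mp));
}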