From patchwork Sun Apr 9 07:59:46 2017
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 23372
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: dev@dpdk.org
Date: Sun, 9 Apr 2017 13:29:46 +0530
Message-ID: <1491724786-6468-2-git-send-email-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1491724786-6468-1-git-send-email-hemant.agrawal@nxp.com>
References: <1489754838-1455-2-git-send-email-hemant.agrawal@nxp.com>
 <1491724786-6468-1-git-send-email-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v2] mempool/dpaa2: add DPAA2 hardware offloaded mempool
List-Id: DPDK patches and discussions

DPAA2 hardware mempool handlers allow buffers to be enqueued to and
dequeued from NXP's QBMAN hardware block, which manages the buffer
pools in hardware.

When this mempool is enabled, CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS is
set to "dpaa2" so that it becomes the default mempool ops for the
platform. This mempool currently supports packet mbuf type blocks only.

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
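Note for reviewers (illustrative, not part of the patch): with the
defconfig change below setting CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS to
"dpaa2", an application needs no code change; a plain pktmbuf pool is
transparently backed by a QBMAN DPBP. A minimal sketch, with the pool
name and sizes being example values:

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
example_create_pool(void)
{
	/* rte_pktmbuf_pool_create() resolves the configured default
	 * mempool ops name, which this patch sets to "dpaa2" for the
	 * dpaa2 target, so alloc/free go through QBMAN acquire/release.
	 */
	return rte_pktmbuf_pool_create("pktmbuf_pool",	/* example name */
				       8192,	/* number of mbufs (example) */
				       0,	/* per-lcore cache size */
				       0,	/* application private area */
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
}
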
 MAINTAINERS                                        |   1 +
 config/common_base                                 |   5 +
 config/defconfig_arm64-dpaa2-linuxapp-gcc          |   8 +
 drivers/Makefile                                   |   1 +
 drivers/bus/Makefile                               |   4 +
 drivers/mempool/Makefile                           |   2 +
 drivers/mempool/dpaa2/Makefile                     |  67 ++++
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c           | 372 +++++++++++++++++++++
 drivers/mempool/dpaa2/dpaa2_hw_mempool.h           |  91 +++++
 .../mempool/dpaa2/rte_mempool_dpaa2_version.map    |   8 +
 10 files changed, 559 insertions(+)
 create mode 100644 drivers/mempool/dpaa2/Makefile
 create mode 100644 drivers/mempool/dpaa2/dpaa2_hw_mempool.c
 create mode 100644 drivers/mempool/dpaa2/dpaa2_hw_mempool.h
 create mode 100644 drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 06ae25a..d0a08cb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -367,6 +367,7 @@ F: doc/guides/nics/nfp.rst
 NXP dpaa2
 M: Hemant Agrawal
 F: drivers/bus/fslmc/
+F: drivers/mempool/dpaa2/
 
 QLogic bnx2x
 M: Harish Patil
diff --git a/config/common_base b/config/common_base
index 51294ba..c02e259 100644
--- a/config/common_base
+++ b/config/common_base
@@ -306,6 +306,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 CONFIG_RTE_LIBRTE_FSLMC_BUS=n
 
 #
+# Compile Support Libraries for NXP DPAA2
+#
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL=n
+
+#
 # Compile burst-oriented VIRTIO PMD driver
 #
 CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/config/defconfig_arm64-dpaa2-linuxapp-gcc b/config/defconfig_arm64-dpaa2-linuxapp-gcc
index 365ae5a..47a5eee 100644
--- a/config/defconfig_arm64-dpaa2-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa2-linuxapp-gcc
@@ -42,6 +42,14 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a57+fp+simd"
 CONFIG_RTE_MAX_LCORE=8
 CONFIG_RTE_MAX_NUMA_NODES=1
 
+CONFIG_RTE_PKTMBUF_HEADROOM=256
+
+#
+# Compile Support Libraries for DPAA2
+#
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL=n
+CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa2"
+
 #
 # Compile NXP DPAA2 FSL-MC Bus
 #
diff --git a/drivers/Makefile b/drivers/Makefile
index fbc2351..19459fd 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -33,6 +33,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += bus
 DIRS-y += mempool
+DEPDIRS-mempool := bus
 DIRS-y += net
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index a0725ac..1aab1cc 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -33,6 +33,10 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mbuf librte_mempool librte_ring librte_ether
 
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL),y)
+CONFIG_RTE_LIBRTE_FSLMC_BUS = $(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL)
+endif
+
 DIRS-$(CONFIG_RTE_LIBRTE_FSLMC_BUS) += fslmc
 DEPDIRS-fslmc = ${core-libs}
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index 0c6c45c..8fd40e1 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -33,6 +33,8 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 core-libs := librte_eal librte_mempool librte_ring
 
+DIRS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2
+DEPDIRS-dpaa2 = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
 DEPDIRS-ring = $(core-libs)
 DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += stack
diff --git a/drivers/mempool/dpaa2/Makefile b/drivers/mempool/dpaa2/Makefile
new file mode 100644
index 0000000..3af3ac8
--- /dev/null
+++ b/drivers/mempool/dpaa2/Makefile
@@ -0,0 +1,67 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 NXP. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of NXP nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_dpaa2.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
+CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL = $(CONFIG_RTE_LIBRTE_DPAA2_PMD)
+endif
+
+ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_DEBUG_INIT),y)
+CFLAGS += -O0 -g
+CFLAGS += "-Wno-error"
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc
+CFLAGS += -I$(RTE_SDK)/drivers/bus/fslmc/qbman/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
+
+# versioning export map
+EXPORT_MAP := rte_mempool_dpaa2_version.map
+
+# library version
+LIBABIVER := 1
+
+# all sources are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA2_MEMPOOL) += dpaa2_hw_mempool.c
+
+LDLIBS += -lrte_bus_fslmc
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
new file mode 100644
index 0000000..cded29b
--- /dev/null
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -0,0 +1,372 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <string.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <errno.h>
+
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_ether.h>
+
+#include <fslmc_logs.h>
+#include <mc/fsl_dpbp.h>
+#include <portal/dpaa2_hw_pvt.h>
+#include <portal/dpaa2_hw_dpio.h>
+#include "dpaa2_hw_mempool.h"
+
+struct dpaa2_bp_info rte_dpaa2_bpid_info[MAX_BPID];
+static struct dpaa2_bp_list *h_bp_list;
+
+static int
+rte_hw_mbuf_create_pool(struct rte_mempool *mp)
+{
+	struct dpaa2_bp_list *bp_list;
+	struct dpaa2_dpbp_dev *avail_dpbp;
+	struct dpbp_attr dpbp_attr;
+	uint32_t bpid;
+	int ret, p_ret;
+
+	avail_dpbp = dpaa2_alloc_dpbp_dev();
+
+	if (!avail_dpbp) {
+		PMD_DRV_LOG(ERR, "DPAA2 resources not available");
+		return -ENOENT;
+	}
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failure in affining portal\n");
+			return ret;
+		}
+	}
+
+	ret = dpbp_enable(&avail_dpbp->dpbp, CMD_PRI_LOW, avail_dpbp->token);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Resource enable failure with"
+			" err code: %d\n", ret);
+		return ret;
+	}
+
+	ret = dpbp_get_attributes(&avail_dpbp->dpbp, CMD_PRI_LOW,
+				  avail_dpbp->token, &dpbp_attr);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Resource read failure with"
+			" err code: %d\n", ret);
+		p_ret = ret;
+		ret = dpbp_disable(&avail_dpbp->dpbp, CMD_PRI_LOW,
+				   avail_dpbp->token);
+		return p_ret;
+	}
+
+	/* Allocate the bp_list which will be added into global_bp_list */
+	bp_list = (struct dpaa2_bp_list *)malloc(sizeof(struct dpaa2_bp_list));
+	if (!bp_list) {
+		PMD_INIT_LOG(ERR, "No heap memory available");
+		return -ENOMEM;
+	}
+
+	/* Set parameters of buffer pool list */
+	bp_list->buf_pool.num_bufs = mp->size;
+	bp_list->buf_pool.size = mp->elt_size
+			- sizeof(struct rte_mbuf) - rte_pktmbuf_priv_size(mp);
+	bp_list->buf_pool.bpid = dpbp_attr.bpid;
+	bp_list->buf_pool.h_bpool_mem = NULL;
+	bp_list->buf_pool.dpbp_node = avail_dpbp;
+	/* Identification for our offloaded pool_data structure */
+	bp_list->dpaa2_ops_index = mp->ops_index;
+	bp_list->next = h_bp_list;
+	bp_list->mp = mp;
+
+	bpid = dpbp_attr.bpid;
+
+	rte_dpaa2_bpid_info[bpid].meta_data_size = sizeof(struct rte_mbuf)
+				+ rte_pktmbuf_priv_size(mp);
+	rte_dpaa2_bpid_info[bpid].bp_list = bp_list;
+	rte_dpaa2_bpid_info[bpid].bpid = bpid;
+
+	mp->pool_data = (void *)&rte_dpaa2_bpid_info[bpid];
+
+	PMD_INIT_LOG(DEBUG, "BP List created for bpid =%d", dpbp_attr.bpid);
+
+	h_bp_list = bp_list;
+	return 0;
+}
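A note on the size arithmetic above (illustrative, not part of the
patch): meta_data_size is the distance from the object pointer DPDK
hands out to the address QBMAN tracks, i.e. the buffer past the mbuf
structure and private area. A minimal sketch of that mapping, assuming
the virtual-address build (RTE_LIBRTE_DPAA2_USE_PHYS_IOVA not set):

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch: the address handed to QBMAN for a pool object is the object
 * start plus meta_data_size, exactly as rte_hw_mbuf_create_pool()
 * records it per bpid; the release/acquire paths add/subtract it.
 */
static inline uint64_t
example_obj_to_qbman_addr(struct rte_mempool *mp, void *obj)
{
	uint32_t meta = sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);

	return (uint64_t)(uintptr_t)obj + meta;
}
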
+
+static void
+rte_hw_mbuf_free_pool(struct rte_mempool *mp)
+{
+	struct dpaa2_bp_info *bpinfo;
+	struct dpaa2_bp_list *bp;
+	struct dpaa2_dpbp_dev *dpbp_node;
+
+	if (!mp->pool_data) {
+		PMD_DRV_LOG(ERR, "Not a valid dpaa2 pool");
+		return;
+	}
+
+	bpinfo = (struct dpaa2_bp_info *)mp->pool_data;
+	bp = bpinfo->bp_list;
+	dpbp_node = bp->buf_pool.dpbp_node;
+
+	dpbp_disable(&(dpbp_node->dpbp), CMD_PRI_LOW, dpbp_node->token);
+
+	if (h_bp_list == bp) {
+		h_bp_list = h_bp_list->next;
+	} else { /* if it is not the first node */
+		struct dpaa2_bp_list *prev = h_bp_list, *temp;
+		temp = h_bp_list->next;
+		while (temp) {
+			if (temp == bp) {
+				prev->next = temp->next;
+				free(bp);
+				break;
+			}
+			prev = temp;
+			temp = temp->next;
+		}
+	}
+
+	dpaa2_free_dpbp_dev(dpbp_node);
+}
+
+static void
+rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
+			void * const *obj_table,
+			uint32_t bpid,
+			uint32_t meta_data_size,
+			int count)
+{
+	struct qbman_release_desc releasedesc;
+	struct qbman_swp *swp;
+	int ret;
+	int i, n;
+	uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret != 0) {
+			RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
+			return;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	/* Create a release descriptor required for releasing
+	 * buffers into QBMAN
+	 */
+	qbman_release_desc_clear(&releasedesc);
+	qbman_release_desc_set_bpid(&releasedesc, bpid);
+
+	n = count % DPAA2_MBUF_MAX_ACQ_REL;
+	if (unlikely(!n))
+		goto aligned;
+
+	/* convert mbufs to buffers for the remainder */
+	for (i = 0; i < n; i++) {
+#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+		bufs[i] = (uint64_t)rte_mempool_virt2phy(pool, obj_table[i])
+				+ meta_data_size;
+#else
+		bufs[i] = (uint64_t)obj_table[i] + meta_data_size;
+#endif
+	}
+
+	/* feed them to bman */
+	do {
+		ret = qbman_swp_release(swp, &releasedesc, bufs, n);
+	} while (ret == -EBUSY);
+
+aligned:
+	/* if there are more buffers to free */
+	while (n < count) {
+		/* convert mbufs to buffers */
+		for (i = 0; i < DPAA2_MBUF_MAX_ACQ_REL; i++) {
+#ifdef RTE_LIBRTE_DPAA2_USE_PHYS_IOVA
+			bufs[i] = (uint64_t)
+				  rte_mempool_virt2phy(pool, obj_table[n + i])
+				  + meta_data_size;
+#else
+			bufs[i] = (uint64_t)obj_table[n + i] + meta_data_size;
+#endif
+		}
+
+		do {
+			ret = qbman_swp_release(swp, &releasedesc, bufs,
+						DPAA2_MBUF_MAX_ACQ_REL);
+		} while (ret == -EBUSY);
+		n += DPAA2_MBUF_MAX_ACQ_REL;
+	}
+}
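A word on the batching above (illustrative, not part of the patch):
QBMAN moves at most DPAA2_MBUF_MAX_ACQ_REL buffers per command (7, per
the comment in the acquire path below), so the release path frees the
count % 7 remainder first and every subsequent release is a full batch.
A standalone sketch of that loop structure:

#include <stdio.h>

#define DPAA2_MBUF_MAX_ACQ_REL 7	/* per-command QBMAN limit */

int
main(void)
{
	int count = 20;
	int n = count % DPAA2_MBUF_MAX_ACQ_REL;	/* remainder first: 6 */

	if (n)
		printf("release %d buffers\n", n);
	while (n < count) {	/* then full batches: 7, 7 */
		printf("release %d buffers\n", DPAA2_MBUF_MAX_ACQ_REL);
		n += DPAA2_MBUF_MAX_ACQ_REL;
	}
	return 0;
}
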
+
+int
+rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
+			  void **obj_table, unsigned int count)
+{
+#ifdef RTE_LIBRTE_DPAA2_DEBUG_DRIVER
+	static int alloc;
+#endif
+	struct qbman_swp *swp;
+	uint16_t bpid;
+	uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
+	int i, ret;
+	unsigned int n = 0;
+	struct dpaa2_bp_info *bp_info;
+
+	bp_info = mempool_to_bpinfo(pool);
+
+	if (!(bp_info->bp_list)) {
+		RTE_LOG(ERR, PMD, "DPAA2 buffer pool not configured\n");
+		return -ENOENT;
+	}
+
+	bpid = bp_info->bpid;
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret != 0) {
+			RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
+			return ret;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	while (n < count) {
+		/* Acquire is all-or-nothing, so we drain in 7s,
+		 * then the remainder.
+		 */
+		if ((count - n) > DPAA2_MBUF_MAX_ACQ_REL) {
+			ret = qbman_swp_acquire(swp, bpid, bufs,
+						DPAA2_MBUF_MAX_ACQ_REL);
+		} else {
+			ret = qbman_swp_acquire(swp, bpid, bufs,
+						count - n);
+		}
+		/* In case of less than requested number of buffers available
+		 * in pool, qbman_swp_acquire returns 0
+		 */
+		if (ret <= 0) {
+			PMD_TX_LOG(ERR, "Buffer acquire failed with"
+				   " err code: %d", ret);
+			/* The API expects the exact number of requested
+			 * buffers, so release everything acquired so far.
+			 */
+			rte_dpaa2_mbuf_release(pool, obj_table, bpid,
+					       bp_info->meta_data_size, n);
+			return ret;
+		}
+		/* assign mbufs from the acquired objects */
+		for (i = 0; (i < ret) && bufs[i]; i++) {
+			DPAA2_MODIFY_IOVA_TO_VADDR(bufs[i], uint64_t);
+			obj_table[n] = (struct rte_mbuf *)
+				       (bufs[i] - bp_info->meta_data_size);
+			PMD_TX_LOG(DEBUG, "Acquired %p address %p from BMAN",
+				   (void *)bufs[i], (void *)obj_table[n]);
+			n++;
+		}
+	}
+
+#ifdef RTE_LIBRTE_DPAA2_DEBUG_DRIVER
+	alloc += n;
+	PMD_TX_LOG(DEBUG, "Total = %d , req = %d done = %d",
+		   alloc, count, n);
+#endif
+	return 0;
+}
+
+static int
+rte_hw_mbuf_free_bulk(struct rte_mempool *pool,
+		      void * const *obj_table, unsigned int n)
+{
+	struct dpaa2_bp_info *bp_info;
+
+	bp_info = mempool_to_bpinfo(pool);
+	if (!(bp_info->bp_list)) {
+		RTE_LOG(ERR, PMD, "DPAA2 buffer pool not configured");
+		return -ENOENT;
+	}
+	rte_dpaa2_mbuf_release(pool, obj_table, bp_info->bpid,
+			       bp_info->meta_data_size, n);
+
+	return 0;
+}
+
+static unsigned int
+rte_hw_mbuf_get_count(const struct rte_mempool *mp)
+{
+	int ret;
+	unsigned int num_of_bufs = 0;
+	struct dpaa2_bp_info *bp_info;
+	struct dpaa2_dpbp_dev *dpbp_node;
+
+	if (!mp || !mp->pool_data) {
+		RTE_LOG(ERR, PMD, "Invalid mempool provided");
+		return 0;
+	}
+
+	bp_info = (struct dpaa2_bp_info *)mp->pool_data;
+	dpbp_node = bp_info->bp_list->buf_pool.dpbp_node;
+
+	ret = dpbp_get_num_free_bufs(&dpbp_node->dpbp, CMD_PRI_LOW,
+				     dpbp_node->token, &num_of_bufs);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Unable to obtain free buf count (err=%d)",
+			ret);
+		return 0;
+	}
+
+	RTE_LOG(DEBUG, PMD, "Free bufs = %u", num_of_bufs);
+
+	return num_of_bufs;
+}
+
+struct rte_mempool_ops dpaa2_mpool_ops = {
+	.name = "dpaa2",
+	.alloc = rte_hw_mbuf_create_pool,
+	.free = rte_hw_mbuf_free_pool,
+	.enqueue = rte_hw_mbuf_free_bulk,
+	.dequeue = rte_dpaa2_mbuf_alloc_bulk,
+	.get_count = rte_hw_mbuf_get_count,
+};
+
+MEMPOOL_REGISTER_OPS(dpaa2_mpool_ops);
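Since the ops are registered under the name "dpaa2" above, an
application that does not want to change the build-time default can
also opt in per pool. An assumed application-side sketch (not part of
the patch), using the generic mempool API:

#include <rte_mempool.h>

/* Point an empty pool at the hardware-backed ops explicitly; this
 * must happen before the pool is populated. "dpaa2" is the .name
 * registered by MEMPOOL_REGISTER_OPS() above.
 */
static int
example_use_dpaa2_ops(struct rte_mempool *mp)
{
	return rte_mempool_set_ops_byname(mp, "dpaa2", NULL);
}
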
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
new file mode 100644
index 0000000..c4d7fc0
--- /dev/null
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
@@ -0,0 +1,91 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright (c) 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright (c) 2016 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _DPAA2_HW_DPBP_H_
+#define _DPAA2_HW_DPBP_H_
+
+#define DPAA2_MAX_BUF_POOLS	8
+
+struct buf_pool_cfg {
+	void *addr;
+	/**< The address from where DPAA2 will carve out the buffers */
+	phys_addr_t phys_addr;
+	/**< Physical address of the memory provided in addr */
+	uint32_t num;
+	/**< Number of buffers */
+	uint32_t size;
+	/**< Size including headroom for each buffer */
+	uint16_t align;
+	/**< Buffer alignment (in bytes) */
+	uint16_t bpid;
+	/**< Autogenerated buffer pool ID for internal use */
+};
+
+struct buf_pool {
+	uint32_t size;		/**< Size of each buffer in the pool */
+	uint32_t num_bufs;	/**< Number of buffers in the pool */
+	uint16_t bpid;		/**< Pool ID, from pool configuration */
+	uint8_t *h_bpool_mem;	/**< Internal context data */
+	struct dpaa2_dpbp_dev *dpbp_node; /**< Hardware context */
+};
+
+/*!
+ * Buffer pool list configuration structure. The user must supply a
+ * valid number of buffer pools via 'num_buf_pools'.
+ */
+struct dpaa2_bp_list_cfg {
+	struct buf_pool_cfg buf_pool; /* Configuration of each buffer pool */
+};
+
+struct dpaa2_bp_list {
+	struct dpaa2_bp_list *next;
+	struct rte_mempool *mp;		/**< DPDK RTE EAL pool reference */
+	int32_t dpaa2_ops_index;	/**< Index into DPDK mempool ops table */
+	struct buf_pool buf_pool;
+};
+
+struct dpaa2_bp_info {
+	uint32_t meta_data_size;
+	uint32_t bpid;
+	struct dpaa2_bp_list *bp_list;
+};
+
+#define mempool_to_bpinfo(mp) ((struct dpaa2_bp_info *)(mp)->pool_data)
+#define mempool_to_bpid(mp) ((mempool_to_bpinfo(mp))->bpid)
+
+extern struct dpaa2_bp_info rte_dpaa2_bpid_info[MAX_BPID];
+
+int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
+			      void **obj_table, unsigned int count);
+
+#endif /* _DPAA2_HW_DPBP_H_ */
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
new file mode 100644
index 0000000..a8aa685
--- /dev/null
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -0,0 +1,8 @@
+DPDK_17.05 {
+	global:
+
+	rte_dpaa2_bpid_info;
+	rte_dpaa2_mbuf_alloc_bulk;
+
+	local: *;
+};
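
A note on the version map: the two exported symbols allow other DPDK
components to reach the pool without going through the generic
rte_mempool_ops indirection, presumably so the dpaa2 net PMD can refill
buffers directly. A hypothetical direct consumer, assuming the driver
headers are on the include path:

#include <rte_mempool.h>
#include "dpaa2_hw_mempool.h"	/* declares rte_dpaa2_mbuf_alloc_bulk() */

/* Hypothetical refill helper: pulls nb buffers straight from the
 * QBMAN-backed pool via the exported bulk-allocation entry point.
 */
static int
example_refill(struct rte_mempool *mp, void **table, unsigned int nb)
{
	return rte_dpaa2_mbuf_alloc_bulk(mp, table, nb);
}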