From patchwork Thu Aug 24 13:28:56 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Santosh Shukla <santosh.shukla@caviumnetworks.com>
X-Patchwork-Id: 27858
From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
To: olivier.matz@6wind.com, dev@dpdk.org
Cc: thomas@monjalon.net, jerin.jacob@caviumnetworks.com,
 hemant.agrawal@nxp.com, santosh.shukla@caviumnetworks.com
Date: Thu, 24 Aug 2017 18:58:56 +0530
Message-Id: <20170824132903.32057-5-santosh.shukla@caviumnetworks.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170824132903.32057-1-santosh.shukla@caviumnetworks.com>
References: <20170824132903.32057-1-santosh.shukla@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH v1 04/11] mempool/octeontx: implement pool alloc
List-Id: DPDK patches and discussions <dev.dpdk.org>

Upon a pool allocation request by the application, the OcteonTX FPA
alloc handler does the following:
- Gets a free pool from the PCI fpavf array.
- Uses mbox to communicate the following to the fpapf driver:
  * gpool-id
  * pool block_sz
  * alignment
- Programs the fpavf pool boundary.

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
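Usage sketch for reviewers: an application binds a mempool to this
handler by name before populating it. This is illustrative only and
assumes the full series is applied (the remaining ops, enqueue,
dequeue and get_count, are still NULL in this patch and are filled in
by later patches in the set); the pool name and sizes are made up:

    #include <rte_mempool.h>

    static struct rte_mempool *
    example_octeontx_pool(void)
    {
            struct rte_mempool *mp;

            /* Create an empty mempool: 8192 objects of 2048B each. */
            mp = rte_mempool_create_empty("ex_pool", 8192, 2048,
                                          0, 0, SOCKET_ID_ANY, 0);
            if (mp == NULL)
                    return NULL;

            /* Select the ops registered by MEMPOOL_REGISTER_OPS in this
             * patch; must be done before any objects are added. */
            if (rte_mempool_set_ops_byname(mp, "octeontx_fpavf", NULL)) {
                    rte_mempool_free(mp);
                    return NULL;
            }

            /* The first populate call invokes the .alloc callback
             * (octeontx_fpavf_alloc), which creates the FPA pool. */
            if (rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }

            return mp;
    }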
 drivers/mempool/octeontx/Makefile               |   1 +
 drivers/mempool/octeontx/octeontx_fpavf.c       | 515 ++++++++++++++++++++++++
 drivers/mempool/octeontx/octeontx_fpavf.h       |  10 +
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  88 ++++
 4 files changed, 614 insertions(+)
 create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx.c

diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
index 55ca1d944..9c3389608 100644
--- a/drivers/mempool/octeontx/Makefile
+++ b/drivers/mempool/octeontx/Makefile
@@ -51,6 +51,7 @@ LIBABIVER := 1
 # all source are stored in SRCS-y
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += rte_mempool_octeontx.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 0b4a9357f..85ddf0a03 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -46,9 +46,75 @@
 #include
 #include
 #include
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
 
 #include "octeontx_fpavf.h"
 
+/* FPA Mbox Message */
+#define IDENTIFY		0x0
+
+#define FPA_CONFIGSET		0x1
+#define FPA_CONFIGGET		0x2
+#define FPA_START_COUNT		0x3
+#define FPA_STOP_COUNT		0x4
+#define FPA_ATTACHAURA		0x5
+#define FPA_DETACHAURA		0x6
+#define FPA_SETAURALVL		0x7
+#define FPA_GETAURALVL		0x8
+
+#define FPA_COPROC		0x1
+
+/* fpa mbox struct */
+struct octeontx_mbox_fpa_cfg {
+	int		aid;
+	uint64_t	pool_cfg;
+	uint64_t	pool_stack_base;
+	uint64_t	pool_stack_end;
+	uint64_t	aura_cfg;
+};
+
+struct __attribute__((__packed__)) gen_req {
+	uint32_t	value;
+};
+
+struct __attribute__((__packed__)) idn_req {
+	uint8_t	domain_id;
+};
+
+struct __attribute__((__packed__)) gen_resp {
+	uint16_t	domain_id;
+	uint16_t	vfid;
+};
+
+struct __attribute__((__packed__)) dcfg_resp {
+	uint8_t	sso_count;
+	uint8_t	ssow_count;
+	uint8_t	fpa_count;
+	uint8_t	pko_count;
+	uint8_t	tim_count;
+	uint8_t	net_port_count;
+	uint8_t	virt_port_count;
+};
+
+#define FPA_MAX_POOL	32
+#define FPA_PF_PAGE_SZ	4096
+
+#define FPA_LN_SIZE	128
+#define FPA_ROUND_UP(x, size) \
+	((((unsigned long)(x)) + size-1) & (~(size-1)))
+#define FPA_OBJSZ_2_CACHE_LINE(sz)	(((sz) + RTE_CACHE_LINE_MASK) >> 7)
+#define FPA_CACHE_LINE_2_OBJSZ(sz)	((sz) << 7)
+
+#define POOL_ENA			(0x1 << 0)
+#define POOL_DIS			(0x0 << 0)
+#define POOL_SET_NAT_ALIGN		(0x1 << 1)
+#define POOL_DIS_NAT_ALIGN		(0x0 << 1)
+#define POOL_STYPE(x)			(((x) & 0x1) << 2)
+#define POOL_LTYPE(x)			(((x) & 0x3) << 3)
+#define POOL_BUF_OFFSET(x)		(((x) & 0x7fffULL) << 16)
+#define POOL_BUF_SIZE(x)		(((x) & 0x7ffULL) << 32)
+
 struct fpavf_res {
 	void		*pool_stack_base;
 	void		*bar0;
@@ -67,6 +133,455 @@ struct octeontx_fpadev {
 
 static struct octeontx_fpadev fpadev;
 
+/* lock is taken by caller */
+static int
+octeontx_fpa_gpool_alloc(unsigned int object_size)
+{
+	struct fpavf_res *res = NULL;
+	uint16_t gpool;
+	unsigned int sz128;
+
+	sz128 = FPA_OBJSZ_2_CACHE_LINE(object_size);
+
+	for (gpool = 0; gpool < FPA_VF_MAX; gpool++) {
+
+		/* Skip VF that is not mapped or is in use */
+		if ((fpadev.pool[gpool].bar0 == NULL) ||
+		    (fpadev.pool[gpool].is_inuse == true))
+			continue;
+
+		res = &fpadev.pool[gpool];
+
+		RTE_ASSERT(res->domain_id != (uint16_t)~0);
+		RTE_ASSERT(res->vf_id != (uint16_t)~0);
+		RTE_ASSERT(res->stack_ln_ptr != 0);
+
+		if (res->sz128 == 0) {
+			res->sz128 = sz128;
+
+			fpavf_log_dbg("gpool %d blk_sz %d\n", gpool, sz128);
+			return gpool;
+		}
+	}
+
+	return -ENOSPC;
+}
+
+/* lock is taken by caller */
+static __rte_always_inline uintptr_t
+octeontx_fpa_gpool2handle(uint16_t gpool)
+{
+	struct fpavf_res *res = NULL;
+
+	RTE_ASSERT(gpool < FPA_VF_MAX);
+
+	res = &fpadev.pool[gpool];
+	if (unlikely(res == NULL))
+		return 0;
+
+	return (uintptr_t)res->bar0;
+}
+
+/* lock is taken by caller */
+static __rte_always_inline int
+octeontx_fpa_handle2gpool(uintptr_t handle)
+{
+	uint16_t gpool;
+
+	for (gpool = 0; gpool < FPA_VF_MAX; gpool++) {
+		if ((uintptr_t)fpadev.pool[gpool].bar0 != handle)
+			continue;
+
+		return gpool;
+	}
+	/* No entry */
+	return -ENOSPC;
+}
+
+static __rte_always_inline bool
+octeontx_fpa_handle_valid(uintptr_t handle)
+{
+	struct fpavf_res *res = NULL;
+	uint8_t gpool;
+	bool ret = false;
+
+	if (unlikely(!handle))
+		return ret;
+
+	for (gpool = 0; gpool < FPA_VF_MAX; gpool++) {
+		if ((uintptr_t)fpadev.pool[gpool].bar0 != handle)
+			continue;
+
+		res = &fpadev.pool[gpool];
+
+		if (res->sz128 == 0 || res->domain_id == (uint16_t)~0 ||
+		    res->stack_ln_ptr == 0)
+			ret = false;
+		else
+			ret = true;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
+			  signed short buf_offset, unsigned int max_buf_count)
+{
+	void *memptr = NULL;
+	phys_addr_t phys_addr;
+	unsigned int memsz;
+	struct fpavf_res *fpa = NULL;
+	uint64_t reg;
+	struct octeontx_mbox_hdr hdr;
+	struct dcfg_resp resp;
+	struct octeontx_mbox_fpa_cfg cfg;
+	int ret = -1;
+
+	fpa = &fpadev.pool[gpool];
+	memsz = FPA_ROUND_UP(max_buf_count / fpa->stack_ln_ptr, FPA_LN_SIZE) *
+			FPA_LN_SIZE;
+
+	/* Round-up to page size */
+	memsz = (memsz + FPA_PF_PAGE_SZ - 1) & ~(uintptr_t)(FPA_PF_PAGE_SZ-1);
+	memptr = rte_malloc(NULL, memsz, RTE_CACHE_LINE_SIZE);
+	if (memptr == NULL) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/* Configure stack */
+	fpa->pool_stack_base = memptr;
+	phys_addr = rte_malloc_virt2phy(memptr);
+
+	buf_size /= FPA_LN_SIZE;
+
+	/* POOL setup */
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_CONFIGSET;
+	hdr.vfid = fpa->vf_id;
+	hdr.res_code = 0;
+
+	buf_offset /= FPA_LN_SIZE;
+	reg = POOL_BUF_SIZE(buf_size) | POOL_BUF_OFFSET(buf_offset) |
+		POOL_LTYPE(0x2) | POOL_STYPE(0) | POOL_SET_NAT_ALIGN |
+		POOL_ENA;
+
+	cfg.aid = 0;
+	cfg.pool_cfg = reg;
+	cfg.pool_stack_base = phys_addr;
+	cfg.pool_stack_end = phys_addr + memsz;
+	cfg.aura_cfg = (1 << 9);
+
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+				       sizeof(struct octeontx_mbox_fpa_cfg),
+				       &resp, sizeof(resp));
+	if (ret < 0) {
+		ret = -EACCES;
+		goto err;
+	}
+
+	fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64 " aura_cfg %" PRIx64 "\n",
+		      fpa->vf_id, gpool, cfg.aid, (unsigned int)cfg.pool_cfg,
+		      cfg.pool_stack_base, cfg.pool_stack_end, cfg.aura_cfg);
+
+	/* Now pool is in_use */
+	fpa->is_inuse = true;
+
+err:
+	if (ret < 0)
+		rte_free(memptr);
+
+	return ret;
+}
+
+static int
+octeontx_fpapf_pool_destroy(unsigned int gpool_index)
+{
+	struct octeontx_mbox_hdr hdr;
+	struct dcfg_resp resp;
+	struct octeontx_mbox_fpa_cfg cfg;
+	struct fpavf_res *fpa = NULL;
+	int ret = -1;
+
+	fpa = &fpadev.pool[gpool_index];
+
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_CONFIGSET;
+	hdr.vfid = fpa->vf_id;
+	hdr.res_code = 0;
+
+	/* reset and free the pool */
+	cfg.aid = 0;
+	cfg.pool_cfg = 0;
+	cfg.pool_stack_base = 0;
+	cfg.pool_stack_end = 0;
+	cfg.aura_cfg = 0;
+
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+				       sizeof(struct octeontx_mbox_fpa_cfg),
+				       &resp, sizeof(resp));
+	if (ret < 0) {
+		ret = -EACCES;
+		goto err;
+	}
+
+	ret = 0;
+err:
+	/* in any case, free the pool stack memory */
+	rte_free(fpa->pool_stack_base);
+	fpa->pool_stack_base = NULL;
+	return ret;
+}
+
+static int
+octeontx_fpapf_aura_attach(unsigned int gpool_index)
+{
+	struct octeontx_mbox_hdr hdr;
+	struct dcfg_resp resp;
+	struct octeontx_mbox_fpa_cfg cfg;
+	int ret = 0;
+
+	if (gpool_index >= FPA_MAX_POOL) {
+		ret = -EINVAL;
+		goto err;
+	}
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_ATTACHAURA;
+	hdr.vfid = gpool_index;
+	hdr.res_code = 0;
+	memset(&cfg, 0x0, sizeof(struct octeontx_mbox_fpa_cfg));
+	cfg.aid = gpool_index; /* gpool is gaura */
+
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+				       sizeof(struct octeontx_mbox_fpa_cfg),
+				       &resp, sizeof(resp));
+	if (ret < 0) {
+		fpavf_log_err("Could not attach fpa ");
+		fpavf_log_err("aura %d to pool %d. Err=%d. FuncErr=%d\n",
+			      gpool_index, gpool_index, ret, hdr.res_code);
+		ret = -EACCES;
+		goto err;
+	}
+err:
+	return ret;
+}
+
+static int
+octeontx_fpapf_aura_detach(unsigned int gpool_index)
+{
+	struct octeontx_mbox_fpa_cfg cfg = {0};
+	struct octeontx_mbox_hdr hdr = {0};
+	int ret = 0;
+
+	if (gpool_index >= FPA_MAX_POOL) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	cfg.aid = gpool_index; /* gpool is gaura */
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_DETACHAURA;
+	hdr.vfid = gpool_index;
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg, sizeof(cfg), NULL, 0);
+	if (ret < 0) {
+		fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d\n",
+			      gpool_index, ret, hdr.res_code);
+		ret = -EINVAL;
+	}
+
+err:
+	return ret;
+}
+
+static int
+octeontx_fpavf_pool_setup(uintptr_t handle, unsigned long memsz,
+			  void *memva, uint16_t gpool)
+{
+	uint64_t va_end;
+
+	if (unlikely(!handle))
+		return -ENODEV;
+
+	va_end = (uintptr_t)memva + memsz;
+	va_end &= ~RTE_CACHE_LINE_MASK;
+
+	/* VHPOOL setup */
+	fpavf_write64((uintptr_t)memva,
+		      (void *)((uintptr_t)handle +
+		      FPA_VF_VHPOOL_START_ADDR(gpool)));
+	fpavf_write64(va_end,
+		      (void *)((uintptr_t)handle +
+		      FPA_VF_VHPOOL_END_ADDR(gpool)));
+	return 0;
+}
+
+static int
+octeontx_fpapf_start_count(uint16_t gpool_index)
+{
+	int ret = 0;
+	struct octeontx_mbox_hdr hdr = {0};
+
+	if (gpool_index >= FPA_MAX_POOL) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_START_COUNT;
+	hdr.vfid = gpool_index;
+	ret = octeontx_ssovf_mbox_send(&hdr, NULL, 0, NULL, 0);
+	if (ret < 0) {
+		fpavf_log_err("Could not start buffer counting for ");
FuncErr=%d\n", + gpool_index, ret, hdr.res_code); + ret = -EINVAL; + goto err; + } + +err: + return ret; +} + +static __rte_always_inline int +octeontx_fpavf_free(unsigned int gpool) +{ + int ret = 0; + + if (gpool >= FPA_MAX_POOL) { + ret = -EINVAL; + goto err; + } + + /* Pool is free */ + fpadev.pool[gpool].is_inuse = false; + +err: + return ret; +} + +static __rte_always_inline int +octeontx_gpool_free(uint16_t gpool) +{ + if (fpadev.pool[gpool].sz128 != 0) { + fpadev.pool[gpool].sz128 = 0; + return 0; + } + return -EINVAL; +} + +/* + * Return buffer size for a given pool + */ +int +octeontx_fpa_bufpool_block_size(uintptr_t handle) +{ + struct fpavf_res *res = NULL; + int gpool; + + if (unlikely(!octeontx_fpa_handle_valid(handle))) + return -EINVAL; + + gpool = octeontx_fpa_handle2gpool(handle); + res = &fpadev.pool[gpool]; + return FPA_CACHE_LINE_2_OBJSZ(res->sz128); +} + +uintptr_t +octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count, + unsigned int buf_offset, char **va_start, + int node_id) +{ + unsigned int gpool; + void *memva; + unsigned long memsz; + uintptr_t gpool_handle; + int res; + + RTE_SET_USED(node_id); + FPAVF_STATIC_ASSERTION(sizeof(struct rte_mbuf) <= + OCTEONTX_FPAVF_BUF_OFFSET); + + if (unlikely(*va_start == NULL)) + goto error_end; + + object_size = RTE_CACHE_LINE_ROUNDUP(object_size); + if (object_size > FPA_MAX_OBJ_SIZE) { + errno = EINVAL; + goto error_end; + } + + rte_spinlock_lock(&fpadev.lock); + res = octeontx_fpa_gpool_alloc(object_size); + + /* Bail if failed */ + if (unlikely(res < 0)) { + errno = res; + goto error_unlock; + } + + /* get fpavf */ + gpool = res; + + /* get pool handle */ + gpool_handle = octeontx_fpa_gpool2handle(gpool); + if (!octeontx_fpa_handle_valid(gpool_handle)) { + errno = ENOSPC; + goto error_gpool_free; + } + + res = octeontx_fpapf_pool_setup(gpool, object_size, buf_offset, + object_count); + if (res < 0) { + errno = res; + goto error_gpool_free; + } + + /* populate AURA fields */ + res = octeontx_fpapf_aura_attach(gpool); + if (res < 0) { + errno = res; + goto error_pool_destroy; + } + + /* vf pool setup */ + memsz = object_size * object_count; + memva = *va_start; + res = octeontx_fpavf_pool_setup(gpool_handle, memsz, memva, gpool); + if (res < 0) { + errno = res; + goto error_gaura_detach; + } + + /* Release lock */ + rte_spinlock_unlock(&fpadev.lock); + + /* populate AURA registers */ + fpavf_write64(object_count, (void *)((uintptr_t)gpool_handle + + FPA_VF_VHAURA_CNT(gpool))); + fpavf_write64(object_count, (void *)((uintptr_t)gpool_handle + + FPA_VF_VHAURA_CNT_LIMIT(gpool))); + fpavf_write64(object_count + 1, (void *)((uintptr_t)gpool_handle + + FPA_VF_VHAURA_CNT_THRESHOLD(gpool))); + + octeontx_fpapf_start_count(gpool); + + return gpool_handle; + +error_gaura_detach: + (void) octeontx_fpapf_aura_detach(gpool); +error_pool_destroy: + octeontx_fpavf_free(gpool); + octeontx_fpapf_pool_destroy(gpool); +error_gpool_free: + octeontx_gpool_free(gpool); +error_unlock: + rte_spinlock_unlock(&fpadev.lock); +error_end: + return (uintptr_t)NULL; +} + static void octeontx_fpavf_setup(void) { diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h index c43b1a7d2..3e8a2682f 100644 --- a/drivers/mempool/octeontx/octeontx_fpavf.h +++ b/drivers/mempool/octeontx/octeontx_fpavf.h @@ -88,6 +88,10 @@ #define FPA_VF0_APERTURE_SHIFT 22 #define FPA_AURA_SET_SIZE 16 +#define FPA_MAX_OBJ_SIZE (128 * 1024) +#define OCTEONTX_FPAVF_BUF_OFFSET 128 + +#define FPAVF_STATIC_ASSERTION(s) 
+#define FPAVF_STATIC_ASSERTION(s)	_Static_assert(s, #s)
 
 /*
  * In Cavium OcteonTX SoC, all accesses to the device registers are
@@ -126,4 +130,10 @@ do {							\
 } while (0)
 #endif
 
+uintptr_t
+octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
+			    unsigned int buf_offset, char **va_start,
+			    int node);
+int
+octeontx_fpa_bufpool_block_size(uintptr_t handle);
 #endif	/* __OCTEONTX_FPAVF_H__ */
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
new file mode 100644
index 000000000..73648aa7f
--- /dev/null
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -0,0 +1,88 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) 2017 Cavium Inc. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "octeontx_fpavf.h"
+
+static int
+octeontx_fpavf_alloc(struct rte_mempool *mp)
+{
+	uintptr_t pool;
+	uint32_t memseg_count = mp->size;
+	uint32_t object_size;
+	uintptr_t va_start;
+	int rc = 0;
+
+	/* virtual hugepage mapped addr */
+	va_start = ~(uint64_t)0;
+
+	object_size = mp->elt_size + mp->header_size + mp->trailer_size;
+
+	pool = octeontx_fpa_bufpool_create(object_size, memseg_count,
+					   OCTEONTX_FPAVF_BUF_OFFSET,
+					   (char **)&va_start,
+					   mp->socket_id);
+	rc = octeontx_fpa_bufpool_block_size(pool);
+	if (rc < 0)
+		goto _end;
+
+	if ((uint32_t)rc != object_size)
+		fpavf_log_err("buffer size mismatch: %d instead of %u\n",
+			      rc, object_size);
+
+	fpavf_log_info("Pool created %p with .. ", (void *)pool);
+	fpavf_log_info("obj_sz %d, cnt %d\n", object_size, memseg_count);
+
+	/* assign pool handle to mempool */
+	mp->pool_id = (uint64_t)pool;
+
+	return 0;
+
+_end:
+	return rc;
+}
+
+static struct rte_mempool_ops octeontx_fpavf_ops = {
+	.name = "octeontx_fpavf",
+	.alloc = octeontx_fpavf_alloc,
+	.free = NULL,
+	.enqueue = NULL,
+	.dequeue = NULL,
+	.get_count = NULL,
+	.get_capabilities = NULL,
+	.update_range = NULL,
+};
+
+MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);