From patchwork Sun Oct 8 12:40:05 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Santosh Shukla
X-Patchwork-Id: 29887
X-Patchwork-Delegate: thomas@monjalon.net
From: Santosh Shukla
To: olivier.matz@6wind.com, dev@dpdk.org
Cc: thomas@monjalon.net, jerin.jacob@caviumnetworks.com,
 hemant.agrawal@nxp.com, Santosh Shukla
Date: Sun, 8 Oct 2017 18:10:05 +0530
Message-Id: <20171008124011.1577-5-santosh.shukla@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171008124011.1577-1-santosh.shukla@caviumnetworks.com>
References: <20170831063719.19273-1-santosh.shukla@caviumnetworks.com>
 <20171008124011.1577-1-santosh.shukla@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH v3 04/10] mempool/octeontx: add support for alloc

Upon a pool allocation request from the application, the OcteonTX FPA
alloc handler does the following:
- Gets a free pool from the PCI fpavf array.
- Uses the mbox to communicate the following to the fpapf driver:
  * gpool-id
  * pool block_sz
  * alignment
- Programs the fpavf pool boundary.
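(For reviewer context, a minimal caller-side sketch, not part of the patch:
the wrapper name, sizes and the -1 node id are illustrative only. It shows
how the allocator added below is meant to be driven.)

	#include <errno.h>
	#include "octeontx_fpavf.h"

	/* Sketch only: octeontx_fpa_bufpool_create() reserves a free gpool,
	 * mbox-configures the fpapf driver (block size, buffer offset,
	 * natural alignment) and programs the fpavf pool start/end
	 * registers. It returns 0 and sets errno on failure.
	 */
	static uintptr_t
	example_fpa_pool(char *va, unsigned int obj_sz, unsigned int obj_cnt)
	{
		uintptr_t handle;

		handle = octeontx_fpa_bufpool_create(obj_sz, obj_cnt,
						OCTEONTX_FPAVF_BUF_OFFSET,
						&va, -1 /* node id, unused */);
		if (!handle)
			fpavf_log_err("bufpool create failed: %d\n", errno);

		return handle;
	}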
Signed-off-by: Santosh Shukla
Signed-off-by: Jerin Jacob
---
 drivers/mempool/octeontx/Makefile               |   1 +
 drivers/mempool/octeontx/octeontx_fpavf.c       | 514 ++++++++++++++++++++++++
 drivers/mempool/octeontx/octeontx_fpavf.h       |  17 +
 drivers/mempool/octeontx/rte_mempool_octeontx.c |  88 ++++
 4 files changed, 620 insertions(+)
 create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx.c

diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
index 55ca1d944..9c3389608 100644
--- a/drivers/mempool/octeontx/Makefile
+++ b/drivers/mempool/octeontx/Makefile
@@ -51,6 +51,7 @@ LIBABIVER := 1
 # all source are stored in SRCS-y
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += rte_mempool_octeontx.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 0b4a9357f..c0c9d8325 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -46,9 +46,75 @@
 #include
 #include
 #include
+#include
+#include
 
 #include "octeontx_fpavf.h"
 
+/* FPA Mbox Message */
+#define IDENTIFY		0x0
+
+#define FPA_CONFIGSET		0x1
+#define FPA_CONFIGGET		0x2
+#define FPA_START_COUNT		0x3
+#define FPA_STOP_COUNT		0x4
+#define FPA_ATTACHAURA		0x5
+#define FPA_DETACHAURA		0x6
+#define FPA_SETAURALVL		0x7
+#define FPA_GETAURALVL		0x8
+
+#define FPA_COPROC		0x1
+
+/* fpa mbox struct */
+struct octeontx_mbox_fpa_cfg {
+	int		aid;
+	uint64_t	pool_cfg;
+	uint64_t	pool_stack_base;
+	uint64_t	pool_stack_end;
+	uint64_t	aura_cfg;
+};
+
+struct __attribute__((__packed__)) gen_req {
+	uint32_t	value;
+};
+
+struct __attribute__((__packed__)) idn_req {
+	uint8_t	domain_id;
+};
+
+struct __attribute__((__packed__)) gen_resp {
+	uint16_t	domain_id;
+	uint16_t	vfid;
+};
+
+struct __attribute__((__packed__)) dcfg_resp {
+	uint8_t	sso_count;
+	uint8_t	ssow_count;
+	uint8_t	fpa_count;
+	uint8_t	pko_count;
+	uint8_t	tim_count;
+	uint8_t	net_port_count;
+	uint8_t	virt_port_count;
+};
+
+#define FPA_MAX_POOL	32
+#define FPA_PF_PAGE_SZ	4096
+
+#define FPA_LN_SIZE	128
+#define FPA_ROUND_UP(x, size) \
+	((((unsigned long)(x)) + size-1) & (~(size-1)))
+#define FPA_OBJSZ_2_CACHE_LINE(sz)	(((sz) + RTE_CACHE_LINE_MASK) >> 7)
+#define FPA_CACHE_LINE_2_OBJSZ(sz)	((sz) << 7)
+
+#define POOL_ENA			(0x1 << 0)
+#define POOL_DIS			(0x0 << 0)
+#define POOL_SET_NAT_ALIGN		(0x1 << 1)
+#define POOL_DIS_NAT_ALIGN		(0x0 << 1)
+#define POOL_STYPE(x)			(((x) & 0x1) << 2)
+#define POOL_LTYPE(x)			(((x) & 0x3) << 3)
+#define POOL_BUF_OFFSET(x)		(((x) & 0x7fffULL) << 16)
+#define POOL_BUF_SIZE(x)		(((x) & 0x7ffULL) << 32)
+
 struct fpavf_res {
 	void		*pool_stack_base;
 	void		*bar0;
@@ -67,6 +133,454 @@ struct octeontx_fpadev {
 
 static struct octeontx_fpadev fpadev;
 
+/* lock is taken by caller */
+static int
+octeontx_fpa_gpool_alloc(unsigned int object_size)
+{
+	struct fpavf_res *res = NULL;
+	uint16_t gpool;
+	unsigned int sz128;
+
+	sz128 = FPA_OBJSZ_2_CACHE_LINE(object_size);
+
+	for (gpool = 0; gpool < FPA_VF_MAX; gpool++) {
+
+		/* Skip VF that is not mapped Or _inuse */
+		if ((fpadev.pool[gpool].bar0 == NULL) ||
+		    (fpadev.pool[gpool].is_inuse == true))
+			continue;
+
+		res = &fpadev.pool[gpool];
+
+		RTE_ASSERT(res->domain_id != (uint16_t)~0);
+		RTE_ASSERT(res->vf_id != (uint16_t)~0);
+		RTE_ASSERT(res->stack_ln_ptr != 0);
+
+		if (res->sz128 == 0) {
+			res->sz128 = sz128;
+
+			fpavf_log_dbg("gpool %d blk_sz %d\n", gpool, sz128);
+			return gpool;
+		}
+	}
+
+	return -ENOSPC;
+}
+
+/* lock is taken by caller */
+static __rte_always_inline uintptr_t
+octeontx_fpa_gpool2handle(uint16_t gpool)
+{
+	struct fpavf_res *res = NULL;
+
+	RTE_ASSERT(gpool < FPA_VF_MAX);
+
+	res = &fpadev.pool[gpool];
+	if (unlikely(res == NULL))
+		return 0;
+
+	return (uintptr_t)res->bar0 | gpool;
+}
+
+static __rte_always_inline bool
+octeontx_fpa_handle_valid(uintptr_t handle)
+{
+	struct fpavf_res *res = NULL;
+	uint8_t gpool;
+	int i;
+	bool ret = false;
+
+	if (unlikely(!handle))
+		return ret;
+
+	/* get the gpool */
+	gpool = octeontx_fpa_bufpool_gpool(handle);
+
+	/* get the bar address */
+	handle &= ~(uint64_t)FPA_GPOOL_MASK;
+	for (i = 0; i < FPA_VF_MAX; i++) {
+		if ((uintptr_t)fpadev.pool[i].bar0 != handle)
+			continue;
+
+		/* validate gpool */
+		if (gpool != i)
+			return false;
+
+		res = &fpadev.pool[i];
+
+		if (res->sz128 == 0 || res->domain_id == (uint16_t)~0 ||
+		    res->stack_ln_ptr == 0)
+			ret = false;
+		else
+			ret = true;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
+			  signed short buf_offset, unsigned int max_buf_count)
+{
+	void *memptr = NULL;
+	phys_addr_t phys_addr;
+	unsigned int memsz;
+	struct fpavf_res *fpa = NULL;
+	uint64_t reg;
+	struct octeontx_mbox_hdr hdr;
+	struct dcfg_resp resp;
+	struct octeontx_mbox_fpa_cfg cfg;
+	int ret = -1;
+
+	fpa = &fpadev.pool[gpool];
+	memsz = FPA_ROUND_UP(max_buf_count / fpa->stack_ln_ptr, FPA_LN_SIZE) *
+			FPA_LN_SIZE;
+
+	/* Round-up to page size */
+	memsz = (memsz + FPA_PF_PAGE_SZ - 1) & ~(uintptr_t)(FPA_PF_PAGE_SZ-1);
+	memptr = rte_malloc(NULL, memsz, RTE_CACHE_LINE_SIZE);
+	if (memptr == NULL) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	/* Configure stack */
+	fpa->pool_stack_base = memptr;
+	phys_addr = rte_malloc_virt2phy(memptr);
+
+	buf_size /= FPA_LN_SIZE;
+
+	/* POOL setup */
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_CONFIGSET;
+	hdr.vfid = fpa->vf_id;
+	hdr.res_code = 0;
+
+	buf_offset /= FPA_LN_SIZE;
+	reg = POOL_BUF_SIZE(buf_size) | POOL_BUF_OFFSET(buf_offset) |
+		POOL_LTYPE(0x2) | POOL_STYPE(0) | POOL_SET_NAT_ALIGN |
+		POOL_ENA;
+
+	cfg.aid = 0;
+	cfg.pool_cfg = reg;
+	cfg.pool_stack_base = phys_addr;
+	cfg.pool_stack_end = phys_addr + memsz;
+	cfg.aura_cfg = (1 << 9);
+
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+					sizeof(struct octeontx_mbox_fpa_cfg),
+					&resp, sizeof(resp));
+	if (ret < 0) {
+		ret = -EACCES;
+		goto err;
+	}
+
+	fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64 "\n",
+		      fpa->vf_id, gpool, cfg.aid, (unsigned int)cfg.pool_cfg,
+		      cfg.pool_stack_base, cfg.pool_stack_end, cfg.aura_cfg);
+
+	/* Now pool is in_use */
+	fpa->is_inuse = true;
+
+err:
+	if (ret < 0)
+		rte_free(memptr);
+
+	return ret;
+}
+
+static int
+octeontx_fpapf_pool_destroy(unsigned int gpool_index)
+{
+	struct octeontx_mbox_hdr hdr;
+	struct dcfg_resp resp;
+	struct octeontx_mbox_fpa_cfg cfg;
+	struct fpavf_res *fpa = NULL;
+	int ret = -1;
+
+	fpa = &fpadev.pool[gpool_index];
+
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_CONFIGSET;
+	hdr.vfid = fpa->vf_id;
+	hdr.res_code = 0;
+
+	/* reset and free the pool */
+	cfg.aid = 0;
+	cfg.pool_cfg = 0;
+	cfg.pool_stack_base = 0;
+	cfg.pool_stack_end = 0;
+	cfg.aura_cfg = 0;
+
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+					sizeof(struct octeontx_mbox_fpa_cfg),
+					&resp, sizeof(resp));
+	if (ret < 0) {
+		ret = -EACCES;
+		goto err;
+	}
+
+	ret = 0;
+err:
+	/* anycase free pool stack memory */
+	rte_free(fpa->pool_stack_base);
+	fpa->pool_stack_base = NULL;
+	return ret;
+}
+
+static int
+octeontx_fpapf_aura_attach(unsigned int gpool_index)
+{
+	struct octeontx_mbox_hdr hdr;
+	struct dcfg_resp resp;
+	struct octeontx_mbox_fpa_cfg cfg;
+	int ret = 0;
+
+	if (gpool_index >= FPA_MAX_POOL) {
+		ret = -EINVAL;
+		goto err;
+	}
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_ATTACHAURA;
+	hdr.vfid = gpool_index;
+	hdr.res_code = 0;
+	memset(&cfg, 0x0, sizeof(struct octeontx_mbox_fpa_cfg));
+	cfg.aid = gpool_index; /* gpool is guara */
+
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+					sizeof(struct octeontx_mbox_fpa_cfg),
+					&resp, sizeof(resp));
+	if (ret < 0) {
+		fpavf_log_err("Could not attach fpa ");
+		fpavf_log_err("aura %d to pool %d. Err=%d. FuncErr=%d\n",
+			      gpool_index, gpool_index, ret, hdr.res_code);
+		ret = -EACCES;
+		goto err;
+	}
+err:
+	return ret;
+}
+
+static int
+octeontx_fpapf_aura_detach(unsigned int gpool_index)
+{
+	struct octeontx_mbox_fpa_cfg cfg = {0};
+	struct octeontx_mbox_hdr hdr = {0};
+	int ret = 0;
+
+	if (gpool_index >= FPA_MAX_POOL) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	cfg.aid = gpool_index; /* gpool is gaura */
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_DETACHAURA;
+	hdr.vfid = gpool_index;
+	ret = octeontx_ssovf_mbox_send(&hdr, &cfg, sizeof(cfg), NULL, 0);
+	if (ret < 0) {
+		fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d\n",
+			      gpool_index, ret, hdr.res_code);
+		ret = -EINVAL;
+	}
+
+err:
+	return ret;
+}
+
+static int
+octeontx_fpavf_pool_setup(uintptr_t handle, unsigned long memsz,
+			  void *memva, uint16_t gpool)
+{
+	uint64_t va_end;
+
+	if (unlikely(!handle))
+		return -ENODEV;
+
+	va_end = (uintptr_t)memva + memsz;
+	va_end &= ~RTE_CACHE_LINE_MASK;
+
+	/* VHPOOL setup */
+	fpavf_write64((uintptr_t)memva,
+		      (void *)((uintptr_t)handle +
+		      FPA_VF_VHPOOL_START_ADDR(gpool)));
+	fpavf_write64(va_end,
+		      (void *)((uintptr_t)handle +
+		      FPA_VF_VHPOOL_END_ADDR(gpool)));
+	return 0;
+}
+
+static int
+octeontx_fpapf_start_count(uint16_t gpool_index)
+{
+	int ret = 0;
+	struct octeontx_mbox_hdr hdr = {0};
+
+	if (gpool_index >= FPA_MAX_POOL) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	hdr.coproc = FPA_COPROC;
+	hdr.msg = FPA_START_COUNT;
+	hdr.vfid = gpool_index;
+	ret = octeontx_ssovf_mbox_send(&hdr, NULL, 0, NULL, 0);
+	if (ret < 0) {
+		fpavf_log_err("Could not start buffer counting for ");
+		fpavf_log_err("FPA pool %d. Err=%d. FuncErr=%d\n",
+			      gpool_index, ret, hdr.res_code);
+		ret = -EINVAL;
+		goto err;
+	}
+
+err:
+	return ret;
+}
+
+static __rte_always_inline int
+octeontx_fpavf_free(unsigned int gpool)
+{
+	int ret = 0;
+
+	if (gpool >= FPA_MAX_POOL) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	/* Pool is free */
+	fpadev.pool[gpool].is_inuse = false;
+
+err:
+	return ret;
+}
+
+static __rte_always_inline int
+octeontx_gpool_free(uint16_t gpool)
+{
+	if (fpadev.pool[gpool].sz128 != 0) {
+		fpadev.pool[gpool].sz128 = 0;
+		return 0;
+	}
+	return -EINVAL;
+}
+
+/*
+ * Return buffer size for a given pool
+ */
+int
+octeontx_fpa_bufpool_block_size(uintptr_t handle)
+{
+	struct fpavf_res *res = NULL;
+	uint8_t gpool;
+
+	if (unlikely(!octeontx_fpa_handle_valid(handle)))
+		return -EINVAL;
+
+	/* get the gpool */
+	gpool = octeontx_fpa_bufpool_gpool(handle);
+	res = &fpadev.pool[gpool];
+	return FPA_CACHE_LINE_2_OBJSZ(res->sz128);
+}
+
+uintptr_t
+octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
+				unsigned int buf_offset, char **va_start,
+				int node_id)
+{
+	unsigned int gpool;
+	void *memva;
+	unsigned long memsz;
+	uintptr_t gpool_handle;
+	uintptr_t pool_bar;
+	int res;
+
+	RTE_SET_USED(node_id);
+	FPAVF_STATIC_ASSERTION(sizeof(struct rte_mbuf) <=
+				OCTEONTX_FPAVF_BUF_OFFSET);
+
+	if (unlikely(*va_start == NULL))
+		goto error_end;
+
+	object_size = RTE_CACHE_LINE_ROUNDUP(object_size);
+	if (object_size > FPA_MAX_OBJ_SIZE) {
+		errno = EINVAL;
+		goto error_end;
+	}
+
+	rte_spinlock_lock(&fpadev.lock);
+	res = octeontx_fpa_gpool_alloc(object_size);
+
+	/* Bail if failed */
+	if (unlikely(res < 0)) {
+		errno = res;
+		goto error_unlock;
+	}
+
+	/* get fpavf */
+	gpool = res;
+
+	/* get pool handle */
+	gpool_handle = octeontx_fpa_gpool2handle(gpool);
+	if (!octeontx_fpa_handle_valid(gpool_handle)) {
+		errno = ENOSPC;
+		goto error_gpool_free;
+	}
+
+	/* Get pool bar address from handle */
+	pool_bar = gpool_handle & ~(uint64_t)FPA_GPOOL_MASK;
+
+	res = octeontx_fpapf_pool_setup(gpool, object_size, buf_offset,
+					object_count);
+	if (res < 0) {
+		errno = res;
+		goto error_gpool_free;
+	}
+
+	/* populate AURA fields */
+	res = octeontx_fpapf_aura_attach(gpool);
+	if (res < 0) {
+		errno = res;
+		goto error_pool_destroy;
+	}
+
+	/* vf pool setup */
+	memsz = object_size * object_count;
+	memva = *va_start;
+	res = octeontx_fpavf_pool_setup(pool_bar, memsz, memva, gpool);
+	if (res < 0) {
+		errno = res;
+		goto error_gaura_detach;
+	}
+
+	/* Release lock */
+	rte_spinlock_unlock(&fpadev.lock);
+
+	/* populate AURA registers */
+	fpavf_write64(object_count, (void *)((uintptr_t)pool_bar +
+			FPA_VF_VHAURA_CNT(gpool)));
+	fpavf_write64(object_count, (void *)((uintptr_t)pool_bar +
+			FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+	fpavf_write64(object_count + 1, (void *)((uintptr_t)pool_bar +
+			FPA_VF_VHAURA_CNT_THRESHOLD(gpool)));
+
+	octeontx_fpapf_start_count(gpool);
+
+	return gpool_handle;
+
+error_gaura_detach:
+	(void) octeontx_fpapf_aura_detach(gpool);
+error_pool_destroy:
+	octeontx_fpavf_free(gpool);
+	octeontx_fpapf_pool_destroy(gpool);
+error_gpool_free:
+	octeontx_gpool_free(gpool);
+error_unlock:
+	rte_spinlock_unlock(&fpadev.lock);
+error_end:
+	return (uintptr_t)NULL;
+}
+
 static void
 octeontx_fpavf_setup(void)
 {
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index c43b1a7d2..23a458363 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -58,6 +58,7 @@
 #define PCI_DEVICE_ID_OCTEONTX_FPA_VF	0xA053
 
 #define FPA_VF_MAX			32
+#define FPA_GPOOL_MASK			(FPA_VF_MAX-1)
 
 /* FPA VF register offsets */
 #define FPA_VF_INT(x)			(0x200ULL | ((x) << 22))
@@ -88,6 +89,10 @@
 #define FPA_VF0_APERTURE_SHIFT		22
 #define FPA_AURA_SET_SIZE		16
 
+#define FPA_MAX_OBJ_SIZE		(128 * 1024)
+#define OCTEONTX_FPAVF_BUF_OFFSET	128
+
+#define FPAVF_STATIC_ASSERTION(s) _Static_assert(s, #s)
 
 /*
  * In Cavium OcteonTX SoC, all accesses to the device registers are
@@ -126,4 +131,16 @@ do {							\
 } while (0)
 #endif
 
+uintptr_t
+octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
+				unsigned int buf_offset, char **va_start,
+				int node);
+int
+octeontx_fpa_bufpool_block_size(uintptr_t handle);
+
+static __rte_always_inline uint8_t
+octeontx_fpa_bufpool_gpool(uintptr_t handle)
+{
+	return (uint8_t)handle & FPA_GPOOL_MASK;
+}
 #endif	/* __OCTEONTX_FPAVF_H__ */
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
new file mode 100644
index 000000000..d930a81f9
--- /dev/null
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -0,0 +1,88 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) 2017 Cavium Inc. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include
+#include
+#include
+#include
+
+#include "octeontx_fpavf.h"
+
+static int
+octeontx_fpavf_alloc(struct rte_mempool *mp)
+{
+	uintptr_t pool;
+	uint32_t memseg_count = mp->size;
+	uint32_t object_size;
+	uintptr_t va_start;
+	int rc = 0;
+
+	/* virtual hugepage mapped addr */
+	va_start = ~(uint64_t)0;
+
+	object_size = mp->elt_size + mp->header_size + mp->trailer_size;
+
+	pool = octeontx_fpa_bufpool_create(object_size, memseg_count,
+						OCTEONTX_FPAVF_BUF_OFFSET,
+						(char **)&va_start,
+						mp->socket_id);
+	rc = octeontx_fpa_bufpool_block_size(pool);
+	if (rc < 0)
+		goto _end;
+
+	if ((uint32_t)rc != object_size)
+		fpavf_log_err("buffer size mismatch: %d instead of %u\n",
+			      rc, object_size);
+
+	fpavf_log_info("Pool created %p with .. ", (void *)pool);
+	fpavf_log_info("obj_sz %d, cnt %d\n", object_size, memseg_count);
+
+	/* assign pool handle to mempool */
+	mp->pool_id = (uint64_t)pool;
+
+	return 0;
+
+_end:
+	return rc;
+}
+
+static struct rte_mempool_ops octeontx_fpavf_ops = {
+	.name = "octeontx_fpavf",
+	.alloc = octeontx_fpavf_alloc,
+	.free = NULL,
+	.enqueue = NULL,
+	.dequeue = NULL,
+	.get_count = NULL,
+	.get_capabilities = NULL,
+	.register_memory_area = NULL,
+};
+
+MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);