From patchwork Mon Nov 18 09:50:07 2019
X-Patchwork-Submitter: Shahaf Shuler <shahafs@mellanox.com>
X-Patchwork-Id: 63077
X-Patchwork-Delegate: thomas@monjalon.net
From: Shahaf Shuler <shahafs@mellanox.com>
To: olivier.matz@6wind.com, Thomas Monjalon, dev@dpdk.org,
 arybchenko@solarflare.com
Cc: Asaf Penso, Olga Shern, Alex Rosenbaum, eagostini@nvidia.com
Date: Mon, 18 Nov 2019 09:50:07 +0000
Message-ID: <20191118094938.192850-1-shahafs@mellanox.com>
X-Mailer: git-send-email 2.12.0
Subject: [dpdk-dev] [RFC v20.20] mbuf: introduce pktmbuf pool with pinned external buffers

Today's pktmbuf pool contains only mbufs with no external buffers, i.e. the
data buffer of each mbuf is placed right after the mbuf structure (plus the
private data, when enabled).

In some cases the application wants the buffers to be allocated from a
different device in the platform, in order to do zero-copy of the packet
directly into the device memory. Examples of such devices are GPUs and
storage devices. For such cases the native pktmbuf pool does not fit, since
each mbuf would need to point to an external buffer.

To support the above, the pktmbuf pool will be populated with mbufs pointing
to the device buffers using the mbuf external buffer feature. The PMD will
populate its receive queues with those buffers, so that every received packet
is scattered directly into the device memory. In the other direction, placing
the buffer pointers in the transmit queues of the NIC makes the DMA fetch the
device memory using peer-to-peer communication.

Such an mbuf with an external buffer must be handled with care when the mbuf
is freed: mainly, the external buffer should not be detached, so that it can
be reused for the next packet reception.

This patch introduces a new flag in the rte_pktmbuf_pool_private structure to
specify that the mempool holds mbufs with pinned external buffers. Upon
detach this flag is checked and the buffer is not detached. A new mempool
create wrapper is also introduced to help the application create and populate
such a mempool.
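As a rough usage sketch of the proposed wrapper (illustrative only, not part
of the patch; dev_alloc_buf() is a hypothetical placeholder for whatever
GPU/storage allocation routine the application uses), the application would
pre-allocate the device buffers and hand them to the pool creation call:

    /* Hypothetical sketch; needs rte_mbuf.h, rte_lcore.h, stdlib.h. */
    #define NB_MBUF 8191
    #define BUF_LEN 2048

    static void *buffers[NB_MBUF];    /* pointers into device memory */
    static uint16_t buf_len[NB_MBUF]; /* length of each device buffer */
    struct rte_mempool *mp;
    unsigned int i;

    for (i = 0; i < NB_MBUF; i++) {
            buffers[i] = dev_alloc_buf(BUF_LEN); /* hypothetical helper */
            buf_len[i] = BUF_LEN;
    }
    mp = rte_pktmbuf_ext_buffer_pool_create("ext_pool", NB_MBUF, 256,
                                            0, rte_socket_id(),
                                            buffers, buf_len);
    if (mp == NULL)
            rte_exit(EXIT_FAILURE, "cannot create pinned extbuf pool\n");

Every mbuf allocated from such a pool already points to its device buffer, so
the PMD can post it to the hardware queues without any extra attach step.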
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 lib/librte_mbuf/rte_mbuf.h | 75 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 69 insertions(+), 6 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 92d81972ab..e631dfff30 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -295,6 +295,13 @@ rte_mbuf_to_priv(struct rte_mbuf *m)
 }
 
 /**
+ * When set, the pktmbuf mempool will hold only mbufs with pinned external
+ * buffers. The external buffer is attached on mbuf creation and is not
+ * detached by the mbuf free calls.
+ * The mbuf should not contain any room for data after the mbuf structure.
+ */
+#define RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF (1 << 0)
+/**
  * Private data in case of pktmbuf pool.
  *
  * A structure that contains some pktmbuf_pool-specific data that are
@@ -303,6 +310,7 @@ rte_mbuf_to_priv(struct rte_mbuf *m)
 struct rte_pktmbuf_pool_private {
 	uint16_t mbuf_data_room_size; /**< Size of data space in each mbuf. */
 	uint16_t mbuf_priv_size;      /**< Size of private area in each mbuf. */
+	uint32_t flags;               /**< Use RTE_PKTMBUF_POOL_F_*. */
 };
 
 #ifdef RTE_LIBRTE_MBUF_DEBUG
@@ -660,6 +668,50 @@ rte_pktmbuf_pool_create(const char *name, unsigned n,
 	int socket_id);
 
 /**
+ * Create a mbuf pool with pinned external buffers.
+ *
+ * This function creates and initializes a packet mbuf pool that contains
+ * only mbufs with external buffers. It is a wrapper to rte_mempool functions.
+ *
+ * @param name
+ *   The name of the mbuf pool.
+ * @param n
+ *   The number of elements in the mbuf pool. The optimum size (in terms
+ *   of memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param cache_size
+ *   Size of the per-core object cache. See rte_mempool_create() for
+ *   details.
+ * @param priv_size
+ *   Size of the application private area between the rte_mbuf structure
+ *   and the data buffer. This value must be aligned to RTE_MBUF_PRIV_ALIGN.
+ * @param socket_id
+ *   The socket identifier where the mempool memory should be allocated. The
+ *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
+ *   reserved zone.
+ * @param buffers
+ *   Array of buffers to be attached to the mbufs in the pool.
+ *   Array size should be n.
+ * @param buffers_len
+ *   Array of buffer lengths. buffers_len[i] describes the length of the
+ *   buffer pointed to by buffers[i].
+ * @return
+ *   The pointer to the newly allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately.
+ *   Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_mempool *
+rte_pktmbuf_ext_buffer_pool_create(const char *name, unsigned n,
+	unsigned cache_size, uint16_t priv_size,
+	int socket_id, void **buffers,
+	uint16_t *buffer_len);
+
+/**
  * Create a mbuf pool with a given mempool ops name
  *
  * This function creates and initializes a packet mbuf pool. It is
@@ -1137,25 +1189,36 @@ __rte_pktmbuf_free_direct(struct rte_mbuf *m)
 static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
 {
 	struct rte_mempool *mp = m->pool;
+	struct rte_pktmbuf_pool_private *priv =
+		(struct rte_pktmbuf_pool_private *)rte_mempool_get_priv(mp);
+	uint8_t pinned_ext_mbuf = priv->flags &
+				  RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF;
 	uint32_t mbuf_size, buf_len;
 	uint16_t priv_size;
 
-	if (RTE_MBUF_HAS_EXTBUF(m))
-		__rte_pktmbuf_free_extbuf(m);
-	else
+	if (RTE_MBUF_HAS_EXTBUF(m)) {
+		if (pinned_ext_mbuf) {
+			m->ol_flags = EXT_ATTACHED_MBUF;
+			goto reset_data;
+		} else {
+			__rte_pktmbuf_free_extbuf(m);
+		}
+	} else {
 		__rte_pktmbuf_free_direct(m);
+	}
 
-	priv_size = rte_pktmbuf_priv_size(mp);
+	priv_size = priv->mbuf_priv_size;
 	mbuf_size = (uint32_t)(sizeof(struct rte_mbuf) + priv_size);
-	buf_len = rte_pktmbuf_data_room_size(mp);
+	buf_len = priv->mbuf_data_room_size;
 
 	m->priv_size = priv_size;
 	m->buf_addr = (char *)m + mbuf_size;
 	m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
 	m->buf_len = (uint16_t)buf_len;
+	m->ol_flags = 0;
+reset_data:
 	rte_pktmbuf_reset_headroom(m);
 	m->data_len = 0;
-	m->ol_flags = 0;
 }
 
 /**

From patchwork Tue Jan 14 07:49:51 2020
X-Patchwork-Submitter: Slava Ovsiienko <viacheslavo@mellanox.com>
X-Patchwork-Id: 64617
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
To: dev@dpdk.org
Cc: matan@mellanox.com, rasland@mellanox.com,
 orika@mellanox.com, shahafs@mellanox.com, olivier.matz@6wind.com,
 stephen@networkplumber.org
Date: Tue, 14 Jan 2020 07:49:51 +0000
Message-Id: <1578988193-9015-3-git-send-email-viacheslavo@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1578988193-9015-1-git-send-email-viacheslavo@mellanox.com>
References: <20191118094938.192850-1-shahafs@mellanox.com>
 <1578988193-9015-1-git-send-email-viacheslavo@mellanox.com>
Subject: [dpdk-dev] [PATCH v2 2/4] mbuf: create packet pool with external memory buffers

The dedicated routine rte_pktmbuf_pool_create_extbuf() is provided to create
an mbuf pool with data buffers located in pinned external memory. The
application provides the external memory description, and the routine
initializes each mbuf with the appropriate virtual and physical buffer
addresses. It is entirely the application's responsibility to register the
external memory with the rte_extmem_register() API, map this memory, etc.

The newly introduced flag RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF is set in the
private pool structure, specifying the new special pool type. Mbufs allocated
from a pool of this kind have the EXT_ATTACHED_MBUF flag set and a NULL
shared-info pointer, because the external buffers are not supposed to be
freed and no sharing management is needed. Also, these mbufs cannot be
attached to other mbufs (they are not intended to be indirect).
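As a rough usage sketch (illustrative only; dev_mem_alloc() and
dev_mem_iova() are hypothetical placeholders for however the application
obtains the pinned device memory and its IO address), the routine would be
used roughly as follows:

    /* Hypothetical sketch: create a pool over one external segment. */
    #define EXT_BUF_LEN (16u * 1024 * 1024) /* external segment size */
    #define ELT_SIZE    2048                /* per-mbuf data buffer size */
    #define NB_MBUF     8191

    struct rte_pktmbuf_extmem ext_mem;
    struct rte_mempool *mp;

    ext_mem.buf_ptr  = dev_mem_alloc(EXT_BUF_LEN);    /* hypothetical */
    ext_mem.buf_iova = dev_mem_iova(ext_mem.buf_ptr); /* or RTE_BAD_IOVA */
    ext_mem.buf_len  = EXT_BUF_LEN;
    ext_mem.elt_size = ELT_SIZE;

    /* The application registers/maps the memory itself beforehand,
     * e.g. with rte_extmem_register(), as noted above.
     */
    mp = rte_pktmbuf_pool_create_extbuf("extbuf_pool", NB_MBUF, 256, 0,
                                        ELT_SIZE, SOCKET_ID_ANY,
                                        &ext_mem, 1);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create extbuf pool\n");

Each of the NB_MBUF mbufs is then given an elt_size slice of the external
segment by the constructor below, with the headroom counted inside the data
room size.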
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 lib/librte_mbuf/rte_mbuf.c           | 144 ++++++++++++++++++++++++++++++++++-
 lib/librte_mbuf/rte_mbuf.h           |  86 ++++++++++++++++++++-
 lib/librte_mbuf/rte_mbuf_version.map |   1 +
 3 files changed, 228 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 8fa7f49..d151469 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -59,9 +59,9 @@
 	}
 
 	RTE_ASSERT(mp->elt_size >= sizeof(struct rte_mbuf) +
-		user_mbp_priv->mbuf_data_room_size +
+		((user_mbp_priv->flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF) ?
+		0 : user_mbp_priv->mbuf_data_room_size) +
 		user_mbp_priv->mbuf_priv_size);
-	RTE_ASSERT(user_mbp_priv->flags == 0);
 
 	mbp_priv = rte_mempool_get_priv(mp);
 	memcpy(mbp_priv, user_mbp_priv, sizeof(*mbp_priv));
@@ -107,6 +107,63 @@
 	m->next = NULL;
 }
 
+/*
+ * pktmbuf constructor for the pool with pinned external buffer,
+ * given as a callback function to rte_mempool_obj_iter() in
+ * rte_pktmbuf_pool_create_extbuf(). Set the fields of a packet
+ * mbuf to their default values.
+ */
+void
+rte_pktmbuf_init_extmem(struct rte_mempool *mp,
+			void *opaque_arg,
+			void *_m,
+			__attribute__((unused)) unsigned int i)
+{
+	struct rte_mbuf *m = _m;
+	struct rte_pktmbuf_extmem_init_ctx *ctx = opaque_arg;
+	struct rte_pktmbuf_extmem *ext_mem;
+	uint32_t mbuf_size, buf_len, priv_size;
+
+	priv_size = rte_pktmbuf_priv_size(mp);
+	mbuf_size = sizeof(struct rte_mbuf) + priv_size;
+	buf_len = rte_pktmbuf_data_room_size(mp);
+
+	RTE_ASSERT(RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) == priv_size);
+	RTE_ASSERT(mp->elt_size >= mbuf_size);
+	RTE_ASSERT(buf_len <= UINT16_MAX);
+
+	memset(m, 0, mbuf_size);
+	m->priv_size = priv_size;
+	m->buf_len = (uint16_t)buf_len;
+
+	/* set the data buffer pointers to external memory */
+	ext_mem = ctx->ext_mem + ctx->ext;
+
+	RTE_ASSERT(ctx->ext < ctx->ext_num);
+	RTE_ASSERT(ctx->off < ext_mem->buf_len);
+
+	m->buf_addr = RTE_PTR_ADD(ext_mem->buf_ptr, ctx->off);
+	m->buf_iova = ext_mem->buf_iova == RTE_BAD_IOVA ?
+		      RTE_BAD_IOVA : (ext_mem->buf_iova + ctx->off);
+
+	ctx->off += ext_mem->elt_size;
+	if (ctx->off >= ext_mem->buf_len) {
+		ctx->off = 0;
+		++ctx->ext;
+	}
+	/* keep some headroom between start of buffer and data */
+	m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
+
+	/* init some constant fields */
+	m->pool = mp;
+	m->nb_segs = 1;
+	m->port = MBUF_INVALID_PORT;
+	m->ol_flags = EXT_ATTACHED_MBUF;
+	rte_mbuf_refcnt_set(m, 1);
+	m->next = NULL;
+}
+
+
 /* Helper to create a mbuf pool with given mempool ops name */
 struct rte_mempool *
 rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n,
@@ -169,6 +226,89 @@ struct rte_mempool *
 		data_room_size, socket_id, NULL);
 }
 
+/* Helper to create a mbuf pool with pinned external data buffers. */
+struct rte_mempool *
+rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
+	unsigned int cache_size, uint16_t priv_size,
+	uint16_t data_room_size, int socket_id,
+	struct rte_pktmbuf_extmem *ext_mem, unsigned int ext_num)
+{
+	struct rte_mempool *mp;
+	struct rte_pktmbuf_pool_private mbp_priv;
+	struct rte_pktmbuf_extmem_init_ctx init_ctx;
+	const char *mp_ops_name;
+	unsigned int elt_size;
+	unsigned int i, n_elts = 0;
+	int ret;
+
+	if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) {
+		RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n",
+			priv_size);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	/* Check the external memory descriptors. */
+	for (i = 0; i < ext_num; i++) {
+		struct rte_pktmbuf_extmem *extm = ext_mem + i;
+
+		if (!extm->elt_size || !extm->buf_len || !extm->buf_ptr) {
+			RTE_LOG(ERR, MBUF, "invalid extmem descriptor\n");
+			rte_errno = EINVAL;
+			return NULL;
+		}
+		if (data_room_size > extm->elt_size) {
+			RTE_LOG(ERR, MBUF, "ext elt_size=%u is too small\n",
+				extm->elt_size);
+			rte_errno = EINVAL;
+			return NULL;
+		}
+		n_elts += extm->buf_len / extm->elt_size;
+	}
+	/* Check whether enough external memory provided. */
+	if (n_elts < n) {
+		RTE_LOG(ERR, MBUF, "not enough extmem\n");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	elt_size = sizeof(struct rte_mbuf) + (unsigned int)priv_size;
+	memset(&mbp_priv, 0, sizeof(mbp_priv));
+	mbp_priv.mbuf_data_room_size = data_room_size;
+	mbp_priv.mbuf_priv_size = priv_size;
+	mbp_priv.flags = RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF;
+
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	mp_ops_name = rte_mbuf_best_mempool_ops();
+	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
+	if (ret != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	ret = rte_mempool_populate_default(mp);
+	if (ret < 0) {
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+
+	init_ctx = (struct rte_pktmbuf_extmem_init_ctx){
+		.ext_mem = ext_mem,
+		.ext_num = ext_num,
+		.ext = 0,
+		.off = 0,
+	};
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init_extmem, &init_ctx);
+
+	return mp;
+}
+
 /* do some sanity checks on a mbuf: panic if it fails */
 void
 rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 46ae76c..c14b8a1 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -643,6 +643,34 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
 		void *m, unsigned i);
 
+/** The context to initialize the mbufs with pinned external buffers. */
+struct rte_pktmbuf_extmem_init_ctx {
+	struct rte_pktmbuf_extmem *ext_mem; /* pointer to descriptor array. */
+	unsigned int ext_num; /* number of descriptors in array. */
+	unsigned int ext;     /* loop descriptor index. */
+	size_t off;           /* loop buffer offset. */
+};
+
+/**
+ * The packet mbuf constructor for pools with pinned external memory.
+ *
+ * This function initializes some fields in the mbuf structure that are
+ * not modified by the user once created (origin pool, buffer start
+ * address, and so on). This function is given as a callback function to
+ * rte_mempool_obj_iter() called from rte_pktmbuf_pool_create_extbuf().
+ *
+ * @param mp
+ *   The mempool from which mbufs originate.
+ * @param opaque_arg
+ *   A pointer to the rte_pktmbuf_extmem_init_ctx initialization
+ *   context structure.
+ * @param m
+ *   The mbuf to initialize.
+ * @param i
+ *   The index of the mbuf in the pool table.
+ */
+void rte_pktmbuf_init_extmem(struct rte_mempool *mp, void *opaque_arg,
+			void *m, unsigned int i);
 
 /**
  * A packet mbuf pool constructor.
@@ -744,6 +772,62 @@ struct rte_mempool *
 	unsigned int cache_size, uint16_t priv_size,
 	uint16_t data_room_size, int socket_id, const char *ops_name);
 
+/** A structure that describes the pinned external buffer segment. */
+struct rte_pktmbuf_extmem {
+	void *buf_ptr;       /**< The virtual address of data buffer. */
+	rte_iova_t buf_iova; /**< The IO address of the data buffer. */
+	size_t buf_len;      /**< External buffer length in bytes. */
+	uint16_t elt_size;   /**< mbuf element size in bytes. */
+};
+
+/**
+ * Create a mbuf pool with external pinned data buffers.
+ *
+ * This function creates and initializes a packet mbuf pool that contains
+ * only mbufs with external buffers. It is a wrapper to rte_mempool functions.
+ *
+ * @param name
+ *   The name of the mbuf pool.
+ * @param n
+ *   The number of elements in the mbuf pool.
+ *   The optimum size (in terms of memory usage) for a mempool is when n
+ *   is a power of two minus one: n = (2^q - 1).
+ * @param cache_size
+ *   Size of the per-core object cache. See rte_mempool_create() for
+ *   details.
+ * @param priv_size
+ *   Size of the application private area between the rte_mbuf structure
+ *   and the data buffer. This value must be aligned to RTE_MBUF_PRIV_ALIGN.
+ * @param data_room_size
+ *   Size of data buffer in each mbuf, including RTE_PKTMBUF_HEADROOM.
+ * @param socket_id
+ *   The socket identifier where the memory should be allocated. The
+ *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
+ *   reserved zone.
+ * @param ext_mem
+ *   Pointer to the array of structures describing the external memory
+ *   for the data buffers. It is the caller's responsibility to register
+ *   this memory with rte_extmem_register() (if needed), map this memory
+ *   to the appropriate physical device, etc.
+ * @param ext_num
+ *   Number of elements in the ext_mem array.
+ * @return
+ *   The pointer to the newly allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+__rte_experimental
+struct rte_mempool *
+rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
+	unsigned int cache_size, uint16_t priv_size,
+	uint16_t data_room_size, int socket_id,
+	struct rte_pktmbuf_extmem *ext_mem, unsigned int ext_num);
+
 /**
  * Get the data room size of mbufs stored in a pktmbuf_pool
  *
@@ -819,7 +903,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
 	m->nb_segs = 1;
 	m->port = MBUF_INVALID_PORT;
 
-	m->ol_flags = 0;
+	m->ol_flags &= EXT_ATTACHED_MBUF;
 	m->packet_type = 0;
 	rte_pktmbuf_reset_headroom(m);
 
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index 3bbb476..ab161bc 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -44,5 +44,6 @@ EXPERIMENTAL {
 	rte_mbuf_dyn_dump;
 	rte_pktmbuf_copy;
 	rte_pktmbuf_free_bulk;
+	rte_pktmbuf_pool_create_extbuf;
 };
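Taken together, freed mbufs of such a pool return to the mempool with their
pinned external buffer still attached. A minimal sketch of the resulting
invariant, assuming a pool "mp" created with rte_pktmbuf_pool_create_extbuf()
as in the earlier sketch (illustrative only, not part of the patch):

    /* The pinned buffer survives alloc/free cycles; nothing is detached. */
    struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

    RTE_ASSERT(RTE_MBUF_HAS_EXTBUF(m)); /* EXT_ATTACHED_MBUF is set */
    rte_pktmbuf_free(m);                /* buffer is not detached */

    m = rte_pktmbuf_alloc(mp);
    RTE_ASSERT(RTE_MBUF_HAS_EXTBUF(m)); /* still points into pinned memory */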