From patchwork Fri Jan 14 16:36:50 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Morten Brørup
X-Patchwork-Id: 105869
X-Patchwork-Delegate: thomas@monjalon.net
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id A9D5EA00C3;
	Fri, 14 Jan 2022 17:37:00 +0100 (CET)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 8065441186;
	Fri, 14 Jan 2022 17:37:00 +0100 (CET)
Received: from smartserver.smartsharesystems.com
	(smartserver.smartsharesystems.com [77.243.40.215])
	by mails.dpdk.org (Postfix) with ESMTP id 2B54C41171
	for ; Fri, 14 Jan 2022 17:36:59 +0100 (CET)
Received: from dkrd2.smartsharesys.local ([192.168.4.12])
	by smartserver.smartsharesystems.com with Microsoft SMTPSVC(6.0.3790.4675);
	Fri, 14 Jan 2022 17:36:56 +0100
From: Morten Brørup
To: olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru
Cc: bruce.richardson@intel.com, jerinjacobk@gmail.com, dev@dpdk.org,
	Morten Brørup
Subject: [PATCH] mempool: fix get objects from mempool with cache
Date: Fri, 14 Jan 2022 17:36:50 +0100
Message-Id: <20220114163650.94288-1-mb@smartsharesystems.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D86DB2@smartserver.smartshare.dk>
References: <98CBD80474FA8B44BF855DF32C47DC35D86DB2@smartserver.smartshare.dk>
MIME-Version: 1.0
X-OriginalArrivalTime: 14 Jan 2022 16:36:56.0846 (UTC) FILETIME=[F1F95AE0:01D80964]
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

A flush threshold for the mempool cache was introduced in DPDK version
1.3, but rte_mempool_do_generic_get() was not completely updated back
then, and some inefficiencies were introduced.

This patch fixes the following in rte_mempool_do_generic_get():

1. The code that initially screens the cache request was not updated
with the change in DPDK version 1.3. The initial screening compared the
request length to the cache size, which was correct before, but became
irrelevant with the introduction of the flush threshold. E.g. the cache
can hold up to flushthresh objects, which is more than its size, so
some requests were not served from the cache, even though they could be.
The initial screening has now been corrected to match the initial
screening in rte_mempool_do_generic_put(), which verifies that a cache
is present, and that the length of the request does not overflow the
memory allocated for the cache.

2. The function is a helper for rte_mempool_generic_get(), so it must
behave according to the description of that function. Specifically,
objects must first be returned from the cache, subsequently from the
ring. After the change in DPDK version 1.3, this was not the behavior
when the request was partially satisfied from the cache; instead, the
objects from the ring were returned ahead of the objects from the
cache. This is bad for CPUs with a small L1 cache, which benefit from
having the hot objects first in the returned array. (This is also the
reason why the function returns the objects in reverse order.)
Now, all code paths first return objects from the cache, subsequently
from the ring.

3. If the cache could not be backfilled, the function would attempt to
get all the requested objects from the ring (instead of only the number
of requested objects minus the objects available in the cache), and the
function would fail if that failed.
Now, the first part of the request is always satisfied from the cache,
and if the subsequent backfilling of the cache from the ring fails,
only the remaining requested objects are retrieved from the ring.

4. The code flow for satisfying the request from the cache was slightly
inefficient: the likely code path, where the objects are simply served
from the cache, was treated as unlikely. Now it is treated as likely.
And in the code path where the cache was backfilled first, numbers were
added to and subtracted from the cache length; now this code path
simply sets the cache length to its final value.

5. Some comments were no longer correct. The comments have been
updated. Most importantly, the description of the successful return
value was inaccurate. Success only returns 0, not >= 0.
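For reference, below is a minimal usage sketch (illustration only, not
part of the patch) of the caller-visible contract that this fix
restores: a get with a cache returns exactly 0 on success and serves
the most recently freed objects first. The pool name, element count,
element size and cache size are arbitrary example values.

#include <rte_eal.h>
#include <rte_mempool.h>

#define BURST 32

int main(int argc, char **argv)
{
	struct rte_mempool *mp;
	struct rte_mempool_cache *cache;
	void *objs[BURST];

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Pool of 4095 64-byte objects; no built-in per-lcore cache here,
	 * because this example manages its own cache explicitly. */
	mp = rte_mempool_create("example_pool", 4095, 64, 0, 0,
			NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
	cache = rte_mempool_cache_create(256, SOCKET_ID_ANY);
	if (mp == NULL || cache == NULL)
		return -1;

	/* The first get backfills the cache from the ring; the put then
	 * returns the objects to the cache. With this fix, the second get
	 * is served from the cache first, in LIFO order, so the most
	 * recently freed (hottest) objects are at the start of objs[]. */
	if (rte_mempool_generic_get(mp, objs, BURST, cache) != 0)
		return -1;	/* success is exactly 0, never a positive count */
	rte_mempool_generic_put(mp, objs, BURST, cache);
	if (rte_mempool_generic_get(mp, objs, BURST, cache) != 0)
		return -1;
	rte_mempool_generic_put(mp, objs, BURST, cache);

	rte_mempool_cache_free(cache);
	rte_mempool_free(mp);
	return rte_eal_cleanup();
}

On an EAL lcore, rte_mempool_default_cache(mp, rte_lcore_id()) could be
used instead of a separately created cache, provided the pool was
created with a non-zero cache size.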
Signed-off-by: Morten Brørup
Reviewed-by: Bruce Richardson
---
 lib/mempool/rte_mempool.h | 81 ++++++++++++++++++++++++++++-----------
 1 file changed, 59 insertions(+), 22 deletions(-)
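As an aid to reading the large hunk below, here is a stand-alone sketch
(not DPDK code) of the new code flow. The toy_* names, the simulated
ring and all sizes are invented for illustration, and TOY_CACHE_MAX
merely stands in for RTE_MEMPOOL_CACHE_MAX_SIZE; only the order of
operations mirrors the patch: bypass the cache for oversized requests,
otherwise serve from the cache first (LIFO), then backfill the cache
from the ring, and fall back to a direct ring dequeue of only the
remainder.

#include <stdio.h>

#define TOY_CACHE_MAX 512	/* stands in for RTE_MEMPOOL_CACHE_MAX_SIZE */

struct toy_cache {
	unsigned int size;		/* nominal cache size */
	unsigned int len;		/* number of objects currently cached */
	int objs[TOY_CACHE_MAX * 2];	/* headroom above the nominal size */
};

/* Simulated backing ring. */
static int ring_avail = 1000;
static int next_obj = 1000;

static int toy_ring_dequeue(int *out, unsigned int n)
{
	if (n > (unsigned int)ring_avail)
		return -1;
	for (unsigned int i = 0; i < n; i++)
		out[i] = next_obj++;
	ring_avail -= (int)n;
	return 0;
}

static int toy_get(struct toy_cache *c, int *obj_table, unsigned int n)
{
	unsigned int len;

	/* Oversized request: bypass the cache entirely (the corrected
	 * initial screening). */
	if (n > TOY_CACHE_MAX)
		return toy_ring_dequeue(obj_table, n);

	if (n <= c->len) {
		/* Entire request served from the cache, hottest object first. */
		c->len -= n;
		for (unsigned int i = 0; i < n; i++)
			*obj_table++ = c->objs[c->len + n - 1 - i];
		return 0;
	}

	/* First part of the request from the cache... */
	len = c->len;
	for (unsigned int i = 0; i < len; i++)
		*obj_table++ = c->objs[len - 1 - i];
	/* ...and this many objects are still missing. */
	len = n - len;

	/* Backfill the cache and serve the remainder from its top. */
	if (toy_ring_dequeue(c->objs, c->size + len) < 0) {
		/* Could not backfill: take only the remainder from the ring. */
		if (toy_ring_dequeue(obj_table, len) < 0)
			return -1;	/* cache length and contents left intact */
		c->len = 0;
		return 0;
	}
	c->len = c->size;
	for (unsigned int i = 0; i < len; i++)
		*obj_table++ = c->objs[c->size + len - 1 - i];
	return 0;
}

int main(void)
{
	struct toy_cache c = { .size = 8, .len = 4, .objs = { 10, 11, 12, 13 } };
	int out[16];

	if (toy_get(&c, out, 10) == 0)
		printf("first returned object: %d (the most recently cached one)\n",
		    out[0]);
	return 0;
}

This ordering is what keeps the hottest objects at the front of the
returned array and leaves the cache intact if both dequeues fail.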
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e7a3c1527..88f1b8b7ab 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1443,6 +1443,10 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
 
 /**
  * @internal Get several objects from the mempool; used internally.
+ *
+ * If cache is enabled, objects are returned from the cache in Last In First
+ * Out (LIFO) order for the benefit of CPUs with small L1 cache.
+ *
  * @param mp
  *   A pointer to the mempool structure.
  * @param obj_table
@@ -1452,7 +1456,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
  * @param cache
  *   A pointer to a mempool cache structure. May be NULL if not needed.
  * @return
- *   - >=0: Success; number of objects supplied.
+ *   - 0: Success; got n objects.
  *   - <0: Error; code of ring dequeue function.
  */
 static __rte_always_inline int
@@ -1463,38 +1467,71 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
 	uint32_t index, len;
 	void **cache_objs;
 
-	/* No cache provided or cannot be satisfied from cache */
-	if (unlikely(cache == NULL || n >= cache->size))
+	/* No cache provided or if get would overflow mem allocated for cache */
+	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
 		goto ring_dequeue;
 
-	cache_objs = cache->objs;
+	cache_objs = &cache->objs[cache->len];
+
+	if (n <= cache->len) {
+		/* The entire request can be satisfied from the cache. */
+		cache->len -= n;
+		for (index = 0; index < n; index++)
+			*obj_table++ = *--cache_objs;
 
-	/* Can this be satisfied from the cache? */
-	if (cache->len < n) {
-		/* No. Backfill the cache first, and then fill from it */
-		uint32_t req = n + (cache->size - cache->len);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 
-		/* How many do we require i.e. number to fill the cache + the request */
-		ret = rte_mempool_ops_dequeue_bulk(mp,
-			&cache->objs[cache->len], req);
+		return 0;
+	}
+
+	/* Satisfy the first part of the request by depleting the cache. */
+	len = cache->len;
+	for (index = 0; index < len; index++)
+		*obj_table++ = *--cache_objs;
+
+	/* Number of objects remaining to satisfy the request. */
+	len = n - len;
+
+	/* Fill the cache from the ring; fetch size + remaining objects. */
+	ret = rte_mempool_ops_dequeue_bulk(mp, cache->objs,
+			cache->size + len);
+	if (unlikely(ret < 0)) {
+		/*
+		 * We are buffer constrained, and not able to allocate
+		 * cache + remaining.
+		 * Do not fill the cache, just satisfy the remaining part of
+		 * the request directly from the ring.
+		 */
+		ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, len);
 		if (unlikely(ret < 0)) {
 			/*
-			 * In the off chance that we are buffer constrained,
-			 * where we are not able to allocate cache + n, go to
-			 * the ring directly. If that fails, we are truly out of
-			 * buffers.
+			 * That also failed.
+			 * No further action is required to roll the first
+			 * part of the request back into the cache, as both
+			 * cache->len and the objects in the cache are intact.
 			 */
-			goto ring_dequeue;
+			RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
+			RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
+
+			return ret;
 		}
 
-		cache->len += req;
+		/* Commit that the cache was emptied. */
+		cache->len = 0;
+
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+
+		return 0;
 	}
 
-	/* Now fill in the response ... */
-	for (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)
-		*obj_table = cache_objs[len];
+	cache_objs = &cache->objs[cache->size + len];
 
-	cache->len -= n;
+	/* Satisfy the remaining part of the request from the filled cache. */
+	cache->len = cache->size;
+	for (index = 0; index < len; index++)
+		*obj_table++ = *--cache_objs;
 
 	RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
 	RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
@@ -1503,7 +1540,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
 
 ring_dequeue:
 
-	/* get remaining objects from ring */
+	/* Get the objects from the ring. */
 	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
 
 	if (ret < 0) {