From patchwork Sun Oct 9 13:37:34 2022
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 117744
From: Andrew Rybchenko
To: Olivier Matz
Cc: dev@dpdk.org, Morten Brørup, Bruce Richardson
Subject: [PATCH v6 1/4] mempool: check driver enqueue result in one place
Date: Sun, 9 Oct 2022 16:37:34 +0300
Message-Id: <20221009133737.795377-2-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20221009133737.795377-1-andrew.rybchenko@oktetlabs.ru>

The enqueue operation must not fail. Move the corresponding debug check
from one particular caller into the enqueue operation helper, so that it
is done for all invocations. Log a critical message with useful
information instead of calling rte_panic().

Also make the rte_mempool_do_generic_put() implementation more readable
and fix the inconsistency where the return value is not checked in one
place but is checked in another.
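
For illustration only, a minimal self-contained sketch of the pattern this
patch introduces (the toy_* names below are simplified stand-ins invented
for the example, not the real DPDK types or macros): the shared enqueue
helper checks the driver's return value in one place, so individual
callers no longer need a debug-only rte_panic() branch.

#include <stdio.h>

/* Simplified stand-in for a mempool driver's enqueue callback. */
struct toy_ops {
	int (*enqueue)(void * const *obj_table, unsigned int n);
};

/* All callers funnel through this helper, so the debug check lives here. */
static int
toy_ops_enqueue_bulk(struct toy_ops *ops, void * const *obj_table,
		unsigned int n)
{
	int ret = ops->enqueue(obj_table, n);

#ifdef TOY_MEMPOOL_DEBUG
	/* Log loudly instead of panicking; enqueue is not expected to fail. */
	if (ret < 0)
		fprintf(stderr, "cannot enqueue %u objects\n", n);
#endif
	return ret;
}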
Signed-off-by: Andrew Rybchenko
Reviewed-by: Morten Brørup
---
 lib/mempool/rte_mempool.h | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 2401c4ac80..bc29d49aab 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -786,12 +786,19 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 		unsigned n)
 {
 	struct rte_mempool_ops *ops;
+	int ret;
 
 	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
 	RTE_MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
 	rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
-	return ops->enqueue(mp, obj_table, n);
+	ret = ops->enqueue(mp, obj_table, n);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (unlikely(ret < 0))
+		RTE_LOG(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s\n",
+			n, mp->name);
+#endif
+	return ret;
 }
 
 /**
@@ -1351,12 +1358,7 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 ring_enqueue:
 
 	/* push remaining objects in ring */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
-		rte_panic("cannot put objects in mempool\n");
-#else
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
-#endif
 }

From patchwork Sun Oct 9 13:37:35 2022
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 117746
From: Andrew Rybchenko
To: Olivier Matz
Cc: dev@dpdk.org, Morten Brørup, Bruce Richardson
Subject: [PATCH v6 2/4] mempool: avoid usage of term ring on put
Date: Sun, 9 Oct 2022 16:37:35 +0300
Message-Id: <20221009133737.795377-3-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20221009133737.795377-1-andrew.rybchenko@oktetlabs.ru>

The term "ring" is misleading since the ring is only the default, just
one of the possible drivers to store objects.

Signed-off-by: Andrew Rybchenko
Reviewed-by: Morten Brørup
---
 lib/mempool/rte_mempool.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index bc29d49aab..a072e5554b 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1331,7 +1331,7 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 	/* No cache provided or if put would overflow mem allocated for cache */
 	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
-		goto ring_enqueue;
+		goto driver_enqueue;
 
 	cache_objs = &cache->objs[cache->len];
 
@@ -1339,7 +1339,7 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 	/*
 	 * The cache follows the following algorithm
 	 *   1. Add the objects to the cache
 	 *   2. Anything greater than the cache min value (if it crosses the
-	 *   cache flush threshold) is flushed to the ring.
+	 *   cache flush threshold) is flushed to the backend.
 	 */
 
 	/* Add elements back into the cache */
@@ -1355,9 +1355,9 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 	return;
 
-ring_enqueue:
+driver_enqueue:
 
-	/* push remaining objects in ring */
+	/* push objects to the backend */
 	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
 }

From patchwork Sun Oct 9 13:37:36 2022
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 117747
From: Andrew Rybchenko
To: Olivier Matz
Cc: dev@dpdk.org, Morten Brørup, Bruce Richardson
Subject: [PATCH v6 3/4] mempool: fix cache flushing algorithm
Date: Sun, 9 Oct 2022 16:37:36 +0300
Message-Id: <20221009133737.795377-4-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20221009133737.795377-1-andrew.rybchenko@oktetlabs.ru>

From: Morten Brørup

Fix the rte_mempool_do_generic_put() cache flushing algorithm to keep
hot objects in the cache instead of cold ones.

The algorithm was:
 1. Add the objects to the cache.
 2. Anything greater than the cache size (if it crosses the cache flush
    threshold) is flushed to the backend.

Please note that the description in the source code said that it kept
"cache min value" objects after flushing, but the function actually kept
the cache full after flushing, which the above description reflects.

Now, the algorithm is:
 1. If the objects cannot be added to the cache without crossing the
    flush threshold, flush some cached objects to the backend to free up
    the required space.
 2. Add the objects to the cache.

Previously, the most recent (hot) objects were flushed, leaving the
oldest (cold) objects in the mempool cache. The bug degraded
performance, because flushing prevented immediate reuse of the (hot)
objects already in the CPU cache. Now, the existing (cold) objects in
the mempool cache are flushed before the new (hot) objects are added to
the mempool cache.

Since nearby code is touched anyway, fix the flush threshold comparison
so that flushing happens only when the threshold is really exceeded, not
just reached, i.e. it must be "len > flushthresh", not
"len >= flushthresh". Consider a flush multiplier of 1 instead of 1.5;
the cache would already be flushed when reaching "size" objects, not
when exceeding "size" objects. In other words, the cache would not be
able to hold "size" objects, which is clearly a bug. That could degrade
performance due to premature flushing.

Since the flush threshold is never exceeded now, the cache array size in
the mempool may be decreased from RTE_MEMPOOL_CACHE_MAX_SIZE * 3 to
RTE_MEMPOOL_CACHE_MAX_SIZE * 2. In fact it could be
CALC_CACHE_FLUSHTHRESH(RTE_MEMPOOL_CACHE_MAX_SIZE), but the flush
threshold multiplier is internal.

Signed-off-by: Morten Brørup
Signed-off-by: Andrew Rybchenko
Reviewed-by: Morten Brørup
Acked-by: Olivier Matz
---
 lib/mempool/rte_mempool.c |  5 +++++
 lib/mempool/rte_mempool.h | 43 +++++++++++++++++++++++----------------
 2 files changed, 31 insertions(+), 17 deletions(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index de59009baf..4ba8ab7b63 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -746,6 +746,11 @@ rte_mempool_free(struct rte_mempool *mp)
 static void
 mempool_cache_init(struct rte_mempool_cache *cache, uint32_t size)
 {
+	/* Check that cache have enough space for flush threshold */
+	RTE_BUILD_BUG_ON(CALC_CACHE_FLUSHTHRESH(RTE_MEMPOOL_CACHE_MAX_SIZE) >
+			 RTE_SIZEOF_FIELD(struct rte_mempool_cache, objs) /
+			 RTE_SIZEOF_FIELD(struct rte_mempool_cache, objs[0]));
+
 	cache->size = size;
 	cache->flushthresh = CALC_CACHE_FLUSHTHRESH(size);
 	cache->len = 0;
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index a072e5554b..e3364ed7b8 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -90,7 +90,7 @@ struct rte_mempool_cache {
 	 * Cache is allocated to this size to allow it to overflow in certain
 	 * cases to avoid needless emptying of cache.
 	 */
-	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
+	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2]; /**< Cache objects */
 } __rte_cache_aligned;
 
 /**
@@ -1329,30 +1329,39 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
 	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
 
-	/* No cache provided or if put would overflow mem allocated for cache */
-	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
+	/* No cache provided or the request itself is too big for the cache */
+	if (unlikely(cache == NULL || n > cache->flushthresh))
 		goto driver_enqueue;
 
-	cache_objs = &cache->objs[cache->len];
-
 	/*
-	 * The cache follows the following algorithm
-	 *   1. Add the objects to the cache
-	 *   2. Anything greater than the cache min value (if it crosses the
-	 *   cache flush threshold) is flushed to the backend.
+	 * The cache follows the following algorithm:
+	 *   1. If the objects cannot be added to the cache without crossing
+	 *      the flush threshold, flush the cache to the backend.
+	 *   2. Add the objects to the cache.
 	 */
 
-	/* Add elements back into the cache */
-	rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
-
-	cache->len += n;
+	if (cache->len + n <= cache->flushthresh) {
+		cache_objs = &cache->objs[cache->len];
+		cache->len += n;
+	} else {
+		unsigned int keep = (n >= cache->size) ? 0 : (cache->size - n);
 
-	if (cache->len >= cache->flushthresh) {
-		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
-				cache->len - cache->size);
-		cache->len = cache->size;
+		/*
+		 * If number of object to keep in the cache is positive:
+		 * keep = cache->size - n < cache->flushthresh - n < cache->len
+		 * since cache->flushthresh > cache->size.
+		 * If keep is 0, cache->len cannot be 0 anyway since
+		 * n <= cache->flushthresh and we'd no be here with
+		 * cache->len == 0.
+		 */
+		cache_objs = &cache->objs[keep];
+		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len - keep);
+		cache->len = keep + n;
 	}
 
+	/* Add the objects to the cache. */
+	rte_memcpy(cache_objs, obj_table, sizeof(void *) * n);
+
 	return;
 
 driver_enqueue:

From patchwork Sun Oct 9 13:37:37 2022
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 117748
From: Andrew Rybchenko
To: Olivier Matz
Cc: dev@dpdk.org, Morten Brørup, Bruce Richardson
Subject: [PATCH v6 4/4] mempool: flush cache completely on overflow
Date: Sun, 9 Oct 2022 16:37:37 +0300
Message-Id: <20221009133737.795377-5-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20221009133737.795377-1-andrew.rybchenko@oktetlabs.ru>

The cache was still full after flushing. In the opposite direction,
i.e. when getting objects from the cache, the cache is refilled to full
level when it crosses the low watermark (which happens to be zero).
Similarly, the cache should be flushed to empty level when it crosses
the high watermark (which happens to be 1.5 x the size of the cache).
The existing flushing behaviour was suboptimal for real applications,
because crossing the low or high watermark typically happens when the
application is in a state where the number of put/get events is out of
balance, e.g. when absorbing a burst of packets into a QoS queue
(getting more mbufs from the mempool), or when a burst of packets is
trickling out from the QoS queue (putting the mbufs back into the
mempool).

Now, the mempool cache is completely flushed when crossing the flush
threshold, so only the newly put (hot) objects remain in the mempool
cache afterwards. The bug degraded performance because of too frequent
flushing.

Consider this application scenario:

Either, an lcore thread in the application is in a state of balance,
where it uses the mempool cache within its flush/refill boundaries; in
this situation, the flush method is less important, and this fix is
irrelevant.

Or, an lcore thread in the application is out of balance (either
permanently or temporarily), and mostly gets or puts objects from/to
the mempool. If it mostly puts objects, not flushing all of the objects
will cause more frequent flushing. This is the scenario addressed by
this fix.

E.g.: cache size=256, flushthresh=384 (1.5x size), initial len=256;
application burst len=32. If there are "size" objects in the cache
after flushing, the cache is flushed at every 4th burst. If the cache
is flushed completely, the cache is only flushed at every 16th burst.
As you can see, this bug caused the cache to be flushed 4x too
frequently in this example.

And when/if the application thread breaks its pattern of continuously
putting objects, and suddenly starts to get objects instead, it will
either get objects already in the cache, or the get() function will
refill the cache.

The concept of not flushing the cache completely was probably based on
the assumption that it is more likely for an application's lcore thread
to get() after flushing than to put() after flushing. I strongly
disagree with this assumption! If an application thread is continuously
putting so much that it overflows the cache, it is much more likely to
keep putting than it is to start getting. If in doubt, consider how CPU
branch predictors work: when the application has done something many
times consecutively, the branch predictor will expect the application
to do the same again, rather than suddenly do something else.

Signed-off-by: Morten Brørup
Signed-off-by: Andrew Rybchenko
Reviewed-by: Morten Brørup
Acked-by: Olivier Matz
---
 lib/mempool/rte_mempool.h | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index e3364ed7b8..26b2697572 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1344,19 +1344,9 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
 		cache_objs = &cache->objs[cache->len];
 		cache->len += n;
 	} else {
-		unsigned int keep = (n >= cache->size) ? 0 : (cache->size - n);
-
-		/*
-		 * If number of object to keep in the cache is positive:
-		 * keep = cache->size - n < cache->flushthresh - n < cache->len
-		 * since cache->flushthresh > cache->size.
-		 * If keep is 0, cache->len cannot be 0 anyway since
-		 * n <= cache->flushthresh and we'd no be here with
-		 * cache->len == 0.
-		 */
-		cache_objs = &cache->objs[keep];
-		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len - keep);
-		cache->len = keep + n;
+		cache_objs = &cache->objs[0];
+		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+		cache->len = n;
 	}
 
 	/* Add the objects to the cache. */
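
To see the combined effect of patches 3/4 and 4/4 in one place, the
following standalone sketch models the resulting put path with plain
arrays; the toy_cache/toy_put/backend_enqueue names and the fixed
256/384 sizing are invented for the illustration and are not DPDK APIs.

#include <string.h>

#define TOY_CACHE_SIZE   256u
#define TOY_FLUSHTHRESH  384u	/* 1.5 * size, like CALC_CACHE_FLUSHTHRESH(256) */

struct toy_cache {
	unsigned int len;
	void *objs[2 * TOY_CACHE_SIZE];	/* 2x size suffices: len never exceeds the threshold */
};

/* Stand-in for rte_mempool_ops_enqueue_bulk(): hand objects back to the backend driver. */
static void
backend_enqueue(void * const *objs, unsigned int n)
{
	(void)objs;
	(void)n;
}

/* Modeled on rte_mempool_do_generic_put() after the whole series is applied. */
static void
toy_put(struct toy_cache *cache, void * const *obj_table, unsigned int n)
{
	void **cache_objs;

	/* No cache, or the request itself is too big for the cache: bypass it. */
	if (cache == NULL || n > TOY_FLUSHTHRESH) {
		backend_enqueue(obj_table, n);
		return;
	}

	if (cache->len + n <= TOY_FLUSHTHRESH) {
		/* Fits without crossing the flush threshold: just append. */
		cache_objs = &cache->objs[cache->len];
		cache->len += n;
	} else {
		/*
		 * Crossing the threshold: flush the cache completely, so only
		 * the newly put (hot) objects remain in the cache afterwards.
		 */
		backend_enqueue(cache->objs, cache->len);
		cache_objs = &cache->objs[0];
		cache->len = n;
	}

	memcpy(cache_objs, obj_table, sizeof(void *) * n);
}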