From patchwork Wed Apr 25 16:32:20 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 38958
X-Patchwork-Delegate: thomas@monjalon.net
From: Andrew Rybchenko
To: dev@dpdk.org
CC: Olivier MATZ, "Artem V. Andreev"
Date: Wed, 25 Apr 2018 17:32:20 +0100
Message-ID: <1524673942-22726-5-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.2.3
In-Reply-To: <1524673942-22726-1-git-send-email-arybchenko@solarflare.com>
References: <1511539591-20966-1-git-send-email-arybchenko@solarflare.com>
 <1524673942-22726-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH v3 4/6] mempool/bucket: implement block dequeue operation
List-Id: DPDK patches and discussions

From: "Artem V. Andreev"

Signed-off-by: Artem V. Andreev
Signed-off-by: Andrew Rybchenko
---
 drivers/mempool/bucket/rte_mempool_bucket.c | 52 +++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c
index ef822eb2a..24be24e96 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket.c
+++ b/drivers/mempool/bucket/rte_mempool_bucket.c
@@ -294,6 +294,46 @@ bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
 	return rc;
 }
 
+static int
+bucket_dequeue_contig_blocks(struct rte_mempool *mp, void **first_obj_table,
+			     unsigned int n)
+{
+	struct bucket_data *bd = mp->pool_data;
+	const uint32_t header_size = bd->header_size;
+	struct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];
+	unsigned int n_buckets_from_stack = RTE_MIN(n, cur_stack->top);
+	struct bucket_header *hdr;
+	void **first_objp = first_obj_table;
+
+	bucket_adopt_orphans(bd);
+
+	n -= n_buckets_from_stack;
+	while (n_buckets_from_stack-- > 0) {
+		hdr = bucket_stack_pop_unsafe(cur_stack);
+		*first_objp++ = (uint8_t *)hdr + header_size;
+	}
+	if (n > 0) {
+		if (unlikely(rte_ring_dequeue_bulk(bd->shared_bucket_ring,
+						   first_objp, n, NULL) != n)) {
+			/* Return the already dequeued buckets */
+			while (first_objp-- != first_obj_table) {
+				bucket_stack_push(cur_stack,
+						  (uint8_t *)*first_objp -
+						  header_size);
+			}
+			rte_errno = ENOBUFS;
+			return -rte_errno;
+		}
+		while (n-- > 0) {
+			hdr = (struct bucket_header *)*first_objp;
+			hdr->lcore_id = rte_lcore_id();
+			*first_objp++ = (uint8_t *)hdr + header_size;
+		}
+	}
+
+	return 0;
+}
+
 static void
 count_underfilled_buckets(struct rte_mempool *mp,
 			  void *opaque,
@@ -548,6 +588,16 @@ bucket_populate(struct rte_mempool *mp, unsigned int max_objs,
 	return n_objs;
 }
 
+static int
+bucket_get_info(const struct rte_mempool *mp, struct rte_mempool_info *info)
+{
+	struct bucket_data *bd = mp->pool_data;
+
+	info->contig_block_size = bd->obj_per_bucket;
+	return 0;
+}
+
+
 static const struct rte_mempool_ops ops_bucket = {
 	.name = "bucket",
 	.alloc = bucket_alloc,
@@ -557,6 +607,8 @@ static const struct rte_mempool_ops ops_bucket = {
 	.get_count = bucket_get_count,
 	.calc_mem_size = bucket_calc_mem_size,
 	.populate = bucket_populate,
+	.get_info = bucket_get_info,
+	.dequeue_contig_blocks = bucket_dequeue_contig_blocks,
 };
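
For reviewers' context, below is a minimal usage sketch (not part of this patch) of how an
application might exercise the two new callbacks through the generic wrappers proposed
earlier in this series, rte_mempool_ops_get_info() and rte_mempool_get_contig_blocks().
The helper name try_get_block() and its error handling are illustrative only.

#include <errno.h>

#include <rte_errno.h>
#include <rte_mempool.h>

/* Illustrative helper: dequeue one contiguous block of objects. */
static int
try_get_block(struct rte_mempool *mp, void **first_obj)
{
	struct rte_mempool_info info;
	int rc;

	/*
	 * contig_block_size is filled in by the .get_info callback;
	 * for the bucket driver, bucket_get_info() above reports the
	 * number of objects per bucket.
	 */
	rc = rte_mempool_ops_get_info(mp, &info);
	if (rc != 0 || info.contig_block_size == 0)
		return -ENOTSUP;

	/*
	 * Routed to the .dequeue_contig_blocks callback, i.e.
	 * bucket_dequeue_contig_blocks() for the bucket driver; on
	 * shortage it fails with rte_errno set to ENOBUFS and leaves
	 * the pool state unchanged.
	 */
	rc = rte_mempool_get_contig_blocks(mp, first_obj, 1);
	if (rc != 0)
		return -rte_errno;

	/*
	 * *first_obj now points to the first of info.contig_block_size
	 * contiguous objects; each object is released later with the
	 * regular rte_mempool_put() API.
	 */
	return 0;
}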