From patchwork Thu Mar 5 08:20:40 2020
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 66286
X-Patchwork-Delegate: david.marchand@redhat.com
From: xiangxia.m.yue@gmail.com
To: dev@dpdk.org, olivier.matz@6wind.com, arybchenko@solarflare.com,
 gage.eads@intel.com, artem.andreev@oktetlabs.ru, jerinj@marvell.com,
 ndabilpuram@marvell.com, vattunuru@marvell.com, hemant.agrawal@nxp.com
Cc: Tonghao Zhang
Date: Thu, 5 Mar 2020 16:20:40 +0800
Message-Id: <1583396440-25485-1-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1583114253-15345-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1583114253-15345-1-git-send-email-xiangxia.m.yue@gmail.com>
Subject: [dpdk-dev] [PATCH dpdk-dev v2] mempool: sort the rte_mempool_ops by name

From: Tonghao Zhang

The order of mempool driver initialization determines each driver's
index in rte_mempool_ops_table.
For example, when building applications with:

    $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...

the "bucket" mempool is registered first, so its index in the table is
0, while the index of the "ring" mempool is 1. DPDK uses mk/rte.app.mk
to build applications, while others, for example Open vSwitch, link
against libdpdk.a or libdpdk.so. The set of mempool libraries linked
into a DPDK application and into Open vSwitch can therefore differ.

A mempool can be shared between a primary and a secondary process, such
as dpdk-pdump and pdump-pmd/Open vSwitch (with pdump enabled). A crash
occurs because dpdk-pdump creates the "ring_mp_mc" ring, whose index in
its table is 0, while in Open vSwitch index 0 belongs to the "bucket"
mempool. When Open vSwitch uses index 0 to look up the mempool ops and
allocate memory from the mempool, it crashes:

    bucket_dequeue (access null and crash)
    rte_mempool_get_ops (should get "ring_mp_mc", but gets "bucket" mempool)
    rte_mempool_ops_dequeue_bulk
    ...
    rte_pktmbuf_alloc
    rte_pktmbuf_copy
    pdump_copy
    pdump_rx
    rte_eth_rx_burst

To avoid the crash, there are several possible solutions:
* constructor priority: different mempools use different priorities in
  RTE_INIT, but this is not easy to maintain.
* change mk/rte.app.mk: change the order in mk/rte.app.mk to match
  libdpdk.a/libdpdk.so, but then every mempool driver added in the
  future must preserve that order.
* register mempools in a defined order: sort the mempool ops on
  registration, so the libraries linked do not affect an entry's index
  in the mempool table.

Signed-off-by: Tonghao Zhang
Acked-by: Olivier Matz
---
v2:
1. use qsort to sort the mempool_ops.
2. tested: https://travis-ci.com/ovn-open-virtual-networks/dpdk-next-net/builds/151894026
---
 lib/librte_mempool/rte_mempool_ops.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index 22c5251..e9113cf 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -17,6 +17,15 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
 	.num_ops = 0
 };

+static int
+compare_mempool_ops(const void *a, const void *b)
+{
+	const struct rte_mempool_ops *m_a = a;
+	const struct rte_mempool_ops *m_b = b;
+
+	return strcmp(m_a->name, m_b->name);
+}
+
 /* add a new ops struct in rte_mempool_ops_table, return its index. */
 int
 rte_mempool_register_ops(const struct rte_mempool_ops *h)
@@ -63,6 +72,11 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
 	ops->get_info = h->get_info;
 	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;

+	/* Sort the rte_mempool_ops by name so that the order of mempool
+	 * library initialization does not affect an entry's index.
+	 */
+	qsort(rte_mempool_ops_table.ops, rte_mempool_ops_table.num_ops,
+	      sizeof(rte_mempool_ops_table.ops[0]), compare_mempool_ops);
+
 	rte_spinlock_unlock(&rte_mempool_ops_table.sl);

 	return ops_index;
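
To see the effect of name-sorted registration in isolation, here is a
minimal standalone sketch in plain C. struct ops, register_ops() and
index_of() are hypothetical stand-ins, not the real DPDK API; only the
qsort-on-registration idea mirrors the patch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_OPS 16

    struct ops {
    	char name[32];
    };

    static struct ops table[MAX_OPS];
    static unsigned int num_ops;

    static int
    compare_ops(const void *a, const void *b)
    {
    	const struct ops *m_a = a;
    	const struct ops *m_b = b;

    	return strcmp(m_a->name, m_b->name);
    }

    static void
    register_ops(const char *name)
    {
    	snprintf(table[num_ops].name, sizeof(table[num_ops].name),
    		 "%s", name);
    	num_ops++;
    	/* Keep the table sorted by name, as the patch does. */
    	qsort(table, num_ops, sizeof(table[0]), compare_ops);
    }

    static int
    index_of(const char *name)
    {
    	for (unsigned int i = 0; i < num_ops; i++)
    		if (strcmp(table[i].name, name) == 0)
    			return (int)i;
    	return -1;
    }

    int
    main(void)
    {
    	/* Primary process: "bucket" library registers first. */
    	register_ops("bucket");
    	register_ops("ring_mp_mc");
    	printf("primary:   ring_mp_mc at index %d\n",
    	       index_of("ring_mp_mc"));

    	/* Secondary process: reverse link order, same final index. */
    	num_ops = 0;
    	register_ops("ring_mp_mc");
    	register_ops("bucket");
    	printf("secondary: ring_mp_mc at index %d\n",
    	       index_of("ring_mp_mc"));
    	return 0;
    }

Both processes report "ring_mp_mc" at index 1 regardless of which
library registered first, which is exactly the property that keeps a
primary and a differently-linked secondary process in agreement.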