Message ID | 1552663632-18742-1-git-send-email-hkalra@marvell.com (mailing list archive)
---|---
State | Changes Requested, archived
Delegated to: | Thomas Monjalon
Series | app/pdump: enforcing pdump to use sw mempool
Checks
Context | Check | Description |
---|---|---|
ci/checkpatch | success | coding style OK |
ci/Intel-compilation | success | Compilation OK |
ci/mellanox-Performance-Testing | success | Performance Testing PASS |
ci/intel-Performance-Testing | success | Performance Testing PASS |
Commit Message
Harman Kalra
March 15, 2019, 3:27 p.m. UTC
Since pdump uses SW rings to manage packets,
it should use a SW ring mempool for managing
its own copy of packets.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/pdump/main.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
Comments
15/03/2019 16:27, Harman Kalra:
> Since pdump uses SW rings to manage packets,
> it should use a SW ring mempool for managing
> its own copy of packets.

I'm not sure I understand the reasoning.
Reshma, Olivier, Andrew, any opinion?

Let's take a decision for this very old patch.
Hi,

On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> 15/03/2019 16:27, Harman Kalra:
> > Since pdump uses SW rings to manage packets,
> > it should use a SW ring mempool for managing
> > its own copy of packets.
>
> I'm not sure I understand the reasoning.
> Reshma, Olivier, Andrew, any opinion?
>
> Let's take a decision for this very old patch.

From what I understand, many packet mempools are created to store
the copies of dumped packets. I suppose that it may not be possible
to create that many mempools using the "best" mbuf pool ops (from
rte_mbuf_best_mempool_ops()).

Using "ring_mp_mc" as the mempool ops should always be possible.
I think it would be safer to use "ring_mp_mc" instead of
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter can be
overridden on a specific platform.

Olivier
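A minimal sketch of the call Olivier is describing — pinning the portable "ring_mp_mc" ops through rte_pktmbuf_pool_create_by_ops() so the pool never resolves to a HW-backed mempool. The helper name and the cache/priv-size values below are illustrative, not from the patch:

```c
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Create a pdump-local mbuf pool explicitly backed by the pure-SW
 * multi-producer/multi-consumer ring ops, independent of whatever
 * mempool ops the platform build config selects by default. */
static struct rte_mempool *
create_sw_mbuf_pool(const char *name, unsigned int nb_mbufs,
		    uint16_t data_room_size)
{
	return rte_pktmbuf_pool_create_by_ops(name, nb_mbufs,
			250 /* cache size, illustrative */,
			0 /* priv size */,
			data_room_size, rte_socket_id(),
			"ring_mp_mc");
}
```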
On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> Hi,
>
> On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > 15/03/2019 16:27, Harman Kalra:
> > > Since pdump uses SW rings to manage packets,
> > > it should use a SW ring mempool for managing
> > > its own copy of packets.
> >
> > I'm not sure I understand the reasoning.
> > Reshma, Olivier, Andrew, any opinion?
> >
> > Let's take a decision for this very old patch.
>
> From what I understand, many packet mempools are created to store
> the copies of dumped packets. I suppose that it may not be possible
> to create that many mempools using the "best" mbuf pool ops (from
> rte_mbuf_best_mempool_ops()).
>
> Using "ring_mp_mc" as the mempool ops should always be possible.
> I think it would be safer to use "ring_mp_mc" instead of
> CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter can be
> overridden on a specific platform.
>
> Olivier

Following are some reasons for this patch:

1. As we all know, the dpdk-pdump app creates a mempool for receiving
packets (from the primary process) into mbufs, which are then tx'ed
into the pcap device and freed. Using a HW mempool for the dpdk-pdump
mbufs was generating a segmentation fault, because the HW mempool's
VFIO is set up by the primary process and the secondary does not have
access to its BAR regions.

2. Setting up a separate HW mempool VFIO device for the secondary
generates the following error:
"cannot find TAILQ entry for PCI device!"
http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
which means the secondary cannot set up a new device that was not set
up by the primary.

3. Since pdump creates the mempool only for its own local mbufs, we do
not see a need for a HW mempool; a SW mempool is, in our opinion,
capable of working in all conditions.
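To sketch the constraint being described here (a hypothetical helper, not part of the patch, which simply forces SW ops unconditionally): HW-backed mempool ops can fault in a secondary process because the device BARs were mapped only by the primary, so ops could in principle be selected by process type:

```c
#include <rte_eal.h>
#include <rte_mbuf_pool_ops.h>

/* Hypothetical helper: in a secondary process, HW-backed mempool ops
 * may fault because the device BARs were mapped only by the primary,
 * so fall back to the pure-SW ring ops there. */
static const char *
pdump_pool_ops(void)
{
	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
		return "ring_mp_mc";
	return rte_mbuf_best_mempool_ops();
}
```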
05/07/2019 16:39, Harman Kalra:
> On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> > On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > > 15/03/2019 16:27, Harman Kalra:
> > > > Since pdump uses SW rings to manage packets,
> > > > it should use a SW ring mempool for managing
> > > > its own copy of packets.
> > >
> > > I'm not sure I understand the reasoning.
> > > Reshma, Olivier, Andrew, any opinion?
> > >
> > > Let's take a decision for this very old patch.
> >
> > From what I understand, many packet mempools are created to store
> > the copies of dumped packets. I suppose that it may not be possible
> > to create that many mempools using the "best" mbuf pool ops (from
> > rte_mbuf_best_mempool_ops()).
> >
> > Using "ring_mp_mc" as the mempool ops should always be possible.
> > I think it would be safer to use "ring_mp_mc" instead of
> > CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter can be
> > overridden on a specific platform.
> >
> > Olivier
>
> Following are some reasons for this patch:
>
> 1. As we all know, the dpdk-pdump app creates a mempool for receiving
> packets (from the primary process) into mbufs, which are then tx'ed
> into the pcap device and freed. Using a HW mempool for the dpdk-pdump
> mbufs was generating a segmentation fault, because the HW mempool's
> VFIO is set up by the primary process and the secondary does not have
> access to its BAR regions.
>
> 2. Setting up a separate HW mempool VFIO device for the secondary
> generates the following error:
> "cannot find TAILQ entry for PCI device!"
> http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
> which means the secondary cannot set up a new device that was not set
> up by the primary.
>
> 3. Since pdump creates the mempool only for its own local mbufs, we do
> not see a need for a HW mempool; a SW mempool is, in our opinion,
> capable of working in all conditions.

OK

The commit log is just missing the explanation that a HW mempool
cannot be used in the secondary process if it was initialized in the
primary, and cannot be initialized in the secondary process.
Then it will become clear :)

Please, do you want to reword a v2?
diff --git a/app/pdump/main.c b/app/pdump/main.c
index ccf2a1d2f..d0e342645 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -598,11 +598,12 @@ create_mp_ring_vdev(void)
 	mbuf_pool = rte_mempool_lookup(mempool_name);
 	if (mbuf_pool == NULL) {
 		/* create mempool */
-		mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
+		mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
 				pt->total_num_mbufs,
 				MBUF_POOL_CACHE_SIZE, 0,
 				pt->mbuf_data_size,
-				rte_socket_id());
+				rte_socket_id(),
+				RTE_MBUF_DEFAULT_MEMPOOL_OPS);
 		if (mbuf_pool == NULL) {
 			cleanup_rings();
 			rte_exit(EXIT_FAILURE,
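Note that the diff as posted passes RTE_MBUF_DEFAULT_MEMPOOL_OPS, which a platform's build config can override, rather than the literal "ring_mp_mc" Olivier suggested above. For anyone reproducing the crash described in the thread, a quick way to confirm which ops a pool actually resolved to (a debugging sketch, not part of the patch) is to read the pool's ops table:

```c
#include <stdio.h>
#include <rte_mempool.h>

/* Print the mempool ops a pool ended up with; on platforms that
 * override the default ops, this shows whether a HW-backed pool
 * was silently selected. */
static void
print_pool_ops(const struct rte_mempool *mp)
{
	struct rte_mempool_ops *ops = rte_mempool_get_ops(mp->ops_index);

	printf("mempool '%s' uses ops '%s'\n", mp->name, ops->name);
}
```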