[dpdk-dev] [PATCH 1/2] doc: add guide for debug and troubleshoot

From: Vipin Varghese <vipin.varghese@intel.com>
To: dev@dpdk.org, john.mcnamara@intel.com
Cc: stephen1.byrne@intel.com, michael.j.glynn@intel.com, amol.patel@intel.com, sivaprasad.tummala@intel.com
Date: Wed, 7 Nov 2018 15:05:52 +0530
Message-Id: <20181107093553.28868-1-vipin.varghese@intel.com>
Series: [1/2] doc: add guide for debug and troubleshoot (v1)
Delegate: Thomas Monjalon <thomas@monjalon.net>
State: superseded
Checks: success
Add a user guide on debugging and troubleshooting common issues and
bottlenecks found in various application models running in single or
multiple stages.

Signed-off-by: Vipin Varghese <vipin.varghese@intel.com>
---
 doc/guides/howto/debug_troubleshoot_guide.rst | 349 ++++++++++++++++++
 doc/guides/howto/index.rst                    |   1 +
 2 files changed, 350 insertions(+)
 create mode 100644 doc/guides/howto/debug_troubleshoot_guide.rst

doc/guides/howto/debug_troubleshoot_guide.rst (new file):

.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2018 Intel Corporation.

.. _debug_troubleshoot_via_pmd:

Debug & Troubleshoot guide via PMD
==================================

DPDK applications can be designed to run as anything from a single-threaded
simple stage to multiple threads with complex pipeline stages. These
applications can use poll mode drivers, which help in offloading CPU cycles.
A few models are

* single primary
* multiple primary
* single primary, single secondary
* single primary, multiple secondary

In all the above cases, it is a tedious task to isolate, debug and understand
odd behaviour which occurs randomly or periodically. The goal of this guide is
to share and explore a few commonly seen patterns and behaviours, and then to
isolate and identify the root cause via step-by-step debugging at various
processing stages.

Application Overview
--------------------

Let us take up an example application as a reference for explaining the
issues and patterns commonly seen. The sample application in discussion makes
use of the single primary model with various pipeline stages.
The application uses PMDs and libraries such as service cores, mempool,
pkt mbuf, event, crypto, QoS and eth.

The overview of an application modeled using PMDs is shown in
:numref:`dtg_sample_app_model`.

.. _dtg_sample_app_model:

.. figure:: img/dtg_sample_app_model.*

   Overview of the pipeline stages of an application

Bottleneck Analysis
-------------------

To debug bottleneck and performance issues, the desired application is made
to run in an environment matching the following:

- Linux 64-bit|32-bit
- DPDK PMDs and libraries are used
- Libraries and PMDs are either static or shared, but not both
- Machine flag optimizations of gcc or the compiler are kept constant

Is there a mismatch in packet rate (received < sent)?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

RX port and associated core :numref:`dtg_rx_rate`.

.. _dtg_rx_rate:

.. figure:: img/dtg_rx_rate.*

   RX send rate compared against the received rate

#. Are generic configurations correct?

   - What is the port speed and duplex? rte_eth_link_get()
   - Are packets of larger sizes dropped? rte_eth_dev_get_mtu()
   - Are only specific MACs received? rte_eth_promiscuous_get()

#. Are there NIC-specific drops?

   - Check rte_eth_rx_queue_info_get() for nb_desc and scattered_rx
   - Check rte_eth_stats_get() for stats per queue
   - Do stats of the other queues show no change? Check via
     rte_eth_dev_rss_hash_conf_get()

#. If the problem still persists, it might be at the RX lcore thread

   - Check whether the RX thread, distributor or event RX adapter is holding
     or processing more than required
   - Try using rte_prefetch_non_temporal(), which hints that the pulled mbuf
     is brought into cache only temporarily

Are there packet drops (receive|transmit)?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

RX-TX ports and associated cores :numref:`dtg_rx_tx_drop`.

.. _dtg_rx_tx_drop:

.. figure:: img/dtg_rx_tx_drop.*

   RX-TX drops
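Before walking through the checklist, a quick read of the basic port
statistics often localises where packets disappear. The fragment below is an
illustrative sketch, not part of the patch; it assumes an initialised EAL and
a started port with hypothetical port_id 0:

.. code-block:: c

   struct rte_eth_stats stats;

   /* imissed: dropped by HW because RX queues were full;
    * ierrors: erroneous received packets; oerrors: failed transmits;
    * rx_nombuf: RX mbuf allocation failures. */
   if (rte_eth_stats_get(0 /* port_id, hypothetical */, &stats) == 0)
       printf("imissed=%" PRIu64 " ierrors=%" PRIu64
              " oerrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
              stats.imissed, stats.ierrors,
              stats.oerrors, stats.rx_nombuf);

Comparing these counters before and after a test run usually tells whether the
loss is at the NIC, in SW queues, or at transmit.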
#. At RX

   - Get the number of RX queues via rte_eth_dev_info_get() for nb_rx_queues
   - Check for misses and errors via rte_eth_stats_get() for imissed,
     ierrors, q_errors and rx_nombuf, and check the mbuf reference count
     with rte_mbuf_refcnt_read()

#. At TX

   - Are we transmitting in bulk to reduce the TX descriptor overhead?
   - Check rte_eth_stats_get() for oerrors and q_errors, and check the mbuf
     reference count

Are there object drops at the producer point for the ring?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Producer point for the ring :numref:`dtg_producer_ring`.

.. _dtg_producer_ring:

.. figure:: img/dtg_producer_ring.*

   Producer point for rings

#. Performance for the producer

   - Fetch the type of the ring: rte_ring_dump() for flags (RING_F_SP_ENQ)
   - If '(burst enqueue - actual enqueue) > 0', check rte_ring_count() or
     rte_ring_free_count()
   - If 'burst or single enqueue is 0', there may be no more space; check
     with rte_ring_full()

Are there object drops at the consumer point for the ring?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Consumer point for the ring :numref:`dtg_consumer_ring`.

.. _dtg_consumer_ring:

.. figure:: img/dtg_consumer_ring.*

   Consumer point for rings

#. Performance for the consumer

   - Fetch the type of the ring: rte_ring_dump() for flags (RING_F_SC_DEQ)
   - If '(burst dequeue - actual dequeue) > 0', check rte_ring_free_count()
   - If 'burst or single dequeue' always returns 0, check whether the ring
     is empty via rte_ring_empty()

Are packets or objects not processed at the desired rate?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Memory objects close to NUMA :numref:`dtg_mempool`.

.. _dtg_mempool:

.. figure:: img/dtg_mempool.*

   Memory objects have to be close to the device per NUMA

#. Is the performance low?

   - Are packets received from multiple NICs? rte_eth_dev_count_all()
   - Are the NIC interfaces on different sockets? Use
     rte_eth_dev_socket_id()
   - Is the mempool created with the right socket?
     rte_mempool_create() or rte_pktmbuf_pool_create()
   - Are we seeing drops on a specific socket? It might require more mempool
     objects; try allocating more objects
   - Is there a single RX thread for multiple NICs? Try having multiple
     lcores read from a fixed interface. Or we might be hitting the cache
     limit, so increase cache_size for pool_create()

#. Are we still seeing low performance?

   - Check whether there are sufficient objects in the mempool with
     rte_mempool_avail_count()
   - Is there a failure for some packets? We might be getting packets with
     size > mbuf data size; check rte_pktmbuf_is_contiguous()
   - If a user pthread is used for object access, use
     rte_mempool_cache_create()
   - Try using 1 GB huge pages instead of 2 MB. If there is a difference,
     then try rte_mem_lock_page() for the 2 MB pages

.. note::

   A stall in the release of mbufs can be because

   * the processing pipeline is too heavy
   * there are too many stages
   * TX is not transferred at the desired rate
   * multi-segment is not offloaded to the TX device
   * of application misuse scenarios, such as

     - not freeing packets
     - invalid rte_pktmbuf_refcnt_set()
     - invalid rte_pktmbuf_prefree_seg()

Is there a difference in performance for crypto?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Crypto device and PMD :numref:`dtg_crypto`.

.. _dtg_crypto:

.. figure:: img/dtg_crypto.*

   Crypto and interaction with the PMD device

#. Are generic configurations correct?

   - Get the total number of crypto devices: rte_cryptodev_count()
   - Cross-check that the SW or HW flags are configured properly:
     rte_cryptodev_info_get() for feature_flags
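Enumerating the crypto devices and their capability flags is the quickest way
to confirm the SW-versus-HW assumption above. An illustrative sketch
(assumes an initialised DPDK environment; not part of the patch):

.. code-block:: c

   uint8_t nb_devs = rte_cryptodev_count();

   for (uint8_t dev_id = 0; dev_id < nb_devs; dev_id++) {
       struct rte_cryptodev_info info;

       /* feature_flags tells whether the device is HW accelerated or a
        * SW implementation bound to specific CPU ISA extensions. */
       rte_cryptodev_info_get(dev_id, &info);
       printf("cryptodev %u: driver %s, HW accelerated: %s\n",
              dev_id, info.driver_name,
              (info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED) ?
                  "yes" : "no");
   }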
#. Is the enqueue request > actual enqueue (drops)?

   - Is the queue pair set up for the proper node?
     rte_cryptodev_queue_pair_setup() for socket_id
   - Is the session_pool created from the same socket_id as the queue pair?
   - Is the enqueue thread on the same socket_id?
   - Check rte_cryptodev_stats_get() for the enqueue and dequeue error
     counts
   - Are there multiple threads enqueueing or dequeueing from the same
     queue pair?

#. Is the enqueue rate > dequeue rate?

   - Is the dequeue lcore thread on the same socket_id?
   - If SW crypto is in use, check whether the crypto library is built with
     the right (SIMD) flags, or check whether the queue pair uses the CPU
     ISA via rte_cryptodev_info_get() for feature_flags for AVX|SSE
   - If HW crypto is in use, is the card on the same NUMA socket as the
     queue pair and session pool?

Worker functions not giving performance?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Custom worker function :numref:`dtg_distributor_worker`.

.. _dtg_distributor_worker:

.. figure:: img/dtg_distributor_worker.*

   Custom worker function performance drops

#. Performance

   - Do the threads context-switch frequently? Identify the lcore with
     rte_lcore_id() and the lcore index mapping with rte_lcore_index().
     Performance is best when the mapping of thread to core is 1:1.
   - Check the lcore role type and state: rte_eal_lcore_role() for RTE, OFF
     and SERVICE. A user function on a service core might be sharing
     timeslots with other functions.
   - Check the CPU core: check rte_thread_get_affinity() and
     rte_eal_get_lcore_state() for the run state.

#. Debug

   - Mode of operation? rte_eal_get_configuration() for the master lcore;
     fetch the lcore|service|NUMA counts and process_type.
   - Check the lcore run mode: rte_eal_lcore_role() for RTE, OFF, SERVICE.
   - Process details?
     rte_dump_stack(), rte_dump_registers() and rte_memdump() will give
     insights.

Service functions are not frequent enough?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Service functions on service cores :numref:`dtg_service`.

.. _dtg_service:

.. figure:: img/dtg_service.*

   Functions running on service cores

#. Performance

   - Get the service core count with rte_service_lcore_count() and compare
     it with the result of rte_eal_get_configuration()
   - Check whether the registered service is available:
     rte_service_get_by_name(), rte_service_get_count() and
     rte_service_get_name()
   - Is the given service running in parallel on multiple lcores?
     rte_service_probe_capability() and rte_service_map_lcore_get()
   - Is the service running? rte_service_runstate_get()

#. Debug

   - Find how many services are running on a specific service lcore with
     rte_service_lcore_count_services()
   - Generic debug via rte_service_dump()

Is there a bottleneck in eventdev?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Are generic configurations correct?

   - Get the number of event devices: rte_event_dev_count()
   - Are they created on the correct socket_id? rte_event_dev_socket_id()
   - Check the HW or SW capabilities: rte_event_dev_info_get() for
     event_qos, queue_all_types, burst_mode, multiple_queue_port,
     max_event_queue|dequeue_depth
   - Are packets stuck in a queue? Check for stages (event queues) where
     packets are looped back to the same or previous stages.

#. Performance drops in enqueue (event count > actual enqueue)?

   - Dump the eventdev information: rte_event_dev_dump()
   - Check the stats for the queues and ports of the eventdev
   - Check the inflight count and the current queue elements for
     enqueue|dequeue

How to debug QoS via TM?
~~~~~~~~~~~~~~~~~~~~~~~~

TM on the TX interface :numref:`dtg_qos_tx`.

.. _dtg_qos_tx:

.. figure:: img/dtg_qos_tx.*

   Traffic manager just before TX
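Before debugging the hierarchy itself, it helps to confirm what the port's
traffic manager supports at all. An illustrative sketch (hypothetical
port_id 0, DPDK environment assumed; not part of the patch):

.. code-block:: c

   struct rte_tm_capabilities cap;
   struct rte_tm_error tm_err;

   /* Reports the limits the checklist below refers to: maximum node and
    * level counts, and whether all leaf nodes behave identically. */
   if (rte_tm_capabilities_get(0 /* port_id */, &cap, &tm_err) == 0)
       printf("nodes max=%u, levels max=%u, leaf nodes identical=%d\n",
              cap.n_nodes_max, cap.n_levels_max, cap.leaf_nodes_identical);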
#. Is the configuration right?

   - Get the current capabilities for the DPDK port:
     rte_tm_capabilities_get() for max nodes, levels, shaper_private,
     shaper_shared, sched_n_children and stats_mask
   - Check whether the current leaves are configured identically:
     rte_tm_capabilities_get() for leaf_nodes_identical
   - Get the leaf nodes for a DPDK port:
     rte_tm_get_number_of_leaf_nodes()
   - Check the level capabilities with rte_tm_level_capabilities_get() for
     n_nodes

     - max, nonleaf_max, leaf_max
     - identical, non_identical
     - shaper_private_supported
     - stats_mask
     - cman WRED packet|byte supported
     - cman head drop supported

   - Check the node capabilities with rte_tm_node_capabilities_get() for
     n_nodes

     - shaper_private_supported
     - stats_mask
     - cman WRED packet|byte supported
     - cman head drop supported

   - Debug via stats: rte_tm_node_stats_update() and
     rte_tm_node_stats_read()

Packet is not of the right format?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Packet capture before and after processing :numref:`dtg_pdump`.

.. _dtg_pdump:

.. figure:: img/dtg_pdump.*

   Capture points of traffic at RX-TX

#. With the primary enabling pdump, the secondary can then access it. It
   copies packets from specific RX or TX queues to the secondary process's
   ring buffers.

.. note::

   Need to explore:

   * if the secondary shares the same interface, can we enable capture from
     the secondary for RX|TX happening on the primary?
   * specific PMD private data: dump the details
   * user private data, if present: dump the details

How to develop custom code to debug?
------------------------------------

- For a single process, the debug functionality is to be added in the same
  process
- For multiple processes, the debug functionality can be added to a
  secondary multi-process

These can be achieved by the primary's debug functions being invoked via

#. a timer call-back
#. a service function running under a service core
#. a USR1 or USR2 signal handler

doc/guides/howto/index.rst (append the new guide to the HowTo Guides
toctree):

    virtio_user_as_exceptional_path
    packet_capture_framework
    telemetry
    debug_troubleshoot_guide