From patchwork Tue Jan 2 12:24:24 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 135642
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH v5 1/2] net/ice: add Tx scheduling tree dump support
Date: Tue, 2 Jan 2024 07:24:24 -0500
Message-Id: <20240102122425.3480836-1-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20231226185428.3158880-1-qi.z.zhang@intel.com>
References: <20231226185428.3158880-1-qi.z.zhang@intel.com>
List-Id: DPDK patches and discussions

Added testpmd CLI support for dumping the Tx scheduling tree.

Usage:

    testpmd>txsched dump <port_id> <brief|detail> <output_file>

The output file is in "dot" format, which can be converted into an
image file using Graphviz.

- In "brief" mode, all scheduling nodes in the tree are displayed.
- In "detail" mode, each node's configuration parameters are also
  displayed.

Renamed `ice_ddp_package.c` to `ice_diagnose.c`, which now contains
all CLI support for diagnostic purposes.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v5:
- ignore the error when querying a node fails at the queue level, as
  the queue may be stopped
v4:
- show node type in brief mode
v3:
- fix incorrect parameter when query rl profile
v2:
- fix CI build issue

 .../ice/{ice_ddp_package.c => ice_diagnose.c} | 373 ++++++++++++++++++
 drivers/net/ice/ice_ethdev.h                  |   3 +
 drivers/net/ice/ice_testpmd.c                 |  65 +++
 drivers/net/ice/meson.build                   |   2 +-
 drivers/net/ice/version.map                   |   1 +
 5 files changed, 443 insertions(+), 1 deletion(-)
 rename drivers/net/ice/{ice_ddp_package.c => ice_diagnose.c} (60%)

diff --git a/drivers/net/ice/ice_ddp_package.c b/drivers/net/ice/ice_diagnose.c
similarity index 60%
rename from drivers/net/ice/ice_ddp_package.c
rename to drivers/net/ice/ice_diagnose.c
index 0aa19eb282..2b9794d212 100644
--- a/drivers/net/ice/ice_ddp_package.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -11,6 +11,7 @@
 #include
 
 #include "ice_ethdev.h"
+#include "ice_rxtx.h"
 
 #define ICE_BLK_MAX_COUNT 512
 #define ICE_BUFF_SEG_HEADER_FLAG 0x1
@@ -507,3 +508,375 @@ int rte_pmd_ice_dump_switch(uint16_t port, uint8_t **buff, uint32_t *size)
 
 	return ice_dump_switch(dev, buff, size);
 }
+
+static void print_rl_profile(struct ice_aqc_rl_profile_elem *prof,
+			     FILE *stream)
+{
+	fprintf(stream, "\t\t\t\t\t<td>\n");
+	fprintf(stream, "\t\t\t\t\t\t<table>\n");
+
+	fprintf(stream, "\t\t\t\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>id</td>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>%d</td>\n", prof->profile_id);
+	fprintf(stream, "\t\t\t\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>max burst size</td>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>%d</td>\n", prof->max_burst_size);
+	fprintf(stream, "\t\t\t\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>rate limit multiply</td>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>%d</td>\n", prof->rl_multiply);
+	fprintf(stream, "\t\t\t\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>wake up calculation</td>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>%d</td>\n", prof->wake_up_calc);
+	fprintf(stream, "\t\t\t\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>rate limit encode</td>\n");
+	fprintf(stream, "\t\t\t\t\t\t\t\t<td>%d</td>\n", prof->rl_encode);
+	fprintf(stream, "\t\t\t\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t\t\t</table>\n");
+	fprintf(stream, "\t\t\t\t\t</td>\n");
+}
+
+static
+void print_elem_type(FILE *stream, u8 type)
+{
+	switch (type) {
+	case 1:
+		fprintf(stream, "root");
+		break;
+	case 2:
+		fprintf(stream, "tc");
+		break;
+	case 3:
+		fprintf(stream, "se_generic");
+		break;
+	case 4:
+		fprintf(stream, "entry_point");
+		break;
+	case 5:
+		fprintf(stream, "leaf");
+		break;
+	default:
+		fprintf(stream, "%d", type);
+		break;
+	}
+}
+
+static
+void print_valid_sections(FILE *stream, u8 vs)
+{
+	if ((vs & 0x1) != 0)
+		fprintf(stream, "generic ");
+	if ((vs & 0x2) != 0)
+		fprintf(stream, "cir ");
+	if ((vs & 0x4) != 0)
+		fprintf(stream, "eir ");
+	if ((vs & 0x8) != 0)
+		fprintf(stream, "shared ");
+}
+
+static
+void print_scheduling_mode(FILE *stream, bool flag)
+{
+	if (flag)
+		fprintf(stream, "pps");
+	else
+		fprintf(stream, "bps");
+}
+
+static
+void print_priority_mode(FILE *stream, bool flag)
+{
+	if (flag)
+		fprintf(stream, "single priority node");
+	else
+		fprintf(stream, "wfq");
+}
+
+static
+void print_node(struct ice_aqc_txsched_elem_data *data,
+		struct ice_aqc_rl_profile_elem *cir_prof,
+		struct ice_aqc_rl_profile_elem *eir_prof,
+		struct ice_aqc_rl_profile_elem *shared_prof,
+		bool detail, FILE *stream)
+{
+	fprintf(stream, "\tNODE_%d [\n", data->node_teid);
+	fprintf(stream, "\t\tlabel=<\n");
+
+	fprintf(stream, "\t\t\t<table>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> teid </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td> %d </td>\n", data->node_teid);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> type </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td>");
+	print_elem_type(stream, data->data.elem_type);
+	fprintf(stream, "</td>\n");
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	if (!detail)
+		goto brief;
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> valid sections </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td>");
+	print_valid_sections(stream, data->data.valid_sections);
+	fprintf(stream, "</td>\n");
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> scheduling mode </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td>");
+	print_scheduling_mode(stream, (data->data.generic & 0x1) != 0);
+	fprintf(stream, "</td>\n");
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> priority </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td> %d </td>\n", (data->data.generic >> 1) & 0x7);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> priority mode </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td>");
+	print_priority_mode(stream, ((data->data.generic >> 4) & 0x1) != 0);
+	fprintf(stream, "</td>\n");
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> adjustment value </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td> %d </td>\n", (data->data.generic >> 5) & 0x3);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> suspended </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td> %d </td>\n", data->data.flags & 0x1);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> cir bw profile </td>\n");
+	if (cir_prof == NULL)
+		fprintf(stream, "\t\t\t\t\t<td> default </td>\n");
+	else
+		print_rl_profile(cir_prof, stream);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> cir bw weight </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td> %d </td>\n", data->data.cir_bw.bw_alloc);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> eir bw profile </td>\n");
+	if (eir_prof == NULL)
+		fprintf(stream, "\t\t\t\t\t<td> default </td>\n");
+	else
+		print_rl_profile(eir_prof, stream);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> eir bw weight </td>\n");
+	fprintf(stream, "\t\t\t\t\t<td> %d </td>\n", data->data.eir_bw.bw_alloc);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+	fprintf(stream, "\t\t\t\t<tr>\n");
+	fprintf(stream, "\t\t\t\t\t<td> shared rl profile </td>\n");
+	if (shared_prof == NULL)
+		fprintf(stream, "\t\t\t\t\t<td> default </td>\n");
+	else
+		print_rl_profile(shared_prof, stream);
+	fprintf(stream, "\t\t\t\t</tr>\n");
+
+brief:
+	fprintf(stream, "\t\t\t</table>\n");
+
+	fprintf(stream, "\t\t>\n");
+	fprintf(stream, "\t\tshape=plain\n");
+	fprintf(stream, "\t]\n");
+
+	if (data->parent_teid != 0xFFFFFFFF)
+		fprintf(stream, "\tNODE_%d -> NODE_%d\n", data->parent_teid, data->node_teid);
+}
+
+static
+int query_rl_profile(struct ice_hw *hw,
+		     uint8_t level, uint8_t flags, uint16_t profile_id,
+		     struct ice_aqc_rl_profile_elem *data)
+{
+	enum ice_status ice_status;
+
+	data->level = level;
+	data->flags = flags;
+	data->profile_id = profile_id;
+
+	ice_status = ice_aq_query_rl_profile(hw, 1, data,
+					     sizeof(struct ice_aqc_rl_profile_elem), NULL);
+
+	if (ice_status != ICE_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to query rl profile.");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static
+int query_node(struct ice_hw *hw, uint32_t child, uint32_t *parent,
+	       uint8_t level, bool detail, FILE *stream)
+{
+	struct ice_aqc_txsched_elem_data data;
+	enum ice_status status;
+	struct ice_aqc_rl_profile_elem cir_prof;
+	struct ice_aqc_rl_profile_elem eir_prof;
+	struct ice_aqc_rl_profile_elem shared_prof;
+	struct ice_aqc_rl_profile_elem *cp = NULL;
+	struct ice_aqc_rl_profile_elem *ep = NULL;
+	struct ice_aqc_rl_profile_elem *sp = NULL;
+	int ret;
+
+	status = ice_sched_query_elem(hw, child, &data);
+	if (status != ICE_SUCCESS) {
+		if (level == hw->num_tx_sched_layers) {
+			/* ignore the error when a queue has been stopped. */
+			PMD_DRV_LOG(WARNING, "Failed to query queue node %d.", child);
+			*parent = 0xffffffff;
+			return 0;
+		} else {
+			PMD_DRV_LOG(ERR, "Failed to query scheduling node %d.", child);
+			return -EINVAL;
+		}
+	}
+
+	*parent = data.parent_teid;
+
+	if (data.data.cir_bw.bw_profile_idx != 0) {
+		ret = query_rl_profile(hw, level, 0, data.data.cir_bw.bw_profile_idx, &cir_prof);
+
+		if (ret)
+			return ret;
+		cp = &cir_prof;
+	}
+
+	if (data.data.eir_bw.bw_profile_idx != 0) {
+		ret = query_rl_profile(hw, level, 1, data.data.eir_bw.bw_profile_idx, &eir_prof);
+
+		if (ret)
+			return ret;
+		ep = &eir_prof;
+	}
+
+	if (data.data.srl_id != 0) {
+		ret = query_rl_profile(hw, level, 2, data.data.srl_id, &shared_prof);
+
+		if (ret)
+			return ret;
+		sp = &shared_prof;
+	}
+
+	print_node(&data, cp, ep, sp, detail, stream);
+
+	return 0;
+}
+
+static
+int query_nodes(struct ice_hw *hw,
+		uint32_t *children, int child_num,
+		uint32_t *parents, int *parent_num,
+		uint8_t level, bool detail,
+		FILE *stream)
+{
+	uint32_t parent;
+	int i;
+	int j;
+
+	*parent_num = 0;
+	for (i = 0; i < child_num; i++) {
+		bool exist = false;
+		int ret;
+
+		ret = query_node(hw, children[i], &parent, level, detail, stream);
+		if (ret)
+			return -EINVAL;
+
+		for (j = 0; j < *parent_num; j++) {
+			if (parents[j] == parent) {
+				exist = true;
+				break;
+			}
+		}
+
+		if (!exist && parent != 0xFFFFFFFF)
+			parents[(*parent_num)++] = parent;
+	}
+
+	return 0;
+}
+
+int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
+{
+	struct rte_eth_dev *dev;
+	struct ice_hw *hw;
+	struct ice_pf *pf;
+	struct ice_q_ctx *q_ctx;
+	uint16_t q_num;
+	uint16_t i;
+	struct ice_tx_queue *txq;
+	uint32_t buf1[256];
+	uint32_t buf2[256];
+	uint32_t *children = buf1;
+	uint32_t *parents = buf2;
+	int child_num = 0;
+	int parent_num = 0;
+	uint8_t level;
+
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);
+
+	dev = &rte_eth_devices[port];
+	if (!is_ice_supported(dev))
+		return -ENOTSUP;
+
+	dev = &rte_eth_devices[port];
+	hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	level = hw->num_tx_sched_layers;
+
+	q_num = dev->data->nb_tx_queues;
+
+	/* main vsi */
+	for (i = 0; i < q_num; i++) {
+		txq = dev->data->tx_queues[i];
+		q_ctx = ice_get_lan_q_ctx(hw, txq->vsi->idx, 0, i);
+		children[child_num++] = q_ctx->q_teid;
+	}
+
+	/* fdir vsi */
+	q_ctx = ice_get_lan_q_ctx(hw, pf->fdir.fdir_vsi->idx, 0, 0);
+	children[child_num++] = q_ctx->q_teid;
+
+	fprintf(stream, "digraph tx_sched {\n");
+	while (child_num > 0) {
+		int ret;
+		ret = query_nodes(hw, children, child_num,
+				  parents, &parent_num,
+				  level, detail, stream);
+		if (ret)
+			return ret;
+
+		children = parents;
+		child_num = parent_num;
+		level--;
+	}
+	fprintf(stream, "}\n");
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index d607f028e0..1338c80d14 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -739,4 +739,7 @@ int rte_pmd_ice_dump_package(uint16_t port, uint8_t **buff, uint32_t *size);
 
 __rte_experimental
 int rte_pmd_ice_dump_switch(uint16_t port, uint8_t **buff, uint32_t *size);
+
+__rte_experimental
+int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream);
 #endif /* _ICE_ETHDEV_H_ */
diff --git a/drivers/net/ice/ice_testpmd.c b/drivers/net/ice/ice_testpmd.c
index a7a8d0c53c..98c02d68c6 100644
--- a/drivers/net/ice/ice_testpmd.c
+++ b/drivers/net/ice/ice_testpmd.c
@@ -3,6 +3,7 @@
  */
 
 #include
+#include
 
 #include
 #include
@@ -148,6 +149,63 @@ cmdline_parse_inst_t cmd_ddp_dump_switch = {
 	},
 };
 
+/* Dump Tx Scheduling Tree configuration, only for ice PF */
+struct cmd_txsched_dump_result {
+	cmdline_fixed_string_t txsched;
+	cmdline_fixed_string_t dump;
+	portid_t port_id;
+	cmdline_fixed_string_t mode;
+	char filepath[];
+};
+
+cmdline_parse_token_string_t cmd_txsched_dump_txsched =
+	TOKEN_STRING_INITIALIZER(struct cmd_txsched_dump_result, txsched, "txsched");
+cmdline_parse_token_string_t cmd_txsched_dump_dump =
+	TOKEN_STRING_INITIALIZER(struct cmd_txsched_dump_result, dump, "dump");
+cmdline_parse_token_num_t cmd_txsched_dump_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_txsched_dump_result, port_id, RTE_UINT16);
+cmdline_parse_token_string_t cmd_txsched_dump_mode =
+	TOKEN_STRING_INITIALIZER(struct cmd_txsched_dump_result, mode, "brief#detail");
+cmdline_parse_token_string_t cmd_txsched_dump_filepath =
+	TOKEN_STRING_INITIALIZER(struct cmd_txsched_dump_result, filepath, NULL);
+
+static void
+cmd_txsched_dump_parsed(void *parsed_result,
+			__rte_unused struct cmdline *cl,
+			__rte_unused void *data)
+{
+	struct cmd_txsched_dump_result *res = parsed_result;
+	bool detail = false;
+	FILE *fp;
+
+	if (!strcmp(res->mode, "detail"))
+		detail = true;
+
+	fp = fopen(res->filepath, "w");
+	if (fp == NULL) {
+		fprintf(stderr, "Failed to open file\n");
+		return;
+	}
+
+	if (rte_pmd_ice_dump_txsched(res->port_id, detail, fp))
+		fprintf(stderr, "Failed to dump Tx scheduling runtime configuration.\n");
+	fclose(fp);
+}
+
+cmdline_parse_inst_t cmd_txsched_dump = {
+	.f = cmd_txsched_dump_parsed,
+	.data = NULL,
+	.help_str = "txsched dump <port_id> <brief|detail> <config_path>",
+	.tokens = {
+		(void *)&cmd_txsched_dump_txsched,
+		(void *)&cmd_txsched_dump_dump,
+		(void *)&cmd_txsched_dump_port_id,
+		(void *)&cmd_txsched_dump_mode,
+		(void *)&cmd_txsched_dump_filepath,
+		NULL,
+	},
+};
+
 static struct testpmd_driver_commands ice_cmds = {
 	.commands = {
 	{
@@ -161,8 +219,15 @@ static struct testpmd_driver_commands ice_cmds = {
 		"ddp dump switch (port_id) (config_path)\n"
 		"    Dump a runtime switch configure on a port\n\n",
+	},
+	{
+		&cmd_txsched_dump,
+		"txsched dump (port_id) (brief|detail) (config_path)\n"
+		"    Dump Tx scheduling runtime configuration on a port\n\n",
+	},
 	{ NULL, NULL },
 	},
 };
+
 TESTPMD_ADD_DRIVER_COMMANDS(ice_cmds)
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index a957fc5d3a..b7f2188e62 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -6,7 +6,7 @@ objs = [base_objs]
 
 sources = files(
         'ice_acl_filter.c',
-        'ice_ddp_package.c',
+        'ice_diagnose.c',
         'ice_ethdev.c',
         'ice_fdir_filter.c',
         'ice_generic_flow.c',
diff --git a/drivers/net/ice/version.map b/drivers/net/ice/version.map
index 4e924c8f4d..24b425d6f7 100644
--- a/drivers/net/ice/version.map
+++ b/drivers/net/ice/version.map
@@ -8,4 +8,5 @@ EXPERIMENTAL {
 	# added in 19.11
 	rte_pmd_ice_dump_package;
 	rte_pmd_ice_dump_switch;
+	rte_pmd_ice_dump_txsched;
 };

From patchwork Tue Jan 2 12:24:25 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 135643
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: qiming.yang@intel.com, wenjun1.wu@intel.com
Cc: dev@dpdk.org, Qi Zhang <qi.z.zhang@intel.com>
Subject: [PATCH v5 2/2] doc: add document for diagnostic utilities
Date: Tue, 2 Jan 2024 07:24:25 -0500
Message-Id: <20240102122425.3480836-2-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20240102122425.3480836-1-qi.z.zhang@intel.com>
References: <20231226185428.3158880-1-qi.z.zhang@intel.com>
 <20240102122425.3480836-1-qi.z.zhang@intel.com>
List-Id: DPDK patches and discussions

Document the CLI commands used for diagnostic purposes.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 doc/guides/nics/ice.rst | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 820a385b06..29309abe4d 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -411,6 +411,42 @@ To start ``testpmd``, and add vlan 10 to port 0:
 
     testpmd> rx_vlan add 10 0
 
+Diagnostic Utilities
+--------------------
+
+Dump DDP Package
+~~~~~~~~~~~~~~~~
+
+Dump the runtime packet processing pipeline configuration into a
+binary file. This helps the support team diagnose hardware
+configuration issues.
+
+Usage::
+
+    testpmd>ddp dump <port_id> <output_file>
+
+Dump Switch Configurations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Dump detailed hardware configurations related to the switch pipeline
+stage into a binary file.
+
+Usage::
+
+    testpmd>ddp dump switch <port_id> <output_file>
+
+Dump Tx Scheduling Tree
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Dump the runtime Tx scheduling tree into a DOT file.
+
+Usage::
+
+    testpmd>txsched dump <port_id> <brief|detail> <output_file>
+
+In "brief" mode, all scheduling nodes in the tree are displayed.
+In "detail" mode, each node's configuration parameters are also
+displayed.
+
 Limitations or Known issues
 ---------------------------
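
Editor's usage sketch (not part of the patch series): besides the testpmd
command, an application that links the ice PMD can call the experimental API
added in patch 1/2 directly. Only the rte_pmd_ice_dump_txsched() prototype
comes from the patch; the helper name, include choices, and output path below
are hypothetical::

    /* Minimal sketch, assuming the application can include the driver's
     * ice_ethdev.h, which declares:
     *     int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream);
     * The port is assumed to be already configured and started.
     */
    #include <stdio.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <errno.h>

    #include "ice_ethdev.h"

    static int dump_port_txsched(uint16_t port_id, bool detail, const char *path)
    {
        FILE *f = fopen(path, "w");
        int ret;

        if (f == NULL)
            return -errno;

        /* detail == true also emits each node's configuration parameters */
        ret = rte_pmd_ice_dump_txsched(port_id, detail, f);
        fclose(f);
        return ret;
    }

The dumped file is plain Graphviz DOT text, so it can be rendered with, for
example, ``dot -Tpng txsched.dot -o txsched.png``.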