From patchwork Mon Aug 12 15:28:00 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 143083
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v3 01/16] net/ice: add traffic management node query function
Date: Mon, 12 Aug 2024 16:28:00 +0100
Message-ID: <20240812152815.1132697-2-bruce.richardson@intel.com>
In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com>
References: <20240807093407.452784-1-bruce.richardson@intel.com>
 <20240812152815.1132697-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

Implement the new node querying function for the "ice" net driver.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/ice_tm.c | 48 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c
index 8a29a9e744..459446a6b0 100644
--- a/drivers/net/ice/ice_tm.c
+++ b/drivers/net/ice/ice_tm.c
@@ -17,6 +17,11 @@ static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      uint32_t weight, uint32_t level_id,
 	      const struct rte_tm_node_params *params,
 	      struct rte_tm_error *error);
+static int ice_node_query(const struct rte_eth_dev *dev, uint32_t node_id,
+	      uint32_t *parent_node_id, uint32_t *priority,
+	      uint32_t *weight, uint32_t *level_id,
+	      struct rte_tm_node_params *params,
+	      struct rte_tm_error *error);
 static int ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	    struct rte_tm_error *error);
 static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
@@ -35,6 +40,7 @@ const struct rte_tm_ops ice_tm_ops = {
 	.node_add = ice_tm_node_add,
 	.node_delete = ice_tm_node_delete,
 	.node_type_get = ice_node_type_get,
+	.node_query = ice_node_query,
 	.hierarchy_commit = ice_hierarchy_commit,
 };

@@ -219,6 +225,48 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 	return 0;
 }

+static int
+ice_node_query(const struct rte_eth_dev *dev, uint32_t node_id,
+	      uint32_t *parent_node_id, uint32_t *priority,
+	      uint32_t *weight, uint32_t *level_id,
+	      struct rte_tm_node_params *params,
+	      struct rte_tm_error *error)
+{
+	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct ice_tm_node *tm_node;
+
+	if (node_id == RTE_TM_NODE_ID_NULL) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node id";
+		return -EINVAL;
+	}
+
+	/* check if the node id exists */
+	tm_node = find_node(pf->tm_conf.root, node_id);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EEXIST;
+	}
+
+	if (parent_node_id != NULL)
+		*parent_node_id = tm_node->parent->id;
+
+	if (priority != NULL)
+		*priority = tm_node->priority;
+
+	if (weight != NULL)
+		*weight = tm_node->weight;
+
+	if (level_id != NULL)
+		*level_id = tm_node->level;
+
+	if (params != NULL)
+		*params = tm_node->params;
+
+	return 0;
+}
+
 static inline struct ice_tm_shaper_profile *
 ice_shaper_profile_search(struct rte_eth_dev *dev, uint32_t shaper_profile_id)

From patchwork Mon Aug 12 15:28:01 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 143084
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v3 02/16] net/ice: detect stopping a flow-director queue twice
Date: Mon, 12 Aug 2024 16:28:01 +0100
Message-ID: <20240812152815.1132697-3-bruce.richardson@intel.com>
In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com>
References: <20240807093407.452784-1-bruce.richardson@intel.com>
 <20240812152815.1132697-1-bruce.richardson@intel.com>

If the flow-director queue is stopped at some point during the running
of an application, the shutdown procedure for the port issues an error
as it tries to stop the queue a second time, and fails to do so.

We can eliminate this error by setting the tail-register pointer to
NULL on stop, and checking for that condition in subsequent stop calls.
Since the register pointer is set on start, any restarting of the queue
will allow a stop call to progress as normal.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/ice_rxtx.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index f270498ed1..a150d28e73 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1139,6 +1139,10 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 			    tx_queue_id);
 		return -EINVAL;
 	}
+	if (txq->qtx_tail == NULL) {
+		PMD_DRV_LOG(INFO, "TX queue %u not started\n", tx_queue_id);
+		return 0;
+	}

 	vsi = txq->vsi;
 	q_ids[0] = txq->reg_idx;
@@ -1153,6 +1157,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}

 	txq->tx_rel_mbufs(txq);
+	txq->qtx_tail = NULL;

 	return 0;
 }

From patchwork Mon Aug 12 15:28:02 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 143085
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v3 03/16] net/ice: improve Tx scheduler graph output
Date: Mon, 12 Aug 2024 16:28:02 +0100
Message-ID: <20240812152815.1132697-4-bruce.richardson@intel.com>
In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com>
References: <20240807093407.452784-1-bruce.richardson@intel.com>
 <20240812152815.1132697-1-bruce.richardson@intel.com>

The function to dump the TX scheduler topology only adds to the chart
nodes connected to TX queues or for the flow director VSI. Change the
function to work recursively from the root node and thereby include
all scheduler nodes, whether in use or not, in the dump.

Also, improve the output of the Tx scheduler graphing function:

* Add VSI details to each node in graph
* When number of children is >16, skip middle nodes to reduce size of
  the graph, otherwise dot output is unviewable for large hierarchies
* For VSIs other than zero, use dot's clustering method to put those
  VSIs into subgraphs with borders
* For leaf nodes, display queue numbers for any nodes assigned to
  ethdev NIC Tx queues

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/ice_diagnose.c | 196 ++++++++++++---------------------
 1 file changed, 69 insertions(+), 127 deletions(-)

diff --git a/drivers/net/ice/ice_diagnose.c b/drivers/net/ice/ice_diagnose.c
index c357554707..623d84e37d 100644
--- a/drivers/net/ice/ice_diagnose.c
+++ b/drivers/net/ice/ice_diagnose.c
@@ -545,29 +545,15 @@ static void print_rl_profile(struct ice_aqc_rl_profile_elem *prof,
 	fprintf(stream, "\t\t\t\t\t\n");
 }

-static
-void print_elem_type(FILE *stream, u8 type)
+static const char *
+get_elem_type(u8 type)
 {
-	switch (type) {
-	case 1:
-		fprintf(stream, "root");
-		break;
-	case 2:
-		fprintf(stream, "tc");
-		break;
-	case 3:
-		fprintf(stream, "se_generic");
-		break;
-	case 4:
-		fprintf(stream, "entry_point");
-		break;
-	case 5:
-		fprintf(stream, "leaf");
-		break;
-	default:
-		fprintf(stream, "%d", type);
-		break;
-	}
+	static const char * const ice_sched_node_types[] = {
+		"Undefined", "Root", "TC", "SE Generic", "SW Entry", "Leaf"
+	};
+	if (type < RTE_DIM(ice_sched_node_types))
+		return ice_sched_node_types[type];
+	return "*UNKNOWN*";
 }

 static
@@ -602,7 +588,9 @@ void print_priority_mode(FILE *stream, bool flag)
 }

 static
-void print_node(struct ice_aqc_txsched_elem_data *data,
+void print_node(struct ice_sched_node *node,
+	       struct rte_eth_dev_data *ethdata,
+	       struct ice_aqc_txsched_elem_data *data,
 	       struct ice_aqc_rl_profile_elem *cir_prof,
 	       struct ice_aqc_rl_profile_elem *eir_prof,
 	       struct ice_aqc_rl_profile_elem *shared_prof,
@@ -613,17 +601,19 @@ void print_node(struct ice_aqc_txsched_elem_data *data,

 	fprintf(stream, "\t\t\t\n");

-	fprintf(stream, "\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n", data->node_teid);
-	fprintf(stream, "\t\t\t\t\n");
-
-	fprintf(stream, "\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\t\n");
-	fprintf(stream, "\t\t\t\t\n");
+	fprintf(stream, "\t\t\t\t\n", data->node_teid);
+	fprintf(stream, "\t\t\t\t\n",
+		get_elem_type(data->data.elem_type));
+	fprintf(stream, "\t\t\t\t\n", node->vsi_handle);
+	if (data->data.elem_type == ICE_AQC_ELEM_TYPE_LEAF) {
+		for (uint16_t i = 0; i < ethdata->nb_tx_queues; i++) {
+			struct ice_tx_queue *q = ethdata->tx_queues[i];
+			if (q->q_teid == data->node_teid) {
+				fprintf(stream, "\t\t\t\t\n", i);
+				break;
+			}
+		}
+	}

 	if (!detail)
 		goto brief;
@@ -705,8 +695,6 @@ void print_node(struct ice_aqc_txsched_elem_data *data,
 		fprintf(stream, "\t\tshape=plain\n");
 	fprintf(stream, "\t]\n");

-	if (data->parent_teid != 0xFFFFFFFF)
-		fprintf(stream, "\tNODE_%d -> NODE_%d\n", data->parent_teid, data->node_teid);
 }

 static
@@ -731,112 +719,92 @@ int query_rl_profile(struct ice_hw *hw,
 	return 0;
 }

-static
-int query_node(struct ice_hw *hw, uint32_t child, uint32_t *parent,
-	       uint8_t level, bool detail, FILE *stream)
+static int
+query_node(struct ice_hw *hw, struct rte_eth_dev_data *ethdata,
+	   struct ice_sched_node *node, bool detail, FILE *stream)
 {
-	struct ice_aqc_txsched_elem_data data;
+	struct ice_aqc_txsched_elem_data *data = &node->info;
 	struct ice_aqc_rl_profile_elem cir_prof;
 	struct ice_aqc_rl_profile_elem eir_prof;
 	struct ice_aqc_rl_profile_elem shared_prof;
 	struct ice_aqc_rl_profile_elem *cp = NULL;
 	struct ice_aqc_rl_profile_elem *ep = NULL;
 	struct ice_aqc_rl_profile_elem *sp = NULL;
-	int status, ret;
-
-	status = ice_sched_query_elem(hw, child, &data);
-	if (status != ICE_SUCCESS) {
-		if (level == hw->num_tx_sched_layers) {
-			/* ignore the error when a queue has been stopped. */
-			PMD_DRV_LOG(WARNING, "Failed to query queue node %d.", child);
-			*parent = 0xffffffff;
-			return 0;
-		}
-		PMD_DRV_LOG(ERR, "Failed to query scheduling node %d.", child);
-		return -EINVAL;
-	}
-
-	*parent = data.parent_teid;
+	u8 level = node->tx_sched_layer;
+	int ret;

-	if (data.data.cir_bw.bw_profile_idx != 0) {
-		ret = query_rl_profile(hw, level, 0, data.data.cir_bw.bw_profile_idx, &cir_prof);
+	if (data->data.cir_bw.bw_profile_idx != 0) {
+		ret = query_rl_profile(hw, level, 0, data->data.cir_bw.bw_profile_idx, &cir_prof);
 		if (ret)
 			return ret;
 		cp = &cir_prof;
 	}

-	if (data.data.eir_bw.bw_profile_idx != 0) {
-		ret = query_rl_profile(hw, level, 1, data.data.eir_bw.bw_profile_idx, &eir_prof);
+	if (data->data.eir_bw.bw_profile_idx != 0) {
+		ret = query_rl_profile(hw, level, 1, data->data.eir_bw.bw_profile_idx, &eir_prof);
 		if (ret)
 			return ret;
 		ep = &eir_prof;
 	}

-	if (data.data.srl_id != 0) {
-		ret = query_rl_profile(hw, level, 2, data.data.srl_id, &shared_prof);
+	if (data->data.srl_id != 0) {
+		ret = query_rl_profile(hw, level, 2, data->data.srl_id, &shared_prof);
 		if (ret)
 			return ret;
 		sp = &shared_prof;
 	}

-	print_node(&data, cp, ep, sp, detail, stream);
+	print_node(node, ethdata, data, cp, ep, sp, detail, stream);

 	return 0;
 }

-static
-int query_nodes(struct ice_hw *hw,
-		uint32_t *children, int child_num,
-		uint32_t *parents, int *parent_num,
-		uint8_t level, bool detail,
-		FILE *stream)
+static int
+query_node_recursive(struct ice_hw *hw, struct rte_eth_dev_data *ethdata,
+		     struct ice_sched_node *node, bool detail, FILE *stream)
 {
-	uint32_t parent;
-	int i;
-	int j;
-
-	*parent_num = 0;
-	for (i = 0; i < child_num; i++) {
-		bool exist = false;
-		int ret;
+	bool close = false;
+	if (node->parent != NULL && node->vsi_handle != node->parent->vsi_handle) {
+		fprintf(stream, "subgraph cluster_%u {\n", node->vsi_handle);
+		fprintf(stream, "\tlabel = \"VSI %u\";\n", node->vsi_handle);
+		close = true;
+	}

-		ret = query_node(hw, children[i], &parent, level, detail, stream);
-		if (ret)
-			return -EINVAL;
+	int ret = query_node(hw, ethdata, node, detail, stream);
+	if (ret != 0)
+		return ret;

-		for (j = 0; j < *parent_num; j++) {
-			if (parents[j] == parent) {
-				exist = true;
-				break;
-			}
+	for (uint16_t i = 0; i < node->num_children; i++) {
+		ret = query_node_recursive(hw, ethdata, node->children[i], detail, stream);
+		if (ret != 0)
+			return ret;
+		/* if we have a lot of nodes, skip a bunch in the middle */
+		if (node->num_children > 16 && i == 2) {
+			uint16_t inc = node->num_children - 5;
+			fprintf(stream, "\tn%d_children [label=\"... +%d child nodes ...\"];\n",
+				node->info.node_teid, inc);
+			fprintf(stream, "\tNODE_%d -> n%d_children;\n",
+				node->info.node_teid, node->info.node_teid);
+			i += inc;
 		}
-
-		if (!exist && parent != 0xFFFFFFFF)
-			parents[(*parent_num)++] = parent;
 	}
+	if (close)
+		fprintf(stream, "}\n");
+	if (node->info.parent_teid != 0xFFFFFFFF)
+		fprintf(stream, "\tNODE_%d -> NODE_%d\n",
+			node->info.parent_teid, node->info.node_teid);

 	return 0;
 }

-int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
+int
+rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
 {
 	struct rte_eth_dev *dev;
 	struct ice_hw *hw;
-	struct ice_pf *pf;
-	struct ice_q_ctx *q_ctx;
-	uint16_t q_num;
-	uint16_t i;
-	struct ice_tx_queue *txq;
-	uint32_t buf1[256];
-	uint32_t buf2[256];
-	uint32_t *children = buf1;
-	uint32_t *parents = buf2;
-	int child_num = 0;
-	int parent_num = 0;
-	uint8_t level;

 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port, -ENODEV);

@@ -846,35 +814,9 @@ int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream)
 	dev = &rte_eth_devices[port];
 	hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	level = hw->num_tx_sched_layers;
-
-	q_num = dev->data->nb_tx_queues;
-
-	/* main vsi */
-	for (i = 0; i < q_num; i++) {
-		txq = dev->data->tx_queues[i];
-		q_ctx = ice_get_lan_q_ctx(hw, txq->vsi->idx, 0, i);
-		children[child_num++] = q_ctx->q_teid;
-	}
-
-	/* fdir vsi */
-	q_ctx = ice_get_lan_q_ctx(hw, pf->fdir.fdir_vsi->idx, 0, 0);
-	children[child_num++] = q_ctx->q_teid;

 	fprintf(stream, "digraph tx_sched {\n");
-	while (child_num > 0) {
-		int ret;
-		ret = query_nodes(hw, children, child_num,
-				  parents, &parent_num,
-				  level, detail, stream);
-		if (ret)
-			return ret;
-
-		children = parents;
-		child_num = parent_num;
-		level--;
-	}
+	query_node_recursive(hw, dev->data, hw->port_info->root, detail, stream);
 	fprintf(stream, "}\n");

 	return 0;

From patchwork Mon Aug 12 15:28:03 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 143086
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v3 04/16] net/ice: add option to choose DDP package file
Date: Mon, 12 Aug 2024 16:28:03 +0100
Message-ID: <20240812152815.1132697-5-bruce.richardson@intel.com>
In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com>
References: <20240807093407.452784-1-bruce.richardson@intel.com>
 <20240812152815.1132697-1-bruce.richardson@intel.com>

The "Dynamic Device Personalization" package is loaded at initialization
time by the driver, but the specific package file loaded depends upon
what package file is found first by searching through a hard-coded list
of firmware paths.

To enable greater control over the package loading, we can add a device
option to choose a specific DDP package file to load.

Signed-off-by: Bruce Richardson
---
 doc/guides/nics/ice.rst      |  9 +++++++++
 drivers/net/ice/ice_ethdev.c | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_ethdev.h |  1 +
 3 files changed, 44 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ae975d19ad..58ccfbd1a5 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -108,6 +108,15 @@ Runtime Configuration

     -a 80:00.0,default-mac-disable=1

+- ``DDP Package File``
+
+  Rather than have the driver search for the DDP package to load,
+  or to override what package is used,
+  the ``ddp_pkg_file`` option can be used to provide the path to a specific package file.
+  For example::
+
+    -a 80:00.0,ddp_pkg_file=/path/to/ice-version.pkg
+
 - ``Protocol extraction for per queue``

   Configure the RX queues to do protocol extraction into mbuf for protocol
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 304f959b7e..3e7ceda9ce 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -36,6 +36,7 @@
 #define ICE_ONE_PPS_OUT_ARG       "pps_out"
 #define ICE_RX_LOW_LATENCY_ARG    "rx_low_latency"
 #define ICE_MBUF_CHECK_ARG        "mbuf_check"
+#define ICE_DDP_FILENAME          "ddp_pkg_file"

 #define ICE_CYCLECOUNTER_MASK  0xffffffffffffffffULL

@@ -52,6 +53,7 @@ static const char * const ice_valid_args[] = {
 	ICE_RX_LOW_LATENCY_ARG,
 	ICE_DEFAULT_MAC_DISABLE,
 	ICE_MBUF_CHECK_ARG,
+	ICE_DDP_FILENAME,
 	NULL
 };

@@ -692,6 +694,18 @@ handle_field_name_arg(__rte_unused const char *key, const char *value,
 	return 0;
 }

+static int
+handle_ddp_filename_arg(__rte_unused const char *key, const char *value, void *name_args)
+{
+	const char **filename = name_args;
+	if (strlen(value) >= ICE_MAX_PKG_FILENAME_SIZE) {
+		PMD_DRV_LOG(ERR, "The DDP package filename is too long : '%s'", value);
+		return -1;
+	}
+	*filename = strdup(value);
+	return 0;
+}
+
 static void
 ice_check_proto_xtr_support(struct ice_hw *hw)
 {
@@ -1882,6 +1896,16 @@ int ice_load_pkg(struct ice_adapter *adapter, bool use_dsn, uint64_t dsn)
 	size_t bufsz;
 	int err;

+	if (adapter->devargs.ddp_filename != NULL) {
+		strlcpy(pkg_file, adapter->devargs.ddp_filename, sizeof(pkg_file));
+		if (rte_firmware_read(pkg_file, &buf, &bufsz) == 0) {
+			goto load_fw;
+		} else {
+			PMD_INIT_LOG(ERR, "Cannot load DDP file: %s\n", pkg_file);
+			return -1;
+		}
+	}
+
 	if (!use_dsn)
 		goto no_dsn;

@@ -2216,6 +2240,13 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)

 	ret = rte_kvargs_process(kvlist, ICE_RX_LOW_LATENCY_ARG,
 				 &parse_bool, &ad->devargs.rx_low_latency);
+	if (ret)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, ICE_DDP_FILENAME,
+				 &handle_ddp_filename_arg, &ad->devargs.ddp_filename);
+	if (ret)
+		goto bail;

 bail:
 	rte_kvargs_free(kvlist);
@@ -2689,6 +2720,8 @@ ice_dev_close(struct rte_eth_dev *dev)
 	ice_free_hw_tbls(hw);
 	rte_free(hw->port_info);
 	hw->port_info = NULL;
+	free((void *)(uintptr_t)ad->devargs.ddp_filename);
+	ad->devargs.ddp_filename = NULL;
 	ice_shutdown_all_ctrlq(hw, true);
 	rte_free(pf->proto_xtr);
 	pf->proto_xtr = NULL;
@@ -6981,6 +7014,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_ice,
 			      ICE_PROTO_XTR_ARG "=[queue:]"
 			      ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>"
 			      ICE_DEFAULT_MAC_DISABLE "=<0|1>"
+			      ICE_DDP_FILENAME "="
 			      ICE_RX_LOW_LATENCY_ARG "=<0|1>");

 RTE_LOG_REGISTER_SUFFIX(ice_logtype_init, init, NOTICE);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ea9f37dc8..c211b5b9cc 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -568,6 +568,7 @@ struct ice_devargs {
 	/* Name of the field. */
 	char xtr_field_name[RTE_MBUF_DYN_NAMESIZE];
 	uint64_t mbuf_check;
+	const char *ddp_filename;
 };

 /**

From patchwork Mon Aug 12 15:28:04 2024
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 143087
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [PATCH v3 05/16] net/ice: add option to download scheduler topology
Date: Mon, 12 Aug 2024 16:28:04 +0100
Message-ID: <20240812152815.1132697-6-bruce.richardson@intel.com>
In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com>
References: <20240807093407.452784-1-bruce.richardson@intel.com>
 <20240812152815.1132697-1-bruce.richardson@intel.com>

The DDP package file being loaded at init time may contain an
alternative Tx scheduler topology in it. Add a driver option to load
this topology at init time.

Signed-off-by: Bruce Richardson
---
 drivers/net/ice/base/ice_ddp.c | 18 +++++++++++++++---
 drivers/net/ice/base/ice_ddp.h |  4 ++--
 drivers/net/ice/ice_ethdev.c   | 24 +++++++++++++++---------
 drivers/net/ice/ice_ethdev.h   |  1 +
 4 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ice/base/ice_ddp.c b/drivers/net/ice/base/ice_ddp.c
index 24506dfaea..e6c42c5274 100644
--- a/drivers/net/ice/base/ice_ddp.c
+++ b/drivers/net/ice/base/ice_ddp.c
@@ -1326,7 +1326,7 @@ ice_fill_hw_ptype(struct ice_hw *hw)
  * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in this
  * case.
  */
-enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
+enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len, bool load_sched)
 {
 	bool already_loaded = false;
 	enum ice_ddp_state state;
@@ -1344,6 +1344,18 @@ enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 		return state;
 	}

+	if (load_sched) {
+		enum ice_status res = ice_cfg_tx_topo(hw, buf, len);
+		if (res != ICE_SUCCESS) {
+			ice_debug(hw, ICE_DBG_INIT, "failed to apply sched topology (err: %d)\n",
+				  res);
+			return ICE_DDP_PKG_ERR;
+		}
+		ice_debug(hw, ICE_DBG_INIT, "Topology download successful, reinitializing device\n");
+		ice_deinit_hw(hw);
+		ice_init_hw(hw);
+	}
+
 	/* initialize package info */
 	state = ice_init_pkg_info(hw, pkg);
 	if (state)
@@ -1416,7 +1428,7 @@ enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
  * related routines.
  */
 enum ice_ddp_state
-ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
+ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len, bool load_sched)
 {
 	enum ice_ddp_state state;
 	u8 *buf_copy;
@@ -1426,7 +1438,7 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)

 	buf_copy = (u8 *)ice_memdup(hw, buf, len, ICE_NONDMA_TO_NONDMA);

-	state = ice_init_pkg(hw, buf_copy, len);
+	state = ice_init_pkg(hw, buf_copy, len, load_sched);
 	if (!ice_is_init_pkg_successful(state)) {
 		/* Free the copy, since we failed to initialize the package */
 		ice_free(hw, buf_copy);
diff --git a/drivers/net/ice/base/ice_ddp.h b/drivers/net/ice/base/ice_ddp.h
index 5761920207..2feba2e91d 100644
--- a/drivers/net/ice/base/ice_ddp.h
+++ b/drivers/net/ice/base/ice_ddp.h
@@ -451,9 +451,9 @@ ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
 void *
 ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state,
 		     u32 sect_type);
-enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len);
+enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len, bool load_sched);
 enum ice_ddp_state
-ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
+ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len, bool load_sched);
 bool ice_is_init_pkg_successful(enum ice_ddp_state state);
 void ice_free_seg(struct ice_hw *hw);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 3e7ceda9ce..0d2445a317 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -37,6 +37,7 @@
 #define ICE_RX_LOW_LATENCY_ARG    "rx_low_latency"
 #define ICE_MBUF_CHECK_ARG        "mbuf_check"
 #define ICE_DDP_FILENAME          "ddp_pkg_file"
+#define ICE_DDP_LOAD_SCHED        "ddp_load_sched_topo"

 #define ICE_CYCLECOUNTER_MASK  0xffffffffffffffffULL

@@ -54,6 +55,7 @@ static const char * const ice_valid_args[] = {
 	ICE_DEFAULT_MAC_DISABLE,
 	ICE_MBUF_CHECK_ARG,
 	ICE_DDP_FILENAME,
+	ICE_DDP_LOAD_SCHED,
 	NULL
 };

@@ -1938,7 +1940,7 @@ int ice_load_pkg(struct ice_adapter *adapter, bool use_dsn, uint64_t dsn)
 load_fw:
 	PMD_INIT_LOG(DEBUG, "DDP package name: %s", pkg_file);

-	err = ice_copy_and_init_pkg(hw, buf, bufsz);
+	err = ice_copy_and_init_pkg(hw, buf, bufsz, adapter->devargs.ddp_load_sched);
 	if (!ice_is_init_pkg_successful(err)) {
 		PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d\n", err);
 		free(buf);
@@ -1971,19 +1973,18 @@ static int
 parse_bool(const char *key, const char *value, void *args)
 {
 	int *i = (int *)args;
-	char *end;
-	int num;

-	num = strtoul(value, &end, 10);
-
-	if (num != 0 && num != 1) {
-		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
-			"value must be 0 or 1",
+	if (value == NULL || value[0] == '\0') {
+		PMD_DRV_LOG(WARNING, "key:\"%s\", requires a value, which must be 0 or 1", key);
+		return -1;
+	}
+	if (value[1] != '\0' || (value[0] != '0' && value[0] != '1')) {
+		PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
 			value, key);
 		return -1;
 	}

-	*i = num;
+	*i = value[0] - '0';
 	return 0;
 }

@@ -2248,6 +2249,10 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)

 	if (ret)
 		goto
bail; + ret = rte_kvargs_process(kvlist, ICE_DDP_LOAD_SCHED, + &parse_bool, &ad->devargs.ddp_load_sched); + if (ret) + goto bail; bail: rte_kvargs_free(kvlist); return ret; @@ -7014,6 +7019,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_ice, ICE_PROTO_XTR_ARG "=[queue:]" ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>" ICE_DEFAULT_MAC_DISABLE "=<0|1>" + ICE_DDP_LOAD_SCHED "=<0|1>" ICE_DDP_FILENAME "=" ICE_RX_LOW_LATENCY_ARG "=<0|1>"); diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index c211b5b9cc..f31addb122 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -563,6 +563,7 @@ struct ice_devargs { uint8_t proto_xtr[ICE_MAX_QUEUE_NUM]; uint8_t pin_idx; uint8_t pps_out_ena; + int ddp_load_sched; int xtr_field_offs; uint8_t xtr_flag_offs[PROTO_XTR_MAX]; /* Name of the field. */ From patchwork Mon Aug 12 15:28:05 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143088 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6098F457A1; Mon, 12 Aug 2024 17:29:09 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 23A0440DD8; Mon, 12 Aug 2024 17:28:41 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.16]) by mails.dpdk.org (Postfix) with ESMTP id EBD1340B99 for ; Mon, 12 Aug 2024 17:28:34 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723476515; x=1755012515; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Rgp3nCjBnYGk6+bYR0XFomTHqEgLOuvbBBKSe7KP9LY=; b=aqe04+cwzxJkFk4bd9vgBbweIpEf9RgB0BN8PP5PI9yJBe23pJXnTL28 
BGFB8ryyKmT9A0AAloogyOitLUEzgk+rcDtQvvVc6N8KhXofYHXH6St+w GxRjveTDSAQaboNqpO7FlS0lNRJTHxTi012MVn7lvV7pFgiCXw5FDMIQ4 ym5vg2VQdZK2v0nlszjAQhK9IG6DNIiQEpFAx3+wk9cJSp8+kLNk8RP0L aXTl9EjUXhHfDhuSLlm5yExR6Dn3+adALb+jd46Gx/z1KVNbkrIunWBB0 LNAvsMexDlK1MWXE1M36YUvBhFutmN7ystEr183HKbdo6o9DmeRfS+ODv w==; X-CSE-ConnectionGUID: 5SLEuHwnRLWP+DTxdv3T2w== X-CSE-MsgGUID: sx2EUS38ReKSofFjAtDo5A== X-IronPort-AV: E=McAfee;i="6700,10204,11162"; a="21743043" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743043" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:35 -0700 X-CSE-ConnectionGUID: vmdqDiUKTA6pplPERW1jjw== X-CSE-MsgGUID: 1cj5pnBkQMS+2tcKqitwSQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222553" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:34 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 06/16] net/ice/base: allow init without TC class sched nodes Date: Mon, 12 Aug 2024 16:28:05 +0100 Message-ID: <20240812152815.1132697-7-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org If DCB support is disabled via DDP image, there will not be any traffic class (TC) nodes in the scheduler tree immediately above the root level. To allow the driver to work with this scenario, we allow use of the root node as a dummy TC0 node in case where there are no TC nodes in the tree. 
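The fallback can be shown with a standalone sketch (the struct fields and names below are reduced for illustration and are not the driver's real types):

```c
#include <stddef.h>

/* Reduced types: only the fields the TC lookup needs. */
struct tc_node { unsigned char tc_num; };
struct port_info {
	struct tc_node *root;
	struct tc_node *children[2];
	unsigned num_children;
	unsigned char has_tc;	/* set while parsing the scheduler tree */
};

/* If the DDP image disabled DCB, no TC nodes sit above the root layer;
 * the root then stands in for TC 0, and every other TC returns NULL. */
static struct tc_node *get_tc_node(struct port_info *pi, unsigned char tc)
{
	if (pi == NULL || pi->root == NULL)
		return NULL;
	if (!pi->has_tc)
		return tc == 0 ? pi->root : NULL;
	for (unsigned i = 0; i < pi->num_children; i++)
		if (pi->children[i]->tc_num == tc)
			return pi->children[i];
	return NULL;
}

/* Example port whose DDP image carried no TC nodes. */
static struct tc_node root_only = { 0 };
static struct port_info no_tc_port = { &root_only, { NULL, NULL }, 0, 0 };
```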
For use of any other TC other than 0 (used by default in the driver), existing behaviour of returning NULL pointer is maintained. Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_sched.c | 6 ++++++ drivers/net/ice/base/ice_type.h | 1 + 2 files changed, 7 insertions(+) diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index 373c32a518..f75e5ae599 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -292,6 +292,10 @@ struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc) if (!pi || !pi->root) return NULL; + /* if no TC nodes, use root as TC node 0 */ + if (pi->has_tc == 0) + return tc == 0 ? pi->root : NULL; + for (i = 0; i < pi->root->num_children; i++) if (pi->root->children[i]->tc_num == tc) return pi->root->children[i]; @@ -1306,6 +1310,8 @@ int ice_sched_init_port(struct ice_port_info *pi) ICE_AQC_ELEM_TYPE_ENTRY_POINT) hw->sw_entry_point_layer = j; + if (buf[0].generic[j].data.elem_type == ICE_AQC_ELEM_TYPE_TC) + pi->has_tc = 1; status = ice_sched_add_node(pi, j, &buf[i].generic[j], NULL); if (status) goto err_init_port; diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 598a80155b..a70e4a8afa 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -1260,6 +1260,7 @@ struct ice_port_info { struct ice_qos_cfg qos_cfg; u8 is_vf:1; u8 is_custom_tx_enabled:1; + u8 has_tc:1; }; struct ice_switch_info { From patchwork Mon Aug 12 15:28:06 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143089 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9D8C4457A1; Mon, 12 Aug 2024 17:29:15 +0200 (CEST) Received: from 
mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6D81840DDE; Mon, 12 Aug 2024 17:28:42 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.16]) by mails.dpdk.org (Postfix) with ESMTP id 45EBF40B9D for ; Mon, 12 Aug 2024 17:28:36 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723476516; x=1755012516; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=GkONx/JTr0Ryhfm3YU25ux0z6dB+IFvsn9qzX1Ek7Sc=; b=Le9ZXacINHv9vTqmaQi7hA+xKmbtW3zZkE0DVk85rOky4z2DHdz8HXQh /5G3UyNZNWHQyk1f77ZkBZbeBSyta2rfOPgAqP9+c6w+kDpwfEyyoLkIa S+kLKs1kDeb20wim8YDTLEt1mfJ6YGwPhmW6ayYYlLYzXrQi1GpSJhJcp vUtEBWrIVKKuPuVC1caN1WNQicbYgAc23dZF7x4n0ALCkiink8XWdNWKb wSM1f1fjZzMx04kMSXq59I2whiIIAkZQEYYXcoQA7U/eeBuxE5LKXP9nk b3IQ/XI9T/nSd9uDmiNSRX8jGmJYvjEHbui+4DZRxT8PwvbEJVpBK6caY Q==; X-CSE-ConnectionGUID: QbnbE9odQBaJeCw97nHK6Q== X-CSE-MsgGUID: tJw7/ZYaQ+aoXsNhwajwwg== X-IronPort-AV: E=McAfee;i="6700,10204,11162"; a="21743044" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743044" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:36 -0700 X-CSE-ConnectionGUID: cNg/2E0QRLuiC7ncp42SFA== X-CSE-MsgGUID: L3XiuRoNRbm7WVvZcpzTaA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222558" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:35 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 07/16] net/ice/base: set VSI index on newly created nodes Date: Mon, 12 Aug 2024 16:28:06 +0100 Message-ID: <20240812152815.1132697-8-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: 
<20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The ice_sched_node type has got a field for the vsi to which the node belongs. This field was not getting set in "ice_sched_add_node", so add a line configuring this field for each node from its parent node. Similarly, when searching for a qgroup node, we can check for each node that the VSI information is correct. Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_sched.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index f75e5ae599..f6dc5ae173 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -200,6 +200,7 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer, node->in_use = true; node->parent = parent; node->tx_sched_layer = layer; + node->vsi_handle = parent->vsi_handle; parent->children[parent->num_children++] = node; node->info = elem; return 0; @@ -1581,7 +1582,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc, /* make sure the qgroup node is part of the VSI subtree */ if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node)) if (qgrp_node->num_children < max_children && - qgrp_node->owner == owner) + qgrp_node->owner == owner && qgrp_node->vsi_handle == vsi_handle) break; qgrp_node = qgrp_node->sibling; } From patchwork Mon Aug 12 15:28:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143090 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from 
mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 02EE7457A1; Mon, 12 Aug 2024 17:29:21 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AC68140E0F; Mon, 12 Aug 2024 17:28:43 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.16]) by mails.dpdk.org (Postfix) with ESMTP id DC28B40BA6 for ; Mon, 12 Aug 2024 17:28:36 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723476517; x=1755012517; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/gS/eRyLGXBm/1l9JuK0a7+h9+9V8DKp+JWxmwnYhqE=; b=d+Fco1hdsHZcZr/RMxPmX4jEsMC5ZmGQ25uOJHAEhrfozNn1bphbvhLH Y5ChUZF/QrOu9OpvBnh9LqCV3jKxvGp5sBdWDN7YG4sG7YXhueDWael8V yNfCRpmsWUf0tsIa6oWw8dqrLYNIDCsTIukd05vmv4Yr8Hvd3PwHIWcDN 5ygZh6HMtIVMhQ2LcxtVSY8QbWsMvjVEVvpV6tRVi/2e/H82tASiGA4w9 V3aqyV8bEVzzEpz0leNeZWpdpvK1SpxCgKrdb6yc2kANkOK6EYLnirVjw qaliEdNg9HXgCzjKSGDYveJiMEoZk0Ij8HkGNuqL58CevZA5CCzWkEZiQ w==; X-CSE-ConnectionGUID: ex+z5yNMQQWzXeAcFbo1OQ== X-CSE-MsgGUID: LDy6AhteT6Kq+O930KqTmQ== X-IronPort-AV: E=McAfee;i="6700,10204,11162"; a="21743047" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743047" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:36 -0700 X-CSE-ConnectionGUID: 8OXe/FniQVChZK3DP8qq6Q== X-CSE-MsgGUID: npFPWb31T3iTNqCoucUEdg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222568" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:36 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 08/16] net/ice/base: read VSI layer info from VSI Date: Mon, 12 Aug 2024 16:28:07 +0100 Message-ID: 
<20240812152815.1132697-9-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Rather than computing from the number of HW layers the layer of the VSI, we can instead just read that info from the VSI node itself. This allows the layer to be changed at runtime. Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_sched.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index f6dc5ae173..e398984bf2 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -1559,7 +1559,6 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 max_children; qgrp_layer = ice_sched_get_qgrp_layer(pi->hw); - vsi_layer = ice_sched_get_vsi_layer(pi->hw); max_children = pi->hw->max_children[qgrp_layer]; vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle); @@ -1569,6 +1568,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc, /* validate invalid VSI ID */ if (!vsi_node) return NULL; + vsi_layer = vsi_node->tx_sched_layer; /* If the queue group and vsi layer are same then queues * are all attached directly to VSI From patchwork Mon Aug 12 15:28:08 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143091 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by 
inbox.dpdk.org (Postfix) with ESMTP id E7F6A457A1; Mon, 12 Aug 2024 17:29:28 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 94E9540E1D; Mon, 12 Aug 2024 17:28:45 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.16]) by mails.dpdk.org (Postfix) with ESMTP id B1E0D40B9A for ; Mon, 12 Aug 2024 17:28:37 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723476518; x=1755012518; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=jxdVXmPVYhO2u0GalZkvq+G7NSjyrBI6pORRv1zrw1g=; b=mqDpSJF/Zj7fu80Pifp0haW4J5UgTEvOecb9G95WTbqSAsaPCBSMpper lmxWF6H1Gry1I4yRxAhBLXibNcI+ibcub9uQsNgwXJ/KR/8zpc0yVxK1T iPR4SnHRDW6Ioqe2a6J17PFTUO6jWCmzG9VC3FzGUSVZZMtqyDvZHH6RU fLEJ/6cEM/oXSd7r79r9PPB32Iqw08QJAknhsn2K3mZfuCp2QHkYwiPsA TguXgGe3nIBQmjQHMX1Rj2TRlw6CbyXJ+8wTRJk+zVcItoQjUOVeVQSCL pn7B3Aw3UCz2Epb9bDaAyPsSrLyp/a/Lc1Rad2kCJpbuUfJwcHKkT34n8 w==; X-CSE-ConnectionGUID: ltfn61NbQHGAEmfFLTaFcg== X-CSE-MsgGUID: 5UiiysjjSgWSRCE0Wp9f3A== X-IronPort-AV: E=McAfee;i="6700,10204,11162"; a="21743048" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743048" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:37 -0700 X-CSE-ConnectionGUID: wF4dQx4yTlyu31MxLqI4/w== X-CSE-MsgGUID: LJ0L+SzaSh+6wCbdm/11wQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222576" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:37 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 09/16] net/ice/base: remove 255 limit on sched child nodes Date: Mon, 12 Aug 2024 16:28:08 +0100 Message-ID: <20240812152815.1132697-10-bruce.richardson@intel.com> 
X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org The Tx scheduler in the ice driver can be configured to have large numbers of child nodes at a given layer, but the driver code implicitly limited the number of nodes to 255 by using a u8 datatype for the number of children. Increase this to a 16-bit value throughout the code. Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_sched.c | 25 ++++++++++++++----------- drivers/net/ice/base/ice_type.h | 2 +- 2 files changed, 15 insertions(+), 12 deletions(-) diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index e398984bf2..be13833e1e 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -289,7 +289,7 @@ ice_sched_get_first_node(struct ice_port_info *pi, */ struct ice_sched_node *ice_sched_get_tc_node(struct ice_port_info *pi, u8 tc) { - u8 i; + u16 i; if (!pi || !pi->root) return NULL; @@ -316,7 +316,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node) { struct ice_sched_node *parent; struct ice_hw *hw = pi->hw; - u8 i, j; + u16 i, j; /* Free the children before freeing up the parent node * The parent array is updated below and that shifts the nodes @@ -1473,7 +1473,7 @@ bool ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base, struct ice_sched_node *node) { - u8 i; + u16 i; for (i = 0; i < base->num_children; i++) { struct ice_sched_node *child = base->children[i]; @@ -1510,7 +1510,7 @@ ice_sched_get_free_qgrp(struct ice_port_info *pi, struct ice_sched_node *qgrp_node, u8 owner) { struct 
ice_sched_node *min_qgrp; - u8 min_children; + u16 min_children; if (!qgrp_node) return qgrp_node; @@ -2070,7 +2070,7 @@ static void ice_sched_rm_agg_vsi_info(struct ice_port_info *pi, u16 vsi_handle) */ static bool ice_sched_is_leaf_node_present(struct ice_sched_node *node) { - u8 i; + u16 i; for (i = 0; i < node->num_children; i++) if (ice_sched_is_leaf_node_present(node->children[i])) @@ -2105,7 +2105,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner) ice_for_each_traffic_class(i) { struct ice_sched_node *vsi_node, *tc_node; - u8 j = 0; + u16 j = 0; tc_node = ice_sched_get_tc_node(pi, i); if (!tc_node) @@ -2173,7 +2173,7 @@ int ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle) */ bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node) { - u8 i; + u16 i; /* start from the leaf node */ for (i = 0; i < node->num_children; i++) @@ -2247,7 +2247,8 @@ ice_sched_get_free_vsi_parent(struct ice_hw *hw, struct ice_sched_node *node, u16 *num_nodes) { u8 l = node->tx_sched_layer; - u8 vsil, i; + u8 vsil; + u16 i; vsil = ice_sched_get_vsi_layer(hw); @@ -2289,7 +2290,7 @@ ice_sched_update_parent(struct ice_sched_node *new_parent, struct ice_sched_node *node) { struct ice_sched_node *old_parent; - u8 i, j; + u16 i, j; old_parent = node->parent; @@ -2389,7 +2390,8 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id, u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 }; u32 first_node_teid, vsi_teid; u16 num_nodes_added; - u8 aggl, vsil, i; + u8 aggl, vsil; + u16 i; int status; tc_node = ice_sched_get_tc_node(pi, tc); @@ -2505,7 +2507,8 @@ ice_move_all_vsi_to_dflt_agg(struct ice_port_info *pi, static bool ice_sched_is_agg_inuse(struct ice_port_info *pi, struct ice_sched_node *node) { - u8 vsil, i; + u8 vsil; + u16 i; vsil = ice_sched_get_vsi_layer(pi->hw); if (node->tx_sched_layer < vsil - 1) { diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 
a70e4a8afa..35f832eb9f 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -1030,9 +1030,9 @@ struct ice_sched_node { struct ice_aqc_txsched_elem_data info; u32 agg_id; /* aggregator group ID */ u16 vsi_handle; + u16 num_children; u8 in_use; /* suspended or in use */ u8 tx_sched_layer; /* Logical Layer (1-9) */ - u8 num_children; u8 tc_num; u8 owner; #define ICE_SCHED_NODE_OWNER_LAN 0 From patchwork Mon Aug 12 15:28:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143092 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 222A9457A1; Mon, 12 Aug 2024 17:29:37 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A3B1340E1F; Mon, 12 Aug 2024 17:28:47 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.16]) by mails.dpdk.org (Postfix) with ESMTP id 95EFB40B9A for ; Mon, 12 Aug 2024 17:28:38 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723476519; x=1755012519; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WNGvCEHT2eO8jmDMYmqgWt/AiLTTGF2USXUYqSHc79s=; b=Krlw/8yJFdJaRDHrNLJeKjxMLwKOU1y7WCdifu7kKVsBl7QavVeD+YuT 8FmLUpE6zGGr7Wgzi5ZP/PIALcKFUs87JS2lJdHi231ebjrMQ4YbEHQ0y NaZuCDrr3fFQ7E+dHotGv7f5x163bSenJmY7V/Qa+TIbJKCLF4N3AApj5 IcosOGD+iXVlw1JnH2nXbLT9ED1727Rj71iYIiRkUBdDCxGz+G7gGMm11 iaGodvQ3d2xnh2EsWQneAnqokc+EnEarH5U6iUOlCfpt9V990ngq0u5sV t67F5RHSS5Q6FAoiItnq9FDWjtl64QzirjRu6QC9LtzqPf/F/urKPbFiT Q==; X-CSE-ConnectionGUID: PDQuNp7FQWyAG0BbI0gSwA== X-CSE-MsgGUID: La6ThGLvT0ypJSHvI3XFGQ== X-IronPort-AV: E=McAfee;i="6700,10204,11162"; 
a="21743050" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743050" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:38 -0700 X-CSE-ConnectionGUID: 6RVii8+lS8aOzUtcmocO2w== X-CSE-MsgGUID: C9dHZGWFRRa0dni+OfiUgA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222582" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:38 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 10/16] net/ice/base: optimize subtree searches Date: Mon, 12 Aug 2024 16:28:09 +0100 Message-ID: <20240812152815.1132697-11-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org In a number of places throughout the driver code, we want to confirm that a scheduler node is indeed a child of another node. Currently, this is confirmed by searching down the tree from the base until the desired node is hit, a search which may hit many irrelevant tree nodes when recursing down wrong branches. By switching the direction of search, to check upwards from the node to the parent, we can avoid any incorrect paths, and so speed up processing. 
Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_sched.c | 18 +++++------------- 1 file changed, 5 insertions(+), 13 deletions(-) diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index be13833e1e..f7d5f8f415 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -1475,20 +1475,12 @@ ice_sched_find_node_in_subtree(struct ice_hw *hw, struct ice_sched_node *base, { u16 i; - for (i = 0; i < base->num_children; i++) { - struct ice_sched_node *child = base->children[i]; - - if (node == child) - return true; - - if (child->tx_sched_layer > node->tx_sched_layer) - return false; - - /* this recursion is intentional, and wouldn't - * go more than 8 calls - */ - if (ice_sched_find_node_in_subtree(hw, child, node)) + if (base == node) + return true; + while (node->tx_sched_layer != 0 && node->parent != NULL) { + if (node->parent == base) return true; + node = node->parent; } return false; } From patchwork Mon Aug 12 15:28:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143093 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F19CC457A1; Mon, 12 Aug 2024 17:29:42 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0E1A540E36; Mon, 12 Aug 2024 17:28:49 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.16]) by mails.dpdk.org (Postfix) with ESMTP id 7C99B40DD2 for ; Mon, 12 Aug 2024 17:28:39 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723476520; x=1755012520; h=from:to:cc:subject:date:message-id:in-reply-to: 
references:mime-version:content-transfer-encoding; bh=AnuurfSOY/UYpOqgy9U+/yVVqMHFyHLQR9wqWwlXt+M=; b=J8mYveW+0LfyEey41BUnSxtqIiTMaxxaIlp4LmhWdA5gGyEOFLaiDySL Nuy0WrZkpKp/7KqWXjF9nijsg88lYtvp34XbrQOc5PhQBjuJIPOtNJmsf G9r7CvrZ4B8Syi7guAyZ1bjvIG6b1mbzlphcNJf+cgJdOJ7El+LD7tm0n zRpBYVsOp5+p9tzN79TSgjpEQllaUjWWxDnDQVXgfnc634EPhoQHbBrZl YJIcl+PVwIWz2JHDiH2akg/7idQi/K2Uu878iNav2SaZ/IQV1wpKxda46 9JkK3dY+u5rsAatuvi3zxTMl2Zgf5zOAnqBTaBrZRoOEYTwYbnZp5Y8G7 Q==; X-CSE-ConnectionGUID: RP7wbNZMT8ilhsKJRjasLw== X-CSE-MsgGUID: bn+l/19ERpSfv+YvhUWLpA== X-IronPort-AV: E=McAfee;i="6700,10204,11162"; a="21743052" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743052" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:39 -0700 X-CSE-ConnectionGUID: 0ruvD7iKRuCJepVBGJmUDA== X-CSE-MsgGUID: VpqrxYcERsiGJpPf91qYwQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222587" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:39 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 11/16] net/ice/base: make functions non-static Date: Mon, 12 Aug 2024 16:28:10 +0100 Message-ID: <20240812152815.1132697-12-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org We will need to allocate more lanq contexts after a scheduler rework, so make that function non-static so accessible outside the file. 
For similar reasons, make the function to add a Tx scheduler node non-static Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_sched.c | 2 +- drivers/net/ice/base/ice_sched.h | 8 ++++++++ 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index f7d5f8f415..d88b836c38 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -570,7 +570,7 @@ ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids, * @tc: TC number * @new_numqs: number of queues */ -static int +int ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs) { struct ice_vsi_ctx *vsi_ctx; diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h index 9f78516dfb..c7eb794963 100644 --- a/drivers/net/ice/base/ice_sched.h +++ b/drivers/net/ice/base/ice_sched.h @@ -270,4 +270,12 @@ int ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx); int ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node, enum ice_rl_type rl_type, u16 bw_alloc); + +int +ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node, + struct ice_sched_node *parent, u8 layer, u16 num_nodes, + u16 *num_nodes_added, u32 *first_node_teid, + struct ice_sched_node **prealloc_nodes); +int +ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs); #endif /* _ICE_SCHED_H_ */ From patchwork Mon Aug 12 15:28:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143094 X-Patchwork-Delegate: bruce.richardson@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 341CB457A1; Mon, 12 Aug 2024 17:29:49 +0200 (CEST) Received: from 
mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4A2A040E2F; Mon, 12 Aug 2024 17:28:50 +0200 (CEST) Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.16]) by mails.dpdk.org (Postfix) with ESMTP id 71A5E40DD2 for ; Mon, 12 Aug 2024 17:28:40 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1723476521; x=1755012521; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qOxFlFAxWuyEOUSQHTprnZIWb3sQBavBp1ESUG5WutQ=; b=A/DYM6gHWoO84Mf6R+8bpuwUIydBlK0EaN0anA3D2ZmKGnuMZ/5t831R Dbrfm/9qtnQTalhwFS+VTxcJm9ys/DGwTO7hFks/hot6pUkXIg4KrWPt5 xqeXhl7+kXN0MhLxNfAhhlrbCU2RyGfEAKdJ/ZoWzbSTTNPpvPvgyCejb y8vLNN7zZkQvhpkgZ717Tjm4Vh7/HTFeSsZ1+EMif8A20hI0/ViCgh3pQ f2hrHPPt0okyUDZJaCyxku5HkoNI3A6HAFmPcyTG/jdHDq30ccwRilYxl GYziTB5Oj3C4aCeAIMYDiTOptwTO5o9oJl3rihnSSSBr+iq2TkD0eBft4 Q==; X-CSE-ConnectionGUID: jex0pigNRziSYtlRSmTXHQ== X-CSE-MsgGUID: /EJ3Ym79SsOcNiDdS3rY3A== X-IronPort-AV: E=McAfee;i="6700,10204,11162"; a="21743055" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743055" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:40 -0700 X-CSE-ConnectionGUID: YnI7+CXaT1i6wYvS5AMvOg== X-CSE-MsgGUID: lMNLUR1tTfmYs+bBqiLUtQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222591" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:40 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 12/16] net/ice/base: remove flag checks before topology upload Date: Mon, 12 Aug 2024 16:28:11 +0100 Message-ID: <20240812152815.1132697-13-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> 
References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> DPDK should support more than just 9-level or 5-level topologies, so remove the checks for those particular settings. Signed-off-by: Bruce Richardson --- drivers/net/ice/base/ice_ddp.c | 33 --------------------------------- 1 file changed, 33 deletions(-) diff --git a/drivers/net/ice/base/ice_ddp.c b/drivers/net/ice/base/ice_ddp.c index e6c42c5274..744f015fe5 100644 --- a/drivers/net/ice/base/ice_ddp.c +++ b/drivers/net/ice/base/ice_ddp.c @@ -2373,38 +2373,6 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) return status; } - /* Is default topology already applied ? */ - if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && - hw->num_tx_sched_layers == 9) { - ice_debug(hw, ICE_DBG_INIT, "Loaded default topology\n"); - /* Already default topology is loaded */ - return ICE_ERR_ALREADY_EXISTS; - } - - /* Is new topology already applied ? */ - if ((flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && - hw->num_tx_sched_layers == 5) { - ice_debug(hw, ICE_DBG_INIT, "Loaded new topology\n"); - /* Already new topology is loaded */ - return ICE_ERR_ALREADY_EXISTS; - } - - /* Is set topology issued already ?
*/ - if (flags & ICE_AQC_TX_TOPO_FLAGS_ISSUED) { - ice_debug(hw, ICE_DBG_INIT, "Update tx topology was done by another PF\n"); - /* add a small delay before exiting */ - for (i = 0; i < 20; i++) - ice_msec_delay(100, true); - return ICE_ERR_ALREADY_EXISTS; - } - - /* Change the topology from new to default (5 to 9) */ - if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && - hw->num_tx_sched_layers == 5) { - ice_debug(hw, ICE_DBG_INIT, "Change topology from 5 to 9 layers\n"); - goto update_topo; - } - pkg_hdr = (struct ice_pkg_hdr *)buf; state = ice_verify_pkg(pkg_hdr, len); if (state) { @@ -2451,7 +2419,6 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) /* Get the new topology buffer */ new_topo = ((u8 *)section) + offset; -update_topo: /* acquire global lock to make sure that set topology issued * by one PF */ From patchwork Mon Aug 12 15:28:12 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143095 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 13/16] net/ice: limit the number of queues to sched capabilities Date: Mon, 12 Aug 2024 16:28:12 +0100 Message-ID: <20240812152815.1132697-14-bruce.richardson@intel.com> In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> Rather than assuming that each VSI can hold up to 256 queue pairs, or the reported device limit, query the available nodes in the scheduler tree to check that we are not overflowing the limit for number of child scheduling nodes at each level.
Do this by multiplying max_children for each level beyond the VSI and using that as an additional cap on the number of queues. Signed-off-by: Bruce Richardson --- drivers/net/ice/ice_ethdev.c | 25 ++++++++++++++++++++----- 1 file changed, 20 insertions(+), 5 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 0d2445a317..ab3f88fd7d 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -913,7 +913,7 @@ ice_vsi_config_default_rss(struct ice_aqc_vsi_props *info) } static int -ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi, +ice_vsi_config_tc_queue_mapping(struct ice_hw *hw, struct ice_vsi *vsi, struct ice_aqc_vsi_props *info, uint8_t enabled_tcmap) { @@ -929,13 +929,28 @@ ice_vsi_config_tc_queue_mapping(struct ice_vsi *vsi, } /* vector 0 is reserved and 1 vector for ctrl vsi */ - if (vsi->adapter->hw.func_caps.common_cap.num_msix_vectors < 2) + if (vsi->adapter->hw.func_caps.common_cap.num_msix_vectors < 2) { vsi->nb_qps = 0; - else + } else { vsi->nb_qps = RTE_MIN ((uint16_t)vsi->adapter->hw.func_caps.common_cap.num_msix_vectors - 2, RTE_MIN(vsi->nb_qps, ICE_MAX_Q_PER_TC)); + /* cap max QPs to what the HW reports as num-children for each layer. + * Multiply num_children for each layer from the entry_point layer to + * the qgroup, or second-last layer. + * Avoid any potential overflow by using uint32_t type and breaking loop + * once we have a number greater than the already configured max. 
+ */ + uint32_t max_sched_vsi_nodes = 1; + for (uint8_t i = hw->sw_entry_point_layer; i < hw->num_tx_sched_layers - 1; i++) { + max_sched_vsi_nodes *= hw->max_children[i]; + if (max_sched_vsi_nodes >= vsi->nb_qps) + break; + } + vsi->nb_qps = RTE_MIN(vsi->nb_qps, max_sched_vsi_nodes); + } + /* nb_qps(hex) -> fls */ /* 0000 -> 0 */ /* 0001 -> 0 */ @@ -1707,7 +1722,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type) rte_cpu_to_le_16(hw->func_caps.fd_fltr_best_effort); /* Enable VLAN/UP trip */ - ret = ice_vsi_config_tc_queue_mapping(vsi, + ret = ice_vsi_config_tc_queue_mapping(hw, vsi, &vsi_ctx.info, ICE_DEFAULT_TCMAP); if (ret) { @@ -1731,7 +1746,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type) vsi_ctx.info.fd_options = rte_cpu_to_le_16(cfg); vsi_ctx.info.sw_id = hw->port_info->sw_id; vsi_ctx.info.sw_flags2 = ICE_AQ_VSI_SW_FLAG_LAN_ENA; - ret = ice_vsi_config_tc_queue_mapping(vsi, + ret = ice_vsi_config_tc_queue_mapping(hw, vsi, &vsi_ctx.info, ICE_DEFAULT_TCMAP); if (ret) { From patchwork Mon Aug 12 15:28:13 2024 X-Patchwork-Submitter: Bruce Richardson X-Patchwork-Id: 143096 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 14/16] net/ice: enhance Tx scheduler hierarchy support Date: Mon, 12 Aug 2024 16:28:13 +0100 Message-ID: <20240812152815.1132697-15-bruce.richardson@intel.com> In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> Increase the flexibility of the Tx scheduler hierarchy support in the driver.
If the HW/firmware allows it, allow creating up to 2k child nodes per scheduler node. Also expand the number of supported layers to the max available, rather than always just having 3 layers. One restriction on this change is that the topology needs to be configured and enabled before port queue setup, in many cases, and before port start in all cases. Signed-off-by: Bruce Richardson --- drivers/net/ice/ice_ethdev.c | 9 - drivers/net/ice/ice_ethdev.h | 15 +- drivers/net/ice/ice_rxtx.c | 10 + drivers/net/ice/ice_tm.c | 500 ++++++++++++++--------------------- 4 files changed, 216 insertions(+), 318 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index ab3f88fd7d..5a5967ff71 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3832,7 +3832,6 @@ ice_dev_start(struct rte_eth_dev *dev) int mask, ret; uint8_t timer = hw->func_caps.ts_func_info.tmr_index_owned; uint32_t pin_idx = ad->devargs.pin_idx; - struct rte_tm_error tm_err; ice_declare_bitmap(pmask, ICE_PROMISC_MAX); ice_zero_bitmap(pmask, ICE_PROMISC_MAX); @@ -3864,14 +3863,6 @@ ice_dev_start(struct rte_eth_dev *dev) } } - if (pf->tm_conf.committed) { - ret = ice_do_hierarchy_commit(dev, pf->tm_conf.clear_on_fail, &tm_err); - if (ret) { - PMD_DRV_LOG(ERR, "fail to commit Tx scheduler"); - goto rx_err; - } - } - ice_set_rx_function(dev); ice_set_tx_function(dev); diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index f31addb122..cb1a7e8e0d 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -479,14 +479,6 @@ struct ice_tm_node { struct ice_sched_node *sched_node; }; -/* node type of Traffic Manager */ -enum ice_tm_node_type { - ICE_TM_NODE_TYPE_PORT, - ICE_TM_NODE_TYPE_QGROUP, - ICE_TM_NODE_TYPE_QUEUE, - ICE_TM_NODE_TYPE_MAX, -}; - /* Struct to store all the Traffic Manager configuration. 
*/ struct ice_tm_conf { struct ice_shaper_profile_list shaper_profile_list; @@ -690,9 +682,6 @@ int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id, struct ice_rss_hash_cfg *cfg); void ice_tm_conf_init(struct rte_eth_dev *dev); void ice_tm_conf_uninit(struct rte_eth_dev *dev); -int ice_do_hierarchy_commit(struct rte_eth_dev *dev, - int clear_on_fail, - struct rte_tm_error *error); extern const struct rte_tm_ops ice_tm_ops; static inline int @@ -750,4 +739,8 @@ int rte_pmd_ice_dump_switch(uint16_t port, uint8_t **buff, uint32_t *size); __rte_experimental int rte_pmd_ice_dump_txsched(uint16_t port, bool detail, FILE *stream); + +int +ice_tm_setup_txq_node(struct ice_pf *pf, struct ice_hw *hw, uint16_t qid, uint32_t node_teid); + #endif /* _ICE_ETHDEV_H_ */ diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index a150d28e73..7a421bb364 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -747,6 +747,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) int err; struct ice_vsi *vsi; struct ice_hw *hw; + struct ice_pf *pf; struct ice_aqc_add_tx_qgrp *txq_elem; struct ice_tlan_ctx tx_ctx; int buf_len; @@ -777,6 +778,7 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) vsi = txq->vsi; hw = ICE_VSI_TO_HW(vsi); + pf = ICE_VSI_TO_PF(vsi); memset(&tx_ctx, 0, sizeof(tx_ctx)); txq_elem->num_txqs = 1; @@ -812,6 +814,14 @@ ice_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) /* store the schedule node id */ txq->q_teid = txq_elem->txqs[0].q_teid; + /* move the queue to correct position in hierarchy, if explicit hierarchy configured */ + if (pf->tm_conf.committed) + if (ice_tm_setup_txq_node(pf, hw, tx_queue_id, txq->q_teid) != 0) { + PMD_DRV_LOG(ERR, "Failed to set up txq traffic management node"); + rte_free(txq_elem); + return -EIO; + } + dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED; rte_free(txq_elem); diff --git a/drivers/net/ice/ice_tm.c 
b/drivers/net/ice/ice_tm.c index 459446a6b0..80039c8aff 100644 --- a/drivers/net/ice/ice_tm.c +++ b/drivers/net/ice/ice_tm.c @@ -1,17 +1,17 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2022 Intel Corporation */ +#include #include #include "ice_ethdev.h" #include "ice_rxtx.h" -#define MAX_CHILDREN_PER_SCHED_NODE 8 -#define MAX_CHILDREN_PER_TM_NODE 256 +#define MAX_CHILDREN_PER_TM_NODE 2048 static int ice_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail, - __rte_unused struct rte_tm_error *error); + struct rte_tm_error *error); static int ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, uint32_t weight, uint32_t level_id, @@ -86,9 +86,10 @@ ice_tm_conf_uninit(struct rte_eth_dev *dev) } static int -ice_node_param_check(struct ice_pf *pf, uint32_t node_id, +ice_node_param_check(uint32_t node_id, uint32_t priority, uint32_t weight, const struct rte_tm_node_params *params, + bool is_leaf, struct rte_tm_error *error) { /* checked all the unsupported parameter */ @@ -123,7 +124,7 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id, } /* for non-leaf node */ - if (node_id >= pf->dev_data->nb_tx_queues) { + if (!is_leaf) { if (params->nonleaf.wfq_weight_mode) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE; @@ -147,6 +148,11 @@ ice_node_param_check(struct ice_pf *pf, uint32_t node_id, } /* for leaf node */ + if (node_id >= RTE_MAX_QUEUES_PER_PORT) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "Node ID out of range for a leaf node."; + return -EINVAL; + } if (params->leaf.cman) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN; error->message = "Congestion management not supported"; @@ -193,11 +199,18 @@ find_node(struct ice_tm_node *root, uint32_t id) return NULL; } +static inline uint8_t +ice_get_leaf_level(struct ice_hw *hw) +{ + return hw->num_tx_sched_layers - 1 - hw->port_info->has_tc; +} + static int ice_node_type_get(struct rte_eth_dev *dev, uint32_t 
node_id, int *is_leaf, struct rte_tm_error *error) { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_tm_node *tm_node; if (!is_leaf || !error) @@ -217,7 +230,7 @@ ice_node_type_get(struct rte_eth_dev *dev, uint32_t node_id, return -EINVAL; } - if (tm_node->level == ICE_TM_NODE_TYPE_QUEUE) + if (tm_node->level == ice_get_leaf_level(hw)) *is_leaf = true; else *is_leaf = false; @@ -389,16 +402,28 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, struct rte_tm_error *error) { struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_tm_shaper_profile *shaper_profile = NULL; struct ice_tm_node *tm_node; - struct ice_tm_node *parent_node; + struct ice_tm_node *parent_node = NULL; int ret; if (!params || !error) return -EINVAL; - ret = ice_node_param_check(pf, node_id, priority, weight, - params, error); + if (parent_node_id != RTE_TM_NODE_ID_NULL) { + parent_node = find_node(pf->tm_conf.root, parent_node_id); + if (!parent_node) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; + error->message = "parent not exist"; + return -EINVAL; + } + } + if (level_id == RTE_TM_NODE_LEVEL_ID_ANY && parent_node != NULL) + level_id = parent_node->level + 1; + + ret = ice_node_param_check(node_id, priority, weight, + params, level_id == ice_get_leaf_level(hw), error); if (ret) return ret; @@ -424,9 +449,9 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, /* root node if not have a parent */ if (parent_node_id == RTE_TM_NODE_ID_NULL) { /* check level */ - if (level_id != ICE_TM_NODE_TYPE_PORT) { + if (level_id != 0) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; - error->message = "Wrong level"; + error->message = "Wrong level, root node (NULL parent) must be at level 0"; return -EINVAL; } @@ -445,7 +470,7 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, if 
(!tm_node) return -ENOMEM; tm_node->id = node_id; - tm_node->level = ICE_TM_NODE_TYPE_PORT; + tm_node->level = 0; tm_node->parent = NULL; tm_node->reference_count = 0; tm_node->shaper_profile = shaper_profile; @@ -458,52 +483,29 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, } /* check the parent node */ - parent_node = find_node(pf->tm_conf.root, parent_node_id); - if (!parent_node) { - error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; - error->message = "parent not exist"; - return -EINVAL; - } - if (parent_node->level != ICE_TM_NODE_TYPE_PORT && - parent_node->level != ICE_TM_NODE_TYPE_QGROUP) { + /* for n-level hierarchy, level n-1 is leaf, so last level with children is n-2 */ + if ((int)parent_node->level > hw->num_tx_sched_layers - 2) { error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; error->message = "parent is not valid"; return -EINVAL; } /* check level */ - if (level_id != RTE_TM_NODE_LEVEL_ID_ANY && - level_id != parent_node->level + 1) { + if (level_id != parent_node->level + 1) { error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; error->message = "Wrong level"; return -EINVAL; } - /* check the node number */ - if (parent_node->level == ICE_TM_NODE_TYPE_PORT) { - /* check the queue group number */ - if (parent_node->reference_count >= pf->dev_data->nb_tx_queues) { - error->type = RTE_TM_ERROR_TYPE_NODE_ID; - error->message = "too many queue groups"; - return -EINVAL; - } - } else { - /* check the queue number */ - if (parent_node->reference_count >= - MAX_CHILDREN_PER_SCHED_NODE) { - error->type = RTE_TM_ERROR_TYPE_NODE_ID; - error->message = "too many queues"; - return -EINVAL; - } - if (node_id >= pf->dev_data->nb_tx_queues) { - error->type = RTE_TM_ERROR_TYPE_NODE_ID; - error->message = "too large queue id"; - return -EINVAL; - } + /* check the max children allowed at this level */ + if (parent_node->reference_count >= hw->max_children[parent_node->level]) { + error->type = RTE_TM_ERROR_TYPE_CAPABILITIES; + error->message = 
"insufficient number of child nodes supported"; + return -EINVAL; } tm_node = rte_zmalloc(NULL, sizeof(struct ice_tm_node) + - sizeof(struct ice_tm_node *) * MAX_CHILDREN_PER_TM_NODE, + sizeof(struct ice_tm_node *) * hw->max_children[level_id], 0); if (!tm_node) return -ENOMEM; @@ -518,13 +520,11 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, (void *)((uint8_t *)tm_node + sizeof(struct ice_tm_node)); tm_node->parent->children[tm_node->parent->reference_count] = tm_node; - if (tm_node->priority != 0 && level_id != ICE_TM_NODE_TYPE_QUEUE && - level_id != ICE_TM_NODE_TYPE_QGROUP) + if (tm_node->priority != 0) PMD_DRV_LOG(WARNING, "priority != 0 not supported in level %d", level_id); - if (tm_node->weight != 1 && - level_id != ICE_TM_NODE_TYPE_QUEUE && level_id != ICE_TM_NODE_TYPE_QGROUP) + if (tm_node->weight != 1 && level_id == 0) PMD_DRV_LOG(WARNING, "weight != 1 not supported in level %d", level_id); @@ -569,7 +569,7 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id, } /* root node */ - if (tm_node->level == ICE_TM_NODE_TYPE_PORT) { + if (tm_node->level == 0) { rte_free(tm_node); pf->tm_conf.root = NULL; return 0; @@ -589,53 +589,6 @@ ice_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id, return 0; } -static int ice_move_recfg_lan_txq(struct rte_eth_dev *dev, - struct ice_sched_node *queue_sched_node, - struct ice_sched_node *dst_node, - uint16_t queue_id) -{ - struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ice_aqc_move_txqs_data *buf; - struct ice_sched_node *queue_parent_node; - uint8_t txqs_moved; - int ret = ICE_SUCCESS; - uint16_t buf_size = ice_struct_size(buf, txqs, 1); - - buf = (struct ice_aqc_move_txqs_data *)ice_malloc(hw, sizeof(*buf)); - if (buf == NULL) - return -ENOMEM; - - queue_parent_node = queue_sched_node->parent; - buf->src_teid = queue_parent_node->info.node_teid; - buf->dest_teid = dst_node->info.node_teid; - buf->txqs[0].q_teid = queue_sched_node->info.node_teid; - 
buf->txqs[0].txq_id = queue_id; - - ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50, - NULL, buf, buf_size, &txqs_moved, NULL); - if (ret || txqs_moved == 0) { - PMD_DRV_LOG(ERR, "move lan queue %u failed", queue_id); - rte_free(buf); - return ICE_ERR_PARAM; - } - - if (queue_parent_node->num_children > 0) { - queue_parent_node->num_children--; - queue_parent_node->children[queue_parent_node->num_children] = NULL; - } else { - PMD_DRV_LOG(ERR, "invalid children number %d for queue %u", - queue_parent_node->num_children, queue_id); - rte_free(buf); - return ICE_ERR_PARAM; - } - dst_node->children[dst_node->num_children++] = queue_sched_node; - queue_sched_node->parent = dst_node; - ice_sched_query_elem(hw, queue_sched_node->info.node_teid, &queue_sched_node->info); - - rte_free(buf); - return ret; -} - static int ice_set_node_rate(struct ice_hw *hw, struct ice_tm_node *tm_node, struct ice_sched_node *sched_node) @@ -723,240 +676,191 @@ static int ice_cfg_hw_node(struct ice_hw *hw, return 0; } -static struct ice_sched_node *ice_get_vsi_node(struct ice_hw *hw) +int +ice_tm_setup_txq_node(struct ice_pf *pf, struct ice_hw *hw, uint16_t qid, uint32_t teid) { - struct ice_sched_node *node = hw->port_info->root; - uint32_t vsi_layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET; - uint32_t i; + struct ice_sched_node *hw_node = ice_sched_find_node_by_teid(hw->port_info->root, teid); + struct ice_tm_node *sw_node = find_node(pf->tm_conf.root, qid); - for (i = 0; i < vsi_layer; i++) - node = node->children[0]; - - return node; -} - -static int ice_reset_noleaf_nodes(struct rte_eth_dev *dev) -{ - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ice_sched_node *vsi_node = ice_get_vsi_node(hw); - struct ice_tm_node *root = pf->tm_conf.root; - uint32_t i; - int ret; - - /* reset vsi_node */ - ret = ice_set_node_rate(hw, NULL, vsi_node); - if (ret) { - 
PMD_DRV_LOG(ERR, "reset vsi node failed"); - return ret; - } - - if (root == NULL) + /* not configured in hierarchy */ + if (sw_node == NULL) return 0; - for (i = 0; i < root->reference_count; i++) { - struct ice_tm_node *tm_node = root->children[i]; + sw_node->sched_node = hw_node; - if (tm_node->sched_node == NULL) - continue; + /* if the queue node has been put in the wrong place in hierarchy */ + if (hw_node->parent != sw_node->parent->sched_node) { + struct ice_aqc_move_txqs_data *buf; + uint8_t txqs_moved = 0; + uint16_t buf_size = ice_struct_size(buf, txqs, 1); + + buf = ice_malloc(hw, buf_size); + if (buf == NULL) + return -ENOMEM; - ret = ice_cfg_hw_node(hw, NULL, tm_node->sched_node); - if (ret) { - PMD_DRV_LOG(ERR, "reset queue group node %u failed", tm_node->id); - return ret; + struct ice_sched_node *parent = hw_node->parent; + struct ice_sched_node *new_parent = sw_node->parent->sched_node; + buf->src_teid = parent->info.node_teid; + buf->dest_teid = new_parent->info.node_teid; + buf->txqs[0].q_teid = hw_node->info.node_teid; + buf->txqs[0].txq_id = qid; + + int ret = ice_aq_move_recfg_lan_txq(hw, 1, true, false, false, false, 50, + NULL, buf, buf_size, &txqs_moved, NULL); + if (ret || txqs_moved == 0) { + PMD_DRV_LOG(ERR, "move lan queue %u failed", qid); + ice_free(hw, buf); + return ICE_ERR_PARAM; } - tm_node->sched_node = NULL; + + /* now update the ice_sched_nodes to match physical layout */ + new_parent->children[new_parent->num_children++] = hw_node; + hw_node->parent = new_parent; + ice_sched_query_elem(hw, hw_node->info.node_teid, &hw_node->info); + for (uint16_t i = 0; i < parent->num_children; i++) + if (parent->children[i] == hw_node) { + /* to remove, just overwrite the old node slot with the last ptr */ + parent->children[i] = parent->children[--parent->num_children]; + break; + } } - return 0; + return ice_cfg_hw_node(hw, sw_node, hw_node); } -static int ice_remove_leaf_nodes(struct rte_eth_dev *dev) +/* from a given node, recursively 
deletes all the nodes that belong to that vsi. + * Any nodes which can't be deleted because they have children belonging to a different + * VSI, are now also adjusted to belong to that VSI also + */ +static int +free_sched_node_recursive(struct ice_port_info *pi, const struct ice_sched_node *root, + struct ice_sched_node *node, uint8_t vsi_id) { - int ret = 0; - int i; + uint16_t i = 0; - for (i = 0; i < dev->data->nb_tx_queues; i++) { - ret = ice_tx_queue_stop(dev, i); - if (ret) { - PMD_DRV_LOG(ERR, "stop queue %u failed", i); - break; + while (i < node->num_children) { + if (node->children[i]->vsi_handle != vsi_id) { + i++; + continue; } + free_sched_node_recursive(pi, root, node->children[i], vsi_id); } - return ret; -} - -static int ice_add_leaf_nodes(struct rte_eth_dev *dev) -{ - int ret = 0; - int i; - - for (i = 0; i < dev->data->nb_tx_queues; i++) { - ret = ice_tx_queue_start(dev, i); - if (ret) { - PMD_DRV_LOG(ERR, "start queue %u failed", i); - break; - } + if (node != root) { + if (node->num_children == 0) + ice_free_sched_node(pi, node); + else + node->vsi_handle = node->children[0]->vsi_handle; } - return ret; + return 0; } -int ice_do_hierarchy_commit(struct rte_eth_dev *dev, - int clear_on_fail, - struct rte_tm_error *error) +static int +create_sched_node_recursive(struct ice_port_info *pi, struct ice_tm_node *sw_node, + struct ice_sched_node *hw_root, uint16_t *created) { - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ice_tm_node *root; - struct ice_sched_node *vsi_node = NULL; - struct ice_sched_node *queue_node; - struct ice_tx_queue *txq; - int ret_val = 0; - uint32_t i; - uint32_t idx_vsi_child; - uint32_t idx_qg; - uint32_t nb_vsi_child; - uint32_t nb_qg; - uint32_t qid; - uint32_t q_teid; - - /* remove leaf nodes */ - ret_val = ice_remove_leaf_nodes(dev); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "reset 
no-leaf nodes failed"); - goto fail_clear; - } - - /* reset no-leaf nodes. */ - ret_val = ice_reset_noleaf_nodes(dev); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "reset leaf nodes failed"); - goto add_leaf; - } - - /* config vsi node */ - vsi_node = ice_get_vsi_node(hw); - root = pf->tm_conf.root; - - ret_val = ice_set_node_rate(hw, root, vsi_node); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, - "configure vsi node %u bandwidth failed", - root->id); - goto add_leaf; - } - - /* config queue group nodes */ - nb_vsi_child = vsi_node->num_children; - nb_qg = vsi_node->children[0]->num_children; - - idx_vsi_child = 0; - idx_qg = 0; - - if (root == NULL) - goto commit; - - for (i = 0; i < root->reference_count; i++) { - struct ice_tm_node *tm_node = root->children[i]; - struct ice_tm_node *tm_child_node; - struct ice_sched_node *qgroup_sched_node = - vsi_node->children[idx_vsi_child]->children[idx_qg]; - uint32_t j; - - ret_val = ice_cfg_hw_node(hw, tm_node, qgroup_sched_node); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, - "configure queue group node %u failed", - tm_node->id); - goto reset_leaf; - } - - for (j = 0; j < tm_node->reference_count; j++) { - tm_child_node = tm_node->children[j]; - qid = tm_child_node->id; - ret_val = ice_tx_queue_start(dev, qid); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "start queue %u failed", qid); - goto reset_leaf; - } - txq = dev->data->tx_queues[qid]; - q_teid = txq->q_teid; - queue_node = ice_sched_get_node(hw->port_info, q_teid); - if (queue_node == NULL) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "get queue %u node failed", qid); - goto reset_leaf; - } - if (queue_node->info.parent_teid != qgroup_sched_node->info.node_teid) { - ret_val = ice_move_recfg_lan_txq(dev, queue_node, - qgroup_sched_node, qid); - if (ret_val) { - error->type = 
RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "move queue %u failed", qid); - goto reset_leaf; - } - } - ret_val = ice_cfg_hw_node(hw, tm_child_node, queue_node); - if (ret_val) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, - "configure queue group node %u failed", - tm_node->id); - goto reset_leaf; - } - } - - idx_qg++; - if (idx_qg >= nb_qg) { - idx_qg = 0; - idx_vsi_child++; + struct ice_sched_node *parent = sw_node->sched_node; + uint32_t teid; + uint16_t added; + + /* first create all child nodes */ + for (uint16_t i = 0; i < sw_node->reference_count; i++) { + struct ice_tm_node *tm_node = sw_node->children[i]; + int res = ice_sched_add_elems(pi, hw_root, + parent, parent->tx_sched_layer + 1, + 1 /* num nodes */, &added, &teid, + NULL /* no pre-alloc */); + if (res != 0) { + PMD_DRV_LOG(ERR, "Error with ice_sched_add_elems, adding child node to teid %u\n", + parent->info.node_teid); + return -1; } - if (idx_vsi_child >= nb_vsi_child) { - error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; - PMD_DRV_LOG(ERR, "too many queues"); - goto reset_leaf; + struct ice_sched_node *hw_node = ice_sched_find_node_by_teid(parent, teid); + if (ice_cfg_hw_node(pi->hw, tm_node, hw_node) != 0) { + PMD_DRV_LOG(ERR, "Error configuring node %u at layer %u", + teid, parent->tx_sched_layer + 1); + return -1; } + tm_node->sched_node = hw_node; + created[hw_node->tx_sched_layer]++; } -commit: - pf->tm_conf.committed = true; - pf->tm_conf.clear_on_fail = clear_on_fail; + /* if we have just created the child nodes in the q-group, i.e. last non-leaf layer, + * then just return, rather than trying to create leaf nodes. + * That is done later at queue start. 
+ */ + if (sw_node->level + 2 == ice_get_leaf_level(pi->hw)) + return 0; - return ret_val; + for (uint16_t i = 0; i < sw_node->reference_count; i++) { + if (sw_node->children[i]->reference_count == 0) + continue; -reset_leaf: - ice_remove_leaf_nodes(dev); -add_leaf: - ice_add_leaf_nodes(dev); - ice_reset_noleaf_nodes(dev); -fail_clear: - /* clear all the traffic manager configuration */ - if (clear_on_fail) { - ice_tm_conf_uninit(dev); - ice_tm_conf_init(dev); + if (create_sched_node_recursive(pi, sw_node->children[i], hw_root, created) < 0) + return -1; } - return ret_val; + return 0; } -static int ice_hierarchy_commit(struct rte_eth_dev *dev, - int clear_on_fail, - struct rte_tm_error *error) +static int +apply_topology_updates(struct rte_eth_dev *dev __rte_unused) { + return 0; +} + +static int +commit_new_hierarchy(struct rte_eth_dev *dev) +{ + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct ice_port_info *pi = hw->port_info; + struct ice_tm_node *sw_root = pf->tm_conf.root; + struct ice_sched_node *new_vsi_root = (pi->has_tc) ? 
pi->root->children[0] : pi->root; + uint16_t nodes_created_per_level[10] = {0}; /* counted per hw level, not per logical */ + uint8_t q_lvl = ice_get_leaf_level(hw); + uint8_t qg_lvl = q_lvl - 1; + + /* check if we have a previously applied topology */ + if (sw_root->sched_node != NULL) + return apply_topology_updates(dev); + + free_sched_node_recursive(pi, new_vsi_root, new_vsi_root, new_vsi_root->vsi_handle); + + sw_root->sched_node = new_vsi_root; + if (create_sched_node_recursive(pi, sw_root, new_vsi_root, nodes_created_per_level) < 0) + return -1; + for (uint16_t i = 0; i < RTE_DIM(nodes_created_per_level); i++) + PMD_DRV_LOG(DEBUG, "Created %u nodes at level %u\n", + nodes_created_per_level[i], i); + hw->vsi_ctx[pf->main_vsi->idx]->sched.vsi_node[0] = new_vsi_root; + + pf->main_vsi->nb_qps = + RTE_MIN(nodes_created_per_level[qg_lvl] * hw->max_children[qg_lvl], + hw->layer_info[q_lvl].max_device_nodes); + + pf->tm_conf.committed = true; /* set flag to be checked on queue start */ + + return ice_alloc_lan_q_ctx(hw, 0, 0, pf->main_vsi->nb_qps); +} - /* if device not started, simply set committed flag and return. */ - if (!dev->data->dev_started) { - pf->tm_conf.committed = true; - pf->tm_conf.clear_on_fail = clear_on_fail; - return 0; +static int +ice_hierarchy_commit(struct rte_eth_dev *dev, + int clear_on_fail, + struct rte_tm_error *error) +{ + RTE_SET_USED(error); + /* commit should only be done to topology before start!
*/ + if (dev->data->dev_started) + return -1; + + uint64_t start = rte_rdtsc(); + int ret = commit_new_hierarchy(dev); + if (ret < 0 && clear_on_fail) { + ice_tm_conf_uninit(dev); + ice_tm_conf_init(dev); } - - return ice_do_hierarchy_commit(dev, clear_on_fail, error); + uint64_t time = rte_rdtsc() - start; + PMD_DRV_LOG(DEBUG, "Time to apply hierarchy = %.1f\n", (float)time / rte_get_timer_hz()); + return ret; } From patchwork Mon Aug 12 15:28:14 2024
a="21743061" X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="21743061" Received: from orviesa004.jf.intel.com ([10.64.159.144]) by orvoesa108.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Aug 2024 08:28:43 -0700 X-CSE-ConnectionGUID: Nle6SatpSWqOywEx2ueuwg== X-CSE-MsgGUID: D0T8HpSdQ7COEoA5Z27duw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.09,283,1716274800"; d="scan'208";a="63222617" Received: from silpixa00400562.ir.intel.com (HELO silpixa00401385.ir.intel.com) ([10.237.214.39]) by orviesa004.jf.intel.com with ESMTP; 12 Aug 2024 08:28:43 -0700 From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 15/16] net/ice: add minimal capability reporting API Date: Mon, 12 Aug 2024 16:28:14 +0100 Message-ID: <20240812152815.1132697-16-bruce.richardson@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com> References: <20240807093407.452784-1-bruce.richardson@intel.com> <20240812152815.1132697-1-bruce.richardson@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Incomplete but reports number of available layers Signed-off-by: Bruce Richardson --- drivers/net/ice/ice_ethdev.h | 1 + drivers/net/ice/ice_tm.c | 17 +++++++++++++++++ 2 files changed, 18 insertions(+) diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h index cb1a7e8e0d..6bebc511e4 100644 --- a/drivers/net/ice/ice_ethdev.h +++ b/drivers/net/ice/ice_ethdev.h @@ -682,6 +682,7 @@ int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id, struct ice_rss_hash_cfg *cfg); void ice_tm_conf_init(struct rte_eth_dev *dev); void ice_tm_conf_uninit(struct rte_eth_dev *dev); + extern const struct rte_tm_ops ice_tm_ops; static inline int diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c index 
80039c8aff..3dcd091c38 100644 --- a/drivers/net/ice/ice_tm.c +++ b/drivers/net/ice/ice_tm.c @@ -33,8 +33,12 @@ static int ice_shaper_profile_add(struct rte_eth_dev *dev, static int ice_shaper_profile_del(struct rte_eth_dev *dev, uint32_t shaper_profile_id, struct rte_tm_error *error); +static int ice_tm_capabilities_get(struct rte_eth_dev *dev, + struct rte_tm_capabilities *cap, + struct rte_tm_error *error); const struct rte_tm_ops ice_tm_ops = { + .capabilities_get = ice_tm_capabilities_get, .shaper_profile_add = ice_shaper_profile_add, .shaper_profile_delete = ice_shaper_profile_del, .node_add = ice_tm_node_add, @@ -864,3 +868,16 @@ ice_hierarchy_commit(struct rte_eth_dev *dev, PMD_DRV_LOG(DEBUG, "Time to apply hierarchy = %.1f\n", (float)time / rte_get_timer_hz()); return ret; } + +static int +ice_tm_capabilities_get(struct rte_eth_dev *dev, struct rte_tm_capabilities *cap, + struct rte_tm_error *error) +{ + struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); + *cap = (struct rte_tm_capabilities){ + .n_levels_max = hw->num_tx_sched_layers - hw->port_info->has_tc, + }; + if (error) + error->type = RTE_TM_ERROR_TYPE_NONE; + return 0; +} From patchwork Mon Aug 12 15:28:15 2024
From: Bruce Richardson To: dev@dpdk.org Cc: Bruce Richardson Subject: [PATCH v3 16/16] net/ice: do early check on node level when adding Date: Mon, 12 Aug 2024 16:28:15 +0100 Message-ID: <20240812152815.1132697-17-bruce.richardson@intel.com> In-Reply-To: <20240812152815.1132697-1-bruce.richardson@intel.com>
When adding a new scheduler node, the parameters for leaf and non-leaf nodes differ, and which parameter checks are performed is determined by checking the node level, i.e. whether or not it is the lowest (leaf) node level. However, if the node level itself is incorrectly specified, the resulting error messages can be confusing: the user may explicitly add a leaf node, e.g. via the testpmd command for doing so, yet get error messages relevant only to non-leaf nodes because of an incorrect level parameter. We can avoid these confusing errors by checking that the level matches "parent->level + 1" before doing the more detailed parameter checks. Signed-off-by: Bruce Richardson --- drivers/net/ice/ice_tm.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/net/ice/ice_tm.c b/drivers/net/ice/ice_tm.c index 3dcd091c38..e05ad8a8e7 100644 --- a/drivers/net/ice/ice_tm.c +++ b/drivers/net/ice/ice_tm.c @@ -426,6 +426,13 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, if (level_id == RTE_TM_NODE_LEVEL_ID_ANY && parent_node != NULL) level_id = parent_node->level + 1; + /* check level */ + if (parent_node != NULL && level_id != parent_node->level + 1) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; + error->message = "Wrong level"; + return -EINVAL; + } + ret = ice_node_param_check(node_id, priority, weight, params, level_id == ice_get_leaf_level(hw), error); if (ret) @@ -493,12 +500,6 @@ ice_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, error->message = "parent is not valid"; return -EINVAL; } - /* check level */ - if (level_id != parent_node->level + 1) { - error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; - error->message = "Wrong level"; - return -EINVAL; - } /* check the max children allowed at this level */ if (parent_node->reference_count >= hw->max_children[parent_node->level]) {