From patchwork Fri Oct 9 12:39:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80158 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C3873A04BC; Fri, 9 Oct 2020 14:39:56 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 517C21D618; Fri, 9 Oct 2020 14:39:35 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 6D7F01D600 for ; Fri, 9 Oct 2020 14:39:32 +0200 (CEST) IronPort-SDR: qwhEh3AyVZr6CtzahHcEnt64StwlF/f7A0unZw+ZmXoAMd9/mdieeaRR7r0kI1jxxQ/N56xg2v OffrCNGEto1Q== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397569" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397569" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:31 -0700 IronPort-SDR: BJrz7iAe4u1FwtWE+5tKZFPG9yKoGBRek4Xw75JXzA3eZ2zXse9gtwaKf1dNmbakI2MidOj7TE Pk0tgQI2lYWg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914493" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:29 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:12 +0100 Message-Id: <20201009123919.43004-2-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> References: <20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 1/8] sched: add support profile table X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add subport profile table to internal port data structure and update the port config function. Signed-off-by: Savinay Dharmappa Signed-off-by: Jasvinder Singh --- doc/guides/rel_notes/release_20_11.rst | 3 + lib/librte_sched/rte_sched.c | 197 ++++++++++++++++++++++++- lib/librte_sched/rte_sched.h | 25 ++++ 3 files changed, 222 insertions(+), 3 deletions(-) diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 808bdc4e5..6968c27f6 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -283,6 +283,9 @@ ABI Changes * ``ethdev`` internal functions are marked with ``__rte_internal`` tag. +* ``sched`` changes + + * Added new fields to ``struct rte_sched_subport_port_params``. 
Known Issues ------------ diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c index 75be8b6bd..a44638f31 100644 --- a/lib/librte_sched/rte_sched.c +++ b/lib/librte_sched/rte_sched.c @@ -101,6 +101,16 @@ enum grinder_state { e_GRINDER_READ_MBUF }; +struct rte_sched_subport_profile { + /* Token bucket (TB) */ + uint64_t tb_period; + uint64_t tb_credits_per_period; + uint64_t tb_size; + + uint64_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; + uint64_t tc_period; +}; + struct rte_sched_grinder { /* Pipe cache */ uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE]; @@ -212,6 +222,8 @@ struct rte_sched_port { uint16_t pipe_queue[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; uint8_t pipe_tc[RTE_SCHED_QUEUES_PER_PIPE]; uint8_t tc_queue[RTE_SCHED_QUEUES_PER_PIPE]; + uint32_t n_subport_profiles; + uint32_t n_max_subport_profiles; uint64_t rate; uint32_t mtu; uint32_t frame_overhead; @@ -230,6 +242,7 @@ struct rte_sched_port { uint32_t subport_id; /* Large data structures */ + struct rte_sched_subport_profile *subport_profiles; struct rte_sched_subport *subports[0] __rte_cache_aligned; } __rte_cache_aligned; @@ -375,9 +388,61 @@ pipe_profile_check(struct rte_sched_pipe_params *params, return 0; } +static int +subport_profile_check(struct rte_sched_subport_profile_params *params, + uint64_t rate) +{ + uint32_t i; + + /* Check user parameters */ + if (params == NULL) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for parameter params\n", __func__); + return -EINVAL; + } + + if (params->tb_rate == 0 || params->tb_rate > rate) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for tb rate\n", __func__); + return -EINVAL; + } + + if (params->tb_size == 0) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for tb size\n", __func__); + return -EINVAL; + } + + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { + uint64_t tc_rate = params->tc_rate[i]; + + if (tc_rate == 0 || (tc_rate > params->tb_rate)) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for tc rate\n", __func__); + return -EINVAL; + } + } + + if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect tc rate(best effort)\n", __func__); + return -EINVAL; + } + + if (params->tc_period == 0) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for tc period\n", __func__); + return -EINVAL; + } + + return 0; +} + static int rte_sched_port_check_params(struct rte_sched_port_params *params) { + uint32_t i; + if (params == NULL) { RTE_LOG(ERR, SCHED, "%s: Incorrect value for parameter params\n", __func__); @@ -414,6 +479,29 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) return -EINVAL; } + if (params->subport_profiles == NULL || + params->n_subport_profiles == 0 || + params->n_max_subport_profiles == 0 || + params->n_subport_profiles > params->n_max_subport_profiles) { + RTE_LOG(ERR, SCHED, + "%s: Incorrect value for subport profiles\n", __func__); + return -EINVAL; + } + + for (i = 0; i < params->n_subport_profiles; i++) { + struct rte_sched_subport_profile_params *p = + params->subport_profiles + i; + int status; + + status = subport_profile_check(p, params->rate); + if (status != 0) { + RTE_LOG(ERR, SCHED, + "%s: subport profile check failed(%d)\n", + __func__, status); + return -EINVAL; + } + } + /* n_pipes_per_subport: non-zero, power of 2 */ if (params->n_pipes_per_subport == 0 || !rte_is_power_of_2(params->n_pipes_per_subport)) { @@ -555,6 +643,42 @@ rte_sched_port_log_pipe_profile(struct rte_sched_subport *subport, uint32_t i) p->wrr_cost[0], 
p->wrr_cost[1], p->wrr_cost[2], p->wrr_cost[3]); } +static void +rte_sched_port_log_subport_profile(struct rte_sched_port *port, uint32_t i) +{ + struct rte_sched_subport_profile *p = port->subport_profiles + i; + + RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n" + "Token bucket: period = %"PRIu64", credits per period = %"PRIu64"," + "size = %"PRIu64"\n" + "Traffic classes: period = %"PRIu64",\n" + "credits per period = [%"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64 + " %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64 + " %"PRIu64", %"PRIu64", %"PRIu64"]\n", + i, + + /* Token bucket */ + p->tb_period, + p->tb_credits_per_period, + p->tb_size, + + /* Traffic classes */ + p->tc_period, + p->tc_credits_per_period[0], + p->tc_credits_per_period[1], + p->tc_credits_per_period[2], + p->tc_credits_per_period[3], + p->tc_credits_per_period[4], + p->tc_credits_per_period[5], + p->tc_credits_per_period[6], + p->tc_credits_per_period[7], + p->tc_credits_per_period[8], + p->tc_credits_per_period[9], + p->tc_credits_per_period[10], + p->tc_credits_per_period[11], + p->tc_credits_per_period[12]); +} + static inline uint64_t rte_sched_time_ms_to_bytes(uint64_t time_ms, uint64_t rate) { @@ -623,6 +747,37 @@ rte_sched_pipe_profile_convert(struct rte_sched_subport *subport, dst->wrr_cost[3] = (uint8_t) wrr_cost[3]; } +static void +rte_sched_subport_profile_convert(struct rte_sched_subport_profile_params *src, + struct rte_sched_subport_profile *dst, + uint64_t rate) +{ + uint32_t i; + + /* Token Bucket */ + if (src->tb_rate == rate) { + dst->tb_credits_per_period = 1; + dst->tb_period = 1; + } else { + double tb_rate = (double) src->tb_rate + / (double) rate; + double d = RTE_SCHED_TB_RATE_CONFIG_ERR; + + rte_approx_64(tb_rate, d, &dst->tb_credits_per_period, + &dst->tb_period); + } + + dst->tb_size = src->tb_size; + + /* Traffic Classes */ + dst->tc_period = rte_sched_time_ms_to_bytes(src->tc_period, rate); + + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) + dst->tc_credits_per_period[i] + = rte_sched_time_ms_to_bytes(src->tc_period, + src->tc_rate[i]); +} + static void rte_sched_subport_config_pipe_profile_table(struct rte_sched_subport *subport, struct rte_sched_subport_params *params, uint64_t rate) @@ -647,6 +802,24 @@ rte_sched_subport_config_pipe_profile_table(struct rte_sched_subport *subport, } } +static void +rte_sched_port_config_subport_profile_table(struct rte_sched_port *port, + struct rte_sched_port_params *params, + uint64_t rate) +{ + uint32_t i; + + for (i = 0; i < port->n_subport_profiles; i++) { + struct rte_sched_subport_profile_params *src + = params->subport_profiles + i; + struct rte_sched_subport_profile *dst + = port->subport_profiles + i; + + rte_sched_subport_profile_convert(src, dst, rate); + rte_sched_port_log_subport_profile(port, i); + } +} + static int rte_sched_subport_check_params(struct rte_sched_subport_params *params, uint32_t n_max_pipes_per_subport, @@ -793,7 +966,7 @@ struct rte_sched_port * rte_sched_port_config(struct rte_sched_port_params *params) { struct rte_sched_port *port = NULL; - uint32_t size0, size1; + uint32_t size0, size1, size2; uint32_t cycles_per_byte; uint32_t i, j; int status; @@ -808,10 +981,21 @@ rte_sched_port_config(struct rte_sched_port_params *params) size0 = sizeof(struct rte_sched_port); size1 = params->n_subports_per_port * sizeof(struct rte_sched_subport *); + size2 = params->n_max_subport_profiles * + sizeof(struct rte_sched_subport_profile); /* Allocate memory to store the data structures */ - 
port = rte_zmalloc_socket("qos_params", size0 + size1, RTE_CACHE_LINE_SIZE, - params->socket); + port = rte_zmalloc_socket("qos_params", size0 + size1, + RTE_CACHE_LINE_SIZE, params->socket); + if (port == NULL) { + RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + + return NULL; + } + + /* Allocate memory to store the subport profile */ + port->subport_profiles = rte_zmalloc_socket("subport_profile", size2, + RTE_CACHE_LINE_SIZE, params->socket); if (port == NULL) { RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); @@ -820,6 +1004,8 @@ rte_sched_port_config(struct rte_sched_port_params *params) /* User parameters */ port->n_subports_per_port = params->n_subports_per_port; + port->n_subport_profiles = params->n_subport_profiles; + port->n_max_subport_profiles = params->n_max_subport_profiles; port->n_pipes_per_subport = params->n_pipes_per_subport; port->n_pipes_per_subport_log2 = __builtin_ctz(params->n_pipes_per_subport); @@ -850,6 +1036,9 @@ rte_sched_port_config(struct rte_sched_port_params *params) port->time_cpu_bytes = 0; port->time = 0; + /* Subport profile table */ + rte_sched_port_config_subport_profile_table(port, params, port->rate); + cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT) / params->rate; port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte); @@ -905,6 +1094,7 @@ rte_sched_port_free(struct rte_sched_port *port) for (i = 0; i < port->n_subports_per_port; i++) rte_sched_subport_free(port, port->subports[i]); + rte_free(port->subport_profiles); rte_free(port); } @@ -961,6 +1151,7 @@ rte_sched_free_memory(struct rte_sched_port *port, uint32_t n_subports) rte_sched_subport_free(port, subport); } + rte_free(port->subport_profiles); rte_free(port); } diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h index 8a5a93c98..39339b7f1 100644 --- a/lib/librte_sched/rte_sched.h +++ b/lib/librte_sched/rte_sched.h @@ -192,6 +192,20 @@ struct rte_sched_subport_params { #endif }; +struct rte_sched_subport_profile_params { + /** Token bucket rate (measured in bytes per second) */ + uint64_t tb_rate; + + /** Token bucket size (measured in credits) */ + uint64_t tb_size; + + /** Traffic class rates (measured in bytes per second) */ + uint64_t tc_rate[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; + + /** Enforcement period for rates (measured in milliseconds) */ + uint64_t tc_period; +}; + /** Subport statistics */ struct rte_sched_subport_stats { /** Number of packets successfully written */ @@ -254,6 +268,17 @@ struct rte_sched_port_params { /** Number of subports */ uint32_t n_subports_per_port; + /** subport profile table. + * Every pipe is configured using one of the profiles from this table. + */ + struct rte_sched_subport_profile_params *subport_profiles; + + /** Profiles in the pipe profile table */ + uint32_t n_subport_profiles; + + /** Max allowed profiles in the pipe profile table */ + uint32_t n_max_subport_profiles; + /** Maximum number of subport pipes. 
* This parameter is used to reserve a fixed number of bits * in struct rte_mbuf::sched.queue_id for the pipe_id for all From patchwork Fri Oct 9 12:39:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80159 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id CCF26A04BC; Fri, 9 Oct 2020 14:40:23 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 628091D624; Fri, 9 Oct 2020 14:39:39 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id C0CB51D613 for ; Fri, 9 Oct 2020 14:39:33 +0200 (CEST) IronPort-SDR: NJM2okPy6D/guQcM5atV3JiVoZdY7bM8PLQJ9VqYfMx6dQuyTXbBJQmZrAG+1r8qoLS9nhnf+m ch2O5JsqYsAg== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397574" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397574" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:33 -0700 IronPort-SDR: 3Mfjk47v3RWQcvk/yhvd5C9VDIriwELluVWtcRIB7JtRJBpCnf1gEPh4D8BIFKh09NcXZUNQuy xD7n2+Vb5ixQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914501" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:31 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:13 +0100 Message-Id: <20201009123919.43004-3-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> References: <20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 2/8] sched: introduce subport profile add function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" API to add new subport bandwidth profile. 
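For illustration only (not part of the patch): a minimal sketch of how an application could register an extra subport bandwidth profile at runtime with the new API, using the rte_sched_subport_profile_params structure introduced in the previous patch. The rate, size and period values below are placeholders.

```c
#include <rte_sched.h>

/* Sketch only: the numeric values are illustrative placeholders. */
static int
add_example_subport_profile(struct rte_sched_port *port, uint32_t *profile_id)
{
	struct rte_sched_subport_profile_params profile = {
		.tb_rate = 125000000,	/* token bucket rate, bytes per second */
		.tb_size = 1000000,	/* token bucket size, bytes */
		.tc_rate = {125000000, 125000000, 125000000, 125000000,
			125000000, 125000000, 125000000, 125000000,
			125000000, 125000000, 125000000, 125000000,
			125000000},	/* one rate per traffic class */
		.tc_period = 10,	/* enforcement period, milliseconds */
	};

	/* On success, *profile_id receives the index of the new profile. */
	return rte_sched_port_subport_profile_add(port, &profile, profile_id);
}
```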
Signed-off-by: Savinay Dharmappa Signed-off-by: Jasvinder Singh --- lib/librte_sched/rte_sched.c | 66 ++++++++++++++++++++++++++ lib/librte_sched/rte_sched.h | 23 +++++++++ lib/librte_sched/rte_sched_version.map | 2 + 3 files changed, 91 insertions(+) diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c index a44638f31..895b40d72 100644 --- a/lib/librte_sched/rte_sched.c +++ b/lib/librte_sched/rte_sched.c @@ -1528,6 +1528,72 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, return 0; } +int +rte_sched_port_subport_profile_add(struct rte_sched_port *port, + struct rte_sched_subport_profile_params *params, + uint32_t *subport_profile_id) +{ + int status; + uint32_t i; + struct rte_sched_subport_profile *dst; + + /* Port */ + if (port == NULL) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for parameter port\n", __func__); + return -EINVAL; + } + + if (params == NULL) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for parameter profile\n", __func__); + return -EINVAL; + } + + if (subport_profile_id == NULL) { + RTE_LOG(ERR, SCHED, "%s: " + "Incorrect value for parameter subport_profile_id\n", + __func__); + return -EINVAL; + } + + dst = port->subport_profiles + port->n_subport_profiles; + + /* Subport profiles exceeds the max limit */ + if (port->n_subport_profiles >= port->n_max_subport_profiles) { + RTE_LOG(ERR, SCHED, "%s: " + "Number of subport profiles exceeds the max limit\n", + __func__); + return -EINVAL; + } + + status = subport_profile_check(params, port->rate); + if (status != 0) { + RTE_LOG(ERR, SCHED, + "%s: subport profile check failed(%d)\n", __func__, status); + return -EINVAL; + } + + rte_sched_subport_profile_convert(params, dst, port->rate); + + /* Subport profile should not exists */ + for (i = 0; i < port->n_subport_profiles; i++) + if (memcmp(port->subport_profiles + i, + dst, sizeof(*dst)) == 0) { + RTE_LOG(ERR, SCHED, + "%s: subport profile exists\n", __func__); + return -EINVAL; + } + + /* Subport profile commit */ + *subport_profile_id = port->n_subport_profiles; + port->n_subport_profiles++; + + rte_sched_port_log_subport_profile(port, *subport_profile_id); + + return 0; +} + static inline uint32_t rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport, diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h index 39339b7f1..aede2e986 100644 --- a/lib/librte_sched/rte_sched.h +++ b/lib/librte_sched/rte_sched.h @@ -336,6 +336,29 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, struct rte_sched_pipe_params *params, uint32_t *pipe_profile_id); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Hierarchical scheduler subport bandwidth profile add + * Note that this function is safe to use in runtime for adding new + * subport bandwidth profile as it doesn't have any impact on hiearchical + * structure of the scheduler. 
+ * @param port + * Handle to port scheduler instance + * @param profile + * Subport bandwidth profile + * @param subport_profile_id + * Subport profile id + * @return + * 0 upon success, error code otherwise + */ +__rte_experimental +int +rte_sched_port_subport_profile_add(struct rte_sched_port *port, + struct rte_sched_subport_profile_params *profile, + uint32_t *subport_profile_id); + /** * Hierarchical scheduler subport configuration * diff --git a/lib/librte_sched/rte_sched_version.map b/lib/librte_sched/rte_sched_version.map index 3faef6f0a..ace284b7d 100644 --- a/lib/librte_sched/rte_sched_version.map +++ b/lib/librte_sched/rte_sched_version.map @@ -28,4 +28,6 @@ EXPERIMENTAL { global: rte_sched_subport_pipe_profile_add; + # added in 20.11 + rte_sched_port_subport_profile_add; }; From patchwork Fri Oct 9 12:39:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80160 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 525D9A04BC; Fri, 9 Oct 2020 14:40:43 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 13E5A1D62C; Fri, 9 Oct 2020 14:39:41 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 3ACDE1D616 for ; Fri, 9 Oct 2020 14:39:35 +0200 (CEST) IronPort-SDR: dKX+AhPpchdRlIumrD105irEr+PKW00AgFP4ky9gcuxdgtJqtGRSs2ezf+1MQWzIS4jiV43RWE YNxA9nVtBurw== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397579" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397579" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:34 -0700 IronPort-SDR: DZHV16p3b9K882TMgfrOAWWBU1ACXAN8lxr/1SmS97jlh4Qcnuf45nKYzXyCrs8SRIT5SfpFPI meRVb+Ixfrcw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914514" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:33 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:14 +0100 Message-Id: <20201009123919.43004-4-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> References: <20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 3/8] sched: update subport rate dynamically X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support to update subport rate dynamically. 
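A minimal sketch (not part of the patch) of how an application could combine rte_sched_port_subport_profile_add() with the extended rte_sched_subport_config() to change a subport's rate at runtime. It assumes the subport was already configured once with non-NULL params, so NULL is passed on the re-configuration call and only the profile id takes effect, as described by the updated API documentation later in this patch.

```c
#include <rte_sched.h>

/*
 * Sketch only: register a new bandwidth profile and apply it to a subport
 * that has already been configured once. Per the updated API, the subport
 * params pointer is required only on the first call for a given subport;
 * later calls may pass NULL so that only the profile id is updated.
 */
static int
switch_subport_profile(struct rte_sched_port *port, uint32_t subport_id,
	struct rte_sched_subport_profile_params *new_profile)
{
	uint32_t profile_id;
	int status;

	status = rte_sched_port_subport_profile_add(port, new_profile,
			&profile_id);
	if (status != 0)
		return status;

	/* Re-configure the subport with the new profile; params is NULL
	 * because this subport was already initialized earlier.
	 */
	return rte_sched_subport_config(port, subport_id, NULL, profile_id);
}
```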
Signed-off-by: Savinay Dharmappa --- app/test/test_sched.c | 2 +- doc/guides/rel_notes/deprecation.rst | 6 - doc/guides/rel_notes/release_20_11.rst | 9 + drivers/net/softnic/rte_eth_softnic_tm.c | 6 +- examples/ip_pipeline/tmgr.c | 6 +- examples/qos_sched/init.c | 3 +- lib/librte_sched/rte_sched.c | 415 ++++++++++------------- lib/librte_sched/rte_sched.h | 13 +- 8 files changed, 213 insertions(+), 247 deletions(-) diff --git a/app/test/test_sched.c b/app/test/test_sched.c index fc31080ef..5e5c2a59b 100644 --- a/app/test/test_sched.c +++ b/app/test/test_sched.c @@ -138,7 +138,7 @@ test_sched(void) port = rte_sched_port_config(&port_param); TEST_ASSERT_NOT_NULL(port, "Error config sched port\n"); - err = rte_sched_subport_config(port, SUBPORT, subport_param); + err = rte_sched_subport_config(port, SUBPORT, subport_param, 0); TEST_ASSERT_SUCCESS(err, "Error config sched, err=%d\n", err); for (pipe = 0; pipe < subport_param[0].n_pipes_per_subport_enabled; pipe++) { diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 584e72087..f7363a585 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -212,12 +212,6 @@ Deprecation Notices in "rte_sched.h". These changes are aligned to improvements suggested in the RFC https://mails.dpdk.org/archives/dev/2018-November/120035.html. -* sched: To allow dynamic configuration of the subport bandwidth profile, - changes will be made to data structures ``rte_sched_subport_params``, - ``rte_sched_port_params`` and new data structure, API functions will be - defined in ``rte_sched.h``. These changes are aligned as suggested in the - RFC https://mails.dpdk.org/archives/dev/2020-July/175161.html - * metrics: The function ``rte_metrics_init`` will have a non-void return in order to notify errors instead of calling ``rte_exit``. diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 6968c27f6..85d56d46c 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -136,6 +136,12 @@ New Features * Extern objects and functions can be plugged into the pipeline. * Transaction-oriented table updates. +* **Added support to update subport bandwidth dynamically.** + + * Added new API ``rte_sched_port_subport_profile_add`` to add new + subport bandwidth profile to subport porfile table at runtime. + + * Added support to update subport rate dynamically. Removed Items ------------- @@ -287,6 +293,9 @@ ABI Changes * Added new fields to ``struct rte_sched_subport_port_params``. + * Added ``subport_profile_id`` as a argument to function + ``rte_sched_subport_config``. + Known Issues ------------ diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c index d30976378..5199dd2cd 100644 --- a/drivers/net/softnic/rte_eth_softnic_tm.c +++ b/drivers/net/softnic/rte_eth_softnic_tm.c @@ -92,7 +92,7 @@ softnic_tmgr_port_create(struct pmd_internals *p, status = rte_sched_subport_config(sched, subport_id, - &t->subport_params[subport_id]); + &t->subport_params[subport_id], 0); if (status) { rte_sched_port_free(sched); return NULL; @@ -1141,7 +1141,7 @@ update_subport_tc_rate(struct rte_eth_dev *dev, /* Update the subport configuration. */ if (rte_sched_subport_config(SCHED(p), - subport_id, &subport_params)) + subport_id, &subport_params, 0)) return -1; /* Commit changes. */ @@ -2912,7 +2912,7 @@ update_subport_rate(struct rte_eth_dev *dev, /* Update the subport configuration. 
*/ if (rte_sched_subport_config(SCHED(p), subport_id, - &subport_params)) + &subport_params, 0)) return -1; /* Commit changes. */ diff --git a/examples/ip_pipeline/tmgr.c b/examples/ip_pipeline/tmgr.c index 91ccbf60f..46c6a83a4 100644 --- a/examples/ip_pipeline/tmgr.c +++ b/examples/ip_pipeline/tmgr.c @@ -119,7 +119,8 @@ tmgr_port_create(const char *name, struct tmgr_port_params *params) status = rte_sched_subport_config( s, i, - &subport_profile[0]); + &subport_profile[0], + 0); if (status) { rte_sched_port_free(s); @@ -180,7 +181,8 @@ tmgr_subport_config(const char *port_name, status = rte_sched_subport_config( port->s, subport_id, - &subport_profile[subport_profile_id]); + &subport_profile[subport_profile_id], + 0); return status; } diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c index 06328ddb2..b188c624b 100644 --- a/examples/qos_sched/init.c +++ b/examples/qos_sched/init.c @@ -314,7 +314,8 @@ app_init_sched_port(uint32_t portid, uint32_t socketid) } for (subport = 0; subport < port_params.n_subports_per_port; subport ++) { - err = rte_sched_subport_config(port, subport, &subport_params[subport]); + err = rte_sched_subport_config(port, subport, + &subport_params[subport], 0); if (err) { rte_exit(EXIT_FAILURE, "Unable to config sched subport %u, err=%d\n", subport, err); diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c index 895b40d72..7c5688068 100644 --- a/lib/librte_sched/rte_sched.c +++ b/lib/librte_sched/rte_sched.c @@ -123,6 +123,7 @@ struct rte_sched_grinder { uint32_t productive; uint32_t pindex; struct rte_sched_subport *subport; + struct rte_sched_subport_profile *subport_params; struct rte_sched_pipe *pipe; struct rte_sched_pipe_profile *pipe_params; @@ -151,16 +152,11 @@ struct rte_sched_grinder { struct rte_sched_subport { /* Token bucket (TB) */ uint64_t tb_time; /* time of last update */ - uint64_t tb_period; - uint64_t tb_credits_per_period; - uint64_t tb_size; uint64_t tb_credits; /* Traffic classes (TCs) */ uint64_t tc_time; /* time of next update */ - uint64_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; uint64_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; - uint64_t tc_period; /* TC oversubscription */ uint64_t tc_ov_wm; @@ -174,6 +170,8 @@ struct rte_sched_subport { /* Statistics */ struct rte_sched_subport_stats stats __rte_cache_aligned; + /* subport profile */ + uint32_t profile; /* Subport pipes */ uint32_t n_pipes_per_subport_enabled; uint32_t n_pipe_profiles; @@ -834,18 +832,6 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, return -EINVAL; } - if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb rate\n", __func__); - return -EINVAL; - } - - if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb size\n", __func__); - return -EINVAL; - } - /* qsize: if non-zero, power of 2, * no bigger than 32K (due to 16-bit read/write pointers) */ @@ -859,29 +845,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, } } - for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { - uint64_t tc_rate = params->tc_rate[i]; - uint16_t qsize = params->qsize[i]; - - if ((qsize == 0 && tc_rate != 0) || - (qsize != 0 && tc_rate == 0) || - (tc_rate > params->tb_rate)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc rate\n", __func__); - return -EINVAL; - } - } - - if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0 || - params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, - 
"%s: Incorrect qsize or tc rate(best effort)\n", __func__); - return -EINVAL; - } - - if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc period\n", __func__); + if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { + RTE_LOG(ERR, SCHED, "%s: Incorrect qsize\n", __func__); return -EINVAL; } @@ -1098,48 +1063,6 @@ rte_sched_port_free(struct rte_sched_port *port) rte_free(port); } -static void -rte_sched_port_log_subport_config(struct rte_sched_port *port, uint32_t i) -{ - struct rte_sched_subport *s = port->subports[i]; - - RTE_LOG(DEBUG, SCHED, "Low level config for subport %u:\n" - " Token bucket: period = %"PRIu64", credits per period = %"PRIu64 - ", size = %"PRIu64"\n" - " Traffic classes: period = %"PRIu64"\n" - " credits per period = [%"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64 - ", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64 - ", %"PRIu64", %"PRIu64", %"PRIu64"]\n" - " Best effort traffic class oversubscription: wm min = %"PRIu64 - ", wm max = %"PRIu64"\n", - i, - - /* Token bucket */ - s->tb_period, - s->tb_credits_per_period, - s->tb_size, - - /* Traffic classes */ - s->tc_period, - s->tc_credits_per_period[0], - s->tc_credits_per_period[1], - s->tc_credits_per_period[2], - s->tc_credits_per_period[3], - s->tc_credits_per_period[4], - s->tc_credits_per_period[5], - s->tc_credits_per_period[6], - s->tc_credits_per_period[7], - s->tc_credits_per_period[8], - s->tc_credits_per_period[9], - s->tc_credits_per_period[10], - s->tc_credits_per_period[11], - s->tc_credits_per_period[12], - - /* Best effort traffic class oversubscription */ - s->tc_ov_wm_min, - s->tc_ov_wm_max); -} - static void rte_sched_free_memory(struct rte_sched_port *port, uint32_t n_subports) { @@ -1158,10 +1081,12 @@ rte_sched_free_memory(struct rte_sched_port *port, uint32_t n_subports) int rte_sched_subport_config(struct rte_sched_port *port, uint32_t subport_id, - struct rte_sched_subport_params *params) + struct rte_sched_subport_params *params, + uint32_t subport_profile_id) { struct rte_sched_subport *s = NULL; uint32_t n_subports = subport_id; + struct rte_sched_subport_profile *profile; uint32_t n_subport_pipe_queues, i; uint32_t size0, size1, bmp_mem_size; int status; @@ -1181,165 +1106,183 @@ rte_sched_subport_config(struct rte_sched_port *port, return -EINVAL; } - status = rte_sched_subport_check_params(params, - port->n_pipes_per_subport, - port->rate); - if (status != 0) { - RTE_LOG(NOTICE, SCHED, - "%s: Port scheduler params check failed (%d)\n", - __func__, status); - + if (subport_profile_id >= port->n_max_subport_profiles) { + RTE_LOG(ERR, SCHED, "%s: " + "Number of subport profile exceeds the max limit\n", + __func__); rte_sched_free_memory(port, n_subports); return -EINVAL; } - /* Determine the amount of memory to allocate */ - size0 = sizeof(struct rte_sched_subport); - size1 = rte_sched_subport_get_array_base(params, - e_RTE_SCHED_SUBPORT_ARRAY_TOTAL); + /** Memory is allocated only on first invocation of the api for a + * given subport. Subsequent invocation on same subport will just + * update subport bandwidth parameter. 
+ **/ + if (port->subports[subport_id] == NULL) { - /* Allocate memory to store the data structures */ - s = rte_zmalloc_socket("subport_params", size0 + size1, - RTE_CACHE_LINE_SIZE, port->socket); - if (s == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Memory allocation fails\n", __func__); + status = rte_sched_subport_check_params(params, + port->n_pipes_per_subport, + port->rate); + if (status != 0) { + RTE_LOG(NOTICE, SCHED, + "%s: Port scheduler params check failed (%d)\n", + __func__, status); - rte_sched_free_memory(port, n_subports); - return -ENOMEM; - } + rte_sched_free_memory(port, n_subports); + return -EINVAL; + } - n_subports++; + /* Determine the amount of memory to allocate */ + size0 = sizeof(struct rte_sched_subport); + size1 = rte_sched_subport_get_array_base(params, + e_RTE_SCHED_SUBPORT_ARRAY_TOTAL); - /* Port */ - port->subports[subport_id] = s; + /* Allocate memory to store the data structures */ + s = rte_zmalloc_socket("subport_params", size0 + size1, + RTE_CACHE_LINE_SIZE, port->socket); + if (s == NULL) { + RTE_LOG(ERR, SCHED, + "%s: Memory allocation fails\n", __func__); - /* Token Bucket (TB) */ - if (params->tb_rate == port->rate) { - s->tb_credits_per_period = 1; - s->tb_period = 1; - } else { - double tb_rate = ((double) params->tb_rate) / ((double) port->rate); - double d = RTE_SCHED_TB_RATE_CONFIG_ERR; + rte_sched_free_memory(port, n_subports); + return -ENOMEM; + } - rte_approx_64(tb_rate, d, &s->tb_credits_per_period, &s->tb_period); - } + n_subports++; - s->tb_size = params->tb_size; - s->tb_time = port->time; - s->tb_credits = s->tb_size / 2; + subport_profile_id = 0; - /* Traffic Classes (TCs) */ - s->tc_period = rte_sched_time_ms_to_bytes(params->tc_period, port->rate); - for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { - if (params->qsize[i]) - s->tc_credits_per_period[i] - = rte_sched_time_ms_to_bytes(params->tc_period, - params->tc_rate[i]); - } - s->tc_time = port->time + s->tc_period; - for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) - if (params->qsize[i]) - s->tc_credits[i] = s->tc_credits_per_period[i]; + /* Port */ + port->subports[subport_id] = s; - /* compile time checks */ - RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS == 0); - RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS & - (RTE_SCHED_PORT_N_GRINDERS - 1)); + s->tb_time = port->time; - /* User parameters */ - s->n_pipes_per_subport_enabled = params->n_pipes_per_subport_enabled; - memcpy(s->qsize, params->qsize, sizeof(params->qsize)); - s->n_pipe_profiles = params->n_pipe_profiles; - s->n_max_pipe_profiles = params->n_max_pipe_profiles; + /* compile time checks */ + RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS == 0); + RTE_BUILD_BUG_ON(RTE_SCHED_PORT_N_GRINDERS & + (RTE_SCHED_PORT_N_GRINDERS - 1)); + + /* User parameters */ + s->n_pipes_per_subport_enabled = + params->n_pipes_per_subport_enabled; + memcpy(s->qsize, params->qsize, sizeof(params->qsize)); + s->n_pipe_profiles = params->n_pipe_profiles; + s->n_max_pipe_profiles = params->n_max_pipe_profiles; #ifdef RTE_SCHED_RED - for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { - uint32_t j; + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { + uint32_t j; - for (j = 0; j < RTE_COLORS; j++) { + for (j = 0; j < RTE_COLORS; j++) { /* if min/max are both zero, then RED is disabled */ - if ((params->red_params[i][j].min_th | - params->red_params[i][j].max_th) == 0) { - continue; + if ((params->red_params[i][j].min_th | + params->red_params[i][j].max_th) == 0) { + continue; + } + + if 
(rte_red_config_init(&s->red_config[i][j], + params->red_params[i][j].wq_log2, + params->red_params[i][j].min_th, + params->red_params[i][j].max_th, + params->red_params[i][j].maxp_inv) != 0) { + rte_sched_free_memory(port, n_subports); + + RTE_LOG(NOTICE, SCHED, + "%s: RED configuration init fails\n", + __func__); + return -EINVAL; + } } + } +#endif - if (rte_red_config_init(&s->red_config[i][j], - params->red_params[i][j].wq_log2, - params->red_params[i][j].min_th, - params->red_params[i][j].max_th, - params->red_params[i][j].maxp_inv) != 0) { - rte_sched_free_memory(port, n_subports); + /* Scheduling loop detection */ + s->pipe_loop = RTE_SCHED_PIPE_INVALID; + s->pipe_exhaustion = 0; + + /* Grinders */ + s->busy_grinders = 0; + + /* Queue base calculation */ + rte_sched_subport_config_qsize(s); + + /* Large data structures */ + s->pipe = (struct rte_sched_pipe *) + (s->memory + rte_sched_subport_get_array_base(params, + e_RTE_SCHED_SUBPORT_ARRAY_PIPE)); + s->queue = (struct rte_sched_queue *) + (s->memory + rte_sched_subport_get_array_base(params, + e_RTE_SCHED_SUBPORT_ARRAY_QUEUE)); + s->queue_extra = (struct rte_sched_queue_extra *) + (s->memory + rte_sched_subport_get_array_base(params, + e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_EXTRA)); + s->pipe_profiles = (struct rte_sched_pipe_profile *) + (s->memory + rte_sched_subport_get_array_base(params, + e_RTE_SCHED_SUBPORT_ARRAY_PIPE_PROFILES)); + s->bmp_array = s->memory + rte_sched_subport_get_array_base( + params, e_RTE_SCHED_SUBPORT_ARRAY_BMP_ARRAY); + s->queue_array = (struct rte_mbuf **) + (s->memory + rte_sched_subport_get_array_base(params, + e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_ARRAY)); + + /* Pipe profile table */ + rte_sched_subport_config_pipe_profile_table(s, params, + port->rate); + + /* Bitmap */ + n_subport_pipe_queues = rte_sched_subport_pipe_queues(s); + bmp_mem_size = rte_bitmap_get_memory_footprint( + n_subport_pipe_queues); + s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array, + bmp_mem_size); + if (s->bmp == NULL) { + RTE_LOG(ERR, SCHED, + "%s: Subport bitmap init error\n", __func__); - RTE_LOG(NOTICE, SCHED, - "%s: RED configuration init fails\n", __func__); - return -EINVAL; - } + rte_sched_free_memory(port, n_subports); + return -EINVAL; } - } -#endif - /* Scheduling loop detection */ - s->pipe_loop = RTE_SCHED_PIPE_INVALID; - s->pipe_exhaustion = 0; + for (i = 0; i < RTE_SCHED_PORT_N_GRINDERS; i++) + s->grinder_base_bmp_pos[i] = RTE_SCHED_PIPE_INVALID; - /* Grinders */ - s->busy_grinders = 0; +#ifdef RTE_SCHED_SUBPORT_TC_OV + /* TC oversubscription */ + s->tc_ov_wm_min = port->mtu; + s->tc_ov_wm = s->tc_ov_wm_max; + s->tc_ov_period_id = 0; + s->tc_ov = 0; + s->tc_ov_n = 0; + s->tc_ov_rate = 0; +#endif + } - /* Queue base calculation */ - rte_sched_subport_config_qsize(s); + { + /* update subport parameters from subport profile table*/ + profile = port->subport_profiles + subport_profile_id; - /* Large data structures */ - s->pipe = (struct rte_sched_pipe *) - (s->memory + rte_sched_subport_get_array_base(params, - e_RTE_SCHED_SUBPORT_ARRAY_PIPE)); - s->queue = (struct rte_sched_queue *) - (s->memory + rte_sched_subport_get_array_base(params, - e_RTE_SCHED_SUBPORT_ARRAY_QUEUE)); - s->queue_extra = (struct rte_sched_queue_extra *) - (s->memory + rte_sched_subport_get_array_base(params, - e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_EXTRA)); - s->pipe_profiles = (struct rte_sched_pipe_profile *) - (s->memory + rte_sched_subport_get_array_base(params, - e_RTE_SCHED_SUBPORT_ARRAY_PIPE_PROFILES)); - s->bmp_array = s->memory + 
rte_sched_subport_get_array_base(params, - e_RTE_SCHED_SUBPORT_ARRAY_BMP_ARRAY); - s->queue_array = (struct rte_mbuf **) - (s->memory + rte_sched_subport_get_array_base(params, - e_RTE_SCHED_SUBPORT_ARRAY_QUEUE_ARRAY)); - - /* Pipe profile table */ - rte_sched_subport_config_pipe_profile_table(s, params, port->rate); + s = port->subports[subport_id]; - /* Bitmap */ - n_subport_pipe_queues = rte_sched_subport_pipe_queues(s); - bmp_mem_size = rte_bitmap_get_memory_footprint(n_subport_pipe_queues); - s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array, - bmp_mem_size); - if (s->bmp == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Subport bitmap init error\n", __func__); + s->tb_credits = profile->tb_size / 2; - rte_sched_free_memory(port, n_subports); - return -EINVAL; - } + s->tc_time = port->time + profile->tc_period; - for (i = 0; i < RTE_SCHED_PORT_N_GRINDERS; i++) - s->grinder_base_bmp_pos[i] = RTE_SCHED_PIPE_INVALID; + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) + if (s->qsize[i]) + s->tc_credits[i] = + profile->tc_credits_per_period[i]; + else + profile->tc_credits_per_period[i] = 0; #ifdef RTE_SCHED_SUBPORT_TC_OV - /* TC oversubscription */ - s->tc_ov_wm_min = port->mtu; - s->tc_ov_wm_max = rte_sched_time_ms_to_bytes(params->tc_period, - s->pipe_tc_be_rate_max); - s->tc_ov_wm = s->tc_ov_wm_max; - s->tc_ov_period_id = 0; - s->tc_ov = 0; - s->tc_ov_n = 0; - s->tc_ov_rate = 0; + s->tc_ov_wm_max = rte_sched_time_ms_to_bytes(profile->tc_period, + s->pipe_tc_be_rate_max); #endif + s->profile = subport_profile_id; - rte_sched_port_log_subport_config(port, subport_id); + } + + rte_sched_port_log_subport_profile(port, subport_profile_id); return 0; } @@ -1351,6 +1294,7 @@ rte_sched_pipe_config(struct rte_sched_port *port, int32_t pipe_profile) { struct rte_sched_subport *s; + struct rte_sched_subport_profile *sp; struct rte_sched_pipe *p; struct rte_sched_pipe_profile *params; uint32_t n_subports = subport_id + 1; @@ -1391,14 +1335,15 @@ rte_sched_pipe_config(struct rte_sched_port *port, return -EINVAL; } + sp = port->subport_profiles + s->profile; /* Handle the case when pipe already has a valid configuration */ p = s->pipe + pipe_id; if (p->tb_time) { params = s->pipe_profiles + p->profile; double subport_tc_be_rate = - (double) s->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] - / (double) s->tc_period; + (double)sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] + / (double) sp->tc_period; double pipe_tc_be_rate = (double) params->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] / (double) params->tc_period; @@ -1440,8 +1385,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, { /* Subport best effort tc oversubscription */ double subport_tc_be_rate = - (double) s->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] - / (double) s->tc_period; + (double)sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] + / (double) sp->tc_period; double pipe_tc_be_rate = (double) params->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] / (double) params->tc_period; @@ -2229,14 +2174,15 @@ grinder_credits_update(struct rte_sched_port *port, struct rte_sched_grinder *grinder = subport->grinder + pos; struct rte_sched_pipe *pipe = grinder->pipe; struct rte_sched_pipe_profile *params = grinder->pipe_params; + struct rte_sched_subport_profile *sp = grinder->subport_params; uint64_t n_periods; uint32_t i; /* Subport TB */ - n_periods = (port->time - subport->tb_time) / subport->tb_period; - subport->tb_credits += n_periods * subport->tb_credits_per_period; - subport->tb_credits = 
RTE_MIN(subport->tb_credits, subport->tb_size); - subport->tb_time += n_periods * subport->tb_period; + n_periods = (port->time - subport->tb_time) / sp->tb_period; + subport->tb_credits += n_periods * sp->tb_credits_per_period; + subport->tb_credits = RTE_MIN(subport->tb_credits, sp->tb_size); + subport->tb_time += n_periods * sp->tb_period; /* Pipe TB */ n_periods = (port->time - pipe->tb_time) / params->tb_period; @@ -2247,9 +2193,9 @@ grinder_credits_update(struct rte_sched_port *port, /* Subport TCs */ if (unlikely(port->time >= subport->tc_time)) { for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) - subport->tc_credits[i] = subport->tc_credits_per_period[i]; + subport->tc_credits[i] = sp->tc_credits_per_period[i]; - subport->tc_time = port->time + subport->tc_period; + subport->tc_time = port->time + sp->tc_period; } /* Pipe TCs */ @@ -2265,8 +2211,10 @@ grinder_credits_update(struct rte_sched_port *port, static inline uint64_t grinder_tc_ov_credits_update(struct rte_sched_port *port, - struct rte_sched_subport *subport) + struct rte_sched_subport *subport, uint32_t pos) { + struct rte_sched_grinder *grinder = subport->grinder + pos; + struct rte_sched_subport_profile *sp = grinder->subport_params; uint64_t tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; uint64_t tc_consumption = 0, tc_ov_consumption_max; uint64_t tc_ov_wm = subport->tc_ov_wm; @@ -2276,17 +2224,17 @@ grinder_tc_ov_credits_update(struct rte_sched_port *port, return subport->tc_ov_wm_max; for (i = 0; i < RTE_SCHED_TRAFFIC_CLASS_BE; i++) { - tc_ov_consumption[i] = - subport->tc_credits_per_period[i] - subport->tc_credits[i]; + tc_ov_consumption[i] = sp->tc_credits_per_period[i] + - subport->tc_credits[i]; tc_consumption += tc_ov_consumption[i]; } tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASS_BE] = - subport->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] - + sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] - subport->tc_credits[RTE_SCHED_TRAFFIC_CLASS_BE]; tc_ov_consumption_max = - subport->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] - + sp->tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASS_BE] - tc_consumption; if (tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASS_BE] > @@ -2312,14 +2260,15 @@ grinder_credits_update(struct rte_sched_port *port, struct rte_sched_grinder *grinder = subport->grinder + pos; struct rte_sched_pipe *pipe = grinder->pipe; struct rte_sched_pipe_profile *params = grinder->pipe_params; + struct rte_sched_subport_profile *sp = grinder->subport_params; uint64_t n_periods; uint32_t i; /* Subport TB */ - n_periods = (port->time - subport->tb_time) / subport->tb_period; - subport->tb_credits += n_periods * subport->tb_credits_per_period; - subport->tb_credits = RTE_MIN(subport->tb_credits, subport->tb_size); - subport->tb_time += n_periods * subport->tb_period; + n_periods = (port->time - subport->tb_time) / sp->tb_period; + subport->tb_credits += n_periods * sp->tb_credits_per_period; + subport->tb_credits = RTE_MIN(subport->tb_credits, sp->tb_size); + subport->tb_time += n_periods * sp->tb_period; /* Pipe TB */ n_periods = (port->time - pipe->tb_time) / params->tb_period; @@ -2329,12 +2278,13 @@ grinder_credits_update(struct rte_sched_port *port, /* Subport TCs */ if (unlikely(port->time >= subport->tc_time)) { - subport->tc_ov_wm = grinder_tc_ov_credits_update(port, subport); + subport->tc_ov_wm = + grinder_tc_ov_credits_update(port, subport, pos); for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) - subport->tc_credits[i] = subport->tc_credits_per_period[i]; + 
subport->tc_credits[i] = sp->tc_credits_per_period[i]; - subport->tc_time = port->time + subport->tc_period; + subport->tc_time = port->time + sp->tc_period; subport->tc_ov_period_id++; } @@ -2857,6 +2807,9 @@ grinder_handle(struct rte_sched_port *port, struct rte_sched_pipe *pipe = grinder->pipe; grinder->pipe_params = subport->pipe_profiles + pipe->profile; + grinder->subport_params = port->subport_profiles + + subport->profile; + grinder_prefetch_tc_queue_arrays(subport, pos); grinder_credits_update(port, subport, pos); diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h index aede2e986..1506c6487 100644 --- a/lib/librte_sched/rte_sched.h +++ b/lib/librte_sched/rte_sched.h @@ -361,20 +361,27 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /** * Hierarchical scheduler subport configuration - * + * Note that this function is safe to use at runtime + * to configure subport bandwidth profile. * @param port * Handle to port scheduler instance * @param subport_id * Subport ID * @param params - * Subport configuration parameters + * Subport configuration parameters. Must be non-NULL + * for first invocation (i.e initialization) for a given + * subport. Ignored (recommended value is NULL) for all + * subsequent invocation on the same subport. + * @param subport_profile_id + * ID of subport bandwidth profile * @return * 0 upon success, error code otherwise */ int rte_sched_subport_config(struct rte_sched_port *port, uint32_t subport_id, - struct rte_sched_subport_params *params); + struct rte_sched_subport_params *params, + uint32_t subport_profile_id); /** * Hierarchical scheduler pipe configuration From patchwork Fri Oct 9 12:39:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80161 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 627DCA04BC; Fri, 9 Oct 2020 14:41:09 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E9CB01D638; Fri, 9 Oct 2020 14:39:43 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 4808C1D621 for ; Fri, 9 Oct 2020 14:39:38 +0200 (CEST) IronPort-SDR: 6hs4firBcQ+K2px8RN9fgzTD95XmAyRFjiekI6SFHtMgPeZAedyx8C0Vg6V5TElVRA5lT8R/We fXRDz+IMsErA== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397584" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397584" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:36 -0700 IronPort-SDR: TeaGHj+obGEjP2p5NYGuZ/ksOnuugeUZtgTjPnhtyHntIZWtlRHZtvPHSB6wj/6pw8lx47FFPZ bqTyZek7WLWQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914523" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:35 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:15 +0100 Message-Id: <20201009123919.43004-5-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> 
References: <20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 4/8] example/qos_sched: update subport rate dynamically X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Modify the qos_sched application to build the hierarchical scheduler with default subport bandwidth profile. It also allows to update a subport with different subport rates dynamically. Signed-off-by: Savinay Dharmappa --- examples/qos_sched/cfg_file.c | 151 +++++++++++++++++++-------------- examples/qos_sched/cfg_file.h | 4 + examples/qos_sched/init.c | 20 +++-- examples/qos_sched/main.h | 1 + examples/qos_sched/profile.cfg | 3 + 5 files changed, 110 insertions(+), 69 deletions(-) diff --git a/examples/qos_sched/cfg_file.c b/examples/qos_sched/cfg_file.c index f078e4f7d..cd167bd8e 100644 --- a/examples/qos_sched/cfg_file.c +++ b/examples/qos_sched/cfg_file.c @@ -142,6 +142,93 @@ cfg_load_pipe(struct rte_cfgfile *cfg, struct rte_sched_pipe_params *pipe_params return 0; } +int +cfg_load_subport_profile(struct rte_cfgfile *cfg, + struct rte_sched_subport_profile_params *subport_profile) +{ + int i; + const char *entry; + int profiles; + + if (!cfg || !subport_profile) + return -1; + + profiles = rte_cfgfile_num_sections(cfg, "subport profile", + sizeof("subport profile") - 1); + subport_params[0].n_pipe_profiles = profiles; + + for (i = 0; i < profiles; i++) { + char sec_name[32]; + snprintf(sec_name, sizeof(sec_name), "subport profile %d", i); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tb rate"); + if (entry) + subport_profile[i].tb_rate = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tb size"); + if (entry) + subport_profile[i].tb_size = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc period"); + if (entry) + subport_profile[i].tc_period = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 0 rate"); + if (entry) + subport_profile[i].tc_rate[0] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 1 rate"); + if (entry) + subport_profile[i].tc_rate[1] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 2 rate"); + if (entry) + subport_profile[i].tc_rate[2] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 3 rate"); + if (entry) + subport_profile[i].tc_rate[3] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 4 rate"); + if (entry) + subport_profile[i].tc_rate[4] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 5 rate"); + if (entry) + subport_profile[i].tc_rate[5] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 6 rate"); + if (entry) + subport_profile[i].tc_rate[6] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 7 rate"); + if (entry) + subport_profile[i].tc_rate[7] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 8 rate"); + if (entry) + subport_profile[i].tc_rate[8] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 9 rate"); + if (entry) + subport_profile[i].tc_rate[9] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 10 rate"); + if (entry) + subport_profile[i].tc_rate[10] = (uint64_t)atoi(entry); + + 
entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 11 rate"); + if (entry) + subport_profile[i].tc_rate[11] = (uint64_t)atoi(entry); + + entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 12 rate"); + if (entry) + subport_profile[i].tc_rate[12] = (uint64_t)atoi(entry); + } + + return 0; +} + int cfg_load_subport(struct rte_cfgfile *cfg, struct rte_sched_subport_params *subport_params) { @@ -267,70 +354,6 @@ cfg_load_subport(struct rte_cfgfile *cfg, struct rte_sched_subport_params *subpo } } - entry = rte_cfgfile_get_entry(cfg, sec_name, "tb rate"); - if (entry) - subport_params[i].tb_rate = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tb size"); - if (entry) - subport_params[i].tb_size = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc period"); - if (entry) - subport_params[i].tc_period = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 0 rate"); - if (entry) - subport_params[i].tc_rate[0] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 1 rate"); - if (entry) - subport_params[i].tc_rate[1] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 2 rate"); - if (entry) - subport_params[i].tc_rate[2] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 3 rate"); - if (entry) - subport_params[i].tc_rate[3] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 4 rate"); - if (entry) - subport_params[i].tc_rate[4] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 5 rate"); - if (entry) - subport_params[i].tc_rate[5] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 6 rate"); - if (entry) - subport_params[i].tc_rate[6] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 7 rate"); - if (entry) - subport_params[i].tc_rate[7] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 8 rate"); - if (entry) - subport_params[i].tc_rate[8] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 9 rate"); - if (entry) - subport_params[i].tc_rate[9] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 10 rate"); - if (entry) - subport_params[i].tc_rate[10] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 11 rate"); - if (entry) - subport_params[i].tc_rate[11] = (uint64_t)atoi(entry); - - entry = rte_cfgfile_get_entry(cfg, sec_name, "tc 12 rate"); - if (entry) - subport_params[i].tc_rate[12] = (uint64_t)atoi(entry); - int n_entries = rte_cfgfile_section_num_entries(cfg, sec_name); struct rte_cfgfile_entry entries[n_entries]; diff --git a/examples/qos_sched/cfg_file.h b/examples/qos_sched/cfg_file.h index 2eccf1ca0..0dc458aa7 100644 --- a/examples/qos_sched/cfg_file.h +++ b/examples/qos_sched/cfg_file.h @@ -14,4 +14,8 @@ int cfg_load_pipe(struct rte_cfgfile *cfg, struct rte_sched_pipe_params *pipe); int cfg_load_subport(struct rte_cfgfile *cfg, struct rte_sched_subport_params *subport); +int cfg_load_subport_profile(struct rte_cfgfile *cfg, + struct rte_sched_subport_profile_params + *subport_profile); + #endif diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c index b188c624b..1abe003fc 100644 --- a/examples/qos_sched/init.c +++ b/examples/qos_sched/init.c @@ -192,15 +192,20 @@ static struct rte_sched_pipe_params pipe_profiles[MAX_SCHED_PIPE_PROFILES] = { }, }; -struct rte_sched_subport_params subport_params[MAX_SCHED_SUBPORTS] = { +static 
struct rte_sched_subport_profile_params + subport_profile[MAX_SCHED_SUBPORT_PROFILES] = { { .tb_rate = 1250000000, .tb_size = 1000000, - .tc_rate = {1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000}, .tc_period = 10, + }, +}; + +struct rte_sched_subport_params subport_params[MAX_SCHED_SUBPORTS] = { + { .n_pipes_per_subport_enabled = 4096, .qsize = {64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64}, .pipe_profiles = pipe_profiles, @@ -285,6 +290,9 @@ struct rte_sched_port_params port_params = { .mtu = 6 + 6 + 4 + 4 + 2 + 1500, .frame_overhead = RTE_SCHED_FRAME_OVERHEAD_DEFAULT, .n_subports_per_port = 1, + .n_subport_profiles = 1, + .subport_profiles = subport_profile, + .n_max_subport_profiles = MAX_SCHED_SUBPORT_PROFILES, .n_pipes_per_subport = MAX_SCHED_PIPES, }; @@ -315,10 +323,11 @@ app_init_sched_port(uint32_t portid, uint32_t socketid) for (subport = 0; subport < port_params.n_subports_per_port; subport ++) { err = rte_sched_subport_config(port, subport, - &subport_params[subport], 0); + &subport_params[subport], + 0); if (err) { - rte_exit(EXIT_FAILURE, "Unable to config sched subport %u, err=%d\n", - subport, err); + rte_exit(EXIT_FAILURE, "Unable to config sched " + "subport %u, err=%d\n", subport, err); } uint32_t n_pipes_per_subport = @@ -351,6 +360,7 @@ app_load_cfg_profile(const char *profile) cfg_load_port(file, &port_params); cfg_load_subport(file, subport_params); + cfg_load_subport_profile(file, subport_profile); cfg_load_pipe(file, pipe_profiles); rte_cfgfile_close(file); diff --git a/examples/qos_sched/main.h b/examples/qos_sched/main.h index 23bc418d9..0d6815ae6 100644 --- a/examples/qos_sched/main.h +++ b/examples/qos_sched/main.h @@ -51,6 +51,7 @@ extern "C" { #define MAX_SCHED_SUBPORTS 8 #define MAX_SCHED_PIPES 4096 #define MAX_SCHED_PIPE_PROFILES 256 +#define MAX_SCHED_SUBPORT_PROFILES 8 #ifndef APP_COLLECT_STAT #define APP_COLLECT_STAT 1 diff --git a/examples/qos_sched/profile.cfg b/examples/qos_sched/profile.cfg index 61b8b7071..4486d2799 100644 --- a/examples/qos_sched/profile.cfg +++ b/examples/qos_sched/profile.cfg @@ -26,6 +26,9 @@ number of subports per port = 1 number of pipes per subport = 4096 queue sizes = 64 64 64 64 64 64 64 64 64 64 64 64 64 +subport 0-8 = 0 ; These subports are configured with subport profile 0 + +[subport profile 0] tb rate = 1250000000 ; Bytes per second tb size = 1000000 ; Bytes From patchwork Fri Oct 9 12:39:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80162 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id DFEE8A04BC; Fri, 9 Oct 2020 14:41:30 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 70D991D642; Fri, 9 Oct 2020 14:39:45 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 2506E1D628 for ; Fri, 9 Oct 2020 14:39:39 +0200 (CEST) IronPort-SDR: iFzt5gF/t2JAsRczcLXTGvaRNV3DZraotGP2kII0EjxXxBWeojx6om0RJgQ4GtmbM036dSJ0TE uT+k/X+viXsw== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397587" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397587" X-Amp-Result: SKIPPED(no attachment in message) 
X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:38 -0700 IronPort-SDR: mt5mauuSkxswxtYeg9xdmIdP+gfblaiLyaCre4TxKYUVAd4R6LBZaAIER08Opsv9/HiOMrYsUg C+B43DM4TCIw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914530" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:36 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:16 +0100 Message-Id: <20201009123919.43004-6-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> References: <20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 5/8] example/ip_pipeline: update subport rate dynamically X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Modify the ip_pipeline application to build the hierarchical scheduler with default subport bandwidth profile. It also allows to update a subport with different subport rates dynamically Signed-off-by: Savinay Dharmappa --- examples/ip_pipeline/cli.c | 68 ++++++++------------ examples/ip_pipeline/tmgr.c | 121 ++++++++++++++++++++++++++++++------ examples/ip_pipeline/tmgr.h | 5 +- 3 files changed, 134 insertions(+), 60 deletions(-) diff --git a/examples/ip_pipeline/cli.c b/examples/ip_pipeline/cli.c index dafc95ae9..ec4acf0ac 100644 --- a/examples/ip_pipeline/cli.c +++ b/examples/ip_pipeline/cli.c @@ -393,12 +393,7 @@ static const char cmd_tmgr_subport_profile_help[] = " " " " " \n" -" \n" -" pps \n" -" qsize " -" " -" " -" "; +" \n"; static void cmd_tmgr_subport_profile(char **tokens, @@ -406,57 +401,37 @@ cmd_tmgr_subport_profile(char **tokens, char *out, size_t out_size) { - struct rte_sched_subport_params p; + struct rte_sched_subport_profile_params subport_profile; int status, i; - if (n_tokens != 35) { + if (n_tokens != 19) { snprintf(out, out_size, MSG_ARG_MISMATCH, tokens[0]); return; } - if (parser_read_uint64(&p.tb_rate, tokens[3]) != 0) { + if (parser_read_uint64(&subport_profile.tb_rate, tokens[3]) != 0) { snprintf(out, out_size, MSG_ARG_INVALID, "tb_rate"); return; } - if (parser_read_uint64(&p.tb_size, tokens[4]) != 0) { + if (parser_read_uint64(&subport_profile.tb_size, tokens[4]) != 0) { snprintf(out, out_size, MSG_ARG_INVALID, "tb_size"); return; } for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) - if (parser_read_uint64(&p.tc_rate[i], tokens[5 + i]) != 0) { + if (parser_read_uint64(&subport_profile.tc_rate[i], + tokens[5 + i]) != 0) { snprintf(out, out_size, MSG_ARG_INVALID, "tc_rate"); return; } - if (parser_read_uint64(&p.tc_period, tokens[18]) != 0) { + if (parser_read_uint64(&subport_profile.tc_period, tokens[18]) != 0) { snprintf(out, out_size, MSG_ARG_INVALID, "tc_period"); return; } - if (strcmp(tokens[19], "pps") != 0) { - snprintf(out, out_size, MSG_ARG_NOT_FOUND, "pps"); - return; - } - - if (parser_read_uint32(&p.n_pipes_per_subport_enabled, tokens[20]) != 0) { - snprintf(out, out_size, MSG_ARG_INVALID, "n_pipes_per_subport"); - return; - } - - if (strcmp(tokens[21], "qsize") != 0) { - snprintf(out, out_size, 
MSG_ARG_NOT_FOUND, "qsize"); - return; - } - - for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) - if (parser_read_uint16(&p.qsize[i], tokens[22 + i]) != 0) { - snprintf(out, out_size, MSG_ARG_INVALID, "qsize"); - return; - } - - status = tmgr_subport_profile_add(&p); + status = tmgr_subport_profile_add(&subport_profile); if (status != 0) { snprintf(out, out_size, MSG_CMD_FAIL, tokens[0]); return; @@ -530,6 +505,7 @@ static const char cmd_tmgr_help[] = "tmgr \n" " rate \n" " spp \n" +" pps \n" " fo \n" " mtu \n" " cpu \n"; @@ -544,7 +520,7 @@ cmd_tmgr(char **tokens, char *name; struct tmgr_port *tmgr_port; - if (n_tokens != 12) { + if (n_tokens != 14) { snprintf(out, out_size, MSG_ARG_MISMATCH, tokens[0]); return; } @@ -571,32 +547,42 @@ cmd_tmgr(char **tokens, return; } - if (strcmp(tokens[6], "fo") != 0) { + if (strcmp(tokens[6], "pps") != 0) { + snprintf(out, out_size, MSG_ARG_NOT_FOUND, "spp"); + return; + } + + if (parser_read_uint32(&p.n_pipes_per_subport, tokens[7]) != 0) { + snprintf(out, out_size, MSG_ARG_INVALID, "n_pipes_per_subport"); + return; + } + + if (strcmp(tokens[8], "fo") != 0) { snprintf(out, out_size, MSG_ARG_NOT_FOUND, "fo"); return; } - if (parser_read_uint32(&p.frame_overhead, tokens[7]) != 0) { + if (parser_read_uint32(&p.frame_overhead, tokens[9]) != 0) { snprintf(out, out_size, MSG_ARG_INVALID, "frame_overhead"); return; } - if (strcmp(tokens[8], "mtu") != 0) { + if (strcmp(tokens[10], "mtu") != 0) { snprintf(out, out_size, MSG_ARG_NOT_FOUND, "mtu"); return; } - if (parser_read_uint32(&p.mtu, tokens[9]) != 0) { + if (parser_read_uint32(&p.mtu, tokens[11]) != 0) { snprintf(out, out_size, MSG_ARG_INVALID, "mtu"); return; } - if (strcmp(tokens[10], "cpu") != 0) { + if (strcmp(tokens[12], "cpu") != 0) { snprintf(out, out_size, MSG_ARG_NOT_FOUND, "cpu"); return; } - if (parser_read_uint32(&p.cpu_id, tokens[11]) != 0) { + if (parser_read_uint32(&p.cpu_id, tokens[13]) != 0) { snprintf(out, out_size, MSG_ARG_INVALID, "cpu_id"); return; } diff --git a/examples/ip_pipeline/tmgr.c b/examples/ip_pipeline/tmgr.c index 46c6a83a4..e4e364cbc 100644 --- a/examples/ip_pipeline/tmgr.c +++ b/examples/ip_pipeline/tmgr.c @@ -4,11 +4,12 @@ #include +#include #include #include "tmgr.h" -static struct rte_sched_subport_params +static struct rte_sched_subport_profile_params subport_profile[TMGR_SUBPORT_PROFILE_MAX]; static uint32_t n_subport_profiles; @@ -18,6 +19,82 @@ static struct rte_sched_pipe_params static uint32_t n_pipe_profiles; +static const struct rte_sched_subport_params subport_params_default = { + .n_pipes_per_subport_enabled = 0, /* filled at runtime */ + .qsize = {64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64}, + .pipe_profiles = pipe_profile, + .n_pipe_profiles = 0, /* filled at run time */ + .n_max_pipe_profiles = RTE_DIM(pipe_profile), +#ifdef RTE_SCHED_RED +.red_params = { + /* Traffic Class 0 Colors Green / Yellow / Red */ + [0][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [0][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [0][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 1 - Colors Green / Yellow / Red */ + [1][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [1][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [1][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 2 - Colors Green / Yellow / Red */ + [2][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [2][1] = {.min_th = 40, .max_th = 
64, .maxp_inv = 10, .wq_log2 = 9}, + [2][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 3 - Colors Green / Yellow / Red */ + [3][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [3][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [3][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 4 - Colors Green / Yellow / Red */ + [4][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [4][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [4][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 5 - Colors Green / Yellow / Red */ + [5][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [5][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [5][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 6 - Colors Green / Yellow / Red */ + [6][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [6][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [6][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 7 - Colors Green / Yellow / Red */ + [7][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [7][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [7][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 8 - Colors Green / Yellow / Red */ + [8][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [8][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [8][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 9 - Colors Green / Yellow / Red */ + [9][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [9][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [9][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 10 - Colors Green / Yellow / Red */ + [10][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [10][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [10][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 11 - Colors Green / Yellow / Red */ + [11][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [11][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [11][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + + /* Traffic Class 12 - Colors Green / Yellow / Red */ + [12][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [12][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + [12][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, + }, +#endif /* RTE_SCHED_RED */ +}; + static struct tmgr_port_list tmgr_port_list; int @@ -44,17 +121,16 @@ tmgr_port_find(const char *name) } int -tmgr_subport_profile_add(struct rte_sched_subport_params *p) +tmgr_subport_profile_add(struct rte_sched_subport_profile_params *params) { /* Check input params */ - if (p == NULL || - p->n_pipes_per_subport_enabled == 0) + if (params == NULL) return -1; /* Save profile */ memcpy(&subport_profile[n_subport_profiles], - p, - sizeof(*p)); + params, + sizeof(*params)); n_subport_profiles++; @@ -81,6 +157,7 @@ tmgr_pipe_profile_add(struct rte_sched_pipe_params *p) struct tmgr_port * tmgr_port_create(const char *name, struct tmgr_port_params *params) { + struct rte_sched_subport_params 
subport_params; struct rte_sched_port_params p; struct tmgr_port *tmgr_port; struct rte_sched_port *s; @@ -91,6 +168,7 @@ tmgr_port_create(const char *name, struct tmgr_port_params *params) tmgr_port_find(name) || (params == NULL) || (params->n_subports_per_port == 0) || + (params->n_pipes_per_subport == 0) || (params->cpu_id >= RTE_MAX_NUMA_NODES) || (n_subport_profiles == 0) || (n_pipe_profiles == 0)) @@ -103,15 +181,22 @@ tmgr_port_create(const char *name, struct tmgr_port_params *params) p.mtu = params->mtu; p.frame_overhead = params->frame_overhead; p.n_subports_per_port = params->n_subports_per_port; - p.n_pipes_per_subport = TMGR_PIPE_SUBPORT_MAX; + p.n_subport_profiles = n_subport_profiles; + p.subport_profiles = subport_profile; + p.n_max_subport_profiles = TMGR_SUBPORT_PROFILE_MAX; + p.n_pipes_per_subport = params->n_pipes_per_subport; + s = rte_sched_port_config(&p); if (s == NULL) return NULL; - subport_profile[0].pipe_profiles = pipe_profile; - subport_profile[0].n_pipe_profiles = n_pipe_profiles; - subport_profile[0].n_max_pipe_profiles = TMGR_PIPE_PROFILE_MAX; + memcpy(&subport_params, &subport_params_default, + sizeof(subport_params_default)); + + subport_params.n_pipe_profiles = n_pipe_profiles; + subport_params.n_pipes_per_subport_enabled = + params->n_pipes_per_subport; for (i = 0; i < params->n_subports_per_port; i++) { int status; @@ -119,7 +204,7 @@ tmgr_port_create(const char *name, struct tmgr_port_params *params) status = rte_sched_subport_config( s, i, - &subport_profile[0], + &subport_params, 0); if (status) { @@ -127,7 +212,8 @@ tmgr_port_create(const char *name, struct tmgr_port_params *params) return NULL; } - for (j = 0; j < subport_profile[0].n_pipes_per_subport_enabled; j++) { + for (j = 0; j < params->n_pipes_per_subport; j++) { + status = rte_sched_pipe_config( s, i, @@ -152,6 +238,7 @@ tmgr_port_create(const char *name, struct tmgr_port_params *params) strlcpy(tmgr_port->name, name, sizeof(tmgr_port->name)); tmgr_port->s = s; tmgr_port->n_subports_per_port = params->n_subports_per_port; + tmgr_port->n_pipes_per_subport = params->n_pipes_per_subport; /* Node add to list */ TAILQ_INSERT_TAIL(&tmgr_port_list, tmgr_port, node); @@ -181,8 +268,8 @@ tmgr_subport_config(const char *port_name, status = rte_sched_subport_config( port->s, subport_id, - &subport_profile[subport_profile_id], - 0); + NULL, + subport_profile_id); return status; } @@ -204,10 +291,8 @@ tmgr_pipe_config(const char *port_name, port = tmgr_port_find(port_name); if ((port == NULL) || (subport_id >= port->n_subports_per_port) || - (pipe_id_first >= - subport_profile[subport_id].n_pipes_per_subport_enabled) || - (pipe_id_last >= - subport_profile[subport_id].n_pipes_per_subport_enabled) || + (pipe_id_first >= port->n_pipes_per_subport) || + (pipe_id_last >= port->n_pipes_per_subport) || (pipe_id_first > pipe_id_last) || (pipe_profile_id >= n_pipe_profiles)) return -1; diff --git a/examples/ip_pipeline/tmgr.h b/examples/ip_pipeline/tmgr.h index ee50cf7cc..1994c55bc 100644 --- a/examples/ip_pipeline/tmgr.h +++ b/examples/ip_pipeline/tmgr.h @@ -9,6 +9,7 @@ #include #include +#include #include "common.h" @@ -29,6 +30,7 @@ struct tmgr_port { char name[NAME_SIZE]; struct rte_sched_port *s; uint32_t n_subports_per_port; + uint32_t n_pipes_per_subport; }; TAILQ_HEAD(tmgr_port_list, tmgr_port); @@ -42,13 +44,14 @@ tmgr_port_find(const char *name); struct tmgr_port_params { uint64_t rate; uint32_t n_subports_per_port; + uint32_t n_pipes_per_subport; uint32_t frame_overhead; uint32_t mtu; uint32_t 
cpu_id; }; int -tmgr_subport_profile_add(struct rte_sched_subport_params *p); +tmgr_subport_profile_add(struct rte_sched_subport_profile_params *sp); int tmgr_pipe_profile_add(struct rte_sched_pipe_params *p); From patchwork Fri Oct 9 12:39:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80163 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id DE0ECA04BC; Fri, 9 Oct 2020 14:41:52 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id F06FC1D64D; Fri, 9 Oct 2020 14:39:46 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id B88871D62F for ; Fri, 9 Oct 2020 14:39:40 +0200 (CEST) IronPort-SDR: gzUCMC2TwrgjWbigc585goJnHWUj0CWqwDylJc9OL/5aIesbHFKZuIcjg9y3mnyOB295UGx0iE cVGE5SuVRmsQ== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397596" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397596" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:40 -0700 IronPort-SDR: 3DlMx8p0B93XIoou7xdD/lPXqfYkGWiCezcld+k7Jv6AWZc3wQ2lmUUOADcK78i4Ri9DMoOyPD nIwSnOuPbVfA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914540" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:38 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:17 +0100 Message-Id: <20201009123919.43004-7-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> References: <20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 6/8] drivers/softnic: update subport rate dynamically X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Modify the softnic drivers to build the hierarchical scheduler with the default subport bandwidth profile. It also allows updating a subport with different subport rates dynamically.
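For illustration only (not part of the diff below), a minimal sketch of how an application could use the API reworked by this series to change the rate of an already configured subport: the subport is re-mapped to another of the bandwidth profiles supplied at port configuration time, passing a NULL subport params pointer as the tmgr/softnic update paths in this series do. The helper name below is hypothetical; no new profile can be added after port configuration.

    #include <rte_sched.h>

    /* Hypothetical helper: switch a running subport to another subport
     * bandwidth profile. new_profile_id must index one of the profiles
     * passed via rte_sched_port_params::subport_profiles at port
     * configuration time.
     */
    static int
    example_subport_rate_update(struct rte_sched_port *port,
            uint32_t subport_id, uint32_t new_profile_id)
    {
            /* params == NULL: keep the existing subport configuration and
             * only apply the token bucket and traffic class rates of
             * new_profile_id.
             */
            return rte_sched_subport_config(port, subport_id, NULL,
                    new_profile_id);
    }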
Signed-off-by: Savinay Dharmappa --- .../net/softnic/rte_eth_softnic_internals.h | 11 +- drivers/net/softnic/rte_eth_softnic_tm.c | 243 ++++++++++++++---- 2 files changed, 195 insertions(+), 59 deletions(-) diff --git a/drivers/net/softnic/rte_eth_softnic_internals.h b/drivers/net/softnic/rte_eth_softnic_internals.h index 6eec43b22..77e0139a6 100644 --- a/drivers/net/softnic/rte_eth_softnic_internals.h +++ b/drivers/net/softnic/rte_eth_softnic_internals.h @@ -164,11 +164,18 @@ TAILQ_HEAD(softnic_link_list, softnic_link); #ifndef TM_MAX_PIPE_PROFILE #define TM_MAX_PIPE_PROFILE 256 #endif + +#ifndef TM_MAX_SUBPORT_PROFILE +#define TM_MAX_SUBPORT_PROFILE 256 +#endif + struct tm_params { struct rte_sched_port_params port_params; - struct rte_sched_subport_params subport_params[TM_MAX_SUBPORTS]; - + struct rte_sched_subport_profile_params + subport_profile[TM_MAX_SUBPORT_PROFILE]; + uint32_t n_subport_profiles; + uint32_t subport_to_profile[TM_MAX_SUBPORT_PROFILE]; struct rte_sched_pipe_params pipe_profiles[TM_MAX_PIPE_PROFILE]; uint32_t n_pipe_profiles; uint32_t pipe_to_profile[TM_MAX_SUBPORTS * TM_MAX_PIPES_PER_SUBPORT]; diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c index 5199dd2cd..725b0231a 100644 --- a/drivers/net/softnic/rte_eth_softnic_tm.c +++ b/drivers/net/softnic/rte_eth_softnic_tm.c @@ -86,13 +86,14 @@ softnic_tmgr_port_create(struct pmd_internals *p, n_subports = t->port_params.n_subports_per_port; for (subport_id = 0; subport_id < n_subports; subport_id++) { uint32_t n_pipes_per_subport = - t->subport_params[subport_id].n_pipes_per_subport_enabled; + t->subport_params[subport_id].n_pipes_per_subport_enabled; uint32_t pipe_id; int status; status = rte_sched_subport_config(sched, subport_id, - &t->subport_params[subport_id], 0); + &t->subport_params[subport_id], + t->subport_to_profile[subport_id]); if (status) { rte_sched_port_free(sched); return NULL; @@ -1114,34 +1115,51 @@ tm_shared_shaper_get_tc(struct rte_eth_dev *dev, return NULL; } +static int +subport_profile_exists(struct rte_eth_dev *dev, + struct rte_sched_subport_profile_params *sp, + uint32_t *subport_profile_id) +{ + struct pmd_internals *p = dev->data->dev_private; + struct tm_params *t = &p->soft.tm.params; + uint32_t i; + + for (i = 0; i < t->n_subport_profiles; i++) + if (memcmp(&t->subport_profile[i], sp, sizeof(*sp)) == 0) { + if (subport_profile_id) + *subport_profile_id = i; + return 1; + } + + return 0; +} + static int update_subport_tc_rate(struct rte_eth_dev *dev, struct tm_node *nt, struct tm_shared_shaper *ss, struct tm_shaper_profile *sp_new) { + struct rte_sched_subport_profile_params subport_profile; struct pmd_internals *p = dev->data->dev_private; uint32_t tc_id = tm_node_tc_id(dev, nt); - struct tm_node *np = nt->parent_node; - struct tm_node *ns = np->parent_node; uint32_t subport_id = tm_node_subport_id(dev, ns); - - struct rte_sched_subport_params subport_params; - + struct tm_params *t = &p->soft.tm.params; + uint32_t subport_profile_id = t->subport_to_profile[subport_id]; struct tm_shaper_profile *sp_old = tm_shaper_profile_search(dev, ss->shaper_profile_id); /* Derive new subport configuration. 
*/ - memcpy(&subport_params, - &p->soft.tm.params.subport_params[subport_id], - sizeof(subport_params)); - subport_params.tc_rate[tc_id] = sp_new->params.peak.rate; + memcpy(&subport_profile, + &p->soft.tm.params.subport_profile[subport_profile_id], + sizeof(subport_profile)); + subport_profile.tc_rate[tc_id] = sp_new->params.peak.rate; /* Update the subport configuration. */ if (rte_sched_subport_config(SCHED(p), - subport_id, &subport_params, 0)) + subport_id, NULL, subport_profile_id)) return -1; /* Commit changes. */ @@ -1150,9 +1168,9 @@ update_subport_tc_rate(struct rte_eth_dev *dev, ss->shaper_profile_id = sp_new->shaper_profile_id; sp_new->n_users++; - memcpy(&p->soft.tm.params.subport_params[subport_id], - &subport_params, - sizeof(subport_params)); + memcpy(&p->soft.tm.params.subport_profile[subport_profile_id], + &subport_profile, + sizeof(subport_profile)); return 0; } @@ -2238,6 +2256,8 @@ pipe_profiles_generate(struct rte_eth_dev *dev) struct rte_sched_pipe_params pp; uint32_t pos; + memset(&pp, 0, sizeof(pp)); + if (np->level != TM_NODE_LEVEL_PIPE || np->parent_node_id != ns->node_id) continue; @@ -2343,6 +2363,123 @@ tm_subport_tc_shared_shaper_get(struct rte_eth_dev *dev, return NULL; } +static struct rte_sched_subport_profile_params * +subport_profile_get(struct rte_eth_dev *dev, struct tm_node *np) +{ + struct pmd_internals *p = dev->data->dev_private; + struct tm_params *t = &p->soft.tm.params; + uint32_t subport_id = tm_node_subport_id(dev, np->parent_node); + + return &t->subport_profile[subport_id]; +} + +static void +subport_profile_mark(struct rte_eth_dev *dev, + uint32_t subport_id, + uint32_t subport_profile_id) +{ + struct pmd_internals *p = dev->data->dev_private; + struct tm_params *t = &p->soft.tm.params; + + t->subport_to_profile[subport_id] = subport_profile_id; +} + +static void +subport_profile_install(struct rte_eth_dev *dev, + struct rte_sched_subport_profile_params *sp, + uint32_t subport_profile_id) +{ + struct pmd_internals *p = dev->data->dev_private; + struct tm_params *t = &p->soft.tm.params; + + memcpy(&t->subport_profile[subport_profile_id], + sp, sizeof(*sp)); + t->n_subport_profiles++; +} + +static int +subport_profile_free_exists(struct rte_eth_dev *dev, + uint32_t *subport_profile_id) +{ + struct pmd_internals *p = dev->data->dev_private; + struct tm_params *t = &p->soft.tm.params; + + if (t->n_subport_profiles < TM_MAX_SUBPORT_PROFILE) { + *subport_profile_id = t->n_subport_profiles; + return 1; + } + + return 0; +} + +static void +subport_profile_build(struct rte_eth_dev *dev, struct tm_node *np, + struct rte_sched_subport_profile_params *sp) +{ + uint32_t i; + memset(sp, 0, sizeof(*sp)); + + sp->tb_rate = np->shaper_profile->params.peak.rate; + sp->tb_size = np->shaper_profile->params.peak.size; + + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { + struct tm_shared_shaper *ss; + struct tm_shaper_profile *ssp; + + ss = tm_subport_tc_shared_shaper_get(dev, np, i); + ssp = (ss) ? 
tm_shaper_profile_search(dev, + ss->shaper_profile_id) : + np->shaper_profile; + sp->tc_rate[i] = ssp->params.peak.rate; + } + + /* Traffic Class (TC) */ + sp->tc_period = SUBPORT_TC_PERIOD; +} + +static int +subport_profiles_generate(struct rte_eth_dev *dev) +{ + struct pmd_internals *p = dev->data->dev_private; + struct tm_hierarchy *h = &p->soft.tm.h; + struct tm_node_list *nl = &h->nodes; + struct tm_node *ns; + uint32_t subport_id; + + /* Objective: Fill in the following fields in struct tm_params: + * - subport_profiles + * - n_subport_profiles + * - subport_to_profile + */ + + subport_id = 0; + TAILQ_FOREACH(ns, nl, node) { + if (ns->level != TM_NODE_LEVEL_SUBPORT) + continue; + + struct rte_sched_subport_profile_params sp; + uint32_t pos; + + memset(&sp, 0, sizeof(sp)); + + subport_profile_build(dev, ns, &sp); + + if (!subport_profile_exists(dev, &sp, &pos)) { + if (!subport_profile_free_exists(dev, &pos)) + return -1; + + subport_profile_install(dev, &sp, pos); + } + + subport_profile_mark(dev, subport_id, pos); + + subport_id++; + } + + return 0; +} + + static int hierarchy_commit_check(struct rte_eth_dev *dev, struct rte_tm_error *error) { @@ -2519,6 +2656,15 @@ hierarchy_commit_check(struct rte_eth_dev *dev, struct rte_tm_error *error) rte_strerror(EINVAL)); } + /* Not too many subport profiles. */ + if (subport_profiles_generate(dev)) + return -rte_tm_error_set(error, + EINVAL, + RTE_TM_ERROR_TYPE_UNSPECIFIED, + NULL, + rte_strerror(EINVAL)); + + /* Not too many pipe profiles. */ if (pipe_profiles_generate(dev)) return -rte_tm_error_set(error, @@ -2600,48 +2746,20 @@ hierarchy_blueprints_create(struct rte_eth_dev *dev) .frame_overhead = root->shaper_profile->params.pkt_length_adjust, .n_subports_per_port = root->n_children, + .n_subport_profiles = t->n_subport_profiles, + .subport_profiles = t->subport_profile, + .n_max_subport_profiles = TM_MAX_SUBPORT_PROFILE, .n_pipes_per_subport = TM_MAX_PIPES_PER_SUBPORT, }; subport_id = 0; TAILQ_FOREACH(n, nl, node) { - uint64_t tc_rate[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; - uint32_t i; if (n->level != TM_NODE_LEVEL_SUBPORT) continue; - for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { - struct tm_shared_shaper *ss; - struct tm_shaper_profile *sp; - - ss = tm_subport_tc_shared_shaper_get(dev, n, i); - sp = (ss) ? tm_shaper_profile_search(dev, - ss->shaper_profile_id) : - n->shaper_profile; - tc_rate[i] = sp->params.peak.rate; - } - t->subport_params[subport_id] = (struct rte_sched_subport_params) { - .tb_rate = n->shaper_profile->params.peak.rate, - .tb_size = n->shaper_profile->params.peak.size, - - .tc_rate = {tc_rate[0], - tc_rate[1], - tc_rate[2], - tc_rate[3], - tc_rate[4], - tc_rate[5], - tc_rate[6], - tc_rate[7], - tc_rate[8], - tc_rate[9], - tc_rate[10], - tc_rate[11], - tc_rate[12], - }, - .tc_period = SUBPORT_TC_PERIOD, .n_pipes_per_subport_enabled = h->n_tm_nodes[TM_NODE_LEVEL_PIPE] / h->n_tm_nodes[TM_NODE_LEVEL_SUBPORT], @@ -2901,18 +3019,27 @@ update_subport_rate(struct rte_eth_dev *dev, struct pmd_internals *p = dev->data->dev_private; uint32_t subport_id = tm_node_subport_id(dev, ns); - struct rte_sched_subport_params subport_params; + struct rte_sched_subport_profile_params *profile0 = + subport_profile_get(dev, ns); + struct rte_sched_subport_profile_params profile1; + uint32_t subport_profile_id; - /* Derive new subport configuration. 
*/ - memcpy(&subport_params, - &p->soft.tm.params.subport_params[subport_id], - sizeof(subport_params)); - subport_params.tb_rate = sp->params.peak.rate; - subport_params.tb_size = sp->params.peak.size; + /* Derive new pipe profile. */ + memcpy(&profile1, profile0, sizeof(profile1)); + profile1.tb_rate = sp->params.peak.rate; + profile1.tb_size = sp->params.peak.size; + + /* Since implementation does not allow adding more subport profiles + * after port configuration, the pipe configuration can be successfully + * updated only if the new profile is also part of the existing set of + * pipe profiles. + */ + if (subport_profile_exists(dev, &profile1, &subport_profile_id) == 0) + return -1; /* Update the subport configuration. */ if (rte_sched_subport_config(SCHED(p), subport_id, - &subport_params, 0)) + NULL, subport_profile_id)) return -1; /* Commit changes. */ @@ -2922,9 +3049,11 @@ update_subport_rate(struct rte_eth_dev *dev, ns->params.shaper_profile_id = sp->shaper_profile_id; sp->n_users++; - memcpy(&p->soft.tm.params.subport_params[subport_id], - &subport_params, - sizeof(subport_params)); + subport_profile_mark(dev, subport_id, subport_profile_id); + + memcpy(&p->soft.tm.params.subport_profile[subport_profile_id], + &profile1, + sizeof(profile1)); return 0; } From patchwork Fri Oct 9 12:39:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80164 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C7D4FA04BC; Fri, 9 Oct 2020 14:42:12 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8E23B1D652; Fri, 9 Oct 2020 14:39:48 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id C73FA1D630 for ; Fri, 9 Oct 2020 14:39:41 +0200 (CEST) IronPort-SDR: pYoGLxIUu8ufHEJfI3F5JLBWshmMIrnwk4eGnzXFMDA5y00JmF3dnOyqeQvE7XqVldYylZ6JHQ 695LtzsvETFQ== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397601" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397601" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:41 -0700 IronPort-SDR: azPA9xn8qpjYwBMucV/51VlGYOBMpkMRkyTU3HBPe10I3hX7c6wXT90P3HntB6rigjRL/Ghfrf k8bazWE6Ec9g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914548" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:40 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:18 +0100 Message-Id: <20201009123919.43004-8-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> References: <20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 7/8] app/test_sched: update subport rate dynamically X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
Errors-To: dev-bounces@dpdk.org Sender: "dev" Modify the test_sched application to build the hierarchical scheduler with default subport bandwidth profile. It also allows to update a subport with different subport rates dynamically Signed-off-by: Savinay Dharmappa --- app/test/test_sched.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/app/test/test_sched.c b/app/test/test_sched.c index 5e5c2a59b..958b63114 100644 --- a/app/test/test_sched.c +++ b/app/test/test_sched.c @@ -21,6 +21,7 @@ #define PIPE 1 #define TC 2 #define QUEUE 0 +#define MAX_SCHED_SUBPORT_PROFILES 8 static struct rte_sched_pipe_params pipe_profile[] = { { /* Profile #0 */ @@ -36,15 +37,20 @@ static struct rte_sched_pipe_params pipe_profile[] = { }, }; -static struct rte_sched_subport_params subport_param[] = { +static struct rte_sched_subport_profile_params + subport_profile[] = { { .tb_rate = 1250000000, .tb_size = 1000000, - .tc_rate = {1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000, 1250000000}, .tc_period = 10, + }, +}; + +static struct rte_sched_subport_params subport_param[] = { + { .n_pipes_per_subport_enabled = 1024, .qsize = {32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32}, .pipe_profiles = pipe_profile, @@ -59,6 +65,9 @@ static struct rte_sched_port_params port_param = { .mtu = 1522, .frame_overhead = RTE_SCHED_FRAME_OVERHEAD_DEFAULT, .n_subports_per_port = 1, + .n_subport_profiles = 1, + .subport_profiles = subport_profile, + .n_max_subport_profiles = MAX_SCHED_SUBPORT_PROFILES, .n_pipes_per_subport = 1024, }; From patchwork Fri Oct 9 12:39:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Savinay Dharmappa X-Patchwork-Id: 80165 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id A7E9FA04BC; Fri, 9 Oct 2020 14:42:36 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5A8A81D65A; Fri, 9 Oct 2020 14:39:52 +0200 (CEST) Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by dpdk.org (Postfix) with ESMTP id 95C2D1D63E for ; Fri, 9 Oct 2020 14:39:44 +0200 (CEST) IronPort-SDR: V1I+K1gLO6bCQxZZTJ5/MOXsGvg2oUBS0IDVwygDgyz62jK4x4Ox2SySVx5A+4vFBj1fP8hmPn h/kEDAe7Rj+A== X-IronPort-AV: E=McAfee;i="6000,8403,9768"; a="152397608" X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="152397608" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga005.jf.intel.com ([10.7.209.41]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 05:39:42 -0700 IronPort-SDR: u+DqpHytLlt77QCecLurboLIO4gKSjPdLLzilTI2J/DH634k4gwUpcCT1jfUsuRThBy54StsS0 tOlq1BwfAE/g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; d="scan'208";a="528914556" Received: from silpixa00400629.ir.intel.com ([10.237.214.112]) by orsmga005.jf.intel.com with ESMTP; 09 Oct 2020 05:39:41 -0700 From: Savinay Dharmappa To: cristian.dumitrescu@intel.com, jasvinder.singh@intel.com, dev@dpdk.org Cc: savinay.dharmappa@intel.com Date: Fri, 9 Oct 2020 13:39:19 +0100 Message-Id: <20201009123919.43004-9-savinay.dharmappa@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201009123919.43004-1-savinay.dharmappa@intel.com> References: 
<20201007140915.19491-1-savinay.dharmappa@intel.com> <20201009123919.43004-1-savinay.dharmappa@intel.com> Subject: [dpdk-dev] [PATCH v9 8/8] sched: remove redundant code X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Remove redundant data structure fields. Signed-off-by: Savinay Dharmappa --- doc/guides/rel_notes/release_20_11.rst | 3 +++ lib/librte_sched/rte_sched.h | 12 ------------ 2 files changed, 3 insertions(+), 12 deletions(-) diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 85d56d46c..116969d06 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -296,6 +296,9 @@ ABI Changes * Added ``subport_profile_id`` as a argument to function ``rte_sched_subport_config``. + * ``tb_rate``, ``tc_rate``, ``tc_period`` and + ``tb_size`` are removed from ``struct rte_sched_subport_params``. + Known Issues ------------ diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h index 1506c6487..c1a772b70 100644 --- a/lib/librte_sched/rte_sched.h +++ b/lib/librte_sched/rte_sched.h @@ -149,18 +149,6 @@ struct rte_sched_pipe_params { * byte. */ struct rte_sched_subport_params { - /** Token bucket rate (measured in bytes per second) */ - uint64_t tb_rate; - - /** Token bucket size (measured in credits) */ - uint64_t tb_size; - - /** Traffic class rates (measured in bytes per second) */ - uint64_t tc_rate[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; - - /** Enforcement period for rates (measured in milliseconds) */ - uint64_t tc_period; - /** Number of subport pipes. * The subport can enable/allocate fewer pipes than the maximum * number set through struct port_params::n_max_pipes_per_subport,