From patchwork Mon May 24 10:58:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liguzinski, WojciechX" X-Patchwork-Id: 93397 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 51660A0547; Mon, 24 May 2021 12:59:16 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4C6BA41119; Mon, 24 May 2021 12:59:11 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id C13B54003C for ; Mon, 24 May 2021 12:59:08 +0200 (CEST) IronPort-SDR: xS1tAB7ipEFZNT8qGyrgwlJxHtc0s9nFRLmTYjz7suRY4TmmsJygZvKBdgcQF1z1D2nm8QAMb/ 3/RxbCt4riQA== X-IronPort-AV: E=McAfee;i="6200,9189,9993"; a="201948456" X-IronPort-AV: E=Sophos;i="5.82,319,1613462400"; d="scan'208";a="201948456" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2021 03:59:07 -0700 IronPort-SDR: wDw0XxBKmNTJWWlLW2rGcel6cS6Ofzknddjkhl8r/j/D6DuBUCzwf83XGdkI4Y6Z8Gk2KuWQW5 LNeOCvD8/KzQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.82,319,1613462400"; d="scan'208";a="413548044" Received: from silpixa00400629.ir.intel.com ([10.237.214.62]) by orsmga002.jf.intel.com with ESMTP; 24 May 2021 03:59:05 -0700 From: "Liguzinski, WojciechX" To: dev@dpdk.org, jasvinder.singh@intel.com, cristian.dumitrescu@intel.com Cc: savinay.dharmappa@intel.com Date: Mon, 24 May 2021 11:58:20 +0100 Message-Id: <20210524105822.63171-2-wojciechx.liguzinski@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210524105822.63171-1-wojciechx.liguzinski@intel.com> References: <20210524105822.63171-1-wojciechx.liguzinski@intel.com> Subject: [dpdk-dev] [RFC PATCH 1/3] sched: add pie based congestion management X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Implement pie based congestion management based on rfc8033 Signed-off-by: Liguzinski, WojciechX --- drivers/net/softnic/rte_eth_softnic_tm.c | 4 +- lib/sched/meson.build | 10 +- lib/sched/rte_sched.c | 220 +++++++++++++++++------ lib/sched/rte_sched.h | 53 ++++-- 4 files changed, 210 insertions(+), 77 deletions(-) diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c index 90baba15ce..bdcd05b0e6 100644 --- a/drivers/net/softnic/rte_eth_softnic_tm.c +++ b/drivers/net/softnic/rte_eth_softnic_tm.c @@ -420,7 +420,7 @@ pmd_tm_node_type_get(struct rte_eth_dev *dev, return 0; } -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN #define WRED_SUPPORTED 1 #else #define WRED_SUPPORTED 0 @@ -2306,7 +2306,7 @@ tm_tc_wred_profile_get(struct rte_eth_dev *dev, uint32_t tc_id) return NULL; } -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN static void wred_profiles_set(struct rte_eth_dev *dev, uint32_t subport_id) diff --git a/lib/sched/meson.build b/lib/sched/meson.build index b24f7b8775..e7ae9bcf19 100644 --- a/lib/sched/meson.build +++ b/lib/sched/meson.build @@ -1,11 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2017 Intel Corporation -sources = files('rte_sched.c', 'rte_red.c', 'rte_approx.c') -headers = files( - 'rte_approx.h', - 'rte_red.h', - 'rte_sched.h', - 
'rte_sched_common.h', -) +sources = files('rte_sched.c', 'rte_red.c', 'rte_approx.c', 'rte_pie.c') +headers = files('rte_sched.h', 'rte_sched_common.h', + 'rte_red.h', 'rte_approx.h', 'rte_pie.h') deps += ['mbuf', 'meter'] diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index cd87e688e4..a5fa8fadc8 100644 --- a/lib/sched/rte_sched.c +++ b/lib/sched/rte_sched.c @@ -89,8 +89,12 @@ struct rte_sched_queue { struct rte_sched_queue_extra { struct rte_sched_queue_stats stats; -#ifdef RTE_SCHED_RED - struct rte_red red; +#ifdef RTE_SCHED_CMAN + RTE_STD_C11 + union { + struct rte_red red; + struct rte_pie pie; + }; #endif }; @@ -183,8 +187,13 @@ struct rte_sched_subport { /* Pipe queues size */ uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; -#ifdef RTE_SCHED_RED - struct rte_red_config red_config[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS]; + enum rte_sched_cman_mode cman; +#ifdef RTE_SCHED_CMAN + RTE_STD_C11 + union { + struct rte_red_config red_config[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS]; + struct rte_pie_config pie_config[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; + }; #endif /* Scheduling loop detection */ @@ -1078,6 +1087,91 @@ rte_sched_free_memory(struct rte_sched_port *port, uint32_t n_subports) rte_free(port); } +#ifdef RTE_SCHED_CMAN + +static int +rte_sched_red_config (struct rte_sched_port *port, + struct rte_sched_subport *s, + struct rte_sched_subport_params *params, + uint32_t n_subports) +{ + uint32_t i; + + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { + + uint32_t j; + + for (j = 0; j < RTE_COLORS; j++) { + /* if min/max are both zero, then RED is disabled */ + if ((params->red_params[i][j].min_th | + params->red_params[i][j].max_th) == 0) { + continue; + } + + if (rte_red_config_init(&s->red_config[i][j], + params->red_params[i][j].wq_log2, + params->red_params[i][j].min_th, + params->red_params[i][j].max_th, + params->red_params[i][j].maxp_inv) != 0) { + rte_sched_free_memory(port, n_subports); + + RTE_LOG(NOTICE, SCHED, + "%s: RED configuration init fails\n", __func__); + return -EINVAL; + } + } + } + s->cman = RTE_SCHED_CMAN_WRED; + return 0; +} + +static int +rte_sched_pie_config (struct rte_sched_port *port, + struct rte_sched_subport *s, + struct rte_sched_subport_params *params, + uint32_t n_subports) +{ + uint32_t i; + + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { + if (params->pie_params[i].tailq_th > params->qsize[i]) { + RTE_LOG(NOTICE, SCHED, + "%s: PIE tailq threshold incorrect \n", __func__); + return -EINVAL; + } + + if (rte_pie_config_init(&s->pie_config[i], + params->pie_params[i].qdelay_ref, + params->pie_params[i].dp_update_interval, + params->pie_params[i].max_burst, + params->pie_params[i].tailq_th) != 0) { + rte_sched_free_memory(port, n_subports); + + RTE_LOG(NOTICE, SCHED, + "%s: PIE configuration init fails\n", __func__); + return -EINVAL; + } + } + s->cman = RTE_SCHED_CMAN_PIE; + return 0; +} + +static int +rte_sched_cman_config(struct rte_sched_port *port, + struct rte_sched_subport *s, + struct rte_sched_subport_params *params, + uint32_t n_subports) +{ + if(params->cman == RTE_SCHED_CMAN_WRED) + return rte_sched_red_config(port, s, params, n_subports); + + else if (params->cman == RTE_SCHED_CMAN_PIE) + return rte_sched_pie_config(port, s, params, n_subports); + + return -EINVAL; +} +#endif + int rte_sched_subport_config(struct rte_sched_port *port, uint32_t subport_id, @@ -1169,30 +1263,11 @@ rte_sched_subport_config(struct rte_sched_port *port, s->n_pipe_profiles = params->n_pipe_profiles; 
s->n_max_pipe_profiles = params->n_max_pipe_profiles; -#ifdef RTE_SCHED_RED - for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { - uint32_t j; - - for (j = 0; j < RTE_COLORS; j++) { - /* if min/max are both zero, then RED is disabled */ - if ((params->red_params[i][j].min_th | - params->red_params[i][j].max_th) == 0) { - continue; - } - - if (rte_red_config_init(&s->red_config[i][j], - params->red_params[i][j].wq_log2, - params->red_params[i][j].min_th, - params->red_params[i][j].max_th, - params->red_params[i][j].maxp_inv) != 0) { - rte_sched_free_memory(port, n_subports); - - RTE_LOG(NOTICE, SCHED, - "%s: RED configuration init fails\n", - __func__); - return -EINVAL; - } - } +#ifdef RTE_SCHED_CMAN + status = rte_sched_cman_config(port, s, params, n_subports); + if (status) { + RTE_LOG(NOTICE, SCHED, "%s: CMAN configuration fails\n", __func__); + return status; } #endif @@ -1714,20 +1789,20 @@ rte_sched_port_update_subport_stats(struct rte_sched_port *port, subport->stats.n_bytes_tc[tc_index] += pkt_len; } -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN static inline void rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port, struct rte_sched_subport *subport, uint32_t qindex, struct rte_mbuf *pkt, - uint32_t red) + uint32_t cman) #else static inline void rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port, struct rte_sched_subport *subport, uint32_t qindex, struct rte_mbuf *pkt, - __rte_unused uint32_t red) + __rte_unused uint32_t cman) #endif { uint32_t tc_index = rte_sched_port_pipe_tc(port, qindex); @@ -1735,8 +1810,8 @@ rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port, subport->stats.n_pkts_tc_dropped[tc_index] += 1; subport->stats.n_bytes_tc_dropped[tc_index] += pkt_len; -#ifdef RTE_SCHED_RED - subport->stats.n_pkts_red_dropped[tc_index] += red; +#ifdef RTE_SCHED_CMAN + subport->stats.n_pkts_cman_dropped[tc_index] += cman; #endif } @@ -1752,18 +1827,18 @@ rte_sched_port_update_queue_stats(struct rte_sched_subport *subport, qe->stats.n_bytes += pkt_len; } -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN static inline void rte_sched_port_update_queue_stats_on_drop(struct rte_sched_subport *subport, uint32_t qindex, struct rte_mbuf *pkt, - uint32_t red) + uint32_t cman) #else static inline void rte_sched_port_update_queue_stats_on_drop(struct rte_sched_subport *subport, uint32_t qindex, struct rte_mbuf *pkt, - __rte_unused uint32_t red) + __rte_unused uint32_t cman) #endif { struct rte_sched_queue_extra *qe = subport->queue_extra + qindex; @@ -1771,39 +1846,50 @@ rte_sched_port_update_queue_stats_on_drop(struct rte_sched_subport *subport, qe->stats.n_pkts_dropped += 1; qe->stats.n_bytes_dropped += pkt_len; -#ifdef RTE_SCHED_RED - qe->stats.n_pkts_red_dropped += red; +#ifdef RTE_SCHED_CMAN + qe->stats.n_pkts_cman_dropped += cman; #endif } #endif /* RTE_SCHED_COLLECT_STATS */ -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN static inline int -rte_sched_port_red_drop(struct rte_sched_port *port, +rte_sched_port_cman_drop(struct rte_sched_port *port, struct rte_sched_subport *subport, struct rte_mbuf *pkt, uint32_t qindex, uint16_t qlen) { struct rte_sched_queue_extra *qe; - struct rte_red_config *red_cfg; - struct rte_red *red; uint32_t tc_index; - enum rte_color color; tc_index = rte_sched_port_pipe_tc(port, qindex); - color = rte_sched_port_pkt_read_color(pkt); - red_cfg = &subport->red_config[tc_index][color]; + qe = subport->queue_extra + qindex; - if ((red_cfg->min_th | red_cfg->max_th) == 0) - return 0; + /* RED */ + if 
(subport->cman == RTE_SCHED_CMAN_WRED) { + struct rte_red_config *red_cfg; + struct rte_red *red; + enum rte_color color; - qe = subport->queue_extra + qindex; - red = &qe->red; + color = rte_sched_port_pkt_read_color(pkt); + red_cfg = &subport->red_config[tc_index][color]; - return rte_red_enqueue(red_cfg, red, qlen, port->time); + if ((red_cfg->min_th | red_cfg->max_th) == 0) + return 0; + + red = &qe->red; + + return rte_red_enqueue(red_cfg, red, qlen, port->time); + } + + /* PIE */ + struct rte_pie_config *pie_cfg = &subport->pie_config[tc_index]; + struct rte_pie *pie = &qe->pie; + + return rte_pie_enqueue(pie_cfg, pie, pkt->pkt_len, qlen, port->time_cpu_cycles); } static inline void @@ -1811,14 +1897,29 @@ rte_sched_port_set_queue_empty_timestamp(struct rte_sched_port *port, struct rte_sched_subport *subport, uint32_t qindex) { struct rte_sched_queue_extra *qe = subport->queue_extra + qindex; - struct rte_red *red = &qe->red; + if (subport->cman == RTE_SCHED_CMAN_WRED) { + struct rte_red *red = &qe->red; - rte_red_mark_queue_empty(red, port->time); + rte_red_mark_queue_empty(red, port->time); + } +} + +static inline void +rte_sched_port_pie_dequeue(struct rte_sched_subport *subport, +uint32_t qindex, uint32_t pkt_len, uint64_t time) { + struct rte_sched_queue_extra *qe = subport->queue_extra + qindex; + struct rte_pie *pie = &qe->pie; + + /* Update queue length */ + pie->qlen -= 1; + pie->qlen_bytes -= pkt_len; + + rte_pie_dequeue (pie, pkt_len, time); } #else -static inline int rte_sched_port_red_drop(struct rte_sched_port *port __rte_unused, +static inline int rte_sched_port_cman_drop(struct rte_sched_port *port __rte_unused, struct rte_sched_subport *subport __rte_unused, struct rte_mbuf *pkt __rte_unused, uint32_t qindex __rte_unused, @@ -1829,7 +1930,7 @@ static inline int rte_sched_port_red_drop(struct rte_sched_port *port __rte_unus #define rte_sched_port_set_queue_empty_timestamp(port, subport, qindex) -#endif /* RTE_SCHED_RED */ +#endif /* RTE_SCHED_CMAN */ #ifdef RTE_SCHED_DEBUG @@ -1925,7 +2026,7 @@ rte_sched_port_enqueue_qwa(struct rte_sched_port *port, qlen = q->qw - q->qr; /* Drop the packet (and update drop stats) when queue is full */ - if (unlikely(rte_sched_port_red_drop(port, subport, pkt, qindex, qlen) || + if (unlikely(rte_sched_port_cman_drop(port, subport, pkt, qindex, qlen) || (qlen >= qsize))) { rte_pktmbuf_free(pkt); #ifdef RTE_SCHED_COLLECT_STATS @@ -2398,6 +2499,7 @@ grinder_schedule(struct rte_sched_port *port, { struct rte_sched_grinder *grinder = subport->grinder + pos; struct rte_sched_queue *queue = grinder->queue[grinder->qpos]; + uint32_t qindex = grinder->qindex[grinder->qpos]; struct rte_mbuf *pkt = grinder->pkt; uint32_t pkt_len = pkt->pkt_len + port->frame_overhead; uint32_t be_tc_active; @@ -2417,15 +2519,19 @@ grinder_schedule(struct rte_sched_port *port, (pkt_len * grinder->wrr_cost[grinder->qpos]) & be_tc_active; if (queue->qr == queue->qw) { - uint32_t qindex = grinder->qindex[grinder->qpos]; - rte_bitmap_clear(subport->bmp, qindex); grinder->qmask &= ~(1 << grinder->qpos); if (be_tc_active) grinder->wrr_mask[grinder->qpos] = 0; + rte_sched_port_set_queue_empty_timestamp(port, subport, qindex); } +#ifdef RTE_SCHED_CMAN + if (subport->cman == RTE_SCHED_CMAN_PIE) + rte_sched_port_pie_dequeue(subport, qindex, pkt_len, port->time_cpu_cycles); +#endif + /* Reset pipe loop detection */ subport->pipe_loop = RTE_SCHED_PIPE_INVALID; grinder->productive = 1; diff --git a/lib/sched/rte_sched.h b/lib/sched/rte_sched.h index c1a772b70c..692aba9442 
100644 --- a/lib/sched/rte_sched.h +++ b/lib/sched/rte_sched.h @@ -61,9 +61,10 @@ extern "C" { #include #include -/** Random Early Detection (RED) */ -#ifdef RTE_SCHED_RED +/** Congestion management */ +#ifdef RTE_SCHED_CMAN #include "rte_red.h" +#include "rte_pie.h" #endif /** Maximum number of queues per pipe. @@ -110,6 +111,28 @@ extern "C" { #define RTE_SCHED_FRAME_OVERHEAD_DEFAULT 24 #endif +/** + * Congestion management (CMAN) mode + * + * This is used for controlling the admission of packets into a packet queue or + * group of packet queues on congestion. + * + * The *Random Early Detection (RED)* algorithm works by proactively dropping + * more and more input packets as the queue occupancy builds up. When the queue + * is full or almost full, RED effectively works as *tail drop*. The *Weighted + * RED* algorithm uses a separate set of RED thresholds for each packet color. + * + * Similar to RED, Proportional Integral Controller Enhanced (PIE) randomly + * drops a packet at the onset of the congestion and tries to control the + * latency around the target value. The congestion detection, however, is based + * on the queueing latency instead of the queue length like RED. For more + * information, refer RFC8033. + */ +enum rte_sched_cman_mode { + RTE_SCHED_CMAN_WRED, /**< Weighted Random Early Detection (WRED) */ + RTE_SCHED_CMAN_PIE, /**< Proportional Integral Controller Enhanced (PIE) */ +}; + /* * Pipe configuration parameters. The period and credits_per_period * parameters are measured in bytes, with one byte meaning the time @@ -174,9 +197,17 @@ struct rte_sched_subport_params { /** Max allowed profiles in the pipe profile table */ uint32_t n_max_pipe_profiles; -#ifdef RTE_SCHED_RED - /** RED parameters */ - struct rte_red_params red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS]; +#ifdef RTE_SCHED_CMAN + /** Congestion management mode */ + enum rte_sched_cman_mode cman; + + RTE_STD_C11 + union { + /** RED parameters */ + struct rte_red_params red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS]; + /** PIE parameters */ + struct rte_pie_params pie_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; + }; #endif }; @@ -208,9 +239,9 @@ struct rte_sched_subport_stats { /** Number of bytes dropped for each traffic class */ uint64_t n_bytes_tc_dropped[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; -#ifdef RTE_SCHED_RED - /** Number of packets dropped by red */ - uint64_t n_pkts_red_dropped[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; +#ifdef RTE_SCHED_CMAN + /** Number of packets dropped by congestion management scheme */ + uint64_t n_pkts_cman_dropped[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; #endif }; @@ -222,9 +253,9 @@ struct rte_sched_queue_stats { /** Packets dropped */ uint64_t n_pkts_dropped; -#ifdef RTE_SCHED_RED - /** Packets dropped by RED */ - uint64_t n_pkts_red_dropped; +#ifdef RTE_SCHED_CMAN + /** Packets dropped by congestion management scheme */ + uint64_t n_pkts_cman_dropped; #endif /** Bytes successfully written */ From patchwork Mon May 24 10:58:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liguzinski, WojciechX" X-Patchwork-Id: 93398 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CFDD3A0547; Mon, 24 May 2021 12:59:22 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) 
with ESMTP id CBC2941121; Mon, 24 May 2021 12:59:12 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id CBBAD4003C for ; Mon, 24 May 2021 12:59:09 +0200 (CEST) IronPort-SDR: 6GkzafTFVbLl/M/aZpSj6MZp7d4I3XFaUQhDlVwtGTggrA51q00AC+Zuq39FrGCQVuaTh9D3Jx bZPiayo6LZ1Q== X-IronPort-AV: E=McAfee;i="6200,9189,9993"; a="201948463" X-IronPort-AV: E=Sophos;i="5.82,319,1613462400"; d="scan'208";a="201948463" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2021 03:59:08 -0700 IronPort-SDR: A8+s1pySnqiU/WCB3oMe48heAS42SYrnMCj+9q4Bkl8k8AGhfKHQ2jGRNO3z5ch7Z18QjN80Pm LNqVzc0qSz8w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.82,319,1613462400"; d="scan'208";a="413548054" Received: from silpixa00400629.ir.intel.com ([10.237.214.62]) by orsmga002.jf.intel.com with ESMTP; 24 May 2021 03:59:07 -0700 From: "Liguzinski, WojciechX" To: dev@dpdk.org, jasvinder.singh@intel.com, cristian.dumitrescu@intel.com Cc: savinay.dharmappa@intel.com Date: Mon, 24 May 2021 11:58:21 +0100 Message-Id: <20210524105822.63171-3-wojciechx.liguzinski@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210524105822.63171-1-wojciechx.liguzinski@intel.com> References: <20210524105822.63171-1-wojciechx.liguzinski@intel.com> Subject: [dpdk-dev] [RFC PATCH 2/3] example/qos_sched: add pie support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" patch add support enable pie or red by parsing config file. Signed-off-by: Liguzinski, WojciechX --- config/rte_config.h | 1 - examples/qos_sched/app_thread.c | 1 - examples/qos_sched/cfg_file.c | 82 ++++++++++--- examples/qos_sched/init.c | 5 +- examples/qos_sched/profile.cfg | 196 +++++++++++++++++++++----------- 5 files changed, 199 insertions(+), 86 deletions(-) diff --git a/config/rte_config.h b/config/rte_config.h index 590903c07d..48132f27df 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -89,7 +89,6 @@ #define RTE_MAX_LCORE_FREQS 64 /* rte_sched defines */ -#undef RTE_SCHED_RED #undef RTE_SCHED_COLLECT_STATS #undef RTE_SCHED_SUBPORT_TC_OV #define RTE_SCHED_PORT_N_GRINDERS 8 diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c index dbc878b553..895c0d3592 100644 --- a/examples/qos_sched/app_thread.c +++ b/examples/qos_sched/app_thread.c @@ -205,7 +205,6 @@ app_worker_thread(struct thread_conf **confs) if (likely(nb_pkt)) { int nb_sent = rte_sched_port_enqueue(conf->sched_port, mbufs, nb_pkt); - APP_STATS_ADD(conf->stat.nb_drop, nb_pkt - nb_sent); APP_STATS_ADD(conf->stat.nb_rx, nb_pkt); } diff --git a/examples/qos_sched/cfg_file.c b/examples/qos_sched/cfg_file.c index cd167bd8e6..5a39e32269 100644 --- a/examples/qos_sched/cfg_file.c +++ b/examples/qos_sched/cfg_file.c @@ -242,20 +242,20 @@ cfg_load_subport(struct rte_cfgfile *cfg, struct rte_sched_subport_params *subpo memset(active_queues, 0, sizeof(active_queues)); n_active_queues = 0; -#ifdef RTE_SCHED_RED - char sec_name[CFG_NAME_LEN]; - struct rte_red_params red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS]; +#ifdef RTE_SCHED_CMAN + enum rte_sched_cman_mode cman_mode; - snprintf(sec_name, sizeof(sec_name), "red"); + struct rte_red_params red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS]; - if (rte_cfgfile_has_section(cfg, sec_name)) { + if 
(rte_cfgfile_has_section(cfg, "red")) { + cman_mode = RTE_SCHED_CMAN_WRED; for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { char str[32]; /* Parse WRED min thresholds */ snprintf(str, sizeof(str), "tc %d wred min", i); - entry = rte_cfgfile_get_entry(cfg, sec_name, str); + entry = rte_cfgfile_get_entry(cfg, "red", str); if (entry) { char *next; /* for each packet colour (green, yellow, red) */ @@ -315,7 +315,42 @@ cfg_load_subport(struct rte_cfgfile *cfg, struct rte_sched_subport_params *subpo } } } -#endif /* RTE_SCHED_RED */ + + struct rte_pie_params pie_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE]; + + if (rte_cfgfile_has_section(cfg, "pie")) { + cman_mode = RTE_SCHED_CMAN_PIE; + + for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { + char str[32]; + + /* Parse Queue Delay Ref value */ + snprintf(str, sizeof(str), "tc %d qdelay ref", i); + entry = rte_cfgfile_get_entry(cfg, "pie", str); + if (entry) + pie_params[i].qdelay_ref = (uint16_t) atoi(entry); + + /* Parse Max Burst value */ + snprintf(str, sizeof(str), "tc %d max burst", i); + entry = rte_cfgfile_get_entry(cfg, "pie", str); + if (entry) + pie_params[i].max_burst = (uint16_t) atoi(entry); + + /* Parse Update Interval Value */ + snprintf(str, sizeof(str), "tc %d update interval", i); + entry = rte_cfgfile_get_entry(cfg, "pie", str); + if (entry) + pie_params[i].dp_update_interval = (uint16_t) atoi(entry); + + /* Parse Tailq Threashold Value */ + snprintf(str, sizeof(str), "tc %d tailq th", i); + entry = rte_cfgfile_get_entry(cfg, "pie", str); + if (entry) + pie_params[i].tailq_th = (uint16_t) atoi(entry); + + } + } +#endif /* RTE_SCHED_CMAN */ for (i = 0; i < MAX_SCHED_SUBPORTS; i++) { char sec_name[CFG_NAME_LEN]; @@ -393,17 +428,30 @@ cfg_load_subport(struct rte_cfgfile *cfg, struct rte_sched_subport_params *subpo } } } -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN + subport_params[i].cman = cman_mode; + for (j = 0; j < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; j++) { - for (k = 0; k < RTE_COLORS; k++) { - subport_params[i].red_params[j][k].min_th = - red_params[j][k].min_th; - subport_params[i].red_params[j][k].max_th = - red_params[j][k].max_th; - subport_params[i].red_params[j][k].maxp_inv = - red_params[j][k].maxp_inv; - subport_params[i].red_params[j][k].wq_log2 = - red_params[j][k].wq_log2; + if (subport_params[i].cman == RTE_SCHED_CMAN_WRED) { + for (k = 0; k < RTE_COLORS; k++) { + subport_params[i].red_params[j][k].min_th = + red_params[j][k].min_th; + subport_params[i].red_params[j][k].max_th = + red_params[j][k].max_th; + subport_params[i].red_params[j][k].maxp_inv = + red_params[j][k].maxp_inv; + subport_params[i].red_params[j][k].wq_log2 = + red_params[j][k].wq_log2; + } + } else { + subport_params[i].pie_params[j].qdelay_ref = + pie_params[j].qdelay_ref; + subport_params[i].pie_params[j].dp_update_interval = + pie_params[j].dp_update_interval; + subport_params[i].pie_params[j].max_burst = + pie_params[j].max_burst; + subport_params[i].pie_params[j].tailq_th = + pie_params[j].tailq_th; } } #endif diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c index 1abe003fc6..b1babc2276 100644 --- a/examples/qos_sched/init.c +++ b/examples/qos_sched/init.c @@ -212,7 +212,8 @@ struct rte_sched_subport_params subport_params[MAX_SCHED_SUBPORTS] = { .n_pipe_profiles = sizeof(pipe_profiles) / sizeof(struct rte_sched_pipe_params), .n_max_pipe_profiles = MAX_SCHED_PIPE_PROFILES, -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN + .cman = RTE_SCHED_CMAN_WRED, .red_params = { /* Traffic Class 0 Colors Green / Yellow / 
Red */ [0][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, @@ -279,7 +280,7 @@ struct rte_sched_subport_params subport_params[MAX_SCHED_SUBPORTS] = { [12][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, [12][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, }, -#endif /* RTE_SCHED_RED */ +#endif /* RTE_SCHED_CMAN */ }, }; diff --git a/examples/qos_sched/profile.cfg b/examples/qos_sched/profile.cfg index 4486d2799e..d4b21c0170 100644 --- a/examples/qos_sched/profile.cfg +++ b/examples/qos_sched/profile.cfg @@ -76,68 +76,134 @@ tc 12 oversubscription weight = 1 tc 12 wrr weights = 1 1 1 1 ; RED params per traffic class and color (Green / Yellow / Red) -[red] -tc 0 wred min = 48 40 32 -tc 0 wred max = 64 64 64 -tc 0 wred inv prob = 10 10 10 -tc 0 wred weight = 9 9 9 - -tc 1 wred min = 48 40 32 -tc 1 wred max = 64 64 64 -tc 1 wred inv prob = 10 10 10 -tc 1 wred weight = 9 9 9 - -tc 2 wred min = 48 40 32 -tc 2 wred max = 64 64 64 -tc 2 wred inv prob = 10 10 10 -tc 2 wred weight = 9 9 9 - -tc 3 wred min = 48 40 32 -tc 3 wred max = 64 64 64 -tc 3 wred inv prob = 10 10 10 -tc 3 wred weight = 9 9 9 - -tc 4 wred min = 48 40 32 -tc 4 wred max = 64 64 64 -tc 4 wred inv prob = 10 10 10 -tc 4 wred weight = 9 9 9 - -tc 5 wred min = 48 40 32 -tc 5 wred max = 64 64 64 -tc 5 wred inv prob = 10 10 10 -tc 5 wred weight = 9 9 9 - -tc 6 wred min = 48 40 32 -tc 6 wred max = 64 64 64 -tc 6 wred inv prob = 10 10 10 -tc 6 wred weight = 9 9 9 - -tc 7 wred min = 48 40 32 -tc 7 wred max = 64 64 64 -tc 7 wred inv prob = 10 10 10 -tc 7 wred weight = 9 9 9 - -tc 8 wred min = 48 40 32 -tc 8 wred max = 64 64 64 -tc 8 wred inv prob = 10 10 10 -tc 8 wred weight = 9 9 9 - -tc 9 wred min = 48 40 32 -tc 9 wred max = 64 64 64 -tc 9 wred inv prob = 10 10 10 -tc 9 wred weight = 9 9 9 - -tc 10 wred min = 48 40 32 -tc 10 wred max = 64 64 64 -tc 10 wred inv prob = 10 10 10 -tc 10 wred weight = 9 9 9 - -tc 11 wred min = 48 40 32 -tc 11 wred max = 64 64 64 -tc 11 wred inv prob = 10 10 10 -tc 11 wred weight = 9 9 9 - -tc 12 wred min = 48 40 32 -tc 12 wred max = 64 64 64 -tc 12 wred inv prob = 10 10 10 -tc 12 wred weight = 9 9 9 +;[red] +;tc 0 wred min = 48 40 32 +;tc 0 wred max = 64 64 64 +;tc 0 wred inv prob = 10 10 10 +;tc 0 wred weight = 9 9 9 + +;tc 1 wred min = 48 40 32 +;tc 1 wred max = 64 64 64 +;tc 1 wred inv prob = 10 10 10 +;tc 1 wred weight = 9 9 9 + +;tc 2 wred min = 48 40 32 +;tc 2 wred max = 64 64 64 +;tc 2 wred inv prob = 10 10 10 +;tc 2 wred weight = 9 9 9 + +;tc 3 wred min = 48 40 32 +;tc 3 wred max = 64 64 64 +;tc 3 wred inv prob = 10 10 10 +;tc 3 wred weight = 9 9 9 + +;tc 4 wred min = 48 40 32 +;tc 4 wred max = 64 64 64 +;tc 4 wred inv prob = 10 10 10 +;tc 4 wred weight = 9 9 9 + +;tc 5 wred min = 48 40 32 +;tc 5 wred max = 64 64 64 +;tc 5 wred inv prob = 10 10 10 +;tc 5 wred weight = 9 9 9 + +;tc 6 wred min = 48 40 32 +;tc 6 wred max = 64 64 64 +;tc 6 wred inv prob = 10 10 10 +;tc 6 wred weight = 9 9 9 + +;tc 7 wred min = 48 40 32 +;tc 7 wred max = 64 64 64 +;tc 7 wred inv prob = 10 10 10 +;tc 7 wred weight = 9 9 9 + +;tc 8 wred min = 48 40 32 +;tc 8 wred max = 64 64 64 +;tc 8 wred inv prob = 10 10 10 +;tc 8 wred weight = 9 9 9 + +;tc 9 wred min = 48 40 32 +;tc 9 wred max = 64 64 64 +;tc 9 wred inv prob = 10 10 10 +;tc 9 wred weight = 9 9 9 + +;tc 10 wred min = 48 40 32 +;tc 10 wred max = 64 64 64 +;tc 10 wred inv prob = 10 10 10 +;tc 10 wred weight = 9 9 9 + +;tc 11 wred min = 48 40 32 +;tc 11 wred max = 64 64 64 +;tc 11 wred inv prob = 10 10 10 +;tc 11 wred weight = 
9 9 9 + +;tc 12 wred min = 48 40 32 +;tc 12 wred max = 64 64 64 +;tc 12 wred inv prob = 10 10 10 +;tc 12 wred weight = 9 9 9 + +[pie] +tc 0 qdelay ref = 15 +tc 0 max burst = 150 +tc 0 update interval = 15 +tc 0 tailq th = 64 + +tc 1 qdelay ref = 15 +tc 1 max burst = 150 +tc 1 update interval = 15 +tc 1 tailq th = 64 + +tc 2 qdelay ref = 15 +tc 2 max burst = 150 +tc 2 update interval = 15 +tc 2 tailq th = 64 + +tc 3 qdelay ref = 15 +tc 3 max burst = 150 +tc 3 update interval = 15 +tc 3 tailq th = 64 + +tc 4 qdelay ref = 15 +tc 4 max burst = 150 +tc 4 update interval = 15 +tc 4 tailq th = 64 + +tc 5 qdelay ref = 15 +tc 5 max burst = 150 +tc 5 update interval = 15 +tc 5 tailq th = 64 + +tc 6 qdelay ref = 15 +tc 6 max burst = 150 +tc 6 update interval = 15 +tc 6 tailq th = 64 + +tc 7 qdelay ref = 15 +tc 7 max burst = 150 +tc 7 update interval = 15 +tc 7 tailq th = 64 + +tc 8 qdelay ref = 15 +tc 8 max burst = 150 +tc 8 update interval = 15 +tc 8 tailq th = 64 + +tc 9 qdelay ref = 15 +tc 9 max burst = 150 +tc 9 update interval = 15 +tc 9 tailq th = 64 + +tc 10 qdelay ref = 15 +tc 10 max burst = 150 +tc 10 update interval = 15 +tc 10 tailq th = 64 + +tc 11 qdelay ref = 15 +tc 11 max burst = 150 +tc 11 update interval = 15 +tc 11 tailq th = 64 + +tc 12 qdelay ref = 15 +tc 12 max burst = 150 +tc 12 update interval = 15 +tc 12 tailq th = 64 From patchwork Mon May 24 10:58:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Liguzinski, WojciechX" X-Patchwork-Id: 93399 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 973AFA0547; Mon, 24 May 2021 12:59:30 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 65DD14112F; Mon, 24 May 2021 12:59:14 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 7F99B41116 for ; Mon, 24 May 2021 12:59:10 +0200 (CEST) IronPort-SDR: gNRg1cnlTQG+1Tb0Uxbf+WbROpGIrkiWslQK51Zgak0qjqXTtXezKir5anEDSYYrbhCOWar4EP SD6XFDP8583A== X-IronPort-AV: E=McAfee;i="6200,9189,9993"; a="201948470" X-IronPort-AV: E=Sophos;i="5.82,319,1613462400"; d="scan'208";a="201948470" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 24 May 2021 03:59:10 -0700 IronPort-SDR: /Ye+quAaZG+hsr4y0sS3F4Tw0uMVAbzfRxgkLf3m0Y4o6TabUaCnaIDN0gnV4qpfDREpadxN1f /km/WWhBh+Ng== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.82,319,1613462400"; d="scan'208";a="413548063" Received: from silpixa00400629.ir.intel.com ([10.237.214.62]) by orsmga002.jf.intel.com with ESMTP; 24 May 2021 03:59:08 -0700 From: "Liguzinski, WojciechX" To: dev@dpdk.org, jasvinder.singh@intel.com, cristian.dumitrescu@intel.com Cc: savinay.dharmappa@intel.com Date: Mon, 24 May 2021 11:58:22 +0100 Message-Id: <20210524105822.63171-4-wojciechx.liguzinski@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20210524105822.63171-1-wojciechx.liguzinski@intel.com> References: <20210524105822.63171-1-wojciechx.liguzinski@intel.com> Subject: [dpdk-dev] [RFC PATCH 3/3] example/ip_pipeline: add pie support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: 
dev-bounces@dpdk.org Sender: "dev" Signed-off-by: Liguzinski, WojciechX --- examples/ip_pipeline/tmgr.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/examples/ip_pipeline/tmgr.c b/examples/ip_pipeline/tmgr.c index e4e364cbc0..406184e760 100644 --- a/examples/ip_pipeline/tmgr.c +++ b/examples/ip_pipeline/tmgr.c @@ -25,7 +25,7 @@ static const struct rte_sched_subport_params subport_params_default = { .pipe_profiles = pipe_profile, .n_pipe_profiles = 0, /* filled at run time */ .n_max_pipe_profiles = RTE_DIM(pipe_profile), -#ifdef RTE_SCHED_RED +#ifdef RTE_SCHED_CMAN .red_params = { /* Traffic Class 0 Colors Green / Yellow / Red */ [0][0] = {.min_th = 48, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, @@ -92,7 +92,7 @@ static const struct rte_sched_subport_params subport_params_default = { [12][1] = {.min_th = 40, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, [12][2] = {.min_th = 32, .max_th = 64, .maxp_inv = 10, .wq_log2 = 9}, }, -#endif /* RTE_SCHED_RED */ +#endif /* RTE_SCHED_CMAN */ }; static struct tmgr_port_list tmgr_port_list;
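
The sketch below is illustrative only and is not part of the patches above. It shows how an application might select PIE for a subport through the new rte_sched_subport_params fields introduced in patch 1/3 (the cman mode selector and the pie_params member of the union), assuming the library is built with RTE_SCHED_CMAN defined. The helper name subport_params_set_pie is hypothetical, and the numeric values are simply copied from the [pie] section of the example profile.cfg in patch 2/3.

    #include <rte_sched.h>

    /* Hypothetical helper: switch one subport's congestion management to PIE.
     * Field names follow this RFC series; the remaining subport parameters
     * (rates, qsize, pipe profiles) are assumed to be filled in elsewhere.
     */
    static void
    subport_params_set_pie(struct rte_sched_subport_params *p)
    {
    #ifdef RTE_SCHED_CMAN
    	uint32_t i;

    	p->cman = RTE_SCHED_CMAN_PIE;

    	for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
    		/* Queue delay reference (15, as in the example profile.cfg) */
    		p->pie_params[i].qdelay_ref = 15;
    		/* Drop probability update interval */
    		p->pie_params[i].dp_update_interval = 15;
    		/* Burst allowance */
    		p->pie_params[i].max_burst = 150;
    		/* Tail-drop threshold; the subport config code in patch 1/3
    		 * rejects values larger than the queue size of that TC. */
    		p->pie_params[i].tailq_th = 64;
    	}
    #else
    	(void)p;
    #endif
    }

WRED stays available the same way: leave cman set to RTE_SCHED_CMAN_WRED and fill red_params instead, as the default initializers in examples/qos_sched/init.c and examples/ip_pipeline/tmgr.c above do.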