From patchwork Fri Jun 19 09:41:23 2015
X-Patchwork-Submitter: Maciej Gajdzica
X-Patchwork-Id: 5574
From: Maciej Gajdzica
To: dev@dpdk.org
Date: Fri, 19 Jun 2015 11:41:23 +0200
Message-Id: <1434706885-4519-12-git-send-email-maciejx.t.gajdzica@intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1434706885-4519-1-git-send-email-maciejx.t.gajdzica@intel.com>
References: <1434706885-4519-1-git-send-email-maciejx.t.gajdzica@intel.com>
Subject: [dpdk-dev] [PATCH v5 11/13] port: added port_sched_writer stats

Added statistics for sched writer port.
Signed-off-by: Maciej Gajdzica

---
 lib/librte_port/rte_port_sched.c |   57 ++++++++++++++++++++++++++++++++++----
 1 file changed, 52 insertions(+), 5 deletions(-)

diff --git a/lib/librte_port/rte_port_sched.c b/lib/librte_port/rte_port_sched.c
index a82e4fa..c5ff8ab 100644
--- a/lib/librte_port/rte_port_sched.c
+++ b/lib/librte_port/rte_port_sched.c
@@ -132,7 +132,23 @@ rte_port_sched_reader_stats_read(void *port,
 /*
  * Writer
  */
+#ifdef RTE_PORT_STATS_COLLECT
+
+#define RTE_PORT_SCHED_WRITER_STATS_PKTS_IN_ADD(port, val) \
+	port->stats.n_pkts_in += val
+#define RTE_PORT_SCHED_WRITER_STATS_PKTS_DROP_ADD(port, val) \
+	port->stats.n_pkts_drop += val
+
+#else
+
+#define RTE_PORT_SCHED_WRITER_STATS_PKTS_IN_ADD(port, val)
+#define RTE_PORT_SCHED_WRITER_STATS_PKTS_DROP_ADD(port, val)
+
+#endif
+
 struct rte_port_sched_writer {
+	struct rte_port_out_stats stats;
+
 	struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
 	struct rte_sched_port *sched;
 	uint32_t tx_burst_sz;
@@ -180,8 +196,12 @@ rte_port_sched_writer_tx(void *port, struct rte_mbuf *pkt)
 	struct rte_port_sched_writer *p = (struct rte_port_sched_writer *) port;
 
 	p->tx_buf[p->tx_buf_count++] = pkt;
+	RTE_PORT_SCHED_WRITER_STATS_PKTS_IN_ADD(p, 1);
 	if (p->tx_buf_count >= p->tx_burst_sz) {
-		rte_sched_port_enqueue(p->sched, p->tx_buf, p->tx_buf_count);
+		__rte_unused uint32_t nb_tx;
+
+		nb_tx = rte_sched_port_enqueue(p->sched, p->tx_buf, p->tx_buf_count);
+		RTE_PORT_SCHED_WRITER_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
 		p->tx_buf_count = 0;
 	}
 
@@ -200,15 +220,18 @@ rte_port_sched_writer_tx_bulk(void *port,
 			((pkts_mask & bsz_mask) ^ bsz_mask);
 
 	if (expr == 0) {
+		__rte_unused uint32_t nb_tx;
 		uint64_t n_pkts = __builtin_popcountll(pkts_mask);
 
 		if (tx_buf_count) {
-			rte_sched_port_enqueue(p->sched, p->tx_buf,
+			nb_tx = rte_sched_port_enqueue(p->sched, p->tx_buf,
 				tx_buf_count);
+			RTE_PORT_SCHED_WRITER_STATS_PKTS_DROP_ADD(p, tx_buf_count - nb_tx);
 			p->tx_buf_count = 0;
 		}
 
-		rte_sched_port_enqueue(p->sched, pkts, n_pkts);
+		nb_tx = rte_sched_port_enqueue(p->sched, pkts, n_pkts);
+		RTE_PORT_SCHED_WRITER_STATS_PKTS_DROP_ADD(p, n_pkts - nb_tx);
 	} else {
 		for ( ; pkts_mask; ) {
 			uint32_t pkt_index = __builtin_ctzll(pkts_mask);
@@ -216,13 +239,17 @@ rte_port_sched_writer_tx_bulk(void *port,
 			struct rte_mbuf *pkt = pkts[pkt_index];
 
 			p->tx_buf[tx_buf_count++] = pkt;
+			RTE_PORT_SCHED_WRITER_STATS_PKTS_IN_ADD(p, 1);
 			pkts_mask &= ~pkt_mask;
 		}
 
 		p->tx_buf_count = tx_buf_count;
 		if (tx_buf_count >= p->tx_burst_sz) {
-			rte_sched_port_enqueue(p->sched, p->tx_buf,
+			__rte_unused uint32_t nb_tx;
+
+			nb_tx = rte_sched_port_enqueue(p->sched, p->tx_buf,
 				tx_buf_count);
+			RTE_PORT_SCHED_WRITER_STATS_PKTS_DROP_ADD(p, tx_buf_count - nb_tx);
 			p->tx_buf_count = 0;
 		}
 	}
@@ -236,7 +263,10 @@ rte_port_sched_writer_flush(void *port)
 	struct rte_port_sched_writer *p = (struct rte_port_sched_writer *) port;
 
 	if (p->tx_buf_count) {
-		rte_sched_port_enqueue(p->sched, p->tx_buf, p->tx_buf_count);
+		__rte_unused uint32_t nb_tx;
+
+		nb_tx = rte_sched_port_enqueue(p->sched, p->tx_buf, p->tx_buf_count);
+		RTE_PORT_SCHED_WRITER_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
 		p->tx_buf_count = 0;
 	}
 
@@ -257,6 +287,22 @@ rte_port_sched_writer_free(void *port)
 	return 0;
 }
 
+static int
+rte_port_sched_writer_stats_read(void *port,
+		struct rte_port_out_stats *stats, int clear)
+{
+	struct rte_port_sched_writer *p =
+		(struct rte_port_sched_writer *) port;
+
+	if (stats != NULL)
+		memcpy(stats, &p->stats, sizeof(p->stats));
+
+	if (clear)
+		memset(&p->stats, 0, sizeof(p->stats));
+
+	return 0;
+}
+
 /*
  * Summary of port operations
  */
@@ -273,4 +319,5 @@ struct rte_port_out_ops rte_port_sched_writer_ops = {
 	.f_tx = rte_port_sched_writer_tx,
 	.f_tx_bulk = rte_port_sched_writer_tx_bulk,
 	.f_flush = rte_port_sched_writer_flush,
+	.f_stats = rte_port_sched_writer_stats_read,
 };