From patchwork Wed Apr 25 16:44:54 2018
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 38964
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: jck@semihalf.com, tdu@semihalf.com, dima@marvell.com, nsamsono@marvell.com, jianbo.liu@arm.com
Cc: dev@dpdk.org, Stephen Hemminger
Date: Wed, 25 Apr 2018 09:44:54 -0700
Message-Id: <20180425164454.21274-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [RFC] net/mvpp2: implement dynamic logging

All DPDK drivers should use dynamic log types rather than the default PMD log type. This is an RFC, not a final patch, since I don't have the libraries or hardware to validate it.
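The conversion follows the usual dynamic-logging pattern: register a driver-specific log type once at startup and route every driver message through a driver-local macro instead of the shared PMD log type. A minimal sketch of that pattern is below; the symbol names and the registered log type string are illustrative only, not the exact ones used in the patch:

#include <rte_log.h>

static int example_logtype;

#define EXAMPLE_LOG(level, ...) \
	rte_log(RTE_LOG_ ## level, example_logtype, "example: " __VA_ARGS__)

RTE_INIT(example_init_log);
static void
example_init_log(void)
{
	/* Register a named log type and default it to NOTICE level. */
	example_logtype = rte_log_register("pmd.net.example");
	if (example_logtype >= 0)
		rte_log_set_level(example_logtype, RTE_LOG_NOTICE);
}

As in the patch, the macro does not append a newline, so the existing message strings keep their trailing "\n". With this in place the driver's verbosity can be raised at runtime through the EAL --log-level option for the registered name (exact option syntax depends on the DPDK release) instead of being fixed at build time.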
Signed-off-by: Stephen Hemminger --- drivers/net/mvpp2/mrvl_ethdev.c | 135 +++++++++++++++++--------------- drivers/net/mvpp2/mrvl_ethdev.h | 6 ++ drivers/net/mvpp2/mrvl_flow.c | 24 +++--- drivers/net/mvpp2/mrvl_qos.c | 32 ++++---- 4 files changed, 107 insertions(+), 90 deletions(-) diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c index 05998bf2dbc0..25969159baf9 100644 --- a/drivers/net/mvpp2/mrvl_ethdev.c +++ b/drivers/net/mvpp2/mrvl_ethdev.c @@ -94,6 +94,8 @@ struct pp2_bpool *mrvl_port_to_bpool_lookup[RTE_MAX_ETHPORTS]; int mrvl_port_bpool_size[PP2_NUM_PKT_PROC][PP2_BPOOL_NUM_POOLS][RTE_MAX_LCORE]; uint64_t cookie_addr_high = MRVL_COOKIE_ADDR_INVALID; +int mrvl_logtype; + struct mrvl_ifnames { const char *names[PP2_NUM_ETH_PPIO * PP2_NUM_PKT_PROC]; int idx; @@ -206,7 +208,7 @@ mrvl_init_hif(int core_id) ret = mrvl_reserve_bit(&used_hifs, MRVL_MUSDK_HIFS_MAX); if (ret < 0) { - RTE_LOG(ERR, PMD, "Failed to allocate hif %d\n", core_id); + MRVL_LOG(ERR, "Failed to allocate hif %d\n", core_id); return ret; } @@ -216,7 +218,7 @@ mrvl_init_hif(int core_id) params.out_size = MRVL_PP2_AGGR_TXQD_MAX; ret = pp2_hif_init(&params, &hifs[core_id]); if (ret) { - RTE_LOG(ERR, PMD, "Failed to initialize hif %d\n", core_id); + MRVL_LOG(ERR, "Failed to initialize hif %d\n", core_id); return ret; } @@ -235,7 +237,7 @@ mrvl_get_hif(struct mrvl_priv *priv, int core_id) ret = mrvl_init_hif(core_id); if (ret < 0) { - RTE_LOG(ERR, PMD, "Failed to allocate hif %d\n", core_id); + MRVL_LOG(ERR, "Failed to allocate hif %d\n", core_id); goto out; } @@ -265,7 +267,7 @@ static int mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf) { if (rss_conf->rss_key) - RTE_LOG(WARNING, PMD, "Changing hash key is not supported\n"); + MRVL_LOG(WARNING, "Changing hash key is not supported\n"); if (rss_conf->rss_hf == 0) { priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE; @@ -307,34 +309,34 @@ mrvl_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE && dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) { - RTE_LOG(INFO, PMD, "Unsupported rx multi queue mode %d\n", + MRVL_LOG(INFO, "Unsupported rx multi queue mode %d\n", dev->data->dev_conf.rxmode.mq_mode); return -EINVAL; } if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CRC_STRIP)) { - RTE_LOG(INFO, PMD, + MRVL_LOG(INFO, "L2 CRC stripping is always enabled in hw\n"); dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CRC_STRIP; } if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) { - RTE_LOG(INFO, PMD, "VLAN stripping not supported\n"); + MRVL_LOG(INFO, "VLAN stripping not supported\n"); return -EINVAL; } if (dev->data->dev_conf.rxmode.split_hdr_size) { - RTE_LOG(INFO, PMD, "Split headers not supported\n"); + MRVL_LOG(INFO, "Split headers not supported\n"); return -EINVAL; } if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) { - RTE_LOG(INFO, PMD, "RX Scatter/Gather not supported\n"); + MRVL_LOG(INFO, "RX Scatter/Gather not supported\n"); return -EINVAL; } if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) { - RTE_LOG(INFO, PMD, "LRO not supported\n"); + MRVL_LOG(INFO, "LRO not supported\n"); return -EINVAL; } @@ -358,7 +360,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev) if (dev->data->nb_rx_queues == 1 && dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) { - RTE_LOG(WARNING, PMD, "Disabling hash for 1 rx queue\n"); + MRVL_LOG(WARNING, "Disabling hash for 1 rx queue\n"); 
priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE; return 0; @@ -482,7 +484,7 @@ mrvl_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id) /* passing 1 enables given tx queue */ ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 1); if (ret) { - RTE_LOG(ERR, PMD, "Failed to start txq %d\n", queue_id); + MRVL_LOG(ERR, "Failed to start txq %d\n", queue_id); return ret; } @@ -514,7 +516,7 @@ mrvl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id) /* passing 0 disables given tx queue */ ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 0); if (ret) { - RTE_LOG(ERR, PMD, "Failed to stop txq %d\n", queue_id); + MRVL_LOG(ERR, "Failed to stop txq %d\n", queue_id); return ret; } @@ -561,7 +563,7 @@ mrvl_dev_start(struct rte_eth_dev *dev) priv->bpool_init_size += buffs_to_add; ret = mrvl_fill_bpool(dev->data->rx_queues[0], buffs_to_add); if (ret) - RTE_LOG(ERR, PMD, "Failed to add buffers to bpool\n"); + MRVL_LOG(ERR, "Failed to add buffers to bpool\n"); } /* @@ -576,7 +578,7 @@ mrvl_dev_start(struct rte_eth_dev *dev) ret = pp2_ppio_init(&priv->ppio_params, &priv->ppio); if (ret) { - RTE_LOG(ERR, PMD, "Failed to init ppio\n"); + MRVL_LOG(ERR, "Failed to init ppio\n"); return ret; } @@ -589,7 +591,7 @@ mrvl_dev_start(struct rte_eth_dev *dev) if (!priv->uc_mc_flushed) { ret = pp2_ppio_flush_mac_addrs(priv->ppio, 1, 1); if (ret) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Failed to flush uc/mc filter list\n"); goto out; } @@ -599,7 +601,7 @@ mrvl_dev_start(struct rte_eth_dev *dev) if (!priv->vlan_flushed) { ret = pp2_ppio_flush_vlan(priv->ppio); if (ret) { - RTE_LOG(ERR, PMD, "Failed to flush vlan list\n"); + MRVL_LOG(ERR, "Failed to flush vlan list\n"); /* * TODO * once pp2_ppio_flush_vlan() is supported jump to out @@ -613,14 +615,14 @@ mrvl_dev_start(struct rte_eth_dev *dev) if (mrvl_qos_cfg) { ret = mrvl_start_qos_mapping(priv); if (ret) { - RTE_LOG(ERR, PMD, "Failed to setup QoS mapping\n"); + MRVL_LOG(ERR, "Failed to setup QoS mapping\n"); goto out; } } ret = mrvl_dev_set_link_up(dev); if (ret) { - RTE_LOG(ERR, PMD, "Failed to set link up\n"); + MRVL_LOG(ERR, "Failed to set link up\n"); goto out; } @@ -644,7 +646,7 @@ mrvl_dev_start(struct rte_eth_dev *dev) return 0; out: - RTE_LOG(ERR, PMD, "Failed to start device\n"); + MRVL_LOG(ERR, "Failed to start device\n"); pp2_ppio_deinit(priv->ppio); return ret; } @@ -660,7 +662,7 @@ mrvl_flush_rx_queues(struct rte_eth_dev *dev) { int i; - RTE_LOG(INFO, PMD, "Flushing rx queues\n"); + MRVL_LOG(INFO, "Flushing rx queues\n"); for (i = 0; i < dev->data->nb_rx_queues; i++) { int ret, num; @@ -689,7 +691,7 @@ mrvl_flush_tx_shadow_queues(struct rte_eth_dev *dev) int i, j; struct mrvl_txq *txq; - RTE_LOG(INFO, PMD, "Flushing tx shadow queues\n"); + MRVL_LOG(INFO, "Flushing tx shadow queues\n"); for (i = 0; i < dev->data->nb_tx_queues; i++) { txq = (struct mrvl_txq *)dev->data->tx_queues[i]; @@ -737,7 +739,7 @@ mrvl_flush_bpool(struct rte_eth_dev *dev) ret = pp2_bpool_get_num_buffs(priv->bpool, &num); if (ret) { - RTE_LOG(ERR, PMD, "Failed to get bpool buffers number\n"); + MRVL_LOG(ERR, "Failed to get bpool buffers number\n"); return; } @@ -902,7 +904,7 @@ mrvl_promiscuous_enable(struct rte_eth_dev *dev) ret = pp2_ppio_set_promisc(priv->ppio, 1); if (ret) - RTE_LOG(ERR, PMD, "Failed to enable promiscuous mode\n"); + MRVL_LOG(ERR, "Failed to enable promiscuous mode\n"); } /** @@ -925,7 +927,7 @@ mrvl_allmulticast_enable(struct rte_eth_dev *dev) ret = pp2_ppio_set_mc_promisc(priv->ppio, 1); if (ret) - RTE_LOG(ERR, PMD, "Failed enable 
all-multicast mode\n"); + MRVL_LOG(ERR, "Failed enable all-multicast mode\n"); } /** @@ -945,7 +947,7 @@ mrvl_promiscuous_disable(struct rte_eth_dev *dev) ret = pp2_ppio_set_promisc(priv->ppio, 0); if (ret) - RTE_LOG(ERR, PMD, "Failed to disable promiscuous mode\n"); + MRVL_LOG(ERR, "Failed to disable promiscuous mode\n"); } /** @@ -965,7 +967,7 @@ mrvl_allmulticast_disable(struct rte_eth_dev *dev) ret = pp2_ppio_set_mc_promisc(priv->ppio, 0); if (ret) - RTE_LOG(ERR, PMD, "Failed to disable all-multicast mode\n"); + MRVL_LOG(ERR, "Failed to disable all-multicast mode\n"); } /** @@ -994,7 +996,7 @@ mrvl_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index) if (ret) { ether_format_addr(buf, sizeof(buf), &dev->data->mac_addrs[index]); - RTE_LOG(ERR, PMD, "Failed to remove mac %s\n", buf); + MRVL_LOG(ERR, "Failed to remove mac %s\n", buf); } } @@ -1047,7 +1049,7 @@ mrvl_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr, ret = pp2_ppio_add_mac_addr(priv->ppio, mac_addr->addr_bytes); if (ret) { ether_format_addr(buf, sizeof(buf), mac_addr); - RTE_LOG(ERR, PMD, "Failed to add mac %s\n", buf); + MRVL_LOG(ERR, "Failed to add mac %s\n", buf); return -1; } @@ -1081,7 +1083,7 @@ mrvl_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr) if (ret) { char buf[ETHER_ADDR_FMT_SIZE]; ether_format_addr(buf, sizeof(buf), mac_addr); - RTE_LOG(ERR, PMD, "Failed to set mac to %s\n", buf); + MRVL_LOG(ERR, "Failed to set mac to %s\n", buf); } return ret; @@ -1118,7 +1120,7 @@ mrvl_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) idx = rxq->queue_id; if (unlikely(idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "rx queue %d stats out of range (0 - %d)\n", idx, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1); continue; @@ -1129,7 +1131,7 @@ mrvl_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) priv->rxq_map[idx].inq, &rx_stats, 0); if (unlikely(ret)) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Failed to update rx queue %d stats\n", idx); break; } @@ -1153,7 +1155,7 @@ mrvl_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) idx = txq->queue_id; if (unlikely(idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "tx queue %d stats out of range (0 - %d)\n", idx, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1); } @@ -1161,7 +1163,7 @@ mrvl_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) ret = pp2_ppio_outq_get_statistics(priv->ppio, idx, &tx_stats, 0); if (unlikely(ret)) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Failed to update tx queue %d stats\n", idx); break; } @@ -1173,7 +1175,7 @@ mrvl_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) ret = pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0); if (unlikely(ret)) { - RTE_LOG(ERR, PMD, "Failed to update port statistics\n"); + MRVL_LOG(ERR, "Failed to update port statistics\n"); return ret; } @@ -1495,7 +1497,7 @@ mrvl_fill_bpool(struct mrvl_rxq *rxq, int num) for (i = 0; i < num; i++) { if (((uint64_t)mbufs[i] & MRVL_COOKIE_HIGH_ADDR_MASK) != cookie_addr_high) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "mbuf virtual addr high 0x%lx out of range\n", (uint64_t)mbufs[i] >> 32); goto out; @@ -1541,14 +1543,14 @@ mrvl_rx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested) uint64_t missing = mandatory & ~requested; if (unsupported) { - RTE_LOG(ERR, PMD, "Some Rx offloads are not supported. " + MRVL_LOG(ERR, "Some Rx offloads are not supported. 
" "Requested 0x%" PRIx64 " supported 0x%" PRIx64 ".\n", requested, supported); return 0; } if (missing) { - RTE_LOG(ERR, PMD, "Some Rx offloads are missing. " + MRVL_LOG(ERR, "Some Rx offloads are missing. " "Requested 0x%" PRIx64 " missing 0x%" PRIx64 ".\n", requested, missing); return 0; @@ -1595,7 +1597,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, /* * Unknown TC mapping, mapping will not have a correct queue. */ - RTE_LOG(ERR, PMD, "Unknown TC mapping for queue %hu eth%hhu\n", + MRVL_LOG(ERR, "Unknown TC mapping for queue %hu eth%hhu\n", idx, priv->ppio_id); return -EFAULT; } @@ -1603,7 +1605,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, min_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS; if (min_size < max_rx_pkt_len) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Mbuf size must be increased to %u bytes to hold up to %u bytes of data.\n", max_rx_pkt_len + RTE_PKTMBUF_HEADROOM + MRVL_PKT_EFFEC_OFFS, @@ -1705,14 +1707,14 @@ mrvl_tx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested) uint64_t missing = mandatory & ~requested; if (unsupported) { - RTE_LOG(ERR, PMD, "Some Tx offloads are not supported. " + MRVL_LOG(ERR, "Some Tx offloads are not supported. " "Requested 0x%" PRIx64 " supported 0x%" PRIx64 ".\n", requested, supported); return 0; } if (missing) { - RTE_LOG(ERR, PMD, "Some Tx offloads are missing. " + MRVL_LOG(ERR, "Some Tx offloads are missing. " "Requested 0x%" PRIx64 " missing 0x%" PRIx64 ".\n", requested, missing); return 0; @@ -1808,7 +1810,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) ret = pp2_ppio_get_rx_pause(priv->ppio, &en); if (ret) { - RTE_LOG(ERR, PMD, "Failed to read rx pause state\n"); + MRVL_LOG(ERR, "Failed to read rx pause state\n"); return ret; } @@ -1841,7 +1843,7 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) fc_conf->pause_time || fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) { - RTE_LOG(ERR, PMD, "Flowctrl parameter is not supported\n"); + MRVL_LOG(ERR, "Flowctrl parameter is not supported\n"); return -EINVAL; } @@ -1853,7 +1855,7 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) en = fc_conf->mode == RTE_FC_NONE ? 
0 : 1; ret = pp2_ppio_set_rx_pause(priv->ppio, en); if (ret) - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Failed to change flowctrl on RX side\n"); return ret; @@ -1945,7 +1947,7 @@ mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused, *(const void **)arg = &mrvl_flow_ops; return 0; default: - RTE_LOG(WARNING, PMD, "Filter type (%d) not supported", + MRVL_LOG(WARNING, "Filter type (%d) not supported", filter_type); return -EINVAL; } @@ -2042,7 +2044,7 @@ mrvl_desc_to_packet_type_and_offset(struct pp2_ppio_desc *desc, *l4_offset = *l3_offset + MRVL_ARP_LENGTH; break; default: - RTE_LOG(DEBUG, PMD, "Failed to recognise l3 packet type\n"); + MRVL_LOG(DEBUG, "Failed to recognise l3 packet type\n"); break; } @@ -2054,7 +2056,7 @@ mrvl_desc_to_packet_type_and_offset(struct pp2_ppio_desc *desc, packet_type |= RTE_PTYPE_L4_UDP; break; default: - RTE_LOG(DEBUG, PMD, "Failed to recognise l4 packet type\n"); + MRVL_LOG(DEBUG, "Failed to recognise l4 packet type\n"); break; } @@ -2125,7 +2127,7 @@ mrvl_rx_pkt_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) ret = pp2_ppio_recv(q->priv->ppio, q->priv->rxq_map[q->queue_id].tc, q->priv->rxq_map[q->queue_id].inq, descs, &nb_pkts); if (unlikely(ret < 0)) { - RTE_LOG(ERR, PMD, "Failed to receive packets\n"); + MRVL_LOG(ERR, "Failed to receive packets\n"); return 0; } mrvl_port_bpool_size[bpool->pp2_id][bpool->id][core_id] -= nb_pkts; @@ -2192,14 +2194,14 @@ mrvl_rx_pkt_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) (!rx_done && num < q->priv->bpool_init_size))) { ret = mrvl_fill_bpool(q, MRVL_BURST_SIZE); if (ret) - RTE_LOG(ERR, PMD, "Failed to fill bpool\n"); + MRVL_LOG(ERR, "Failed to fill bpool\n"); } else if (unlikely(num > q->priv->bpool_max_size)) { int i; int pkt_to_remove = num - q->priv->bpool_init_size; struct rte_mbuf *mbuf; struct pp2_buff_inf buff; - RTE_LOG(DEBUG, PMD, + MRVL_LOG(DEBUG, "\nport-%d:%d: bpool %d oversize - remove %d buffers (pool size: %d -> %d)\n", bpool->pp2_id, q->priv->ppio->port_id, bpool->id, pkt_to_remove, num, @@ -2320,7 +2322,7 @@ mrvl_free_sent_buffers(struct pp2_ppio *ppio, struct pp2_hif *hif, for (i = 0; i < nb_done; i++) { entry = &sq->ent[sq->tail + num]; if (unlikely(!entry->buff.addr)) { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Shadow memory @%d: cookie(%lx), pa(%lx)!\n", sq->tail, (u64)entry->buff.cookie, (u64)entry->buff.addr); @@ -2398,7 +2400,7 @@ mrvl_tx_pkt_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) sq_free_size = MRVL_PP2_TX_SHADOWQ_SIZE - sq->size - 1; if (unlikely(nb_pkts > sq_free_size)) { - RTE_LOG(DEBUG, PMD, + MRVL_LOG(DEBUG, "No room in shadow queue for %d packets! 
%d packets will be sent.\n", nb_pkts, sq_free_size); nb_pkts = sq_free_size; @@ -2585,7 +2587,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name) rte_zmalloc("mac_addrs", ETHER_ADDR_LEN * MRVL_MAC_ADDRS_MAX, 0); if (!eth_dev->data->mac_addrs) { - RTE_LOG(ERR, PMD, "Failed to allocate space for eth addrs\n"); + MRVL_LOG(ERR, "Failed to allocate space for eth addrs\n"); ret = -ENOMEM; goto out_free_priv; } @@ -2723,9 +2725,9 @@ rte_pmd_mrvl_probe(struct rte_vdev_device *vdev) */ if (!mrvl_qos_cfg) { cfgnum = rte_kvargs_count(kvlist, MRVL_CFG_ARG); - RTE_LOG(INFO, PMD, "Parsing config file!\n"); + MRVL_LOG(INFO, "Parsing config file!\n"); if (cfgnum > 1) { - RTE_LOG(ERR, PMD, "Cannot handle more than one config file!\n"); + MRVL_LOG(ERR, "Cannot handle more than one config file!\n"); goto out_free_kvlist; } else if (cfgnum == 1) { rte_kvargs_process(kvlist, MRVL_CFG_ARG, @@ -2736,7 +2738,7 @@ rte_pmd_mrvl_probe(struct rte_vdev_device *vdev) if (mrvl_dev_num) goto init_devices; - RTE_LOG(INFO, PMD, "Perform MUSDK initializations\n"); + MRVL_LOG(INFO, "Perform MUSDK initializations\n"); /* * ret == -EEXIST is correct, it means DMA * has been already initialized (by another PMD). @@ -2746,13 +2748,13 @@ rte_pmd_mrvl_probe(struct rte_vdev_device *vdev) if (ret != -EEXIST) goto out_free_kvlist; else - RTE_LOG(INFO, PMD, + MRVL_LOG(INFO, "DMA memory has been already initialized by a different driver.\n"); } ret = mrvl_init_pp2(); if (ret) { - RTE_LOG(ERR, PMD, "Failed to init PP!\n"); + MRVL_LOG(ERR, "Failed to init PP!\n"); goto out_deinit_dma; } @@ -2764,7 +2766,7 @@ rte_pmd_mrvl_probe(struct rte_vdev_device *vdev) init_devices: for (i = 0; i < ifnum; i++) { - RTE_LOG(INFO, PMD, "Creating %s\n", ifnames.names[i]); + MRVL_LOG(INFO, "Creating %s\n", ifnames.names[i]); ret = mrvl_eth_dev_create(vdev, ifnames.names[i]); if (ret) goto out_cleanup; @@ -2808,7 +2810,7 @@ rte_pmd_mrvl_remove(struct rte_vdev_device *vdev) if (!name) return -EINVAL; - RTE_LOG(INFO, PMD, "Removing %s\n", name); + MRVL_LOG(INFO, "Removing %s\n", name); RTE_ETH_FOREACH_DEV(i) { /* FIXME: removing all devices! */ char ifname[RTE_ETH_NAME_MAX_LEN]; @@ -2819,7 +2821,7 @@ rte_pmd_mrvl_remove(struct rte_vdev_device *vdev) } if (mrvl_dev_num == 0) { - RTE_LOG(INFO, PMD, "Perform MUSDK deinit\n"); + MRVL_LOG(INFO, "Perform MUSDK deinit\n"); mrvl_deinit_hifs(); mrvl_deinit_pp2(); mv_sys_dma_mem_destroy(); @@ -2835,3 +2837,12 @@ static struct rte_vdev_driver pmd_mrvl_drv = { RTE_PMD_REGISTER_VDEV(net_mvpp2, pmd_mrvl_drv); RTE_PMD_REGISTER_ALIAS(net_mvpp2, eth_mvpp2); + +RTE_INIT(mrvl_init_log); +static void +mrvl_init_log(void) +{ + mrvl_logtype = rte_log_register("pmd.net.mrvl"); + if (mrvl_logtype >= 0) + rte_log_set_level(mrvl_logtype, RTE_LOG_NOTICE); +} diff --git a/drivers/net/mvpp2/mrvl_ethdev.h b/drivers/net/mvpp2/mrvl_ethdev.h index 3a428092dff3..2ba536513504 100644 --- a/drivers/net/mvpp2/mrvl_ethdev.h +++ b/drivers/net/mvpp2/mrvl_ethdev.h @@ -98,4 +98,10 @@ struct mrvl_priv { /** Flow operations forward declaration. */ extern const struct rte_flow_ops mrvl_flow_ops; + +extern int mrvl_logtype; + +#define MRVL_LOG(level, ...) 
\ + rte_log(RTE_LOG_ ## level, mrvl_logtype, "mvrl:" __VA_ARGS__) + #endif /* _MRVL_ETHDEV_H_ */ diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c index 8fd4dbfb1985..bc214fc3b68e 100644 --- a/drivers/net/mvpp2/mrvl_flow.c +++ b/drivers/net/mvpp2/mrvl_flow.c @@ -1054,7 +1054,7 @@ mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow, } if (mask->type) { - RTE_LOG(WARNING, PMD, "eth type mask is ignored\n"); + MRVL_LOG(WARNING, "eth type mask is ignored\n"); ret = mrvl_parse_type(spec, mask, flow); if (ret) goto out; @@ -1099,14 +1099,14 @@ mrvl_parse_vlan(const struct rte_flow_item *item, m = rte_be_to_cpu_16(mask->tci); if (m & MRVL_VLAN_ID_MASK) { - RTE_LOG(WARNING, PMD, "vlan id mask is ignored\n"); + MRVL_LOG(WARNING, "vlan id mask is ignored\n"); ret = mrvl_parse_vlan_id(spec, mask, flow); if (ret) goto out; } if (m & MRVL_VLAN_PRI_MASK) { - RTE_LOG(WARNING, PMD, "vlan pri mask is ignored\n"); + MRVL_LOG(WARNING, "vlan pri mask is ignored\n"); ret = mrvl_parse_vlan_pri(spec, mask, flow); if (ret) goto out; @@ -1172,7 +1172,7 @@ mrvl_parse_ip4(const struct rte_flow_item *item, } if (mask->hdr.next_proto_id) { - RTE_LOG(WARNING, PMD, "next proto id mask is ignored\n"); + MRVL_LOG(WARNING, "next proto id mask is ignored\n"); ret = mrvl_parse_ip4_proto(spec, mask, flow); if (ret) goto out; @@ -1243,7 +1243,7 @@ mrvl_parse_ip6(const struct rte_flow_item *item, } if (mask->hdr.proto) { - RTE_LOG(WARNING, PMD, "next header mask is ignored\n"); + MRVL_LOG(WARNING, "next header mask is ignored\n"); ret = mrvl_parse_ip6_next_hdr(spec, mask, flow); if (ret) goto out; @@ -1292,14 +1292,14 @@ mrvl_parse_tcp(const struct rte_flow_item *item, } if (mask->hdr.src_port) { - RTE_LOG(WARNING, PMD, "tcp sport mask is ignored\n"); + MRVL_LOG(WARNING, "tcp sport mask is ignored\n"); ret = mrvl_parse_tcp_sport(spec, mask, flow); if (ret) goto out; } if (mask->hdr.dst_port) { - RTE_LOG(WARNING, PMD, "tcp dport mask is ignored\n"); + MRVL_LOG(WARNING, "tcp dport mask is ignored\n"); ret = mrvl_parse_tcp_dport(spec, mask, flow); if (ret) goto out; @@ -1343,14 +1343,14 @@ mrvl_parse_udp(const struct rte_flow_item *item, } if (mask->hdr.src_port) { - RTE_LOG(WARNING, PMD, "udp sport mask is ignored\n"); + MRVL_LOG(WARNING, "udp sport mask is ignored\n"); ret = mrvl_parse_udp_sport(spec, mask, flow); if (ret) goto out; } if (mask->hdr.dst_port) { - RTE_LOG(WARNING, PMD, "udp dport mask is ignored\n"); + MRVL_LOG(WARNING, "udp dport mask is ignored\n"); ret = mrvl_parse_udp_dport(spec, mask, flow); if (ret) goto out; @@ -2260,7 +2260,7 @@ mrvl_flow_parse_actions(struct mrvl_priv *priv, * Unknown TC mapping, mapping will not have * a correct queue. */ - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Unknown TC mapping for queue %hu eth%hhu\n", q->index, priv->ppio_id); @@ -2270,7 +2270,7 @@ mrvl_flow_parse_actions(struct mrvl_priv *priv, return -rte_errno; } - RTE_LOG(DEBUG, PMD, + MRVL_LOG(DEBUG, "Action: Assign packets to queue %d, tc:%d, q:%d\n", q->index, priv->rxq_map[q->index].tc, priv->rxq_map[q->index].inq); @@ -2364,7 +2364,7 @@ mrvl_create_cls_table(struct rte_eth_dev *dev, struct rte_flow *first_flow) memset(&priv->cls_tbl_params, 0, sizeof(priv->cls_tbl_params)); priv->cls_tbl_params.type = mrvl_engine_type(first_flow); - RTE_LOG(INFO, PMD, "Setting cls search engine type to %s\n", + MRVL_LOG(INFO, "Setting cls search engine type to %s\n", priv->cls_tbl_params.type == PP2_CLS_TBL_EXACT_MATCH ? 
"exact" : "maskable"); priv->cls_tbl_params.max_num_rules = MRVL_CLS_MAX_NUM_RULES; diff --git a/drivers/net/mvpp2/mrvl_qos.c b/drivers/net/mvpp2/mrvl_qos.c index 70d000cafbd8..d619bf0976e7 100644 --- a/drivers/net/mvpp2/mrvl_qos.c +++ b/drivers/net/mvpp2/mrvl_qos.c @@ -138,7 +138,7 @@ get_outq_cfg(struct rte_cfgfile *file, int port, int outq, cfg->port[port].outq[outq].sched_mode = PP2_PPIO_SCHED_M_WRR; } else { - RTE_LOG(ERR, PMD, "Unknown token: %s\n", entry); + MRVL_LOG(ERR, "Unknown token: %s\n", entry); return -1; } } @@ -159,7 +159,7 @@ get_outq_cfg(struct rte_cfgfile *file, int port, int outq, * global port rate limiting has priority. */ if (cfg->port[port].rate_limit_enable) { - RTE_LOG(WARNING, PMD, "Port %d rate limiting already enabled\n", + MRVL_LOG(WARNING, "Port %d rate limiting already enabled\n", port); return 0; } @@ -340,7 +340,7 @@ parse_tc_cfg(struct rte_cfgfile *file, int port, int tc, RTE_DIM(cfg->port[port].tc[tc].inq), MRVL_PP2_RXQ_MAX); if (n < 0) { - RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", + MRVL_LOG(ERR, "Error %d while parsing: %s\n", n, entry); return n; } @@ -355,7 +355,7 @@ parse_tc_cfg(struct rte_cfgfile *file, int port, int tc, RTE_DIM(cfg->port[port].tc[tc].pcp), MAX_PCP); if (n < 0) { - RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", + MRVL_LOG(ERR, "Error %d while parsing: %s\n", n, entry); return n; } @@ -370,7 +370,7 @@ parse_tc_cfg(struct rte_cfgfile *file, int port, int tc, RTE_DIM(cfg->port[port].tc[tc].dscp), MAX_DSCP); if (n < 0) { - RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", + MRVL_LOG(ERR, "Error %d while parsing: %s\n", n, entry); return n; } @@ -390,7 +390,7 @@ parse_tc_cfg(struct rte_cfgfile *file, int port, int tc, sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_RED))) { cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_RED; } else { - RTE_LOG(ERR, PMD, "Error while parsing: %s\n", entry); + MRVL_LOG(ERR, "Error while parsing: %s\n", entry); return -1; } } @@ -435,7 +435,7 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path, if (n == 0) { /* This is weird, but not bad. */ - RTE_LOG(WARNING, PMD, "Empty configuration file?\n"); + MRVL_LOG(WARNING, "Empty configuration file?\n"); return 0; } @@ -461,7 +461,7 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path, return -1; (*cfg)->port[n].default_tc = (uint8_t)val; } else { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Default Traffic Class required in custom configuration!\n"); return -1; } @@ -489,7 +489,7 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path, sizeof(MRVL_TOK_PLCR_UNIT_PACKETS))) { unit = PP2_CLS_PLCR_PACKETS_TOKEN_UNIT; } else { - RTE_LOG(ERR, PMD, "Unknown token: %s\n", + MRVL_LOG(ERR, "Unknown token: %s\n", entry); return -1; } @@ -511,7 +511,7 @@ mrvl_get_qoscfg(const char *key __rte_unused, const char *path, sizeof(MRVL_TOK_PLCR_COLOR_AWARE))) { mode = PP2_CLS_PLCR_COLOR_AWARE_MODE; } else { - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Error in parsing: %s\n", entry); return -1; @@ -682,7 +682,7 @@ setup_policer(struct mrvl_priv *priv, struct pp2_cls_plcr_params *params) ret = pp2_cls_plcr_init(params, &priv->policer); if (ret) { - RTE_LOG(ERR, PMD, "Failed to setup %s\n", match); + MRVL_LOG(ERR, "Failed to setup %s\n", match); return -1; } @@ -742,7 +742,7 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid, for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) { if (port_cfg->tc[tc].pcps > RTE_DIM(port_cfg->tc[0].pcp)) { /* Better safe than sorry. 
*/ - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Too many PCPs configured in TC %zu!\n", tc); return -1; } @@ -764,7 +764,7 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid, for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) { if (port_cfg->tc[tc].dscps > RTE_DIM(port_cfg->tc[0].dscp)) { /* Better safe than sorry. */ - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Too many DSCPs configured in TC %zu!\n", tc); return -1; } @@ -786,7 +786,7 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid, for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) { if (port_cfg->tc[tc].inqs > RTE_DIM(port_cfg->tc[0].inq)) { /* Overflow. */ - RTE_LOG(ERR, PMD, + MRVL_LOG(ERR, "Too many RX queues configured per TC %zu!\n", tc); return -1; @@ -795,7 +795,7 @@ mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid, uint8_t idx = port_cfg->tc[tc].inq[i]; if (idx > RTE_DIM(priv->rxq_map)) { - RTE_LOG(ERR, PMD, "Bad queue index %d!\n", idx); + MRVL_LOG(ERR, "Bad queue index %d!\n", idx); return -1; } @@ -878,7 +878,7 @@ mrvl_start_qos_mapping(struct mrvl_priv *priv) size_t i; if (priv->ppio == NULL) { - RTE_LOG(ERR, PMD, "ppio must not be NULL here!\n"); + MRVL_LOG(ERR, "ppio must not be NULL here!\n"); return -1; }