From patchwork Mon Mar 5 13:43:52 2018
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 35652
X-Patchwork-Delegate: shahafs@mellanox.com
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Adrien Mazarguil, Yongseok Koh
Date: Mon, 5 Mar 2018 14:43:52 +0100
Message-Id: <49be5a0bad0af5b69d872d1e67e794a7d83e7ece.1520257344.git.nelio.laranjeiro@6wind.com>
X-Mailer: git-send-email 2.11.0
Subject: [dpdk-dev] [PATCH 1/2] net/mlx5: use port id in PMD log

Signed-off-by: Nelio Laranjeiro
Acked-by: Adrien Mazarguil
---
 drivers/net/mlx5/mlx5.c | 79 +++++++++-------
 drivers/net/mlx5/mlx5_ethdev.c | 86 +++++++++--------
 drivers/net/mlx5/mlx5_flow.c | 82 +++++++++-------
 drivers/net/mlx5/mlx5_mac.c | 9 +-
 drivers/net/mlx5/mlx5_mr.c | 58 +++++++-----
 drivers/net/mlx5/mlx5_rxmode.c | 16 ++--
 drivers/net/mlx5/mlx5_rxq.c | 201 ++++++++++++++++++++++------------------
 drivers/net/mlx5/mlx5_rxtx.h | 7 +-
 drivers/net/mlx5/mlx5_socket.c | 47 ++++++----
 drivers/net/mlx5/mlx5_stats.c | 29 +++---
 drivers/net/mlx5/mlx5_trigger.c | 26 +++---
 drivers/net/mlx5/mlx5_txq.c | 125 ++++++++++++++-----------
 drivers/net/mlx5/mlx5_vlan.c | 21 +++--
 13 files changed, 446 insertions(+), 340 deletions(-)
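Note: the conversion is mechanical. Every message that previously printed the device pointer now prints the stable ethdev port ID. A minimal sketch of the pattern (illustrative only, not part of the diff; log_example() is a made-up helper, while DEBUG() is the existing printf-style logging macro from drivers/net/mlx5/mlx5_utils.h):

#include <rte_ethdev.h>   /* struct rte_eth_dev */
#include "mlx5_utils.h"   /* printf-style DEBUG() logging macro */

static void
log_example(struct rte_eth_dev *dev)
{
	/* Before: an opaque pointer that differs on every run. */
	DEBUG("%p: some Rx queues still remain", (void *)dev);
	/* After: the port ID, stable and easy to grep across logs. */
	DEBUG("port %u some Rx queues still remain", dev->data->port_id);
}

The pointer value is meaningless to users and changes between runs, while dev->data->port_id matches what applications pass to the ethdev API, so log lines can be correlated with a specific port.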
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 10da7a283..2edd66f2e 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -132,7 +132,6 @@ mlx5_alloc_verbs_buf(size_t size, void *data) ret = rte_malloc_socket(__func__, size, alignment, socket); if (!ret && size) rte_errno = ENOMEM; - DEBUG("Extern alloc size: %lu, align: %lu: %p", size, alignment, ret); return ret; } @@ -148,7 +147,6 @@ static void mlx5_free_verbs_buf(void *ptr, void *data __rte_unused) { assert(data != NULL); - DEBUG("Extern free request: %p", ptr); rte_free(ptr); } @@ -167,8 +165,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) unsigned int i; int ret; - DEBUG("%p: closing device \"%s\"", - (void *)dev, + DEBUG("port %u closing device \"%s\"", + dev->data->port_id, ((priv->ctx != NULL) ? priv->ctx->device->name : "")); /* In case mlx5_dev_stop() has not been called.
*/ mlx5_dev_interrupt_handler_uninstall(dev); @@ -206,28 +204,35 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_socket_uninit(dev); ret = mlx5_hrxq_ibv_verify(dev); if (ret) - WARN("%p: some Hash Rx queue still remain", (void *)dev); + WARN("port %u some hash Rx queues still remain", + dev->data->port_id); ret = mlx5_ind_table_ibv_verify(dev); if (ret) - WARN("%p: some Indirection table still remain", (void *)dev); + WARN("port %u some indirection tables still remain", + dev->data->port_id); ret = mlx5_rxq_ibv_verify(dev); if (ret) - WARN("%p: some Verbs Rx queue still remain", (void *)dev); + WARN("port %u some Verbs Rx queues still remain", + dev->data->port_id); ret = mlx5_rxq_verify(dev); if (ret) - WARN("%p: some Rx Queues still remain", (void *)dev); + WARN("port %u some Rx queues still remain", + dev->data->port_id); ret = mlx5_txq_ibv_verify(dev); if (ret) - WARN("%p: some Verbs Tx queue still remain", (void *)dev); + WARN("port %u some Verbs Tx queues still remain", + dev->data->port_id); ret = mlx5_txq_verify(dev); if (ret) - WARN("%p: some Tx Queues still remain", (void *)dev); + WARN("port %u some Tx queues still remain", + dev->data->port_id); ret = mlx5_flow_verify(dev); if (ret) - WARN("%p: some flows still remain", (void *)dev); + WARN("port %u some flows still remain", dev->data->port_id); ret = mlx5_mr_verify(dev); if (ret) - WARN("%p: some Memory Region still remain", (void *)dev); + WARN("port %u some memory regions still remain", + dev->data->port_id); memset(priv, 0, sizeof(*priv)); } @@ -503,15 +508,17 @@ mlx5_uar_init_primary(struct rte_eth_dev *dev) addr = mmap(addr, MLX5_UAR_SIZE, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - ERROR("Failed to reserve UAR address space, please adjust " - "MLX5_UAR_SIZE or try --base-virtaddr"); + ERROR("port %u failed to reserve UAR address space, please" + " adjust MLX5_UAR_SIZE or try --base-virtaddr", + dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } /* Accept either same addr or a new addr returned from mmap if target * range occupied. */ - INFO("Reserved UAR address space: %p", addr); + INFO("port %u reserved UAR address space: %p", dev->data->port_id, + addr); priv->uar_base = addr; /* for primary and secondary UAR re-mmap. */ uar_base = addr; /* process local, don't reserve again.
*/ return 0; @@ -542,20 +549,21 @@ mlx5_uar_init_secondary(struct rte_eth_dev *dev) addr = mmap(priv->uar_base, MLX5_UAR_SIZE, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - ERROR("UAR mmap failed: %p size: %llu", - priv->uar_base, MLX5_UAR_SIZE); + ERROR("port %u UAR mmap failed: %p size: %llu", + dev->data->port_id, priv->uar_base, MLX5_UAR_SIZE); rte_errno = ENXIO; return -rte_errno; } if (priv->uar_base != addr) { - ERROR("UAR address %p size %llu occupied, please adjust " + ERROR("port %u UAR address %p size %llu occupied, please adjust " "MLX5_UAR_OFFSET or try EAL parameter --base-virtaddr", - priv->uar_base, MLX5_UAR_SIZE); + dev->data->port_id, priv->uar_base, MLX5_UAR_SIZE); rte_errno = ENXIO; return -rte_errno; } uar_base = addr; /* process local, don't reserve again */ - INFO("Reserved UAR address space: %p", addr); + INFO("port %u reserved UAR address space: %p", dev->data->port_id, + addr); return 0; } @@ -659,7 +667,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, mlx5_glue->dv_query_device(attr_ctx, &attrs_out); if (attrs_out.flags & MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED) { if (attrs_out.flags & MLX5DV_CONTEXT_FLAGS_ENHANCED_MPW) { - DEBUG("Enhanced MPW is supported"); + DEBUG("enhanced MPW is supported"); mps = MLX5_MPW_ENHANCED; } else { DEBUG("MPW is supported"); @@ -681,9 +689,9 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, (attrs_out.tunnel_offloads_caps & MLX5DV_RAW_PACKET_CAP_TUNNELED_OFFLOAD_GRE)); } - DEBUG("Tunnel offloading is %ssupported", tunnel_en ? "" : "not "); + DEBUG("tunnel offloading is %ssupported", tunnel_en ? "" : "not "); #else - WARN("Tunnel offloading disabled due to old OFED/rdma-core version"); + WARN("tunnel offloading disabled due to old OFED/rdma-core version"); #endif if (mlx5_glue->query_device_ex(attr_ctx, NULL, &device_attr)) { err = errno; @@ -829,7 +837,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, if (config.ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512) config.ind_table_max_size = ETH_RSS_RETA_SIZE_512; - DEBUG("maximum RX indirection table size is %u", + DEBUG("maximum Rx indirection table size is %u", config.ind_table_max_size); config.hw_vlan_strip = !!(device_attr_ex.raw_packet_caps & IBV_RAW_PACKET_CAP_CVLAN_STRIPPING); @@ -844,7 +852,7 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, #ifdef HAVE_IBV_WQ_FLAG_RX_END_PADDING config.hw_padding = !!device_attr_ex.rx_pad_end_addr_align; #endif - DEBUG("hardware RX end alignment padding is %ssupported", + DEBUG("hardware Rx end alignment padding is %ssupported", (config.hw_padding ? "" : "not ")); config.tso = ((device_attr_ex.tso_caps.max_tso > 0) && (device_attr_ex.tso_caps.supported_qpts & @@ -858,8 +866,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, err = ENOTSUP; goto port_error; } - INFO("%sMPS is %s", - config.mps == MLX5_MPW_ENHANCED ? "Enhanced " : "", + INFO("%s MPS is %s", + config.mps == MLX5_MPW_ENHANCED ? "enhanced " : "", config.mps != MLX5_MPW_DISABLED ? "enabled" : "disabled"); if (config.cqe_comp && !cqe_comp) { WARN("Rx CQE compression isn't supported"); @@ -882,13 +890,14 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, goto port_error; /* Configure the first MAC address by default. */ if (mlx5_get_mac(eth_dev, &mac.addr_bytes)) { - ERROR("cannot get MAC address, is mlx5_en loaded?" - " (errno: %s)", strerror(errno)); + ERROR("port %u cannot get MAC address, is mlx5_en" + " loaded? 
(errno: %s)", eth_dev->data->port_id, + strerror(errno)); err = ENODEV; goto port_error; } INFO("port %u MAC address is %02x:%02x:%02x:%02x:%02x:%02x", - priv->port, + eth_dev->data->port_id, mac.addr_bytes[0], mac.addr_bytes[1], mac.addr_bytes[2], mac.addr_bytes[3], mac.addr_bytes[4], mac.addr_bytes[5]); @@ -898,16 +907,17 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, if (mlx5_get_ifname(eth_dev, &ifname) == 0) DEBUG("port %u ifname is \"%s\"", - priv->port, ifname); + eth_dev->data->port_id, ifname); else - DEBUG("port %u ifname is unknown", priv->port); + DEBUG("port %u ifname is unknown", + eth_dev->data->port_id); } #endif /* Get actual MTU if possible. */ err = mlx5_get_mtu(eth_dev, &priv->mtu); if (err) goto port_error; - DEBUG("port %u MTU is %u", priv->port, priv->mtu); + DEBUG("port %u MTU is %u", eth_dev->data->port_id, priv->mtu); /* * Initialize burst functions to prevent crashes before link-up. */ @@ -928,7 +938,8 @@ mlx5_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, MLX5DV_CTX_ATTR_BUF_ALLOCATORS, (void *)((uintptr_t)&alctr)); /* Bring Ethernet device up. */ - DEBUG("forcing Ethernet interface up"); + DEBUG("port %u forcing Ethernet interface up", + eth_dev->data->port_id); mlx5_set_flags(eth_dev, ~IFF_UP, IFF_UP); /* Store device configuration on private structure. */ priv->config = config; diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c index d7e85577f..5baf843e3 100644 --- a/drivers/net/mlx5/mlx5_ethdev.c +++ b/drivers/net/mlx5/mlx5_ethdev.c @@ -313,16 +313,16 @@ mlx5_dev_configure(struct rte_eth_dev *dev) int ret = 0; if ((tx_offloads & supp_tx_offloads) != tx_offloads) { - ERROR("Some Tx offloads are not supported " + ERROR("port %u some Tx offloads are not supported " "requested 0x%" PRIx64 " supported 0x%" PRIx64, - tx_offloads, supp_tx_offloads); + dev->data->port_id, tx_offloads, supp_tx_offloads); rte_errno = ENOTSUP; return -rte_errno; } if ((rx_offloads & supp_rx_offloads) != rx_offloads) { - ERROR("Some Rx offloads are not supported " + ERROR("port %u some Rx offloads are not supported " "requested 0x%" PRIx64 " supported 0x%" PRIx64, - rx_offloads, supp_rx_offloads); + dev->data->port_id, rx_offloads, supp_rx_offloads); rte_errno = ENOTSUP; return -rte_errno; } @@ -337,7 +337,8 @@ mlx5_dev_configure(struct rte_eth_dev *dev) rte_realloc(priv->rss_conf.rss_key, rss_hash_default_key_len, 0); if (!priv->rss_conf.rss_key) { - ERROR("cannot allocate RSS hash key memory (%u)", rxqs_n); + ERROR("port %u cannot allocate RSS hash key memory (%u)", + dev->data->port_id, rxqs_n); rte_errno = ENOMEM; return -rte_errno; } @@ -351,19 +352,20 @@ mlx5_dev_configure(struct rte_eth_dev *dev) priv->rxqs = (void *)dev->data->rx_queues; priv->txqs = (void *)dev->data->tx_queues; if (txqs_n != priv->txqs_n) { - INFO("%p: TX queues number update: %u -> %u", - (void *)dev, priv->txqs_n, txqs_n); + INFO("port %u Tx queues number update: %u -> %u", + dev->data->port_id, priv->txqs_n, txqs_n); priv->txqs_n = txqs_n; } if (rxqs_n > priv->config.ind_table_max_size) { - ERROR("cannot handle this many RX queues (%u)", rxqs_n); + ERROR("port %u cannot handle this many Rx queues (%u)", + dev->data->port_id, rxqs_n); rte_errno = EINVAL; return -rte_errno; } if (rxqs_n == priv->rxqs_n) return 0; - INFO("%p: RX queues number update: %u -> %u", - (void *)dev, priv->rxqs_n, rxqs_n); + INFO("port %u Rx queues number update: %u -> %u", + dev->data->port_id, priv->rxqs_n, rxqs_n); priv->rxqs_n = rxqs_n; /* If the requested number of RX queues is 
not a power of two, use the * maximum indirection table size for better balancing. @@ -489,7 +491,8 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) ret = mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr); if (ret) { - WARN("ioctl(SIOCGIFFLAGS) failed: %s", strerror(rte_errno)); + WARN("port %u ioctl(SIOCGIFFLAGS) failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } memset(&dev_link, 0, sizeof(dev_link)); @@ -498,8 +501,8 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev) ifr.ifr_data = (void *)&edata; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - WARN("ioctl(SIOCETHTOOL, ETHTOOL_GSET) failed: %s", - strerror(rte_errno)); + WARN("port %u ioctl(SIOCETHTOOL, ETHTOOL_GSET) failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } link_speed = ethtool_cmd_speed(&edata); @@ -555,7 +558,8 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) ret = mlx5_ifreq(dev, SIOCGIFFLAGS, &ifr); if (ret) { - WARN("ioctl(SIOCGIFFLAGS) failed: %s", strerror(rte_errno)); + WARN("port %u ioctl(SIOCGIFFLAGS) failed: %s", + dev->data->port_id, strerror(rte_errno)); return ret; } memset(&dev_link, 0, sizeof(dev_link)); @@ -564,8 +568,8 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) ifr.ifr_data = (void *)&gcmd; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - DEBUG("ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS) failed: %s", - strerror(rte_errno)); + DEBUG("port %u ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS)" + " failed: %s", dev->data->port_id, strerror(rte_errno)); return ret; } gcmd.link_mode_masks_nwords = -gcmd.link_mode_masks_nwords; @@ -579,8 +583,8 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev) ifr.ifr_data = (void *)ecmd; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - DEBUG("ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS) failed: %s", - strerror(rte_errno)); + DEBUG("port %u ioctl(SIOCETHTOOL, ETHTOOL_GLINKSETTINGS)" + " failed: %s", dev->data->port_id, strerror(rte_errno)); return ret; } dev_link.link_speed = ecmd->speed; @@ -651,15 +655,14 @@ mlx5_link_start(struct rte_eth_dev *dev) dev->rx_pkt_burst = mlx5_select_rx_function(dev); ret = mlx5_traffic_enable(dev); if (ret) { - ERROR("%p: error occurred while configuring control flows: %s", - (void *)dev, strerror(rte_errno)); + ERROR("port %u error occurred while configuring control flows:" + " %s", dev->data->port_id, strerror(rte_errno)); return; } ret = mlx5_flow_start(dev, &priv->flows); - if (ret) { - ERROR("%p: error occurred while configuring flows: %s", - (void *)dev, strerror(rte_errno)); - } + if (ret) + ERROR("port %u error occurred while configuring flows: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -780,7 +783,7 @@ mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) return ret; if (kern_mtu == mtu) { priv->mtu = mtu; - DEBUG("adapter port %u MTU set to %u", priv->port, mtu); + DEBUG("port %u adapter MTU set to %u", dev->data->port_id, mtu); return 0; } rte_errno = EAGAIN; @@ -810,8 +813,8 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) ifr.ifr_data = (void *)ðpause; ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr); if (ret) { - WARN("ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed: %s", - strerror(rte_errno)); + WARN("port %u ioctl(SIOCETHTOOL, ETHTOOL_GPAUSEPARAM) failed:" + " %s", dev->data->port_id, strerror(rte_errno)); return ret; } fc_conf->autoneg = ethpause.autoneg; @@ -861,9 +864,8 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) ethpause.tx_pause = 0; ret = mlx5_ifreq(dev, SIOCETHTOOL, 
&ifr); if (ret) { - WARN("ioctl(SIOCETHTOOL, ETHTOOL_SPAUSEPARAM)" - " failed: %s", - strerror(rte_errno)); + WARN("port %u ioctl(SIOCETHTOOL, ETHTOOL_SPAUSEPARAM)" + " failed: %s", dev->data->port_id, strerror(rte_errno)); return ret; } return 0; @@ -992,8 +994,8 @@ mlx5_dev_status_handler(struct rte_eth_dev *dev) dev->data->dev_conf.intr_conf.rmv == 1) ret |= (1 << RTE_ETH_EVENT_INTR_RMV); else - DEBUG("event type %d on port %d not handled", - event.event_type, event.element.port_num); + DEBUG("port %u event type %d not handled", + dev->data->port_id, event.event_type); mlx5_glue->ack_async_event(&event); } if (ret & (1 << RTE_ETH_EVENT_INTR_LSC)) @@ -1101,7 +1103,8 @@ mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev) flags = fcntl(priv->ctx->async_fd, F_GETFL); ret = fcntl(priv->ctx->async_fd, F_SETFL, flags | O_NONBLOCK); if (ret) { - INFO("failed to change file descriptor async event queue"); + INFO("port %u failed to change file descriptor async event" + " queue", dev->data->port_id); dev->data->dev_conf.intr_conf.lsc = 0; dev->data->dev_conf.intr_conf.rmv = 0; } @@ -1114,7 +1117,8 @@ mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev) } ret = mlx5_socket_init(dev); if (ret) - ERROR("cannot initialise socket: %s", strerror(rte_errno)); + ERROR("port %u cannot initialise socket: %s", + dev->data->port_id, strerror(rte_errno)); else if (priv->primary_socket) { priv->intr_handle_socket.fd = priv->primary_socket; priv->intr_handle_socket.type = RTE_INTR_HANDLE_EXT; @@ -1184,17 +1188,20 @@ mlx5_select_tx_function(struct rte_eth_dev *dev) tx_pkt_burst = mlx5_tx_burst_raw_vec; else tx_pkt_burst = mlx5_tx_burst_vec; - DEBUG("selected Enhanced MPW TX vectorized function"); + DEBUG("port %u selected enhanced MPW Tx vectorized" + " function", dev->data->port_id); } else { tx_pkt_burst = mlx5_tx_burst_empw; - DEBUG("selected Enhanced MPW TX function"); + DEBUG("port %u selected enhanced MPW Tx function", + dev->data->port_id); } } else if (config->mps && (config->txq_inline > 0)) { tx_pkt_burst = mlx5_tx_burst_mpw_inline; - DEBUG("selected MPW inline TX function"); + DEBUG("port %u selected MPW inline Tx function", + dev->data->port_id); } else if (config->mps) { tx_pkt_burst = mlx5_tx_burst_mpw; - DEBUG("selected MPW TX function"); + DEBUG("port %u selected MPW Tx function", dev->data->port_id); } return tx_pkt_burst; } @@ -1216,7 +1223,8 @@ mlx5_select_rx_function(struct rte_eth_dev *dev) assert(dev != NULL); if (mlx5_check_vec_rx_support(dev) > 0) { rx_pkt_burst = mlx5_rx_burst_vec; - DEBUG("selected RX vectorized function"); + DEBUG("port %u selected Rx vectorized function", + dev->data->port_id); } return rx_pkt_burst; } diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 1435516dc..634f90a49 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -883,7 +883,7 @@ mlx5_flow_convert_allocate(unsigned int priority, rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "cannot allocate verbs spec attributes."); + "cannot allocate verbs spec attributes"); return NULL; } ibv_attr->priority = priority; @@ -1150,11 +1150,11 @@ mlx5_flow_convert(struct rte_eth_dev *dev, } } rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot allocate verbs spec attributes."); + NULL, "cannot allocate verbs spec attributes"); return -rte_errno; exit_count_error: rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "cannot create counter."); + NULL, "cannot create
counter"); return -rte_errno; } @@ -1815,7 +1815,8 @@ mlx5_flow_create_action_queue(struct rte_eth_dev *dev, NULL, "flow rule creation failure"); goto error; } - DEBUG("%p type %d QP %p ibv_flow %p", + DEBUG("port %u %p type %d QP %p ibv_flow %p", + dev->data->port_id, (void *)flow, i, (void *)flow->frxq[i].hrxq, (void *)flow->frxq[i].ibv_flow); @@ -1913,10 +1914,11 @@ mlx5_flow_list_create(struct rte_eth_dev *dev, if (ret) goto exit; TAILQ_INSERT_TAIL(list, flow, next); - DEBUG("Flow created %p", (void *)flow); + DEBUG("port %u flow created %p", dev->data->port_id, (void *)flow); return flow; exit: - ERROR("flow creation error: %s", error->message); + ERROR("port %u flow creation error: %s", dev->data->port_id, + error->message); for (i = 0; i != hash_rxq_init_n; ++i) { if (parser.queue[i].ibv_attr) rte_free(parser.queue[i].ibv_attr); @@ -2034,7 +2036,7 @@ mlx5_flow_list_destroy(struct rte_eth_dev *dev, struct mlx5_flows *list, flow->cs = NULL; } TAILQ_REMOVE(list, flow, next); - DEBUG("Flow destroyed %p", (void *)flow); + DEBUG("port %u flow destroyed %p", dev->data->port_id, (void *)flow); rte_free(flow); } @@ -2076,13 +2078,15 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) assert(priv->ctx); fdq = rte_calloc(__func__, 1, sizeof(*fdq), 0); if (!fdq) { - WARN("cannot allocate memory for drop queue"); + WARN("port %u cannot allocate memory for drop queue", + dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } fdq->cq = mlx5_glue->create_cq(priv->ctx, 1, NULL, NULL, 0); if (!fdq->cq) { - WARN("cannot allocate CQ for drop queue"); + WARN("port %u cannot allocate CQ for drop queue", + dev->data->port_id); rte_errno = errno; goto error; } @@ -2096,7 +2100,8 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) .cq = fdq->cq, }); if (!fdq->wq) { - WARN("cannot allocate WQ for drop queue"); + WARN("port %u cannot allocate WQ for drop queue", + dev->data->port_id); rte_errno = errno; goto error; } @@ -2108,7 +2113,8 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) .comp_mask = 0, }); if (!fdq->ind_table) { - WARN("cannot allocate indirection table for drop queue"); + WARN("port %u cannot allocate indirection table for drop" + " queue", dev->data->port_id); rte_errno = errno; goto error; } @@ -2131,7 +2137,8 @@ mlx5_flow_create_drop_queue(struct rte_eth_dev *dev) .pd = priv->pd }); if (!fdq->qp) { - WARN("cannot allocate QP for drop queue"); + WARN("port %u cannot allocate QP for drop queue", + dev->data->port_id); rte_errno = errno; goto error; } @@ -2202,7 +2209,8 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list) claim_zero(mlx5_glue->destroy_flow (flow->frxq[HASH_RXQ_ETH].ibv_flow)); flow->frxq[HASH_RXQ_ETH].ibv_flow = NULL; - DEBUG("Flow %p removed", (void *)flow); + DEBUG("port %u flow %p removed", dev->data->port_id, + (void *)flow); /* Next flow. 
*/ continue; } @@ -2235,7 +2243,8 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list) mlx5_hrxq_release(dev, flow->frxq[i].hrxq); flow->frxq[i].hrxq = NULL; } - DEBUG("Flow %p removed", (void *)flow); + DEBUG("port %u flow %p removed", dev->data->port_id, + (void *)flow); } } @@ -2265,12 +2274,14 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) (priv->flow_drop_queue->qp, flow->frxq[HASH_RXQ_ETH].ibv_attr); if (!flow->frxq[HASH_RXQ_ETH].ibv_flow) { - DEBUG("Flow %p cannot be applied", + DEBUG("port %u flow %p cannot be applied", + dev->data->port_id, (void *)flow); rte_errno = EINVAL; return -rte_errno; } - DEBUG("Flow %p applied", (void *)flow); + DEBUG("port %u flow %p applied", dev->data->port_id, + (void *)flow); /* Next flow. */ continue; } @@ -2292,8 +2303,8 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) (*flow->queues), flow->queues_n); if (!flow->frxq[i].hrxq) { - DEBUG("Flow %p cannot be applied", - (void *)flow); + DEBUG("port %u flow %p cannot be applied", + dev->data->port_id, (void *)flow); rte_errno = EINVAL; return -rte_errno; } @@ -2302,12 +2313,13 @@ mlx5_flow_start(struct rte_eth_dev *dev, struct mlx5_flows *list) mlx5_glue->create_flow(flow->frxq[i].hrxq->qp, flow->frxq[i].ibv_attr); if (!flow->frxq[i].ibv_flow) { - DEBUG("Flow %p cannot be applied", - (void *)flow); + DEBUG("port %u flow %p cannot be applied", + dev->data->port_id, (void *)flow); rte_errno = EINVAL; return -rte_errno; } - DEBUG("Flow %p applied", (void *)flow); + DEBUG("port %u flow %p applied", + dev->data->port_id, (void *)flow); } if (!flow->mark) continue; @@ -2333,8 +2345,8 @@ mlx5_flow_verify(struct rte_eth_dev *dev) int ret = 0; TAILQ_FOREACH(flow, &priv->flows, next) { - DEBUG("%p: flow %p still referenced", (void *)dev, - (void *)flow); + DEBUG("port %u flow %p still referenced", + dev->data->port_id, (void *)flow); ++ret; } return ret; @@ -2605,7 +2617,8 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, /* Validate queue number. 
*/ if (fdir_filter->action.rx_queue >= priv->rxqs_n) { - ERROR("invalid queue number %d", fdir_filter->action.rx_queue); + ERROR("port %u invalid queue number %d", + dev->data->port_id, fdir_filter->action.rx_queue); rte_errno = EINVAL; return -rte_errno; } @@ -2628,7 +2641,9 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, }; break; default: - ERROR("invalid behavior %d", fdir_filter->action.behavior); + ERROR("port %u invalid behavior %d", + dev->data->port_id, + fdir_filter->action.behavior); rte_errno = ENOTSUP; return -rte_errno; } @@ -2764,7 +2779,8 @@ mlx5_fdir_filter_convert(struct rte_eth_dev *dev, }; break; default: - ERROR("invalid flow type%d", fdir_filter->input.flow_type); + ERROR("port %u invalid flow type %d", + dev->data->port_id, fdir_filter->input.flow_type); rte_errno = ENOTSUP; return -rte_errno; } @@ -2813,7 +2829,8 @@ mlx5_fdir_filter_add(struct rte_eth_dev *dev, attributes.items, attributes.actions, &error); if (flow) { - DEBUG("FDIR created %p", (void *)flow); + DEBUG("port %u FDIR created %p", dev->data->port_id, + (void *)flow); return 0; } return -rte_errno; @@ -3007,8 +3024,8 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, return 0; if (fdir_mode != RTE_FDIR_MODE_PERFECT && fdir_mode != RTE_FDIR_MODE_PERFECT_MAC_VLAN) { - ERROR("%p: flow director mode %d not supported", - (void *)dev, fdir_mode); + ERROR("port %u flow director mode %d not supported", + dev->data->port_id, fdir_mode); rte_errno = EINVAL; return -rte_errno; } @@ -3026,7 +3043,8 @@ mlx5_fdir_ctrl_func(struct rte_eth_dev *dev, enum rte_filter_op filter_op, mlx5_fdir_info_get(dev, arg); break; default: - DEBUG("%p: unknown operation %u", (void *)dev, filter_op); + DEBUG("port %u unknown operation %u", dev->data->port_id, + filter_op); rte_errno = EINVAL; return -rte_errno; } @@ -3065,8 +3083,8 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev, case RTE_ETH_FILTER_FDIR: return mlx5_fdir_ctrl_func(dev, filter_op, arg); default: - ERROR("%p: filter type (%d) not supported", - (void *)dev, filter_type); + ERROR("port %u filter type (%d) not supported", + dev->data->port_id, filter_type); rte_errno = ENOTSUP; return -rte_errno; } diff --git a/drivers/net/mlx5/mlx5_mac.c b/drivers/net/mlx5/mlx5_mac.c index ba54c055e..e2f5b5b3a 100644 --- a/drivers/net/mlx5/mlx5_mac.c +++ b/drivers/net/mlx5/mlx5_mac.c @@ -73,8 +73,8 @@ mlx5_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index) int ret = mlx5_traffic_restart(dev); if (ret) - ERROR("%p cannot remove mac address: %s", (void *)dev, - strerror(rte_errno)); + ERROR("port %u cannot remove MAC address: %s", + dev->data->port_id, strerror(rte_errno)); } } @@ -130,8 +130,9 @@ mlx5_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr) { int ret; - DEBUG("%p: setting primary MAC address", (void *)dev); + DEBUG("port %u setting primary MAC address", dev->data->port_id); ret = mlx5_mac_addr_add(dev, mac_addr, 0, 0); if (ret) - ERROR("cannot set mac address: %s", strerror(rte_errno)); + ERROR("port %u cannot set MAC address: %s", + dev->data->port_id, strerror(rte_errno)); } diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c index 884ac33eb..1de4d6278 100644 --- a/drivers/net/mlx5/mlx5_mr.c +++ b/drivers/net/mlx5/mlx5_mr.c @@ -104,15 +104,16 @@ mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, rte_spinlock_lock(&txq_ctrl->priv->mr_lock); /* Add a new entry, register MR first.
*/ - DEBUG("%p: discovered new memory pool \"%s\" (%p)", - (void *)txq_ctrl, mp->name, (void *)mp); + DEBUG("port %u discovered new memory pool \"%s\" (%p)", + txq_ctrl->priv->dev->data->port_id, mp->name, (void *)mp); dev = txq_ctrl->priv->dev; mr = mlx5_mr_get(dev, mp); if (mr == NULL) { if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - DEBUG("Using unregistered mempool 0x%p(%s) in " + DEBUG("port %u using unregistered mempool 0x%p(%s) in " "secondary process, please create mempool before " " rte_eth_dev_start()", + txq_ctrl->priv->dev->data->port_id, (void *)mp, mp->name); rte_spinlock_unlock(&txq_ctrl->priv->mr_lock); rte_errno = ENOTSUP; @@ -121,15 +122,17 @@ mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, mr = mlx5_mr_new(dev, mp); } if (unlikely(mr == NULL)) { - DEBUG("%p: unable to configure MR, ibv_reg_mr() failed.", - (void *)txq_ctrl); rte_spinlock_unlock(&txq_ctrl->priv->mr_lock); + DEBUG("port %u unable to configure memory region, ibv_reg_mr()" + " failed", + txq_ctrl->priv->dev->data->port_id); return NULL; } if (unlikely(idx == RTE_DIM(txq->mp2mr))) { /* Table is full, remove oldest entry. */ - DEBUG("%p: MR <-> MP table full, dropping oldest entry.", - (void *)txq_ctrl); + DEBUG("port %u memroy region <-> memory pool table full, " + " dropping oldest entry", + txq_ctrl->priv->dev->data->port_id); --idx; mlx5_mr_release(txq->mp2mr[0]); memmove(&txq->mp2mr[0], &txq->mp2mr[1], @@ -137,8 +140,8 @@ mlx5_txq_mp2mr_reg(struct mlx5_txq_data *txq, struct rte_mempool *mp, } /* Store the new entry. */ txq_ctrl->txq.mp2mr[idx] = mr; - DEBUG("%p: new MR lkey for MP \"%s\" (%p): 0x%08" PRIu32, - (void *)txq_ctrl, mp->name, (void *)mp, + DEBUG("port %u new memory region lkey for MP \"%s\" (%p): 0x%08" PRIu32, + txq_ctrl->priv->dev->data->port_id, mp->name, (void *)mp, txq_ctrl->txq.mp2mr[idx]->lkey); rte_spinlock_unlock(&txq_ctrl->priv->mr_lock); return mr; @@ -206,7 +209,8 @@ mlx5_mp2mr_iter(struct rte_mempool *mp, void *arg) } mr = mlx5_mr_new(priv->dev, mp); if (!mr) - ERROR("cannot create memory region: %s", strerror(rte_errno)); + ERROR("port %u cannot create memory region: %s", + priv->dev->data->port_id, strerror(rte_errno)); } /** @@ -233,18 +237,20 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) mr = rte_zmalloc_socket(__func__, sizeof(*mr), 0, mp->socket_id); if (!mr) { - DEBUG("unable to configure MR, ibv_reg_mr() failed."); + DEBUG("port %u unable to configure memory region, ibv_reg_mr()" + " failed", + dev->data->port_id); rte_errno = ENOMEM; return NULL; } if (mlx5_check_mempool(mp, &start, &end) != 0) { - ERROR("mempool %p: not virtually contiguous", - (void *)mp); + ERROR("port %u mempool %p: not virtually contiguous", + dev->data->port_id, (void *)mp); rte_errno = ENOMEM; return NULL; } - DEBUG("mempool %p area start=%p end=%p size=%zu", - (void *)mp, (void *)start, (void *)end, + DEBUG("port %u mempool %p area start=%p end=%p size=%zu", + dev->data->port_id, (void *)mp, (void *)start, (void *)end, (size_t)(end - start)); /* Save original addresses for exact MR lookup. 
*/ mr->start = start; @@ -260,8 +266,9 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) if ((end > addr) && (end < addr + len)) end = RTE_ALIGN_CEIL(end, align); } - DEBUG("mempool %p using start=%p end=%p size=%zu for MR", - (void *)mp, (void *)start, (void *)end, + DEBUG("port %u mempool %p using start=%p end=%p size=%zu for memory" + " region", + dev->data->port_id, (void *)mp, (void *)start, (void *)end, (size_t)(end - start)); mr->mr = mlx5_glue->reg_mr(priv->pd, (void *)start, end - start, IBV_ACCESS_LOCAL_WRITE); @@ -272,8 +279,8 @@ mlx5_mr_new(struct rte_eth_dev *dev, struct rte_mempool *mp) mr->mp = mp; mr->lkey = rte_cpu_to_be_32(mr->mr->lkey); rte_atomic32_inc(&mr->refcnt); - DEBUG("%p: new Memory Region %p refcnt: %d", (void *)dev, - (void *)mr, rte_atomic32_read(&mr->refcnt)); + DEBUG("port %u new memory region %p refcnt: %d", + dev->data->port_id, (void *)mr, rte_atomic32_read(&mr->refcnt)); LIST_INSERT_HEAD(&priv->mr, mr, next); return mr; } @@ -301,8 +308,9 @@ mlx5_mr_get(struct rte_eth_dev *dev, struct rte_mempool *mp) LIST_FOREACH(mr, &priv->mr, next) { if (mr->mp == mp) { rte_atomic32_inc(&mr->refcnt); - DEBUG("Memory Region %p refcnt: %d", - (void *)mr, rte_atomic32_read(&mr->refcnt)); + DEBUG("port %u memory region %p refcnt: %d", + dev->data->port_id, (void *)mr, + rte_atomic32_read(&mr->refcnt)); return mr; } } @@ -322,8 +330,8 @@ int mlx5_mr_release(struct mlx5_mr *mr) { assert(mr); - DEBUG("Memory Region %p refcnt: %d", - (void *)mr, rte_atomic32_read(&mr->refcnt)); + DEBUG("memory region %p refcnt: %d", (void *)mr, + rte_atomic32_read(&mr->refcnt)); if (rte_atomic32_dec_and_test(&mr->refcnt)) { claim_zero(mlx5_glue->dereg_mr(mr->mr)); LIST_REMOVE(mr, next); @@ -350,8 +358,8 @@ mlx5_mr_verify(struct rte_eth_dev *dev) struct mlx5_mr *mr; LIST_FOREACH(mr, &priv->mr, next) { - DEBUG("%p: mr %p still referenced", (void *)dev, - (void *)mr); + DEBUG("port %u memory region %p still referenced", + dev->data->port_id, (void *)mr); ++ret; } return ret; diff --git a/drivers/net/mlx5/mlx5_rxmode.c b/drivers/net/mlx5/mlx5_rxmode.c index 0c1e9eb2a..8cc5667a4 100644 --- a/drivers/net/mlx5/mlx5_rxmode.c +++ b/drivers/net/mlx5/mlx5_rxmode.c @@ -37,8 +37,8 @@ mlx5_promiscuous_enable(struct rte_eth_dev *dev) dev->data->promiscuous = 1; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("%p cannot enable promiscuous mode: %s", (void *)dev, - strerror(rte_errno)); + ERROR("port %u cannot enable promiscuous mode: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -55,8 +55,8 @@ mlx5_promiscuous_disable(struct rte_eth_dev *dev) dev->data->promiscuous = 0; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("%p cannot disable promiscuous mode: %s", (void *)dev, - strerror(rte_errno)); + ERROR("port %u cannot disable promiscuous mode: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -73,8 +73,8 @@ mlx5_allmulticast_enable(struct rte_eth_dev *dev) dev->data->all_multicast = 1; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("%p cannot enable allmulicast mode: %s", (void *)dev, - strerror(rte_errno)); + ERROR("port %u cannot enable allmulticast mode: %s", + dev->data->port_id, strerror(rte_errno)); } /** @@ -91,6 +91,6 @@ mlx5_allmulticast_disable(struct rte_eth_dev *dev) dev->data->all_multicast = 0; ret = mlx5_traffic_restart(dev); if (ret) - ERROR("%p cannot disable allmulicast mode: %s", (void *)dev, - strerror(rte_errno)); + ERROR("port %u cannot disable allmulticast mode: %s", + dev->data->port_id, strerror(rte_errno)); } diff --git
a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index 477aa2631..9f54730e9 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -77,7 +77,8 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp); if (buf == NULL) { - ERROR("%p: empty mbuf pool", (void *)rxq_ctrl); + ERROR("port %u empty mbuf pool", + rxq_ctrl->priv->dev->data->port_id); rte_errno = ENOMEM; goto error; } @@ -118,8 +119,9 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) for (j = 0; j < MLX5_VPMD_DESCS_PER_LOOP; ++j) (*rxq->elts)[elts_n + j] = &rxq->fake_mbuf; } - DEBUG("%p: allocated and configured %u segments (max %u packets)", - (void *)rxq_ctrl, elts_n, elts_n / (1 << rxq_ctrl->rxq.sges_n)); + DEBUG("port %u Rx queue %u allocated and configured %u segments" + " (max %u packets)", rxq_ctrl->priv->dev->data->port_id, + rxq_ctrl->idx, elts_n, elts_n / (1 << rxq_ctrl->rxq.sges_n)); return 0; error: err = rte_errno; /* Save rte_errno before cleanup. */ @@ -129,7 +131,8 @@ rxq_alloc_elts(struct mlx5_rxq_ctrl *rxq_ctrl) rte_pktmbuf_free_seg((*rxq_ctrl->rxq.elts)[i]); (*rxq_ctrl->rxq.elts)[i] = NULL; } - DEBUG("%p: failed, freed everything", (void *)rxq_ctrl); + DEBUG("port %u Rx queue %u failed, freed everything", + rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); rte_errno = err; /* Restore rte_errno. */ return -rte_errno; } @@ -149,7 +152,8 @@ rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl) uint16_t used = q_n - (rxq->rq_ci - rxq->rq_pi); uint16_t i; - DEBUG("%p: freeing WRs", (void *)rxq_ctrl); + DEBUG("port %u Rx queue %u freeing WRs", + rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); if (rxq->elts == NULL) return; /** @@ -179,7 +183,8 @@ rxq_free_elts(struct mlx5_rxq_ctrl *rxq_ctrl) void mlx5_rxq_cleanup(struct mlx5_rxq_ctrl *rxq_ctrl) { - DEBUG("cleaning up %p", (void *)rxq_ctrl); + DEBUG("port %u cleaning up Rx queue %u", + rxq_ctrl->priv->dev->data->port_id, rxq_ctrl->idx); if (rxq_ctrl->ibv) mlx5_rxq_ibv_release(rxq_ctrl->ibv); memset(rxq_ctrl, 0, sizeof(*rxq_ctrl)); @@ -285,22 +290,23 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, if (!rte_is_power_of_2(desc)) { desc = 1 << log2above(desc); - WARN("%p: increased number of descriptors in RX queue %u" + WARN("port %u increased number of descriptors in Rx queue %u" " to the next power of two (%d)", - (void *)dev, idx, desc); + dev->data->port_id, idx, desc); } - DEBUG("%p: configuring queue %u for %u descriptors", - (void *)dev, idx, desc); + DEBUG("port %u configuring Rx queue %u for %u descriptors", + dev->data->port_id, idx, desc); if (idx >= priv->rxqs_n) { - ERROR("%p: queue index out of range (%u >= %u)", - (void *)dev, idx, priv->rxqs_n); + ERROR("port %u Rx queue index out of range (%u >= %u)", + dev->data->port_id, idx, priv->rxqs_n); rte_errno = EOVERFLOW; return -rte_errno; } if (!mlx5_is_rx_queue_offloads_allowed(dev, conf->offloads)) { - ERROR("%p: Rx queue offloads 0x%" PRIx64 " don't match port " - "offloads 0x%" PRIx64 " or supported offloads 0x%" PRIx64, - (void *)dev, conf->offloads, + ERROR("port %u Rx queue offloads 0x%" PRIx64 " don't match" + " port offloads 0x%" PRIx64 " or supported offloads 0x%" + PRIx64, + dev->data->port_id, conf->offloads, dev->data->dev_conf.rxmode.offloads, (mlx5_get_rx_port_offloads() | mlx5_get_rx_queue_offloads(dev))); @@ -308,21 +314,20 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, return -rte_errno; } if (!mlx5_rxq_releasable(dev, idx)) { - ERROR("%p: unable to release queue index 
%u", - (void *)dev, idx); + ERROR("port %u unable to release queue index %u", + dev->data->port_id, idx); rte_errno = EBUSY; return -rte_errno; } mlx5_rxq_release(dev, idx); rxq_ctrl = mlx5_rxq_new(dev, idx, desc, socket, conf, mp); if (!rxq_ctrl) { - ERROR("%p: unable to allocate queue index %u", - (void *)dev, idx); + ERROR("port %u unable to allocate queue index %u", + dev->data->port_id, idx); rte_errno = ENOMEM; return -rte_errno; } - DEBUG("%p: adding RX queue %p to list", - (void *)dev, (void *)rxq_ctrl); + DEBUG("port %u adding Rx queue %u to list", dev->data->port_id, idx); (*priv->rxqs)[idx] = &rxq_ctrl->rxq; return 0; } @@ -345,8 +350,9 @@ mlx5_rx_queue_release(void *dpdk_rxq) rxq_ctrl = container_of(rxq, struct mlx5_rxq_ctrl, rxq); priv = rxq_ctrl->priv; if (!mlx5_rxq_releasable(priv->dev, rxq_ctrl->rxq.stats.idx)) - rte_panic("Rx queue %p is still used by a flow and cannot be" - " removed\n", (void *)rxq_ctrl); + rte_panic("port %u Rx queue %u is still used by a flow and" + " cannot be removed\n", priv->dev->data->port_id, + rxq_ctrl->idx); mlx5_rxq_release(priv->dev, rxq_ctrl->rxq.stats.idx); } @@ -374,8 +380,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) mlx5_rx_intr_vec_disable(dev); intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0])); if (intr_handle->intr_vec == NULL) { - ERROR("failed to allocate memory for interrupt vector," - " Rx interrupts will not be supported"); + ERROR("port %u failed to allocate memory for interrupt vector," + " Rx interrupts will not be supported", + dev->data->port_id); rte_errno = ENOMEM; return -rte_errno; } @@ -396,9 +403,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) continue; } if (count >= RTE_MAX_RXTX_INTR_VEC_ID) { - ERROR("too many Rx queues for interrupt vector size" - " (%d), Rx interrupts cannot be enabled", - RTE_MAX_RXTX_INTR_VEC_ID); + ERROR("port %u too many Rx queues for interrupt vector" + " size (%d), Rx interrupts cannot be enabled", + dev->data->port_id, RTE_MAX_RXTX_INTR_VEC_ID); mlx5_rx_intr_vec_disable(dev); rte_errno = ENOMEM; return -rte_errno; @@ -408,8 +415,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev) rc = fcntl(fd, F_SETFL, flags | O_NONBLOCK); if (rc < 0) { rte_errno = errno; - ERROR("failed to make Rx interrupt file descriptor" - " %d non-blocking for queue index %d", fd, i); + ERROR("port %u failed to make Rx interrupt file" + " descriptor %d non-blocking for queue index %d", + dev->data->port_id, fd, i); mlx5_rx_intr_vec_disable(dev); return -rte_errno; } @@ -575,7 +583,8 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id) ret = rte_errno; /* Save rte_errno before cleanup. */ if (rxq_ibv) mlx5_rxq_ibv_release(rxq_ibv); - WARN("unable to disable interrupt on rx queue %d", rx_queue_id); + WARN("port %u unable to disable interrupt on Rx queue %d", + dev->data->port_id, rx_queue_id); rte_errno = ret; /* Restore rte_errno. 
*/ return -rte_errno; } @@ -623,8 +632,8 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) tmpl = rte_calloc_socket(__func__, 1, sizeof(*tmpl), 0, rxq_ctrl->socket); if (!tmpl) { - ERROR("%p: cannot allocate verbs resources", - (void *)rxq_ctrl); + ERROR("port %u Rx queue %u cannot allocate verbs resources", + dev->data->port_id, rxq_ctrl->idx); rte_errno = ENOMEM; goto error; } @@ -634,15 +643,16 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (!tmpl->mr) { tmpl->mr = mlx5_mr_new(dev, rxq_data->mp); if (!tmpl->mr) { - ERROR("%p: MR creation failure", (void *)rxq_ctrl); + ERROR("port %u: memory region creation failure", + dev->data->port_id); goto error; } } if (rxq_ctrl->irq) { tmpl->channel = mlx5_glue->create_comp_channel(priv->ctx); if (!tmpl->channel) { - ERROR("%p: Comp Channel creation failure", - (void *)rxq_ctrl); + ERROR("port %u: comp channel creation failure", + dev->data->port_id); rte_errno = ENOMEM; goto error; } @@ -666,20 +676,22 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (mlx5_rxq_check_vec_support(rxq_data) < 0) attr.cq.ibv.cqe *= 2; } else if (config->cqe_comp && rxq_data->hw_timestamp) { - DEBUG("Rx CQE compression is disabled for HW timestamp"); + DEBUG("port %u Rx CQE compression is disabled for HW timestamp", + dev->data->port_id); } tmpl->cq = mlx5_glue->cq_ex_to_cq (mlx5_glue->dv_create_cq(priv->ctx, &attr.cq.ibv, &attr.cq.mlx5)); if (tmpl->cq == NULL) { - ERROR("%p: CQ creation failure", (void *)rxq_ctrl); + ERROR("port %u Rx queue %u CQ creation failure", + dev->data->port_id, idx); rte_errno = ENOMEM; goto error; } - DEBUG("priv->device_attr.max_qp_wr is %d", - priv->device_attr.orig_attr.max_qp_wr); - DEBUG("priv->device_attr.max_sge is %d", - priv->device_attr.orig_attr.max_sge); + DEBUG("port %u priv->device_attr.max_qp_wr is %d", + dev->data->port_id, priv->device_attr.orig_attr.max_qp_wr); + DEBUG("port %u priv->device_attr.max_sge is %d", + dev->data->port_id, priv->device_attr.orig_attr.max_sge); attr.wq = (struct ibv_wq_init_attr){ .wq_context = NULL, /* Could be useful in the future. 
*/ .wq_type = IBV_WQT_RQ, @@ -709,7 +721,8 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) #endif tmpl->wq = mlx5_glue->create_wq(priv->ctx, &attr.wq); if (tmpl->wq == NULL) { - ERROR("%p: WQ creation failure", (void *)rxq_ctrl); + ERROR("port %u Rx queue %u WQ creation failure", + dev->data->port_id, idx); rte_errno = ENOMEM; goto error; } @@ -720,8 +733,9 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) if (((int)attr.wq.max_wr != ((1 << rxq_data->elts_n) >> rxq_data->sges_n)) || ((int)attr.wq.max_sge != (1 << rxq_data->sges_n))) { - ERROR("%p: requested %u*%u but got %u*%u WRs*SGEs", - (void *)rxq_ctrl, + ERROR("port %u Rx queue %u requested %u*%u but got %u*%u" + " WRs*SGEs", + dev->data->port_id, idx, ((1 << rxq_data->elts_n) >> rxq_data->sges_n), (1 << rxq_data->sges_n), attr.wq.max_wr, attr.wq.max_sge); @@ -735,8 +749,8 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) }; ret = mlx5_glue->modify_wq(tmpl->wq, &mod); if (ret) { - ERROR("%p: WQ state to IBV_WQS_RDY failed", - (void *)rxq_ctrl); + ERROR("port %u Rx queue %u WQ state to IBV_WQS_RDY failed", + dev->data->port_id, idx); rte_errno = ret; goto error; } @@ -750,8 +764,9 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) goto error; } if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) { - ERROR("Wrong MLX5_CQE_SIZE environment variable value: " - "it should be set to %u", RTE_CACHE_LINE_SIZE); + ERROR("port %u wrong MLX5_CQE_SIZE environment variable value: " + "it should be set to %u", dev->data->port_id, + RTE_CACHE_LINE_SIZE); rte_errno = EINVAL; goto error; } @@ -788,10 +803,11 @@ mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx) rxq_data->rq_ci = (1 << rxq_data->elts_n) >> rxq_data->sges_n; rte_wmb(); *rxq_data->rq_db = rte_cpu_to_be_32(rxq_data->rq_ci); - DEBUG("%p: rxq updated with %p", (void *)rxq_ctrl, (void *)&tmpl); + DEBUG("port %u rxq %u updated with %p", dev->data->port_id, idx, + (void *)&tmpl); rte_atomic32_inc(&tmpl->refcnt); - DEBUG("%p: Verbs Rx queue %p: refcnt %d", (void *)dev, - (void *)tmpl, rte_atomic32_read(&tmpl->refcnt)); + DEBUG("port %u Verbs Rx queue %u: refcnt %d", dev->data->port_id, idx, + rte_atomic32_read(&tmpl->refcnt)); LIST_INSERT_HEAD(&priv->rxqsibv, tmpl, next); priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE; return tmpl; @@ -836,8 +852,8 @@ mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx) if (rxq_ctrl->ibv) { mlx5_mr_get(dev, rxq_data->mp); rte_atomic32_inc(&rxq_ctrl->ibv->refcnt); - DEBUG("%p: Verbs Rx queue %p: refcnt %d", (void *)dev, - (void *)rxq_ctrl->ibv, + DEBUG("port %u Verbs Rx queue %u: refcnt %d", + dev->data->port_id, rxq_ctrl->idx, rte_atomic32_read(&rxq_ctrl->ibv->refcnt)); } return rxq_ctrl->ibv; @@ -864,8 +880,9 @@ mlx5_rxq_ibv_release(struct mlx5_rxq_ibv *rxq_ibv) ret = mlx5_mr_release(rxq_ibv->mr); if (!ret) rxq_ibv->mr = NULL; - DEBUG("Verbs Rx queue %p: refcnt %d", - (void *)rxq_ibv, rte_atomic32_read(&rxq_ibv->refcnt)); + DEBUG("port %u Verbs Rx queue %u: refcnt %d", + rxq_ibv->rxq_ctrl->priv->dev->data->port_id, + rxq_ibv->rxq_ctrl->idx, rte_atomic32_read(&rxq_ibv->refcnt)); if (rte_atomic32_dec_and_test(&rxq_ibv->refcnt)) { rxq_free_elts(rxq_ibv->rxq_ctrl); claim_zero(mlx5_glue->destroy_wq(rxq_ibv->wq)); @@ -897,8 +914,8 @@ mlx5_rxq_ibv_verify(struct rte_eth_dev *dev) struct mlx5_rxq_ibv *rxq_ibv; LIST_FOREACH(rxq_ibv, &priv->rxqsibv, next) { - DEBUG("%p: Verbs Rx queue %p still referenced", (void *)dev, - (void *)rxq_ibv); + DEBUG("port %u Verbs Rx queue %u still referenced", + dev->data->port_id, 
rxq_ibv->rxq_ctrl->idx); ++ret; } return ret; @@ -980,28 +997,28 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, size = mb_len * (1 << tmpl->rxq.sges_n); size -= RTE_PKTMBUF_HEADROOM; if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) { - ERROR("%p: too many SGEs (%u) needed to handle" + ERROR("port %u too many SGEs (%u) needed to handle" " requested maximum packet size %u", - (void *)dev, + dev->data->port_id, 1 << sges_n, dev->data->dev_conf.rxmode.max_rx_pkt_len); rte_errno = EOVERFLOW; goto error; } } else { - WARN("%p: the requested maximum Rx packet size (%u) is" + WARN("port %u the requested maximum Rx packet size (%u) is" " larger than a single mbuf (%u) and scattered" " mode has not been requested", - (void *)dev, + dev->data->port_id, dev->data->dev_conf.rxmode.max_rx_pkt_len, mb_len - RTE_PKTMBUF_HEADROOM); } - DEBUG("%p: maximum number of segments per packet: %u", - (void *)dev, 1 << tmpl->rxq.sges_n); + DEBUG("port %u maximum number of segments per packet: %u", + dev->data->port_id, 1 << tmpl->rxq.sges_n); if (desc % (1 << tmpl->rxq.sges_n)) { - ERROR("%p: number of RX queue descriptors (%u) is not a" + ERROR("port %u number of Rx queue descriptors (%u) is not a" " multiple of SGEs per packet (%u)", - (void *)dev, + dev->data->port_id, desc, 1 << tmpl->rxq.sges_n); rte_errno = EINVAL; @@ -1020,15 +1037,15 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, } else if (config->hw_fcs_strip) { tmpl->rxq.crc_present = 1; } else { - WARN("%p: CRC stripping has been disabled but will still" + WARN("port %u CRC stripping has been disabled but will still" " be performed by hardware, make sure MLNX_OFED and" " firmware are up to date", - (void *)dev); + dev->data->port_id); tmpl->rxq.crc_present = 0; } - DEBUG("%p: CRC stripping is %s, %u bytes will be subtracted from" + DEBUG("port %u CRC stripping is %s, %u bytes will be subtracted from" " incoming frames to hide it", - (void *)dev, + dev->data->port_id, tmpl->rxq.crc_present ? "disabled" : "enabled", tmpl->rxq.crc_present << 2); /* Save port ID. 
*/ @@ -1040,9 +1057,10 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, tmpl->rxq.elts_n = log2above(desc); tmpl->rxq.elts = (struct rte_mbuf *(*)[1 << tmpl->rxq.elts_n])(tmpl + 1); + tmpl->idx = idx; rte_atomic32_inc(&tmpl->refcnt); - DEBUG("%p: Rx queue %p: refcnt %d", (void *)dev, - (void *)tmpl, rte_atomic32_read(&tmpl->refcnt)); + DEBUG("port %u Rx queue %u: refcnt %d", dev->data->port_id, + idx, rte_atomic32_read(&tmpl->refcnt)); LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next); return tmpl; error: @@ -1073,8 +1091,8 @@ mlx5_rxq_get(struct rte_eth_dev *dev, uint16_t idx) rxq); mlx5_rxq_ibv_get(dev, idx); rte_atomic32_inc(&rxq_ctrl->refcnt); - DEBUG("%p: Rx queue %p: refcnt %d", (void *)dev, - (void *)rxq_ctrl, rte_atomic32_read(&rxq_ctrl->refcnt)); + DEBUG("port %u Rx queue %u: refcnt %d", dev->data->port_id, + rxq_ctrl->idx, rte_atomic32_read(&rxq_ctrl->refcnt)); } return rxq_ctrl; } @@ -1102,8 +1120,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx) assert(rxq_ctrl->priv); if (rxq_ctrl->ibv && !mlx5_rxq_ibv_release(rxq_ctrl->ibv)) rxq_ctrl->ibv = NULL; - DEBUG("%p: Rx queue %p: refcnt %d", (void *)dev, - (void *)rxq_ctrl, rte_atomic32_read(&rxq_ctrl->refcnt)); + DEBUG("port %u Rx queue %u: refcnt %d", dev->data->port_id, + rxq_ctrl->idx, rte_atomic32_read(&rxq_ctrl->refcnt)); if (rte_atomic32_dec_and_test(&rxq_ctrl->refcnt)) { LIST_REMOVE(rxq_ctrl, next); rte_free(rxq_ctrl); @@ -1156,8 +1174,8 @@ mlx5_rxq_verify(struct rte_eth_dev *dev) int ret = 0; LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) { - DEBUG("%p: Rx Queue %p still referenced", (void *)dev, - (void *)rxq_ctrl); + DEBUG("port %u Rx queue %u still referenced", + dev->data->port_id, rxq_ctrl->idx); ++ret; } return ret; @@ -1220,12 +1238,12 @@ mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, uint16_t queues[], } rte_atomic32_inc(&ind_tbl->refcnt); LIST_INSERT_HEAD(&priv->ind_tbls, ind_tbl, next); - DEBUG("%p: Indirection table %p: refcnt %d", (void *)dev, + DEBUG("port %u indirection table %p: refcnt %d", dev->data->port_id, (void *)ind_tbl, rte_atomic32_read(&ind_tbl->refcnt)); return ind_tbl; error: rte_free(ind_tbl); - DEBUG("%p cannot create indirection table", (void *)dev); + DEBUG("port %u cannot create indirection table", dev->data->port_id); return NULL; } @@ -1260,8 +1278,9 @@ mlx5_ind_table_ibv_get(struct rte_eth_dev *dev, uint16_t queues[], unsigned int i; rte_atomic32_inc(&ind_tbl->refcnt); - DEBUG("%p: Indirection table %p: refcnt %d", (void *)dev, - (void *)ind_tbl, rte_atomic32_read(&ind_tbl->refcnt)); + DEBUG("port %u indirection table %p: refcnt %d", + dev->data->port_id, (void *)ind_tbl, + rte_atomic32_read(&ind_tbl->refcnt)); for (i = 0; i != ind_tbl->queues_n; ++i) mlx5_rxq_get(dev, ind_tbl->queues[i]); } @@ -1285,7 +1304,8 @@ mlx5_ind_table_ibv_release(struct rte_eth_dev *dev, { unsigned int i; - DEBUG("%p: Indirection table %p: refcnt %d", (void *)dev, + DEBUG("port %u indirection table %p: refcnt %d", + ((struct priv *)dev->data->dev_private)->port, (void *)ind_tbl, rte_atomic32_read(&ind_tbl->refcnt)); if (rte_atomic32_dec_and_test(&ind_tbl->refcnt)) claim_zero(mlx5_glue->destroy_rwq_ind_table @@ -1317,8 +1337,8 @@ mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev) int ret = 0; LIST_FOREACH(ind_tbl, &priv->ind_tbls, next) { - DEBUG("%p: Verbs indirection table %p still referenced", - (void *)dev, (void *)ind_tbl); + DEBUG("port %u Verbs indirection table %p still referenced", + dev->data->port_id, (void *)ind_tbl); ++ret; } return ret; @@ -1393,7 +1413,7 @@ 
mlx5_hrxq_new(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len, memcpy(hrxq->rss_key, rss_key, rss_key_len); rte_atomic32_inc(&hrxq->refcnt); LIST_INSERT_HEAD(&priv->hrxqs, hrxq, next); - DEBUG("%p: Hash Rx queue %p: refcnt %d", (void *)dev, + DEBUG("port %u hash Rx queue %p: refcnt %d", dev->data->port_id, (void *)hrxq, rte_atomic32_read(&hrxq->refcnt)); return hrxq; error: @@ -1446,7 +1466,7 @@ mlx5_hrxq_get(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len, continue; } rte_atomic32_inc(&hrxq->refcnt); - DEBUG("%p: Hash Rx queue %p: refcnt %d", (void *)dev, + DEBUG("port %u hash Rx queue %p: refcnt %d", dev->data->port_id, (void *)hrxq, rte_atomic32_read(&hrxq->refcnt)); return hrxq; } @@ -1467,7 +1487,8 @@ mlx5_hrxq_get(struct rte_eth_dev *dev, uint8_t *rss_key, uint8_t rss_key_len, int mlx5_hrxq_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq) { - DEBUG("%p: Hash Rx queue %p: refcnt %d", (void *)dev, + DEBUG("port %u hash Rx queue %p: refcnt %d", + ((struct priv *)dev->data->dev_private)->port, (void *)hrxq, rte_atomic32_read(&hrxq->refcnt)); if (rte_atomic32_dec_and_test(&hrxq->refcnt)) { claim_zero(mlx5_glue->destroy_qp(hrxq->qp)); @@ -1497,8 +1518,8 @@ mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev) int ret = 0; LIST_FOREACH(hrxq, &priv->hrxqs, next) { - DEBUG("%p: Verbs Hash Rx queue %p still referenced", - (void *)dev, (void *)hrxq); + DEBUG("port %u Verbs hash Rx queue %p still referenced", + dev->data->port_id, (void *)hrxq); ++ret; } return ret; diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h index 17a6072e2..ee30a0677 100644 --- a/drivers/net/mlx5/mlx5_rxtx.h +++ b/drivers/net/mlx5/mlx5_rxtx.h @@ -126,6 +126,7 @@ struct mlx5_rxq_ctrl { struct mlx5_rxq_data rxq; /* Data path structure. */ unsigned int socket; /* CPU socket ID for allocations. */ unsigned int irq:1; /* Whether IRQ is enabled. */ + uint16_t idx; /* Queue index. */ }; /* Indirection table. */ @@ -187,6 +188,7 @@ struct mlx5_txq_data { struct mlx5_txq_ibv { LIST_ENTRY(mlx5_txq_ibv) next; /* Pointer to the next element. */ rte_atomic32_t refcnt; /* Reference counter. */ + struct mlx5_txq_ctrl *txq_ctrl; /* Pointer to the control queue. */ struct ibv_cq *cq; /* Completion Queue. */ struct ibv_qp *qp; /* Queue Pair. */ }; @@ -203,6 +205,7 @@ struct mlx5_txq_ctrl { struct mlx5_txq_data txq; /* Data path structure. */ off_t uar_mmap_offset; /* UAR mmap offset for non-primary process. */ volatile void *bf_reg_orig; /* Blueflame register from verbs. */ + uint16_t idx; /* Queue index. 
*/ }; /* mlx5_rxq.c */ @@ -446,7 +449,7 @@ mlx5_tx_complete(struct mlx5_txq_data *txq) if ((MLX5_CQE_OPCODE(cqe->op_own) == MLX5_CQE_RESP_ERR) || (MLX5_CQE_OPCODE(cqe->op_own) == MLX5_CQE_REQ_ERR)) { if (!check_cqe_seen(cqe)) { - ERROR("unexpected error CQE, TX stopped"); + ERROR("unexpected error CQE, Tx stopped"); rte_hexdump(stderr, "MLX5 TXQ:", (const void *)((uintptr_t)txq->wqes), ((1 << txq->wqe_n) * @@ -563,7 +566,7 @@ mlx5_tx_mb2mr(struct mlx5_txq_data *txq, struct rte_mbuf *mb) } else { struct rte_mempool *mp = mlx5_tx_mb2mp(mb); - WARN("Failed to register mempool 0x%p(%s)", + WARN("failed to register mempool 0x%p(%s)", (void *)mp, mp->name); } return (uint32_t)-1; diff --git a/drivers/net/mlx5/mlx5_socket.c b/drivers/net/mlx5/mlx5_socket.c index 6e2d971c7..18563eedd 100644 --- a/drivers/net/mlx5/mlx5_socket.c +++ b/drivers/net/mlx5/mlx5_socket.c @@ -42,7 +42,8 @@ mlx5_socket_init(struct rte_eth_dev *dev) ret = socket(AF_UNIX, SOCK_STREAM, 0); if (ret < 0) { rte_errno = errno; - WARN("secondary process not supported: %s", strerror(errno)); + WARN("port %u secondary process not supported: %s", + dev->data->port_id, strerror(errno)); goto error; } priv->primary_socket = ret; @@ -65,14 +66,15 @@ mlx5_socket_init(struct rte_eth_dev *dev) sizeof(sun)); if (ret < 0) { rte_errno = errno; - WARN("cannot bind socket, secondary process not supported: %s", - strerror(errno)); + WARN("port %u cannot bind socket, secondary process not" + " supported: %s", dev->data->port_id, strerror(errno)); goto close; } ret = listen(priv->primary_socket, 0); if (ret < 0) { rte_errno = errno; - WARN("Secondary process not supported: %s", strerror(errno)); + WARN("port %u secondary process not supported: %s", + dev->data->port_id, strerror(errno)); goto close; } return 0; @@ -131,26 +133,29 @@ mlx5_socket_handle(struct rte_eth_dev *dev) /* Accept the connection from the client. */ conn_sock = accept(priv->primary_socket, NULL, NULL); if (conn_sock < 0) { - WARN("connection failed: %s", strerror(errno)); + WARN("port %u connection failed: %s", dev->data->port_id, + strerror(errno)); return; } ret = setsockopt(conn_sock, SOL_SOCKET, SO_PASSCRED, &(int){1}, sizeof(int)); if (ret < 0) { ret = errno; - WARN("cannot change socket options: %s", strerror(rte_errno)); + WARN("port %u cannot change socket options: %s", + dev->data->port_id, strerror(rte_errno)); goto error; } ret = recvmsg(conn_sock, &msg, MSG_WAITALL); if (ret < 0) { ret = errno; - WARN("received an empty message: %s", strerror(rte_errno)); + WARN("port %u received an empty message: %s", + dev->data->port_id, strerror(rte_errno)); goto error; } /* Expect to receive credentials only. */ cmsg = CMSG_FIRSTHDR(&msg); if (cmsg == NULL) { - WARN("no message"); + WARN("port %u no message", dev->data->port_id); goto error; } if ((cmsg->cmsg_type == SCM_CREDENTIALS) && @@ -160,13 +165,13 @@ mlx5_socket_handle(struct rte_eth_dev *dev) } cmsg = CMSG_NXTHDR(&msg, cmsg); if (cmsg != NULL) { - WARN("Message wrongly formatted"); + WARN("port %u message wrongly formatted", dev->data->port_id); goto error; } /* Make sure all the ancillary data was received and valid. */ if ((cred == NULL) || (cred->uid != getuid()) || (cred->gid != getgid())) { - WARN("wrong credentials"); + WARN("port %u wrong credentials", dev->data->port_id); goto error; } /* Set-up the ancillary data. 
diff --git a/drivers/net/mlx5/mlx5_stats.c b/drivers/net/mlx5/mlx5_stats.c
index 06e9a1f19..066d347a0 100644
--- a/drivers/net/mlx5/mlx5_stats.c
+++ b/drivers/net/mlx5/mlx5_stats.c
@@ -148,7 +148,8 @@ mlx5_read_dev_counters(struct rte_eth_dev *dev, uint64_t *stats)
 	ifr.ifr_data = (caddr_t)et_stats;
 	ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
 	if (ret) {
-		WARN("unable to read statistic values from device");
+		WARN("port %u unable to read statistic values from device",
+		     dev->data->port_id);
 		return ret;
 	}
 	for (i = 0; i != xstats_n; ++i) {
@@ -194,7 +195,8 @@ mlx5_ethtool_get_stats_n(struct rte_eth_dev *dev) {
 	ifr.ifr_data = (caddr_t)&drvinfo;
 	ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
 	if (ret) {
-		WARN("unable to query number of statistics");
+		WARN("port %u unable to query number of statistics",
+		     dev->data->port_id);
 		return ret;
 	}
 	return drvinfo.n_stats;
@@ -221,7 +223,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 
 	ret = mlx5_ethtool_get_stats_n(dev);
 	if (ret < 0) {
-		WARN("no extended statistics available");
+		WARN("port %u no extended statistics available",
+		     dev->data->port_id);
 		return;
 	}
 	dev_stats_n = ret;
@@ -232,7 +235,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 		  rte_malloc("xstats_strings",
 			     str_sz + sizeof(struct ethtool_gstrings), 0);
 	if (!strings) {
-		WARN("unable to allocate memory for xstats");
+		WARN("port %u unable to allocate memory for xstats",
+		     dev->data->port_id);
 		return;
 	}
 	strings->cmd = ETHTOOL_GSTRINGS;
@@ -241,7 +245,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 	ifr.ifr_data = (caddr_t)strings;
 	ret = mlx5_ifreq(dev, SIOCETHTOOL, &ifr);
 	if (ret) {
-		WARN("unable to get statistic names");
+		WARN("port %u unable to get statistic names",
+		     dev->data->port_id);
 		goto free;
 	}
 	for (j = 0; j != xstats_n; ++j)
@@ -262,7 +267,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 		if (mlx5_counters_init[j].ib)
 			continue;
 		if (xstats_ctrl->dev_table_idx[j] >= dev_stats_n) {
-			WARN("counter \"%s\" is not recognized",
+			WARN("port %u counter \"%s\" is not recognized",
+			     dev->data->port_id,
 			     mlx5_counters_init[j].dpdk_name);
 			goto free;
 		}
@@ -271,7 +277,8 @@ mlx5_xstats_init(struct rte_eth_dev *dev)
 	assert(xstats_n <= MLX5_MAX_XSTATS);
 	ret = mlx5_read_dev_counters(dev, xstats_ctrl->base);
 	if (ret)
-		ERROR("cannot read device counters: %s", strerror(rte_errno));
+		ERROR("port %u cannot read device counters: %s",
+		      dev->data->port_id, strerror(rte_errno));
 free:
 	rte_free(strings);
 }
@@ -438,7 +445,7 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
 
 	stats_n = mlx5_ethtool_get_stats_n(dev);
 	if (stats_n < 0) {
-		ERROR("%p cannot get stats: %s", (void *)dev,
+		ERROR("port %u cannot get stats: %s", dev->data->port_id,
 		      strerror(-stats_n));
 		return;
 	}
@@ -446,8 +453,8 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
 		mlx5_xstats_init(dev);
 	ret = mlx5_read_dev_counters(dev, counters);
 	if (ret) {
-		ERROR("%p cannot read device counters: %s", (void *)dev,
-		      strerror(rte_errno));
+		ERROR("port %u cannot read device counters: %s",
+		      dev->data->port_id, strerror(rte_errno));
 		return;
 	}
 	for (i = 0; i != n; ++i)
@@ -469,7 +476,7 @@ mlx5_xstats_reset(struct rte_eth_dev *dev)
  */
 int
 mlx5_xstats_get_names(struct rte_eth_dev *dev __rte_unused,
-		      struct rte_eth_xstat_name *xstats_names, unsigned int n)
+		       struct rte_eth_xstat_name *xstats_names, unsigned int n)
 {
 
 	unsigned int i;
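From the application side, the counters wired up above surface through
the ethdev xstats API, which sizes and then fills the array in two
calls. A sketch of the caller's view (fetch_xstats is an invented
name):

	#include <stdlib.h>
	#include <rte_ethdev.h>

	static void
	fetch_xstats(uint16_t port_id)
	{
		/* First call returns the required array size. */
		int n = rte_eth_xstats_get(port_id, NULL, 0);
		struct rte_eth_xstat *xs;

		if (n <= 0)
			return;
		xs = calloc(n, sizeof(*xs));
		if (xs == NULL)
			return;
		/* Second call fills xs[0..n-1] with id/value pairs. */
		if (rte_eth_xstats_get(port_id, xs, n) == n) {
			/* ... consume xs ... */
		}
		free(xs);
	}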
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 4e396b7f0..3d6b8c3b0 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -150,37 +150,38 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_started = 1;
 	ret = mlx5_flow_create_drop_queue(dev);
 	if (ret) {
-		ERROR("%p: Drop queue allocation failed: %s",
-		      (void *)dev, strerror(rte_errno));
+		ERROR("port %u drop queue allocation failed: %s",
+		      dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
-	DEBUG("%p: allocating and configuring hash RX queues", (void *)dev);
+	DEBUG("port %u allocating and configuring hash Rx queues",
+	      dev->data->port_id);
 	rte_mempool_walk(mlx5_mp2mr_iter, priv);
 	ret = mlx5_txq_start(dev);
 	if (ret) {
-		ERROR("%p: Tx Queue allocation failed: %s",
-		      (void *)dev, strerror(rte_errno));
+		ERROR("port %u Tx queue allocation failed: %s",
+		      dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
 	ret = mlx5_rxq_start(dev);
 	if (ret) {
-		ERROR("%p: Rx Queue allocation failed: %s",
-		      (void *)dev, strerror(rte_errno));
+		ERROR("port %u Rx queue allocation failed: %s",
+		      dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
 	ret = mlx5_rx_intr_vec_enable(dev);
 	if (ret) {
-		ERROR("%p: Rx interrupt vector creation failed",
-		      (void *)dev);
+		ERROR("port %u Rx interrupt vector creation failed",
+		      dev->data->port_id);
 		goto error;
 	}
 	mlx5_xstats_init(dev);
 	/* Update link status and Tx/Rx callbacks for the first time. */
 	memset(&dev->data->dev_link, 0, sizeof(struct rte_eth_link));
-	INFO("Forcing port %u link to be up", dev->data->port_id);
+	INFO("port %u forcing link to be up", dev->data->port_id);
 	ret = mlx5_force_link_status_change(dev, ETH_LINK_UP);
 	if (ret) {
-		DEBUG("Failed to set port %u link to be up",
+		DEBUG("failed to set port %u link to be up",
 		      dev->data->port_id);
 		goto error;
 	}
@@ -221,7 +222,8 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 	dev->tx_pkt_burst = removed_tx_burst;
 	rte_wmb();
 	usleep(1000 * priv->rxqs_n);
-	DEBUG("%p: cleaning up and destroying hash RX queues", (void *)dev);
+	DEBUG("port %u cleaning up and destroying hash Rx queues",
+	      dev->data->port_id);
 	mlx5_flow_stop(dev, &priv->flows);
 	mlx5_traffic_disable(dev);
 	mlx5_rx_intr_vec_disable(dev);
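Each failure path above rolls back whatever was already set up and
returns -rte_errno, so an application only observes a single negative
errno while the port-prefixed ERROR log pinpoints the failing device. A
sketch of the caller's view (start_port is an invented name):

	#include <stdio.h>
	#include <rte_errno.h>
	#include <rte_ethdev.h>

	static int
	start_port(uint16_t port_id)
	{
		int ret = rte_eth_dev_start(port_id);

		if (ret < 0)
			/* The PMD log already names the port. */
			printf("port %u start failed: %s\n", port_id,
			       rte_strerror(-ret));
		return ret;
	}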
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 042704cc6..4d67fbc84 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -47,7 +47,8 @@ txq_alloc_elts(struct mlx5_txq_ctrl *txq_ctrl)
 	for (i = 0; (i != elts_n); ++i)
 		(*txq_ctrl->txq.elts)[i] = NULL;
-	DEBUG("%p: allocated and configured %u WRs", (void *)txq_ctrl, elts_n);
+	DEBUG("port %u Tx queue %u allocated and configured %u WRs",
+	      txq_ctrl->priv->dev->data->port_id, txq_ctrl->idx, elts_n);
 	txq_ctrl->txq.elts_head = 0;
 	txq_ctrl->txq.elts_tail = 0;
 	txq_ctrl->txq.elts_comp = 0;
@@ -68,7 +69,8 @@ txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl)
 	uint16_t elts_tail = txq_ctrl->txq.elts_tail;
 	struct rte_mbuf *(*elts)[elts_n] = txq_ctrl->txq.elts;
 
-	DEBUG("%p: freeing WRs", (void *)txq_ctrl);
+	DEBUG("port %u Tx queue %u freeing WRs",
+	      txq_ctrl->priv->dev->data->port_id, txq_ctrl->idx);
 	txq_ctrl->txq.elts_head = 0;
 	txq_ctrl->txq.elts_tail = 0;
 	txq_ctrl->txq.elts_comp = 0;
@@ -179,49 +181,49 @@ mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	if (!!(conf->txq_flags & ETH_TXQ_FLAGS_IGNORE) &&
 	    !mlx5_is_tx_queue_offloads_allowed(dev, conf->offloads)) {
 		rte_errno = ENOTSUP;
-		ERROR("%p: Tx queue offloads 0x%" PRIx64 " don't match port "
-		      "offloads 0x%" PRIx64 " or supported offloads 0x%" PRIx64,
-		      (void *)dev, conf->offloads,
+		ERROR("port %u Tx queue offloads 0x%" PRIx64 " don't match"
+		      " port offloads 0x%" PRIx64 " or supported offloads 0x%"
+		      PRIx64,
+		      dev->data->port_id, conf->offloads,
 		      dev->data->dev_conf.txmode.offloads,
 		      mlx5_get_tx_port_offloads(dev));
 		return -rte_errno;
 	}
 	if (desc <= MLX5_TX_COMP_THRESH) {
-		WARN("%p: number of descriptors requested for TX queue %u"
+		WARN("port %u number of descriptors requested for Tx queue %u"
 		     " must be higher than MLX5_TX_COMP_THRESH, using"
 		     " %u instead of %u",
-		     (void *)dev, idx, MLX5_TX_COMP_THRESH + 1, desc);
+		     dev->data->port_id, idx, MLX5_TX_COMP_THRESH + 1, desc);
 		desc = MLX5_TX_COMP_THRESH + 1;
 	}
 	if (!rte_is_power_of_2(desc)) {
 		desc = 1 << log2above(desc);
-		WARN("%p: increased number of descriptors in TX queue %u"
+		WARN("port %u increased number of descriptors in Tx queue %u"
 		     " to the next power of two (%d)",
-		     (void *)dev, idx, desc);
+		     dev->data->port_id, idx, desc);
 	}
-	DEBUG("%p: configuring queue %u for %u descriptors",
-	      (void *)dev, idx, desc);
+	DEBUG("port %u configuring queue %u for %u descriptors",
+	      dev->data->port_id, idx, desc);
 	if (idx >= priv->txqs_n) {
-		ERROR("%p: queue index out of range (%u >= %u)",
-		      (void *)dev, idx, priv->txqs_n);
+		ERROR("port %u Tx queue index out of range (%u >= %u)",
+		      dev->data->port_id, idx, priv->txqs_n);
 		rte_errno = EOVERFLOW;
 		return -rte_errno;
 	}
 	if (!mlx5_txq_releasable(dev, idx)) {
 		rte_errno = EBUSY;
-		ERROR("%p: unable to release queue index %u",
-		      (void *)dev, idx);
+		ERROR("port %u unable to release queue index %u",
+		      dev->data->port_id, idx);
 		return -rte_errno;
 	}
 	mlx5_txq_release(dev, idx);
 	txq_ctrl = mlx5_txq_new(dev, idx, desc, socket, conf);
 	if (!txq_ctrl) {
-		ERROR("%p: unable to allocate queue index %u",
-		      (void *)dev, idx);
+		ERROR("port %u unable to allocate queue index %u",
+		      dev->data->port_id, idx);
 		return -rte_errno;
 	}
-	DEBUG("%p: adding TX queue %p to list",
-	      (void *)dev, (void *)txq_ctrl);
+	DEBUG("port %u adding Tx queue %u to list", dev->data->port_id, idx);
 	(*priv->txqs)[idx] = &txq_ctrl->txq;
 	return 0;
 }
@@ -247,8 +249,8 @@ mlx5_tx_queue_release(void *dpdk_txq)
 	for (i = 0; (i != priv->txqs_n); ++i)
 		if ((*priv->txqs)[i] == txq) {
 			mlx5_txq_release(priv->dev, i);
-			DEBUG("%p: removing TX queue %p from list",
-			      (void *)priv->dev, (void *)txq_ctrl);
+			DEBUG("port %u removing Tx queue %u from list",
+			      priv->dev->data->port_id, txq_ctrl->idx);
 			break;
 		}
 }
@@ -294,6 +296,7 @@ mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd)
 			continue;
 		txq = (*priv->txqs)[i];
 		txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
+		assert(txq_ctrl->idx == (uint16_t)i);
 		/* UAR addr form verbs used to find dup and offset in page. */
 		uar_va = (uintptr_t)txq_ctrl->bf_reg_orig;
 		off = uar_va & (page_size - 1); /* offset in page. */
@@ -318,8 +321,9 @@ mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd)
 					   txq_ctrl->uar_mmap_offset);
 			if (ret != addr) {
 				/* fixed mmap have to return same address */
-				ERROR("call to mmap failed on UAR for txq %d\n",
-				      i);
+				ERROR("port %u call to mmap failed on UAR for"
+				      " txq %u", dev->data->port_id,
+				      txq_ctrl->idx);
 				rte_errno = ENXIO;
 				return -rte_errno;
 			}
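The mmap() check above relies on MAP_FIXED semantics: the secondary
process must get the UAR mapped at exactly the address the primary
recorded, otherwise the doorbell pointers already stored in the Tx
queue structures would dangle. A generic sketch of the pattern (plain
POSIX, not mlx5-specific; map_at is an invented name):

	#include <stddef.h>
	#include <sys/types.h>
	#include <sys/mman.h>

	/* Map fd at exactly 'target': with MAP_FIXED, mmap() either
	 * maps there or fails, it never silently picks another
	 * address. */
	static void *
	map_at(void *target, size_t len, int fd, off_t off)
	{
		void *ret = mmap(target, len, PROT_WRITE,
				 MAP_FIXED | MAP_SHARED, fd, off);

		return ret == MAP_FAILED ? NULL : ret;
	}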
@@ -390,7 +394,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_TX_QUEUE;
 	priv->verbs_alloc_ctx.obj = txq_ctrl;
 	if (mlx5_getenv_int("MLX5_ENABLE_CQE_COMPRESSION")) {
-		ERROR("MLX5_ENABLE_CQE_COMPRESSION must never be set");
+		ERROR("port %u MLX5_ENABLE_CQE_COMPRESSION must never be set",
+		      dev->data->port_id);
 		rte_errno = EINVAL;
 		return NULL;
 	}
@@ -405,7 +410,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	cqe_n += MLX5_TX_COMP_THRESH_INLINE_DIV;
 	tmpl.cq = mlx5_glue->create_cq(priv->ctx, cqe_n, NULL, NULL, 0);
 	if (tmpl.cq == NULL) {
-		ERROR("%p: CQ creation failure", (void *)txq_ctrl);
+		ERROR("port %u Tx queue %u CQ creation failure",
+		      dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
@@ -447,7 +453,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	}
 	tmpl.qp = mlx5_glue->create_qp_ex(priv->ctx, &attr.init);
 	if (tmpl.qp == NULL) {
-		ERROR("%p: QP creation failure", (void *)txq_ctrl);
+		ERROR("port %u Tx queue %u QP creation failure",
+		      dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
@@ -460,7 +467,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	ret = mlx5_glue->modify_qp(tmpl.qp, &attr.mod,
 				   (IBV_QP_STATE | IBV_QP_PORT));
 	if (ret) {
-		ERROR("%p: QP state to IBV_QPS_INIT failed", (void *)txq_ctrl);
+		ERROR("port %u Tx queue %u QP state to IBV_QPS_INIT failed",
+		      dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
@@ -469,21 +477,24 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	};
 	ret = mlx5_glue->modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE);
 	if (ret) {
-		ERROR("%p: QP state to IBV_QPS_RTR failed", (void *)txq_ctrl);
+		ERROR("port %u Tx queue %u QP state to IBV_QPS_RTR failed",
+		      dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
 	attr.mod.qp_state = IBV_QPS_RTS;
 	ret = mlx5_glue->modify_qp(tmpl.qp, &attr.mod, IBV_QP_STATE);
 	if (ret) {
-		ERROR("%p: QP state to IBV_QPS_RTS failed", (void *)txq_ctrl);
+		ERROR("port %u Tx queue %u QP state to IBV_QPS_RTS failed",
+		      dev->data->port_id, idx);
 		rte_errno = errno;
 		goto error;
 	}
 	txq_ibv = rte_calloc_socket(__func__, 1, sizeof(struct mlx5_txq_ibv), 0,
 				    txq_ctrl->socket);
 	if (!txq_ibv) {
-		ERROR("%p: cannot allocate memory", (void *)txq_ctrl);
+		ERROR("port %u Tx queue %u cannot allocate memory",
+		      dev->data->port_id, idx);
 		rte_errno = ENOMEM;
 		goto error;
 	}
@@ -497,8 +508,9 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 		goto error;
 	}
 	if (cq_info.cqe_size != RTE_CACHE_LINE_SIZE) {
-		ERROR("Wrong MLX5_CQE_SIZE environment variable value: "
-		      "it should be set to %u", RTE_CACHE_LINE_SIZE);
+		ERROR("port %u wrong MLX5_CQE_SIZE environment variable value: "
+		      "it should be set to %u", dev->data->port_id,
+		      RTE_CACHE_LINE_SIZE);
 		rte_errno = EINVAL;
 		goto error;
 	}
@@ -524,13 +536,15 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
 	if (qp.comp_mask & MLX5DV_QP_MASK_UAR_MMAP_OFFSET) {
 		txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset;
 	} else {
-		ERROR("Failed to retrieve UAR info, invalid libmlx5.so version");
+		ERROR("port %u failed to retrieve UAR info, invalid libmlx5.so",
+		      dev->data->port_id);
 		rte_errno = EINVAL;
 		goto error;
 	}
-	DEBUG("%p: Verbs Tx queue %p: refcnt %d", (void *)dev,
-	      (void *)txq_ibv, rte_atomic32_read(&txq_ibv->refcnt));
+	DEBUG("port %u Verbs Tx queue %u: refcnt %d", dev->data->port_id, idx,
+	      rte_atomic32_read(&txq_ibv->refcnt));
 	LIST_INSERT_HEAD(&priv->txqsibv, txq_ibv, next);
+	txq_ibv->txq_ctrl = txq_ctrl;
 	priv->verbs_alloc_ctx.type = MLX5_VERBS_ALLOC_TYPE_NONE;
 	return txq_ibv;
 error:
@@ -568,8 +582,8 @@ mlx5_txq_ibv_get(struct rte_eth_dev *dev, uint16_t idx)
 	txq_ctrl = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
 	if (txq_ctrl->ibv) {
 		rte_atomic32_inc(&txq_ctrl->ibv->refcnt);
-		DEBUG("%p: Verbs Tx queue %p: refcnt %d", (void *)dev,
-		      (void *)txq_ctrl->ibv,
+		DEBUG("port %u Verbs Tx queue %u: refcnt %d",
+		      dev->data->port_id, txq_ctrl->idx,
 		      rte_atomic32_read(&txq_ctrl->ibv->refcnt));
 	}
 	return txq_ctrl->ibv;
@@ -588,8 +602,9 @@ int
 mlx5_txq_ibv_release(struct mlx5_txq_ibv *txq_ibv)
 {
 	assert(txq_ibv);
-	DEBUG("Verbs Tx queue %p: refcnt %d",
-	      (void *)txq_ibv, rte_atomic32_read(&txq_ibv->refcnt));
+	DEBUG("port %u Verbs Tx queue %u: refcnt %d",
+	      txq_ibv->txq_ctrl->priv->dev->data->port_id,
+	      txq_ibv->txq_ctrl->idx, rte_atomic32_read(&txq_ibv->refcnt));
 	if (rte_atomic32_dec_and_test(&txq_ibv->refcnt)) {
 		claim_zero(mlx5_glue->destroy_qp(txq_ibv->qp));
 		claim_zero(mlx5_glue->destroy_cq(txq_ibv->cq));
@@ -630,8 +645,9 @@ mlx5_txq_ibv_verify(struct rte_eth_dev *dev)
 	struct mlx5_txq_ibv *txq_ibv;
 
 	LIST_FOREACH(txq_ibv, &priv->txqsibv, next) {
-		DEBUG("%p: Verbs Tx queue %p still referenced", (void *)dev,
-		      (void *)txq_ibv);
+		DEBUG("port %u Verbs Tx queue %u still referenced",
+		      dev->data->port_id,
+		      txq_ibv->txq_ctrl->idx);
 		++ret;
 	}
 	return ret;
@@ -722,9 +738,9 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 			max_inline = max_inline -
 				     (max_inline % RTE_CACHE_LINE_SIZE);
-			WARN("txq inline is too large (%d) setting it to "
-			     "the maximum possible: %d\n",
-			     txq_inline, max_inline);
+			WARN("port %u txq inline is too large (%d) setting it"
+			     " to the maximum possible: %d\n",
+			     priv->dev->data->port_id, txq_inline, max_inline);
 			txq_ctrl->txq.max_inline = max_inline /
 						   RTE_CACHE_LINE_SIZE;
 		}
@@ -775,18 +791,19 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->priv = priv;
 	tmpl->socket = socket;
 	tmpl->txq.elts_n = log2above(desc);
+	tmpl->idx = idx;
 	txq_set_params(tmpl);
 	/* MRs will be registered in mp2mr[] later. */
-	DEBUG("priv->device_attr.max_qp_wr is %d",
+	DEBUG("port %u priv->device_attr.max_qp_wr is %d", dev->data->port_id,
 	      priv->device_attr.orig_attr.max_qp_wr);
-	DEBUG("priv->device_attr.max_sge is %d",
+	DEBUG("port %u priv->device_attr.max_sge is %d", dev->data->port_id,
 	      priv->device_attr.orig_attr.max_sge);
 	tmpl->txq.elts =
 		(struct rte_mbuf *(*)[1 << tmpl->txq.elts_n])(tmpl + 1);
 	tmpl->txq.stats.idx = idx;
 	rte_atomic32_inc(&tmpl->refcnt);
-	DEBUG("%p: Tx queue %p: refcnt %d", (void *)dev,
-	      (void *)tmpl, rte_atomic32_read(&tmpl->refcnt));
+	DEBUG("port %u Tx queue %u: refcnt %d", dev->data->port_id,
+	      idx, rte_atomic32_read(&tmpl->refcnt));
 	LIST_INSERT_HEAD(&priv->txqsctrl, tmpl, next);
 	return tmpl;
 }
@@ -821,8 +838,8 @@ mlx5_txq_get(struct rte_eth_dev *dev, uint16_t idx)
 						ctrl->txq.mp2mr[i]->mp));
 		}
 		rte_atomic32_inc(&ctrl->refcnt);
-		DEBUG("%p: Tx queue %p: refcnt %d", (void *)dev,
-		      (void *)ctrl, rte_atomic32_read(&ctrl->refcnt));
+		DEBUG("port %u Tx queue %u: refcnt %d", dev->data->port_id,
+		      ctrl->idx, rte_atomic32_read(&ctrl->refcnt));
 	}
 	return ctrl;
 }
@@ -849,8 +866,8 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
 	if (!(*priv->txqs)[idx])
 		return 0;
 	txq = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
-	DEBUG("%p: Tx queue %p: refcnt %d", (void *)dev,
-	      (void *)txq, rte_atomic32_read(&txq->refcnt));
+	DEBUG("port %u Tx queue %u: refcnt %d", dev->data->port_id,
+	      txq->idx, rte_atomic32_read(&txq->refcnt));
 	if (txq->ibv && !mlx5_txq_ibv_release(txq->ibv))
 		txq->ibv = NULL;
 	for (i = 0; i != MLX5_PMD_TX_MP_CACHE; ++i) {
@@ -912,8 +929,8 @@ mlx5_txq_verify(struct rte_eth_dev *dev)
 	int ret = 0;
 
 	LIST_FOREACH(txq, &priv->txqsctrl, next) {
-		DEBUG("%p: Tx Queue %p still referenced", (void *)dev,
-		      (void *)txq);
+		DEBUG("port %u Tx queue %u still referenced",
+		      dev->data->port_id, txq->idx);
 		++ret;
 	}
 	return ret;
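The refcnt values traced by these DEBUG lines follow the get/release
discipline used for all Tx/Rx queue objects: _get() bumps the counter,
_release() destroys the Verbs resources on the last drop, and _verify()
counts objects still alive at device close so leaks show up in this
log. A minimal sketch of that pattern (hypothetical "obj" type, not the
driver's structs):

	#include <rte_atomic.h>

	struct obj {
		rte_atomic32_t refcnt;
	};

	static void
	obj_get(struct obj *o)
	{
		rte_atomic32_inc(&o->refcnt);
	}

	/* Return 0 when the last reference was dropped. */
	static int
	obj_release(struct obj *o)
	{
		if (rte_atomic32_dec_and_test(&o->refcnt)) {
			/* Destroy underlying resources here. */
			return 0;
		}
		return 1;
	}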
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 3246c0a38..f6643fcd4 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -45,8 +45,8 @@ mlx5_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 	struct priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	DEBUG("%p: %s VLAN filter ID %" PRIu16,
-	      (void *)dev, (on ? "enable" : "disable"), vlan_id);
+	DEBUG("port %u %s VLAN filter ID %" PRIu16,
+	      dev->data->port_id, (on ? "enable" : "disable"), vlan_id);
 	assert(priv->vlan_filter_n <= RTE_DIM(priv->vlan_filter));
 	for (i = 0; (i != priv->vlan_filter_n); ++i)
 		if (priv->vlan_filter[i] == vlan_id)
@@ -108,16 +108,18 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 
 	/* Validate hw support */
 	if (!priv->config.hw_vlan_strip) {
-		ERROR("VLAN stripping is not supported");
+		ERROR("port %u VLAN stripping is not supported",
+		      dev->data->port_id);
 		return;
 	}
 	/* Validate queue number */
 	if (queue >= priv->rxqs_n) {
-		ERROR("VLAN stripping, invalid queue number %d", queue);
+		ERROR("port %u VLAN stripping, invalid queue number %d",
+		      dev->data->port_id, queue);
 		return;
 	}
-	DEBUG("set VLAN offloads 0x%x for port %d queue %d",
-	      vlan_offloads, rxq->port_id, queue);
+	DEBUG("port %u set VLAN offloads 0x%x for port %u queue %d",
+	      dev->data->port_id, vlan_offloads, rxq->port_id, queue);
 	if (!rxq_ctrl->ibv) {
 		/* Update related bits in RX queue. */
 		rxq->vlan_strip = !!on;
@@ -130,8 +132,8 @@ mlx5_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	};
 	ret = mlx5_glue->modify_wq(rxq_ctrl->ibv->wq, &mod);
 	if (ret) {
-		ERROR("%p: failed to modified stripping mode: %s",
-		      (void *)dev, strerror(rte_errno));
+		ERROR("port %u failed to modify stripping mode: %s",
+		      dev->data->port_id, strerror(rte_errno));
 		return;
 	}
 	/* Update related bits in RX queue. */
@@ -160,7 +162,8 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 				       DEV_RX_OFFLOAD_VLAN_STRIP);
 
 		if (!priv->config.hw_vlan_strip) {
-			ERROR("VLAN stripping is not supported");
+			ERROR("port %u VLAN stripping is not supported",
+			      dev->data->port_id);
 			return 0;
 		}
 		/* Run on every RX queue and set/reset VLAN stripping. */