From patchwork Fri Aug  3 20:31:48 2018
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 43585
X-Patchwork-Delegate: shahafs@mellanox.com
From: Stephen Hemminger
To: yskoh@mellanox.com, shahafs@mellanox.com
Cc: dev@dpdk.org, Stephen Hemminger
Date: Fri, 3 Aug 2018 13:31:48 -0700
Message-Id: <20180803203148.5589-1-stephen@networkplumber.org>
X-Mailer: git-send-email 2.18.0
Subject: [dpdk-dev] [PATCH] mlx5: spelling fixes

Fix spelling errors in messages and comments.
Signed-off-by: Stephen Hemminger
---
 drivers/net/mlx5/mlx5_ethdev.c |  2 +-
 drivers/net/mlx5/mlx5_flow.c   |  4 ++--
 drivers/net/mlx5/mlx5_mr.c     |  8 ++++----
 drivers/net/mlx5/mlx5_rxq.c    | 20 ++++++++++----------
 drivers/net/mlx5/mlx5_rxtx.c   |  2 +-
 5 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 34c5b95ee6d2..2c838e6539b6 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -1138,7 +1138,7 @@ mlx5_dev_interrupt_handler_install(struct rte_eth_dev *dev)
 	}
 	ret = mlx5_socket_init(dev);
 	if (ret)
-		DRV_LOG(ERR, "port %u cannot initialise socket: %s",
+		DRV_LOG(ERR, "port %u cannot initialize socket: %s",
 			dev->data->port_id, strerror(rte_errno));
 	else if (priv->primary_socket) {
 		priv->intr_handle_socket.fd = priv->primary_socket;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index b94c442ec4e6..d13178be6ba1 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1090,7 +1090,7 @@ mlx5_flow_item_ipv6(const struct rte_flow_item *item, struct rte_flow *flow,
 					  item,
 					  "L3 cannot follow an L4 layer.");
 	/*
-	 * IPv6 is not recognised by the NIC inside a GRE tunnel.
+	 * IPv6 is not recognized by the NIC inside a GRE tunnel.
 	 * Such support has to be disabled as the rule will be
 	 * accepted. Issue reproduced with Mellanox OFED 4.3-3.0.2.1 and
 	 * Mellanox OFED 4.4-1.0.0.0.
@@ -1100,7 +1100,7 @@ mlx5_flow_item_ipv6(const struct rte_flow_item *item, struct rte_flow *flow,
 					  RTE_FLOW_ERROR_TYPE_ITEM,
 					  item,
 					  "IPv6 inside a GRE tunnel is"
-					  " not recognised.");
+					  " not recognized.");
 	if (!mask)
 		mask = &rte_flow_item_ipv6_mask;
 	ret = mlx5_flow_item_acceptable
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 1d1bcb5fe028..aa7ca355b4d8 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -446,7 +446,7 @@ mr_free(struct mlx5_mr *mr)
 }
 
 /**
- * Releass resources of detached MR having no online entry.
+ * Release resources of detached MR having no online entry.
  *
  * @param dev
  *   Pointer to Ethernet device.
@@ -496,7 +496,7 @@ mr_find_contig_memsegs_cb(const struct rte_memseg_list *msl,
 }
 
 /**
- * Create a new global Memroy Region (MR) for a missing virtual address.
+ * Create a new global Memory Region (MR) for a missing virtual address.
  * Register entire virtually contiguous memory chunk around the address.
  *
  * @param dev
@@ -553,7 +553,7 @@ mlx5_mr_create(struct rte_eth_dev *dev, struct mlx5_mr_cache *entry,
 	 * Find out a contiguous virtual address chunk in use, to which the
 	 * given address belongs, in order to register maximum range. In the
 	 * best case where mempools are not dynamically recreated and
-	 * '--socket-mem' is speicified as an EAL option, it is very likely to
+	 * '--socket-mem' is specified as an EAL option, it is very likely to
 	 * have only one MR(LKey) per a socket and per a hugepage-size even
 	 * though the system memory is highly fragmented.
 	 */
@@ -604,7 +604,7 @@ mlx5_mr_create(struct rte_eth_dev *dev, struct mlx5_mr_cache *entry,
 	bmp_mem = RTE_PTR_ALIGN_CEIL(mr + 1, RTE_CACHE_LINE_SIZE);
 	mr->ms_bmp = rte_bitmap_init(ms_n, bmp_mem, bmp_size);
 	if (mr->ms_bmp == NULL) {
-		DEBUG("port %u unable to initialize bitamp for a new MR of"
+		DEBUG("port %u unable to initialize bitmap for a new MR of"
 		      " address (%p).",
 		      dev->data->port_id, (void *)addr);
 		rte_errno = EINVAL;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 16e1641d00bc..7c2d65ff2007 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -751,7 +751,7 @@ mlx5_rx_intr_disable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
  *   Queue index in DPDK Rx queue array
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_ibv *
 mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
@@ -1179,7 +1179,7 @@ mlx5_mprq_free_mp(struct rte_eth_dev *dev)
 
 /**
  * Allocate a mempool for Multi-Packet RQ. All configured Rx queues share the
- * mempool. If already allocated, reuse it if there're enough elements.
+ * mempool. If already allocated, reuse it if there are enough elements.
 * Otherwise, resize it.
  *
  * @param dev
@@ -1234,7 +1234,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	 */
 	desc *= 4;
 	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * priv->rxqs_n;
-	/* Check a mempool is already allocated and if it can be resued. */
+	/* Check a mempool is already allocated and if it can be reused. */
 	if (mp != NULL && mp->elt_size >= obj_size && mp->size >= obj_num) {
 		DRV_LOG(DEBUG, "port %u mempool %s is being reused",
 			dev->data->port_id, mp->name);
@@ -1583,7 +1583,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
  *   Number of queues in the array.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
  */
 struct mlx5_ind_table_ibv *
 mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, const uint16_t *queues,
@@ -1613,7 +1613,7 @@ mlx5_ind_table_ibv_new(struct rte_eth_dev *dev, const uint16_t *queues,
 		ind_tbl->queues[i] = queues[i];
 	}
 	ind_tbl->queues_n = queues_n;
-	/* Finalise indirection table. */
+	/* Finalize indirection table. */
 	for (j = 0; i != (unsigned int)(1 << wq_n); ++i, ++j)
 		wq[i] = wq[j];
 	ind_tbl->ind_table = mlx5_glue->create_rwq_ind_table
@@ -1746,7 +1746,7 @@ mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev)
  *   Number of queues.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
  */
 struct mlx5_hrxq *
 mlx5_hrxq_new(struct rte_eth_dev *dev,
@@ -1950,7 +1950,7 @@ mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev)
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
  */
 struct mlx5_rxq_ibv *
 mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev)
@@ -2009,7 +2009,7 @@ mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev)
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
  */
 void
 mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev)
@@ -2032,7 +2032,7 @@ mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev)
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
  */
 struct mlx5_ind_table_ibv *
 mlx5_ind_table_ibv_drop_new(struct rte_eth_dev *dev)
@@ -2096,7 +2096,7 @@ mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev)
  *   Pointer to Ethernet device.
  *
  * @return
- *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ *   The Verbs object initialized, NULL otherwise and rte_errno is set.
  */
 struct mlx5_hrxq *
 mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 2d14f8a6edbd..95cbc513ca7b 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1507,7 +1507,7 @@ txq_burst_empw(struct mlx5_txq_data *txq, struct rte_mbuf **pkts,
 			mpw.wqe->eseg.cs_flags = cs_flags;
 		} else {
 			/* Evaluate whether the next packet can be inlined.
-			 * Inlininig is possible when:
+			 * Inlining is possible when:
 			 * - length is less than configured value
 			 * - length fits for remaining space
 			 * - not required to fill the title WQEBB with dsegs