From patchwork Fri Nov 26 19:58:51 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Josh Soref
X-Patchwork-Id: 104726
From: Josh Soref
To: dev@dpdk.org
Cc: Josh Soref
Subject: [PATCH] Spelling
Date: Fri, 26 Nov 2021 14:58:51 -0500
Message-Id: <20211126195851.50167-1-jsoref@users.noreply.github.com>
X-Mailer: git-send-email 2.30.1 (Apple Git-130)
X-Mailman-Approved-At: Sun, 28 Nov 2021 13:56:54 +0100
List-Id: DPDK patches and discussions
* 0x hex * acceptable * account * accounting * accumulate * accumulates * acknowledge * action * actions * activate * active * actively * actually * adapter * adaptive * adding * address * adjusted * aggregator * aggregators * aggressive * algorithm * alignment * allocate * allocated * allocation * alphabetically * although * always * annotation * approx * approximate * arbitrary * archive * argument * array * associated * assumption * attached * attempts * attributes * authentic * authentication * autogreen * available * average * backplane * backtracking * bandwidth * barrier * basically * bearer * before * begin * beginning * beyond * biggest * boundaries * buffer * buffers * but * by * calculate * calculations * cannot * capabilities * capability * chained * chaining * characteristics * 
checked * checks * checksum * choices * chunk * cipher * classes * classification * classifier * coalescing * command * commit * communicate * comparison * compatibility * completion * config * configuration * configurations * configure * configured * configuring * congestion * connected * connection * consecutive * constant * consumed * consumer * container * containing * contrary * conversion * corresponds * corruption * corrupts * couldn't * cprintf * crypto * cryptographic * current * currently * cycles * datapath * datastructure * decapsulation * default * deferred * defined * definition * definitions * deinitialization * delete * deletion * demonstrates * demonstrating * dependent * depends * dequeuing * derived * described * descriptor * descriptors * destination * destroy * destroying * detach * determine * determined * device * diagrams * differentiate * director * discard * discrimination * distributing * distribution * divergent * domain * doorbell * dropper * duplex * duplicate * effective * efficient * element * empty * enable * enabled * enabling * encapsulate * encoding * endian * enough * enqueued * entries * entry * equally * errno * erroneous * error * ethertype * exceed * exceeds * exclusively * executed * exede * exhaustion * expansion * expected * expects * experimental * expiry * explicit * extended * externally * failed * fairness * fallen * feature * fiber * fields * filters * filters' info * firmware * forwarding * fragment * framework * free * frequencies * frequency * functionality * further * generate * generator * geneve * global * greater * groupid * grpmask * guaranteed * handler * hardware * hash * header * hexadecimal * hierarchical * identical * identification * identifier * identifies * ignore * ignoring * immutable * implicitly * important * imprecise * inconsistent * incremented * incrementing * index * indexed * indicate * indicates * indication * individual * infiniband * information * inherent * inherited * initialization * 
initialize * insert * inserted * instant * instincts * instructions * insufficient * integrity * interim * internal * interrupt * interrupts * interval * intrinsics * invalid * involves * joining * kernel * lapsed * legacy * length * lengths * license * limited * little * lookup * loopback * mantissa * mapping * maximal * maximum * maxmsivector * measure * memory * message * metadata * metrics * minimize * mismatch * mmappable * mmapped * modify * moment * monitor * mult * multicore * multimatch * multiple * negative * negotiation * nonexistent * notification * notifications * observed * occupied * occurs * octeontx * odpvalid * offload * offloads * offset * operation * operations * order * other than * otherwise * overflow * overhead * override * overwritten * packet * packets * palladium * parallelism * parameter * parameters * partner * passive * pause * peer * pending * per port * period * permanent * personalities * physical * platform * pointer * points * policies * policy * polynomials * populate * portctl * postponed * preemptible * preference * prefetch * prefetching * prefix * preparation * prerequisite * present * preserve * previous * primary * prior * priorities * priority * probability * processed * processing * prodding * profile * programmatically * promisc * promiscuous * properties * protocol * provisioned * qgroup * quantity * queue * queueable * queues * quiescent * reasonably * reassemble * reassembly * recalculate * receive * receiving * recommended * redundant * reflected * register * registering * registers * registration * regular * release * relevant * remapped * removed * replace * request * required * reserved * resettable * resolution * resource * resources * respectively * response * restoration * resulting * results * retransmission * retrieval * retrieve * retrieving * return * revision * robust * routes * routines * scatter * scattered * scenario * schedule * search * searching * second * segment * segregating * selected * sequence 
* series * service * session * shaping * shift * signature * similar * simplify * simultaneously * single * situation * skeleton * slave * something * specific * specification * specified * specifies * staging * standalone * standard * state * statically * statistics * status for * strategy * string * structure * structures * submission * subsystem * subtraction * succeeded * successful * successfully * supplied * support * supported * synchronization * synthetic * tagged * technique * template * tentatively * terminate * termination * threshold * ticketlock * ticks * timestamp * together * token * traffic * translation * transmit * truncate * tunnel * tuple * typically * unavailable * unconditionally * unexpectedly * unfortunately * uninitialize * unrecognizable * unrecognized * unregistering * unsupported * until * up * update * usage * validation * values * variables * vector * vectors * verification * verify * violation * virtchnl * warnings * weight * where * wherever * whether * without * workaround * worker * written * xstats * zeroed Signed-off-by: Josh Soref --- app/proc-info/main.c | 6 +- app/test-acl/main.c | 6 +- .../comp_perf_test_cyclecount.c | 2 +- .../comp_perf_test_throughput.c | 2 +- .../comp_perf_test_verify.c | 2 +- app/test-compress-perf/main.c | 2 +- .../cperf_test_pmd_cyclecount.c | 2 +- app/test-crypto-perf/cperf_test_vectors.h | 4 +- app/test-eventdev/evt_options.c | 2 +- app/test-eventdev/test_order_common.c | 2 +- app/test-fib/main.c | 4 +- app/test-flow-perf/config.h | 2 +- app/test-flow-perf/main.c | 2 +- app/test-pmd/cmdline.c | 2 +- app/test-pmd/cmdline_flow.c | 6 +- app/test-pmd/cmdline_tm.c | 4 +- app/test-pmd/csumonly.c | 2 +- app/test-pmd/parameters.c | 2 +- app/test-pmd/testpmd.c | 2 +- app/test-pmd/txonly.c | 4 +- app/test/test_barrier.c | 2 +- app/test/test_bpf.c | 4 +- app/test/test_compressdev.c | 2 +- app/test/test_cryptodev.c | 2 +- app/test/test_fib_perf.c | 2 +- app/test/test_kni.c | 4 +- app/test/test_kvargs.c | 16 ++-- 
app/test/test_link_bonding.c | 4 +- app/test/test_link_bonding_mode4.c | 30 +++---- app/test/test_lpm6_data.h | 2 +- app/test/test_member.c | 2 +- app/test/test_mempool.c | 4 +- app/test/test_memzone.c | 6 +- app/test/test_metrics.c | 2 +- app/test/test_pcapng.c | 2 +- app/test/test_power_cpufreq.c | 2 +- app/test/test_rcu_qsbr.c | 4 +- app/test/test_red.c | 8 +- app/test/test_security.c | 2 +- app/test/test_table.h | 2 +- app/test/test_table_pipeline.c | 2 +- app/test/test_thash.c | 2 +- buildtools/binutils-avx512-check.py | 2 +- devtools/check-symbol-change.sh | 6 +- .../virtio_user_for_container_networking.svg | 2 +- doc/guides/nics/af_packet.rst | 2 +- doc/guides/nics/mlx4.rst | 2 +- doc/guides/nics/mlx5.rst | 6 +- doc/guides/prog_guide/cryptodev_lib.rst | 2 +- .../prog_guide/env_abstraction_layer.rst | 4 +- doc/guides/prog_guide/img/turbo_tb_decode.svg | 2 +- doc/guides/prog_guide/img/turbo_tb_encode.svg | 2 +- doc/guides/prog_guide/qos_framework.rst | 6 +- doc/guides/prog_guide/rte_flow.rst | 2 +- doc/guides/rawdevs/cnxk_bphy.rst | 2 +- doc/guides/regexdevs/features_overview.rst | 2 +- doc/guides/rel_notes/release_16_07.rst | 2 +- doc/guides/rel_notes/release_17_08.rst | 2 +- doc/guides/rel_notes/release_2_1.rst | 2 +- doc/guides/sample_app_ug/ip_reassembly.rst | 2 +- doc/guides/sample_app_ug/l2_forward_cat.rst | 2 +- doc/guides/sample_app_ug/server_node_efd.rst | 2 +- doc/guides/sample_app_ug/skeleton.rst | 2 +- .../sample_app_ug/vm_power_management.rst | 2 +- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 +- drivers/baseband/acc100/rte_acc100_pmd.c | 24 +++--- drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 8 +- drivers/baseband/null/bbdev_null.c | 2 +- .../baseband/turbo_sw/bbdev_turbo_software.c | 2 +- drivers/bus/dpaa/dpaa_bus.c | 2 +- drivers/bus/dpaa/include/fsl_qman.h | 6 +- drivers/bus/dpaa/include/fsl_usd.h | 2 +- drivers/bus/dpaa/include/process.h | 2 +- drivers/bus/fslmc/fslmc_bus.c | 2 +- drivers/bus/fslmc/fslmc_vfio.h | 2 +- 
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 2 +- drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +- .../fslmc/qbman/include/fsl_qbman_portal.h | 20 ++--- drivers/bus/pci/linux/pci_vfio.c | 2 +- drivers/bus/vdev/rte_bus_vdev.h | 2 +- drivers/bus/vmbus/vmbus_common.c | 2 +- drivers/common/cnxk/roc_bphy_cgx.c | 2 +- drivers/common/cnxk/roc_cpt.c | 10 +-- drivers/common/cnxk/roc_cpt_priv.h | 2 +- drivers/common/cnxk/roc_mbox.h | 4 +- drivers/common/cnxk/roc_nix_bpf.c | 22 ++--- drivers/common/cnxk/roc_nix_tm_ops.c | 2 +- drivers/common/cnxk/roc_npc_mcam.c | 2 +- drivers/common/cnxk/roc_npc_priv.h | 2 +- drivers/common/cnxk/roc_tim.c | 2 +- drivers/common/cpt/cpt_ucode.h | 4 +- drivers/common/cpt/cpt_ucode_asym.h | 2 +- drivers/common/dpaax/caamflib/desc/algo.h | 2 +- drivers/common/dpaax/caamflib/desc/ipsec.h | 4 +- drivers/common/dpaax/caamflib/desc/sdap.h | 6 +- .../common/dpaax/caamflib/rta/operation_cmd.h | 6 +- drivers/common/dpaax/dpaax_iova_table.c | 2 +- drivers/common/iavf/iavf_type.h | 2 +- drivers/common/iavf/virtchnl.h | 2 +- drivers/common/mlx5/mlx5_common.c | 2 +- drivers/common/mlx5/mlx5_common_mr.c | 2 +- drivers/common/mlx5/mlx5_devx_cmds.c | 2 +- drivers/common/mlx5/mlx5_devx_cmds.h | 2 +- drivers/common/mlx5/mlx5_malloc.c | 4 +- drivers/common/mlx5/mlx5_malloc.h | 2 +- drivers/common/mlx5/mlx5_prm.h | 6 +- drivers/common/mlx5/windows/mlx5_common_os.c | 4 +- drivers/common/mlx5/windows/mlx5_common_os.h | 2 +- drivers/common/octeontx2/otx2_mbox.h | 4 +- .../qat/qat_adf/adf_transport_access_macros.h | 2 +- drivers/common/sfc_efx/efsys.h | 2 +- drivers/compress/octeontx/include/zip_regs.h | 4 +- drivers/compress/octeontx/otx_zip.h | 4 +- drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 4 +- drivers/compress/qat/qat_comp.c | 12 +-- drivers/compress/qat/qat_comp_pmd.c | 8 +- drivers/compress/qat/qat_comp_pmd.h | 2 +- drivers/crypto/bcmfs/bcmfs_device.h | 2 +- drivers/crypto/bcmfs/bcmfs_qp.c | 2 +- drivers/crypto/bcmfs/bcmfs_sym_defs.h | 6 +- 
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 2 +- drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 2 +- drivers/crypto/caam_jr/caam_jr.c | 4 +- drivers/crypto/caam_jr/caam_jr_hw_specific.h | 4 +- drivers/crypto/caam_jr/caam_jr_pvt.h | 4 +- drivers/crypto/caam_jr/caam_jr_uio.c | 2 +- drivers/crypto/ccp/ccp_crypto.c | 2 +- drivers/crypto/ccp/ccp_crypto.h | 2 +- drivers/crypto/ccp/ccp_dev.h | 2 +- drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 4 +- drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 30 +++---- drivers/crypto/dpaa_sec/dpaa_sec.c | 4 +- .../crypto/octeontx/otx_cryptodev_hw_access.c | 2 +- drivers/crypto/octeontx/otx_cryptodev_mbox.h | 2 +- drivers/crypto/octeontx/otx_cryptodev_ops.c | 2 +- drivers/crypto/qat/qat_asym.c | 2 +- drivers/crypto/qat/qat_sym.c | 2 +- drivers/crypto/qat/qat_sym_session.h | 2 +- drivers/crypto/virtio/virtio_cryptodev.c | 6 +- drivers/crypto/virtio/virtqueue.c | 2 +- drivers/crypto/virtio/virtqueue.h | 4 +- drivers/dma/ioat/ioat_dmadev.c | 2 +- drivers/dma/ioat/ioat_hw_defs.h | 2 +- drivers/dma/skeleton/skeleton_dmadev.c | 2 +- drivers/event/cnxk/cnxk_eventdev_selftest.c | 4 +- drivers/event/dlb2/dlb2.c | 2 +- drivers/event/dlb2/dlb2_priv.h | 2 +- drivers/event/dlb2/dlb2_selftest.c | 2 +- drivers/event/dlb2/rte_pmd_dlb2.h | 2 +- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 2 +- drivers/event/dsw/dsw_evdev.h | 4 +- drivers/event/dsw/dsw_event.c | 4 +- drivers/event/octeontx/ssovf_evdev.h | 2 +- drivers/event/octeontx/ssovf_evdev_selftest.c | 2 +- drivers/event/octeontx2/otx2_evdev_selftest.c | 2 +- drivers/event/octeontx2/otx2_tim_evdev.c | 4 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- drivers/event/opdl/opdl_evdev.c | 2 +- drivers/event/opdl/opdl_test.c | 2 +- drivers/event/sw/sw_evdev.h | 2 +- drivers/event/sw/sw_evdev_selftest.c | 2 +- drivers/mempool/dpaa/dpaa_mempool.c | 2 +- drivers/mempool/octeontx/octeontx_fpavf.c | 4 +- drivers/net/ark/ark_ethdev.c | 4 +- drivers/net/ark/ark_global.h | 2 +- drivers/net/ark/ark_rqp.c | 4 +- 
drivers/net/ark/ark_rqp.h | 4 +- drivers/net/atlantic/atl_ethdev.c | 2 +- drivers/net/atlantic/atl_rxtx.c | 2 +- drivers/net/atlantic/hw_atl/hw_atl_b0.c | 2 +- drivers/net/axgbe/axgbe_dev.c | 2 +- drivers/net/axgbe/axgbe_ethdev.c | 2 +- drivers/net/axgbe/axgbe_ethdev.h | 2 +- drivers/net/axgbe/axgbe_phy_impl.c | 4 +- drivers/net/axgbe/axgbe_rxtx_vec_sse.c | 2 +- drivers/net/bnx2x/bnx2x.c | 56 ++++++------ drivers/net/bnx2x/bnx2x.h | 16 ++-- drivers/net/bnx2x/bnx2x_stats.c | 10 +-- drivers/net/bnx2x/bnx2x_stats.h | 8 +- drivers/net/bnx2x/bnx2x_vfpf.c | 2 +- drivers/net/bnx2x/bnx2x_vfpf.h | 2 +- drivers/net/bnx2x/ecore_fw_defs.h | 2 +- drivers/net/bnx2x/ecore_hsi.h | 54 ++++++------ drivers/net/bnx2x/ecore_init.h | 2 +- drivers/net/bnx2x/ecore_init_ops.h | 6 +- drivers/net/bnx2x/ecore_reg.h | 40 ++++----- drivers/net/bnx2x/ecore_sp.c | 42 ++++----- drivers/net/bnx2x/ecore_sp.h | 8 +- drivers/net/bnx2x/elink.c | 52 +++++------ drivers/net/bnx2x/elink.h | 2 +- drivers/net/bnxt/bnxt_hwrm.c | 2 +- drivers/net/bnxt/tf_core/tfp.c | 2 +- drivers/net/bnxt/tf_core/tfp.h | 2 +- drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c | 4 +- drivers/net/bonding/eth_bond_8023ad_private.h | 2 +- drivers/net/bonding/eth_bond_private.h | 2 +- drivers/net/bonding/rte_eth_bond_8023ad.c | 20 ++--- drivers/net/bonding/rte_eth_bond_8023ad.h | 4 +- drivers/net/bonding/rte_eth_bond_alb.h | 2 +- drivers/net/bonding/rte_eth_bond_api.c | 4 +- drivers/net/cnxk/cn10k_ethdev.h | 2 +- drivers/net/cnxk/cn10k_tx.h | 6 +- drivers/net/cnxk/cn9k_tx.h | 6 +- drivers/net/cnxk/cnxk_ptp.c | 2 +- drivers/net/cxgbe/cxgbe_flow.c | 2 +- drivers/net/cxgbe/cxgbevf_main.c | 2 +- drivers/net/cxgbe/sge.c | 8 +- drivers/net/dpaa/dpaa_ethdev.c | 6 +- drivers/net/dpaa/dpaa_rxtx.c | 4 +- drivers/net/dpaa/fmlib/fm_ext.h | 2 +- drivers/net/dpaa/fmlib/fm_pcd_ext.h | 8 +- drivers/net/dpaa/fmlib/fm_port_ext.h | 18 ++-- drivers/net/dpaa2/dpaa2_ethdev.c | 14 +-- drivers/net/dpaa2/dpaa2_ethdev.h | 2 +- drivers/net/dpaa2/dpaa2_flow.c | 12 
+-- drivers/net/dpaa2/dpaa2_mux.c | 6 +- drivers/net/dpaa2/dpaa2_rxtx.c | 6 +- drivers/net/dpaa2/mc/dpdmux.c | 8 +- drivers/net/dpaa2/mc/fsl_dpdmux.h | 4 +- drivers/net/dpaa2/mc/fsl_dpni.h | 10 +-- drivers/net/e1000/e1000_ethdev.h | 4 +- drivers/net/e1000/em_ethdev.c | 10 +-- drivers/net/e1000/em_rxtx.c | 6 +- drivers/net/e1000/igb_ethdev.c | 18 ++-- drivers/net/e1000/igb_flow.c | 4 +- drivers/net/e1000/igb_pf.c | 2 +- drivers/net/e1000/igb_rxtx.c | 14 +-- drivers/net/ena/ena_ethdev.c | 2 +- drivers/net/ena/ena_ethdev.h | 2 +- drivers/net/enetfec/enet_regs.h | 2 +- drivers/net/enic/enic_flow.c | 64 +++++++------- drivers/net/enic/enic_fm_flow.c | 10 +-- drivers/net/enic/enic_main.c | 2 +- drivers/net/enic/enic_rxtx.c | 2 +- drivers/net/fm10k/fm10k.h | 2 +- drivers/net/fm10k/fm10k_ethdev.c | 12 +-- drivers/net/fm10k/fm10k_rxtx_vec.c | 10 +-- drivers/net/hinic/hinic_pmd_ethdev.c | 4 +- drivers/net/hinic/hinic_pmd_ethdev.h | 2 +- drivers/net/hinic/hinic_pmd_flow.c | 8 +- drivers/net/hinic/hinic_pmd_rx.c | 30 +++---- drivers/net/hinic/hinic_pmd_tx.c | 2 +- drivers/net/hns3/hns3_cmd.c | 4 +- drivers/net/hns3/hns3_common.c | 2 +- drivers/net/hns3/hns3_dcb.c | 86 +++++++++---------- drivers/net/hns3/hns3_dcb.h | 24 +++--- drivers/net/hns3/hns3_ethdev.c | 32 +++---- drivers/net/hns3/hns3_ethdev.h | 8 +- drivers/net/hns3/hns3_ethdev_vf.c | 6 +- drivers/net/hns3/hns3_fdir.c | 4 +- drivers/net/hns3/hns3_fdir.h | 2 +- drivers/net/hns3/hns3_flow.c | 12 +-- drivers/net/hns3/hns3_mbx.c | 4 +- drivers/net/hns3/hns3_mbx.h | 2 +- drivers/net/hns3/hns3_rss.h | 2 +- drivers/net/hns3/hns3_rxtx.c | 16 ++-- drivers/net/hns3/hns3_rxtx.h | 2 +- drivers/net/hns3/hns3_stats.c | 4 +- drivers/net/hns3/hns3_tm.c | 4 +- drivers/net/i40e/i40e_ethdev.c | 12 +-- drivers/net/i40e/i40e_ethdev.h | 10 +-- drivers/net/i40e/i40e_fdir.c | 10 +-- drivers/net/i40e/i40e_flow.c | 2 +- drivers/net/i40e/i40e_pf.c | 4 +- drivers/net/i40e/i40e_rxtx.c | 20 ++--- drivers/net/i40e/i40e_rxtx_vec_altivec.c | 2 +- 
drivers/net/i40e/i40e_rxtx_vec_neon.c | 4 +- drivers/net/i40e/i40e_rxtx_vec_sse.c | 6 +- drivers/net/i40e/rte_pmd_i40e.c | 2 +- drivers/net/iavf/iavf_ethdev.c | 14 +-- drivers/net/iavf/iavf_hash.c | 6 +- drivers/net/iavf/iavf_ipsec_crypto.c | 18 ++-- drivers/net/iavf/iavf_ipsec_crypto.h | 6 +- drivers/net/iavf/iavf_rxtx.c | 4 +- drivers/net/iavf/iavf_rxtx_vec_sse.c | 4 +- drivers/net/iavf/iavf_vchnl.c | 4 +- drivers/net/ice/ice_dcf.c | 2 +- drivers/net/ice/ice_dcf_ethdev.c | 2 +- drivers/net/ice/ice_ethdev.c | 12 +-- drivers/net/ice/ice_rxtx.c | 10 +-- drivers/net/ice/ice_rxtx_vec_sse.c | 4 +- drivers/net/ice/ice_switch_filter.c | 26 +++--- drivers/net/igc/igc_filter.c | 2 +- drivers/net/igc/igc_txrx.c | 4 +- drivers/net/ionic/ionic_if.h | 6 +- drivers/net/ipn3ke/ipn3ke_ethdev.c | 2 +- drivers/net/ipn3ke/ipn3ke_ethdev.h | 4 +- drivers/net/ipn3ke/ipn3ke_flow.c | 2 +- drivers/net/ipn3ke/ipn3ke_representor.c | 12 +-- drivers/net/ipn3ke/ipn3ke_tm.c | 4 +- drivers/net/ipn3ke/meson.build | 2 +- drivers/net/ixgbe/ixgbe_bypass.c | 2 +- drivers/net/ixgbe/ixgbe_bypass_api.h | 4 +- drivers/net/ixgbe/ixgbe_ethdev.c | 18 ++-- drivers/net/ixgbe/ixgbe_ethdev.h | 2 +- drivers/net/ixgbe/ixgbe_fdir.c | 2 +- drivers/net/ixgbe/ixgbe_flow.c | 4 +- drivers/net/ixgbe/ixgbe_ipsec.c | 2 +- drivers/net/ixgbe/ixgbe_pf.c | 2 +- drivers/net/ixgbe/ixgbe_rxtx.c | 10 +-- drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c | 2 +- drivers/net/memif/memif_socket.c | 2 +- drivers/net/memif/rte_eth_memif.c | 2 +- drivers/net/mlx4/mlx4.h | 2 +- drivers/net/mlx4/mlx4_ethdev.c | 2 +- drivers/net/mlx5/linux/mlx5_os.c | 8 +- drivers/net/mlx5/mlx5.c | 12 +-- drivers/net/mlx5/mlx5.h | 12 +-- drivers/net/mlx5/mlx5_flow.c | 28 +++--- drivers/net/mlx5/mlx5_flow.h | 6 +- drivers/net/mlx5/mlx5_flow_aso.c | 8 +- drivers/net/mlx5/mlx5_flow_dv.c | 56 ++++++------ drivers/net/mlx5/mlx5_flow_flex.c | 4 +- drivers/net/mlx5/mlx5_flow_meter.c | 16 ++-- drivers/net/mlx5/mlx5_rx.c | 2 +- drivers/net/mlx5/mlx5_rxq.c | 4 +- 
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 2 +- drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 2 +- drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 2 +- drivers/net/mlx5/mlx5_tx.c | 2 +- drivers/net/mlx5/mlx5_utils.h | 2 +- drivers/net/mlx5/windows/mlx5_flow_os.c | 2 +- drivers/net/mlx5/windows/mlx5_os.c | 2 +- drivers/net/mvneta/mvneta_ethdev.c | 2 +- drivers/net/mvpp2/mrvl_ethdev.c | 2 +- drivers/net/mvpp2/mrvl_qos.c | 4 +- drivers/net/netvsc/hn_nvs.c | 2 +- drivers/net/netvsc/hn_rxtx.c | 4 +- drivers/net/netvsc/hn_vf.c | 2 +- .../net/nfp/nfpcore/nfp-common/nfp_resid.h | 6 +- drivers/net/nfp/nfpcore/nfp_cppcore.c | 2 +- drivers/net/nfp/nfpcore/nfp_nsp.h | 2 +- drivers/net/nfp/nfpcore/nfp_resource.c | 2 +- drivers/net/nfp/nfpcore/nfp_rtsym.c | 2 +- drivers/net/ngbe/ngbe_ethdev.c | 14 +-- drivers/net/ngbe/ngbe_pf.c | 2 +- drivers/net/octeontx/octeontx_ethdev.c | 2 +- drivers/net/octeontx2/otx2_ethdev_irq.c | 2 +- drivers/net/octeontx2/otx2_ptp.c | 2 +- drivers/net/octeontx2/otx2_tx.h | 4 +- drivers/net/octeontx2/otx2_vlan.c | 2 +- drivers/net/octeontx_ep/otx2_ep_vf.c | 2 +- drivers/net/octeontx_ep/otx_ep_vf.c | 2 +- drivers/net/pfe/pfe_ethdev.c | 6 +- drivers/net/pfe/pfe_hal.c | 2 +- drivers/net/pfe/pfe_hif.c | 4 +- drivers/net/pfe/pfe_hif.h | 2 +- drivers/net/pfe/pfe_hif_lib.c | 8 +- drivers/net/qede/qede_debug.c | 14 +-- drivers/net/qede/qede_ethdev.c | 2 +- drivers/net/qede/qede_rxtx.c | 12 +-- drivers/net/qede/qede_rxtx.h | 2 +- drivers/net/sfc/sfc.c | 2 +- drivers/net/sfc/sfc_dp.c | 2 +- drivers/net/sfc/sfc_dp_rx.h | 4 +- drivers/net/sfc/sfc_ef100.h | 2 +- drivers/net/sfc/sfc_ef100_rx.c | 2 +- drivers/net/sfc/sfc_ef10_essb_rx.c | 2 +- drivers/net/sfc/sfc_ef10_rx_ev.h | 2 +- drivers/net/sfc/sfc_intr.c | 2 +- drivers/net/sfc/sfc_mae.c | 50 +++++------ drivers/net/sfc/sfc_rx.c | 6 +- drivers/net/sfc/sfc_tso.h | 10 +-- drivers/net/sfc/sfc_tx.c | 2 +- drivers/net/softnic/rte_eth_softnic_flow.c | 2 +- drivers/net/tap/rte_eth_tap.c | 2 +- drivers/net/tap/tap_bpf_api.c | 4 +- 
drivers/net/tap/tap_flow.c | 4 +- drivers/net/thunderx/nicvf_svf.c | 2 +- drivers/net/txgbe/txgbe_ethdev.c | 14 +-- drivers/net/txgbe/txgbe_ethdev_vf.c | 6 +- drivers/net/txgbe/txgbe_ipsec.c | 2 +- drivers/net/txgbe/txgbe_pf.c | 2 +- drivers/net/virtio/virtio_ethdev.c | 4 +- drivers/net/virtio/virtio_pci.c | 2 +- drivers/net/virtio/virtio_rxtx.c | 2 +- drivers/net/virtio/virtio_rxtx_packed_avx.h | 2 +- drivers/net/virtio/virtqueue.c | 2 +- drivers/net/virtio/virtqueue.h | 4 +- drivers/raw/dpaa2_qdma/dpaa2_qdma.c | 2 +- drivers/raw/dpaa2_qdma/dpaa2_qdma.h | 4 +- drivers/raw/ifpga/ifpga_rawdev.c | 10 +-- drivers/raw/ioat/ioat_rawdev.c | 2 +- drivers/raw/ioat/ioat_spec.h | 2 +- drivers/raw/ntb/ntb.h | 2 +- drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +- drivers/vdpa/mlx5/mlx5_vdpa_mem.c | 2 +- drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 2 +- examples/bbdev_app/main.c | 2 +- examples/bond/main.c | 4 +- examples/dma/dmafwd.c | 2 +- examples/ethtool/lib/rte_ethtool.c | 2 +- examples/ethtool/lib/rte_ethtool.h | 4 +- examples/ip_reassembly/main.c | 4 +- examples/ipsec-secgw/event_helper.c | 2 +- examples/ipsec-secgw/ipsec-secgw.c | 14 +-- examples/ipsec-secgw/sa.c | 6 +- examples/ipsec-secgw/sp4.c | 2 +- examples/ipsec-secgw/sp6.c | 2 +- examples/ipsec-secgw/test/common_defs.sh | 4 +- examples/kni/main.c | 2 +- examples/l2fwd-cat/l2fwd-cat.c | 2 +- examples/l2fwd-event/l2fwd_event_generic.c | 2 +- .../l2fwd-event/l2fwd_event_internal_port.c | 2 +- examples/l2fwd-jobstats/main.c | 2 +- examples/l3fwd-acl/main.c | 6 +- examples/l3fwd-power/main.c | 4 +- examples/l3fwd/l3fwd_common.h | 4 +- examples/l3fwd/l3fwd_neon.h | 2 +- examples/l3fwd/l3fwd_sse.h | 2 +- examples/multi_process/hotplug_mp/commands.c | 2 +- examples/multi_process/simple_mp/main.c | 2 +- examples/multi_process/symmetric_mp/main.c | 2 +- examples/ntb/ntb_fwd.c | 2 +- examples/packet_ordering/main.c | 2 +- examples/performance-thread/common/lthread.c | 6 +- .../performance-thread/common/lthread_diag.c | 2 +- 
.../performance-thread/common/lthread_int.h | 2 +- .../performance-thread/common/lthread_tls.c | 2 +- .../performance-thread/l3fwd-thread/main.c | 12 +-- .../pthread_shim/pthread_shim.h | 2 +- examples/pipeline/examples/registers.spec | 2 +- examples/qos_sched/cmdline.c | 2 +- examples/server_node_efd/node/node.c | 2 +- examples/skeleton/basicfwd.c | 2 +- examples/vhost/main.c | 10 +-- examples/vhost/virtio_net.c | 50 +++++------ examples/vm_power_manager/channel_monitor.c | 2 +- examples/vm_power_manager/power_manager.h | 2 +- examples/vmdq/main.c | 2 +- kernel/linux/kni/kni_fifo.h | 2 +- lib/acl/acl_bld.c | 2 +- lib/acl/acl_run_altivec.h | 2 +- lib/acl/acl_run_avx512.c | 2 +- lib/acl/acl_run_avx512x16.h | 14 +-- lib/acl/acl_run_avx512x8.h | 12 +-- lib/bpf/bpf_convert.c | 4 +- lib/bpf/bpf_validate.c | 10 +-- lib/cryptodev/rte_cryptodev.h | 2 +- lib/dmadev/rte_dmadev.h | 4 +- lib/eal/arm/include/rte_cycles_32.h | 2 +- lib/eal/common/eal_common_trace_ctf.c | 8 +- lib/eal/freebsd/eal_interrupts.c | 4 +- lib/eal/include/generic/rte_pflock.h | 2 +- lib/eal/include/rte_malloc.h | 4 +- lib/eal/linux/eal_interrupts.c | 4 +- lib/eal/linux/eal_vfio.h | 2 +- lib/eal/windows/eal_windows.h | 2 +- lib/eal/windows/include/dirent.h | 4 +- lib/eal/windows/include/fnmatch.h | 4 +- lib/eal/x86/include/rte_atomic.h | 2 +- lib/eventdev/rte_event_eth_rx_adapter.c | 6 +- lib/fib/rte_fib.c | 4 +- lib/fib/rte_fib.h | 4 +- lib/fib/rte_fib6.c | 4 +- lib/fib/rte_fib6.h | 4 +- lib/graph/graph_populate.c | 4 +- lib/hash/rte_crc_arm64.h | 2 +- lib/hash/rte_thash.c | 2 +- lib/ip_frag/ip_frag_internal.c | 2 +- lib/ipsec/ipsec_sad.c | 10 +-- lib/ipsec/ipsec_telemetry.c | 2 +- lib/ipsec/rte_ipsec_sad.h | 2 +- lib/ipsec/sa.c | 2 +- lib/mbuf/rte_mbuf_core.h | 2 +- lib/meson.build | 2 +- lib/net/rte_l2tpv2.h | 4 +- lib/pipeline/rte_swx_ctl.h | 4 +- lib/pipeline/rte_swx_pipeline_internal.h | 4 +- lib/pipeline/rte_swx_pipeline_spec.c | 2 +- lib/power/power_cppc_cpufreq.c | 2 +- 
lib/regexdev/rte_regexdev.h | 6 +- lib/ring/rte_ring_core.h | 2 +- lib/sched/rte_pie.h | 6 +- lib/sched/rte_red.h | 4 +- lib/sched/rte_sched.c | 2 +- lib/sched/rte_sched.h | 2 +- lib/table/rte_swx_table.h | 2 +- lib/table/rte_swx_table_selector.h | 2 +- lib/telemetry/telemetry.c | 2 +- lib/telemetry/telemetry_json.h | 2 +- lib/vhost/vhost_user.c | 4 +- lib/vhost/virtio_net.c | 10 +-- 483 files changed, 1328 insertions(+), 1328 deletions(-) diff --git a/app/proc-info/main.c b/app/proc-info/main.c index ce140aaf..56070a33 100644 --- a/app/proc-info/main.c +++ b/app/proc-info/main.c @@ -630,7 +630,7 @@ metrics_display(int port_id) names = rte_malloc(NULL, sizeof(struct rte_metric_name) * len, 0); if (names == NULL) { - printf("Cannot allocate memory for metrcis names\n"); + printf("Cannot allocate memory for metrics names\n"); rte_free(metrics); return; } @@ -1109,7 +1109,7 @@ show_tm(void) caplevel.n_nodes_max, caplevel.n_nodes_nonleaf_max, caplevel.n_nodes_leaf_max); - printf("\t -- indetical: non leaf %u leaf %u\n", + printf("\t -- identical: non leaf %u leaf %u\n", caplevel.non_leaf_nodes_identical, caplevel.leaf_nodes_identical); @@ -1263,7 +1263,7 @@ show_ring(char *name) printf(" - Name (%s) on socket (%d)\n" " - flags:\n" "\t -- Single Producer Enqueue (%u)\n" - "\t -- Single Consmer Dequeue (%u)\n", + "\t -- Single Consumer Dequeue (%u)\n", ptr->name, ptr->memzone->socket_id, ptr->flags & RING_F_SP_ENQ, diff --git a/app/test-acl/main.c b/app/test-acl/main.c index c2de1877..aa508f5d 100644 --- a/app/test-acl/main.c +++ b/app/test-acl/main.c @@ -386,8 +386,8 @@ parse_cb_ipv4_trace(char *str, struct ipv4_5tuple *v) } /* - * Parses IPV6 address, exepcts the following format: - * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X - is a hexedecimal digit). + * Parses IPV6 address, expects the following format: + * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X - is a hexadecimal digit). 
 */
static int
parse_ipv6_addr(const char *in, const char **end, uint32_t v[IPV6_ADDR_U32],
@@ -994,7 +994,7 @@ print_usage(const char *prgname)
 		"should be either 1 or multiple of %zu, "
 		"but not greater then %u]\n"
 		"[--" OPT_MAX_SIZE
-		"= "
+		"= "
 		"leave 0 for default behaviour]\n"
 		"[--" OPT_ITER_NUM "=]\n"
 		"[--" OPT_VERBOSE "=]\n"
diff --git a/app/test-compress-perf/comp_perf_test_cyclecount.c b/app/test-compress-perf/comp_perf_test_cyclecount.c
index da55b02b..1d8e5fe6 100644
--- a/app/test-compress-perf/comp_perf_test_cyclecount.c
+++ b/app/test-compress-perf/comp_perf_test_cyclecount.c
@@ -180,7 +180,7 @@ main_loop(struct cperf_cyclecount_ctx *ctx, enum rte_comp_xform_type type)
 	if (ops == NULL) {
 		RTE_LOG(ERR, USER1,
-			"Can't allocate memory for ops strucures\n");
+			"Can't allocate memory for ops structures\n");
 		return -1;
 	}
diff --git a/app/test-compress-perf/comp_perf_test_throughput.c b/app/test-compress-perf/comp_perf_test_throughput.c
index d3dff070..4569599e 100644
--- a/app/test-compress-perf/comp_perf_test_throughput.c
+++ b/app/test-compress-perf/comp_perf_test_throughput.c
@@ -72,7 +72,7 @@ main_loop(struct cperf_benchmark_ctx *ctx, enum rte_comp_xform_type type)
 	if (ops == NULL) {
 		RTE_LOG(ERR, USER1,
-			"Can't allocate memory for ops strucures\n");
+			"Can't allocate memory for ops structures\n");
 		return -1;
 	}
diff --git a/app/test-compress-perf/comp_perf_test_verify.c b/app/test-compress-perf/comp_perf_test_verify.c
index f6e21368..7d060294 100644
--- a/app/test-compress-perf/comp_perf_test_verify.c
+++ b/app/test-compress-perf/comp_perf_test_verify.c
@@ -75,7 +75,7 @@ main_loop(struct cperf_verify_ctx *ctx, enum rte_comp_xform_type type)
 	if (ops == NULL) {
 		RTE_LOG(ERR, USER1,
-			"Can't allocate memory for ops strucures\n");
+			"Can't allocate memory for ops structures\n");
 		return -1;
 	}
diff --git a/app/test-compress-perf/main.c b/app/test-compress-perf/main.c
index cc9951a9..6ff6a2f0 100644
--- a/app/test-compress-perf/main.c
+++ b/app/test-compress-perf/main.c
@@ -67,7 +67,7 @@ comp_perf_check_capabilities(struct comp_test_data *test_data, uint8_t cdev_id)
 	uint64_t comp_flags = cap->comp_feature_flags;
-	/* Huffman enconding */
+	/* Huffman encoding */
 	if (test_data->huffman_enc == RTE_COMP_HUFFMAN_FIXED &&
 			(comp_flags & RTE_COMP_FF_HUFFMAN_FIXED) == 0) {
 		RTE_LOG(ERR, USER1,
diff --git a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
index ba1f104f..5842f29d 100644
--- a/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
+++ b/app/test-crypto-perf/cperf_test_pmd_cyclecount.c
@@ -334,7 +334,7 @@ pmd_cyclecount_bench_burst_sz(
 	 * queue, so we never get any failed enqs unless the driver won't accept
 	 * the exact number of descriptors we requested, or the driver won't
 	 * wrap around the end of the TX ring. However, since we're only
-	 * dequeueing once we've filled up the queue, we have to benchmark it
+	 * dequeuing once we've filled up the queue, we have to benchmark it
 	 * piecemeal and then average out the results.
 	 */
 	cur_op = 0;
diff --git a/app/test-crypto-perf/cperf_test_vectors.h b/app/test-crypto-perf/cperf_test_vectors.h
index 70f2839c..4390c570 100644
--- a/app/test-crypto-perf/cperf_test_vectors.h
+++ b/app/test-crypto-perf/cperf_test_vectors.h
@@ -2,8 +2,8 @@
 * Copyright(c) 2016-2017 Intel Corporation
 */
-#ifndef _CPERF_TEST_VECTRORS_
-#define _CPERF_TEST_VECTRORS_
+#ifndef _CPERF_TEST_VECTORS_
+#define _CPERF_TEST_VECTORS_
 #include "cperf_options.h"
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 753a7dbd..4ae44801 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -336,7 +336,7 @@ usage(char *program)
 		"\t--deq_tmo_nsec : global dequeue timeout\n"
 		"\t--prod_type_ethdev : use ethernet device as producer.\n"
 		"\t--prod_type_timerdev : use event timer device as producer.\n"
-		"\t expity_nsec would be the timeout\n"
+		"\t expiry_nsec would be the timeout\n"
 		"\t in ns.\n"
 		"\t--prod_type_timerdev_burst : use timer device as producer\n"
 		"\t burst mode.\n"
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index ff7813f9..603e7c91 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -253,7 +253,7 @@ void
 order_opt_dump(struct evt_options *opt)
 {
 	evt_dump_producer_lcores(opt);
-	evt_dump("nb_wrker_lcores", "%d", evt_nr_active_lcores(opt->wlcores));
+	evt_dump("nb_worker_lcores", "%d", evt_nr_active_lcores(opt->wlcores));
 	evt_dump_worker_lcores(opt);
 	evt_dump("nb_evdev_ports", "%d", order_nb_event_ports(opt));
 }
diff --git a/app/test-fib/main.c b/app/test-fib/main.c
index ecd42011..622703dc 100644
--- a/app/test-fib/main.c
+++ b/app/test-fib/main.c
@@ -624,7 +624,7 @@ print_usage(void)
 		"(if -f is not specified)>]\n"
 		"[-r ]\n"
-		"[-c ]\n"
+		"[-c ]\n"
 		"[-6 ]\n"
 		"[-s ]\n"
 		"[-a ]\n"
 		"[-w ]\n"
 		"[-u ]\n"
-		"[-v ]\n",
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index 0db2254b..29b63298 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -28,7 +28,7 @@
 #define PORT_ID_DST 1
 #define TEID_VALUE 1
-/* Flow items/acctions max size */
+/* Flow items/actions max size */
 #define MAX_ITEMS_NUM 32
 #define MAX_ACTIONS_NUM 32
 #define MAX_ATTRS_NUM 16
diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 11f1ee0e..56d43734 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -1519,7 +1519,7 @@ dump_used_cpu_time(const char *item,
 	 * threads time.
 	 *
 	 * Throughput: total count of rte rules divided
-	 * over the average of the time cosumed by all
+	 * over the average of the time consumed by all
 	 * threads time.
 	 */
 	double insertion_latency_time;
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 6e10afee..e626b1c7 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -561,7 +561,7 @@ static void cmd_help_long_parsed(void *parsed_result,
 			" Set the option to enable display of RX and TX bursts.\n"
 			"set port (port_id) vf (vf_id) rx|tx on|off\n"
-			" Enable/Disable a VF receive/tranmit from a port\n\n"
+			" Enable/Disable a VF receive/transmit from a port\n\n"
 			"set port (port_id) vf (vf_id) rxmode (AUPE|ROPE|BAM"
 			"|MPE) (on|off)\n"
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index bbe3dc01..5c2bba48 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -2162,7 +2162,7 @@ static const struct token token_list[] = {
 	},
 	[COMMON_POLICY_ID] = {
 		.name = "{policy_id}",
-		.type = "POLCIY_ID",
+		.type = "POLICY_ID",
 		.help = "policy id",
 		.call = parse_int,
 		.comp = comp_none,
@@ -2370,7 +2370,7 @@ static const struct token token_list[] = {
 	},
 	[TUNNEL_DESTROY] = {
 		.name = "destroy",
-		.help = "destroy tunel",
+		.help = "destroy tunnel",
 		.next = NEXT(NEXT_ENTRY(TUNNEL_DESTROY_ID),
			     NEXT_ENTRY(COMMON_PORT_ID)),
 		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
@@ -2378,7 +2378,7 @@ static const struct token token_list[] = {
 	},
 	[TUNNEL_DESTROY_ID] = {
 		.name = "id",
-		.help = "tunnel identifier to testroy",
+		.help = "tunnel identifier to destroy",
 		.next = NEXT(NEXT_ENTRY(COMMON_UNSIGNED)),
 		.args = ARGS(ARGS_ENTRY(struct tunnel_ops, id)),
 		.call = parse_tunnel,
diff --git a/app/test-pmd/cmdline_tm.c b/app/test-pmd/cmdline_tm.c
index bfbd43ca..281e4124 100644
--- a/app/test-pmd/cmdline_tm.c
+++ b/app/test-pmd/cmdline_tm.c
@@ -69,7 +69,7 @@ print_err_msg(struct rte_tm_error *error)
 	[RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS]
		= "num shared shapers field (node params)",
 	[RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE]
-		= "wfq weght mode field (node params)",
+		= "wfq weight mode field (node params)",
 	[RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES]
		= "num strict priorities field (node params)",
 	[RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN]
@@ -479,7 +479,7 @@ static void cmd_show_port_tm_level_cap_parsed(void *parsed_result,
 cmdline_parse_inst_t cmd_show_port_tm_level_cap = {
 	.f = cmd_show_port_tm_level_cap_parsed,
 	.data = NULL,
-	.help_str = "Show Port TM Hierarhical level Capabilities",
+	.help_str = "Show Port TM Hierarchical level Capabilities",
 	.tokens = {
 		(void *)&cmd_show_port_tm_level_cap_show,
 		(void *)&cmd_show_port_tm_level_cap_port,
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 2aeea243..0177284d 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -796,7 +796,7 @@ pkt_copy_split(const struct rte_mbuf *pkt)
 *
 * The testpmd command line for this forward engine sets the flags
 * TESTPMD_TX_OFFLOAD_* in ports[tx_port].tx_ol_flags. They control
- * wether a checksum must be calculated in software or in hardware. The
+ * whether a checksum must be calculated in software or in hardware. The
 * IP, UDP, TCP and SCTP flags always concern the inner layer. The
 * OUTER_IP is only useful for tunnel packets.
 */
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index f9185065..daf6a31b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -110,7 +110,7 @@ usage(char* progname)
 		"If the drop-queue doesn't exist, the packet is dropped. "
 		"By default drop-queue=127.\n");
 #ifdef RTE_LIB_LATENCYSTATS
-	printf(" --latencystats=N: enable latency and jitter statistcs "
+	printf(" --latencystats=N: enable latency and jitter statistics "
 	       "monitoring on forwarding lcore id N.\n");
 #endif
 	printf(" --disable-crc-strip: disable CRC stripping by hardware.\n");
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 55eb293c..6c387bde 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -449,7 +449,7 @@ uint32_t bypass_timeout = RTE_PMD_IXGBE_BYPASS_TMT_OFF;
 uint8_t latencystats_enabled;
 /*
- * Lcore ID to serive latency statistics.
+ * Lcore ID to service latency statistics.
 */
 lcoreid_t latencystats_lcore_id = -1;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index b8497e73..47db8271 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -174,14 +174,14 @@ update_pkt_header(struct rte_mbuf *pkt, uint32_t total_pkt_len)
 			sizeof(struct rte_ether_hdr) +
 			sizeof(struct rte_ipv4_hdr) +
 			sizeof(struct rte_udp_hdr)));
-	/* updata udp pkt length */
+	/* update udp pkt length */
 	udp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *,
 			sizeof(struct rte_ether_hdr) +
 			sizeof(struct rte_ipv4_hdr));
 	pkt_len = (uint16_t) (pkt_data_len + sizeof(struct rte_udp_hdr));
 	udp_hdr->dgram_len = RTE_CPU_TO_BE_16(pkt_len);
-	/* updata ip pkt length and csum */
+	/* update ip pkt length and csum */
 	ip_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
 			sizeof(struct rte_ether_hdr));
 	ip_hdr->hdr_checksum = 0;
diff --git a/app/test/test_barrier.c b/app/test/test_barrier.c
index 6d6d4874..ec69af25 100644
--- a/app/test/test_barrier.c
+++ b/app/test/test_barrier.c
@@ -11,7 +11,7 @@
 * (https://en.wikipedia.org/wiki/Peterson%27s_algorithm)
 * for two execution units to make sure that rte_smp_mb() prevents
 * store-load reordering to happen.
- * Also when executed on a single lcore could be used as a approxiamate
+ * Also when executed on a single lcore could be used as a approximate
 * estimation of number of cycles particular implementation of rte_smp_mb()
 * will take.
 */
diff --git a/app/test/test_bpf.c b/app/test/test_bpf.c
index 46bcb51f..2d755a87 100644
--- a/app/test/test_bpf.c
+++ b/app/test/test_bpf.c
@@ -23,7 +23,7 @@
 /*
 * Basic functional tests for librte_bpf.
 * The main procedure - load eBPF program, execute it and
- * compare restuls with expected values.
+ * compare results with expected values.
 */
 struct dummy_offset {
@@ -2707,7 +2707,7 @@ test_ld_mbuf1_check(uint64_t rc, const void *arg)
 }
 /*
- * same as ld_mbuf1, but then trancate the mbuf by 1B,
+ * same as ld_mbuf1, but then truncate the mbuf by 1B,
 * so load of last 4B fail.
 */
 static void
diff --git a/app/test/test_compressdev.c b/app/test/test_compressdev.c
index c63b5b67..57c566aa 100644
--- a/app/test/test_compressdev.c
+++ b/app/test/test_compressdev.c
@@ -1256,7 +1256,7 @@ test_deflate_comp_run(const struct interim_data_params *int_data,
 	/*
 	 * Store original operation index in private data,
 	 * since ordering does not have to be maintained,
-	 * when dequeueing from compressdev, so a comparison
+	 * when dequeuing from compressdev, so a comparison
 	 * at the end of the test can be done.
 	 */
 	priv_data = (struct priv_op_data *) (ops[i] + 1);
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 10b48cda..6c949605 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -6870,7 +6870,7 @@ test_snow3g_decryption_with_digest_test_case_1(void)
 }
 /*
- * Function prepare data for hash veryfication test case.
+ * Function prepare data for hash verification test case.
 * Digest is allocated in 4 last bytes in plaintext, pattern.
 */
 	snow3g_hash_test_vector_setup(&snow3g_test_case_7, &snow3g_hash_data);
diff --git a/app/test/test_fib_perf.c b/app/test/test_fib_perf.c
index 86b2f832..7a25fe8d 100644
--- a/app/test/test_fib_perf.c
+++ b/app/test/test_fib_perf.c
@@ -346,7 +346,7 @@ test_fib_perf(void)
 	fib = rte_fib_create(__func__, SOCKET_ID_ANY, &config);
 	TEST_FIB_ASSERT(fib != NULL);
-	/* Measue add. */
+	/* Measure add. */
 	begin = rte_rdtsc();
 	for (i = 0; i < NUM_ROUTE_ENTRIES; i++) {
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 40ab0d5c..2761de9b 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -326,7 +326,7 @@ test_kni_register_handler_mp(void)
 		/* Check with the invalid parameters */
 		if (rte_kni_register_handlers(kni, NULL) == 0) {
-			printf("Unexpectedly register successuflly "
+			printf("Unexpectedly register successfully "
				"with NULL ops pointer\n");
 			exit(-1);
 		}
@@ -475,7 +475,7 @@ test_kni_processing(uint16_t port_id, struct rte_mempool *mp)
 	/**
 	 * Check multiple processes support on
-	 * registerring/unregisterring handlers.
+	 * registering/unregistering handlers.
 	 */
 	if (test_kni_register_handler_mp() < 0) {
 		printf("fail to check multiple process support\n");
diff --git a/app/test/test_kvargs.c b/app/test/test_kvargs.c
index a91ea8dc..b7b97a0d 100644
--- a/app/test/test_kvargs.c
+++ b/app/test/test_kvargs.c
@@ -11,7 +11,7 @@
 #include "test.h"
-/* incrementd in handler, to check it is properly called once per
+/* incremented in handler, to check it is properly called once per
 * key/value association */
 static unsigned count;
@@ -107,14 +107,14 @@ static int test_valid_kvargs(void)
 		goto fail;
 	}
 	count = 0;
-	/* call check_handler() for all entries with key="unexistant_key" */
-	if (rte_kvargs_process(kvlist, "unexistant_key", check_handler, NULL) < 0) {
+	/* call check_handler() for all entries with key="nonexistent_key" */
+	if (rte_kvargs_process(kvlist, "nonexistent_key", check_handler, NULL) < 0) {
 		printf("rte_kvargs_process() error\n");
 		rte_kvargs_free(kvlist);
 		goto fail;
 	}
 	if (count != 0) {
-		printf("invalid count value %d after rte_kvargs_process(unexistant_key)\n",
+		printf("invalid count value %d after rte_kvargs_process(nonexistent_key)\n",
			count);
 		rte_kvargs_free(kvlist);
 		goto fail;
@@ -135,10 +135,10 @@ static int test_valid_kvargs(void)
 		rte_kvargs_free(kvlist);
 		goto fail;
 	}
-	/* count all entries with key="unexistant_key" */
-	count = rte_kvargs_count(kvlist, "unexistant_key");
+	/* count all entries with key="nonexistent_key" */
+	count = rte_kvargs_count(kvlist, "nonexistent_key");
 	if (count != 0) {
-		printf("invalid count value %d after rte_kvargs_count(unexistant_key)\n",
+		printf("invalid count value %d after rte_kvargs_count(nonexistent_key)\n",
			count);
 		rte_kvargs_free(kvlist);
 		goto fail;
@@ -156,7 +156,7 @@ static int test_valid_kvargs(void)
 	/* call check_handler() on all entries with key="check", it
 	 * should fail as the value is not recognized by the handler */
 	if (rte_kvargs_process(kvlist, "check", check_handler, NULL) == 0) {
-		printf("rte_kvargs_process() is success bu should not\n");
+		printf("rte_kvargs_process() is success but should not\n");
 		rte_kvargs_free(kvlist);
 		goto fail;
 	}
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index dc6fc46b..80ea1cdc 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -2026,7 +2026,7 @@ uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
 int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
 static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_slave_link_status_change(void)
 {
 	struct rte_ether_addr *mac_addr =
		(struct rte_ether_addr *)polling_slave_mac;
@@ -5118,7 +5118,7 @@ static struct unit_test_suite link_bonding_test_suite = {
 		TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
 		TEST_CASE(test_roundrobin_verify_mac_assignment),
 		TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
-		TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+		TEST_CASE(test_roundrobin_verify_polling_slave_link_status_change),
 		TEST_CASE(test_activebackup_tx_burst),
 		TEST_CASE(test_activebackup_rx_burst),
 		TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 351129de..aea76e70 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -58,11 +58,11 @@ static const struct rte_ether_addr slave_mac_default = {
 	{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
 };
-static const struct rte_ether_addr parnter_mac_default = {
+static const struct rte_ether_addr partner_mac_default = {
 	{ 0x22, 0xBB, 0xFF, 0xBB, 0x00, 0x00 }
 };
-static const struct rte_ether_addr parnter_system = {
+static const struct rte_ether_addr partner_system = {
 	{ 0x33, 0xFF, 0xBB, 0xFF, 0x00, 0x00 }
 };
@@ -76,7 +76,7 @@ struct slave_conf {
 	uint16_t port_id;
 	uint8_t bonded : 1;
-	uint8_t lacp_parnter_state;
+	uint8_t lacp_partner_state;
 };
 struct ether_vlan_hdr {
@@ -258,7 +258,7 @@ add_slave(struct slave_conf *slave, uint8_t start)
 	TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
			"Slave MAC address is not as expected");
-	RTE_VERIFY(slave->lacp_parnter_state == 0);
+	RTE_VERIFY(slave->lacp_partner_state == 0);
 	return 0;
 }
@@ -288,7 +288,7 @@ remove_slave(struct slave_conf *slave)
			test_params.bonded_port_id);
 	slave->bonded = 0;
-	slave->lacp_parnter_state = 0;
+	slave->lacp_partner_state = 0;
 	return 0;
 }
@@ -501,20 +501,20 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
 	slow_hdr = rte_pktmbuf_mtod(pkt, struct slow_protocol_frame *);
 	/* Change source address to partner address */
-	rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
+	rte_ether_addr_copy(&partner_mac_default, &slow_hdr->eth_hdr.src_addr);
 	slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
		slave->port_id;
 	lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
 	/* Save last received state */
-	slave->lacp_parnter_state = lacp->actor.state;
+	slave->lacp_partner_state = lacp->actor.state;
 	/* Change it into LACP replay by matching parameters.
 	 */
 	memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
		sizeof(struct port_params));
 	lacp->partner.state = lacp->actor.state;
-	rte_ether_addr_copy(&parnter_system, &lacp->actor.port_params.system);
+	rte_ether_addr_copy(&partner_system, &lacp->actor.port_params.system);
 	lacp->actor.state = STATE_LACP_ACTIVE |
		STATE_SYNCHRONIZATION |
		STATE_AGGREGATION |
@@ -580,7 +580,7 @@ bond_handshake_done(struct slave_conf *slave)
 	const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
			STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
-	return slave->lacp_parnter_state == expected_state;
+	return slave->lacp_partner_state == expected_state;
 }
 static unsigned
@@ -1134,7 +1134,7 @@ test_mode4_tx_burst(void)
 		if (slave_down_id == slave->port_id) {
 			TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
-				"slave %u enexpectedly transmitted %u packets",
+				"slave %u unexpectedly transmitted %u packets",
				normal_cnt + slow_cnt, slave->port_id);
 		} else {
 			TEST_ASSERT_EQUAL(slow_cnt, 0,
@@ -1165,7 +1165,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
		&marker_hdr->eth_hdr.dst_addr);
 	/* Init source address */
-	rte_ether_addr_copy(&parnter_mac_default,
+	rte_ether_addr_copy(&partner_mac_default,
		&marker_hdr->eth_hdr.src_addr);
 	marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
		slave->port_id;
@@ -1353,7 +1353,7 @@ test_mode4_expired(void)
 	/* After test only expected slave should be in EXPIRED state */
 	FOR_EACH_SLAVE(i, slave) {
 		if (slave == exp_slave)
-			TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
+			TEST_ASSERT(slave->lacp_partner_state & STATE_EXPIRED,
				"Slave %u should be in expired.", slave->port_id);
 		else
 			TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
@@ -1392,7 +1392,7 @@ test_mode4_ext_ctrl(void)
 		},
 	};
-	rte_ether_addr_copy(&parnter_system, &src_mac);
+	rte_ether_addr_copy(&partner_system, &src_mac);
 	rte_ether_addr_copy(&slow_protocol_mac_addr, &dst_mac);
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
@@ -1446,7 +1446,7 @@ test_mode4_ext_lacp(void)
 		},
 	};
-	rte_ether_addr_copy(&parnter_system, &src_mac);
+	rte_ether_addr_copy(&partner_system, &src_mac);
 	rte_ether_addr_copy(&slow_protocol_mac_addr, &dst_mac);
 	initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
@@ -1535,7 +1535,7 @@ check_environment(void)
 		if (port->bonded != 0)
			env_state |= 0x04;
-		if (port->lacp_parnter_state != 0)
+		if (port->lacp_partner_state != 0)
			env_state |= 0x08;
 		if (env_state != 0)
diff --git a/app/test/test_lpm6_data.h b/app/test/test_lpm6_data.h
index c3894f73..da9b161f 100644
--- a/app/test/test_lpm6_data.h
+++ b/app/test/test_lpm6_data.h
@@ -22,7 +22,7 @@ struct ips_tbl_entry {
 * in previous test_lpm6_routes.h . Because this table has only 1000
 * lines, keeping it doesn't make LPM6 test case so large and also
 * make the algorithm to generate rule table unnecessary and the
- * algorithm to genertate test input IPv6 and associated expected
+ * algorithm to generate test input IPv6 and associated expected
 * next_hop much simple.
 */
diff --git a/app/test/test_member.c b/app/test/test_member.c
index 40aa4c86..af9d5091 100644
--- a/app/test/test_member.c
+++ b/app/test/test_member.c
@@ -459,7 +459,7 @@ static int test_member_multimatch(void)
			MAX_MATCH, set_ids_cache);
 	/*
 	 * For cache mode, keys overwrite when signature same.
-	 * the mutimatch should work like single match.
+	 * the multimatch should work like single match.
 	 */
 	TEST_ASSERT(ret_ht == M_MATCH_CNT && ret_vbf == M_MATCH_CNT &&
			ret_cache == 1,
diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index f6c650d1..8e493eda 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -304,7 +304,7 @@ static int test_mempool_single_consumer(void)
 }
 /*
- * test function for mempool test based on singple consumer and single producer,
+ * test function for mempool test based on single consumer and single producer,
 * can run on one lcore only
 */
 static int
@@ -322,7 +322,7 @@ my_mp_init(struct rte_mempool *mp, __rte_unused void *arg)
 }
 /*
- * it tests the mempool operations based on singple producer and single consumer
+ * it tests the mempool operations based on single producer and single consumer
 */
 static int
 test_mempool_sp_sc(void)
diff --git a/app/test/test_memzone.c b/app/test/test_memzone.c
index 6ddd0fba..c9255e57 100644
--- a/app/test/test_memzone.c
+++ b/app/test/test_memzone.c
@@ -543,7 +543,7 @@ test_memzone_reserve_max(void)
 	}
 	if (mz->len != maxlen) {
-		printf("Memzone reserve with 0 size did not return bigest block\n");
+		printf("Memzone reserve with 0 size did not return biggest block\n");
 		printf("Expected size = %zu, actual size = %zu\n",
			maxlen, mz->len);
 		rte_dump_physmem_layout(stdout);
@@ -606,7 +606,7 @@ test_memzone_reserve_max_aligned(void)
 	if (mz->len < minlen || mz->len > maxlen) {
 		printf("Memzone reserve with 0 size and alignment %u did not return"
-			" bigest block\n", align);
+			" biggest block\n", align);
 		printf("Expected size = %zu-%zu, actual size = %zu\n",
			minlen, maxlen, mz->len);
 		rte_dump_physmem_layout(stdout);
@@ -1054,7 +1054,7 @@ test_memzone_basic(void)
 	if (mz != memzone1)
		return -1;
-	printf("test duplcate zone name\n");
+	printf("test duplicate zone name\n");
 	mz = rte_memzone_reserve(TEST_MEMZONE_NAME("testzone1"), 100,
			SOCKET_ID_ANY, 0);
 	if (mz != NULL)
diff --git a/app/test/test_metrics.c b/app/test/test_metrics.c
index e736019a..11222133 100644
--- a/app/test/test_metrics.c
+++ b/app/test/test_metrics.c
@@ -121,7 +121,7 @@ test_metrics_update_value(void)
 	err = rte_metrics_update_value(RTE_METRICS_GLOBAL, KEY, VALUE);
 	TEST_ASSERT(err >= 0, "%s, %d", __func__, __LINE__);
-	/* Successful Test: Valid port_id otherthan RTE_METRICS_GLOBAL, key
+	/* Successful Test: Valid port_id other than RTE_METRICS_GLOBAL, key
 	 * and value
 	 */
 	err = rte_metrics_update_value(9, KEY, VALUE);
diff --git a/app/test/test_pcapng.c b/app/test/test_pcapng.c
index c2dbeaf6..34c5e123 100644
--- a/app/test/test_pcapng.c
+++ b/app/test/test_pcapng.c
@@ -109,7 +109,7 @@ test_setup(void)
		return -1;
 	}
-	/* Make a pool for cloned packeets */
+	/* Make a pool for cloned packets */
 	mp = rte_pktmbuf_pool_create_by_ops("pcapng_test_pool", NUM_PACKETS,
					0, 0,
					rte_pcapng_mbuf_size(pkt_len),
diff --git a/app/test/test_power_cpufreq.c b/app/test/test_power_cpufreq.c
index 1a954952..4d013cd7 100644
--- a/app/test/test_power_cpufreq.c
+++ b/app/test/test_power_cpufreq.c
@@ -659,7 +659,7 @@ test_power_cpufreq(void)
 	/* test of exit power management for an invalid lcore */
 	ret = rte_power_exit(TEST_POWER_LCORE_INVALID);
 	if (ret == 0) {
-		printf("Unpectedly exit power management successfully for "
+		printf("Unexpectedly exit power management successfully for "
				"lcore %u\n", TEST_POWER_LCORE_INVALID);
 		rte_power_unset_env();
 		return -1;
diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c
index ab37a068..70404e89 100644
--- a/app/test/test_rcu_qsbr.c
+++ b/app/test/test_rcu_qsbr.c
@@ -408,7 +408,7 @@ test_rcu_qsbr_synchronize_reader(void *arg)
 /*
 * rte_rcu_qsbr_synchronize: Wait till all the reader threads have entered
- * the queiscent state.
+ * the quiescent state.
 */
 static int
 test_rcu_qsbr_synchronize(void)
@@ -443,7 +443,7 @@ test_rcu_qsbr_synchronize(void)
 	rte_rcu_qsbr_synchronize(t[0], RTE_MAX_LCORE - 1);
 	rte_rcu_qsbr_thread_offline(t[0], RTE_MAX_LCORE - 1);
-	/* Test if the API returns after unregisterng all the threads */
+	/* Test if the API returns after unregistering all the threads */
 	for (i = 0; i < RTE_MAX_LCORE; i++)
		rte_rcu_qsbr_thread_unregister(t[0], i);
 	rte_rcu_qsbr_synchronize(t[0], RTE_QSBR_THRID_INVALID);
diff --git a/app/test/test_red.c b/app/test/test_red.c
index 05936cfe..33a9f4eb 100644
--- a/app/test/test_red.c
+++ b/app/test/test_red.c
@@ -1566,10 +1566,10 @@ static void ovfl_check_avg(uint32_t avg)
 }
 static struct test_config ovfl_test1_config = {
-	.ifname = "queue avergage overflow test interface",
+	.ifname = "queue average overflow test interface",
 	.msg = "overflow test 1 : use one RED configuration,\n"
	" increase average queue size to target level,\n"
-	" check maximum number of bits requirte_red to represent avg_s\n\n",
+	" check maximum number of bits required to represent avg_s\n\n",
 	.htxt = "avg queue size "
		"wq_log2 "
		"fraction bits "
@@ -1757,12 +1757,12 @@ test_invalid_parameters(void)
		printf("%i: rte_red_config_init should have failed!\n", __LINE__);
		return -1;
 	}
-	/* min_treshold == max_treshold */
+	/* min_threshold == max_threshold */
 	if (rte_red_config_init(&config, 0, 1, 1, 0) == 0) {
		printf("%i: rte_red_config_init should have failed!\n", __LINE__);
		return -1;
 	}
-	/* min_treshold > max_treshold */
+	/* min_threshold > max_threshold */
 	if (rte_red_config_init(&config, 0, 2, 1, 0) == 0) {
		printf("%i: rte_red_config_init should have failed!\n", __LINE__);
		return -1;
 	}
diff --git a/app/test/test_security.c b/app/test/test_security.c
index 060cf1ff..059731b6 100644
--- a/app/test/test_security.c
+++ b/app/test/test_security.c
@@ -237,7 +237,7 @@
 * increases .called counter. Function returns value stored in .ret field
 * of the structure.
 * In case of some parameters in some functions the expected value is unknown
- * and cannot be detrmined prior to call. Such parameters are stored
+ * and cannot be determined prior to call. Such parameters are stored
 * in structure and can be compared or analyzed later in test case code.
 *
 * Below structures and functions follow the rules just described.
diff --git a/app/test/test_table.h b/app/test/test_table.h
index 209bdbff..003088f2 100644
--- a/app/test/test_table.h
+++ b/app/test/test_table.h
@@ -25,7 +25,7 @@
 #define MAX_BULK 32
 #define N 65536
 #define TIME_S 5
-#define TEST_RING_FULL_EMTPY_ITER 8
+#define TEST_RING_FULL_EMPTY_ITER 8
 #define N_PORTS 2
 #define N_PKTS 2
 #define N_PKTS_EXT 6
diff --git a/app/test/test_table_pipeline.c b/app/test/test_table_pipeline.c
index aabf4375..915c451f 100644
--- a/app/test/test_table_pipeline.c
+++ b/app/test/test_table_pipeline.c
@@ -364,7 +364,7 @@ setup_pipeline(int test_type)
		.action = RTE_PIPELINE_ACTION_PORT,
		{.port_id = port_out_id[i^1]},
 	};
-	printf("Setting secont table to output to port\n");
+	printf("Setting second table to output to port\n");
 	/* Add the default action for the table. */
 	ret = rte_pipeline_table_default_entry_add(p,
diff --git a/app/test/test_thash.c b/app/test/test_thash.c
index a6253067..62ba4a95 100644
--- a/app/test/test_thash.c
+++ b/app/test/test_thash.c
@@ -684,7 +684,7 @@ test_predictable_rss_multirange(void)
 	/*
 	 * calculate hashes, complements, then adjust keys with
-	 * complements and recalsulate hashes
+	 * complements and recalculate hashes
 	 */
 	for (i = 0; i < RTE_DIM(rng_arr); i++) {
		for (k = 0; k < 100; k++) {
diff --git a/buildtools/binutils-avx512-check.py b/buildtools/binutils-avx512-check.py
index a4e14f35..57392ecd 100644
--- a/buildtools/binutils-avx512-check.py
+++ b/buildtools/binutils-avx512-check.py
@@ -1,5 +1,5 @@
 #! /usr/bin/env python3
-# SPDX-License-Identitifer: BSD-3-Clause
+# SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2020 Intel Corporation
 import subprocess
diff --git a/devtools/check-symbol-change.sh b/devtools/check-symbol-change.sh
index 8fcd0ce1..8992214a 100755
--- a/devtools/check-symbol-change.sh
+++ b/devtools/check-symbol-change.sh
@@ -25,7 +25,7 @@ build_map_changes()
		# Triggering this rule, which starts a line and ends it
		# with a { identifies a versioned section. The section name is
-		# the rest of the line with the + and { symbols remvoed.
+		# the rest of the line with the + and { symbols removed.
		# Triggering this rule sets in_sec to 1, which actives the
		# symbol rule below
		/^.*{/ {
@@ -35,7 +35,7 @@ build_map_changes()
			}
		}
-		# This rule idenfies the end of a section, and disables the
+		# This rule identifies the end of a section, and disables the
		# symbol rule
		/.*}/ {in_sec=0}
@@ -100,7 +100,7 @@ check_for_rule_violations()
				# Just inform the user of this occurrence, but
				# don't flag it as an error
				echo -n "INFO: symbol $symname is added but "
-				echo -n "patch has insuficient context "
+				echo -n "patch has insufficient context "
				echo -n "to determine the section name "
				echo -n "please ensure the version is "
				echo "EXPERIMENTAL"
diff --git a/doc/guides/howto/img/virtio_user_for_container_networking.svg b/doc/guides/howto/img/virtio_user_for_container_networking.svg
index de808066..dc9b318e 100644
--- a/doc/guides/howto/img/virtio_user_for_container_networking.svg
+++ b/doc/guides/howto/img/virtio_user_for_container_networking.svg
@@ -465,7 +465,7 @@
 v:mID="63" id="shape63-63">Sheet.63Contanier/AppContainer/Appoffse offset offse offset = 4.5.**
-   The mmaping of the iomem range of the PCI device fails for kernels that
+   The mmapping of the iomem range of the PCI device fails for kernels that
    enabled the ``CONFIG_IO_STRICT_DEVMEM`` option.
 The error seen by the user is as similar to the following::
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 25439dad..1fd17558 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -232,7 +232,7 @@ API Changes
 * The ``rte_cryptodev_configure()`` function does not create the session
 mempool for the device anymore.
 * The ``rte_cryptodev_queue_pair_attach_sym_session()`` and
-  ``rte_cryptodev_queue_pair_dettach_sym_session()`` functions require
+  ``rte_cryptodev_queue_pair_detach_sym_session()`` functions require
 the new parameter ``device id``.
 * Parameters of ``rte_cryptodev_sym_session_create()`` were modified to
 accept ``mempool``, instead of ``device id`` and ``rte_crypto_sym_xform``.
diff --git a/doc/guides/rel_notes/release_2_1.rst b/doc/guides/rel_notes/release_2_1.rst
index 35e6c888..d0ad99eb 100644
--- a/doc/guides/rel_notes/release_2_1.rst
+++ b/doc/guides/rel_notes/release_2_1.rst
@@ -671,7 +671,7 @@ Resolved Issues
 value 0.
-  Fixes: 40b966a211ab ("ivshmem: library changes for mmaping using ivshmem")
+  Fixes: 40b966a211ab ("ivshmem: library changes for mmapping using ivshmem")
 * **ixgbe/base: Fix SFP probing.**
diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
index 06289c22..279bbca3 100644
--- a/doc/guides/sample_app_ug/ip_reassembly.rst
+++ b/doc/guides/sample_app_ug/ip_reassembly.rst
@@ -154,7 +154,7 @@ each RX queue uses its own mempool.
 .. literalinclude:: ../../../examples/ip_reassembly/main.c
    :language: c
-   :start-after: mbufs stored int the gragment table. 8<
+   :start-after: mbufs stored int the fragment table. 8<
    :end-before: >8 End of mbufs stored int the fragmentation table.
    :dedent: 1
diff --git a/doc/guides/sample_app_ug/l2_forward_cat.rst b/doc/guides/sample_app_ug/l2_forward_cat.rst
index 440642ef..3ada3575 100644
--- a/doc/guides/sample_app_ug/l2_forward_cat.rst
+++ b/doc/guides/sample_app_ug/l2_forward_cat.rst
@@ -176,7 +176,7 @@ function. The value returned is the number of parsed arguments:
 .. literalinclude:: ../../../examples/l2fwd-cat/l2fwd-cat.c
    :language: c
    :start-after: Initialize the Environment Abstraction Layer (EAL). 8<
-   :end-before: >8 End of initializion the Environment Abstraction Layer (EAL).
+   :end-before: >8 End of initialization the Environment Abstraction Layer (EAL).
    :dedent: 1
 The next task is to initialize the PQoS library and configure CAT. The
diff --git a/doc/guides/sample_app_ug/server_node_efd.rst b/doc/guides/sample_app_ug/server_node_efd.rst
index 605eb09a..c6cbc3de 100644
--- a/doc/guides/sample_app_ug/server_node_efd.rst
+++ b/doc/guides/sample_app_ug/server_node_efd.rst
@@ -191,7 +191,7 @@ flow is not handled by the node.
 .. literalinclude:: ../../../examples/server_node_efd/node/node.c
    :language: c
    :start-after: Packets dequeued from the shared ring. 8<
-   :end-before: >8 End of packets dequeueing.
+   :end-before: >8 End of packets dequeuing.
 Finally, note that both processes updates statistics, such as transmitted,
 received and dropped packets, which are shown and refreshed by the server app.
diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst
index 6d0de644..08ddd7aa 100644
--- a/doc/guides/sample_app_ug/skeleton.rst
+++ b/doc/guides/sample_app_ug/skeleton.rst
@@ -54,7 +54,7 @@ function. The value returned is the number of parsed arguments:
 .. literalinclude:: ../../../examples/skeleton/basicfwd.c
    :language: c
    :start-after: Initializion the Environment Abstraction Layer (EAL). 8<
-   :end-before: >8 End of initializion the Environment Abstraction Layer (EAL).
+   :end-before: >8 End of initialization the Environment Abstraction Layer (EAL).
    :dedent: 1
diff --git a/doc/guides/sample_app_ug/vm_power_management.rst b/doc/guides/sample_app_ug/vm_power_management.rst
index 7160b6a6..9ce87956 100644
--- a/doc/guides/sample_app_ug/vm_power_management.rst
+++ b/doc/guides/sample_app_ug/vm_power_management.rst
@@ -681,7 +681,7 @@ The following is an example JSON string for a power management request.
 "resource_id": 10
 }}
-To query the available frequences of an lcore, use the query_cpu_freq command.
+To query the available frequencies of an lcore, use the query_cpu_freq command.
 Where {core_num} is the lcore to query.
 Before using this command, please enable responses via the set_query command on the host.
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 44228cd7..94792d88 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3510,7 +3510,7 @@ Tunnel offload
 Indicate tunnel offload rule type
 - ``tunnel_set {tunnel_id}``: mark rule as tunnel offload decap_set type.
-- ``tunnel_match {tunnel_id}``: mark rule as tunel offload match type.
+- ``tunnel_match {tunnel_id}``: mark rule as tunnel offload match type.
 Matching pattern
 ^^^^^^^^^^^^^^^^
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 1c6080f2..662c98be 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -4343,7 +4343,7 @@ poweron_cleanup(struct rte_bbdev *bbdev, struct acc100_device *d,
 	for (template_idx = ACC100_SIG_UL_5G;
			template_idx <= ACC100_SIG_UL_5G_LAST;
			template_idx++) {
-		address = HWPfQmgrGrpTmplateReg4Indx
+		address = HWPfQmgrGrpTemplateReg4Indx
				+ ACC100_BYTES_IN_WORD * template_idx;
 		if (template_idx == failed_engine)
			acc100_reg_write(d, address, value);
@@ -4392,7 +4392,7 @@ poweron_cleanup(struct rte_bbdev *bbdev, struct acc100_device *d,
 		address = HwPfFecUl5gIbDebugReg +
				ACC100_ENGINE_OFFSET * template_idx;
 		status = (acc100_reg_read(d, address) >> 4) & 0xF;
-		address = HWPfQmgrGrpTmplateReg4Indx
+		address = HWPfQmgrGrpTemplateReg4Indx
				+ ACC100_BYTES_IN_WORD * template_idx;
 		if (status == 1) {
			acc100_reg_write(d, address, value);
@@ -4470,7 +4470,7 @@ rte_acc100_configure(const char *dev_name, struct rte_acc100_conf *conf)
 	acc100_reg_write(d, address, value);
 	/* Set default descriptor signature */
-	address = HWPfDmaDescriptorSignatuture;
+	address = HWPfDmaDescriptorSignature;
 	value = 0;
 	acc100_reg_write(d, address, value);
@@ -4522,19 +4522,19 @@ rte_acc100_configure(const char *dev_name, struct rte_acc100_conf *conf)
 	/* Template Priority in incremental order */
 	for (template_idx = 0; template_idx < ACC100_NUM_TMPL;
			template_idx++) {
-		address = HWPfQmgrGrpTmplateReg0Indx +
+		address = HWPfQmgrGrpTemplateReg0Indx +
				ACC100_BYTES_IN_WORD * (template_idx % 8);
 		value = ACC100_TMPL_PRI_0;
 		acc100_reg_write(d, address, value);
-		address = HWPfQmgrGrpTmplateReg1Indx +
+		address = HWPfQmgrGrpTemplateReg1Indx +
				ACC100_BYTES_IN_WORD * (template_idx % 8);
 		value = ACC100_TMPL_PRI_1;
 		acc100_reg_write(d, address, value);
-		address = HWPfQmgrGrpTmplateReg2indx +
+		address = HWPfQmgrGrpTemplateReg2indx +
ACC100_BYTES_IN_WORD * (template_idx % 8); value = ACC100_TMPL_PRI_2; acc100_reg_write(d, address, value); - address = HWPfQmgrGrpTmplateReg3Indx + + address = HWPfQmgrGrpTemplateReg3Indx + ACC100_BYTES_IN_WORD * (template_idx % 8); value = ACC100_TMPL_PRI_3; acc100_reg_write(d, address, value); @@ -4548,7 +4548,7 @@ rte_acc100_configure(const char *dev_name, struct rte_acc100_conf *conf) for (template_idx = 0; template_idx < ACC100_NUM_TMPL; template_idx++) { value = 0; - address = HWPfQmgrGrpTmplateReg4Indx + address = HWPfQmgrGrpTemplateReg4Indx + ACC100_BYTES_IN_WORD * template_idx; acc100_reg_write(d, address, value); } @@ -4561,7 +4561,7 @@ rte_acc100_configure(const char *dev_name, struct rte_acc100_conf *conf) for (template_idx = ACC100_SIG_UL_4G; template_idx <= ACC100_SIG_UL_4G_LAST; template_idx++) { - address = HWPfQmgrGrpTmplateReg4Indx + address = HWPfQmgrGrpTemplateReg4Indx + ACC100_BYTES_IN_WORD * template_idx; acc100_reg_write(d, address, value); } @@ -4579,7 +4579,7 @@ rte_acc100_configure(const char *dev_name, struct rte_acc100_conf *conf) address = HwPfFecUl5gIbDebugReg + ACC100_ENGINE_OFFSET * template_idx; status = (acc100_reg_read(d, address) >> 4) & 0xF; - address = HWPfQmgrGrpTmplateReg4Indx + address = HWPfQmgrGrpTemplateReg4Indx + ACC100_BYTES_IN_WORD * template_idx; if (status == 1) { acc100_reg_write(d, address, value); @@ -4600,7 +4600,7 @@ rte_acc100_configure(const char *dev_name, struct rte_acc100_conf *conf) for (template_idx = ACC100_SIG_DL_4G; template_idx <= ACC100_SIG_DL_4G_LAST; template_idx++) { - address = HWPfQmgrGrpTmplateReg4Indx + address = HWPfQmgrGrpTemplateReg4Indx + ACC100_BYTES_IN_WORD * template_idx; acc100_reg_write(d, address, value); #if RTE_ACC100_SINGLE_FEC == 1 @@ -4616,7 +4616,7 @@ rte_acc100_configure(const char *dev_name, struct rte_acc100_conf *conf) for (template_idx = ACC100_SIG_DL_5G; template_idx <= ACC100_SIG_DL_5G_LAST; template_idx++) { - address = HWPfQmgrGrpTmplateReg4Indx + address = 
HWPfQmgrGrpTemplateReg4Indx + ACC100_BYTES_IN_WORD * template_idx; acc100_reg_write(d, address, value); #if RTE_ACC100_SINGLE_FEC == 1 diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c index 92decc3e..21d35292 100644 --- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c +++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c @@ -2097,7 +2097,7 @@ dequeue_enc_one_op_cb(struct fpga_queue *q, struct rte_bbdev_enc_op **op, rte_bbdev_log_debug("DMA response desc %p", desc); *op = desc->enc_req.op_addr; - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); (*op)->status = desc_error << RTE_BBDEV_DATA_ERROR; @@ -2139,7 +2139,7 @@ dequeue_enc_one_op_tb(struct fpga_queue *q, struct rte_bbdev_enc_op **op, for (cb_idx = 0; cb_idx < cbs_in_op; ++cb_idx) { desc = q->ring_addr + ((q->head_free_desc + desc_offset + cb_idx) & q->sw_ring_wrap_mask); - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); status |= desc_error << RTE_BBDEV_DATA_ERROR; rte_bbdev_log_debug("DMA response desc %p", desc); @@ -2177,7 +2177,7 @@ dequeue_dec_one_op_cb(struct fpga_queue *q, struct rte_bbdev_dec_op **op, (*op)->turbo_dec.iter_count = (desc->dec_req.iter + 2) >> 1; /* crc_pass = 0 when decoder fails */ (*op)->status = !(desc->dec_req.crc_pass) << RTE_BBDEV_CRC_ERROR; - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); (*op)->status |= desc_error << RTE_BBDEV_DATA_ERROR; return 1; @@ -2221,7 +2221,7 @@ dequeue_dec_one_op_tb(struct fpga_queue *q, struct rte_bbdev_dec_op **op, iter_count = RTE_MAX(iter_count, (uint8_t) desc->dec_req.iter); /* crc_pass = 0 when decoder fails, one fails all */ status |= 
!(desc->dec_req.crc_pass) << RTE_BBDEV_CRC_ERROR; - /* Check the decriptor error field, return 1 on error */ + /* Check the descriptor error field, return 1 on error */ desc_error = check_desc_error(desc->enc_req.error); status |= desc_error << RTE_BBDEV_DATA_ERROR; rte_bbdev_log_debug("DMA response desc %p", desc); diff --git a/drivers/baseband/null/bbdev_null.c b/drivers/baseband/null/bbdev_null.c index 753d920e..08cff582 100644 --- a/drivers/baseband/null/bbdev_null.c +++ b/drivers/baseband/null/bbdev_null.c @@ -31,7 +31,7 @@ struct bbdev_null_params { uint16_t queues_num; /*< Null BBDEV queues number */ }; -/* Accecptable params for null BBDEV devices */ +/* Acceptable params for null BBDEV devices */ #define BBDEV_NULL_MAX_NB_QUEUES_ARG "max_nb_queues" #define BBDEV_NULL_SOCKET_ID_ARG "socket_id" diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c index b234bb75..c6b1eb86 100644 --- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c +++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c @@ -61,7 +61,7 @@ struct turbo_sw_params { uint16_t queues_num; /*< Turbo SW device queues number */ }; -/* Accecptable params for Turbo SW devices */ +/* Acceptable params for Turbo SW devices */ #define TURBO_SW_MAX_NB_QUEUES_ARG "max_nb_queues" #define TURBO_SW_SOCKET_ID_ARG "socket_id" diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index 737ac8d8..5546a9cb 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -70,7 +70,7 @@ compare_dpaa_devices(struct rte_dpaa_device *dev1, { int comp = 0; - /* Segragating ETH from SEC devices */ + /* Segregating ETH from SEC devices */ if (dev1->device_type > dev2->device_type) comp = 1; else if (dev1->device_type < dev2->device_type) diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h index 7ef2f3b2..9b63e559 100644 --- a/drivers/bus/dpaa/include/fsl_qman.h +++ 
b/drivers/bus/dpaa/include/fsl_qman.h @@ -1353,7 +1353,7 @@ __rte_internal int qman_irqsource_add(u32 bits); /** - * qman_fq_portal_irqsource_add - samilar to qman_irqsource_add, but it + * qman_fq_portal_irqsource_add - similar to qman_irqsource_add, but it * takes portal (fq specific) as input rather than using the thread affined * portal. */ @@ -1416,7 +1416,7 @@ __rte_internal struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq); /** - * qman_dqrr_consume - Consume the DQRR entriy after volatile dequeue + * qman_dqrr_consume - Consume the DQRR entry after volatile dequeue * @fq: Frame Queue on which the volatile dequeue command is issued * @dq: DQRR entry to consume. This is the one which is provided by the * 'qbman_dequeue' command. @@ -2017,7 +2017,7 @@ int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal, * @cgr: the 'cgr' object to deregister * * "Unplugs" this CGR object from the portal affine to the cpu on which this API - * is executed. This must be excuted on the same affine portal on which it was + * is executed. This must be executed on the same affine portal on which it was * created. */ __rte_internal diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h index dcf35e4a..97279421 100644 --- a/drivers/bus/dpaa/include/fsl_usd.h +++ b/drivers/bus/dpaa/include/fsl_usd.h @@ -40,7 +40,7 @@ struct dpaa_raw_portal { /* Specifies the stash request queue this portal should use */ uint8_t sdest; - /* Specifes a specific portal index to map or QBMAN_ANY_PORTAL_IDX + /* Specifies a specific portal index to map or QBMAN_ANY_PORTAL_IDX * for don't care. The portal index will be populated by the * driver when the ioctl() successfully completes. 
*/ diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h index a9229886..48d6b569 100644 --- a/drivers/bus/dpaa/include/process.h +++ b/drivers/bus/dpaa/include/process.h @@ -49,7 +49,7 @@ struct dpaa_portal_map { struct dpaa_ioctl_portal_map { /* Input parameter, is a qman or bman portal required. */ enum dpaa_portal_type type; - /* Specifes a specific portal index to map or 0xffffffff + /* Specifies a specific portal index to map or 0xffffffff * for don't care. */ uint32_t index; diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c index a0ef24cd..53fd7553 100644 --- a/drivers/bus/fslmc/fslmc_bus.c +++ b/drivers/bus/fslmc/fslmc_bus.c @@ -539,7 +539,7 @@ rte_fslmc_driver_unregister(struct rte_dpaa2_driver *driver) fslmc_bus = driver->fslmc_bus; - /* Cleanup the PA->VA Translation table; From whereever this function + /* Cleanup the PA->VA Translation table; From wherever this function * is called from. */ if (rte_eal_iova_mode() == RTE_IOVA_PA) diff --git a/drivers/bus/fslmc/fslmc_vfio.h b/drivers/bus/fslmc/fslmc_vfio.h index 133606a9..2394445b 100644 --- a/drivers/bus/fslmc/fslmc_vfio.h +++ b/drivers/bus/fslmc/fslmc_vfio.h @@ -56,7 +56,7 @@ int rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle, int fslmc_vfio_setup_group(void); int fslmc_vfio_process_group(void); char *fslmc_get_container(void); -int fslmc_get_container_group(int *gropuid); +int fslmc_get_container_group(int *groupid); int rte_fslmc_vfio_dmamap(void); int rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size); diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c index 2210a0fa..52605ea2 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c +++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c @@ -178,7 +178,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev) dpio_epoll_fd = epoll_create(1); ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0); if (ret) { - 
DPAA2_BUS_ERR("Interrupt registeration failed"); + DPAA2_BUS_ERR("Interrupt registration failed"); return -1; } diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h index b1bba1ac..957fc62d 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h +++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h @@ -156,7 +156,7 @@ struct dpaa2_queue { struct rte_cryptodev_data *crypto_data; }; uint32_t fqid; /*!< Unique ID of this queue */ - uint16_t flow_id; /*!< To be used by DPAA2 frmework */ + uint16_t flow_id; /*!< To be used by DPAA2 framework */ uint8_t tc_index; /*!< traffic class identifier */ uint8_t cgid; /*! < Congestion Group id for this queue */ uint64_t rx_pkts; diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h index eb68c9ca..5375ea38 100644 --- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h +++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h @@ -510,7 +510,7 @@ int qbman_result_has_new_result(struct qbman_swp *s, struct qbman_result *dq); /** - * qbman_check_command_complete() - Check if the previous issued dq commnd + * qbman_check_command_complete() - Check if the previous issued dq command * is completed and results are available in memory. * @s: the software portal object. * @dq: the dequeue result read from the memory. @@ -687,7 +687,7 @@ uint16_t qbman_result_DQ_seqnum(const struct qbman_result *dq); /** * qbman_result_DQ_odpid() - Get the seqnum field in dequeue response - * odpid is valid only if ODPVAILD flag is TRUE. + * odpid is valid only if ODPVALID flag is TRUE. * @dq: the dequeue result. * * Return odpid. @@ -743,7 +743,7 @@ const struct qbman_fd *qbman_result_DQ_fd(const struct qbman_result *dq); * qbman_result_SCN_state() - Get the state field in State-change notification * @scn: the state change notification. * - * Return the state in the notifiation. + * Return the state in the notification. 
*/ __rte_internal uint8_t qbman_result_SCN_state(const struct qbman_result *scn); @@ -825,7 +825,7 @@ uint64_t qbman_result_bpscn_ctx(const struct qbman_result *scn); /* Parsing CGCU */ /** - * qbman_result_cgcu_cgid() - Check CGCU resouce id, i.e. cgid + * qbman_result_cgcu_cgid() - Check CGCU resource id, i.e. cgid * @scn: the state change notification. * * Return the CGCU resource id. @@ -903,14 +903,14 @@ void qbman_eq_desc_clear(struct qbman_eq_desc *d); __rte_internal void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success); /** - * qbman_eq_desc_set_orp() - Set order-resotration in the enqueue descriptor + * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor * @d: the enqueue descriptor. * @response_success: 1 = enqueue with response always; 0 = enqueue with * rejections returned on a FQ. * @opr_id: the order point record id. * @seqnum: the order restoration sequence number. - * @incomplete: indiates whether this is the last fragments using the same - * sequeue number. + * @incomplete: indicates whether this is the last fragments using the same + * sequence number. */ __rte_internal void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success, @@ -1052,10 +1052,10 @@ __rte_internal uint8_t qbman_result_eqresp_rspid(struct qbman_result *eqresp); /** - * qbman_result_eqresp_rc() - determines if enqueue command is sucessful. + * qbman_result_eqresp_rc() - determines if enqueue command is successful. * @eqresp: enqueue response. * - * Return 0 when command is sucessful. + * Return 0 when command is successful. */ __rte_internal uint8_t qbman_result_eqresp_rc(struct qbman_result *eqresp); @@ -1250,7 +1250,7 @@ int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid); /** * These functions change the FQ flow-control stuff between XON/XOFF. (The * default is XON.) This setting doesn't affect enqueues to the FQ, just - * dequeues. XOFF FQs will remain in the tenatively-scheduled state, even when + * dequeues. 
XOFF FQs will remain in the tentatively-scheduled state, even when * non-empty, meaning they won't be selected for scheduled dequeuing. If a FQ is * changed to XOFF after it had already become truly-scheduled to a channel, and * a pull dequeue of that channel occurs that selects that FQ for dequeuing, diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c index 1a5e7c2d..cd0d0b16 100644 --- a/drivers/bus/pci/linux/pci_vfio.c +++ b/drivers/bus/pci/linux/pci_vfio.c @@ -815,7 +815,7 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev) continue; } - /* skip non-mmapable BARs */ + /* skip non-mmappable BARs */ if ((reg->flags & VFIO_REGION_INFO_FLAG_MMAP) == 0) { free(reg); continue; } diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h index 28567999..5af6be00 100644 --- a/drivers/bus/vdev/rte_bus_vdev.h +++ b/drivers/bus/vdev/rte_bus_vdev.h @@ -197,7 +197,7 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg); int rte_vdev_init(const char *name, const char *args); /** - * Uninitalize a driver specified by name. + * Uninitialize a driver specified by name. * * @param name * The pointer to a driver name to be uninitialized. diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c index 519ca9c6..36772736 100644 --- a/drivers/bus/vmbus/vmbus_common.c +++ b/drivers/bus/vmbus/vmbus_common.c @@ -134,7 +134,7 @@ vmbus_probe_one_driver(struct rte_vmbus_driver *dr, /* * If device class GUID matches, call the probe function of - * registere drivers for the vmbus device. + * registered drivers for the vmbus device. * Return -1 if initialization failed, * and 1 if no driver found for this device. 
*/ diff --git a/drivers/common/cnxk/roc_bphy_cgx.c b/drivers/common/cnxk/roc_bphy_cgx.c index 7449cbe7..c3be3c90 100644 --- a/drivers/common/cnxk/roc_bphy_cgx.c +++ b/drivers/common/cnxk/roc_bphy_cgx.c @@ -14,7 +14,7 @@ #define CGX_CMRX_INT_OVERFLW BIT_ULL(1) /* * CN10K stores number of lmacs in 4 bit filed - * in contraty to CN9K which uses only 3 bits. + * in contrary to CN9K which uses only 3 bits. * * In theory masks should differ yet on CN9K * bits beyond specified range contain zeros. diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c index 8f8e6d38..aac0fd6a 100644 --- a/drivers/common/cnxk/roc_cpt.c +++ b/drivers/common/cnxk/roc_cpt.c @@ -375,7 +375,7 @@ cpt_available_lfs_get(struct dev *dev, uint16_t *nb_lf) } int -cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, +cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmask, uint8_t blkaddr, bool inl_dev_sso) { struct cpt_lf_alloc_req_msg *req; @@ -390,7 +390,7 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr, req->sso_pf_func = nix_inl_dev_pffunc_get(); else req->sso_pf_func = idev_sso_pffunc_get(); - req->eng_grpmsk = eng_grpmsk; + req->eng_grpmask = eng_grpmask; req->blkaddr = blkaddr; return mbox_process(mbox); @@ -481,7 +481,7 @@ roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf) struct cpt *cpt = roc_cpt_to_cpt_priv(roc_cpt); uint8_t blkaddr[ROC_CPT_MAX_BLKS]; struct msix_offset_rsp *rsp; - uint8_t eng_grpmsk; + uint8_t eng_grpmask; int blknum = 0; int rc, i; @@ -508,11 +508,11 @@ roc_cpt_dev_configure(struct roc_cpt *roc_cpt, int nb_lf) for (i = 0; i < nb_lf; i++) cpt->lf_blkaddr[i] = blkaddr[blknum]; - eng_grpmsk = (1 << roc_cpt->eng_grp[CPT_ENG_TYPE_AE]) | + eng_grpmask = (1 << roc_cpt->eng_grp[CPT_ENG_TYPE_AE]) | (1 << roc_cpt->eng_grp[CPT_ENG_TYPE_SE]) | (1 << roc_cpt->eng_grp[CPT_ENG_TYPE_IE]); - rc = cpt_lfs_alloc(&cpt->dev, eng_grpmsk, blkaddr[blknum], false); + rc = cpt_lfs_alloc(&cpt->dev, eng_grpmask, blkaddr[blknum], false); if 
(rc) goto lfs_detach; diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h index 61dec9a1..4bc888b2 100644 --- a/drivers/common/cnxk/roc_cpt_priv.h +++ b/drivers/common/cnxk/roc_cpt_priv.h @@ -21,7 +21,7 @@ roc_cpt_to_cpt_priv(struct roc_cpt *roc_cpt) int cpt_lfs_attach(struct dev *dev, uint8_t blkaddr, bool modify, uint16_t nb_lf); int cpt_lfs_detach(struct dev *dev); -int cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blk, +int cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmask, uint8_t blk, bool inl_dev_sso); int cpt_lfs_free(struct dev *dev); int cpt_lf_init(struct roc_cpt_lf *lf); diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h index b63fe108..ae576d1b 100644 --- a/drivers/common/cnxk/roc_mbox.h +++ b/drivers/common/cnxk/roc_mbox.h @@ -1328,7 +1328,7 @@ struct cpt_lf_alloc_req_msg { struct mbox_msghdr hdr; uint16_t __io nix_pf_func; uint16_t __io sso_pf_func; - uint16_t __io eng_grpmsk; + uint16_t __io eng_grpmask; uint8_t __io blkaddr; }; @@ -1739,7 +1739,7 @@ enum tim_af_status { TIM_AF_INVALID_BSIZE = -813, TIM_AF_INVALID_ENABLE_PERIODIC = -814, TIM_AF_INVALID_ENABLE_DONTFREE = -815, - TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816, + TIM_AF_ENA_DONTFREE_NSET_PERIODIC = -816, TIM_AF_RING_ALREADY_DISABLED = -817, }; diff --git a/drivers/common/cnxk/roc_nix_bpf.c b/drivers/common/cnxk/roc_nix_bpf.c index 6996a54b..e483559e 100644 --- a/drivers/common/cnxk/roc_nix_bpf.c +++ b/drivers/common/cnxk/roc_nix_bpf.c @@ -138,11 +138,11 @@ nix_lf_bpf_dump(__io struct nix_band_prof_s *bpf) { plt_dump("W0: cir_mantissa \t\t\t%d\nW0: pebs_mantissa \t\t\t0x%03x", bpf->cir_mantissa, bpf->pebs_mantissa); - plt_dump("W0: peir_matissa \t\t\t\t%d\nW0: cbs_exponent \t\t\t%d", - bpf->peir_mantissa, bpf->cbs_exponent); + plt_dump("W0: peer_mantissa \t\t\t\t%d\nW0: cbs_exponent \t\t\t%d", + bpf->peer_mantissa, bpf->cbs_exponent); plt_dump("W0: cir_exponent \t\t\t%d\nW0: pebs_exponent \t\t\t%d", bpf->cir_exponent, 
bpf->pebs_exponent); - plt_dump("W0: peir_exponent \t\t\t%d\n", bpf->peir_exponent); + plt_dump("W0: peer_exponent \t\t\t%d\n", bpf->peer_exponent); plt_dump("W0: tnl_ena \t\t\t%d\n", bpf->tnl_ena); plt_dump("W0: icolor \t\t\t%d\n", bpf->icolor); plt_dump("W0: pc_mode \t\t\t%d\n", bpf->pc_mode); @@ -608,8 +608,8 @@ roc_nix_bpf_config(struct roc_nix *roc_nix, uint16_t id, meter_rate_to_nix(cfg->algo2698.pir, &exponent_p, &mantissa_p, &div_exp_p, policer_timeunit); - aq->prof.peir_mantissa = mantissa_p; - aq->prof.peir_exponent = exponent_p; + aq->prof.peer_mantissa = mantissa_p; + aq->prof.peer_exponent = exponent_p; meter_burst_to_nix(cfg->algo2698.cbs, &exponent_p, &mantissa_p); aq->prof.cbs_mantissa = mantissa_p; @@ -620,11 +620,11 @@ roc_nix_bpf_config(struct roc_nix *roc_nix, uint16_t id, aq->prof.pebs_exponent = exponent_p; aq->prof_mask.cir_mantissa = ~(aq->prof_mask.cir_mantissa); - aq->prof_mask.peir_mantissa = ~(aq->prof_mask.peir_mantissa); + aq->prof_mask.peer_mantissa = ~(aq->prof_mask.peer_mantissa); aq->prof_mask.cbs_mantissa = ~(aq->prof_mask.cbs_mantissa); aq->prof_mask.pebs_mantissa = ~(aq->prof_mask.pebs_mantissa); aq->prof_mask.cir_exponent = ~(aq->prof_mask.cir_exponent); - aq->prof_mask.peir_exponent = ~(aq->prof_mask.peir_exponent); + aq->prof_mask.peer_exponent = ~(aq->prof_mask.peer_exponent); aq->prof_mask.cbs_exponent = ~(aq->prof_mask.cbs_exponent); aq->prof_mask.pebs_exponent = ~(aq->prof_mask.pebs_exponent); break; @@ -637,8 +637,8 @@ roc_nix_bpf_config(struct roc_nix *roc_nix, uint16_t id, meter_rate_to_nix(cfg->algo4115.eir, &exponent_p, &mantissa_p, &div_exp_p, policer_timeunit); - aq->prof.peir_mantissa = mantissa_p; - aq->prof.peir_exponent = exponent_p; + aq->prof.peer_mantissa = mantissa_p; + aq->prof.peer_exponent = exponent_p; meter_burst_to_nix(cfg->algo4115.cbs, &exponent_p, &mantissa_p); aq->prof.cbs_mantissa = mantissa_p; @@ -649,12 +649,12 @@ roc_nix_bpf_config(struct roc_nix *roc_nix, uint16_t id, aq->prof.pebs_exponent = 
exponent_p; aq->prof_mask.cir_mantissa = ~(aq->prof_mask.cir_mantissa); - aq->prof_mask.peir_mantissa = ~(aq->prof_mask.peir_mantissa); + aq->prof_mask.peer_mantissa = ~(aq->prof_mask.peer_mantissa); aq->prof_mask.cbs_mantissa = ~(aq->prof_mask.cbs_mantissa); aq->prof_mask.pebs_mantissa = ~(aq->prof_mask.pebs_mantissa); aq->prof_mask.cir_exponent = ~(aq->prof_mask.cir_exponent); - aq->prof_mask.peir_exponent = ~(aq->prof_mask.peir_exponent); + aq->prof_mask.peer_exponent = ~(aq->prof_mask.peer_exponent); aq->prof_mask.cbs_exponent = ~(aq->prof_mask.cbs_exponent); aq->prof_mask.pebs_exponent = ~(aq->prof_mask.pebs_exponent); break; diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c index 3257fa67..3d81247a 100644 --- a/drivers/common/cnxk/roc_nix_tm_ops.c +++ b/drivers/common/cnxk/roc_nix_tm_ops.c @@ -107,7 +107,7 @@ nix_tm_adjust_shaper_pps_rate(struct nix_tm_shaper_profile *profile) if (profile->peak.rate && min_rate > profile->peak.rate) min_rate = profile->peak.rate; - /* Each packet accomulate single count, whereas HW + /* Each packet accumulate single count, whereas HW * considers each unit as Byte, so we need convert * user pps to bps */ diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c index ba7f89b4..82014a2c 100644 --- a/drivers/common/cnxk/roc_npc_mcam.c +++ b/drivers/common/cnxk/roc_npc_mcam.c @@ -234,7 +234,7 @@ npc_get_kex_capability(struct npc *npc) /* Ethtype: Offset 12B, len 2B */ kex_cap.bit.ethtype_0 = npc_is_kex_enabled( npc, NPC_LID_LA, NPC_LT_LA_ETHER, 12 * 8, 2 * 8); - /* QINQ VLAN Ethtype: ofset 8B, len 2B */ + /* QINQ VLAN Ethtype: offset 8B, len 2B */ kex_cap.bit.ethtype_x = npc_is_kex_enabled( npc, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 8 * 8, 2 * 8); /* VLAN ID0 : Outer VLAN: Offset 2B, len 2B */ diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h index 712302bc..74e0fb2e 100644 --- a/drivers/common/cnxk/roc_npc_priv.h +++ 
b/drivers/common/cnxk/roc_npc_priv.h @@ -363,7 +363,7 @@ struct npc { uint32_t rss_grps; /* rss groups supported */ uint16_t flow_prealloc_size; /* Pre allocated mcam size */ uint16_t flow_max_priority; /* Max priority for flow */ - uint16_t switch_header_type; /* Suppprted switch header type */ + uint16_t switch_header_type; /* Supported switch header type */ uint32_t mark_actions; /* Number of mark actions */ uint32_t vtag_strip_actions; /* vtag insert/strip actions */ uint16_t pf_func; /* pf_func of device */ diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c index 534b697b..ca58e19a 100644 --- a/drivers/common/cnxk/roc_tim.c +++ b/drivers/common/cnxk/roc_tim.c @@ -73,7 +73,7 @@ tim_err_desc(int rc) case TIM_AF_INVALID_ENABLE_DONTFREE: plt_err("Invalid Don't free value."); break; - case TIM_AF_ENA_DONTFRE_NSET_PERIODIC: + case TIM_AF_ENA_DONTFREE_NSET_PERIODIC: plt_err("Don't free bit not set when periodic is enabled."); break; case TIM_AF_RING_ALREADY_DISABLED: diff --git a/drivers/common/cpt/cpt_ucode.h b/drivers/common/cpt/cpt_ucode.h index e015cf66..e1f2f600 100644 --- a/drivers/common/cpt/cpt_ucode.h +++ b/drivers/common/cpt/cpt_ucode.h @@ -246,7 +246,7 @@ cpt_fc_ciph_set_key(struct cpt_ctx *cpt_ctx, cipher_type_t type, if (cpt_ctx->fc_type == FC_GEN) { /* * We need to always say IV is from DPTR as user can - * sometimes iverride IV per operation. + * sometimes override IV per operation. 
*/ fctx->enc.iv_source = CPT_FROM_DPTR; @@ -3035,7 +3035,7 @@ prepare_iov_from_pkt_inplace(struct rte_mbuf *pkt, tailroom = rte_pktmbuf_tailroom(pkt); if (likely((headroom >= 24) && (tailroom >= 8))) { - /* In 83XX this is prerequivisit for Direct mode */ + /* In 83XX this is prerequisite for Direct mode */ *flags |= SINGLE_BUF_HEADTAILROOM; } param->bufs[0].vaddr = seg_data; diff --git a/drivers/common/cpt/cpt_ucode_asym.h b/drivers/common/cpt/cpt_ucode_asym.h index a67ded64..30dd5399 100644 --- a/drivers/common/cpt/cpt_ucode_asym.h +++ b/drivers/common/cpt/cpt_ucode_asym.h @@ -779,7 +779,7 @@ cpt_ecdsa_verify_prep(struct rte_crypto_ecdsa_op_param *ecdsa, * Set dlen = sum(sizeof(fpm address), ROUNDUP8(message len), * ROUNDUP8(sign len(r and s), public key len(x and y coordinates), * prime len, order len)). - * Please note sign, public key and order can not excede prime length + * Please note sign, public key and order can not exceed prime length * i.e. 6 * p_align */ dlen = sizeof(fpm_table_iova) + m_align + (8 * p_align); diff --git a/drivers/common/dpaax/caamflib/desc/algo.h b/drivers/common/dpaax/caamflib/desc/algo.h index 6bb91505..e0848f09 100644 --- a/drivers/common/dpaax/caamflib/desc/algo.h +++ b/drivers/common/dpaax/caamflib/desc/algo.h @@ -67,7 +67,7 @@ cnstr_shdsc_zuce(uint32_t *descbuf, bool ps, bool swap, * @authlen: size of digest * * The IV prepended before hmac payload must be 8 bytes consisting - * of COUNT||BEAERER||DIR. The COUNT is of 32-bits, bearer is of 5 bits and + * of COUNT||BEARER||DIR. The COUNT is of 32-bits, bearer is of 5 bits and * direction is of 1 bit - totalling to 38 bits. 
* * Return: size of descriptor written in words or negative number on error diff --git a/drivers/common/dpaax/caamflib/desc/ipsec.h b/drivers/common/dpaax/caamflib/desc/ipsec.h index 668d2164..499f4f93 100644 --- a/drivers/common/dpaax/caamflib/desc/ipsec.h +++ b/drivers/common/dpaax/caamflib/desc/ipsec.h @@ -1437,7 +1437,7 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps, CAAM_CMD_SZ) /** - * cnstr_shdsc_authenc - authenc-like descriptor + * cnstr_shdsc_authentic - authentic-like descriptor * @descbuf: pointer to buffer used for descriptor construction * @ps: if 36/40bit addressing is desired, this parameter must be true * @swap: if true, perform descriptor byte swapping on a 4-byte boundary @@ -1502,7 +1502,7 @@ cnstr_shdsc_ipsec_new_decap(uint32_t *descbuf, bool ps, * Return: size of descriptor written in words or negative number on error */ static inline int -cnstr_shdsc_authenc(uint32_t *descbuf, bool ps, bool swap, +cnstr_shdsc_authentic(uint32_t *descbuf, bool ps, bool swap, enum rta_share_type share, struct alginfo *cipherdata, struct alginfo *authdata, diff --git a/drivers/common/dpaax/caamflib/desc/sdap.h b/drivers/common/dpaax/caamflib/desc/sdap.h index b2497a54..07f55b5b 100644 --- a/drivers/common/dpaax/caamflib/desc/sdap.h +++ b/drivers/common/dpaax/caamflib/desc/sdap.h @@ -492,10 +492,10 @@ pdcp_sdap_insert_snoop_op(struct program *p, bool swap __maybe_unused, /* Set the variable size of data the register will write */ if (dir == OP_TYPE_ENCAP_PROTOCOL) { - /* We will add the interity data so add its length */ + /* We will add the integrity data so add its length */ MATHI(p, SEQINSZ, ADD, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2); } else { - /* We will check the interity data so remove its length */ + /* We will check the integrity data so remove its length */ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQOUTSZ, 4, IMMED2); /* Do not take the ICV in the out-snooping configuration */ MATHI(p, SEQINSZ, SUB, PDCP_MAC_I_LEN, VSEQINSZ, 4, IMMED2); @@ 
-803,7 +803,7 @@ static inline int pdcp_sdap_insert_no_snoop_op( CLRW_CLR_C1MODE, CLRW, 0, 4, IMMED); - /* Load the key for authentcation */ + /* Load the key for authentication */ KEY(p, KEY1, authdata->key_enc_flags, authdata->key, authdata->keylen, INLINE_KEY(authdata)); diff --git a/drivers/common/dpaax/caamflib/rta/operation_cmd.h b/drivers/common/dpaax/caamflib/rta/operation_cmd.h index 3d339cb0..e456ad3c 100644 --- a/drivers/common/dpaax/caamflib/rta/operation_cmd.h +++ b/drivers/common/dpaax/caamflib/rta/operation_cmd.h @@ -199,7 +199,7 @@ __rta_alg_aai_zuca(uint16_t aai) } struct alg_aai_map { - uint32_t chipher_algo; + uint32_t cipher_algo; int (*aai_func)(uint16_t); uint32_t class; }; @@ -242,7 +242,7 @@ rta_operation(struct program *program, uint32_t cipher_algo, int ret; for (i = 0; i < alg_table_sz[rta_sec_era]; i++) { - if (alg_table[i].chipher_algo == cipher_algo) { + if (alg_table[i].cipher_algo == cipher_algo) { if ((aai == OP_ALG_AAI_XCBC_MAC) || (aai == OP_ALG_AAI_CBC_XCBCMAC)) opcode |= cipher_algo | OP_TYPE_CLASS2_ALG; @@ -340,7 +340,7 @@ rta_operation2(struct program *program, uint32_t cipher_algo, int ret; for (i = 0; i < alg_table_sz[rta_sec_era]; i++) { - if (alg_table[i].chipher_algo == cipher_algo) { + if (alg_table[i].cipher_algo == cipher_algo) { if ((aai == OP_ALG_AAI_XCBC_MAC) || (aai == OP_ALG_AAI_CBC_XCBCMAC) || (aai == OP_ALG_AAI_CMAC)) diff --git a/drivers/common/dpaax/dpaax_iova_table.c b/drivers/common/dpaax/dpaax_iova_table.c index 3d661102..9daac4bc 100644 --- a/drivers/common/dpaax/dpaax_iova_table.c +++ b/drivers/common/dpaax/dpaax_iova_table.c @@ -261,7 +261,7 @@ dpaax_iova_table_depopulate(void) rte_free(dpaax_iova_table_p->entries); dpaax_iova_table_p = NULL; - DPAAX_DEBUG("IOVA Table cleanedup"); + DPAAX_DEBUG("IOVA Table cleaned"); } int diff --git a/drivers/common/iavf/iavf_type.h b/drivers/common/iavf/iavf_type.h index 51267ca3..1cd87587 100644 --- a/drivers/common/iavf/iavf_type.h +++ 
b/drivers/common/iavf/iavf_type.h @@ -1006,7 +1006,7 @@ struct iavf_profile_tlv_section_record { u8 data[12]; }; -/* Generic AQ section in proflie */ +/* Generic AQ section in profile */ struct iavf_profile_aq_section { u16 opcode; u16 flags; diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h index 269578f7..80e754a1 100644 --- a/drivers/common/iavf/virtchnl.h +++ b/drivers/common/iavf/virtchnl.h @@ -233,7 +233,7 @@ static inline const char *virtchnl_op_str(enum virtchnl_ops v_opcode) case VIRTCHNL_OP_DCF_CMD_DESC: return "VIRTCHNL_OP_DCF_CMD_DESC"; case VIRTCHNL_OP_DCF_CMD_BUFF: - return "VIRTCHHNL_OP_DCF_CMD_BUFF"; + return "VIRTCHNL_OP_DCF_CMD_BUFF"; case VIRTCHNL_OP_DCF_DISABLE: return "VIRTCHNL_OP_DCF_DISABLE"; case VIRTCHNL_OP_DCF_GET_VSI_MAP: diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c index f1650f94..cc130221 100644 --- a/drivers/common/mlx5/mlx5_common.c +++ b/drivers/common/mlx5/mlx5_common.c @@ -854,7 +854,7 @@ static void mlx5_common_driver_init(void) static bool mlx5_common_initialized; /** - * One time innitialization routine for run-time dependency on glue library + * One time initialization routine for run-time dependency on glue library * for multiple PMDs. Each mlx5 PMD that depends on mlx5_common module, * must invoke in its constructor. */ diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c index c694aaf2..1537b5d4 100644 --- a/drivers/common/mlx5/mlx5_common_mr.c +++ b/drivers/common/mlx5/mlx5_common_mr.c @@ -1541,7 +1541,7 @@ mlx5_mempool_reg_create(struct rte_mempool *mp, unsigned int mrs_n, * Destroy a mempool registration object. * * @param standalone - * Whether @p mpr owns its MRs excludively, i.e. they are not shared. + * Whether @p mpr owns its MRs exclusively, i.e. they are not shared. 
*/ static void mlx5_mempool_reg_destroy(struct mlx5_mr_share_cache *share_cache, diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index e52b995e..7cd3d4fa 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1822,7 +1822,7 @@ mlx5_devx_cmd_create_td(void *ctx) * Pointer to file stream. * * @return - * 0 on success, a nagative value otherwise. + * 0 on success, a negative value otherwise. */ int mlx5_devx_cmd_flow_dump(void *fdb_domain __rte_unused, diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index d7f71646..2dbbed24 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -128,7 +128,7 @@ enum { enum { PARSE_GRAPH_NODE_CAP_LENGTH_MODE_FIXED = RTE_BIT32(0), - PARSE_GRAPH_NODE_CAP_LENGTH_MODE_EXPLISIT_FIELD = RTE_BIT32(1), + PARSE_GRAPH_NODE_CAP_LENGTH_MODE_EXPLICIT_FIELD = RTE_BIT32(1), PARSE_GRAPH_NODE_CAP_LENGTH_MODE_BITMASK_FIELD = RTE_BIT32(2) }; diff --git a/drivers/common/mlx5/mlx5_malloc.c b/drivers/common/mlx5/mlx5_malloc.c index b19501e1..cef3b88e 100644 --- a/drivers/common/mlx5/mlx5_malloc.c +++ b/drivers/common/mlx5/mlx5_malloc.c @@ -58,7 +58,7 @@ static struct mlx5_sys_mem mlx5_sys_mem = { * Check if the address belongs to memory seg list. * * @param addr - * Memory address to be ckeced. + * Memory address to be checked. * @param msl * Memory seg list. * @@ -109,7 +109,7 @@ mlx5_mem_update_msl(void *addr) * Check if the address belongs to rte memory. * * @param addr - * Memory address to be ckeced. + * Memory address to be checked. * * @return * True if it belongs, false otherwise. 
diff --git a/drivers/common/mlx5/mlx5_malloc.h b/drivers/common/mlx5/mlx5_malloc.h index 74b7eeb2..92149f7b 100644 --- a/drivers/common/mlx5/mlx5_malloc.h +++ b/drivers/common/mlx5/mlx5_malloc.h @@ -19,7 +19,7 @@ extern "C" { enum mlx5_mem_flags { MLX5_MEM_ANY = 0, - /* Memory will be allocated dpends on sys_mem_en. */ + /* Memory will be allocated depending on sys_mem_en. */ MLX5_MEM_SYS = 1 << 0, /* Memory should be allocated from system. */ MLX5_MEM_RTE = 1 << 1, diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 2ded67e8..d921d525 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -3112,8 +3112,8 @@ struct mlx5_ifc_conn_track_aso_bits { u8 max_ack_window[0x3]; u8 reserved_at_1f8[0x1]; u8 retransmission_counter[0x3]; - u8 retranmission_limit_exceeded[0x1]; - u8 retranmission_limit[0x3]; /* End of DW15. */ + u8 retransmission_limit_exceeded[0x1]; + u8 retransmission_limit[0x3]; /* End of DW15. */ }; struct mlx5_ifc_conn_track_offload_bits { @@ -4172,7 +4172,7 @@ mlx5_flow_mark_get(uint32_t val) * timestamp format supported by the queue. * * @return - * Converted timstamp format settings. + * Converted timestamp format settings. */ static inline uint32_t mlx5_ts_format_conv(uint32_t ts_format) diff --git a/drivers/common/mlx5/windows/mlx5_common_os.c b/drivers/common/mlx5/windows/mlx5_common_os.c index 162c7476..c3cfc315 100644 --- a/drivers/common/mlx5/windows/mlx5_common_os.c +++ b/drivers/common/mlx5/windows/mlx5_common_os.c @@ -302,7 +302,7 @@ mlx5_os_umem_dereg(void *pumem) } /** - * Register mr. Given protection doamin pointer, pointer to addr and length + * Register mr. Given protection domain pointer, pointer to addr and length * register the memory region. * * @param[in] pd * @@ -310,7 +310,7 @@ mlx5_os_umem_dereg(void *pumem) * @param[in] addr * Pointer to memory start address (type devx_device_ctx). * @param[in] length - * Lengtoh of the memory to register.
+ * Length of the memory to register. * @param[out] pmd_mr * pmd_mr struct set with lkey, address, length, pointer to mr object, mkey * diff --git a/drivers/common/mlx5/windows/mlx5_common_os.h b/drivers/common/mlx5/windows/mlx5_common_os.h index 3afce56c..61fc8dd7 100644 --- a/drivers/common/mlx5/windows/mlx5_common_os.h +++ b/drivers/common/mlx5/windows/mlx5_common_os.h @@ -21,7 +21,7 @@ /** * This API allocates aligned or non-aligned memory. The free can be on either * aligned or nonaligned memory. To be protected - even though there may be no - * alignment - in Windows this API will unconditioanlly call _aligned_malloc() + * alignment - in Windows this API will unconditionally call _aligned_malloc() * with at least a minimal alignment size. * * @param[in] align diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h index 25b521a7..8d8fe58d 100644 --- a/drivers/common/octeontx2/otx2_mbox.h +++ b/drivers/common/octeontx2/otx2_mbox.h @@ -1625,7 +1625,7 @@ enum tim_af_status { TIM_AF_INVALID_BSIZE = -813, TIM_AF_INVALID_ENABLE_PERIODIC = -814, TIM_AF_INVALID_ENABLE_DONTFREE = -815, - TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816, + TIM_AF_ENA_DONTFREE_NSET_PERIODIC = -816, TIM_AF_RING_ALREADY_DISABLED = -817, }; diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h index a6d403fa..12a7258c 100644 --- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h +++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h @@ -72,7 +72,7 @@ #define ADF_SIZE_TO_RING_SIZE_IN_BYTES(SIZE) ((1 << (SIZE - 1)) << 7) #define ADF_RING_SIZE_IN_BYTES_TO_SIZE(SIZE) ((1 << (SIZE - 1)) >> 7) -/* Minimum ring bufer size for memory allocation */ +/* Minimum ring buffer size for
memory allocation */ #define ADF_RING_SIZE_BYTES_MIN(SIZE) ((SIZE < ADF_RING_SIZE_4K) ? \ ADF_RING_SIZE_4K : SIZE) #define ADF_RING_SIZE_MODULO(SIZE) (SIZE + 0x6) diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h index 3860c283..224254be 100644 --- a/drivers/common/sfc_efx/efsys.h +++ b/drivers/common/sfc_efx/efsys.h @@ -616,7 +616,7 @@ typedef struct efsys_bar_s { #define EFSYS_DMA_SYNC_FOR_KERNEL(_esmp, _offset, _size) ((void)0) -/* Just avoid store and compiler (impliciltly) reordering */ +/* Just avoid store and compiler (implicitly) reordering */ #define EFSYS_DMA_SYNC_FOR_DEVICE(_esmp, _offset, _size) rte_wmb() /* TIMESTAMP */ diff --git a/drivers/compress/octeontx/include/zip_regs.h b/drivers/compress/octeontx/include/zip_regs.h index 96e538bb..94a48cde 100644 --- a/drivers/compress/octeontx/include/zip_regs.h +++ b/drivers/compress/octeontx/include/zip_regs.h @@ -195,7 +195,7 @@ union zip_inst_s { uint64_t bf : 1; /** Comp/decomp operation */ uint64_t op : 2; - /** Data sactter */ + /** Data scatter */ uint64_t ds : 1; /** Data gather */ uint64_t dg : 1; @@ -376,7 +376,7 @@ union zip_inst_s { uint64_t bf : 1; /** Comp/decomp operation */ uint64_t op : 2; - /** Data sactter */ + /** Data scatter */ uint64_t ds : 1; /** Data gather */ uint64_t dg : 1; diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h index e43f7f5c..e29b8b87 100644 --- a/drivers/compress/octeontx/otx_zip.h +++ b/drivers/compress/octeontx/otx_zip.h @@ -31,7 +31,7 @@ extern int octtx_zip_logtype_driver; /**< PCI device id of ZIP VF */ #define PCI_DEVICE_ID_OCTEONTX_ZIPVF 0xA037 -/* maxmum number of zip vf devices */ +/* maximum number of zip vf devices */ #define ZIP_MAX_VFS 8 /* max size of one chunk */ @@ -66,7 +66,7 @@ extern int octtx_zip_logtype_driver; ((_align) * (((x) + (_align) - 1) / (_align))) /**< ZIP PMD device name */ -#define COMPRESSDEV_NAME_ZIP_PMD compress_octeonx +#define COMPRESSDEV_NAME_ZIP_PMD 
compress_octeontx #define ZIP_PMD_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, \ diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c index 9b24d46e..ebb93acc 100644 --- a/drivers/compress/qat/qat_comp_pmd.c +++ b/drivers/compress/qat/qat_comp_pmd.c @@ -463,7 +463,7 @@ qat_comp_create_stream_pool(struct qat_comp_dev_private *comp_dev, } else if (info.error) { rte_mempool_obj_iter(mp, qat_comp_stream_destroy, NULL); QAT_LOG(ERR, - "Destoying mempool %s as at least one element failed initialisation", + "Destroying mempool %s as at least one element failed initialisation", stream_pool_name); rte_mempool_free(mp); mp = NULL; diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h index e5ca8669..4901a6cf 100644 --- a/drivers/crypto/bcmfs/bcmfs_device.h +++ b/drivers/crypto/bcmfs/bcmfs_device.h @@ -32,7 +32,7 @@ enum bcmfs_device_type { BCMFS_UNKNOWN }; -/* A table to store registered queue pair opertations */ +/* A table to store registered queue pair operations */ struct bcmfs_hw_queue_pair_ops_table { rte_spinlock_t tl; /* Number of used ops structs in the table.
*/ diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c index cb5ff6c6..61d457f4 100644 --- a/drivers/crypto/bcmfs/bcmfs_qp.c +++ b/drivers/crypto/bcmfs/bcmfs_qp.c @@ -212,7 +212,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr, nb_descriptors = FS_RM_MAX_REQS; if (qp_conf->iobase == NULL) { - BCMFS_LOG(ERR, "IO onfig space null"); + BCMFS_LOG(ERR, "IO config space null"); return -EINVAL; } diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h index eaefe97e..9bb8a695 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h +++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h @@ -20,11 +20,11 @@ struct bcmfs_sym_request; /** Crypto Request processing successful. */ #define BCMFS_SYM_RESPONSE_SUCCESS (0) -/** Crypot Request processing protocol failure. */ +/** Crypto Request processing protocol failure. */ #define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1) -/** Crypot Request processing completion failure. */ +/** Crypto Request processing completion failure. */ #define BCMFS_SYM_RESPONSE_COMPL_ERROR (2) -/** Crypot Request processing hash tag check error. */ +/** Crypto Request processing hash tag check error. 
*/ #define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3) /** Maximum threshold length to adjust AAD in continuation diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h index d9594246..51ff9f75 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_engine.h +++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h @@ -12,7 +12,7 @@ #include "bcmfs_sym_defs.h" #include "bcmfs_sym_req.h" -/* structure to hold element's arrtibutes */ +/* structure to hold element's attributes */ struct fsattr { void *va; uint64_t pa; diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c index 86e53051..c677c0cd 100644 --- a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c +++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c @@ -441,7 +441,7 @@ static void bcmfs5_write_doorbell(struct bcmfs_qp *qp) { struct bcmfs_queue *txq = &qp->tx_q; - /* sync in bfeore ringing the door-bell */ + /* sync in before ringing the door-bell */ rte_wmb(); FS_MMIO_WRITE32(txq->descs_inflight, diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c index 8e9cfe73..7b2c7d04 100644 --- a/drivers/crypto/caam_jr/caam_jr.c +++ b/drivers/crypto/caam_jr/caam_jr.c @@ -58,7 +58,7 @@ struct sec_outring_entry { uint32_t status; /* Status for completed descriptor */ } __rte_packed; -/* virtual address conversin when mempool support is available for ctx */ +/* virtual address conversion when mempool support is available for ctx */ static inline phys_addr_t caam_jr_vtop_ctx(struct caam_jr_op_ctx *ctx, void *vaddr) { diff --git a/drivers/crypto/caam_jr/caam_jr_hw_specific.h b/drivers/crypto/caam_jr/caam_jr_hw_specific.h index bbe8bc3f..3c5778c9 100644 ---
a/drivers/crypto/caam_jr/caam_jr_hw_specific.h +++ b/drivers/crypto/caam_jr/caam_jr_hw_specific.h @@ -376,7 +376,7 @@ struct sec_job_ring_t { void *register_base_addr; /* Base address for SEC's * register memory for this job ring. */ - uint8_t coalescing_en; /* notifies if coelescing is + uint8_t coalescing_en; /* notifies if coalescing is * enabled for the job ring */ sec_job_ring_state_t jr_state; /* The state of this job ring */ @@ -479,7 +479,7 @@ void hw_job_ring_error_print(struct sec_job_ring_t *job_ring, int code); /* @brief Set interrupt coalescing parameters on the Job Ring. * @param [in] job_ring The job ring - * @param [in] irq_coalesing_timer Interrupt coalescing timer threshold. + * @param [in] irq_coalescing_timer Interrupt coalescing timer threshold. * This value determines the maximum * amount of time after processing a * descriptor before raising an interrupt. diff --git a/drivers/crypto/caam_jr/caam_jr_pvt.h b/drivers/crypto/caam_jr/caam_jr_pvt.h index 552d6b9b..52f872bc 100644 --- a/drivers/crypto/caam_jr/caam_jr_pvt.h +++ b/drivers/crypto/caam_jr/caam_jr_pvt.h @@ -169,7 +169,7 @@ struct sec4_sg_entry { /* Structure encompassing a job descriptor which is to be processed * by SEC. User should also initialise this structure with the callback - * function pointer which will be called by driver after recieving proccessed + * function pointer which will be called by driver after receiving processed * descriptor from SEC. User data is also passed in this data structure which * will be sent as an argument to the user callback function. */ @@ -288,7 +288,7 @@ int caam_jr_enable_irqs(int uio_fd); * value that indicates an IRQ disable action into UIO file descriptor * of this job ring. 
* - * @param [in] uio_fd UIO File descripto + * @param [in] uio_fd UIO File descriptor * @retval 0 for success * @retval -1 value for error * diff --git a/drivers/crypto/caam_jr/caam_jr_uio.c b/drivers/crypto/caam_jr/caam_jr_uio.c index e4ee1023..583ba3b5 100644 --- a/drivers/crypto/caam_jr/caam_jr_uio.c +++ b/drivers/crypto/caam_jr/caam_jr_uio.c @@ -227,7 +227,7 @@ caam_jr_enable_irqs(int uio_fd) * value that indicates an IRQ disable action into UIO file descriptor * of this job ring. * - * @param [in] uio_fd UIO File descripto + * @param [in] uio_fd UIO File descriptor * @retval 0 for success * @retval -1 value for error * diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c index 70daed79..4ed91a74 100644 --- a/drivers/crypto/ccp/ccp_crypto.c +++ b/drivers/crypto/ccp/ccp_crypto.c @@ -1299,7 +1299,7 @@ ccp_auth_slot(struct ccp_session *session) case CCP_AUTH_ALGO_SHA512_HMAC: /** * 1. Load PHash1 = H(k ^ ipad); to LSB - * 2. generate IHash = H(hash on meassage with PHash1 + * 2. generate IHash = H(hash on message with PHash1 * as init values); * 3. Retrieve IHash 2 slots for 384/512 * 4. 
Load Phash2 = H(k ^ opad); to LSB diff --git a/drivers/crypto/ccp/ccp_crypto.h b/drivers/crypto/ccp/ccp_crypto.h index 8e6d03ef..d307f73e 100644 --- a/drivers/crypto/ccp/ccp_crypto.h +++ b/drivers/crypto/ccp/ccp_crypto.h @@ -70,7 +70,7 @@ /* Maximum length for digest */ #define DIGEST_LENGTH_MAX 64 -/* SHA LSB intialiazation values */ +/* SHA LSB initialization values */ #define SHA1_H0 0x67452301UL #define SHA1_H1 0xefcdab89UL diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h index 85c8fc47..2a205cd4 100644 --- a/drivers/crypto/ccp/ccp_dev.h +++ b/drivers/crypto/ccp/ccp_dev.h @@ -19,7 +19,7 @@ #include #include -/**< CCP sspecific */ +/**< CCP specific */ #define MAX_HW_QUEUES 5 #define CCP_MAX_TRNG_RETRIES 10 #define CCP_ALIGN(x, y) ((((x) + (y - 1)) / y) * y) diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h index 0d363651..a510271a 100644 --- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h +++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h @@ -65,7 +65,7 @@ struct pending_queue { uint64_t time_out; }; -struct crypto_adpter_info { +struct crypto_adapter_info { bool enabled; /**< Set if queue pair is added to crypto adapter */ struct rte_mempool *req_mp; @@ -85,7 +85,7 @@ struct cnxk_cpt_qp { /**< Metabuf info required to support operations on the queue pair */ struct roc_cpt_lmtline lmtline; /**< Lmtline information */ - struct crypto_adpter_info ca; + struct crypto_adapter_info ca; /**< Crypto adapter related info */ }; diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c index a5b05237..e5e554fd 100644 --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c @@ -448,7 +448,7 @@ build_authenc_gcm_fd(dpaa2_sec_session *sess, /* TODO we are using the first FLE entry to store Mbuf and session ctxt. * Currently we donot know which FLE has the mbuf stored. - * So while retreiving we can go back 1 FLE from the FD -ADDR + * So while retrieving we can go back 1 FLE from the FD -ADDR * to get the MBUF Addr from the previous FLE. * We can have a better approach to use the inline Mbuf */ @@ -740,7 +740,7 @@ build_authenc_fd(dpaa2_sec_session *sess, /* we are using the first FLE entry to store Mbuf. * Currently we donot know which FLE has the mbuf stored. - * So while retreiving we can go back 1 FLE from the FD -ADDR + * So while retrieving we can go back 1 FLE from the FD -ADDR * to get the MBUF Addr from the previous FLE. * We can have a better approach to use the inline Mbuf */ @@ -1009,7 +1009,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op, memset(fle, 0, FLE_POOL_BUF_SIZE); /* TODO we are using the first FLE entry to store Mbuf. * Currently we donot know which FLE has the mbuf stored. - * So while retreiving we can go back 1 FLE from the FD -ADDR + * So while retrieving we can go back 1 FLE from the FD -ADDR * to get the MBUF Addr from the previous FLE.
* We can have a better approach to use the inline Mbuf */ @@ -1262,7 +1262,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op, memset(fle, 0, FLE_POOL_BUF_SIZE); /* TODO we are using the first FLE entry to store Mbuf. * Currently we donot know which FLE has the mbuf stored. - * So while retreiving we can go back 1 FLE from the FD -ADDR + * So while retrieving we can go back 1 FLE from the FD -ADDR * to get the MBUF Addr from the previous FLE. * We can have a better approach to use the inline Mbuf */ @@ -1568,7 +1568,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd) /* we are using the first FLE entry to store Mbuf. * Currently we donot know which FLE has the mbuf stored. - * So while retreiving we can go back 1 FLE from the FD -ADDR + * So while retrieving we can go back 1 FLE from the FD -ADDR * to get the MBUF Addr from the previous FLE.
* We can have a better approach to use the inline Mbuf */ @@ -1580,7 +1580,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd) } op = (struct rte_crypto_op *)DPAA2_GET_FLE_ADDR((fle - 1)); - /* Prefeth op */ + /* Prefetch op */ src = op->sym->m_src; rte_prefetch0(src); diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c index a552e645..0d500919 100644 --- a/drivers/crypto/dpaa_sec/dpaa_sec.c +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c @@ -723,7 +723,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops) } ops[pkts++] = op; - /* report op status to sym->op and then free the ctx memeory */ + /* report op status to sym->op and then free the ctx memory */ rte_mempool_put(ctx->ctx_pool, (void *)ctx); qman_dqrr_consume(fq, dq); diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c index 20b28833..27604459 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.c +++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.c @@ -296,7 +296,7 @@ cpt_vq_init(struct cpt_vf *cptvf, uint8_t group) /* CPT VF device initialization */ otx_cpt_vfvq_init(cptvf); - /* Send msg to PF to assign currnet Q to required group */ + /* Send msg to PF to assign current Q to required group */ cptvf->vfgrp
= group; err = otx_cpt_send_vf_grp_msg(cptvf, group); if (err) { diff --git a/drivers/crypto/octeontx/otx_cryptodev_mbox.h b/drivers/crypto/octeontx/otx_cryptodev_mbox.h index 508f3afd..c1eedc1b 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_mbox.h +++ b/drivers/crypto/octeontx/otx_cryptodev_mbox.h @@ -70,7 +70,7 @@ void otx_cpt_handle_mbox_intr(struct cpt_vf *cptvf); /* - * Checks if VF is able to comminicate with PF + * Checks if VF is able to communicate with PF * and also gets the CPT number this VF is associated to. */ int diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c index 9e8fd495..f7ca8a8a 100644 --- a/drivers/crypto/octeontx/otx_cryptodev_ops.c +++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c @@ -558,7 +558,7 @@ otx_cpt_enq_single_sym(struct cpt_instance *instance, &mdata, (void **)&prep_req); if (unlikely(ret)) { - CPT_LOG_DP_ERR("prep cryto req : op %p, cpt_op 0x%x " + CPT_LOG_DP_ERR("prep crypto req : op %p, cpt_op 0x%x " "ret 0x%x", op, (unsigned int)cpt_op, ret); return NULL; } diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c index f8935080..09d8761c 100644 --- a/drivers/crypto/qat/qat_asym.c +++ b/drivers/crypto/qat/qat_asym.c @@ -109,7 +109,7 @@ static void qat_clear_arrays_by_alg(struct qat_asym_op_cookie *cookie, static int qat_asym_check_nonzero(rte_crypto_param n) { if (n.length < 8) { - /* Not a case for any cryptograpic function except for DH + /* Not a case for any cryptographic function except for DH * generator which very often can be of one byte length */ size_t i; diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 93b25752..00ec7037 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -419,7 +419,7 @@ qat_sym_build_request(void *in_op, uint8_t *out_msg, ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC) { /* In case of AES-CCM this may point to user selected - * memory or iv offset in cypto_op + * memory or iv 
offset in crypto_op */ uint8_t *aad_data = op->sym->aead.aad.data; /* This is true AAD length, it not includes 18 bytes of diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 6ebc1767..21afb90c 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -142,7 +142,7 @@ unsigned int qat_sym_session_get_private_size(struct rte_cryptodev *dev); void -qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session, +qat_sym_session_init_common_hdr(struct qat_sym_session *session, struct icp_qat_fw_comn_req_hdr *header, enum qat_sym_proto_flag proto_flags); int diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c index ed648667..ce23e38b 100644 --- a/drivers/crypto/virtio/virtio_cryptodev.c +++ b/drivers/crypto/virtio/virtio_cryptodev.c @@ -862,7 +862,7 @@ virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev) VIRTIO_CRYPTO_INIT_LOG_DBG("queue_pairs[%d]=%p", i, dev->data->queue_pairs[i]); - virtqueue_detatch_unused(dev->data->queue_pairs[i]); + virtqueue_detach_unused(dev->data->queue_pairs[i]); VIRTIO_CRYPTO_INIT_LOG_DBG("After freeing dataq[%d] used and " "unused buf", i); @@ -1205,7 +1205,7 @@ virtio_crypto_sym_pad_auth_param( static int virtio_crypto_sym_pad_op_ctrl_req( struct virtio_crypto_op_ctrl_req *ctrl, - struct rte_crypto_sym_xform *xform, bool is_chainned, + struct rte_crypto_sym_xform *xform, bool is_chained, uint8_t *cipher_key_data, uint8_t *auth_key_data, struct virtio_crypto_session *session) { @@ -1228,7 +1228,7 @@ virtio_crypto_sym_pad_op_ctrl_req( VIRTIO_CRYPTO_MAX_IV_SIZE); return -1; } - if (is_chainned) + if (is_chained) ret = virtio_crypto_sym_pad_cipher_param( &ctrl->u.sym_create_session.u.chain.para .cipher_param, cipher_xform); diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c index fd8be581..33985d1d 100644 --- a/drivers/crypto/virtio/virtqueue.c +++ b/drivers/crypto/virtio/virtqueue.c 
@@ -22,7 +22,7 @@ virtqueue_disable_intr(struct virtqueue *vq) } void -virtqueue_detatch_unused(struct virtqueue *vq) +virtqueue_detach_unused(struct virtqueue *vq) { struct rte_crypto_op *cop = NULL; diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h index bf10c657..1a67b408 100644 --- a/drivers/crypto/virtio/virtqueue.h +++ b/drivers/crypto/virtio/virtqueue.h @@ -99,7 +99,7 @@ void virtqueue_disable_intr(struct virtqueue *vq); /** * Get all mbufs to be freed. */ -void virtqueue_detatch_unused(struct virtqueue *vq); +void virtqueue_detach_unused(struct virtqueue *vq); static inline int virtqueue_full(const struct virtqueue *vq) @@ -145,7 +145,7 @@ virtqueue_notify(struct virtqueue *vq) { /* * Ensure updated avail->idx is visible to host. - * For virtio on IA, the notificaiton is through io port operation + * For virtio on IA, the notification is through io port operation * which is a serialization instruction itself. */ VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); diff --git a/drivers/dma/skeleton/skeleton_dmadev.c b/drivers/dma/skeleton/skeleton_dmadev.c index d9e4f731..81cbdd28 100644 --- a/drivers/dma/skeleton/skeleton_dmadev.c +++ b/drivers/dma/skeleton/skeleton_dmadev.c @@ -169,7 +169,7 @@ vchan_setup(struct skeldma_hw *hw, uint16_t nb_desc) struct rte_ring *completed; uint16_t i; - desc = rte_zmalloc_socket("dma_skelteon_desc", + desc = rte_zmalloc_socket("dma_skeleton_desc", nb_desc * sizeof(struct skeldma_desc), RTE_CACHE_LINE_SIZE, hw->socket_id); if (desc == NULL) { diff --git a/drivers/event/cnxk/cnxk_eventdev_selftest.c b/drivers/event/cnxk/cnxk_eventdev_selftest.c index 69c15b1d..2fe6467f 100644 --- a/drivers/event/cnxk/cnxk_eventdev_selftest.c +++ b/drivers/event/cnxk/cnxk_eventdev_selftest.c @@ -140,7 +140,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; int i, ret; - /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create( pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id()); if (!eventdev_test_mempool) { @@ -1543,7 +1543,7 @@ cnxk_sso_selftest(const char *dev_name) cn9k_sso_set_rsrc(dev); if (cnxk_sso_testsuite_run(dev_name)) return rc; - /* Verift dual ws mode. */ + /* Verify dual ws mode. */ printf("Verifying CN9K Dual workslot mode\n"); dev->dual_ws = 1; cn9k_sso_set_rsrc(dev); diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c index 16e9764d..d75f12e3 100644 --- a/drivers/event/dlb2/dlb2.c +++ b/drivers/event/dlb2/dlb2.c @@ -2145,7 +2145,7 @@ dlb2_event_queue_detach_ldb(struct dlb2_eventdev *dlb2, } /* This is expected with eventdev API! - * It blindly attemmpts to unmap all queues. + * It blindly attempts to unmap all queues.
*/ if (i == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) { DLB2_LOG_DBG("dlb2: ignoring LB QID %d not mapped for qm_port %d.\n", diff --git a/drivers/event/dlb2/dlb2_priv.h b/drivers/event/dlb2/dlb2_priv.h index a5e2f8e4..7837ae87 100644 --- a/drivers/event/dlb2/dlb2_priv.h +++ b/drivers/event/dlb2/dlb2_priv.h @@ -519,7 +519,7 @@ struct dlb2_eventdev_port { bool setup_done; /* enq_configured is set when the qm port is created */ bool enq_configured; - uint8_t implicit_release; /* release events before dequeueing */ + uint8_t implicit_release; /* release events before dequeuing */ } __rte_cache_aligned; struct dlb2_queue { diff --git a/drivers/event/dlb2/dlb2_selftest.c b/drivers/event/dlb2/dlb2_selftest.c index 2113bc2c..1863ffe0 100644 --- a/drivers/event/dlb2/dlb2_selftest.c +++ b/drivers/event/dlb2/dlb2_selftest.c @@ -223,7 +223,7 @@ test_stop_flush(struct test *t) /* test to check we can properly flush events */ 0, RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &dequeue_depth)) { - printf("%d: Error retrieveing dequeue depth\n", __LINE__); + printf("%d: Error retrieving dequeue depth\n", __LINE__); goto err; } diff --git a/drivers/event/dlb2/rte_pmd_dlb2.h b/drivers/event/dlb2/rte_pmd_dlb2.h index 74399db0..1dbd885a 100644 --- a/drivers/event/dlb2/rte_pmd_dlb2.h +++ b/drivers/event/dlb2/rte_pmd_dlb2.h @@ -24,7 +24,7 @@ extern "C" { * Selects the token pop mode for a DLB2 port. */ enum dlb2_token_pop_mode { - /* Pop the CQ tokens immediately after dequeueing. */ + /* Pop the CQ tokens immediately after dequeuing. */ AUTO_POP, /* Pop CQ tokens after (dequeue_depth - 1) events are released. * Supported on load-balanced ports only. 
diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c index bbbd2095..b549bdfc 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c +++ b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c @@ -118,7 +118,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; const char *pool_name = "evdev_dpaa2_test_pool"; - /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS, 0 /*MBUF_CACHE_SIZE*/, diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h index e64ae26f..c907c00c 100644 --- a/drivers/event/dsw/dsw_evdev.h +++ b/drivers/event/dsw/dsw_evdev.h @@ -24,7 +24,7 @@ /* Multiple 24-bit flow ids will map to the same DSW-level flow. The * number of DSW flows should be high enough make it unlikely that * flow ids of several large flows hash to the same DSW-level flow. - * Such collisions will limit parallism and thus the number of cores + * Such collisions will limit parallelism and thus the number of cores * that may be utilized. However, configuring a large number of DSW * flows might potentially, depending on traffic and actual * application flow id value range, result in each such DSW-level flow @@ -104,7 +104,7 @@ /* Only one outstanding migration per port is allowed */ #define DSW_MAX_PAUSED_FLOWS (DSW_MAX_PORTS*DSW_MAX_FLOWS_PER_MIGRATION) -/* Enough room for paus request/confirm and unpaus request/confirm for +/* Enough room for pause request/confirm and unpause request/confirm for * all possible senders.
*/ #define DSW_CTL_IN_RING_SIZE ((DSW_MAX_PORTS-1)*4) diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c index c6ed4702..e209cd5b 100644 --- a/drivers/event/dsw/dsw_event.c +++ b/drivers/event/dsw/dsw_event.c @@ -1096,7 +1096,7 @@ dsw_port_ctl_process(struct dsw_evdev *dsw, struct dsw_port *port) static void dsw_port_note_op(struct dsw_port *port, uint16_t num_events) { - /* To pull the control ring reasonbly often on busy ports, + /* To pull the control ring reasonably often on busy ports, * each dequeued/enqueued event is considered an 'op' too. */ port->ops_since_bg_task += (num_events+1); @@ -1180,7 +1180,7 @@ dsw_event_enqueue_burst_generic(struct dsw_port *source_port, * addition, a port cannot be left "unattended" (e.g. unused) * for long periods of time, since that would stall * migration. Eventdev API extensions to provide a cleaner way - * to archieve both of these functions should be + * to achieve both of these functions should be * considered. */ if (unlikely(events_len == 0)) { diff --git a/drivers/event/octeontx/ssovf_evdev.h b/drivers/event/octeontx/ssovf_evdev.h index bb1056a9..e46dc055 100644 --- a/drivers/event/octeontx/ssovf_evdev.h +++ b/drivers/event/octeontx/ssovf_evdev.h @@ -88,7 +88,7 @@ /* * In Cavium OCTEON TX SoC, all accesses to the device registers are - * implictly strongly ordered. So, The relaxed version of IO operation is + * implicitly strongly ordered. So, the relaxed version of IO operation is * safe to use with out any IO memory barriers.
*/ #define ssovf_read64 rte_read64_relaxed diff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c index d7b0d221..b5552363 100644 --- a/drivers/event/octeontx/ssovf_evdev_selftest.c +++ b/drivers/event/octeontx/ssovf_evdev_selftest.c @@ -151,7 +151,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; const char *pool_name = "evdev_octeontx_test_pool"; - /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS, 0 /*MBUF_CACHE_SIZE*/, diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c index 48bfaf89..a89637d6 100644 --- a/drivers/event/octeontx2/otx2_evdev_selftest.c +++ b/drivers/event/octeontx2/otx2_evdev_selftest.c @@ -139,7 +139,7 @@ _eventdev_setup(int mode) struct rte_event_dev_info info; int i, ret; - /* Create and destrory pool for each test case to make it standalone */ + /* Create and destroy pool for each test case to make it standalone */ eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS, 0, 0, 512, rte_socket_id()); diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index 6da8b14b..440b713a 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -145,7 +145,7 @@ tim_err_desc(int rc) { switch (rc) { case TIM_AF_NO_RINGS_LEFT: - otx2_err("Unable to allocat new TIM ring."); + otx2_err("Unable to allocate new TIM ring."); break; case TIM_AF_INVALID_NPA_PF_FUNC: otx2_err("Invalid NPA pf func."); @@ -189,7 +189,7 @@ tim_err_desc(int rc) case TIM_AF_INVALID_ENABLE_DONTFREE: otx2_err("Invalid Don't free value."); break; - case TIM_AF_ENA_DONTFRE_NSET_PERIODIC: + case TIM_AF_ENA_DONTFREE_NSET_PERIODIC: otx2_err("Don't free bit not set when periodic is enabled."); break; case 
TIM_AF_RING_ALREADY_DISABLED: diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 36ae4dd8..ca06d51c 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -74,7 +74,7 @@ otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, event.flow_id, flags, lookup_mem); /* Extracting tstamp, if PTP enabled. CGX will prepend * the timestamp at starting of packet data and it can - * be derieved from WQE 9 dword which corresponds to SG + * be derived from WQE 9 dword which corresponds to SG * iova. * rte_pktmbuf_mtod_offset can be used for this purpose * but it brings down the performance as it reads diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c index 15c10240..8b6890b2 100644 --- a/drivers/event/opdl/opdl_evdev.c +++ b/drivers/event/opdl/opdl_evdev.c @@ -703,7 +703,7 @@ opdl_probe(struct rte_vdev_device *vdev) } PMD_DRV_LOG(INFO, "DEV_ID:[%02d] : " - "Success - creating eventdev device %s, numa_node:[%d], do_valdation:[%s]" + "Success - creating eventdev device %s, numa_node:[%d], do_validation:[%s]" " , self_test:[%s]\n", dev->data->dev_id, name, diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c index e4fc70a4..24b92df4 100644 --- a/drivers/event/opdl/opdl_test.c +++ b/drivers/event/opdl/opdl_test.c @@ -864,7 +864,7 @@ qid_basic(struct test *t) } - /* Start the devicea */ + /* Start the device */ if (!err) { if (rte_event_dev_start(evdev) < 0) { PMD_DRV_LOG(ERR, "%s:%d: Error with start call\n", diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h index 33645bd1..4fd10544 100644 --- a/drivers/event/sw/sw_evdev.h +++ b/drivers/event/sw/sw_evdev.h @@ -180,7 +180,7 @@ struct sw_port { uint16_t outstanding_releases __rte_cache_aligned; uint16_t inflight_max; /* app requested max inflights for this port */ uint16_t inflight_credits; /* num credits this port has right now */ - uint8_t 
implicit_release; /* release events before dequeueing */ + uint8_t implicit_release; /* release events before dequeuing */ uint16_t last_dequeue_burst_sz; /* how big the burst was */ uint64_t last_dequeue_ticks; /* used to track burst processing time */ diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c index 9768d3a0..cb97a4d6 100644 --- a/drivers/event/sw/sw_evdev_selftest.c +++ b/drivers/event/sw/sw_evdev_selftest.c @@ -1109,7 +1109,7 @@ xstats_tests(struct test *t) NULL, 0); - /* Verify that the resetable stats are reset, and others are not */ + /* Verify that the resettable stats are reset, and others are not */ static const uint64_t queue_expected_zero[] = { 0 /* rx */, 0 /* tx */, diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c index f17aff96..32639a3b 100644 --- a/drivers/mempool/dpaa/dpaa_mempool.c +++ b/drivers/mempool/dpaa/dpaa_mempool.c @@ -258,7 +258,7 @@ dpaa_mbuf_alloc_bulk(struct rte_mempool *pool, } /* assigning mbuf from the acquired objects */ for (i = 0; (i < ret) && bufs[i].addr; i++) { - /* TODO-errata - objerved that bufs may be null + /* TODO-errata - observed that bufs may be null * i.e. first buffer is valid, remaining 6 buffers * may be null. */ diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c index 94dc5cd8..8fd9edce 100644 --- a/drivers/mempool/octeontx/octeontx_fpavf.c +++ b/drivers/mempool/octeontx/octeontx_fpavf.c @@ -669,7 +669,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id) break; } - /* Imsert it into an ordered linked list */ + /* Insert it into an ordered linked list */ for (curr = &head; curr[0] != NULL; curr = curr[0]) { if ((uintptr_t)node <= (uintptr_t)curr[0]) break; @@ -705,7 +705,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id) ret = octeontx_fpapf_aura_detach(gpool); if (ret) { - fpavf_log_err("Failed to dettach gaura %u. 
error code=%d\n", + fpavf_log_err("Failed to detach gaura %u. error code=%d\n", gpool, ret); } diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c index b618cba3..1d4b5e1c 100644 --- a/drivers/net/ark/ark_ethdev.c +++ b/drivers/net/ark/ark_ethdev.c @@ -309,7 +309,7 @@ eth_ark_dev_init(struct rte_eth_dev *dev) return -1; } if (ark->sysctrl.t32[3] != 0) { - if (ark_rqp_lasped(ark->rqpacing)) { + if (ark_rqp_lapsed(ark->rqpacing)) { ARK_PMD_LOG(ERR, "Arkville Evaluation System - " "Timer has Expired\n"); return -1; @@ -565,7 +565,7 @@ eth_ark_dev_start(struct rte_eth_dev *dev) if (ark->start_pg && (dev->data->port_id == 0)) { pthread_t thread; - /* Delay packet generatpr start allow the hardware to be ready + /* Delay packet generator start allow the hardware to be ready * This is only used for sanity checking with internal generator */ if (rte_ctrl_thread_create(&thread, "ark-delay-pg", NULL, diff --git a/drivers/net/ark/ark_global.h b/drivers/net/ark/ark_global.h index 6f9b3013..49193ac5 100644 --- a/drivers/net/ark/ark_global.h +++ b/drivers/net/ark/ark_global.h @@ -67,7 +67,7 @@ typedef void (*rx_user_meta_hook_fn)(struct rte_mbuf *mbuf, const uint32_t *meta, void *ext_user_data); -/* TX hook poplulate *meta, with up to 20 bytes. meta_cnt +/* TX hook populate *meta, with up to 20 bytes. 
meta_cnt * returns the number of uint32_t words populated, 0 to 5 */ typedef void (*tx_user_meta_hook_fn)(const struct rte_mbuf *mbuf, diff --git a/drivers/net/ark/ark_rqp.c b/drivers/net/ark/ark_rqp.c index ef9ccd07..1193a462 100644 --- a/drivers/net/ark/ark_rqp.c +++ b/drivers/net/ark/ark_rqp.c @@ -62,7 +62,7 @@ ark_rqp_dump(struct ark_rqpace_t *rqp) } int -ark_rqp_lasped(struct ark_rqpace_t *rqp) +ark_rqp_lapsed(struct ark_rqpace_t *rqp) { - return rqp->lasped; + return rqp->lapsed; } diff --git a/drivers/net/ark/ark_rqp.h b/drivers/net/ark/ark_rqp.h index 6c804606..fc9c5b57 100644 --- a/drivers/net/ark/ark_rqp.h +++ b/drivers/net/ark/ark_rqp.h @@ -48,10 +48,10 @@ struct ark_rqpace_t { volatile uint32_t cpld_pending_max; volatile uint32_t err_count_other; char eval[4]; - volatile int lasped; + volatile int lapsed; }; void ark_rqp_dump(struct ark_rqpace_t *rqp); void ark_rqp_stats_reset(struct ark_rqpace_t *rqp); -int ark_rqp_lasped(struct ark_rqpace_t *rqp); +int ark_rqp_lapsed(struct ark_rqpace_t *rqp); #endif diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c index 1c03e8bf..3a028f42 100644 --- a/drivers/net/atlantic/atl_ethdev.c +++ b/drivers/net/atlantic/atl_ethdev.c @@ -1423,7 +1423,7 @@ atl_dev_interrupt_action(struct rte_eth_dev *dev, * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c index e3f57ded..aeb79bf5 100644 --- a/drivers/net/atlantic/atl_rxtx.c +++ b/drivers/net/atlantic/atl_rxtx.c @@ -1094,7 +1094,7 @@ atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) * register. 
* Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold); diff --git a/drivers/net/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/atlantic/hw_atl/hw_atl_b0.c index 7d0e7240..d0eb4af9 100644 --- a/drivers/net/atlantic/hw_atl/hw_atl_b0.c +++ b/drivers/net/atlantic/hw_atl/hw_atl_b0.c @@ -281,7 +281,7 @@ int hw_atl_b0_hw_init_rx_path(struct aq_hw_s *self) hw_atl_rpf_vlan_outer_etht_set(self, 0x88A8U); hw_atl_rpf_vlan_inner_etht_set(self, 0x8100U); - /* VLAN proimisc bu defauld */ + /* VLAN promisc by default */ hw_atl_rpf_vlan_prom_mode_en_set(self, 1); /* Rx Interrupts */ diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c index daeb3308..2226b4f6 100644 --- a/drivers/net/axgbe/axgbe_dev.c +++ b/drivers/net/axgbe/axgbe_dev.c @@ -1046,7 +1046,7 @@ static int axgbe_config_rx_threshold(struct axgbe_port *pdata, return 0; } -/*Distrubting fifo size */ +/*Distributing fifo size */ static void axgbe_config_rx_fifo_size(struct axgbe_port *pdata) { unsigned int fifo_size; diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c index 7d40c18a..76e12d12 100644 --- a/drivers/net/axgbe/axgbe_ethdev.c +++ b/drivers/net/axgbe/axgbe_ethdev.c @@ -284,7 +284,7 @@ static int axgbe_phy_reset(struct axgbe_port *pdata) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. 
* * @return * void diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h index a207f2ae..e06d40f9 100644 --- a/drivers/net/axgbe/axgbe_ethdev.h +++ b/drivers/net/axgbe/axgbe_ethdev.h @@ -641,7 +641,7 @@ struct axgbe_port { unsigned int kr_redrv; - /* Auto-negotiation atate machine support */ + /* Auto-negotiation state machine support */ unsigned int an_int; unsigned int an_status; enum axgbe_an an_result; diff --git a/drivers/net/axgbe/axgbe_phy_impl.c b/drivers/net/axgbe/axgbe_phy_impl.c index 02236ec1..c114550a 100644 --- a/drivers/net/axgbe/axgbe_phy_impl.c +++ b/drivers/net/axgbe/axgbe_phy_impl.c @@ -347,7 +347,7 @@ static int axgbe_phy_i2c_read(struct axgbe_port *pdata, unsigned int target, retry = 1; again2: - /* Read the specfied register */ + /* Read the specified register */ i2c_op.cmd = AXGBE_I2C_CMD_READ; i2c_op.target = target; i2c_op.len = val_len; @@ -1093,7 +1093,7 @@ static int axgbe_phy_an_config(struct axgbe_port *pdata __rte_unused) { return 0; /* Dummy API since there is no case to support - * external phy devices registred through kerenl apis + * external phy devices registered through kernel apis */ } diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c index 816371cd..c1f7a3d0 100644 --- a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c +++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c @@ -11,7 +11,7 @@ #include #include -/* Useful to avoid shifting for every descriptor prepration*/ +/* Useful to avoid shifting for every descriptor preparation*/ #define TX_DESC_CTRL_FLAGS 0xb000000000000000 #define TX_DESC_CTRL_FLAG_TMST 0x40000000 #define TX_FREE_BULK 8 diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c index f67db015..9179fdc8 100644 --- a/drivers/net/bnx2x/bnx2x.c +++ b/drivers/net/bnx2x/bnx2x.c @@ -926,7 +926,7 @@ storm_memset_eq_prod(struct bnx2x_softc *sc, uint16_t eq_prod, uint16_t pfid) * block.
* * RAMROD_CMD_ID_ETH_UPDATE - * Used to update the state of the leading connection, usually to udpate + * Used to update the state of the leading connection, usually to update * the RSS indirection table. Completes on the RCQ of the leading * connection. (Not currently used under FreeBSD until OS support becomes * available.) @@ -941,7 +941,7 @@ storm_memset_eq_prod(struct bnx2x_softc *sc, uint16_t eq_prod, uint16_t pfid) * the RCQ of the leading connection. * * RAMROD_CMD_ID_ETH_CFC_DEL - * Used when tearing down a conneciton prior to driver unload. Completes + * Used when tearing down a connection prior to driver unload. Completes * on the RCQ of the leading connection (since the current connection * has been completely removed from controller memory). * @@ -1072,7 +1072,7 @@ bnx2x_sp_post(struct bnx2x_softc *sc, int command, int cid, uint32_t data_hi, /* * It's ok if the actual decrement is issued towards the memory - * somewhere between the lock and unlock. Thus no more explict + * somewhere between the lock and unlock. Thus no more explicit * memory barrier is needed. */ if (common) { @@ -1190,7 +1190,7 @@ bnx2x_sp_event(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, break; case (RAMROD_CMD_ID_ETH_TERMINATE): - PMD_DRV_LOG(DEBUG, sc, "got MULTI[%d] teminate ramrod", cid); + PMD_DRV_LOG(DEBUG, sc, "got MULTI[%d] terminate ramrod", cid); drv_cmd = ECORE_Q_CMD_TERMINATE; break; @@ -1476,7 +1476,7 @@ bnx2x_fill_accept_flags(struct bnx2x_softc *sc, uint32_t rx_mode, case BNX2X_RX_MODE_ALLMULTI_PROMISC: case BNX2X_RX_MODE_PROMISC: /* - * According to deffinition of SI mode, iface in promisc mode + * According to definition of SI mode, iface in promisc mode * should receive matched and unmatched (in resolution of port) * unicast packets. */ @@ -1944,7 +1944,7 @@ static void bnx2x_disable_close_the_gate(struct bnx2x_softc *sc) /* * Cleans the object that have internal lists without sending - * ramrods. Should be run when interrutps are disabled. + * ramrods. 
Should be run when interrupts are disabled. */ static void bnx2x_squeeze_objects(struct bnx2x_softc *sc) { @@ -2043,7 +2043,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link /* * Nothing to do during unload if previous bnx2x_nic_load() - * did not completed successfully - all resourses are released. + * did not complete successfully - all resources are released. */ if ((sc->state == BNX2X_STATE_CLOSED) || (sc->state == BNX2X_STATE_ERROR)) { return 0; @@ -2084,7 +2084,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link /* * Prevent transactions to host from the functions on the * engine that doesn't reset global blocks in case of global - * attention once gloabl blocks are reset and gates are opened + * attention once global blocks are reset and gates are opened * (the engine which leader will perform the recovery * last). */ @@ -2101,7 +2101,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link /* * At this stage no more interrupts will arrive so we may safely clean - * the queue'able objects here in case they failed to get cleaned so far. + * the queueable objects here in case they failed to get cleaned so far. */ if (IS_PF(sc)) { bnx2x_squeeze_objects(sc); @@ -2151,7 +2151,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link } /* - * Encapsulte an mbuf cluster into the tx bd chain and makes the memory + * Encapsulate an mbuf cluster into the tx bd chain and makes the memory * visible to the controller.
* * If an mbuf is submitted to this routine and cannot be given to the @@ -2719,7 +2719,7 @@ static uint8_t bnx2x_clear_pf_load(struct bnx2x_softc *sc) return val1 != 0; } -/* send load requrest to mcp and analyze response */ +/* send load request to mcp and analyze response */ static int bnx2x_nic_load_request(struct bnx2x_softc *sc, uint32_t * load_code) { PMD_INIT_FUNC_TRACE(sc); @@ -4031,17 +4031,17 @@ static void bnx2x_attn_int_deasserted2(struct bnx2x_softc *sc, uint32_t attn) } } - if (attn & HW_INTERRUT_ASSERT_SET_2) { + if (attn & HW_INTERRUPT_ASSERT_SET_2) { reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_2 : MISC_REG_AEU_ENABLE1_FUNC_0_OUT_2); val = REG_RD(sc, reg_offset); - val &= ~(attn & HW_INTERRUT_ASSERT_SET_2); + val &= ~(attn & HW_INTERRUPT_ASSERT_SET_2); REG_WR(sc, reg_offset, val); PMD_DRV_LOG(ERR, sc, "FATAL HW block attention set2 0x%x", - (uint32_t) (attn & HW_INTERRUT_ASSERT_SET_2)); + (uint32_t) (attn & HW_INTERRUPT_ASSERT_SET_2)); rte_panic("HW block attention set2"); } } @@ -4061,17 +4061,17 @@ static void bnx2x_attn_int_deasserted1(struct bnx2x_softc *sc, uint32_t attn) } } - if (attn & HW_INTERRUT_ASSERT_SET_1) { + if (attn & HW_INTERRUPT_ASSERT_SET_1) { reg_offset = (port ? 
MISC_REG_AEU_ENABLE1_FUNC_1_OUT_1 : MISC_REG_AEU_ENABLE1_FUNC_0_OUT_1); val = REG_RD(sc, reg_offset); - val &= ~(attn & HW_INTERRUT_ASSERT_SET_1); + val &= ~(attn & HW_INTERRUPT_ASSERT_SET_1); REG_WR(sc, reg_offset, val); PMD_DRV_LOG(ERR, sc, "FATAL HW block attention set1 0x%08x", - (uint32_t) (attn & HW_INTERRUT_ASSERT_SET_1)); + (uint32_t) (attn & HW_INTERRUPT_ASSERT_SET_1)); rte_panic("HW block attention set1"); } } @@ -4103,13 +4103,13 @@ static void bnx2x_attn_int_deasserted0(struct bnx2x_softc *sc, uint32_t attn) bnx2x_release_phy_lock(sc); } - if (attn & HW_INTERRUT_ASSERT_SET_0) { + if (attn & HW_INTERRUPT_ASSERT_SET_0) { val = REG_RD(sc, reg_offset); - val &= ~(attn & HW_INTERRUT_ASSERT_SET_0); + val &= ~(attn & HW_INTERRUPT_ASSERT_SET_0); REG_WR(sc, reg_offset, val); rte_panic("FATAL HW block attention set0 0x%lx", - (attn & (unsigned long)HW_INTERRUT_ASSERT_SET_0)); + (attn & (unsigned long)HW_INTERRUPT_ASSERT_SET_0)); } } @@ -5325,7 +5325,7 @@ static void bnx2x_func_init(struct bnx2x_softc *sc, struct bnx2x_func_init_param * sum of vn_min_rates. * or * 0 - if all the min_rates are 0. - * In the later case fainess algorithm should be deactivated. + * In the latter case fairness algorithm should be deactivated. * If all min rates are not zero then those that are zeroes will be set to 1. */ static void bnx2x_calc_vn_min(struct bnx2x_softc *sc, struct cmng_init_input *input) @@ -6564,7 +6564,7 @@ bnx2x_pf_tx_q_prep(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, txq_init->fw_sb_id = fp->fw_sb_id; /* - * set the TSS leading client id for TX classfication to the + * set the TSS leading client id for TX classification to the * leading RSS client id */ txq_init->tss_leading_cl_id = BNX2X_FP(sc, 0, cl_id); @@ -7634,8 +7634,8 @@ static uint8_t bnx2x_is_pcie_pending(struct bnx2x_softc *sc) } /* -* Walk the PCI capabiites list for the device to find what features are -* supported.
These capabilites may be enabled/disabled by firmware so it's +* Walk the PCI capabilities list for the device to find what features are +* supported. These capabilities may be enabled/disabled by firmware so it's * best to walk the list rather than make assumptions. */ static void bnx2x_probe_pci_caps(struct bnx2x_softc *sc) @@ -8425,7 +8425,7 @@ static int bnx2x_get_device_info(struct bnx2x_softc *sc) } else { sc->devinfo.int_block = INT_BLOCK_IGU; -/* do not allow device reset during IGU info preocessing */ +/* do not allow device reset during IGU info processing */ bnx2x_acquire_hw_lock(sc, HW_LOCK_RESOURCE_RESET); val = REG_RD(sc, IGU_REG_BLOCK_CONFIGURATION); @@ -9765,7 +9765,7 @@ int bnx2x_attach(struct bnx2x_softc *sc) sc->igu_base_addr = IS_VF(sc) ? PXP_VF_ADDR_IGU_START : BAR_IGU_INTMEM; - /* get PCI capabilites */ + /* get PCI capabilities */ bnx2x_probe_pci_caps(sc); if (sc->devinfo.pcie_msix_cap_reg != 0) { @@ -10284,7 +10284,7 @@ static int bnx2x_init_hw_common(struct bnx2x_softc *sc) * stay set) * f. If this is VNIC 3 of a port then also init * first_timers_ilt_entry to zero and last_timers_ilt_entry - * to the last enrty in the ILT. + * to the last entry in the ILT. * * Notes: * Currently the PF error in the PGLC is non recoverable. @@ -11090,7 +11090,7 @@ static void bnx2x_hw_enable_status(struct bnx2x_softc *sc) /** * bnx2x_pf_flr_clnup * a. re-enable target read on the PF - * b. poll cfc per function usgae counter + * b. poll cfc per function usage counter * c. poll the qm perfunction usage counter * d. poll the tm per function usage counter * e. poll the tm per function scan-done indication diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h index 80d19cbf..3f7d82c0 100644 --- a/drivers/net/bnx2x/bnx2x.h +++ b/drivers/net/bnx2x/bnx2x.h @@ -681,13 +681,13 @@ struct bnx2x_slowpath { }; /* struct bnx2x_slowpath */ /* - * Port specifc data structure. + * Port specific data structure. 
*/ struct bnx2x_port { /* * Port Management Function (for 57711E only). * When this field is set the driver instance is - * responsible for managing port specifc + * responsible for managing port specific * configurations such as handling link attentions. */ uint32_t pmf; @@ -732,7 +732,7 @@ struct bnx2x_port { /* * MCP scratchpad address for port specific statistics. - * The device is responsible for writing statistcss + * The device is responsible for writing statistics * back to the MCP for use with management firmware such * as UMP/NC-SI. */ @@ -937,8 +937,8 @@ struct bnx2x_devinfo { * already registered for this port (which means that the user wants storage * services). * 2. During cnic-related load, to know if offload mode is already configured - * in the HW or needs to be configrued. Since the transition from nic-mode to - * offload-mode in HW causes traffic coruption, nic-mode is configured only + * in the HW or needs to be configured. Since the transition from nic-mode to + * offload-mode in HW causes traffic corruption, nic-mode is configured only * in ports on which storage services where never requested.
*/ #define CONFIGURE_NIC_MODE(sc) (!CHIP_IS_E1x(sc) && !CNIC_ENABLED(sc)) @@ -1709,7 +1709,7 @@ static const uint32_t dmae_reg_go_c[] = { GENERAL_ATTEN_OFFSET(LATCHED_ATTN_RBCP) | \ GENERAL_ATTEN_OFFSET(LATCHED_ATTN_RSVD_GRC)) -#define HW_INTERRUT_ASSERT_SET_0 \ +#define HW_INTERRUPT_ASSERT_SET_0 \ (AEU_INPUTS_ATTN_BITS_TSDM_HW_INTERRUPT | \ AEU_INPUTS_ATTN_BITS_TCM_HW_INTERRUPT | \ AEU_INPUTS_ATTN_BITS_TSEMI_HW_INTERRUPT | \ @@ -1722,7 +1722,7 @@ static const uint32_t dmae_reg_go_c[] = { AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR |\ AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR |\ AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR) -#define HW_INTERRUT_ASSERT_SET_1 \ +#define HW_INTERRUPT_ASSERT_SET_1 \ (AEU_INPUTS_ATTN_BITS_QM_HW_INTERRUPT | \ AEU_INPUTS_ATTN_BITS_TIMERS_HW_INTERRUPT | \ AEU_INPUTS_ATTN_BITS_XSDM_HW_INTERRUPT | \ @@ -1750,7 +1750,7 @@ static const uint32_t dmae_reg_go_c[] = { AEU_INPUTS_ATTN_BITS_UPB_PARITY_ERROR | \ AEU_INPUTS_ATTN_BITS_CSDM_PARITY_ERROR |\ AEU_INPUTS_ATTN_BITS_CCM_PARITY_ERROR) -#define HW_INTERRUT_ASSERT_SET_2 \ +#define HW_INTERRUPT_ASSERT_SET_2 \ (AEU_INPUTS_ATTN_BITS_CSEMI_HW_INTERRUPT | \ AEU_INPUTS_ATTN_BITS_CDU_HW_INTERRUPT | \ AEU_INPUTS_ATTN_BITS_DMAE_HW_INTERRUPT | \ diff --git a/drivers/net/bnx2x/bnx2x_stats.c b/drivers/net/bnx2x/bnx2x_stats.c index 1cd97259..b19f7d67 100644 --- a/drivers/net/bnx2x/bnx2x_stats.c +++ b/drivers/net/bnx2x/bnx2x_stats.c @@ -551,7 +551,7 @@ bnx2x_bmac_stats_update(struct bnx2x_softc *sc) UPDATE_STAT64(rx_stat_grfrg, rx_stat_etherstatsfragments); UPDATE_STAT64(rx_stat_grjbr, rx_stat_etherstatsjabbers); UPDATE_STAT64(rx_stat_grxcf, rx_stat_maccontrolframesreceived); - UPDATE_STAT64(rx_stat_grxpf, rx_stat_xoffstateentered); + UPDATE_STAT64(rx_stat_grxpf, rx_stat_xoffsetateentered); UPDATE_STAT64(rx_stat_grxpf, rx_stat_mac_xpf); UPDATE_STAT64(tx_stat_gtxpf, tx_stat_outxoffsent); @@ -586,7 +586,7 @@ bnx2x_bmac_stats_update(struct bnx2x_softc *sc) UPDATE_STAT64(rx_stat_grfrg, rx_stat_etherstatsfragments); 
UPDATE_STAT64(rx_stat_grjbr, rx_stat_etherstatsjabbers); UPDATE_STAT64(rx_stat_grxcf, rx_stat_maccontrolframesreceived); UPDATE_STAT64(rx_stat_grxpf, rx_stat_xoffstateentered); UPDATE_STAT64(rx_stat_grxpf, rx_stat_mac_xpf); UPDATE_STAT64(tx_stat_gtxpf, tx_stat_outxoffsent); UPDATE_STAT64(tx_stat_gtxpf, tx_stat_flowcontroldone); @@ -646,7 +646,7 @@ bnx2x_mstat_stats_update(struct bnx2x_softc *sc) ADD_STAT64(stats_rx.rx_grovr, rx_stat_dot3statsframestoolong); ADD_STAT64(stats_rx.rx_grfrg, rx_stat_etherstatsfragments); ADD_STAT64(stats_rx.rx_grxcf, rx_stat_maccontrolframesreceived); ADD_STAT64(stats_rx.rx_grxpf, rx_stat_xoffstateentered); ADD_STAT64(stats_rx.rx_grxpf, rx_stat_mac_xpf); ADD_STAT64(stats_tx.tx_gtxpf, tx_stat_outxoffsent); ADD_STAT64(stats_tx.tx_gtxpf, tx_stat_flowcontroldone); @@ -729,7 +729,7 @@ bnx2x_emac_stats_update(struct bnx2x_softc *sc) UPDATE_EXTEND_STAT(rx_stat_etherstatsfragments); UPDATE_EXTEND_STAT(rx_stat_etherstatsjabbers); UPDATE_EXTEND_STAT(rx_stat_maccontrolframesreceived); UPDATE_EXTEND_STAT(rx_stat_xoffstateentered); UPDATE_EXTEND_STAT(rx_stat_xonpauseframesreceived); UPDATE_EXTEND_STAT(rx_stat_xoffpauseframesreceived); UPDATE_EXTEND_STAT(tx_stat_outxonsent); @@ -1358,7 +1358,7 @@ bnx2x_prep_fw_stats_req(struct bnx2x_softc *sc) /* * Prepare the first stats ramrod (will be completed with - * the counters equal to zero) - init counters to somethig different. + * the counters equal to zero) - init counters to something different.
*/ memset(&sc->fw_stats_data->storm_counters, 0xff, sizeof(struct stats_counter)); diff --git a/drivers/net/bnx2x/bnx2x_stats.h b/drivers/net/bnx2x/bnx2x_stats.h index 635412bd..6a6c4ab9 100644 --- a/drivers/net/bnx2x/bnx2x_stats.h +++ b/drivers/net/bnx2x/bnx2x_stats.h @@ -105,8 +105,8 @@ struct bnx2x_eth_stats { uint32_t rx_stat_bmac_xpf_lo; uint32_t rx_stat_bmac_xcf_hi; uint32_t rx_stat_bmac_xcf_lo; uint32_t rx_stat_xoffstateentered_hi; uint32_t rx_stat_xoffstateentered_lo; uint32_t rx_stat_xonpauseframesreceived_hi; uint32_t rx_stat_xonpauseframesreceived_lo; uint32_t rx_stat_xoffpauseframesreceived_hi; @@ -314,7 +314,7 @@ struct bnx2x_eth_stats_old { }; struct bnx2x_eth_q_stats_old { - /* Fields to perserve over fw reset*/ + /* Fields to preserve over fw reset*/ uint32_t total_unicast_bytes_received_hi; uint32_t total_unicast_bytes_received_lo; uint32_t total_broadcast_bytes_received_hi; @@ -328,7 +328,7 @@ struct bnx2x_eth_q_stats_old { uint32_t total_multicast_bytes_transmitted_hi; uint32_t total_multicast_bytes_transmitted_lo; - /* Fields to perserve last of */ + /* Fields to preserve last of */ uint32_t total_bytes_received_hi; uint32_t total_bytes_received_lo; uint32_t total_bytes_transmitted_hi; diff --git a/drivers/net/bnx2x/bnx2x_vfpf.c b/drivers/net/bnx2x/bnx2x_vfpf.c index 945e3df8..042d4b29 100644 --- a/drivers/net/bnx2x/bnx2x_vfpf.c +++ b/drivers/net/bnx2x/bnx2x_vfpf.c @@ -73,7 +73,7 @@ bnx2x_add_tlv(__rte_unused struct bnx2x_softc *sc, void *tlvs_list, tl->length = length; } -/* Initiliaze header of the first tlv and clear mailbox*/ +/* Initialize header of the first tlv and clear mailbox*/ static void bnx2x_vf_prep(struct bnx2x_softc *sc, struct vf_first_tlv *first_tlv, uint16_t type, uint16_t length) diff --git a/drivers/net/bnx2x/bnx2x_vfpf.h b/drivers/net/bnx2x/bnx2x_vfpf.h index 95773412..d71e81c0 100644 --- a/drivers/net/bnx2x/bnx2x_vfpf.h +++
b/drivers/net/bnx2x/bnx2x_vfpf.h @@ -241,7 +241,7 @@ struct vf_close_tlv { uint8_t pad[2]; }; -/* rlease the VF's acquired resources */ +/* release the VF's acquired resources */ struct vf_release_tlv { struct vf_first_tlv first_tlv; uint16_t vf_id; /* for debug */ diff --git a/drivers/net/bnx2x/ecore_fw_defs.h b/drivers/net/bnx2x/ecore_fw_defs.h index 93bca8ad..6fc1fce7 100644 --- a/drivers/net/bnx2x/ecore_fw_defs.h +++ b/drivers/net/bnx2x/ecore_fw_defs.h @@ -379,7 +379,7 @@ /* temporarily used for RTT */ #define XSEMI_CLK1_RESUL_CHIP (1e-3) -/* used for Host Coallescing */ +/* used for Host Coalescing */ #define SDM_TIMER_TICK_RESUL_CHIP (4 * (1e-6)) #define TSDM_TIMER_TICK_RESUL_CHIP (1 * (1e-6)) diff --git a/drivers/net/bnx2x/ecore_hsi.h b/drivers/net/bnx2x/ecore_hsi.h index 5508c536..7955cc37 100644 --- a/drivers/net/bnx2x/ecore_hsi.h +++ b/drivers/net/bnx2x/ecore_hsi.h @@ -961,10 +961,10 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */ #define PORT_FEAT_CFG_DCBX_DISABLED 0x00000000 #define PORT_FEAT_CFG_DCBX_ENABLED 0x00000100 - #define PORT_FEAT_CFG_AUTOGREEEN_MASK 0x00000200 - #define PORT_FEAT_CFG_AUTOGREEEN_SHIFT 9 - #define PORT_FEAT_CFG_AUTOGREEEN_DISABLED 0x00000000 - #define PORT_FEAT_CFG_AUTOGREEEN_ENABLED 0x00000200 + #define PORT_FEAT_CFG_AUTOGREEN_MASK 0x00000200 + #define PORT_FEAT_CFG_AUTOGREEN_SHIFT 9 + #define PORT_FEAT_CFG_AUTOGREEN_DISABLED 0x00000000 + #define PORT_FEAT_CFG_AUTOGREEN_ENABLED 0x00000200 #define PORT_FEAT_CFG_STORAGE_PERSONALITY_MASK 0x00000C00 #define PORT_FEAT_CFG_STORAGE_PERSONALITY_SHIFT 10 @@ -1062,7 +1062,7 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */ #define PORT_FEATURE_MBA_LINK_SPEED_20G 0x20000000 /* Secondary MBA configuration, - * see mba_config for the fileds defination. + * see mba_config for the fields definition. 
*/ uint32_t mba_config2; @@ -1070,12 +1070,12 @@ struct port_feat_cfg { /* port 0: 0x454 port 1: 0x4c8 */ #define PORT_FEATURE_MBA_VLAN_TAG_MASK 0x0000FFFF #define PORT_FEATURE_MBA_VLAN_TAG_SHIFT 0 #define PORT_FEATURE_MBA_VLAN_EN 0x00010000 - #define PORT_FEATUTE_BOFM_CFGD_EN 0x00020000 + #define PORT_FEATURE_BOFM_CFGD_EN 0x00020000 #define PORT_FEATURE_BOFM_CFGD_FTGT 0x00040000 #define PORT_FEATURE_BOFM_CFGD_VEN 0x00080000 /* Secondary MBA configuration, - * see mba_vlan_cfg for the fileds defination. + * see mba_vlan_cfg for the fields definition. */ uint32_t mba_vlan_cfg2; @@ -1429,7 +1429,7 @@ struct extended_dev_info_shared_cfg { /* NVRAM OFFSET */ #define EXTENDED_DEV_INFO_SHARED_CFG_DBG_GEN3_COMPLI_ENA 0x00080000 /* Override Rx signal detect threshold when enabled the threshold - * will be set staticaly + * will be set statically */ #define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_MASK 0x00100000 #define EXTENDED_DEV_INFO_SHARED_CFG_OVERRIDE_RX_SIG_SHIFT 20 @@ -2189,9 +2189,9 @@ struct eee_remote_vals { * elements on a per byte or word boundary. * * example: an array with 8 entries each 4 bit wide. This array will fit into - * a single dword. The diagrmas below show the array order of the nibbles. + * a single dword. The diagrams below show the array order of the nibbles. 
* - * SHMEM_ARRAY_BITPOS(i, 4, 4) defines the stadard ordering: + * SHMEM_ARRAY_BITPOS(i, 4, 4) defines the standard ordering: * * | | | | * 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | @@ -2519,17 +2519,17 @@ struct shmem_lfa { }; /* - * Used to suppoert NSCI get OS driver version + * Used to support NSCI get OS driver version * On driver load the version value will be set * On driver unload driver value of 0x0 will be set */ struct os_drv_ver { #define DRV_VER_NOT_LOADED 0 - /*personalites orrder is importent */ + /*personalities order is important */ #define DRV_PERS_ETHERNET 0 #define DRV_PERS_ISCSI 1 #define DRV_PERS_FCOE 2 - /*shmem2 struct is constatnt can't add more personalites here*/ + /*shmem2 struct is constant can't add more personalities here*/ #define MAX_DRV_PERS 3 uint32_t versions[MAX_DRV_PERS]; }; @@ -2754,8 +2754,8 @@ struct shmem2_region { struct eee_remote_vals eee_remote_vals[PORT_MAX]; /* 0x0110 */ uint32_t pf_allocation[E2_FUNC_MAX]; /* 0x0120 */ - #define PF_ALLOACTION_MSIX_VECTORS_MASK 0x000000ff /* real value, as PCI config space can show only maximum of 64 vectors */ - #define PF_ALLOACTION_MSIX_VECTORS_SHIFT 0 + #define PF_ALLOCATION_MSIX_VECTORS_MASK 0x000000ff /* real value, as PCI config space can show only maximum of 64 vectors */ + #define PF_ALLOCATION_MSIX_VECTORS_SHIFT 0 /* the status of EEE auto-negotiation * bits 15:0 the configured tx-lpi entry timer value. Depends on bit 31. 
@@ -2821,7 +2821,7 @@ struct shmem2_region { /* Flag to the driver that PF's drv_info_host_addr buffer was read */ uint32_t mfw_drv_indication; /* Offset 0x19c */ - /* We use inidcation for each PF (0..3) */ + /* We use indication for each PF (0..3) */ #define MFW_DRV_IND_READ_DONE_OFFSET(_pf_) (1 << (_pf_)) union { /* For various OEMs */ /* Offset 0x1a0 */ @@ -2940,7 +2940,7 @@ struct emac_stats { uint32_t rx_stat_xonpauseframesreceived; uint32_t rx_stat_xoffpauseframesreceived; uint32_t rx_stat_maccontrolframesreceived; uint32_t rx_stat_xoffstateentered; uint32_t rx_stat_dot3statsframestoolong; uint32_t rx_stat_etherstatsjabbers; uint32_t rx_stat_etherstatsundersizepkts; @@ -3378,8 +3378,8 @@ struct mac_stx { uint32_t rx_stat_mac_xcf_lo; /* xoff_state_entered */ uint32_t rx_stat_xoffstateentered_hi; uint32_t rx_stat_xoffstateentered_lo; /* pause_xon_frames_received */ uint32_t rx_stat_xonpauseframesreceived_hi; uint32_t rx_stat_xonpauseframesreceived_lo; @@ -6090,8 +6090,8 @@ struct fw_version { uint32_t flags; #define FW_VERSION_OPTIMIZED (0x1 << 0) #define FW_VERSION_OPTIMIZED_SHIFT 0 -#define FW_VERSION_BIG_ENDIEN (0x1 << 1) -#define FW_VERSION_BIG_ENDIEN_SHIFT 1 +#define FW_VERSION_BIG_ENDIAN (0x1 << 1) +#define FW_VERSION_BIG_ENDIAN_SHIFT 1 #define FW_VERSION_CHIP_VERSION (0x3 << 2) #define FW_VERSION_CHIP_VERSION_SHIFT 2 #define __FW_VERSION_RESERVED (0xFFFFFFF << 4) @@ -6195,7 +6195,7 @@ struct hc_sb_data { /* - * Segment types for host coaslescing + * Segment types for host coalescing */ enum hc_segment { HC_REGULAR_SEGMENT, @@ -6242,7 +6242,7 @@ struct hc_status_block_data_e2 { /* - * IGU block operartion modes (in Everest2) + * IGU block operation modes (in Everest2) */ enum igu_mode { HC_IGU_BC_MODE, @@ -6407,8 +6407,8 @@ struct pram_fw_version { #define PRAM_FW_VERSION_OPTIMIZED_SHIFT 0 #define PRAM_FW_VERSION_STORM_ID (0x3 <<
1) #define PRAM_FW_VERSION_STORM_ID_SHIFT 1 -#define PRAM_FW_VERSION_BIG_ENDIEN (0x1 << 3) -#define PRAM_FW_VERSION_BIG_ENDIEN_SHIFT 3 +#define PRAM_FW_VERSION_BIG_ENDIAN (0x1 << 3) +#define PRAM_FW_VERSION_BIG_ENDIAN_SHIFT 3 #define PRAM_FW_VERSION_CHIP_VERSION (0x3 << 4) #define PRAM_FW_VERSION_CHIP_VERSION_SHIFT 4 #define __PRAM_FW_VERSION_RESERVED0 (0x3 << 6) @@ -6508,7 +6508,7 @@ struct stats_query_header { /* - * Types of statistcis query entry + * Types of statistics query entry */ enum stats_query_type { STATS_TYPE_QUEUE, @@ -6542,7 +6542,7 @@ enum storm_id { /* - * Taffic types used in ETS and flow control algorithms + * Traffic types used in ETS and flow control algorithms */ enum traffic_type { LLFC_TRAFFIC_TYPE_NW, diff --git a/drivers/net/bnx2x/ecore_init.h b/drivers/net/bnx2x/ecore_init.h index 4e348612..a339c0bf 100644 --- a/drivers/net/bnx2x/ecore_init.h +++ b/drivers/net/bnx2x/ecore_init.h @@ -288,7 +288,7 @@ static inline void ecore_dcb_config_qm(struct bnx2x_softc *sc, enum cos_mode mod * * IMPORTANT REMARKS: * 1. the cmng_init struct does not represent the contiguous internal ram - * structure. the driver should use the XSTORM_CMNG_PERPORT_VARS_OFFSET + * structure. the driver should use the XSTORM_CMNG_PER_PORT_VARS_OFFSET * offset in order to write the port sub struct and the * PFID_FROM_PORT_AND_VNIC offset for writing the vnic sub struct (in other * words - don't use memcpy!). 
diff --git a/drivers/net/bnx2x/ecore_init_ops.h b/drivers/net/bnx2x/ecore_init_ops.h index 0945e799..4ed811fd 100644 --- a/drivers/net/bnx2x/ecore_init_ops.h +++ b/drivers/net/bnx2x/ecore_init_ops.h @@ -534,7 +534,7 @@ static void ecore_init_pxp_arb(struct bnx2x_softc *sc, int r_order, REG_WR(sc, PXP2_REG_WR_CDU_MPS, val); } - /* Validate number of tags suppoted by device */ + /* Validate number of tags supported by device */ #define PCIE_REG_PCIER_TL_HDR_FC_ST 0x2980 val = REG_RD(sc, PCIE_REG_PCIER_TL_HDR_FC_ST); val &= 0xFF; @@ -714,7 +714,7 @@ static void ecore_ilt_client_init_op_ilt(struct bnx2x_softc *sc, for (i = ilt_cli->start; i <= ilt_cli->end; i++) ecore_ilt_line_init_op(sc, ilt, i, initop); - /* init/clear the ILT boundries */ + /* init/clear the ILT boundaries */ ecore_ilt_boundary_init_op(sc, ilt_cli, ilt->start_line, initop); } @@ -765,7 +765,7 @@ static void ecore_ilt_init_client_psz(struct bnx2x_softc *sc, int cli_num, /* * called during init common stage, ilt clients should be initialized - * prioir to calling this function + * prior to calling this function */ static void ecore_ilt_init_page_size(struct bnx2x_softc *sc, uint8_t initop) { diff --git a/drivers/net/bnx2x/ecore_reg.h b/drivers/net/bnx2x/ecore_reg.h index bb92d131..6b220bc5 100644 --- a/drivers/net/bnx2x/ecore_reg.h +++ b/drivers/net/bnx2x/ecore_reg.h @@ -19,7 +19,7 @@ #define ATC_ATC_INT_STS_REG_ATC_RCPL_TO_EMPTY_CNT (0x1 << 3) #define ATC_ATC_INT_STS_REG_ATC_TCPL_ERROR (0x1 << 4) #define ATC_ATC_INT_STS_REG_ATC_TCPL_TO_NOT_PEND (0x1 << 1) -/* [R 1] ATC initalization done */ +/* [R 1] ATC initialization done */ #define ATC_REG_ATC_INIT_DONE 0x1100bc /* [RW 6] Interrupt mask register #0 read/write */ #define ATC_REG_ATC_INT_MASK 0x1101c8 @@ -56,7 +56,7 @@ #define BRB1_REG_PAUSE_HIGH_THRESHOLD_0 0x60078 /* [RW 10] Write client 0: Assert pause threshold. Not Functional */ #define BRB1_REG_PAUSE_LOW_THRESHOLD_0 0x60068 -/* [R 24] The number of full blocks occpied by port. 
*/ +/* [R 24] The number of full blocks occupied by port. */ #define BRB1_REG_PORT_NUM_OCC_BLOCKS_0 0x60094 /* [R 5] Used to read the value of the XX protection CAM occupancy counter. */ #define CCM_REG_CAM_OCCUP 0xd0188 @@ -456,7 +456,7 @@ #define IGU_REG_PCI_PF_MSIX_FUNC_MASK 0x130148 #define IGU_REG_PCI_PF_MSI_EN 0x130140 /* [WB_R 32] Each bit represent the pending bits status for that SB. 0 = no - * pending; 1 = pending. Pendings means interrupt was asserted; and write + * pending; 1 = pending. Pending means interrupt was asserted; and write * done was not received. Data valid only in addresses 0-4. all the rest are * zero. */ @@ -1059,14 +1059,14 @@ /* [R 28] this field hold the last information that caused reserved * attention. bits [19:0] - address; [22:20] function; [23] reserved; * [27:24] the master that caused the attention - according to the following - * encodeing:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = + * encoding:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = * dbu; 8 = dmae */ #define MISC_REG_GRC_RSV_ATTN 0xa3c0 /* [R 28] this field hold the last information that caused timeout * attention. bits [19:0] - address; [22:20] function; [23] reserved; * [27:24] the master that caused the attention - according to the following - * encodeing:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = + * encoding:1 = pxp; 2 = mcp; 3 = usdm; 4 = tsdm; 5 = xsdm; 6 = csdm; 7 = * dbu; 8 = dmae */ #define MISC_REG_GRC_TIMEOUT_ATTN 0xa3c4 @@ -1398,11 +1398,11 @@ * ~nig_registers_led_control_blink_traffic_p0.led_control_blink_traffic_p0 */ #define NIG_REG_LED_CONTROL_OVERRIDE_TRAFFIC_P0 0x102f8 -/* [RW 1] Port0: If set along with the led_control_override_trafic_p0 bit; +/* [RW 1] Port0: If set along with the led_control_override_traffic_p0 bit; * turns on the Traffic LED. 
If the led_control_blink_traffic_p0 bit is also * set; the LED will blink with blink rate specified in * ~nig_registers_led_control_blink_rate_p0.led_control_blink_rate_p0 and - * ~nig_regsters_led_control_blink_rate_ena_p0.led_control_blink_rate_ena_p0 + * ~nig_registers_led_control_blink_rate_ena_p0.led_control_blink_rate_ena_p0 * fields. */ #define NIG_REG_LED_CONTROL_TRAFFIC_P0 0x10300 @@ -1567,7 +1567,7 @@ * MAC DA 2. The reset default is set to mask out all parameters. */ #define NIG_REG_P0_LLH_PTP_PARAM_MASK 0x187a0 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -1672,7 +1672,7 @@ * MAC DA 2. The reset default is set to mask out all parameters. */ #define NIG_REG_P0_TLLH_PTP_PARAM_MASK 0x187f0 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -1839,7 +1839,7 @@ * MAC DA 2. The reset default is set to mask out all parameters. */ #define NIG_REG_P1_LLH_PTP_PARAM_MASK 0x187c8 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -1926,7 +1926,7 @@ * MAC DA 2. 
The reset default is set to mask out all parameters. */ #define NIG_REG_P1_TLLH_PTP_PARAM_MASK 0x187f8 -/* [RW 14] Mask regiser for the rules used in detecting PTP packets. Set +/* [RW 14] Mask register for the rules used in detecting PTP packets. Set * each bit to 1 to mask out that particular rule. 0-{IPv4 DA 0; UDP DP 0} . * 1-{IPv4 DA 0; UDP DP 1} . 2-{IPv4 DA 1; UDP DP 0} . 3-{IPv4 DA 1; UDP DP * 1} . 4-{IPv6 DA 0; UDP DP 0} . 5-{IPv6 DA 0; UDP DP 1} . 6-{IPv6 DA 1; @@ -2306,7 +2306,7 @@ #define PBF_REG_HDRS_AFTER_BASIC 0x15c0a8 /* [RW 6] Bit-map indicating which L2 hdrs may appear after L2 tag 0 */ #define PBF_REG_HDRS_AFTER_TAG_0 0x15c0b8 -/* [R 1] Removed for E3 B0 - Indicates which COS is conncted to the highest +/* [R 1] Removed for E3 B0 - Indicates which COS is connected to the highest * priority in the command arbiter. */ #define PBF_REG_HIGH_PRIORITY_COS_NUM 0x15c04c @@ -2366,7 +2366,7 @@ */ #define PBF_REG_NUM_STRICT_ARB_SLOTS 0x15c064 /* [R 11] Removed for E3 B0 - Port 0 threshold used by arbiter in 16 byte - * lines used when pause not suppoterd. + * lines used when pause not supported. */ #define PBF_REG_P0_ARB_THRSH 0x1400e4 /* [R 11] Removed for E3 B0 - Current credit for port 0 in the tx port @@ -3503,7 +3503,7 @@ * queues. */ #define QM_REG_OVFERROR 0x16805c -/* [RC 6] the Q were the qverflow occurs */ +/* [RC 6] the Q where the overflow occurs */ #define QM_REG_OVFQNUM 0x168058 /* [R 16] Pause state for physical queues 15-0 */ #define QM_REG_PAUSESTATE0 0x168410 @@ -4570,8 +4570,8 @@ #define PCICFG_COMMAND_RESERVED (0x1f<<11) #define PCICFG_STATUS_OFFSET 0x06 #define PCICFG_REVISION_ID_OFFSET 0x08 -#define PCICFG_REVESION_ID_MASK 0xff -#define PCICFG_REVESION_ID_ERROR_VAL 0xff +#define PCICFG_REVISION_ID_MASK 0xff +#define PCICFG_REVISION_ID_ERROR_VAL 0xff #define PCICFG_CACHE_LINE_SIZE 0x0c #define PCICFG_LATENCY_TIMER 0x0d #define PCICFG_HEADER_TYPE 0x0e @@ -4890,7 +4890,7 @@ if set, generate pcie_err_attn output when this error is seen.
WC \ */ #define PXPCS_TL_FUNC345_STAT_ERR_MASTER_ABRT2 \ - (1 << 3) /* Receive UR Statusfor Function 2. If set, generate \ + (1 << 3) /* Receive UR Status for Function 2. If set, generate \ pcie_err_attn output when this error is seen. WC */ #define PXPCS_TL_FUNC345_STAT_ERR_CPL_TIMEOUT2 \ (1 << 2) /* Completer Timeout Status Status for Function 2, if \ @@ -4986,7 +4986,7 @@ if set, generate pcie_err_attn output when this error is seen. WC \ */ #define PXPCS_TL_FUNC678_STAT_ERR_MASTER_ABRT5 \ - (1 << 3) /* Receive UR Statusfor Function 5. If set, generate \ + (1 << 3) /* Receive UR Status for Function 5. If set, generate \ pcie_err_attn output when this error is seen. WC */ #define PXPCS_TL_FUNC678_STAT_ERR_CPL_TIMEOUT5 \ (1 << 2) /* Completer Timeout Status Status for Function 5, if \ @@ -5272,8 +5272,8 @@ #define MDIO_GP_STATUS_TOP_AN_STATUS1_DUPLEX_STATUS 0x0008 #define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_MR_LP_NP_AN_ABLE 0x0010 #define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_LP_NP_BAM_ABLE 0x0020 -#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_TXSIDE 0x0040 -#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_RXSIDE 0x0080 +#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_TXSIDE 0x0040 +#define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_RXSIDE 0x0080 #define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_MASK 0x3f00 #define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_10M 0x0000 #define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_100M 0x0100 diff --git a/drivers/net/bnx2x/ecore_sp.c b/drivers/net/bnx2x/ecore_sp.c index 0075422e..6c727f2f 100644 --- a/drivers/net/bnx2x/ecore_sp.c +++ b/drivers/net/bnx2x/ecore_sp.c @@ -1338,7 +1338,7 @@ static int __ecore_vlan_mac_execute_step(struct bnx2x_softc *sc, if (rc != ECORE_SUCCESS) { __ecore_vlan_mac_h_pend(sc, o, *ramrod_flags); - /** Calling function should not diffrentiate between this case + /** Calling function should not differentiate between this case * and the case in which there is already a pending 
ramrod */ rc = ECORE_PENDING; @@ -2246,7 +2246,7 @@ struct ecore_pending_mcast_cmd { union { ecore_list_t macs_head; uint32_t macs_num; /* Needed for DEL command */ - int next_bin; /* Needed for RESTORE flow with aprox match */ + int next_bin; /* Needed for RESTORE flow with approx match */ } data; int done; /* set to TRUE, when the command has been handled, @@ -2352,11 +2352,11 @@ static int ecore_mcast_get_next_bin(struct ecore_mcast_obj *o, int last) int i, j, inner_start = last % BIT_VEC64_ELEM_SZ; for (i = last / BIT_VEC64_ELEM_SZ; i < ECORE_MCAST_VEC_SZ; i++) { - if (o->registry.aprox_match.vec[i]) + if (o->registry.approx_match.vec[i]) for (j = inner_start; j < BIT_VEC64_ELEM_SZ; j++) { int cur_bit = j + BIT_VEC64_ELEM_SZ * i; if (BIT_VEC64_TEST_BIT - (o->registry.aprox_match.vec, cur_bit)) { + (o->registry.approx_match.vec, cur_bit)) { return cur_bit; } } @@ -2379,7 +2379,7 @@ static int ecore_mcast_clear_first_bin(struct ecore_mcast_obj *o) int cur_bit = ecore_mcast_get_next_bin(o, 0); if (cur_bit >= 0) - BIT_VEC64_CLEAR_BIT(o->registry.aprox_match.vec, cur_bit); + BIT_VEC64_CLEAR_BIT(o->registry.approx_match.vec, cur_bit); return cur_bit; } @@ -2421,7 +2421,7 @@ static void ecore_mcast_set_one_rule_e2(struct bnx2x_softc *sc __rte_unused, switch (cmd) { case ECORE_MCAST_CMD_ADD: bin = ecore_mcast_bin_from_mac(cfg_data->mac); - BIT_VEC64_SET_BIT(o->registry.aprox_match.vec, bin); + BIT_VEC64_SET_BIT(o->registry.approx_match.vec, bin); break; case ECORE_MCAST_CMD_DEL: @@ -2812,7 +2812,7 @@ static int ecore_mcast_refresh_registry_e2(struct ecore_mcast_obj *o) uint64_t elem; for (i = 0; i < ECORE_MCAST_VEC_SZ; i++) { - elem = o->registry.aprox_match.vec[i]; + elem = o->registry.approx_match.vec[i]; for (; elem; cnt++) elem &= elem - 1; } @@ -2950,7 +2950,7 @@ static void ecore_mcast_hdl_add_e1h(struct bnx2x_softc *sc __rte_unused, bit); /* bookkeeping... 
*/ - BIT_VEC64_SET_BIT(o->registry.aprox_match.vec, bit); + BIT_VEC64_SET_BIT(o->registry.approx_match.vec, bit); } } @@ -2998,8 +2998,8 @@ static int ecore_mcast_setup_e1h(struct bnx2x_softc *sc, ECORE_MSG(sc, "Invalidating multicast MACs configuration"); /* clear the registry */ - ECORE_MEMSET(o->registry.aprox_match.vec, 0, - sizeof(o->registry.aprox_match.vec)); + ECORE_MEMSET(o->registry.approx_match.vec, 0, + sizeof(o->registry.approx_match.vec)); break; case ECORE_MCAST_CMD_RESTORE: @@ -3016,8 +3016,8 @@ static int ecore_mcast_setup_e1h(struct bnx2x_softc *sc, REG_WR(sc, ECORE_MC_HASH_OFFSET(sc, i), mc_filter[i]); } else /* clear the registry */ - ECORE_MEMSET(o->registry.aprox_match.vec, 0, - sizeof(o->registry.aprox_match.vec)); + ECORE_MEMSET(o->registry.approx_match.vec, 0, + sizeof(o->registry.approx_match.vec)); /* We are done */ r->clear_pending(r); @@ -3025,15 +3025,15 @@ static int ecore_mcast_setup_e1h(struct bnx2x_softc *sc, return ECORE_SUCCESS; } -static int ecore_mcast_get_registry_size_aprox(struct ecore_mcast_obj *o) +static int ecore_mcast_get_registry_size_approx(struct ecore_mcast_obj *o) { - return o->registry.aprox_match.num_bins_set; + return o->registry.approx_match.num_bins_set; } -static void ecore_mcast_set_registry_size_aprox(struct ecore_mcast_obj *o, +static void ecore_mcast_set_registry_size_approx(struct ecore_mcast_obj *o, int n) { - o->registry.aprox_match.num_bins_set = n; + o->registry.approx_match.num_bins_set = n; } int ecore_config_mcast(struct bnx2x_softc *sc, @@ -3163,9 +3163,9 @@ void ecore_init_mcast_obj(struct bnx2x_softc *sc, mcast_obj->validate = ecore_mcast_validate_e1h; mcast_obj->revert = ecore_mcast_revert_e1h; mcast_obj->get_registry_size = - ecore_mcast_get_registry_size_aprox; + ecore_mcast_get_registry_size_approx; mcast_obj->set_registry_size = - ecore_mcast_set_registry_size_aprox; + ecore_mcast_set_registry_size_approx; } else { mcast_obj->config_mcast = ecore_mcast_setup_e2; mcast_obj->enqueue_cmd = 
ecore_mcast_enqueue_cmd; @@ -3177,9 +3177,9 @@ void ecore_init_mcast_obj(struct bnx2x_softc *sc, mcast_obj->validate = ecore_mcast_validate_e2; mcast_obj->revert = ecore_mcast_revert_e2; mcast_obj->get_registry_size = - ecore_mcast_get_registry_size_aprox; + ecore_mcast_get_registry_size_approx; mcast_obj->set_registry_size = - ecore_mcast_set_registry_size_aprox; + ecore_mcast_set_registry_size_approx; } } @@ -3424,7 +3424,7 @@ void ecore_init_mac_credit_pool(struct bnx2x_softc *sc, } else { /* - * CAM credit is equaly divided between all active functions + * CAM credit is equally divided between all active functions * on the PATH. */ if (func_num > 0) { diff --git a/drivers/net/bnx2x/ecore_sp.h b/drivers/net/bnx2x/ecore_sp.h index d58072da..a5276475 100644 --- a/drivers/net/bnx2x/ecore_sp.h +++ b/drivers/net/bnx2x/ecore_sp.h @@ -430,7 +430,7 @@ enum { RAMROD_RESTORE, /* Execute the next command now */ RAMROD_EXEC, - /* Don't add a new command and continue execution of posponed + /* Don't add a new command and continue execution of postponed * commands. If not set a new command will be added to the * pending commands list. */ @@ -974,7 +974,7 @@ struct ecore_mcast_obj { * properly create DEL commands. */ int num_bins_set; - } aprox_match; + } approx_match; struct { ecore_list_t macs; @@ -1173,7 +1173,7 @@ struct ecore_rss_config_obj { /* Last configured indirection table */ uint8_t ind_table[T_ETH_INDIRECTION_TABLE_SIZE]; - /* flags for enabling 4-tupple hash on UDP */ + /* flags for enabling 4-tuple hash on UDP */ uint8_t udp_rss_v4; uint8_t udp_rss_v6; @@ -1285,7 +1285,7 @@ enum ecore_q_type { #define ECORE_MULTI_TX_COS_E3B0 3 #define ECORE_MULTI_TX_COS 3 /* Maximum possible */ #define MAC_PAD (ECORE_ALIGN(ETH_ALEN, sizeof(uint32_t)) - ETH_ALEN) -/* DMAE channel to be used by FW for timesync workaroun. A driver that sends +/* DMAE channel to be used by FW for timesync workaround. A driver that sends * timesync-related ramrods must not use this DMAE command ID. 
*/ #define FW_DMAE_CMD_ID 6 diff --git a/drivers/net/bnx2x/elink.c b/drivers/net/bnx2x/elink.c index 2093d8f3..838ad351 100644 --- a/drivers/net/bnx2x/elink.c +++ b/drivers/net/bnx2x/elink.c @@ -147,8 +147,8 @@ #define MDIO_GP_STATUS_TOP_AN_STATUS1_DUPLEX_STATUS 0x0008 #define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_MR_LP_NP_AN_ABLE 0x0010 #define MDIO_GP_STATUS_TOP_AN_STATUS1_CL73_LP_NP_BAM_ABLE 0x0020 - #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_TXSIDE 0x0040 - #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_RXSIDE 0x0080 + #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_TXSIDE 0x0040 + #define MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_RXSIDE 0x0080 #define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_MASK 0x3f00 #define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_10M 0x0000 #define MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_100M 0x0100 @@ -746,7 +746,7 @@ typedef elink_status_t (*read_sfp_module_eeprom_func_p)(struct elink_phy *phy, /********************************************************/ #define ELINK_ETH_HLEN 14 /* L2 header size + 2*VLANs (8 bytes) + LLC SNAP (8 bytes) */ -#define ELINK_ETH_OVREHEAD (ELINK_ETH_HLEN + 8 + 8) +#define ELINK_ETH_OVERHEAD (ELINK_ETH_HLEN + 8 + 8) #define ELINK_ETH_MIN_PACKET_SIZE 60 #define ELINK_ETH_MAX_PACKET_SIZE 1500 #define ELINK_ETH_MAX_JUMBO_PACKET_SIZE 9600 @@ -814,10 +814,10 @@ typedef elink_status_t (*read_sfp_module_eeprom_func_p)(struct elink_phy *phy, SHARED_HW_CFG_AN_EN_SGMII_FIBER_AUTO_DETECT #define ELINK_AUTONEG_REMOTE_PHY SHARED_HW_CFG_AN_ENABLE_REMOTE_PHY -#define ELINK_GP_STATUS_PAUSE_RSOLUTION_TXSIDE \ - MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_TXSIDE -#define ELINK_GP_STATUS_PAUSE_RSOLUTION_RXSIDE \ - MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RSOLUTION_RXSIDE +#define ELINK_GP_STATUS_PAUSE_RESOLUTION_TXSIDE \ + MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_TXSIDE +#define ELINK_GP_STATUS_PAUSE_RESOLUTION_RXSIDE \ + MDIO_GP_STATUS_TOP_AN_STATUS1_PAUSE_RESOLUTION_RXSIDE #define 
ELINK_GP_STATUS_SPEED_MASK \ MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_MASK #define ELINK_GP_STATUS_10M MDIO_GP_STATUS_TOP_AN_STATUS1_ACTUAL_SPEED_10M @@ -1460,7 +1460,7 @@ static void elink_ets_e3b0_pbf_disabled(const struct elink_params *params) } /****************************************************************************** * Description: - * E3B0 disable will return basicly the values to init values. + * E3B0 disable will return basically the values to init values. *. ******************************************************************************/ static elink_status_t elink_ets_e3b0_disabled(const struct elink_params *params, @@ -1483,7 +1483,7 @@ static elink_status_t elink_ets_e3b0_disabled(const struct elink_params *params, /****************************************************************************** * Description: - * Disable will return basicly the values to init values. + * Disable will return basically the values to init values. * ******************************************************************************/ elink_status_t elink_ets_disabled(struct elink_params *params, @@ -1506,7 +1506,7 @@ elink_status_t elink_ets_disabled(struct elink_params *params, /****************************************************************************** * Description - * Set the COS mappimg to SP and BW until this point all the COS are not + * Set the COS mapping to SP and BW until this point all the COS are not * set as SP or BW. ******************************************************************************/ static elink_status_t elink_ets_e3b0_cli_map(const struct elink_params *params, @@ -1652,7 +1652,7 @@ static elink_status_t elink_ets_e3b0_get_total_bw( } ELINK_DEBUG_P0(sc, "elink_ets_E3B0_config total BW should be 100"); - /* We can handle a case whre the BW isn't 100 this can happen + /* We can handle a case where the BW isn't 100 this can happen * if the TC are joined. 
*/ } @@ -2608,7 +2608,7 @@ static elink_status_t elink_emac_enable(struct elink_params *params, REG_WR(sc, NIG_REG_EGRESS_EMAC0_PORT + port * 4, 1); #ifdef ELINK_INCLUDE_EMUL - /* for paladium */ + /* for palladium */ if (CHIP_REV_IS_EMUL(sc)) { /* Use lane 1 (of lanes 0-3) */ REG_WR(sc, NIG_REG_XGXS_LANE_SEL_P0 + port * 4, 1); @@ -2726,7 +2726,7 @@ static elink_status_t elink_emac_enable(struct elink_params *params, /* Enable emac for jumbo packets */ elink_cb_reg_write(sc, emac_base + EMAC_REG_EMAC_RX_MTU_SIZE, (EMAC_RX_MTU_SIZE_JUMBO_ENA | - (ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD))); + (ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD))); /* Strip CRC */ REG_WR(sc, NIG_REG_NIG_INGRESS_EMAC0_NO_CRC + port * 4, 0x1); @@ -2850,7 +2850,7 @@ static void elink_update_pfc_bmac2(struct elink_params *params, /* Set Time (based unit is 512 bit time) between automatic * re-sending of PP packets amd enable automatic re-send of - * Per-Priroity Packet as long as pp_gen is asserted and + * Per-Priority Packet as long as pp_gen is asserted and * pp_disable is low. 
*/ val = 0x8000; @@ -3124,19 +3124,19 @@ static elink_status_t elink_bmac1_enable(struct elink_params *params, REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_BMAC_CONTROL, wb_data, 2); /* Set rx mtu */ - wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD; + wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD; wb_data[1] = 0; REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_RX_MAX_SIZE, wb_data, 2); elink_update_pfc_bmac1(params, vars); /* Set tx mtu */ - wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD; + wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD; wb_data[1] = 0; REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_TX_MAX_SIZE, wb_data, 2); /* Set cnt max size */ - wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD; + wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD; wb_data[1] = 0; REG_WR_DMAE(sc, bmac_addr + BIGMAC_REGISTER_CNT_MAX_SIZE, wb_data, 2); @@ -3203,18 +3203,18 @@ static elink_status_t elink_bmac2_enable(struct elink_params *params, DELAY(30); /* Set RX MTU */ - wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD; + wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD; wb_data[1] = 0; REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_RX_MAX_SIZE, wb_data, 2); DELAY(30); /* Set TX MTU */ - wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD; + wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD; wb_data[1] = 0; REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_TX_MAX_SIZE, wb_data, 2); DELAY(30); /* Set cnt max size */ - wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVREHEAD - 2; + wb_data[0] = ELINK_ETH_MAX_JUMBO_PACKET_SIZE + ELINK_ETH_OVERHEAD - 2; wb_data[1] = 0; REG_WR_DMAE(sc, bmac_addr + BIGMAC2_REGISTER_CNT_MAX_SIZE, wb_data, 2); DELAY(30); @@ -3339,7 +3339,7 @@ static elink_status_t elink_pbf_update(struct elink_params *params, } else { uint32_t thresh = (ELINK_ETH_MAX_JUMBO_PACKET_SIZE + - 
ELINK_ETH_OVREHEAD) / 16; + ELINK_ETH_OVERHEAD) / 16; REG_WR(sc, PBF_REG_P0_PAUSE_ENABLE + port * 4, 0); /* Update threshold */ REG_WR(sc, PBF_REG_P0_ARB_THRSH + port * 4, thresh); @@ -3369,7 +3369,7 @@ static elink_status_t elink_pbf_update(struct elink_params *params, } /** - * elink_get_emac_base - retrive emac base address + * elink_get_emac_base - retrieve emac base address * * @bp: driver handle * @mdc_mdio_access: access type @@ -4518,7 +4518,7 @@ static void elink_warpcore_enable_AN_KR2(struct elink_phy *phy, elink_cl45_write(sc, phy, reg_set[i].devad, reg_set[i].reg, reg_set[i].val); - /* Start KR2 work-around timer which handles BNX2X8073 link-parner */ + /* Start KR2 work-around timer which handles BNX2X8073 link-partner */ params->link_attr_sync |= LINK_ATTR_SYNC_KR2_ENABLE; elink_update_link_attr(params, params->link_attr_sync); } @@ -7824,7 +7824,7 @@ elink_status_t elink_link_update(struct elink_params *params, * hence its link is expected to be down * - SECOND_PHY means that first phy should not be able * to link up by itself (using configuration) - * - DEFAULT should be overridden during initialiazation + * - DEFAULT should be overridden during initialization */ ELINK_DEBUG_P1(sc, "Invalid link indication" " mpc=0x%x. 
DISABLING LINK !!!", @@ -10991,7 +10991,7 @@ static elink_status_t elink_84858_cmd_hdlr(struct elink_phy *phy, ELINK_DEBUG_P0(sc, "FW cmd failed."); return ELINK_STATUS_ERROR; } - /* Step5: Once the command has completed, read the specficied DATA + /* Step5: Once the command has completed, read the specified DATA * registers for any saved results for the command, if applicable */ diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index f53f8632..7bcf36c9 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -3727,7 +3727,7 @@ int bnxt_hwrm_allocate_pf_only(struct bnxt *bp) int rc; if (!BNXT_PF(bp)) { - PMD_DRV_LOG(ERR, "Attempt to allcoate VFs on a VF!\n"); + PMD_DRV_LOG(ERR, "Attempt to allocate VFs on a VF!\n"); return -EINVAL; } diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c index a4b09346..a967a9cc 100644 --- a/drivers/net/bnxt/tf_core/tfp.c +++ b/drivers/net/bnxt/tf_core/tfp.c @@ -52,7
+52,7 @@ tfp_send_msg_direct(struct bnxt *bp, } /** - * Allocates zero'ed memory from the heap. + * Allocates zeroed memory from the heap. * * Returns success or failure code. */ diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h index dd0a3470..5a99c7a0 100644 --- a/drivers/net/bnxt/tf_core/tfp.h +++ b/drivers/net/bnxt/tf_core/tfp.h @@ -150,7 +150,7 @@ tfp_msg_hwrm_oem_cmd(struct tf *tfp, uint32_t max_flows); /** - * Allocates zero'ed memory from the heap. + * Allocates zeroed memory from the heap. * * NOTE: Also performs virt2phy address conversion by default thus is * can be expensive to invoke. diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h index 9b5738af..a5e1fffe 100644 --- a/drivers/net/bonding/eth_bond_8023ad_private.h +++ b/drivers/net/bonding/eth_bond_8023ad_private.h @@ -20,7 +20,7 @@ /** Maximum number of LACP packets from one slave queued in TX ring. */ #define BOND_MODE_8023AX_SLAVE_TX_PKTS 1 /** - * Timeouts deffinitions (5.4.4 in 802.1AX documentation). + * Timeouts definitions (5.4.4 in 802.1AX documentation).
*/ #define BOND_8023AD_FAST_PERIODIC_MS 900 #define BOND_8023AD_SLOW_PERIODIC_MS 29000 diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h index 8b104b63..9626b26d 100644 --- a/drivers/net/bonding/eth_bond_private.h +++ b/drivers/net/bonding/eth_bond_private.h @@ -139,7 +139,7 @@ struct bond_dev_private { uint16_t slave_count; /**< Number of bonded slaves */ struct bond_slave_details slaves[RTE_MAX_ETHPORTS]; - /**< Arary of bonded slaves details */ + /**< Array of bonded slaves details */ struct mode8023ad_private mode4; uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS]; diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c index ca50583d..b3cddd8a 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad.c +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c @@ -243,7 +243,7 @@ record_default(struct port *port) { /* Record default parameters for partner. Partner admin parameters * are not implemented so set them to arbitrary default (last known) and - * mark actor that parner is in defaulted state. */ + * mark actor that partner is in defaulted state. */ port->partner_state = STATE_LACP_ACTIVE; ACTOR_STATE_SET(port, DEFAULTED); } @@ -300,7 +300,7 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id, MODE4_DEBUG("LACP -> CURRENT\n"); BOND_PRINT_LACP(lacp); /* Update selected flag. If partner parameters are defaulted assume they - * are match. If not defaulted compare LACP actor with ports parner + * are matched. If not defaulted compare LACP actor with port's partner * params.
*/ if (!ACTOR_STATE(port, DEFAULTED) && (ACTOR_STATE(port, AGGREGATION) != PARTNER_STATE(port, AGGREGATION) @@ -399,16 +399,16 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id) PARTNER_STATE(port, LACP_ACTIVE); uint8_t is_partner_fast, was_partner_fast; - /* No periodic is on BEGIN, LACP DISABLE or when both sides are pasive */ + /* No periodic is on BEGIN, LACP DISABLE or when both sides are passive */ if (SM_FLAG(port, BEGIN) || !SM_FLAG(port, LACP_ENABLED) || !active) { timer_cancel(&port->periodic_timer); timer_force_expired(&port->tx_machine_timer); SM_FLAG_CLR(port, PARTNER_SHORT_TIMEOUT); MODE4_DEBUG("-> NO_PERIODIC ( %s%s%s)\n", - SM_FLAG(port, BEGIN) ? "begind " : "", + SM_FLAG(port, BEGIN) ? "begin " : "", SM_FLAG(port, LACP_ENABLED) ? "" : "LACP disabled ", - active ? "LACP active " : "LACP pasive "); + active ? "LACP active " : "LACP passive "); return; } @@ -495,10 +495,10 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id) if ((ACTOR_STATE(port, DISTRIBUTING) || ACTOR_STATE(port, COLLECTING)) && !PARTNER_STATE(port, SYNCHRONIZATION)) { /* If in COLLECTING or DISTRIBUTING state and partner becomes out of - * sync transit to ATACHED state. */ + * sync transit to ATTACHED state. 
*/ ACTOR_STATE_CLR(port, DISTRIBUTING); ACTOR_STATE_CLR(port, COLLECTING); - /* Clear actor sync to activate transit ATACHED in condition bellow */ + /* Clear actor sync to activate transit ATTACHED in condition below */ ACTOR_STATE_CLR(port, SYNCHRONIZATION); MODE4_DEBUG("Out of sync -> ATTACHED\n"); } @@ -696,7 +696,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id) /* Search for aggregator suitable for this port */ for (i = 0; i < slaves_count; ++i) { agg = &bond_mode_8023ad_ports[slaves[i]]; - /* Skip ports that are not aggreagators */ + /* Skip ports that are not aggregators */ if (agg->aggregator_port_id != slaves[i]) continue; @@ -921,7 +921,7 @@ bond_mode_8023ad_periodic_cb(void *arg) SM_FLAG_SET(port, BEGIN); - /* LACP is disabled on half duples or link is down */ + /* LACP is disabled on half duplex or link is down */ if (SM_FLAG(port, LACP_ENABLED)) { /* If port was enabled set it to BEGIN state */ SM_FLAG_CLR(port, LACP_ENABLED); @@ -1069,7 +1069,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, port->partner_state = STATE_LACP_ACTIVE | STATE_AGGREGATION; port->sm_flags = SM_FLAGS_BEGIN; - /* use this port as agregator */ + /* use this port as aggregator */ port->aggregator_port_id = slave_id; if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) { diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h index 11a71a55..7eb392f8 100644 --- a/drivers/net/bonding/rte_eth_bond_8023ad.h +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h @@ -68,7 +68,7 @@ struct port_params { struct rte_ether_addr system; /**< System ID - Slave MAC address, same as bonding MAC address */ uint16_t key; - /**< Speed information (implementation dependednt) and duplex. */ + /**< Speed information (implementation dependent) and duplex.
*/ uint16_t port_priority; /**< Priority of this (unused in current implementation) */ uint16_t port_number; @@ -317,7 +317,7 @@ rte_eth_bond_8023ad_dedicated_queues_disable(uint16_t port_id); * @param port_id Bonding device id * * @return - * agregator mode on success, negative value otherwise + * aggregator mode on success, negative value otherwise */ int rte_eth_bond_8023ad_agg_selection_get(uint16_t port_id); diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h index 386e70c5..4e9aeda9 100644 --- a/drivers/net/bonding/rte_eth_bond_alb.h +++ b/drivers/net/bonding/rte_eth_bond_alb.h @@ -96,7 +96,7 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset, * @param internals Bonding data. * * @return - * Index of slawe on which packet should be sent. + * Index of slave on which packet should be sent. */ uint16_t bond_mode_alb_arp_upd(struct client_data *client_info, diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c index 84943cff..2d5cac6c 100644 --- a/drivers/net/bonding/rte_eth_bond_api.c +++ b/drivers/net/bonding/rte_eth_bond_api.c @@ -375,7 +375,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals, * value. Thus, the new internal value of default Rx queue offloads * has to be masked by rx_queue_offload_capa to make sure that only * commonly supported offloads are preserved from both the previous - * value and the value being inhereted from the new slave device. + * value and the value being inherited from the new slave device. */ rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) & internals->rx_queue_offload_capa; @@ -413,7 +413,7 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals, * value. 
Thus, the new internal value of default Tx queue offloads * has to be masked by tx_queue_offload_capa to make sure that only * commonly supported offloads are preserved from both the previous - * value and the value being inhereted from the new slave device. + * value and the value being inherited from the new slave device. */ txconf_i->offloads = (txconf_i->offloads | txconf->offloads) & internals->tx_queue_offload_capa; diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h index c2a46ad7..0982158c 100644 --- a/drivers/net/cnxk/cn10k_ethdev.h +++ b/drivers/net/cnxk/cn10k_ethdev.h @@ -53,7 +53,7 @@ struct cn10k_outb_priv_data { void *userdata; /* Rlen computation data */ struct cnxk_ipsec_outb_rlens rlens; - /* Back pinter to eth sec session */ + /* Back pointer to eth sec session */ struct cnxk_eth_sec_sess *eth_sec; /* SA index */ uint32_t sa_idx; diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h index 873e1871..c7d442fc 100644 --- a/drivers/net/cnxk/cn10k_tx.h +++ b/drivers/net/cnxk/cn10k_tx.h @@ -736,7 +736,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd, /* Retrieving the default desc values */ lmt[off] = cmd[2]; - /* Using compiler barier to avoid voilation of C + /* Using compiler barrier to avoid violation of C * aliasing rules. */ rte_compiler_barrier(); @@ -745,7 +745,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd, /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp * should not be recorded, hence changing the alg type to * NIX_SENDMEMALG_SET and also changing send mem addr field to - * next 8 bytes as it corrpt the actual tx tstamp registered + * next 8 bytes as it corrupts the actual tx tstamp registered * address. */ send_mem->w0.subdc = NIX_SUBDC_MEM; @@ -2254,7 +2254,7 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, } if (flags & NIX_TX_OFFLOAD_TSTAMP_F) { - /* Tx ol_flag for timestam. 
*/ + /* Tx ol_flag for timestamp. */ const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST}; /* Set send mem alg to SUB. */ diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h index 435dde13..f5e5e555 100644 --- a/drivers/net/cnxk/cn9k_tx.h +++ b/drivers/net/cnxk/cn9k_tx.h @@ -304,7 +304,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Retrieving the default desc values */ cmd[off] = send_mem_desc[6]; - /* Using compiler barier to avoid voilation of C + /* Using compiler barrier to avoid violation of C * aliasing rules. */ rte_compiler_barrier(); @@ -313,7 +313,7 @@ cn9k_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp * should not be recorded, hence changing the alg type to * NIX_SENDMEMALG_SET and also changing send mem addr field to - * next 8 bytes as it corrpt the actual tx tstamp registered + * next 8 bytes as it corrupts the actual tx tstamp registered * address. */ send_mem->w0.cn9k.alg = @@ -1531,7 +1531,7 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, } if (flags & NIX_TX_OFFLOAD_TSTAMP_F) { - /* Tx ol_flag for timestam. */ + /* Tx ol_flag for timestamp. */ const uint64x2_t olf = {RTE_MBUF_F_TX_IEEE1588_TMST, RTE_MBUF_F_TX_IEEE1588_TMST}; /* Set send mem alg to SUB. */ diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c index 139fea25..359f9a30 100644 --- a/drivers/net/cnxk/cnxk_ptp.c +++ b/drivers/net/cnxk/cnxk_ptp.c @@ -12,7 +12,7 @@ cnxk_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock) /* This API returns the raw PTP HI clock value. Since LFs do not * have direct access to PTP registers and it requires mbox msg * to AF for this value. 
In fastpath reading this value for every - * packet (which involes mbox call) becomes very expensive, hence + * packet (which involves mbox call) becomes very expensive, hence * we should be able to derive PTP HI clock value from tsc by * using freq_mult and clk_delta calculated during configure stage. */ diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c index edcbba9d..6e460dfe 100644 --- a/drivers/net/cxgbe/cxgbe_flow.c +++ b/drivers/net/cxgbe/cxgbe_flow.c @@ -1378,7 +1378,7 @@ cxgbe_flow_validate(struct rte_eth_dev *dev, } /* - * @ret : > 0 filter destroyed succsesfully + * @ret : > 0 filter destroyed successfully * < 0 error destroying filter * == 1 filter not active / not found */ diff --git a/drivers/net/cxgbe/cxgbevf_main.c b/drivers/net/cxgbe/cxgbevf_main.c index f639612a..d0c93f8a 100644 --- a/drivers/net/cxgbe/cxgbevf_main.c +++ b/drivers/net/cxgbe/cxgbevf_main.c @@ -44,7 +44,7 @@ static void size_nports_qsets(struct adapter *adapter) */ pmask_nports = hweight32(adapter->params.vfres.pmask); if (pmask_nports < adapter->params.nports) { - dev_warn(adapter->pdev_dev, "only using %d of %d provissioned" + dev_warn(adapter->pdev_dev, "only using %d of %d provisioned" " virtual interfaces; limited by Port Access Rights" " mask %#x\n", pmask_nports, adapter->params.nports, adapter->params.vfres.pmask); diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c index f623f3e6..1c76b8e4 100644 --- a/drivers/net/cxgbe/sge.c +++ b/drivers/net/cxgbe/sge.c @@ -211,7 +211,7 @@ static inline unsigned int fl_cap(const struct sge_fl *fl) * @fl: the Free List * * Tests specified Free List to see whether the number of buffers - * available to the hardware has falled below our "starvation" + * available to the hardware has fallen below our "starvation" * threshold. 
*/ static inline bool fl_starving(const struct adapter *adapter, @@ -678,7 +678,7 @@ static void write_sgl(struct rte_mbuf *mbuf, struct sge_txq *q, * @q: the Tx queue * @n: number of new descriptors to give to HW * - * Ring the doorbel for a Tx queue. + * Ring the doorbell for a Tx queue. */ static inline void ring_tx_db(struct adapter *adap, struct sge_txq *q) { @@ -877,7 +877,7 @@ static inline void ship_tx_pkt_coalesce_wr(struct adapter *adap, } /** - * should_tx_packet_coalesce - decides wether to coalesce an mbuf or not + * should_tx_packet_coalesce - decides whether to coalesce an mbuf or not * @txq: tx queue where the mbuf is sent * @mbuf: mbuf to be sent * @nflits: return value for number of flits needed @@ -1846,7 +1846,7 @@ int t4_sge_alloc_rxq(struct adapter *adap, struct sge_rspq *iq, bool fwevtq, * for its status page) along with the associated software * descriptor ring. The free list size needs to be a multiple * of the Egress Queue Unit and at least 2 Egress Units larger - * than the SGE's Egress Congrestion Threshold + * than the SGE's Egress Congestion Threshold * (fl_starve_thres - 1). */ if (fl->size < s->fl_starve_thres - 1 + 2 * 8) diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index e49f7654..2c2c4e4e 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -1030,7 +1030,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, QM_FQCTRL_CTXASTASHING | QM_FQCTRL_PREFERINCACHE; opts.fqd.context_a.stashing.exclusive = 0; - /* In muticore scenario stashing becomes a bottleneck on LS1046. + /* In multicore scenario stashing becomes a bottleneck on LS1046. 
* So do not enable stashing in this case */ if (dpaa_svr_family != SVR_LS1046A_FAMILY) @@ -1866,7 +1866,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev) dpaa_intf->name = dpaa_device->name; - /* save fman_if & cfg in the interface struture */ + /* save fman_if & cfg in the interface structure */ eth_dev->process_private = fman_intf; dpaa_intf->ifid = dev_id; dpaa_intf->cfg = cfg; @@ -2169,7 +2169,7 @@ rte_dpaa_probe(struct rte_dpaa_driver *dpaa_drv, if (dpaa_svr_family == SVR_LS1043A_FAMILY) dpaa_push_mode_max_queue = 0; - /* if push mode queues to be enabled. Currenly we are allowing + /* if push mode queues to be enabled. Currently we are allowing * only one queue per thread. */ if (getenv("DPAA_PUSH_QUEUES_NUMBER")) { diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index ffac6ce3..956fe946 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -600,8 +600,8 @@ void dpaa_rx_cb_prepare(struct qm_dqrr_entry *dq, void **bufs) void *ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dq->fd)); /* In case of LS1046, annotation stashing is disabled due to L2 cache - * being bottleneck in case of multicore scanario for this platform. - * So we prefetch the annoation beforehand, so that it is available + * being bottleneck in case of multicore scenario for this platform. + * So we prefetch the annotation beforehand, so that it is available * in cache when accessed. 
*/ rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF)); diff --git a/drivers/net/dpaa/fmlib/fm_ext.h b/drivers/net/dpaa/fmlib/fm_ext.h index 27c9fb47..8e7153bd 100644 --- a/drivers/net/dpaa/fmlib/fm_ext.h +++ b/drivers/net/dpaa/fmlib/fm_ext.h @@ -176,7 +176,7 @@ typedef struct t_fm_prs_result { #define FM_FD_ERR_PRS_HDR_ERR 0x00000020 /**< Header error was identified during parsing */ #define FM_FD_ERR_BLOCK_LIMIT_EXCEEDED 0x00000008 - /**< Frame parsed beyind 256 first bytes */ + /**< Frame parsed beyond 256 first bytes */ #define FM_FD_TX_STATUS_ERR_MASK (FM_FD_ERR_UNSUPPORTED_FORMAT | \ FM_FD_ERR_LENGTH | \ diff --git a/drivers/net/dpaa/fmlib/fm_pcd_ext.h b/drivers/net/dpaa/fmlib/fm_pcd_ext.h index 8be3885f..3802b429 100644 --- a/drivers/net/dpaa/fmlib/fm_pcd_ext.h +++ b/drivers/net/dpaa/fmlib/fm_pcd_ext.h @@ -276,7 +276,7 @@ typedef struct ioc_fm_pcd_counters_params_t { } ioc_fm_pcd_counters_params_t; /* - * @Description structure for FM exception definitios + * @Description structure for FM exception definitions */ typedef struct ioc_fm_pcd_exception_params_t { ioc_fm_pcd_exceptions exception; /**< The requested exception */ @@ -883,7 +883,7 @@ typedef enum ioc_fm_pcd_manip_hdr_rmv_specific_l2 { e_IOC_FM_PCD_MANIP_HDR_RMV_ETHERNET, /**< Ethernet/802.3 MAC */ e_IOC_FM_PCD_MANIP_HDR_RMV_STACKED_QTAGS, /**< stacked QTags */ e_IOC_FM_PCD_MANIP_HDR_RMV_ETHERNET_AND_MPLS, - /**< MPLS and Ethernet/802.3 MAC header unitl the header + /**< MPLS and Ethernet/802.3 MAC header until the header * which follows the MPLS header */ e_IOC_FM_PCD_MANIP_HDR_RMV_MPLS @@ -3293,7 +3293,7 @@ typedef struct ioc_fm_pcd_cc_tbl_get_stats_t { /* * @Function fm_pcd_net_env_characteristics_delete * - * @Description Deletes a set of Network Environment Charecteristics. + * @Description Deletes a set of Network Environment Characteristics. * * @Param[in] ioc_fm_obj_t The id of a Network Environment object. 
* @@ -3493,7 +3493,7 @@ typedef struct ioc_fm_pcd_cc_tbl_get_stats_t { * @Return 0 on success; Error code otherwise. * * @Cautions Allowed only following fm_pcd_match_table_set() not only of - * the relevnt node but also the node that points to this node. + * the relevant node but also the node that points to this node. */ #define FM_PCD_IOC_MATCH_TABLE_MODIFY_KEY_AND_NEXT_ENGINE \ _IOW(FM_IOC_TYPE_BASE, FM_PCD_IOC_NUM(35), \ diff --git a/drivers/net/dpaa/fmlib/fm_port_ext.h b/drivers/net/dpaa/fmlib/fm_port_ext.h index 6f5479fb..abdec961 100644 --- a/drivers/net/dpaa/fmlib/fm_port_ext.h +++ b/drivers/net/dpaa/fmlib/fm_port_ext.h @@ -177,7 +177,7 @@ typedef enum ioc_fm_port_counters { /**< BMI OP & HC only statistics counter */ e_IOC_FM_PORT_COUNTERS_LENGTH_ERR, /**< BMI non-Rx statistics counter */ - e_IOC_FM_PORT_COUNTERS_UNSUPPRTED_FORMAT, + e_IOC_FM_PORT_COUNTERS_UNSUPPORTED_FORMAT, /**< BMI non-Rx statistics counter */ e_IOC_FM_PORT_COUNTERS_DEQ_TOTAL,/**< QMI total QM dequeues counter */ e_IOC_FM_PORT_COUNTERS_ENQ_TOTAL,/**< QMI total QM enqueues counter */ @@ -498,7 +498,7 @@ typedef struct ioc_fm_port_pcd_prs_params_t { /**< Number of bytes from beginning of packet to start parsing */ ioc_net_header_type first_prs_hdr; - /**< The type of the first header axpected at 'parsing_offset' + /**< The type of the first header expected at 'parsing_offset' */ bool include_in_prs_statistics; /**< TRUE to include this port in the parser statistics */ @@ -524,7 +524,7 @@ typedef struct ioc_fm_port_pcd_prs_params_t { } ioc_fm_port_pcd_prs_params_t; /* - * @Description A structure for defining coarse alassification parameters + * @Description A structure for defining coarse classification parameters * (Must match t_fm_portPcdCcParams defined in fm_port_ext.h) */ typedef struct ioc_fm_port_pcd_cc_params_t { @@ -602,7 +602,7 @@ typedef struct ioc_fm_pcd_prs_start_t { /**< Number of bytes from beginning of packet to start parsing */ ioc_net_header_type first_prs_hdr; - /**< 
The type of the first header axpected at 'parsing_offset' + /**< The type of the first header expected at 'parsing_offset' */ } ioc_fm_pcd_prs_start_t; @@ -1356,7 +1356,7 @@ typedef uint32_t fm_port_frame_err_select_t; #define FM_PORT_FRM_ERR_PRS_HDR_ERR FM_FD_ERR_PRS_HDR_ERR /**< Header error was identified during parsing */ #define FM_PORT_FRM_ERR_BLOCK_LIMIT_EXCEEDED FM_FD_ERR_BLOCK_LIMIT_EXCEEDED - /**< Frame parsed beyind 256 first bytes */ + /**< Frame parsed beyond 256 first bytes */ #define FM_PORT_FRM_ERR_PROCESS_TIMEOUT 0x00000001 /**< FPM Frame Processing Timeout Exceeded */ /* @} */ @@ -1390,7 +1390,7 @@ typedef void (t_fm_port_exception_callback) (t_handle h_app, * @Param[in] length length of received data * @Param[in] status receive status and errors * @Param[in] position position of buffer in frame - * @Param[in] h_buf_context A handle of the user acossiated with this buffer + * @Param[in] h_buf_context A handle of the user associated with this buffer * * @Retval e_RX_STORE_RESPONSE_CONTINUE * order the driver to continue Rx operation for all ready data. 
@@ -1414,7 +1414,7 @@ typedef e_rx_store_response(t_fm_port_im_rx_store_callback) (t_handle h_app, * @Param[in] p_data A pointer to data received * @Param[in] status transmit status and errors * @Param[in] last_buffer is last buffer in frame - * @Param[in] h_buf_context A handle of the user acossiated with this buffer + * @Param[in] h_buf_context A handle of the user associated with this buffer */ typedef void (t_fm_port_im_tx_conf_callback) (t_handle h_app, uint8_t *p_data, @@ -2538,7 +2538,7 @@ typedef enum e_fm_port_counters { /**< BMI OP & HC only statistics counter */ e_FM_PORT_COUNTERS_LENGTH_ERR, /**< BMI non-Rx statistics counter */ - e_FM_PORT_COUNTERS_UNSUPPRTED_FORMAT, + e_FM_PORT_COUNTERS_UNSUPPORTED_FORMAT, /**< BMI non-Rx statistics counter */ e_FM_PORT_COUNTERS_DEQ_TOTAL, /**< QMI total QM dequeues counter */ e_FM_PORT_COUNTERS_ENQ_TOTAL, /**< QMI total QM enqueues counter */ @@ -2585,7 +2585,7 @@ typedef struct t_fm_port_congestion_grps { bool pfc_prio_enable[FM_NUM_CONG_GRPS][FM_MAX_PFC_PRIO]; /**< a matrix that represents the map between the CG ids * defined in 'congestion_grps_to_consider' to the - * priorties mapping array. + * priorities mapping array. 
*/ } t_fm_port_congestion_grps; diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index a3706439..f561dcc1 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -143,7 +143,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask) PMD_INIT_FUNC_TRACE(); if (mask & RTE_ETH_VLAN_FILTER_MASK) { - /* VLAN Filter not avaialble */ + /* VLAN Filter not available */ if (!priv->max_vlan_filters) { DPAA2_PMD_INFO("VLAN filter not available"); return -ENOTSUP; @@ -916,7 +916,7 @@ dpaa2_dev_tx_queue_setup(struct rte_eth_dev *dev, cong_notif_cfg.units = DPNI_CONGESTION_UNIT_FRAMES; cong_notif_cfg.threshold_entry = nb_tx_desc; /* Notify that the queue is not congested when the data in - * the queue is below this thershold.(90% of value) + * the queue is below this threshold.(90% of value) */ cong_notif_cfg.threshold_exit = (nb_tx_desc * 9) / 10; cong_notif_cfg.message_ctx = 0; @@ -1058,7 +1058,7 @@ dpaa2_supported_ptypes_get(struct rte_eth_dev *dev) * Dpaa2 link Interrupt handler * * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -2236,7 +2236,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev, ocfg.oa = 1; /* Late arrival window size disabled */ ocfg.olws = 0; - /* ORL resource exhaustaion advance NESN disabled */ + /* ORL resource exhaustion advance NESN disabled */ ocfg.oeane = 0; /* Loose ordering enabled */ ocfg.oloe = 1; @@ -2720,13 +2720,13 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_burst = dpaa2_dev_tx; - /*Init fields w.r.t. classficaition*/ + /*Init fields w.r.t. 
classification*/ memset(&priv->extract.qos_key_extract, 0, sizeof(struct dpaa2_key_extract)); priv->extract.qos_extract_param = (size_t)rte_malloc(NULL, 256, 64); if (!priv->extract.qos_extract_param) { DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow " - " classificaiton ", ret); + " classification ", ret); goto init_err; } priv->extract.qos_key_extract.key_info.ipv4_src_offset = @@ -2744,7 +2744,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev) priv->extract.tc_extract_param[i] = (size_t)rte_malloc(NULL, 256, 64); if (!priv->extract.tc_extract_param[i]) { - DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classificaiton", + DPAA2_PMD_ERR(" Error(%d) in allocation resources for flow classification", ret); goto init_err; } diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h index c5e9267b..1c5569d0 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.h +++ b/drivers/net/dpaa2/dpaa2_ethdev.h @@ -117,7 +117,7 @@ extern int dpaa2_timestamp_dynfield_offset; #define DPAA2_FLOW_MAX_KEY_SIZE 16 -/*Externaly defined*/ +/*Externally defined*/ extern const struct rte_flow_ops dpaa2_flow_ops; extern const struct rte_tm_ops dpaa2_tm_ops; diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c index 84fe37a7..8a14eb95 100644 --- a/drivers/net/dpaa2/dpaa2_flow.c +++ b/drivers/net/dpaa2/dpaa2_flow.c @@ -1341,7 +1341,7 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow, } static int -dpaa2_configure_flow_ip_discrimation( +dpaa2_configure_flow_ip_discrimination( struct dpaa2_dev_priv *priv, struct rte_flow *flow, const struct rte_flow_item *pattern, int *local_cfg, int *device_configured, @@ -1447,11 +1447,11 @@ dpaa2_configure_flow_generic_ip( flow->tc_id = group; flow->tc_index = attr->priority; - ret = dpaa2_configure_flow_ip_discrimation(priv, + ret = dpaa2_configure_flow_ip_discrimination(priv, flow, pattern, &local_cfg, device_configured, group); if (ret) { - DPAA2_PMD_ERR("IP discrimation failed!"); + 
DPAA2_PMD_ERR("IP discrimination failed!"); return -1; } @@ -3349,7 +3349,7 @@ dpaa2_flow_verify_action( (actions[j].conf); if (rss_conf->queue_num > priv->dist_queues) { DPAA2_PMD_ERR( - "RSS number exceeds the distrbution size"); + "RSS number exceeds the distribution size"); return -ENOTSUP; } for (i = 0; i < (int)rss_conf->queue_num; i++) { @@ -3596,7 +3596,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, qos_cfg.keep_entries = true; qos_cfg.key_cfg_iova = (size_t)priv->extract.qos_extract_param; - /* QoS table is effecitive for multiple TCs.*/ + /* QoS table is effective for multiple TCs.*/ if (priv->num_rx_tc > 1) { ret = dpni_set_qos_table(dpni, CMD_PRI_LOW, priv->token, &qos_cfg); @@ -3655,7 +3655,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow, 0, 0); if (ret < 0) { DPAA2_PMD_ERR( - "Error in addnig entry to QoS table(%d)", ret); + "Error in adding entry to QoS table(%d)", ret); return ret; } } diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c index d347f4df..f54ab5df 100644 --- a/drivers/net/dpaa2/dpaa2_mux.c +++ b/drivers/net/dpaa2/dpaa2_mux.c @@ -95,7 +95,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id, mask_iova = (void *)((size_t)key_iova + DIST_PARAM_IOVA_SIZE); /* Currently taking only IP protocol as an extract type. - * This can be exended to other fields using pattern->type. + * This can be extended to other fields using pattern->type. 
*/ memset(&kg_cfg, 0, sizeof(struct dpkg_profile_cfg)); @@ -311,11 +311,11 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused, goto init_err; } - /* The new dpdmux_set/get_resetable() API are available starting with + /* The new dpdmux_set/get_resettable() API are available starting with * DPDMUX_VER_MAJOR==6 and DPDMUX_VER_MINOR==6 */ if (maj_ver >= 6 && min_ver >= 6) { - ret = dpdmux_set_resetable(&dpdmux_dev->dpdmux, CMD_PRI_LOW, + ret = dpdmux_set_resettable(&dpdmux_dev->dpdmux, CMD_PRI_LOW, dpdmux_dev->token, DPDMUX_SKIP_DEFAULT_INTERFACE | DPDMUX_SKIP_UNICAST_RULES | diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c index c65589a5..90b971b4 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -714,7 +714,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) rte_prefetch0((void *)(size_t)(dq_storage + 1)); /* Prepare next pull descriptor. This will give space for the - * prefething done on DQRR entries + * prefetching done on DQRR entries */ q_storage->toggle ^= 1; dq_storage1 = q_storage->dq_storage[q_storage->toggle]; @@ -1510,7 +1510,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) if (*dpaa2_seqn(*bufs)) { /* Use only queue 0 for Tx in case of atomic/ * ordered packets as packets can get unordered - * when being tranmitted out from the interface + * when being transmitted out from the interface */ dpaa2_set_enqueue_descriptor(order_sendq, (*bufs), @@ -1738,7 +1738,7 @@ dpaa2_dev_loopback_rx(void *queue, rte_prefetch0((void *)(size_t)(dq_storage + 1)); /* Prepare next pull descriptor. 
This will give space for the - * prefething done on DQRR entries + * prefetching done on DQRR entries */ q_storage->toggle ^= 1; dq_storage1 = q_storage->dq_storage[q_storage->toggle]; diff --git a/drivers/net/dpaa2/mc/dpdmux.c b/drivers/net/dpaa2/mc/dpdmux.c index edbb01b4..693557e1 100644 --- a/drivers/net/dpaa2/mc/dpdmux.c +++ b/drivers/net/dpaa2/mc/dpdmux.c @@ -281,7 +281,7 @@ int dpdmux_reset(struct fsl_mc_io *mc_io, } /** - * dpdmux_set_resetable() - Set overall resetable DPDMUX parameters. + * dpdmux_set_resettable() - Set overall resettable DPDMUX parameters. * @mc_io: Pointer to MC portal's I/O object * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' * @token: Token of DPDMUX object @@ -299,7 +299,7 @@ int dpdmux_reset(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. */ -int dpdmux_set_resetable(struct fsl_mc_io *mc_io, +int dpdmux_set_resettable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t skip_reset_flags) @@ -321,7 +321,7 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io, } /** - * dpdmux_get_resetable() - Get overall resetable parameters. + * dpdmux_get_resettable() - Get overall resettable parameters. * @mc_io: Pointer to MC portal's I/O object * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' * @token: Token of DPDMUX object @@ -334,7 +334,7 @@ int dpdmux_set_resetable(struct fsl_mc_io *mc_io, * * Return: '0' on Success; Error code otherwise. 
*/ -int dpdmux_get_resetable(struct fsl_mc_io *mc_io, +int dpdmux_get_resettable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t *skip_reset_flags) diff --git a/drivers/net/dpaa2/mc/fsl_dpdmux.h b/drivers/net/dpaa2/mc/fsl_dpdmux.h index b01a98eb..274dcffc 100644 --- a/drivers/net/dpaa2/mc/fsl_dpdmux.h +++ b/drivers/net/dpaa2/mc/fsl_dpdmux.h @@ -155,12 +155,12 @@ int dpdmux_reset(struct fsl_mc_io *mc_io, */ #define DPDMUX_SKIP_MULTICAST_RULES 0x04 -int dpdmux_set_resetable(struct fsl_mc_io *mc_io, +int dpdmux_set_resettable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t skip_reset_flags); -int dpdmux_get_resetable(struct fsl_mc_io *mc_io, +int dpdmux_get_resettable(struct fsl_mc_io *mc_io, uint32_t cmd_flags, uint16_t token, uint8_t *skip_reset_flags); diff --git a/drivers/net/dpaa2/mc/fsl_dpni.h b/drivers/net/dpaa2/mc/fsl_dpni.h index 469ab9b3..3b9bffee 100644 --- a/drivers/net/dpaa2/mc/fsl_dpni.h +++ b/drivers/net/dpaa2/mc/fsl_dpni.h @@ -93,7 +93,7 @@ struct fsl_mc_io; */ #define DPNI_OPT_OPR_PER_TC 0x000080 /** - * All Tx traffic classes will use a single sender (ignore num_queueus for tx) + * All Tx traffic classes will use a single sender (ignore num_queues for tx) */ #define DPNI_OPT_SINGLE_SENDER 0x000100 /** @@ -617,7 +617,7 @@ int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io, * @page_3.ceetm_reject_bytes: Cumulative count of the number of bytes in all * frames whose enqueue was rejected * @page_3.ceetm_reject_frames: Cumulative count of all frame enqueues rejected - * @page_4: congestion point drops for seleted TC + * @page_4: congestion point drops for selected TC * @page_4.cgr_reject_frames: number of rejected frames due to congestion point * @page_4.cgr_reject_bytes: number of rejected bytes due to congestion point * @page_5: policer statistics per TC @@ -1417,7 +1417,7 @@ int dpni_get_tx_confirmation_mode(struct fsl_mc_io *mc_io, * dpkg_prepare_key_cfg() * @discard_on_miss: Set to '1' to discard frames in 
case of no match (miss); * '0' to use the 'default_tc' in such cases - * @keep_entries: if set to one will not delele existing table entries. This + * @keep_entries: if set to one will not delete existing table entries. This * option will work properly only for dpni objects created with * DPNI_OPT_HAS_KEY_MASKING option. All previous QoS entries must * be compatible with new key composition rule. @@ -1516,7 +1516,7 @@ int dpni_clear_qos_table(struct fsl_mc_io *mc_io, * @flow_id: Identifies the Rx queue used for matching traffic. Supported * values are in range 0 to num_queue-1. * @redirect_obj_token: token that identifies the object where frame is - * redirected when this rule is hit. This paraneter is used only when one of the + * redirected when this rule is hit. This parameter is used only when one of the * flags DPNI_FS_OPT_REDIRECT_TO_DPNI_RX or DPNI_FS_OPT_REDIRECT_TO_DPNI_TX is * set. * The token is obtained using dpni_open() API call. The object must stay @@ -1797,7 +1797,7 @@ int dpni_load_sw_sequence(struct fsl_mc_io *mc_io, struct dpni_load_ss_cfg *cfg); /** - * dpni_eanble_sw_sequence() - Enables a software sequence in the parser + * dpni_enable_sw_sequence() - Enables a software sequence in the parser * profile * corresponding to the ingress or egress of the DPNI. * @mc_io: Pointer to MC portal's I/O object diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h index a548ae2c..718a9746 100644 --- a/drivers/net/e1000/e1000_ethdev.h +++ b/drivers/net/e1000/e1000_ethdev.h @@ -103,7 +103,7 @@ * Maximum number of Ring Descriptors. * * Since RDLEN/TDLEN should be multiple of 128 bytes, the number of ring - * desscriptors should meet the following condition: + * descriptors should meet the following condition: * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0 */ #define E1000_MIN_RING_DESC 32 @@ -252,7 +252,7 @@ struct igb_rte_flow_rss_conf { }; /* - * Structure to store filters'info. 
+ * Structure to store filters' info. */ struct e1000_filter_info { uint8_t ethertype_mask; /* Bit mask for every used ethertype filter */ diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 31c48700..794496ab 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -1058,8 +1058,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) /* * Starting with 631xESB hw supports 2 TX/RX queues per port. - * Unfortunatelly, all these nics have just one TX context. - * So we have few choises for TX: + * Unfortunately, all these nics have just one TX context. + * So we have few choices for TX: * - Use just one TX queue. * - Allow cksum offload only for one TX queue. * - Don't allow TX cksum offload at all. @@ -1068,7 +1068,7 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) * (Multiple Receive Queues are mutually exclusive with UDP * fragmentation and are not supported when a legacy receive * descriptor format is used). - * Which means separate RX routinies - as legacy nics (82540, 82545) + * Which means separate RX routines - as legacy nics (82540, 82545) * don't support extended RXD. * To avoid it we support just one RX queue for now (no RSS). */ @@ -1558,7 +1558,7 @@ eth_em_interrupt_get_status(struct rte_eth_dev *dev) } /* - * It executes link_update after knowing an interrupt is prsent. + * It executes link_update after knowing an interrupt is present. * * @param dev * Pointer to struct rte_eth_dev. @@ -1616,7 +1616,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev, * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. 
* * @return * void diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c index 39262502..cea5b490 100644 --- a/drivers/net/e1000/em_rxtx.c +++ b/drivers/net/e1000/em_rxtx.c @@ -141,7 +141,7 @@ union em_vlan_macip { struct em_ctx_info { uint64_t flags; /**< ol_flags related to context build. */ uint32_t cmp_mask; /**< compare mask */ - union em_vlan_macip hdrlen; /**< L2 and L3 header lenghts */ + union em_vlan_macip hdrlen; /**< L2 and L3 header lengths */ }; /** @@ -829,7 +829,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -1074,7 +1074,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... 
*/ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index 3ee16c15..4f865d18 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -1149,7 +1149,7 @@ eth_igb_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; - /* multipe queue mode checking */ + /* multiple queue mode checking */ ret = igb_check_mq_mode(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "igb_check_mq_mode fails with %d.", @@ -1265,7 +1265,7 @@ eth_igb_start(struct rte_eth_dev *dev) } } - /* confiugre msix for rx interrupt */ + /* configure msix for rx interrupt */ eth_igb_configure_msix_intr(dev); /* Configure for OS presence */ @@ -2819,7 +2819,7 @@ eth_igb_interrupt_get_status(struct rte_eth_dev *dev) } /* - * It executes link_update after knowing an interrupt is prsent. + * It executes link_update after knowing an interrupt is present. * * @param dev * Pointer to struct rte_eth_dev. @@ -2889,7 +2889,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev, * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -3787,7 +3787,7 @@ igb_inject_2uple_filter(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be added. + * ntuple_filter: pointer to the filter that will be added. * * @return * - On success, zero. @@ -3868,7 +3868,7 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be removed. + * ntuple_filter: pointer to the filter that will be removed. * * @return * - On success, zero. 
@@ -4226,7 +4226,7 @@ igb_inject_5tuple_filter_82576(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be added. + * ntuple_filter: pointer to the filter that will be added. * * @return * - On success, zero. @@ -4313,7 +4313,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev, * * @param * dev: Pointer to struct rte_eth_dev. - * ntuple_filter: ponter to the filter that will be removed. + * ntuple_filter: pointer to the filter that will be removed. * * @return * - On success, zero. @@ -4831,7 +4831,7 @@ igb_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ E1000_WRITE_REG(hw, E1000_ETQF(E1000_ETQF_FILTER_1588), 0); - /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. */ E1000_WRITE_REG(hw, E1000_TIMINCA, 0); return 0; diff --git a/drivers/net/e1000/igb_flow.c b/drivers/net/e1000/igb_flow.c index e72376f6..b7f9b942 100644 --- a/drivers/net/e1000/igb_flow.c +++ b/drivers/net/e1000/igb_flow.c @@ -57,7 +57,7 @@ struct igb_flex_filter_list igb_filter_flex_list; struct igb_rss_filter_list igb_filter_rss_list; /** - * Please aware there's an asumption for all the parsers. + * Please aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and * rte_flow_action are using CPU order. * Because the pattern is used to describe the packets, @@ -1608,7 +1608,7 @@ igb_flow_create(struct rte_eth_dev *dev, /** * Check if the flow rule is supported by igb. - * It only checkes the format. Don't guarantee the rule can be programmed into + * It only checks the format. Don't guarantee the rule can be programmed into * the HW. Because there can be no enough room for the rule. 
*/ static int diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c index fe355ef6..3f3fd0d6 100644 --- a/drivers/net/e1000/igb_pf.c +++ b/drivers/net/e1000/igb_pf.c @@ -155,7 +155,7 @@ int igb_pf_host_configure(struct rte_eth_dev *eth_dev) else E1000_WRITE_REG(hw, E1000_DTXSWC, E1000_DTXSWC_VMDQ_LOOPBACK_EN); - /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ rah = E1000_READ_REG(hw, E1000_RAH(0)); rah &= ~ (0xFF << E1000_RAH_POOLSEL_SHIFT); E1000_WRITE_REG(hw, E1000_RAH(0), rah); diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c index 4a311a7b..f32dee46 100644 --- a/drivers/net/e1000/igb_rxtx.c +++ b/drivers/net/e1000/igb_rxtx.c @@ -150,7 +150,7 @@ union igb_tx_offload { (TX_MACIP_LEN_CMP_MASK | TX_TCP_LEN_CMP_MASK | TX_TSO_MSS_CMP_MASK) /** - * Strucutre to check if new context need be built + * Structure to check if new context need be built */ struct igb_advctx_info { uint64_t flags; /**< ol_flags related to context build. */ @@ -967,7 +967,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -1229,7 +1229,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... 
*/ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -1252,7 +1252,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * Maximum number of Ring Descriptors. * * Since RDLEN/TDLEN should be multiple of 128bytes, the number of ring - * desscriptors should meet the following condition: + * descriptors should meet the following condition: * (num_ring_desc * sizeof(struct e1000_rx/tx_desc)) % 128 == 0 */ @@ -1350,7 +1350,7 @@ igb_tx_done_cleanup(struct igb_tx_queue *txq, uint32_t free_cnt) sw_ring[tx_id].last_id = tx_id; } - /* Move to next segemnt. */ + /* Move to next segment. */ tx_id = sw_ring[tx_id].next_id; } while (tx_id != tx_next); @@ -1383,7 +1383,7 @@ igb_tx_done_cleanup(struct igb_tx_queue *txq, uint32_t free_cnt) /* Walk the list and find the next mbuf, if any. */ do { - /* Move to next segemnt. */ + /* Move to next segment. */ tx_id = sw_ring[tx_id].next_id; if (sw_ring[tx_id].mbuf) @@ -2146,7 +2146,7 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev) igb_rss_disable(dev); - /* RCTL: eanble VLAN filter */ + /* RCTL: enable VLAN filter */ rctl = E1000_READ_REG(hw, E1000_RCTL); rctl |= E1000_RCTL_VFE; E1000_WRITE_REG(hw, E1000_RCTL, rctl); diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 634c97ac..dce26cfa 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -1408,7 +1408,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) ++rxq->rx_stats.refill_partial; } - /* When we submitted free recources to device... */ + /* When we submitted free resources to device... */ if (likely(i > 0)) { /* ...let HW know that it can fill buffers with data. 
*/ ena_com_write_sq_doorbell(rxq->ena_com_io_sq); diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 865e1241..f99e4f39 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -42,7 +42,7 @@ /* While processing submitted and completed descriptors (rx and tx path * respectively) in a loop it is desired to: - * - perform batch submissions while populating sumbissmion queue + * - perform batch submissions while populating submission queue * - avoid blocking transmission of other packets during cleanup phase * Hence the utilization ratio of 1/8 of a queue size or max value if the size * of the ring is very big - like 8k Rx rings. diff --git a/drivers/net/enetfec/enet_regs.h b/drivers/net/enetfec/enet_regs.h index a300c6f8..c9400957 100644 --- a/drivers/net/enetfec/enet_regs.h +++ b/drivers/net/enetfec/enet_regs.h @@ -12,7 +12,7 @@ #define RX_BD_CR ((ushort)0x0004) /* CRC or Frame error */ #define RX_BD_SH ((ushort)0x0008) /* Reserved */ #define RX_BD_NO ((ushort)0x0010) /* Rcvd non-octet aligned frame */ -#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length voilation */ +#define RX_BD_LG ((ushort)0x0020) /* Rcvd frame length violation */ #define RX_BD_FIRST ((ushort)0x0400) /* Reserved */ #define RX_BD_LAST ((ushort)0x0800) /* last buffer in the frame */ #define RX_BD_INT 0x00800000 diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c index 33147169..bc1dcf22 100644 --- a/drivers/net/enic/enic_flow.c +++ b/drivers/net/enic/enic_flow.c @@ -21,7 +21,7 @@ * so we can easily add new arguments. * item: Item specification. * filter: Partially filled in NIC filter structure. - * inner_ofst: If zero, this is an outer header. If non-zero, this is + * inner_offset: If zero, this is an outer header. If non-zero, this is * the offset into L5 where the header begins. * l2_proto_off: offset to EtherType eth or vlan header. * l3_proto_off: offset to next protocol field in IPv4 or 6 header. 
@@ -29,7 +29,7 @@ struct copy_item_args { const struct rte_flow_item *item; struct filter_v2 *filter; - uint8_t *inner_ofst; + uint8_t *inner_offset; uint8_t l2_proto_off; uint8_t l3_proto_off; struct enic *enic; @@ -405,7 +405,7 @@ enic_copy_item_ipv4_v1(struct copy_item_args *arg) return ENOTSUP; } - /* check that the suppied mask exactly matches capabilty */ + /* check that the supplied mask exactly matches capability */ if (!mask_exact_match((const uint8_t *)&supported_mask, (const uint8_t *)item->mask, sizeof(*mask))) { ENICPMD_LOG(ERR, "IPv4 exact match mask"); @@ -443,7 +443,7 @@ enic_copy_item_udp_v1(struct copy_item_args *arg) return ENOTSUP; } - /* check that the suppied mask exactly matches capabilty */ + /* check that the supplied mask exactly matches capability */ if (!mask_exact_match((const uint8_t *)&supported_mask, (const uint8_t *)item->mask, sizeof(*mask))) { ENICPMD_LOG(ERR, "UDP exact match mask"); @@ -482,7 +482,7 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg) return ENOTSUP; } - /* check that the suppied mask exactly matches capabilty */ + /* check that the supplied mask exactly matches capability */ if (!mask_exact_match((const uint8_t *)&supported_mask, (const uint8_t *)item->mask, sizeof(*mask))) { ENICPMD_LOG(ERR, "TCP exact match mask"); @@ -504,7 +504,7 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg) * we set EtherType and IP proto as necessary. */ static int -copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst, +copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_offset, const void *val, const void *mask, uint8_t val_size, uint8_t proto_off, uint16_t proto_val, uint8_t proto_size) { @@ -512,7 +512,7 @@ copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst, uint8_t start_off; /* No space left in the L5 pattern buffer. 
*/ - start_off = *inner_ofst; + start_off = *inner_offset; if ((start_off + val_size) > FILTER_GENERIC_1_KEY_LEN) return ENOTSUP; l5_mask = gp->layer[FILTER_GENERIC_1_L5].mask; @@ -537,7 +537,7 @@ copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst, } } /* All inner headers land in L5 buffer even if their spec is null. */ - *inner_ofst += val_size; + *inner_offset += val_size; return 0; } @@ -545,7 +545,7 @@ static int enic_copy_item_inner_eth_v2(struct copy_item_args *arg) { const void *mask = arg->item->mask; - uint8_t *off = arg->inner_ofst; + uint8_t *off = arg->inner_offset; ENICPMD_FUNC_TRACE(); if (!mask) @@ -560,7 +560,7 @@ static int enic_copy_item_inner_vlan_v2(struct copy_item_args *arg) { const void *mask = arg->item->mask; - uint8_t *off = arg->inner_ofst; + uint8_t *off = arg->inner_offset; uint8_t eth_type_off; ENICPMD_FUNC_TRACE(); @@ -578,7 +578,7 @@ static int enic_copy_item_inner_ipv4_v2(struct copy_item_args *arg) { const void *mask = arg->item->mask; - uint8_t *off = arg->inner_ofst; + uint8_t *off = arg->inner_offset; ENICPMD_FUNC_TRACE(); if (!mask) @@ -594,7 +594,7 @@ static int enic_copy_item_inner_ipv6_v2(struct copy_item_args *arg) { const void *mask = arg->item->mask; - uint8_t *off = arg->inner_ofst; + uint8_t *off = arg->inner_offset; ENICPMD_FUNC_TRACE(); if (!mask) @@ -610,7 +610,7 @@ static int enic_copy_item_inner_udp_v2(struct copy_item_args *arg) { const void *mask = arg->item->mask; - uint8_t *off = arg->inner_ofst; + uint8_t *off = arg->inner_offset; ENICPMD_FUNC_TRACE(); if (!mask) @@ -625,7 +625,7 @@ static int enic_copy_item_inner_tcp_v2(struct copy_item_args *arg) { const void *mask = arg->item->mask; - uint8_t *off = arg->inner_ofst; + uint8_t *off = arg->inner_offset; ENICPMD_FUNC_TRACE(); if (!mask) @@ -899,7 +899,7 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; + 
uint8_t *inner_offset = arg->inner_offset; const struct rte_flow_item_vxlan *spec = item->spec; const struct rte_flow_item_vxlan *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -929,7 +929,7 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg) memcpy(gp->layer[FILTER_GENERIC_1_L5].val, spec, sizeof(struct rte_vxlan_hdr)); - *inner_ofst = sizeof(struct rte_vxlan_hdr); + *inner_offset = sizeof(struct rte_vxlan_hdr); return 0; } @@ -943,7 +943,7 @@ enic_copy_item_raw_v2(struct copy_item_args *arg) { const struct rte_flow_item *item = arg->item; struct filter_v2 *enic_filter = arg->filter; - uint8_t *inner_ofst = arg->inner_ofst; + uint8_t *inner_offset = arg->inner_offset; const struct rte_flow_item_raw *spec = item->spec; const struct rte_flow_item_raw *mask = item->mask; struct filter_generic_1 *gp = &enic_filter->u.generic_1; @@ -951,7 +951,7 @@ enic_copy_item_raw_v2(struct copy_item_args *arg) ENICPMD_FUNC_TRACE(); /* Cannot be used for inner packet */ - if (*inner_ofst) + if (*inner_offset) return EINVAL; /* Need both spec and mask */ if (!spec || !mask) @@ -1020,13 +1020,13 @@ item_stacking_valid(enum rte_flow_item_type prev_item, */ static void fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp, - uint8_t inner_ofst) + uint8_t inner_offset) { uint8_t layer[FILTER_GENERIC_1_KEY_LEN]; uint8_t inner; uint8_t vxlan; - if (!(inner_ofst > 0 && enic->vxlan)) + if (!(inner_offset > 0 && enic->vxlan)) return; ENICPMD_FUNC_TRACE(); vxlan = sizeof(struct rte_vxlan_hdr); @@ -1034,7 +1034,7 @@ fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp, gp->layer[FILTER_GENERIC_1_L5].mask, vxlan); memcpy(gp->layer[FILTER_GENERIC_1_L4].val + sizeof(struct rte_udp_hdr), gp->layer[FILTER_GENERIC_1_L5].val, vxlan); - inner = inner_ofst - vxlan; + inner = inner_offset - vxlan; memset(layer, 0, sizeof(layer)); memcpy(layer, gp->layer[FILTER_GENERIC_1_L5].mask + vxlan, inner); memcpy(gp->layer[FILTER_GENERIC_1_L5].mask, layer, 
sizeof(layer)); @@ -1044,14 +1044,14 @@ fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp, } /** - * Build the intenal enic filter structure from the provided pattern. The + * Build the internal enic filter structure from the provided pattern. The * pattern is validated as the items are copied. * * @param pattern[in] * @param items_info[in] * Info about this NICs item support, like valid previous items. * @param enic_filter[out] - * NIC specfilc filters derived from the pattern. + * NIC specific filters derived from the pattern. * @param error[out] */ static int @@ -1063,7 +1063,7 @@ enic_copy_filter(const struct rte_flow_item pattern[], { int ret; const struct rte_flow_item *item = pattern; - uint8_t inner_ofst = 0; /* If encapsulated, ofst into L5 */ + uint8_t inner_offset = 0; /* If encapsulated, offset into L5 */ enum rte_flow_item_type prev_item; const struct enic_items *item_info; struct copy_item_args args; @@ -1075,7 +1075,7 @@ enic_copy_filter(const struct rte_flow_item pattern[], prev_item = 0; args.filter = enic_filter; - args.inner_ofst = &inner_ofst; + args.inner_offset = &inner_offset; args.enic = enic; for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { /* Get info about how to validate and copy the item. If NULL @@ -1087,7 +1087,7 @@ enic_copy_filter(const struct rte_flow_item pattern[], item_info = &cap->item_info[item->type]; if (item->type > cap->max_item_type || item_info->copy_item == NULL || - (inner_ofst > 0 && item_info->inner_copy_item == NULL)) { + (inner_offset > 0 && item_info->inner_copy_item == NULL)) { rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "Unsupported item."); @@ -1099,7 +1099,7 @@ enic_copy_filter(const struct rte_flow_item pattern[], goto stacking_error; args.item = item; - copy_fn = inner_ofst > 0 ? item_info->inner_copy_item : + copy_fn = inner_offset > 0 ? 
item_info->inner_copy_item : item_info->copy_item; ret = copy_fn(&args); if (ret) @@ -1107,7 +1107,7 @@ enic_copy_filter(const struct rte_flow_item pattern[], prev_item = item->type; is_first_item = 0; } - fixup_l5_layer(enic, &enic_filter->u.generic_1, inner_ofst); + fixup_l5_layer(enic, &enic_filter->u.generic_1, inner_offset); return 0; @@ -1123,12 +1123,12 @@ enic_copy_filter(const struct rte_flow_item pattern[], } /** - * Build the intenal version 1 NIC action structure from the provided pattern. + * Build the internal version 1 NIC action structure from the provided pattern. * The pattern is validated as the items are copied. * * @param actions[in] * @param enic_action[out] - * NIC specfilc actions derived from the actions. + * NIC specific actions derived from the actions. * @param error[out] */ static int @@ -1170,12 +1170,12 @@ enic_copy_action_v1(__rte_unused struct enic *enic, } /** - * Build the intenal version 2 NIC action structure from the provided pattern. + * Build the internal version 2 NIC action structure from the provided pattern. * The pattern is validated as the items are copied. * * @param actions[in] * @param enic_action[out] - * NIC specfilc actions derived from the actions. + * NIC specific actions derived from the actions. * @param error[out] */ static int diff --git a/drivers/net/enic/enic_fm_flow.c b/drivers/net/enic/enic_fm_flow.c index ae43f36b..bef842d4 100644 --- a/drivers/net/enic/enic_fm_flow.c +++ b/drivers/net/enic/enic_fm_flow.c @@ -721,7 +721,7 @@ enic_fm_copy_item_gtp(struct copy_item_args *arg) } /* NIC does not support GTP tunnels. No Items are allowed after this. - * This prevents the specificaiton of further items. + * This prevents the specification of further items. */ arg->header_level = 0; @@ -733,7 +733,7 @@ enic_fm_copy_item_gtp(struct copy_item_args *arg) /* * Use the raw L4 buffer to match GTP as fm_header_set does not have - * GTP header. UDP dst port must be specifiec. Using the raw buffer + * GTP header. 
UDP dst port must be specified. Using the raw buffer * does not affect such UDP item, since we skip UDP in the raw buffer. */ fm_data->fk_header_select |= FKH_L4RAW; @@ -1846,7 +1846,7 @@ enic_fm_dump_tcam_actions(const struct fm_action *fm_action) /* Remove trailing comma */ if (buf[0]) *(bp - 1) = '\0'; - ENICPMD_LOG(DEBUG, " Acions: %s", buf); + ENICPMD_LOG(DEBUG, " Actions: %s", buf); } static int @@ -2364,7 +2364,7 @@ enic_action_handle_get(struct enic_flowman *fm, struct fm_action *action_in, if (ret < 0 && ret != -ENOENT) return rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "enic: rte_hash_lookup(aciton)"); + NULL, "enic: rte_hash_lookup(action)"); if (ret == -ENOENT) { /* Allocate a new action on the NIC. */ @@ -2435,7 +2435,7 @@ __enic_fm_flow_add_entry(struct enic_flowman *fm, ENICPMD_FUNC_TRACE(); - /* Get or create an aciton handle. */ + /* Get or create an action handle. */ ret = enic_action_handle_get(fm, action_in, error, &ah); if (ret) return ret; diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c index 7f84b5f9..97d97ea7 100644 --- a/drivers/net/enic/enic_main.c +++ b/drivers/net/enic/enic_main.c @@ -1137,7 +1137,7 @@ int enic_disable(struct enic *enic) } /* If we were using interrupts, set the interrupt vector to -1 - * to disable interrupts. We are not disabling link notifcations, + * to disable interrupts. We are not disabling link notifications, * though, as we want the polling of link status to continue working. */ if (enic->rte_dev->data->dev_conf.intr_conf.lsc) diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c index c44715bf..33e96b48 100644 --- a/drivers/net/enic/enic_rxtx.c +++ b/drivers/net/enic/enic_rxtx.c @@ -653,7 +653,7 @@ static void enqueue_simple_pkts(struct rte_mbuf **pkts, * The app should not send oversized * packets. tx_pkt_prepare includes a check as * well. But some apps ignore the device max size and - * tx_pkt_prepare. 
Oversized packets cause WQ errrors + * tx_pkt_prepare. Oversized packets cause WQ errors * and the NIC ends up disabling the whole WQ. So * truncate packets.. */ diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h index 7cfa29fa..17a7056c 100644 --- a/drivers/net/fm10k/fm10k.h +++ b/drivers/net/fm10k/fm10k.h @@ -44,7 +44,7 @@ #define FM10K_TX_MAX_MTU_SEG UINT8_MAX /* - * byte aligment for HW RX data buffer + * byte alignment for HW RX data buffer * Datasheet requires RX buffer addresses shall either be 512-byte aligned or * be 8-byte aligned but without crossing host memory pages (4KB alignment * boundaries). Satisfy first option. diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c index 43e1d134..8bbd8b44 100644 --- a/drivers/net/fm10k/fm10k_ethdev.c +++ b/drivers/net/fm10k/fm10k_ethdev.c @@ -290,7 +290,7 @@ rx_queue_free(struct fm10k_rx_queue *q) } /* - * disable RX queue, wait unitl HW finished necessary flush operation + * disable RX queue, wait until HW finished necessary flush operation */ static inline int rx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) @@ -379,7 +379,7 @@ tx_queue_free(struct fm10k_tx_queue *q) } /* - * disable TX queue, wait unitl HW finished necessary flush operation + * disable TX queue, wait until HW finished necessary flush operation */ static inline int tx_queue_disable(struct fm10k_hw *hw, uint16_t qnum) @@ -453,7 +453,7 @@ fm10k_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; - /* multipe queue mode checking */ + /* multiple queue mode checking */ ret = fm10k_check_mq_mode(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "fm10k_check_mq_mode fails with %d.", @@ -2553,7 +2553,7 @@ fm10k_dev_handle_fault(struct fm10k_hw *hw, uint32_t eicr) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. 
+ * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -2676,7 +2676,7 @@ fm10k_dev_interrupt_handler_pf(void *param) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -3034,7 +3034,7 @@ fm10k_params_init(struct rte_eth_dev *dev) struct fm10k_dev_info *info = FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private); - /* Inialize bus info. Normally we would call fm10k_get_bus_info(), but + /* Initialize bus info. Normally we would call fm10k_get_bus_info(), but * there is no way to get link status without reading BAR4. Until this * works, assume we have maximum bandwidth. * @todo - fix bus info diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c index 1269250e..10ce5a75 100644 --- a/drivers/net/fm10k/fm10k_rxtx_vec.c +++ b/drivers/net/fm10k/fm10k_rxtx_vec.c @@ -212,7 +212,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev) struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf; #ifndef RTE_FM10K_RX_OLFLAGS_ENABLE - /* whithout rx ol_flags, no VP flag report */ + /* without rx ol_flags, no VP flag report */ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) return -1; #endif @@ -239,7 +239,7 @@ fm10k_rxq_vec_setup(struct fm10k_rx_queue *rxq) struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */ mb_def.nb_segs = 1; - /* data_off will be ajusted after new mbuf allocated for 512-byte + /* data_off will be adjusted after new mbuf allocated for 512-byte * alignment. 
*/ mb_def.data_off = RTE_PKTMBUF_HEADROOM; @@ -410,7 +410,7 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, if (!(rxdp->d.staterr & FM10K_RXD_STATUS_DD)) return 0; - /* Vecotr RX will process 4 packets at a time, strip the unaligned + /* Vector RX will process 4 packets at a time, strip the unaligned * tails in case it's not multiple of 4. */ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_FM10K_DESCS_PER_LOOP); @@ -481,7 +481,7 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, _mm_storeu_si128((__m128i *)&rx_pkts[pos], mbp1); #if defined(RTE_ARCH_X86_64) - /* B.1 load 2 64 bit mbuf poitns */ + /* B.1 load 2 64 bit mbuf pointers */ mbp2 = _mm_loadu_si128((__m128i *)&mbufp[pos+2]); #endif @@ -573,7 +573,7 @@ fm10k_recv_raw_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, fm10k_desc_to_pktype_v(descs0, &rx_pkts[pos]); - /* C.4 calc avaialbe number of desc */ + /* C.4 calc available number of desc */ var = __builtin_popcountll(_mm_cvtsi128_si64(staterr)); nb_pkts_recd += var; if (likely(var != RTE_FM10K_DESCS_PER_LOOP)) diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c index 1853511c..0121b0c2 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.c +++ b/drivers/net/hinic/hinic_pmd_ethdev.c @@ -255,7 +255,7 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask); * Interrupt handler triggered by NIC for handling * specific event. * - * @param: The address of parameter (struct rte_eth_dev *) regsitered before. + * @param: The address of parameter (struct rte_eth_dev *) registered before. 
*/ static void hinic_dev_interrupt_handler(void *param) { @@ -336,7 +336,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev) return err; } - /* init vlan offoad */ + /* init vlan offload */ err = hinic_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK); if (err) { diff --git a/drivers/net/hinic/hinic_pmd_ethdev.h b/drivers/net/hinic/hinic_pmd_ethdev.h index 5eca8b10..8e6251f6 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.h +++ b/drivers/net/hinic/hinic_pmd_ethdev.h @@ -170,7 +170,7 @@ struct tag_tcam_key_mem { /* * tunnel packet, mask must be 0xff, spec value is 1; * normal packet, mask must be 0, spec value is 0; - * if tunnal packet, ucode use + * if tunnel packet, ucode use * sip/dip/protocol/src_port/dst_dport from inner packet */ u32 tunnel_flag:8; diff --git a/drivers/net/hinic/hinic_pmd_flow.c b/drivers/net/hinic/hinic_pmd_flow.c index d71a42af..9620e138 100644 --- a/drivers/net/hinic/hinic_pmd_flow.c +++ b/drivers/net/hinic/hinic_pmd_flow.c @@ -232,7 +232,7 @@ static int hinic_check_ethertype_first_item(const struct rte_flow_item *item, } static int -hinic_parse_ethertype_aciton(const struct rte_flow_action *actions, +hinic_parse_ethertype_action(const struct rte_flow_action *actions, const struct rte_flow_action *act, const struct rte_flow_action_queue *act_q, struct rte_eth_ethertype_filter *filter, @@ -344,7 +344,7 @@ static int cons_parse_ethertype_filter(const struct rte_flow_attr *attr, return -rte_errno; } - if (hinic_parse_ethertype_aciton(actions, act, act_q, filter, error)) + if (hinic_parse_ethertype_action(actions, act, act_q, filter, error)) return -rte_errno; if (hinic_check_ethertype_attr_ele(attr, error)) @@ -734,7 +734,7 @@ static int hinic_check_ntuple_item_ele(const struct rte_flow_item *item, * END * other members in mask and spec should set to 0x00. * item->last should be NULL. - * Please aware there's an asumption for all the parsers. + * Please be aware there's an assumption for all the parsers. 
* rte_flow_item is using big endian, rte_flow_attr and * rte_flow_action are using CPU order. * Because the pattern is used to describe the packets, @@ -1630,7 +1630,7 @@ static int hinic_parse_fdir_filter(struct rte_eth_dev *dev, /** * Check if the flow rule is supported by nic. - * It only checkes the format. Don't guarantee the rule can be programmed into + * It only checks the format. Don't guarantee the rule can be programmed into * the HW. Because there can be no enough room for the rule. */ static int hinic_flow_validate(struct rte_eth_dev *dev, diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c index 7adb6e36..db63c855 100644 --- a/drivers/net/hinic/hinic_pmd_rx.c +++ b/drivers/net/hinic/hinic_pmd_rx.c @@ -142,33 +142,33 @@ #define HINIC_GET_SUPER_CQE_EN(pkt_info) \ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN) -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21 -#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U +#define RQ_CQE_OFFLOAD_TYPE_VLAN_EN_SHIFT 21 +#define RQ_CQE_OFFLOAD_TYPE_VLAN_EN_MASK 0x1U -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0 -#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU +#define RQ_CQE_OFFLOAD_TYPE_PKT_TYPE_SHIFT 0 +#define RQ_CQE_OFFLOAD_TYPE_PKT_TYPE_MASK 0xFFFU -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19 -#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U +#define RQ_CQE_OFFLOAD_TYPE_PKT_UMBCAST_SHIFT 19 +#define RQ_CQE_OFFLOAD_TYPE_PKT_UMBCAST_MASK 0x3U -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24 -#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU +#define RQ_CQE_OFFLOAD_TYPE_RSS_TYPE_SHIFT 24 +#define RQ_CQE_OFFLOAD_TYPE_RSS_TYPE_MASK 0xFFU -#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) (((val) >> \ - RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \ - RQ_CQE_OFFOLAD_TYPE_##member##_MASK) +#define RQ_CQE_OFFLOAD_TYPE_GET(val, member) (((val) >> \ + RQ_CQE_OFFLOAD_TYPE_##member##_SHIFT) & \ + RQ_CQE_OFFLOAD_TYPE_##member##_MASK) #define HINIC_GET_RX_VLAN_OFFLOAD_EN(offload_type) \ - 
RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN) + RQ_CQE_OFFLOAD_TYPE_GET(offload_type, VLAN_EN) #define HINIC_GET_RSS_TYPES(offload_type) \ - RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE) + RQ_CQE_OFFLOAD_TYPE_GET(offload_type, RSS_TYPE) #define HINIC_GET_RX_PKT_TYPE(offload_type) \ - RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE) + RQ_CQE_OFFLOAD_TYPE_GET(offload_type, PKT_TYPE) #define HINIC_GET_RX_PKT_UMBCAST(offload_type) \ - RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST) + RQ_CQE_OFFLOAD_TYPE_GET(offload_type, PKT_UMBCAST) #define RQ_CQE_STATUS_CSUM_BYPASS_VAL 0x80U #define RQ_CQE_STATUS_CSUM_ERR_IP_MASK 0x39U diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c index 2688817f..f09b1a6e 100644 --- a/drivers/net/hinic/hinic_pmd_tx.c +++ b/drivers/net/hinic/hinic_pmd_tx.c @@ -1144,7 +1144,7 @@ u16 hinic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts) mbuf_pkt = *tx_pkts++; queue_info = 0; - /* 1. parse sge and tx offlod info from mbuf */ + /* 1. parse sge and tx offload info from mbuf */ if (unlikely(!hinic_get_sge_txoff_info(mbuf_pkt, &sqe_info, &off_info))) { txq->txq_stats.off_errs++; diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c index 2ce59d8d..5b42d38a 100644 --- a/drivers/net/hns3/hns3_cmd.c +++ b/drivers/net/hns3/hns3_cmd.c @@ -466,7 +466,7 @@ hns3_mask_capability(struct hns3_hw *hw, for (i = 0; i < MAX_CAPS_BIT; i++) { if (!(caps_masked & BIT_ULL(i))) continue; - hns3_info(hw, "mask capabiliy: id-%u, name-%s.", + hns3_info(hw, "mask capability: id-%u, name-%s.", i, hns3_get_caps_name(i)); } } @@ -736,7 +736,7 @@ hns3_cmd_init(struct hns3_hw *hw) return 0; /* - * Requiring firmware to enable some features, firber port can still + * Requiring firmware to enable some features, fiber port can still * work without it, but copper port can't work because the firmware * fails to take over the PHY. 
*/ diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c index eac2aa10..0bb552ea 100644 --- a/drivers/net/hns3/hns3_common.c +++ b/drivers/net/hns3/hns3_common.c @@ -603,7 +603,7 @@ hns3_init_ring_with_vector(struct hns3_hw *hw) hw->intr_tqps_num = RTE_MIN(vec, hw->tqps_num); for (i = 0; i < hw->intr_tqps_num; i++) { /* - * Set gap limiter/rate limiter/quanity limiter algorithm + * Set gap limiter/rate limiter/quantity limiter algorithm * configuration for interrupt coalesce of queue's interrupt. */ hns3_set_queue_intr_gl(hw, i, HNS3_RING_GL_RX, diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c index 3d0159d7..c8a1fb2c 100644 --- a/drivers/net/hns3/hns3_dcb.c +++ b/drivers/net/hns3/hns3_dcb.c @@ -25,7 +25,7 @@ * IR(Mbps) = ------------------------- * CLOCK(1000Mbps) * Tick * (2 ^ IR_s) * - * @return: 0: calculate sucessful, negative: fail + * @return: 0: calculate successful, negative: fail */ static int hns3_shaper_para_calc(struct hns3_hw *hw, uint32_t ir, uint8_t shaper_level, @@ -36,8 +36,8 @@ hns3_shaper_para_calc(struct hns3_hw *hw, uint32_t ir, uint8_t shaper_level, #define DIVISOR_IR_B_126 (126 * DIVISOR_CLK) const uint16_t tick_array[HNS3_SHAPER_LVL_CNT] = { - 6 * 256, /* Prioriy level */ - 6 * 32, /* Prioriy group level */ + 6 * 256, /* Priority level */ + 6 * 32, /* Priority group level */ 6 * 8, /* Port level */ 6 * 256 /* Qset level */ }; @@ -312,30 +312,30 @@ hns3_dcb_pg_schd_mode_cfg(struct hns3_hw *hw, uint8_t pg_id) } static uint32_t -hns3_dcb_get_shapping_para(uint8_t ir_b, uint8_t ir_u, uint8_t ir_s, +hns3_dcb_get_shaping_para(uint8_t ir_b, uint8_t ir_u, uint8_t ir_s, uint8_t bs_b, uint8_t bs_s) { - uint32_t shapping_para = 0; + uint32_t shaping_para = 0; - /* If ir_b is zero it means IR is 0Mbps, return zero of shapping_para */ + /* If ir_b is zero it means IR is 0Mbps, return zero of shaping_para */ if (ir_b == 0) - return shapping_para; + return shaping_para; - hns3_dcb_set_field(shapping_para, 
IR_B, ir_b); - hns3_dcb_set_field(shapping_para, IR_U, ir_u); - hns3_dcb_set_field(shapping_para, IR_S, ir_s); - hns3_dcb_set_field(shapping_para, BS_B, bs_b); - hns3_dcb_set_field(shapping_para, BS_S, bs_s); + hns3_dcb_set_field(shaping_para, IR_B, ir_b); + hns3_dcb_set_field(shaping_para, IR_U, ir_u); + hns3_dcb_set_field(shaping_para, IR_S, ir_s); + hns3_dcb_set_field(shaping_para, BS_B, bs_b); + hns3_dcb_set_field(shaping_para, BS_S, bs_s); - return shapping_para; + return shaping_para; } static int hns3_dcb_port_shaper_cfg(struct hns3_hw *hw, uint32_t speed) { - struct hns3_port_shapping_cmd *shap_cfg_cmd; + struct hns3_port_shaping_cmd *shap_cfg_cmd; struct hns3_shaper_parameter shaper_parameter; - uint32_t shapping_para; + uint32_t shaping_para; uint32_t ir_u, ir_b, ir_s; struct hns3_cmd_desc desc; int ret; @@ -348,21 +348,21 @@ hns3_dcb_port_shaper_cfg(struct hns3_hw *hw, uint32_t speed) } hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_TM_PORT_SHAPPING, false); - shap_cfg_cmd = (struct hns3_port_shapping_cmd *)desc.data; + shap_cfg_cmd = (struct hns3_port_shaping_cmd *)desc.data; ir_b = shaper_parameter.ir_b; ir_u = shaper_parameter.ir_u; ir_s = shaper_parameter.ir_s; - shapping_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s, + shaping_para = hns3_dcb_get_shaping_para(ir_b, ir_u, ir_s, HNS3_SHAPER_BS_U_DEF, HNS3_SHAPER_BS_S_DEF); - shap_cfg_cmd->port_shapping_para = rte_cpu_to_le_32(shapping_para); + shap_cfg_cmd->port_shaping_para = rte_cpu_to_le_32(shaping_para); /* * Configure the port_rate and set bit HNS3_TM_RATE_VLD_B of flag - * field in hns3_port_shapping_cmd to require firmware to recalculate - * shapping parameters. And whether the parameters are recalculated + * field in hns3_port_shaping_cmd to require firmware to recalculate + * shaping parameters. And whether the parameters are recalculated * depends on the firmware version. But driver still needs to * calculate it and configure to firmware for better compatibility. 
*/ @@ -385,10 +385,10 @@ hns3_port_shaper_update(struct hns3_hw *hw, uint32_t speed) } static int -hns3_dcb_pg_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket, - uint8_t pg_id, uint32_t shapping_para, uint32_t rate) +hns3_dcb_pg_shaping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket, + uint8_t pg_id, uint32_t shaping_para, uint32_t rate) { - struct hns3_pg_shapping_cmd *shap_cfg_cmd; + struct hns3_pg_shaping_cmd *shap_cfg_cmd; enum hns3_opcode_type opcode; struct hns3_cmd_desc desc; @@ -396,15 +396,15 @@ hns3_dcb_pg_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket, HNS3_OPC_TM_PG_C_SHAPPING; hns3_cmd_setup_basic_desc(&desc, opcode, false); - shap_cfg_cmd = (struct hns3_pg_shapping_cmd *)desc.data; + shap_cfg_cmd = (struct hns3_pg_shaping_cmd *)desc.data; shap_cfg_cmd->pg_id = pg_id; - shap_cfg_cmd->pg_shapping_para = rte_cpu_to_le_32(shapping_para); + shap_cfg_cmd->pg_shaping_para = rte_cpu_to_le_32(shaping_para); /* * Configure the pg_rate and set bit HNS3_TM_RATE_VLD_B of flag field in - * hns3_pg_shapping_cmd to require firmware to recalculate shapping + * hns3_pg_shaping_cmd to require firmware to recalculate shaping * parameters. And whether parameters are recalculated depends on * the firmware version. But driver still needs to calculate it and * configure to firmware for better compatibility. 
@@ -432,11 +432,11 @@ hns3_pg_shaper_rate_cfg(struct hns3_hw *hw, uint8_t pg_id, uint32_t rate) return ret; } - shaper_para = hns3_dcb_get_shapping_para(0, 0, 0, + shaper_para = hns3_dcb_get_shaping_para(0, 0, 0, HNS3_SHAPER_BS_U_DEF, HNS3_SHAPER_BS_S_DEF); - ret = hns3_dcb_pg_shapping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, pg_id, + ret = hns3_dcb_pg_shaping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, pg_id, shaper_para, rate); if (ret) { hns3_err(hw, "config PG CIR shaper parameter fail, ret = %d.", @@ -447,11 +447,11 @@ hns3_pg_shaper_rate_cfg(struct hns3_hw *hw, uint8_t pg_id, uint32_t rate) ir_b = shaper_parameter.ir_b; ir_u = shaper_parameter.ir_u; ir_s = shaper_parameter.ir_s; - shaper_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s, + shaper_para = hns3_dcb_get_shaping_para(ir_b, ir_u, ir_s, HNS3_SHAPER_BS_U_DEF, HNS3_SHAPER_BS_S_DEF); - ret = hns3_dcb_pg_shapping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, pg_id, + ret = hns3_dcb_pg_shaping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, pg_id, shaper_para, rate); if (ret) { hns3_err(hw, "config PG PIR shaper parameter fail, ret = %d.", @@ -520,10 +520,10 @@ hns3_dcb_pri_schd_mode_cfg(struct hns3_hw *hw, uint8_t pri_id) } static int -hns3_dcb_pri_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket, - uint8_t pri_id, uint32_t shapping_para, uint32_t rate) +hns3_dcb_pri_shaping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket, + uint8_t pri_id, uint32_t shaping_para, uint32_t rate) { - struct hns3_pri_shapping_cmd *shap_cfg_cmd; + struct hns3_pri_shaping_cmd *shap_cfg_cmd; enum hns3_opcode_type opcode; struct hns3_cmd_desc desc; @@ -532,16 +532,16 @@ hns3_dcb_pri_shapping_cfg(struct hns3_hw *hw, enum hns3_shap_bucket bucket, hns3_cmd_setup_basic_desc(&desc, opcode, false); - shap_cfg_cmd = (struct hns3_pri_shapping_cmd *)desc.data; + shap_cfg_cmd = (struct hns3_pri_shaping_cmd *)desc.data; shap_cfg_cmd->pri_id = pri_id; - shap_cfg_cmd->pri_shapping_para = rte_cpu_to_le_32(shapping_para); + shap_cfg_cmd->pri_shaping_para = 
rte_cpu_to_le_32(shaping_para); /* * Configure the pri_rate and set bit HNS3_TM_RATE_VLD_B of flag - * field in hns3_pri_shapping_cmd to require firmware to recalculate - * shapping parameters. And whether the parameters are recalculated + * field in hns3_pri_shaping_cmd to require firmware to recalculate + * shaping parameters. And whether the parameters are recalculated * depends on the firmware version. But driver still needs to * calculate it and configure to firmware for better compatibility. */ @@ -567,11 +567,11 @@ hns3_pri_shaper_rate_cfg(struct hns3_hw *hw, uint8_t tc_no, uint32_t rate) return ret; } - shaper_para = hns3_dcb_get_shapping_para(0, 0, 0, + shaper_para = hns3_dcb_get_shaping_para(0, 0, 0, HNS3_SHAPER_BS_U_DEF, HNS3_SHAPER_BS_S_DEF); - ret = hns3_dcb_pri_shapping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, tc_no, + ret = hns3_dcb_pri_shaping_cfg(hw, HNS3_DCB_SHAP_C_BUCKET, tc_no, shaper_para, rate); if (ret) { hns3_err(hw, @@ -583,11 +583,11 @@ hns3_pri_shaper_rate_cfg(struct hns3_hw *hw, uint8_t tc_no, uint32_t rate) ir_b = shaper_parameter.ir_b; ir_u = shaper_parameter.ir_u; ir_s = shaper_parameter.ir_s; - shaper_para = hns3_dcb_get_shapping_para(ir_b, ir_u, ir_s, + shaper_para = hns3_dcb_get_shaping_para(ir_b, ir_u, ir_s, HNS3_SHAPER_BS_U_DEF, HNS3_SHAPER_BS_S_DEF); - ret = hns3_dcb_pri_shapping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, tc_no, + ret = hns3_dcb_pri_shaping_cfg(hw, HNS3_DCB_SHAP_P_BUCKET, tc_no, shaper_para, rate); if (ret) { hns3_err(hw, @@ -1532,7 +1532,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns) ret = hns3_dcb_schd_setup_hw(hw); if (ret) { - hns3_err(hw, "dcb schdule configure failed! ret = %d", ret); + hns3_err(hw, "dcb schedule configure failed! ret = %d", ret); return ret; } @@ -1737,7 +1737,7 @@ hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode) * hns3_dcb_pfc_enable - Enable priority flow control * @dev: pointer to ethernet device * - * Configures the pfc settings for one porority. 
+ * Configures the pfc settings for one priority. */ int hns3_dcb_pfc_enable(struct rte_eth_dev *dev, struct rte_eth_pfc_conf *pfc_conf) diff --git a/drivers/net/hns3/hns3_dcb.h b/drivers/net/hns3/hns3_dcb.h index e06ec177..b3b990f0 100644 --- a/drivers/net/hns3/hns3_dcb.h +++ b/drivers/net/hns3/hns3_dcb.h @@ -86,41 +86,41 @@ struct hns3_nq_to_qs_link_cmd { #define HNS3_DCB_SHAP_BS_S_LSH 21 /* - * For more flexible selection of shapping algorithm in different network - * engine, the algorithm calculating shapping parameter is moved to firmware to - * execute. Bit HNS3_TM_RATE_VLD_B of flag field in hns3_pri_shapping_cmd, - * hns3_pg_shapping_cmd or hns3_port_shapping_cmd is set to 1 to require - * firmware to recalculate shapping parameters. However, whether the parameters + * For more flexible selection of shaping algorithm in different network + * engine, the algorithm calculating shaping parameter is moved to firmware to + * execute. Bit HNS3_TM_RATE_VLD_B of flag field in hns3_pri_shaping_cmd, + * hns3_pg_shaping_cmd or hns3_port_shaping_cmd is set to 1 to require + * firmware to recalculate shaping parameters. However, whether the parameters * are recalculated depends on the firmware version. If firmware doesn't support - * the calculation of shapping parameters, such as on network engine with + * the calculation of shaping parameters, such as on network engine with * revision id 0x21, the value driver calculated will be used to configure to * hardware. On the contrary, firmware ignores configuration of driver * and recalculates the parameter. 
*/ #define HNS3_TM_RATE_VLD_B 0 -struct hns3_pri_shapping_cmd { +struct hns3_pri_shaping_cmd { uint8_t pri_id; uint8_t rsvd[3]; - uint32_t pri_shapping_para; + uint32_t pri_shaping_para; uint8_t flag; uint8_t rsvd1[3]; uint32_t pri_rate; /* Unit Mbps */ uint8_t rsvd2[8]; }; -struct hns3_pg_shapping_cmd { +struct hns3_pg_shaping_cmd { uint8_t pg_id; uint8_t rsvd[3]; - uint32_t pg_shapping_para; + uint32_t pg_shaping_para; uint8_t flag; uint8_t rsvd1[3]; uint32_t pg_rate; /* Unit Mbps */ uint8_t rsvd2[8]; }; -struct hns3_port_shapping_cmd { - uint32_t port_shapping_para; +struct hns3_port_shaping_cmd { + uint32_t port_shaping_para; uint8_t flag; uint8_t rsvd[3]; uint32_t port_rate; /* Unit Mbps */ diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 0bd12907..fee9c2a0 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -386,7 +386,7 @@ hns3_rm_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id) static void hns3_add_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id, - bool writen_to_tbl) + bool written_to_tbl) { struct hns3_user_vlan_table *vlan_entry; struct hns3_hw *hw = &hns->hw; @@ -403,7 +403,7 @@ hns3_add_dev_vlan_table(struct hns3_adapter *hns, uint16_t vlan_id, return; } - vlan_entry->hd_tbl_status = writen_to_tbl; + vlan_entry->hd_tbl_status = written_to_tbl; vlan_entry->vlan_id = vlan_id; LIST_INSERT_HEAD(&pf->vlan_list, vlan_entry, next); @@ -438,7 +438,7 @@ static int hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { struct hns3_hw *hw = &hns->hw; - bool writen_to_tbl = false; + bool written_to_tbl = false; int ret = 0; /* @@ -458,12 +458,12 @@ hns3_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) */ if (hw->port_base_vlan_cfg.state == HNS3_PORT_BASE_VLAN_DISABLE) { ret = hns3_set_port_vlan_filter(hns, vlan_id, on); - writen_to_tbl = true; + written_to_tbl = true; } if (ret == 0) { if (on) - hns3_add_dev_vlan_table(hns, 
vlan_id, writen_to_tbl); + hns3_add_dev_vlan_table(hns, vlan_id, written_to_tbl); else hns3_rm_dev_vlan_table(hns, vlan_id); } @@ -574,7 +574,7 @@ hns3_set_vlan_rx_offload_cfg(struct hns3_adapter *hns, hns3_set_bit(req->vport_vlan_cfg, HNS3_SHOW_TAG2_EN_B, vcfg->vlan2_vlan_prionly ? 1 : 0); - /* firmwall will ignore this configuration for PCI_REVISION_ID_HIP08 */ + /* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */ hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG1_EN_B, vcfg->strip_tag1_discard_en ? 1 : 0); hns3_set_bit(req->vport_vlan_cfg, HNS3_DISCARD_TAG2_EN_B, @@ -784,7 +784,7 @@ hns3_set_vlan_tx_offload_cfg(struct hns3_adapter *hns, vcfg->insert_tag2_en ? 1 : 0); hns3_set_bit(req->vport_vlan_cfg, HNS3_CFG_NIC_ROCE_SEL_B, 0); - /* firmwall will ignore this configuration for PCI_REVISION_ID_HIP08 */ + /* firmware will ignore this configuration for PCI_REVISION_ID_HIP08 */ hns3_set_bit(req->vport_vlan_cfg, HNS3_TAG_SHIFT_MODE_EN_B, vcfg->tag_shift_mode_en ? 1 : 0); @@ -2177,7 +2177,7 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed) } static uint32_t -hns3_get_firber_port_speed_capa(uint32_t supported_speed) +hns3_get_fiber_port_speed_capa(uint32_t supported_speed) { uint32_t speed_capa = 0; @@ -2210,7 +2210,7 @@ hns3_get_speed_capa(struct hns3_hw *hw) hns3_get_copper_port_speed_capa(mac->supported_speed); else speed_capa = - hns3_get_firber_port_speed_capa(mac->supported_speed); + hns3_get_fiber_port_speed_capa(mac->supported_speed); if (mac->support_autoneg == 0) speed_capa |= RTE_ETH_LINK_SPEED_FIXED; @@ -3420,7 +3420,7 @@ hns3_only_alloc_priv_buff(struct hns3_hw *hw, * hns3_rx_buffer_calc: calculate the rx private buffer size for all TCs * @hw: pointer to struct hns3_hw * @buf_alloc: pointer to buffer calculation data - * @return: 0: calculate sucessful, negative: fail + * @return: 0: calculate successful, negative: fail */ static int hns3_rx_buffer_calc(struct hns3_hw *hw, struct hns3_pkt_buf_alloc *buf_alloc) @@ -4524,7 
+4524,7 @@ hns3_config_all_msix_error(struct hns3_hw *hw, bool enable) } static uint32_t -hns3_set_firber_default_support_speed(struct hns3_hw *hw) +hns3_set_fiber_default_support_speed(struct hns3_hw *hw) { struct hns3_mac *mac = &hw->mac; @@ -4550,14 +4550,14 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw) } /* - * Validity of supported_speed for firber and copper media type can be + * Validity of supported_speed for fiber and copper media type can be * guaranteed by the following policy: * Copper: * Although the initialization of the phy in the firmware may not be * completed, the firmware can guarantees that the supported_speed is * an valid value. * Firber: - * If the version of firmware supports the acitive query way of the + * If the version of firmware supports the active query way of the * HNS3_OPC_GET_SFP_INFO opcode, the supported_speed can be obtained * through it. If unsupported, use the SFP's speed as the value of the * supported_speed. @@ -4582,7 +4582,7 @@ hns3_get_port_supported_speed(struct rte_eth_dev *eth_dev) */ if (mac->supported_speed == 0) mac->supported_speed = - hns3_set_firber_default_support_speed(hw); + hns3_set_fiber_default_support_speed(hw); } return 0; @@ -5327,7 +5327,7 @@ hns3_get_autoneg_fc_mode(struct hns3_hw *hw) /* * Flow control auto-negotiation is not supported for fiber and - * backpalne media type. + * backplane media type. */ case HNS3_MEDIA_TYPE_FIBER: case HNS3_MEDIA_TYPE_BACKPLANE: @@ -6191,7 +6191,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa) } /* - * FEC mode order defined in hns3 hardware is inconsistend with + * FEC mode order defined in hns3 hardware is inconsistent with * that defined in the ethdev library. So the sequence needs * to be converted. 
*/ diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index aa45b312..153e6733 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -126,7 +126,7 @@ struct hns3_tc_info { uint8_t tc_sch_mode; /* 0: sp; 1: dwrr */ uint8_t pgid; uint32_t bw_limit; - uint8_t up_to_tc_map; /* user priority maping on the TC */ + uint8_t up_to_tc_map; /* user priority mapping on the TC */ }; struct hns3_dcb_info { @@ -571,12 +571,12 @@ struct hns3_hw { /* * vlan mode. * value range: - * HNS3_SW_SHIFT_AND_DISCARD_MODE/HNS3_HW_SHFIT_AND_DISCARD_MODE + * HNS3_SW_SHIFT_AND_DISCARD_MODE/HNS3_HW_SHIFT_AND_DISCARD_MODE * * - HNS3_SW_SHIFT_AND_DISCARD_MODE * For some versions of hardware network engine, because of the * hardware limitation, PMD needs to detect the PVID status - * to work with haredware to implement PVID-related functions. + * to work with hardware to implement PVID-related functions. * For example, driver need discard the stripped PVID tag to ensure * the PVID will not report to mbuf and shift the inserted VLAN tag * to avoid port based VLAN covering it. @@ -724,7 +724,7 @@ enum hns3_mp_req_type { HNS3_MP_REQ_MAX }; -/* Pameters for IPC. */ +/* Parameters for IPC. */ struct hns3_mp_param { enum hns3_mp_req_type type; int port_id; diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 805abd45..5015fe0d 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -242,7 +242,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, if (ret == -EPERM) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, old_addr); - hns3_warn(hw, "Has permanet mac addr(%s) for vf", + hns3_warn(hw, "Has permanent mac addr(%s) for vf", mac_str); } else { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, @@ -318,7 +318,7 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * 1. 
The promiscuous/allmulticast mode can be configured successfully * only based on the trusted VF device. If based on the non trusted * VF device, configuring promiscuous/allmulticast mode will fail. - * The hns3 VF device can be confiruged as trusted device by hns3 PF + * The hns3 VF device can be configured as trusted device by hns3 PF * kernel ethdev driver on the host by the following command: * "ip link set vf turst on" * 2. After the promiscuous mode is configured successfully, hns3 VF PMD @@ -330,7 +330,7 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * filter is still effective even in promiscuous mode. If upper * applications don't call rte_eth_dev_vlan_filter API function to * set vlan based on VF device, hns3 VF PMD will can't receive - * the packets with vlan tag in promiscuoue mode. + * the packets with vlan tag in promiscuous mode. */ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); req->msg[0] = HNS3_MBX_SET_PROMISC_MODE; diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c index d043f578..870bde4d 100644 --- a/drivers/net/hns3/hns3_fdir.c +++ b/drivers/net/hns3/hns3_fdir.c @@ -67,7 +67,7 @@ enum HNS3_FD_KEY_TYPE { enum HNS3_FD_META_DATA { PACKET_TYPE_ID, - IP_FRAGEMENT, + IP_FRAGMENT, ROCE_TYPE, NEXT_KEY, VLAN_NUMBER, @@ -84,7 +84,7 @@ struct key_info { static const struct key_info meta_data_key_info[] = { {PACKET_TYPE_ID, 6}, - {IP_FRAGEMENT, 1}, + {IP_FRAGMENT, 1}, {ROCE_TYPE, 1}, {NEXT_KEY, 5}, {VLAN_NUMBER, 2}, diff --git a/drivers/net/hns3/hns3_fdir.h b/drivers/net/hns3/hns3_fdir.h index f9efff3b..07b39339 100644 --- a/drivers/net/hns3/hns3_fdir.h +++ b/drivers/net/hns3/hns3_fdir.h @@ -139,7 +139,7 @@ struct hns3_fdir_rule { uint32_t flags; uint32_t fd_id; /* APP marked unique value for this rule. */ uint8_t action; - /* VF id, avaiblable when flags with HNS3_RULE_FLAG_VF_ID. */ + /* VF id, available when flags with HNS3_RULE_FLAG_VF_ID. */ uint8_t vf_id; /* * equal 0 when action is drop. 
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c index 9f2f9cb6..0dbc3f65 100644 --- a/drivers/net/hns3/hns3_flow.c +++ b/drivers/net/hns3/hns3_flow.c @@ -338,7 +338,7 @@ hns3_handle_action_queue_region(struct rte_eth_dev *dev, * * @param actions[in] * @param rule[out] - * NIC specfilc actions derived from the actions. + * NIC specific actions derived from the actions. * @param error[out] */ static int @@ -369,7 +369,7 @@ hns3_handle_actions(struct rte_eth_dev *dev, * Queue region is implemented by FDIR + RSS in hns3 hardware, * the FDIR's action is one queue region (start_queue_id and * queue_num), then RSS spread packets to the queue region by - * RSS algorigthm. + * RSS algorithm. */ case RTE_FLOW_ACTION_TYPE_RSS: ret = hns3_handle_action_queue_region(dev, actions, @@ -940,7 +940,7 @@ hns3_parse_nvgre(const struct rte_flow_item *item, struct hns3_fdir_rule *rule, if (nvgre_mask->protocol || nvgre_mask->c_k_s_rsvd0_ver) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK, item, - "Ver/protocal is not supported in NVGRE"); + "Ver/protocol is not supported in NVGRE"); /* TNI must be totally masked or not. */ if (memcmp(nvgre_mask->tni, full_mask, VNI_OR_TNI_LEN) && @@ -985,7 +985,7 @@ hns3_parse_geneve(const struct rte_flow_item *item, struct hns3_fdir_rule *rule, if (geneve_mask->ver_opt_len_o_c_rsvd0 || geneve_mask->protocol) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK, item, - "Ver/protocal is not supported in GENEVE"); + "Ver/protocol is not supported in GENEVE"); /* VNI must be totally masked or not. */ if (memcmp(geneve_mask->vni, full_mask, VNI_OR_TNI_LEN) && memcmp(geneve_mask->vni, zero_mask, VNI_OR_TNI_LEN)) @@ -1309,7 +1309,7 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw, } /* - * This function is used to parse rss action validatation. + * This function is used to parse rss action validation. 
 */
 static int
 hns3_parse_rss_filter(struct rte_eth_dev *dev,
@@ -1682,7 +1682,7 @@ hns3_flow_args_check(const struct rte_flow_attr *attr,
 /*
  * Check if the flow rule is supported by hns3.
- * It only checkes the format. Don't guarantee the rule can be programmed into
+ * It only checks the format. Don't guarantee the rule can be programmed into
  * the HW. Because there can be no enough room for the rule.
  */
 static int
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index b3563d46..02028dcd 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -78,14 +78,14 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
     mbx_time_limit = (uint32_t)hns->mbx_time_limit_ms * US_PER_MS;
     while (wait_time < mbx_time_limit) {
         if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) {
-            hns3_err(hw, "Don't wait for mbx respone because of "
+            hns3_err(hw, "Don't wait for mbx response because of "
                 "disable_cmd");
             return -EBUSY;
         }
         if (is_reset_pending(hns)) {
             hw->mbx_resp.req_msg_data = 0;
-            hns3_err(hw, "Don't wait for mbx respone because of "
+            hns3_err(hw, "Don't wait for mbx response because of "
                 "reset pending");
             return -EIO;
         }
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index d637bd2b..0172a2e2 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -22,7 +22,7 @@ enum HNS3_MBX_OPCODE {
     HNS3_MBX_GET_RETA,          /* (VF -> PF) get RETA */
     HNS3_MBX_GET_RSS_KEY,       /* (VF -> PF) get RSS key */
     HNS3_MBX_GET_MAC_ADDR,      /* (VF -> PF) get MAC addr */
-    HNS3_MBX_PF_VF_RESP,        /* (PF -> VF) generate respone to VF */
+    HNS3_MBX_PF_VF_RESP,        /* (PF -> VF) generate response to VF */
     HNS3_MBX_GET_BDNUM,         /* (VF -> PF) get BD num */
     HNS3_MBX_GET_BUFSIZE,       /* (VF -> PF) get buffer size */
     HNS3_MBX_GET_STREAMID,      /* (VF -> PF) get stream id */
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 6f153a1b..c4121207 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -41,7 +41,7 @@ struct hns3_rss_tuple_cfg {
 struct hns3_rss_conf {
     /* RSS parameters :algorithm, flow_types, key, queue */
     struct rte_flow_action_rss conf;
-    uint8_t hash_algo; /* hash function type definited by hardware */
+    uint8_t hash_algo; /* hash function type defined by hardware */
     uint8_t key[HNS3_RSS_KEY_SIZE];  /* Hash key */
     struct hns3_rss_tuple_cfg rss_tuple_sets;
     uint16_t rss_indirection_tbl[HNS3_RSS_IND_TBL_SIZE_MAX];
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index f365daad..d240e36e 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1903,7 +1903,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
      * For hns3 VF device, whether it needs to process PVID depends
      * on the configuration of PF kernel mode netdevice driver. And the
      * related PF configuration is delivered through the mailbox and finally
-     * reflectd in port_base_vlan_cfg.
+     * reflected in port_base_vlan_cfg.
      */
     if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
         rxq->pvid_sw_discard_en = hw->port_base_vlan_cfg.state ==
@@ -3043,7 +3043,7 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
      * For hns3 VF device, whether it needs to process PVID depends
      * on the configuration of PF kernel mode netdev driver. And the
      * related PF configuration is delivered through the mailbox and finally
-     * reflectd in port_base_vlan_cfg.
+     * reflected in port_base_vlan_cfg.
      */
     if (hns->is_vf || hw->vlan_mode == HNS3_SW_SHIFT_AND_DISCARD_MODE)
         txq->pvid_sw_shift_en = hw->port_base_vlan_cfg.state ==
@@ -3208,7 +3208,7 @@ hns3_fill_first_desc(struct hns3_tx_queue *txq, struct hns3_desc *desc,
      * in Tx direction based on hns3 network engine. So when the number of
      * VLANs in the packets represented by rxm plus the number of VLAN
      * offload by hardware such as PVID etc, exceeds two, the packets will
-     * be discarded or the original VLAN of the packets will be overwitted
+     * be discarded or the original VLAN of the packets will be overwritten
      * by hardware. When the PF PVID is enabled by calling the API function
      * named rte_eth_dev_set_vlan_pvid or the VF PVID is enabled by the hns3
      * PF kernel ether driver, the outer VLAN tag will always be the PVID.
@@ -3393,7 +3393,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
         /*
          * The inner l2 length of mbuf is the sum of outer l4 length,
          * tunneling header length and inner l2 length for a tunnel
-         * packect. But in hns3 tx descriptor, the tunneling header
+         * packet. But in hns3 tx descriptor, the tunneling header
          * length is contained in the field of outer L4 length.
          * Therefore, driver need to calculate the outer L4 length and
          * inner L2 length.
@@ -3409,7 +3409,7 @@ hns3_parse_inner_params(struct rte_mbuf *m, uint32_t *ol_type_vlan_len_msec,
         tmp_outer |= hns3_gen_field_val(HNS3_TXD_TUNTYPE_M,
                        HNS3_TXD_TUNTYPE_S, HNS3_TUN_NVGRE);
         /*
-         * For NVGRE tunnel packect, the outer L4 is empty. So only
+         * For NVGRE tunnel packet, the outer L4 is empty. So only
          * fill the NVGRE header length to the outer L4 field.
          */
         tmp_outer |= hns3_gen_field_val(HNS3_TXD_L4LEN_M,
@@ -3452,7 +3452,7 @@ hns3_parse_tunneling_params(struct hns3_tx_queue *txq, struct rte_mbuf *m,
      * mbuf, but for hns3 descriptor, it is contained in the outer L4. So,
      * there is a need that switching between them. To avoid multiple
      * calculations, the length of the L2 header include the outer and
-     * inner, will be filled during the parsing of tunnel packects.
+     * inner, will be filled during the parsing of tunnel packets.
      */
     if (!(ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
         /*
@@ -3632,7 +3632,7 @@ hns3_outer_ipv4_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
     if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
         struct rte_udp_hdr *udp_hdr;
         /*
-         * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
+         * If OUTER_UDP_CKSUM is support, HW can calculate the pseudo
          * header for TSO packets
          */
         if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
@@ -3657,7 +3657,7 @@ hns3_outer_ipv6_cksum_prepared(struct rte_mbuf *m, uint64_t ol_flags,
     if (ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM) {
         struct rte_udp_hdr *udp_hdr;
         /*
-         * If OUTER_UDP_CKSUM is support, HW can caclulate the pseudo
+         * If OUTER_UDP_CKSUM is support, HW can calculate the pseudo
          * header for TSO packets
          */
         if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index 5423568c..e202eb9c 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -611,7 +611,7 @@ hns3_handle_bdinfo(struct hns3_rx_queue *rxq, struct rte_mbuf *rxm,
     /*
      * If packet len bigger than mtu when recv with no-scattered algorithm,
-     * the first n bd will without FE bit, we need process this sisution.
+     * the first n bd will without FE bit, we need process this situation.
      * Note: we don't need add statistic counter because latest BD which
      * with FE bit will mark HNS3_RXD_L2E_B bit.
      */
diff --git a/drivers/net/hns3/hns3_stats.c b/drivers/net/hns3/hns3_stats.c
index 0fe853d6..606b7250 100644
--- a/drivers/net/hns3/hns3_stats.c
+++ b/drivers/net/hns3/hns3_stats.c
@@ -630,7 +630,7 @@ hns3_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats)
         cnt = hns3_read_dev(rxq, HNS3_RING_RX_PKTNUM_RECORD_REG);
         /*
-         * Read hardware and software in adjacent positions to minumize
+         * Read hardware and software in adjacent positions to minimize
          * the timing variance.
         */
         rte_stats->ierrors += rxq->err_stats.l2_errors +
@@ -1289,7 +1289,7 @@ hns3_dev_xstats_get_names(struct rte_eth_dev *dev,
 *   A pointer to an ids array passed by application. This tells which
 *   statistics values function should retrieve. This parameter
 *   can be set to NULL if size is 0. In this case function will retrieve
- *   all avalible statistics.
+ *   all available statistics.
 * @param values
 *   A pointer to a table to be filled with device statistics values.
 * @param size
diff --git a/drivers/net/hns3/hns3_tm.c b/drivers/net/hns3/hns3_tm.c
index e1089b6b..4fc00cbc 100644
--- a/drivers/net/hns3/hns3_tm.c
+++ b/drivers/net/hns3/hns3_tm.c
@@ -739,7 +739,7 @@ hns3_tm_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 }
 static void
-hns3_tm_nonleaf_level_capsbilities_get(struct rte_eth_dev *dev,
+hns3_tm_nonleaf_level_capabilities_get(struct rte_eth_dev *dev,
                        uint32_t level_id,
                        struct rte_tm_level_capabilities *cap)
 {
@@ -818,7 +818,7 @@ hns3_tm_level_capabilities_get(struct rte_eth_dev *dev,
     memset(cap, 0, sizeof(struct rte_tm_level_capabilities));
     if (level_id != HNS3_TM_NODE_LEVEL_QUEUE)
-        hns3_tm_nonleaf_level_capsbilities_get(dev, level_id, cap);
+        hns3_tm_nonleaf_level_capabilities_get(dev, level_id, cap);
     else
         hns3_tm_leaf_level_capabilities_get(dev, cap);
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index c0bfff43..1d417dbf 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2483,7 +2483,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
         if (ret != I40E_SUCCESS)
             PMD_DRV_LOG(WARNING, "Fail to set phy mask");
-        /* Call get_link_info aq commond to enable/disable LSE */
+        /* Call get_link_info aq command to enable/disable LSE */
         i40e_dev_link_update(dev, 0);
     }
@@ -3555,7 +3555,7 @@ static int i40e_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
         count++;
     }
-    /* Get individiual stats from i40e_hw_port struct */
+    /* Get individual stats from i40e_hw_port struct */
     for (i = 0; i < I40E_NB_HW_PORT_XSTATS; i++) {
         strlcpy(xstats_names[count].name,
             rte_i40e_hw_port_strings[i].name,
@@ -3613,7 +3613,7 @@ i40e_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
         count++;
     }
-    /* Get individiual stats from i40e_hw_port struct */
+    /* Get individual stats from i40e_hw_port struct */
     for (i = 0; i < I40E_NB_HW_PORT_XSTATS; i++) {
         xstats[count].value = *(uint64_t *)(((char *)hw_stats) +
             rte_i40e_hw_port_strings[i].offset);
@@ -5544,7 +5544,7 @@ i40e_vsi_get_bw_config(struct i40e_vsi *vsi)
                       &ets_sla_config, NULL);
     if (ret != I40E_SUCCESS) {
         PMD_DRV_LOG(ERR,
-            "VSI failed to get TC bandwdith configuration %u",
+            "VSI failed to get TC bandwidth configuration %u",
             hw->aq.asq_last_status);
         return ret;
     }
@@ -6822,7 +6822,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
 * @param handle
 *  Pointer to interrupt handle.
 * @param param
- *  The address of parameter (struct rte_eth_dev *) regsitered before.
+ *  The address of parameter (struct rte_eth_dev *) registered before.
 *
 * @return
 *  void
@@ -9719,7 +9719,7 @@ i40e_ethertype_filter_convert(const struct rte_eth_ethertype_filter *input,
     return 0;
 }
-/* Check if there exists the ehtertype filter */
+/* Check if there exists the ethertype filter */
 struct i40e_ethertype_filter *
 i40e_sw_ethertype_filter_lookup(struct i40e_ethertype_rule *ethertype_rule,
                 const struct i40e_ethertype_filter_input *input)
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 2d182f80..a1ebdc09 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -897,7 +897,7 @@ struct i40e_tunnel_filter {
     TAILQ_ENTRY(i40e_tunnel_filter) rules;
     struct i40e_tunnel_filter_input input;
     uint8_t is_to_vf; /* 0 - to PF, 1 - to VF */
-    uint16_t vf_id;   /* VF id, avaiblable when is_to_vf is 1. */
+    uint16_t vf_id;   /* VF id, available when is_to_vf is 1. */
     uint16_t queue; /* Queue assigned to when match */
 };
@@ -966,7 +966,7 @@ struct i40e_tunnel_filter_conf {
     uint32_t tenant_id;     /**< Tenant ID to match. VNI, GRE key... */
     uint16_t queue_id;      /**< Queue assigned to if match. */
     uint8_t is_to_vf;       /**< 0 - to PF, 1 - to VF */
-    uint16_t vf_id;         /**< VF id, avaiblable when is_to_vf is 1. */
+    uint16_t vf_id;         /**< VF id, available when is_to_vf is 1. */
 };
 TAILQ_HEAD(i40e_flow_list, rte_flow);
@@ -1100,7 +1100,7 @@ struct i40e_vf_msg_cfg {
     /*
      * If message statistics from a VF exceed the maximal limitation,
      * the PF will ignore any new message from that VF for
-     * 'ignor_second' time.
+     * 'ignore_second' time.
      */
     uint32_t ignore_second;
 };
@@ -1257,7 +1257,7 @@ struct i40e_adapter {
 };
 /**
- * Strucute to store private data for each VF representor instance
+ * Structure to store private data for each VF representor instance
 */
 struct i40e_vf_representor {
     uint16_t switch_domain_id;
@@ -1265,7 +1265,7 @@ struct i40e_vf_representor {
     uint16_t vf_id;
     /**< Virtual Function ID */
     struct i40e_adapter *adapter;
-    /**< Private data store of assocaiated physical function */
+    /**< Private data store of associated physical function */
     struct i40e_eth_stats stats_offset;
     /**< Zero-point of VF statistics*/
 };
diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c
index df2a5aae..8caedea1 100644
--- a/drivers/net/i40e/i40e_fdir.c
+++ b/drivers/net/i40e/i40e_fdir.c
@@ -142,7 +142,7 @@ i40e_fdir_rx_queue_init(struct i40e_rx_queue *rxq)
         I40E_QRX_TAIL(rxq->vsi->base_queue);
     rte_wmb();
-    /* Init the RX tail regieter. */
+    /* Init the RX tail register. */
     I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
     return err;
@@ -430,7 +430,7 @@ i40e_check_fdir_flex_payload(const struct rte_eth_flex_payload_cfg *flex_cfg)
     for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i++) {
         if (flex_cfg->src_offset[i] >= I40E_MAX_FLX_SOURCE_OFF) {
-            PMD_DRV_LOG(ERR, "exceeds maxmial payload limit.");
+            PMD_DRV_LOG(ERR, "exceeds maximal payload limit.");
             return -EINVAL;
         }
     }
@@ -438,7 +438,7 @@ i40e_check_fdir_flex_payload(const struct rte_eth_flex_payload_cfg *flex_cfg)
     memset(flex_pit, 0, sizeof(flex_pit));
     num = i40e_srcoff_to_flx_pit(flex_cfg->src_offset, flex_pit);
     if (num > I40E_MAX_FLXPLD_FIED) {
-        PMD_DRV_LOG(ERR, "exceeds maxmial number of flex fields.");
+        PMD_DRV_LOG(ERR, "exceeds maximal number of flex fields.");
         return -EINVAL;
     }
     for (i = 0; i < num; i++) {
@@ -948,7 +948,7 @@ i40e_flow_fdir_construct_pkt(struct i40e_pf *pf,
     uint8_t pctype = fdir_input->pctype;
     struct i40e_customized_pctype *cus_pctype;
-    /* raw pcket template - just copy contents of the raw packet */
+    /* raw packet template - just copy contents of the raw packet */
     if (fdir_input->flow_ext.pkt_template) {
         memcpy(raw_pkt, fdir_input->flow.raw_flow.packet,
                fdir_input->flow.raw_flow.length);
@@ -1831,7 +1831,7 @@ i40e_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
             &check_filter.fdir.input);
         if (!node) {
             PMD_DRV_LOG(ERR,
-                    "There's no corresponding flow firector filter!");
+                    "There's no corresponding flow director filter!");
             return -EINVAL;
         }
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index c9676caa..e0cf9962 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -3043,7 +3043,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
                 rte_flow_error_set(error, EINVAL,
                            RTE_FLOW_ERROR_TYPE_ITEM,
                            item,
-                           "Exceeds maxmial payload limit.");
+                           "Exceeds maximal payload limit.");
                 return -rte_errno;
             }
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index ccb3924a..2435a8a0 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -343,7 +343,7 @@ i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf, uint8_t *msg,
     vf->request_caps = *(uint32_t *)msg;
     /* enable all RSS by default,
-     * doesn't support hena setting by virtchnnl yet.
+     * doesn't support hena setting by virtchnl yet.
      */
     if (vf->request_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
         I40E_WRITE_REG(hw, I40E_VFQF_HENA1(0, vf->vf_idx),
@@ -725,7 +725,7 @@ i40e_pf_host_process_cmd_config_irq_map(struct i40e_pf_vf *vf,
     if ((map->rxq_map < qbit_max) && (map->txq_map < qbit_max)) {
         i40e_pf_config_irq_link_list(vf, map);
     } else {
-        /* configured queue size excceed limit */
+        /* configured queue size exceed limit */
         ret = I40E_ERR_PARAM;
         goto send_msg;
     }
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index e4cb33dc..9a00a9b7 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -609,7 +609,7 @@ i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq)
         rxdp[i].read.pkt_addr = dma_addr;
     }
-    /* Update rx tail regsiter */
+    /* Update rx tail register */
     I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
     rxq->rx_free_trigger =
@@ -995,7 +995,7 @@ i40e_recv_scattered_pkts(void *rx_queue,
      * threshold of the queue, advance the Receive Descriptor Tail (RDT)
      * register. Update the RDT with the value of the last processed RX
      * descriptor minus 1, to guarantee that the RDT register is never
-     * equal to the RDH register, which creates a "full" ring situtation
+     * equal to the RDH register, which creates a "full" ring situation
      * from the hardware point of view.
      */
     nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
@@ -1467,7 +1467,7 @@ tx_xmit_pkts(struct i40e_tx_queue *txq,
         i40e_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
     txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
-    /* Determin if RS bit needs to be set */
+    /* Determine if RS bit needs to be set */
     if (txq->tx_tail > txq->tx_next_rs) {
         txr[txq->tx_next_rs].cmd_type_offset_bsz |=
             rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
@@ -1697,7 +1697,7 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
     }
     if (rxq->rx_deferred_start)
-        PMD_DRV_LOG(WARNING, "RX queue %u is deferrd start",
+        PMD_DRV_LOG(WARNING, "RX queue %u is deferred start",
                 rx_queue_id);
     err = i40e_alloc_rx_queue_mbufs(rxq);
@@ -1706,7 +1706,7 @@ i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
         return err;
     }
-    /* Init the RX tail regieter. */
+    /* Init the RX tail register. */
     I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
     err = i40e_switch_rx_queue(hw, rxq->reg_idx, TRUE);
@@ -1771,7 +1771,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
     }
     if (txq->tx_deferred_start)
-        PMD_DRV_LOG(WARNING, "TX queue %u is deferrd start",
+        PMD_DRV_LOG(WARNING, "TX queue %u is deferred start",
                 tx_queue_id);
     /*
@@ -1930,7 +1930,7 @@ i40e_dev_rx_queue_setup_runtime(struct rte_eth_dev *dev,
         PMD_DRV_LOG(ERR, "Can't use default burst.");
         return -EINVAL;
     }
-    /* check scatterred conflict */
+    /* check scattered conflict */
     if (!dev->data->scattered_rx && use_scattered_rx) {
         PMD_DRV_LOG(ERR, "Scattered rx is required.");
         return -EINVAL;
@@ -2014,7 +2014,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
     rxq->rx_deferred_start = rx_conf->rx_deferred_start;
     rxq->offloads = offloads;
-    /* Allocate the maximun number of RX ring hardware descriptor. */
+    /* Allocate the maximum number of RX ring hardware descriptor. */
     len = I40E_MAX_RING_DESC;
     /**
@@ -2322,7 +2322,7 @@ i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
     */
    tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
        tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
-    /* force tx_rs_thresh to adapt an aggresive tx_free_thresh */
+    /* force tx_rs_thresh to adapt an aggressive tx_free_thresh */
    tx_rs_thresh = (DEFAULT_TX_RS_THRESH + tx_free_thresh > nb_desc) ?
        nb_desc - tx_free_thresh : DEFAULT_TX_RS_THRESH;
    if (tx_conf->tx_rs_thresh > 0)
@@ -2991,7 +2991,7 @@ i40e_rx_queue_init(struct i40e_rx_queue *rxq)
    if (rxq->max_pkt_len > buf_size)
        dev_data->scattered_rx = 1;
-    /* Init the RX tail regieter. */
+    /* Init the RX tail register. */
    I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
    return 0;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_altivec.c b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
index d0bf86df..f78ba994 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_altivec.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_altivec.c
@@ -430,7 +430,7 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
        desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
        desc_to_olflags_v(descs, &rx_pkts[pos]);
-        /* C.4 calc avaialbe number of desc */
+        /* C.4 calc available number of desc */
        var = __builtin_popcountll((vec_ld(0,
            (vector unsigned long *)&staterr)[0]));
        nb_pkts_recd += var;
diff --git a/drivers/net/i40e/i40e_rxtx_vec_neon.c b/drivers/net/i40e/i40e_rxtx_vec_neon.c
index b951ea2d..50746853 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_neon.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_neon.c
@@ -151,7 +151,7 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, uint64x2_t descs[4],
            vreinterpretq_u8_u32(l3_l4e)));
    /* then we shift left 1 bit */
    l3_l4e = vshlq_n_u32(l3_l4e, 1);
-    /* we need to mask out the reduntant bits */
+    /* we need to mask out the redundant bits */
    l3_l4e = vandq_u32(l3_l4e, cksum_mask);
    vlan0 = vorrq_u32(vlan0, rss);
@@ -416,7 +416,7 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *__rte_restrict rxq,
                     I40E_UINT16_BIT - 1));
        stat = ~vgetq_lane_u64(vreinterpretq_u64_u16(staterr), 0);
-        /* C.4 calc avaialbe number of desc */
+        /* C.4 calc available number of desc */
        if (unlikely(stat == 0)) {
            nb_pkts_recd += RTE_I40E_DESCS_PER_LOOP;
        } else {
diff --git a/drivers/net/i40e/i40e_rxtx_vec_sse.c b/drivers/net/i40e/i40e_rxtx_vec_sse.c
index 497b2404..3782e805 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_sse.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_sse.c
@@ -282,7 +282,7 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
    l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
    /* then we shift left 1 bit */
    l3_l4e = _mm_slli_epi32(l3_l4e, 1);
-    /* we need to mask out the reduntant bits */
+    /* we need to mask out the redundant bits */
    l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
    vlan0 = _mm_or_si128(vlan0, rss);
@@ -297,7 +297,7 @@ desc_to_olflags_v(struct i40e_rx_queue *rxq, volatile union i40e_rx_desc *rxdp,
        __m128i v_fdir_ol_flags = descs_to_fdir_16b(desc_fltstat,
                                descs, rx_pkts);
 #endif
-        /* OR in ol_flag bits after descriptor speicific extraction */
+        /* OR in ol_flag bits after descriptor specific extraction */
        vlan0 = _mm_or_si128(vlan0, v_fdir_ol_flags);
    }
@@ -577,7 +577,7 @@ _recv_raw_pkts_vec(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_pkts,
        _mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
                 pkt_mb1);
        desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
-        /* C.4 calc avaialbe number of desc */
+        /* C.4 calc available number of desc */
        var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
        nb_pkts_recd += var;
        if (likely(var != RTE_I40E_DESCS_PER_LOOP))
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index a492959b..35829a1e 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -1427,7 +1427,7 @@ rte_pmd_i40e_set_tc_strict_prio(uint16_t port, uint8_t tc_map)
    /* Get all TCs' bandwidth. */
    for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
        if (veb->enabled_tc & BIT_ULL(i)) {
-            /* For rubust, if bandwidth is 0, use 1 instead. */
+            /* For robust, if bandwidth is 0, use 1 instead. */
            if (veb->bw_info.bw_ets_share_credits[i])
                ets_data.tc_bw_share_credits[i] =
                    veb->bw_info.bw_ets_share_credits[i];
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 377d7bc7..5944e0fd 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -516,7 +516,7 @@ iavf_init_rss(struct iavf_adapter *adapter)
            j = 0;
        vf->rss_lut[i] = j;
    }
-    /* send virtchnnl ops to configure rss*/
+    /* send virtchnl ops to configure rss*/
    ret = iavf_configure_rss_lut(adapter);
    if (ret)
        return ret;
@@ -831,7 +831,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
                    "vector %u are mapping to all Rx queues",
                    vf->msix_base);
        } else {
-            /* If Rx interrupt is reuquired, and we can use
+            /* If Rx interrupt is required, and we can use
             * multi interrupts, then the vec is from 1
             */
            vf->nb_msix =
@@ -1420,7 +1420,7 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
    }
    rte_memcpy(vf->rss_lut, lut, reta_size);
-    /* send virtchnnl ops to configure rss*/
+    /* send virtchnl ops to configure rss*/
    ret = iavf_configure_rss_lut(adapter);
    if (ret) /* revert back */
        rte_memcpy(vf->rss_lut, lut, reta_size);
@@ -1753,7 +1753,7 @@ static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
    struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
    struct iavf_vsi *vsi = &vf->vsi;
    struct virtchnl_eth_stats *pstats = NULL;
-    struct iavf_eth_xstats iavf_xtats = {{0}};
+    struct iavf_eth_xstats iavf_xstats = {{0}};
    if (n < IAVF_NB_XSTATS)
        return IAVF_NB_XSTATS;
@@ -1766,15 +1766,15 @@ static int iavf_dev_xstats_get(struct rte_eth_dev *dev,
        return 0;
    iavf_update_stats(vsi, pstats);
-    iavf_xtats.eth_stats = *pstats;
+    iavf_xstats.eth_stats = *pstats;
    if (iavf_ipsec_crypto_supported(adapter))
-        iavf_dev_update_ipsec_xstats(dev, &iavf_xtats.ips_stats);
+        iavf_dev_update_ipsec_xstats(dev, &iavf_xstats.ips_stats);
    /* loop over xstats array and values from pstats */
    for (i = 0; i < IAVF_NB_XSTATS; i++) {
        xstats[i].id = i;
-        xstats[i].value = *(uint64_t *)(((char *)&iavf_xtats) +
+        xstats[i].value = *(uint64_t *)(((char *)&iavf_xstats) +
            rte_iavf_stats_strings[i].offset);
    }
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 5e0888ea..d675e0fe 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -814,7 +814,7 @@ iavf_hash_parse_pattern(const struct rte_flow_item pattern[], uint64_t *phint,
 #define REFINE_PROTO_FLD(op, fld) \
    VIRTCHNL_##op##_PROTO_HDR_FIELD(hdr, VIRTCHNL_PROTO_HDR_##fld)
-#define REPALCE_PROTO_FLD(fld_1, fld_2) \
+#define REPLACE_PROTO_FLD(fld_1, fld_2) \
 do { \
    REFINE_PROTO_FLD(DEL, fld_1); \
    REFINE_PROTO_FLD(ADD, fld_2); \
@@ -925,10 +925,10 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
            }
            if (rss_type & RTE_ETH_RSS_L3_PRE64) {
                if (REFINE_PROTO_FLD(TEST, IPV6_SRC))
-                    REPALCE_PROTO_FLD(IPV6_SRC,
+                    REPLACE_PROTO_FLD(IPV6_SRC,
                              IPV6_PREFIX64_SRC);
                if (REFINE_PROTO_FLD(TEST, IPV6_DST))
-                    REPALCE_PROTO_FLD(IPV6_DST,
+                    REPLACE_PROTO_FLD(IPV6_DST,
                              IPV6_PREFIX64_DST);
            }
            break;
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c
index 884169e0..8174cbfc 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.c
+++ b/drivers/net/iavf/iavf_ipsec_crypto.c
@@ -69,7 +69,7 @@ struct iavf_security_session {
 *  16B - 3
 *
 * but we also need the IV Length for TSO to correctly calculate the total
- * header length so placing it in the upper 6-bits here for easier reterival.
+ * header length so placing it in the upper 6-bits here for easier retrieval.
 */
 static inline uint8_t
 calc_ipsec_desc_iv_len_field(uint16_t iv_sz)
@@ -448,7 +448,7 @@ sa_add_set_auth_params(struct virtchnl_ipsec_crypto_cfg_item *cfg,
 /**
 * Send SA add virtual channel request to Inline IPsec driver.
 *
- * Inline IPsec driver expects SPI and destination IP adderss to be in host
+ * Inline IPsec driver expects SPI and destination IP address to be in host
 * order, but DPDK APIs are network order, therefore we need to do a htonl
 * conversion of these parameters.
 */
@@ -726,7 +726,7 @@ iavf_ipsec_crypto_action_valid(struct rte_eth_dev *ethdev,
 /**
 * Send virtual channel security policy add request to IES driver.
 *
- * IES driver expects SPI and destination IP adderss to be in host
+ * IES driver expects SPI and destination IP address to be in host
 * order, but DPDK APIs are network order, therefore we need to do a htonl
 * conversion of these parameters.
 */
@@ -994,7 +994,7 @@ iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,
    request->req_id = (uint16_t)0xDEADBEEF;
    /**
-     * SA delete supports deletetion of 1-8 specified SA's or if the flag
+     * SA delete supports deletion of 1-8 specified SA's or if the flag
     * field is zero, all SA's associated with VF will be deleted.
     */
    if (sess) {
@@ -1147,7 +1147,7 @@ iavf_ipsec_crypto_pkt_metadata_set(void *device,
    md = RTE_MBUF_DYNFIELD(m, iavf_sctx->pkt_md_offset,
        struct iavf_ipsec_crypto_pkt_metadata *);
-    /* Set immutatable metadata values from session template */
+    /* Set immutable metadata values from session template */
    memcpy(md, &iavf_sess->pkt_metadata_template,
        sizeof(struct iavf_ipsec_crypto_pkt_metadata));
@@ -1334,7 +1334,7 @@ update_aead_capabilities(struct rte_cryptodev_capabilities *scap,
 * capabilities structure.
 */
 int
-iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx
+iavf_ipsec_crypto_set_security_capabilities(struct iavf_security_ctx
        *iavf_sctx, struct virtchnl_ipsec_cap *vch_cap)
 {
    struct rte_cryptodev_capabilities *capabilities;
@@ -1355,7 +1355,7 @@ iavf_ipsec_crypto_set_security_capabilities(struct iavf_security_ctx
    capabilities[number_of_capabilities].op = RTE_CRYPTO_OP_TYPE_UNDEFINED;
    /**
-     * Iterate over each virtchl crypto capability by crypto type and
+     * Iterate over each virtchnl crypto capability by crypto type and
     * algorithm.
     */
    for (i = 0; i < VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM; i++) {
@@ -1454,7 +1454,7 @@ iavf_ipsec_crypto_capabilities_get(void *device)
    /**
     * Update the security capabilities struct with the runtime discovered
     * crypto capabilities, except for last element of the array which is
-     * the null terminatation
+     * the null termination
     */
    for (i = 0; i < ((sizeof(iavf_security_capabilities) /
            sizeof(iavf_security_capabilities[0])) - 1); i++) {
@@ -1524,7 +1524,7 @@ iavf_security_init(struct iavf_adapter *adapter)
    if (rc)
        return rc;
-    return iavf_ipsec_crypto_set_security_capabililites(iavf_sctx,
+    return iavf_ipsec_crypto_set_security_capabilities(iavf_sctx,
            &capabilities);
 }
diff --git a/drivers/net/iavf/iavf_ipsec_crypto.h b/drivers/net/iavf/iavf_ipsec_crypto.h
index 4e4c8798..921ca676 100644
--- a/drivers/net/iavf/iavf_ipsec_crypto.h
+++ b/drivers/net/iavf/iavf_ipsec_crypto.h
@@ -73,7 +73,7 @@ enum iavf_ipsec_iv_len {
 };
-/* IPsec Crypto Packet Metaday offload flags */
+/* IPsec Crypto Packet Metadata offload flags */
 #define IAVF_IPSEC_CRYPTO_OL_FLAGS_IS_TUN      (0x1 << 0)
 #define IAVF_IPSEC_CRYPTO_OL_FLAGS_ESN         (0x1 << 1)
 #define IAVF_IPSEC_CRYPTO_OL_FLAGS_IPV6_EXT_HDRS   (0x1 << 2)
@@ -118,8 +118,8 @@ int iavf_security_init(struct iavf_adapter *adapter);
 /**
 * Set security capabilities
 */
-int iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx
-        *iavf_sctx, struct virtchnl_ipsec_cap *virtchl_capabilities);
+int iavf_ipsec_crypto_set_security_capabilities(struct iavf_security_ctx
+        *iavf_sctx, struct virtchnl_ipsec_cap *virtchnl_capabilities);
 int iavf_security_get_pkt_md_offset(struct iavf_adapter *adapter);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 154472c5..59623ac8 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -648,8 +648,8 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
        return -ENOMEM;
    }
-    /* Allocate the maximun number of RX ring hardware descriptor with
-     * a liitle more to support bulk allocate.
+    /* Allocate the maximum number of RX ring hardware descriptor with
+     * a little more to support bulk allocate.
     */
    len = IAVF_MAX_RING_DESC + IAVF_RX_MAX_BURST;
    ring_size = RTE_ALIGN(len * sizeof(union iavf_rx_desc),
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 1bac59bf..d582a363 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -159,7 +159,7 @@ desc_to_olflags_v(struct iavf_rx_queue *rxq, __m128i descs[4],
    l3_l4e = _mm_shuffle_epi8(l3_l4e_flags, l3_l4e);
    /* then we shift left 1 bit */
    l3_l4e = _mm_slli_epi32(l3_l4e, 1);
-    /* we need to mask out the reduntant bits */
+    /* we need to mask out the redundant bits */
    l3_l4e = _mm_and_si128(l3_l4e, cksum_mask);
    vlan0 = _mm_or_si128(vlan0, rss);
@@ -613,7 +613,7 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
        _mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
                 pkt_mb1);
        desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
-        /* C.4 calc avaialbe number of desc */
+        /* C.4 calc available number of desc */
        var = __builtin_popcountll(_mm_cvtsi128_si64(staterr));
        nb_pkts_recd += var;
        if (likely(var != IAVF_VPMD_DESCS_PER_LOOP))
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 145b0598..76026916 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -461,7 +461,7 @@ iavf_check_api_version(struct iavf_adapter *adapter)
        (vf->virtchnl_version.major == VIRTCHNL_VERSION_MAJOR_START &&
        vf->virtchnl_version.minor < VIRTCHNL_VERSION_MINOR_START)) {
        PMD_INIT_LOG(ERR, "VIRTCHNL API version should not be lower"
-            " than (%u.%u) to support Adapative VF",
+            " than (%u.%u) to support Adaptive VF",
            VIRTCHNL_VERSION_MAJOR_START,
            VIRTCHNL_VERSION_MAJOR_START);
        return -1;
@@ -1487,7 +1487,7 @@ iavf_fdir_check(struct iavf_adapter *adapter,
    err = iavf_execute_vf_cmd(adapter, &args, 0);
    if (err) {
-        PMD_DRV_LOG(ERR, "fail to check flow direcotor rule");
+        PMD_DRV_LOG(ERR, "fail to check flow director rule");
        return err;
    }
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index cca1d7bf..8313a30b 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -864,7 +864,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
            j = 0;
        hw->rss_lut[i] = j;
    }
-    /* send virtchnnl ops to configure rss*/
+    /* send virtchnl ops to configure rss*/
    ret = ice_dcf_configure_rss_lut(hw);
    if (ret)
        return ret;
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 28f7f7fb..164d834a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -203,7 +203,7 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
                    "vector %u are mapping to all Rx queues",
                    hw->msix_base);
        } else {
-            /* If Rx interrupt is reuquired, and we can use
+            /* If Rx interrupt is required, and we can use
             * multi interrupts, then the vec is from 1
             */
            hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 13a7a970..c9fd3de2 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1264,7 +1264,7 @@ ice_handle_aq_msg(struct rte_eth_dev *dev)
 * @param handle
 *  Pointer to interrupt handle.
 * @param param
- *  The address of parameter (struct rte_eth_dev *) regsitered before.
+ * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -1627,7 +1627,7 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type) } /* At the beginning, only TC0. */ - /* What we need here is the maximam number of the TX queues. + /* What we need here is the maximum number of the TX queues. * Currently vsi->nb_qps means it. * Correct it if any change. */ @@ -3576,7 +3576,7 @@ ice_dev_start(struct rte_eth_dev *dev) goto rx_err; } - /* enable Rx interrput and mapping Rx queue to interrupt vector */ + /* enable Rx interrupt and mapping Rx queue to interrupt vector */ if (ice_rxq_intr_setup(dev)) return -EIO; @@ -3603,7 +3603,7 @@ ice_dev_start(struct rte_eth_dev *dev) ice_dev_set_link_up(dev); - /* Call get_link_info aq commond to enable/disable LSE */ + /* Call get_link_info aq command to enable/disable LSE */ ice_link_update(dev, 0); pf->adapter_stopped = false; @@ -5395,7 +5395,7 @@ ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, count++; } - /* Get individiual stats from ice_hw_port struct */ + /* Get individual stats from ice_hw_port struct */ for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) { xstats[count].value = *(uint64_t *)((char *)hw_stats + @@ -5426,7 +5426,7 @@ static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev, count++; } - /* Get individiual stats from ice_hw_port struct */ + /* Get individual stats from ice_hw_port struct */ for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) { strlcpy(xstats_names[count].name, ice_hw_port_strings[i].name, sizeof(xstats_names[count].name)); diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index f6d8564a..f59e83d3 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -1118,7 +1118,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, rxq->proto_xtr = pf->proto_xtr != NULL ? pf->proto_xtr[queue_idx] : PROTO_XTR_NONE; - /* Allocate the maximun number of RX ring hardware descriptor. 
*/ + /* Allocate the maximum number of RX ring hardware descriptor. */ len = ICE_MAX_RING_DESC; /** @@ -1248,7 +1248,7 @@ ice_tx_queue_setup(struct rte_eth_dev *dev, tx_free_thresh = (uint16_t)(tx_conf->tx_free_thresh ? tx_conf->tx_free_thresh : ICE_DEFAULT_TX_FREE_THRESH); - /* force tx_rs_thresh to adapt an aggresive tx_free_thresh */ + /* force tx_rs_thresh to adapt an aggressive tx_free_thresh */ tx_rs_thresh = (ICE_DEFAULT_TX_RSBIT_THRESH + tx_free_thresh > nb_desc) ? nb_desc - tx_free_thresh : ICE_DEFAULT_TX_RSBIT_THRESH; @@ -1714,7 +1714,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) rxdp[i].read.pkt_addr = dma_addr; } - /* Update rx tail regsiter */ + /* Update rx tail register */ ICE_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger); rxq->rx_free_trigger = @@ -1976,7 +1976,7 @@ ice_recv_scattered_pkts(void *rx_queue, * threshold of the queue, advance the Receive Descriptor Tail (RDT) * register. Update the RDT with the value of the last processed RX * descriptor minus 1, to guarantee that the RDT register is never - * equal to the RDH register, which creates a "full" ring situtation + * equal to the RDH register, which creates a "full" ring situation * from the hardware point of view. 
*/ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold); @@ -3117,7 +3117,7 @@ tx_xmit_pkts(struct ice_tx_queue *txq, ice_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n)); txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n)); - /* Determin if RS bit needs to be set */ + /* Determine if RS bit needs to be set */ if (txq->tx_tail > txq->tx_next_rs) { txr[txq->tx_next_rs].cmd_type_offset_bsz |= rte_cpu_to_le_64(((uint64_t)ICE_TX_DESC_CMD_RS) << diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c index 6cd44c58..fd94cedd 100644 --- a/drivers/net/ice/ice_rxtx_vec_sse.c +++ b/drivers/net/ice/ice_rxtx_vec_sse.c @@ -202,7 +202,7 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4], __m128i l3_l4_mask = _mm_set_epi32(~0x6, ~0x6, ~0x6, ~0x6); __m128i l3_l4_flags = _mm_and_si128(flags, l3_l4_mask); flags = _mm_or_si128(l3_l4_flags, l4_outer_flags); - /* we need to mask out the reduntant bits introduced by RSS or + /* we need to mask out the redundant bits introduced by RSS or * VLAN fields. 
*/ flags = _mm_and_si128(flags, cksum_mask); @@ -566,7 +566,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts, _mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1, pkt_mb0); ice_rx_desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl); - /* C.4 calc avaialbe number of desc */ + /* C.4 calc available number of desc */ var = __builtin_popcountll(_mm_cvtsi128_si64(staterr)); nb_pkts_recd += var; if (likely(var != ICE_DESCS_PER_LOOP)) diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index ed29c00d..5b9251f1 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -1649,11 +1649,11 @@ ice_switch_parse_action(struct ice_pf *pf, struct ice_vsi *vsi = pf->main_vsi; struct rte_eth_dev_data *dev_data = pf->adapter->pf.dev_data; const struct rte_flow_action_queue *act_q; - const struct rte_flow_action_rss *act_qgrop; + const struct rte_flow_action_rss *act_qgroup; uint16_t base_queue, i; const struct rte_flow_action *action; enum rte_flow_action_type action_type; - uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = { + uint16_t valid_qgroup_number[MAX_QGRP_NUM_TYPE] = { 2, 4, 8, 16, 32, 64, 128}; base_queue = pf->base_queue + vsi->base_queue; @@ -1662,30 +1662,30 @@ ice_switch_parse_action(struct ice_pf *pf, action_type = action->type; switch (action_type) { case RTE_FLOW_ACTION_TYPE_RSS: - act_qgrop = action->conf; - if (act_qgrop->queue_num <= 1) + act_qgroup = action->conf; + if (act_qgroup->queue_num <= 1) goto error; rule_info->sw_act.fltr_act = ICE_FWD_TO_QGRP; rule_info->sw_act.fwd_id.q_id = - base_queue + act_qgrop->queue[0]; + base_queue + act_qgroup->queue[0]; for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) { - if (act_qgrop->queue_num == - valid_qgrop_number[i]) + if (act_qgroup->queue_num == + valid_qgroup_number[i]) break; } if (i == MAX_QGRP_NUM_TYPE) goto error; - if ((act_qgrop->queue[0] + - act_qgrop->queue_num) > + if ((act_qgroup->queue[0] + + 
act_qgroup->queue_num) > dev_data->nb_rx_queues) goto error1; - for (i = 0; i < act_qgrop->queue_num - 1; i++) - if (act_qgrop->queue[i + 1] != - act_qgrop->queue[i] + 1) + for (i = 0; i < act_qgroup->queue_num - 1; i++) + if (act_qgroup->queue[i + 1] != + act_qgroup->queue[i] + 1) goto error2; rule_info->sw_act.qgrp_size = - act_qgrop->queue_num; + act_qgroup->queue_num; break; case RTE_FLOW_ACTION_TYPE_QUEUE: act_q = action->conf; diff --git a/drivers/net/igc/igc_filter.c b/drivers/net/igc/igc_filter.c index 51fcabfb..bff98df2 100644 --- a/drivers/net/igc/igc_filter.c +++ b/drivers/net/igc/igc_filter.c @@ -167,7 +167,7 @@ igc_tuple_filter_lookup(const struct igc_adapter *igc, /* search the filter array */ for (; i < IGC_MAX_NTUPLE_FILTERS; i++) { if (igc->ntuple_filters[i].hash_val) { - /* compare the hase value */ + /* compare the hash value */ if (ntuple->hash_val == igc->ntuple_filters[i].hash_val) /* filter be found, return index */ diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c index 339b0c9a..e48d5df1 100644 --- a/drivers/net/igc/igc_txrx.c +++ b/drivers/net/igc/igc_txrx.c @@ -2099,7 +2099,7 @@ eth_igc_tx_done_cleanup(void *txqueue, uint32_t free_cnt) sw_ring[tx_id].mbuf = NULL; sw_ring[tx_id].last_id = tx_id; - /* Move to next segemnt. */ + /* Move to next segment. */ tx_id = sw_ring[tx_id].next_id; } while (tx_id != tx_next); @@ -2133,7 +2133,7 @@ eth_igc_tx_done_cleanup(void *txqueue, uint32_t free_cnt) * Walk the list and find the next mbuf, if any. */ do { - /* Move to next segemnt. */ + /* Move to next segment. 
*/ tx_id = sw_ring[tx_id].next_id; if (sw_ring[tx_id].mbuf) diff --git a/drivers/net/ionic/ionic_if.h b/drivers/net/ionic/ionic_if.h index 693b44d7..45bad9b0 100644 --- a/drivers/net/ionic/ionic_if.h +++ b/drivers/net/ionic/ionic_if.h @@ -2068,7 +2068,7 @@ typedef struct ionic_admin_comp ionic_fw_download_comp; * enum ionic_fw_control_oper - FW control operations * @IONIC_FW_RESET: Reset firmware * @IONIC_FW_INSTALL: Install firmware - * @IONIC_FW_ACTIVATE: Acticate firmware + * @IONIC_FW_ACTIVATE: Activate firmware */ enum ionic_fw_control_oper { IONIC_FW_RESET = 0, @@ -2091,7 +2091,7 @@ struct ionic_fw_control_cmd { }; /** - * struct ionic_fw_control_comp - Firmware control copletion + * struct ionic_fw_control_comp - Firmware control completion * @status: Status of the command (enum ionic_status_code) * @comp_index: Index in the descriptor ring for which this is the completion * @slot: Slot where the firmware was installed @@ -2878,7 +2878,7 @@ struct ionic_doorbell { * and @identity->intr_coal_div to convert from * usecs to device units: * - * coal_init = coal_usecs * coal_mutl / coal_div + * coal_init = coal_usecs * coal_mult / coal_div * * When an interrupt is sent the interrupt * coalescing timer current value diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c index 964506c6..014e438d 100644 --- a/drivers/net/ipn3ke/ipn3ke_ethdev.c +++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c @@ -483,7 +483,7 @@ static int ipn3ke_vswitch_probe(struct rte_afu_device *afu_dev) RTE_CACHE_LINE_SIZE, afu_dev->device.numa_node); if (!hw) { - IPN3KE_AFU_PMD_ERR("failed to allocate hardwart data"); + IPN3KE_AFU_PMD_ERR("failed to allocate hardware data"); retval = -ENOMEM; return -ENOMEM; } diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.h b/drivers/net/ipn3ke/ipn3ke_ethdev.h index 041f13d9..58fcc50c 100644 --- a/drivers/net/ipn3ke/ipn3ke_ethdev.h +++ b/drivers/net/ipn3ke/ipn3ke_ethdev.h @@ -223,7 +223,7 @@ struct ipn3ke_hw_cap { }; /** - * Strucute to 
store private data for each representor instance + * Structure to store private data for each representor instance */ struct ipn3ke_rpst { TAILQ_ENTRY(ipn3ke_rpst) next; /**< Next in device list. */ @@ -237,7 +237,7 @@ struct ipn3ke_rpst { uint16_t i40e_pf_eth_port_id; struct rte_eth_link ori_linfo; struct ipn3ke_tm_internals tm; - /**< Private data store of assocaiated physical function */ + /**< Private data store of associated physical function */ struct rte_ether_addr mac_addr; }; diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c index f5867ca0..66ae31a5 100644 --- a/drivers/net/ipn3ke/ipn3ke_flow.c +++ b/drivers/net/ipn3ke/ipn3ke_flow.c @@ -1299,7 +1299,7 @@ int ipn3ke_flow_init(void *dev) IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x\n", data); - /* configure rx parse config, settings associatied with VxLAN */ + /* configure rx parse config, settings associated with VxLAN */ IPN3KE_MASK_WRITE_REG(hw, IPN3KE_CLF_RX_PARSE_CFG, 0, diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c index de325c7d..c9dde1d8 100644 --- a/drivers/net/ipn3ke/ipn3ke_representor.c +++ b/drivers/net/ipn3ke/ipn3ke_representor.c @@ -2282,7 +2282,7 @@ ipn3ke_rpst_xstats_get count++; } - /* Get individiual stats from ipn3ke_rpst_hw_port */ + /* Get individual stats from ipn3ke_rpst_hw_port */ for (i = 0; i < IPN3KE_RPST_HW_PORT_XSTATS_CNT; i++) { xstats[count].value = *(uint64_t *)(((char *)(&hw_stats)) + ipn3ke_rpst_hw_port_strings[i].offset); @@ -2290,7 +2290,7 @@ ipn3ke_rpst_xstats_get count++; } - /* Get individiual stats from ipn3ke_rpst_rxq_pri */ + /* Get individual stats from ipn3ke_rpst_rxq_pri */ for (i = 0; i < IPN3KE_RPST_RXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < IPN3KE_RPST_PRIO_XSTATS_CNT; prio++) { xstats[count].value = @@ -2302,7 +2302,7 @@ ipn3ke_rpst_xstats_get } } - /* Get individiual stats from ipn3ke_rpst_txq_prio */ + /* Get individual stats from ipn3ke_rpst_txq_prio */ for (i = 0; i 
< IPN3KE_RPST_TXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < IPN3KE_RPST_PRIO_XSTATS_CNT; prio++) { xstats[count].value = @@ -2340,7 +2340,7 @@ __rte_unused unsigned int limit) count++; } - /* Get individiual stats from ipn3ke_rpst_hw_port */ + /* Get individual stats from ipn3ke_rpst_hw_port */ for (i = 0; i < IPN3KE_RPST_HW_PORT_XSTATS_CNT; i++) { snprintf(xstats_names[count].name, sizeof(xstats_names[count].name), @@ -2349,7 +2349,7 @@ __rte_unused unsigned int limit) count++; } - /* Get individiual stats from ipn3ke_rpst_rxq_pri */ + /* Get individual stats from ipn3ke_rpst_rxq_pri */ for (i = 0; i < IPN3KE_RPST_RXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < 8; prio++) { snprintf(xstats_names[count].name, @@ -2361,7 +2361,7 @@ __rte_unused unsigned int limit) } } - /* Get individiual stats from ipn3ke_rpst_txq_prio */ + /* Get individual stats from ipn3ke_rpst_txq_prio */ for (i = 0; i < IPN3KE_RPST_TXQ_PRIO_XSTATS_CNT; i++) { for (prio = 0; prio < 8; prio++) { snprintf(xstats_names[count].name, diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c index 6a9b98fd..5172f21f 100644 --- a/drivers/net/ipn3ke/ipn3ke_tm.c +++ b/drivers/net/ipn3ke/ipn3ke_tm.c @@ -1956,7 +1956,7 @@ ipn3ke_tm_show(struct rte_eth_dev *dev) } static void -ipn3ke_tm_show_commmit(struct rte_eth_dev *dev) +ipn3ke_tm_show_commit(struct rte_eth_dev *dev) { struct ipn3ke_tm_internals *tm = IPN3KE_DEV_PRIVATE_TO_TM(dev); uint32_t tm_id; @@ -2013,7 +2013,7 @@ ipn3ke_tm_hierarchy_commit(struct rte_eth_dev *dev, NULL, rte_strerror(EBUSY)); - ipn3ke_tm_show_commmit(dev); + ipn3ke_tm_show_commit(dev); status = ipn3ke_tm_hierarchy_commit_check(dev, error); if (status) { diff --git a/drivers/net/ipn3ke/meson.build b/drivers/net/ipn3ke/meson.build index 4bf73980..104d2f58 100644 --- a/drivers/net/ipn3ke/meson.build +++ b/drivers/net/ipn3ke/meson.build @@ -8,7 +8,7 @@ if is_windows endif # -# Add the experimenatal APIs called from this PMD +# Add the experimental APIs called 
from this PMD # rte_eth_switch_domain_alloc() # rte_eth_dev_create() # rte_eth_dev_destroy() diff --git a/drivers/net/ixgbe/ixgbe_bypass.c b/drivers/net/ixgbe/ixgbe_bypass.c index 67ced6c7..94f34a29 100644 --- a/drivers/net/ixgbe/ixgbe_bypass.c +++ b/drivers/net/ixgbe/ixgbe_bypass.c @@ -11,7 +11,7 @@ #define BYPASS_STATUS_OFF_MASK 3 -/* Macros to check for invlaid function pointers. */ +/* Macros to check for invalid function pointers. */ #define FUNC_PTR_OR_ERR_RET(func, retval) do { \ if ((func) == NULL) { \ PMD_DRV_LOG(ERR, "%s:%d function not supported", \ diff --git a/drivers/net/ixgbe/ixgbe_bypass_api.h b/drivers/net/ixgbe/ixgbe_bypass_api.h index 8eb77339..6ef965db 100644 --- a/drivers/net/ixgbe/ixgbe_bypass_api.h +++ b/drivers/net/ixgbe/ixgbe_bypass_api.h @@ -135,7 +135,7 @@ static s32 ixgbe_bypass_rw_generic(struct ixgbe_hw *hw, u32 cmd, u32 *status) * ixgbe_bypass_valid_rd_generic - Verify valid return from bit-bang. * * If we send a write we can't be sure it took until we can read back - * that same register. It can be a problem as some of the feilds may + * that same register. It can be a problem as some of the fields may * for valid reasons change between the time wrote the register and * we read it again to verify. So this function check everything we * can check and then assumes it worked. @@ -189,7 +189,7 @@ static bool ixgbe_bypass_valid_rd_generic(u32 in_reg, u32 out_reg) } /** - * ixgbe_bypass_set_generic - Set a bypass field in the FW CTRL Regiter. + * ixgbe_bypass_set_generic - Set a bypass field in the FW CTRL Register. * * @hw: pointer to hardware structure * @cmd: The control word we are setting. 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index fe61dba8..49bd0abd 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -2375,7 +2375,7 @@ ixgbe_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; - /* multipe queue mode checking */ + /* multiple queue mode checking */ ret = ixgbe_check_mq_mode(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "ixgbe_check_mq_mode fails with %d.", @@ -2603,7 +2603,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev) } } - /* confiugre msix for sleep until rx interrupt */ + /* configure msix for sleep until rx interrupt */ ixgbe_configure_msix(dev); /* initialize transmission unit */ @@ -2907,7 +2907,7 @@ ixgbe_dev_set_link_up(struct rte_eth_dev *dev) if (hw->mac.type == ixgbe_mac_82599EB) { #ifdef RTE_LIBRTE_IXGBE_BYPASS if (hw->device_id == IXGBE_DEV_ID_82599_BYPASS) { - /* Not suported in bypass mode */ + /* Not supported in bypass mode */ PMD_INIT_LOG(ERR, "Set link up is not supported " "by device id 0x%x", hw->device_id); return -ENOTSUP; @@ -2938,7 +2938,7 @@ ixgbe_dev_set_link_down(struct rte_eth_dev *dev) if (hw->mac.type == ixgbe_mac_82599EB) { #ifdef RTE_LIBRTE_IXGBE_BYPASS if (hw->device_id == IXGBE_DEV_ID_82599_BYPASS) { - /* Not suported in bypass mode */ + /* Not supported in bypass mode */ PMD_INIT_LOG(ERR, "Set link down is not supported " "by device id 0x%x", hw->device_id); return -ENOTSUP; @@ -4603,7 +4603,7 @@ ixgbe_dev_interrupt_action(struct rte_eth_dev *dev) * @param handle * Pointer to interrupt handle. * @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -4659,7 +4659,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param) * @param handle * Pointer to interrupt handle. 
* @param param - * The address of parameter (struct rte_eth_dev *) regsitered before. + * The address of parameter (struct rte_eth_dev *) registered before. * * @return * void @@ -5921,7 +5921,7 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev) /* Configure all RX queues of VF */ for (q_idx = 0; q_idx < dev->data->nb_rx_queues; q_idx++) { /* Force all queue use vector 0, - * as IXGBE_VF_MAXMSIVECOTR = 1 + * as IXGBE_VF_MAXMSIVECTOR = 1 */ ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx); rte_intr_vec_list_index_set(intr_handle, q_idx, @@ -6256,7 +6256,7 @@ ixgbe_inject_5tuple_filter(struct rte_eth_dev *dev, * @param * dev: Pointer to struct rte_eth_dev. * index: the index the filter allocates. - * filter: ponter to the filter that will be added. + * filter: pointer to the filter that will be added. * rx_queue: the queue id the filter assigned to. * * @return @@ -6872,7 +6872,7 @@ ixgbe_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_1588), 0); - /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. */ IXGBE_WRITE_REG(hw, IXGBE_TIMINCA, 0); return 0; diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h index 83e8b5e5..69e0e82a 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.h +++ b/drivers/net/ixgbe/ixgbe_ethdev.h @@ -68,7 +68,7 @@ #define IXGBE_LPBK_NONE 0x0 /* Default value. Loopback is disabled. */ #define IXGBE_LPBK_TX_RX 0x1 /* Tx->Rx loopback operation is enabled. */ /* X540-X550 specific loopback operations */ -#define IXGBE_MII_AUTONEG_ENABLE 0x1000 /* Auto-negociation enable (default = 1) */ +#define IXGBE_MII_AUTONEG_ENABLE 0x1000 /* Auto-negotiation enable (default = 1) */ #define IXGBE_MAX_JUMBO_FRAME_SIZE 0x2600 /* Maximum Jumbo frame size. 
*/ diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c index 78940478..834c1b3f 100644 --- a/drivers/net/ixgbe/ixgbe_fdir.c +++ b/drivers/net/ixgbe/ixgbe_fdir.c @@ -390,7 +390,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev) switch (info->mask.tunnel_type_mask) { case 0: - /* Mask turnnel type */ + /* Mask tunnel type */ fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE; break; case 1: diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c index bdc9d479..2fa2e02b 100644 --- a/drivers/net/ixgbe/ixgbe_flow.c +++ b/drivers/net/ixgbe/ixgbe_flow.c @@ -135,7 +135,7 @@ const struct rte_flow_action *next_no_void_action( } /** - * Please aware there's an asumption for all the parsers. + * Please be aware there's an assumption for all the parsers. * rte_flow_item is using big endian, rte_flow_attr and * rte_flow_action are using CPU order. * Because the pattern is used to describe the packets, @@ -3261,7 +3261,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev, /** * Check if the flow rule is supported by ixgbe. - * It only checkes the format. Don't guarantee the rule can be programmed into + * It only checks the format. It doesn't guarantee the rule can be programmed into * the HW. Because there can be no enough room for the rule.
*/ static int diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c index 944c9f23..c353ae33 100644 --- a/drivers/net/ixgbe/ixgbe_ipsec.c +++ b/drivers/net/ixgbe/ixgbe_ipsec.c @@ -310,7 +310,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev, return -1; } - /* Disable and clear Rx SPI and key table table entryes*/ + /* Disable and clear Rx SPI and key table entries */ reg_val = IPSRXIDX_WRITE | IPSRXIDX_TABLE_SPI | (sa_index << 3); IXGBE_WRITE_REG(hw, IXGBE_IPSRXSPI, 0); IXGBE_WRITE_REG(hw, IXGBE_IPSRXIPIDX, 0); diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c index 9f1bd0a6..c73833b7 100644 --- a/drivers/net/ixgbe/ixgbe_pf.c +++ b/drivers/net/ixgbe/ixgbe_pf.c @@ -242,7 +242,7 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev) /* PFDMA Tx General Switch Control Enables VMDQ loopback */ IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN); - /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ hw->mac.ops.clear_vmdq(hw, 0, IXGBE_CLEAR_VMDQ_ALL); /* clear VMDq map to scan rar 127 */ diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index d7c80d42..99e928a2 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -1954,7 +1954,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, * register. * Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold); @@ -2303,7 +2303,7 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts, * register.
* Update the RDT with the value of the last processed RX descriptor * minus 1, to guarantee that the RDT register is never equal to the - * RDH register, which creates a "full" ring situtation from the + * RDH register, which creates a "full" ring situation from the * hardware point of view... */ if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) { @@ -2666,7 +2666,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, */ tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ? tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); - /* force tx_rs_thresh to adapt an aggresive tx_free_thresh */ + /* force tx_rs_thresh to adapt an aggressive tx_free_thresh */ tx_rs_thresh = (DEFAULT_TX_RS_THRESH + tx_free_thresh > nb_desc) ? nb_desc - tx_free_thresh : DEFAULT_TX_RS_THRESH; if (tx_conf->tx_rs_thresh > 0) @@ -4831,7 +4831,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev) dev->data->port_id); dev->rx_pkt_burst = ixgbe_recv_pkts_lro_bulk_alloc; } else { - PMD_INIT_LOG(DEBUG, "Using Regualr (non-vector, " + PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, " "single allocation) " "Scattered Rx callback " "(port=%d).", @@ -5170,7 +5170,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev) /* * Setup the Checksum Register. * Disable Full-Packet Checksum which is mutually exclusive with RSS. - * Enable IP/L4 checkum computation by hardware if requested to do so. + * Enable IP/L4 checksum computation by hardware if requested to do so. 
*/ rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM); rxcsum |= IXGBE_RXCSUM_PCSD; diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c index 1eed9494..c56f76b3 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c @@ -562,7 +562,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts, desc_to_ptype_v(descs, rxq->pkt_type_mask, &rx_pkts[pos]); - /* C.4 calc avaialbe number of desc */ + /* C.4 calc available number of desc */ var = __builtin_popcountll(_mm_cvtsi128_si64(staterr)); nb_pkts_recd += var; if (likely(var != RTE_IXGBE_DESCS_PER_LOOP)) diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c index 079cf012..42f48a68 100644 --- a/drivers/net/memif/memif_socket.c +++ b/drivers/net/memif/memif_socket.c @@ -726,7 +726,7 @@ memif_msg_receive(struct memif_control_channel *cc) break; case MEMIF_MSG_TYPE_INIT: /* - * This cc does not have an interface asociated with it. + * This cc does not have an interface associated with it. * If suitable interface is found it will be assigned here. */ ret = memif_msg_receive_init(cc, &msg); diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c index e3d523af..59cb5a82 100644 --- a/drivers/net/memif/rte_eth_memif.c +++ b/drivers/net/memif/rte_eth_memif.c @@ -1026,7 +1026,7 @@ memif_regions_init(struct rte_eth_dev *dev) if (ret < 0) return ret; } else { - /* create one memory region contaning rings and buffers */ + /* create one memory region containing rings and buffers */ ret = memif_region_init_shm(dev, /* has buffers */ 1); if (ret < 0) return ret; diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h index 2d0c512f..4023a476 100644 --- a/drivers/net/mlx4/mlx4.h +++ b/drivers/net/mlx4/mlx4.h @@ -74,7 +74,7 @@ enum mlx4_mp_req_type { MLX4_MP_REQ_STOP_RXTX, }; -/* Pameters for IPC. */ +/* Parameters for IPC. 
*/ struct mlx4_mp_param { enum mlx4_mp_req_type type; int port_id; diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c index d606ec8c..ce74c51c 100644 --- a/drivers/net/mlx4/mlx4_ethdev.c +++ b/drivers/net/mlx4/mlx4_ethdev.c @@ -752,7 +752,7 @@ mlx4_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) * Pointer to Ethernet device structure. * * @return - * alwasy 0 on success + * always 0 on success */ int mlx4_stats_reset(struct rte_eth_dev *dev) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index c29fe3d9..36f0fbf0 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -112,7 +112,7 @@ static struct mlx5_indexed_pool_config icfg[] = { * Pointer to RQ channel object, which includes the channel fd * * @param[out] fd - * The file descriptor (representing the intetrrupt) used in this channel. + * The file descriptor (representing the interrupt) used in this channel. * * @return * 0 on successfully setting the fd to non-blocking, non-zero otherwise. @@ -1743,7 +1743,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, priv->drop_queue.hrxq = mlx5_drop_action_create(eth_dev); if (!priv->drop_queue.hrxq) goto error; - /* Port representor shares the same max prioirity with pf port. */ + /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ err = mlx5_flow_discover_priorities(eth_dev); @@ -2300,7 +2300,7 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev, /* * Force standalone bonding * device for ROCE LAG - * confgiurations. + * configurations. 
*/ list[ns].info.master = 0; list[ns].info.representor = 0; @@ -2637,7 +2637,7 @@ mlx5_os_pci_probe(struct mlx5_common_device *cdev) } if (ret) { DRV_LOG(ERR, "Probe of PCI device " PCI_PRI_FMT " " - "aborted due to proding failure of PF %u", + "aborted due to probing failure of PF %u", pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid, pci_dev->addr.function, eth_da.ports[p]); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index aa5f313c..66a2d9b5 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -350,7 +350,7 @@ static const struct mlx5_indexed_pool_config mlx5_ipool_cfg[] = { .free = mlx5_free, .type = "rte_flow_ipool", }, - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID] = { + [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID] = { .size = 0, .need_lock = 1, .type = "mlx5_flow_rss_id_ipool", @@ -1642,7 +1642,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) /* * Free the shared context in last turn, because the cleanup * routines above may use some shared fields, like - * mlx5_os_mac_addr_flush() uses ibdev_path for retrieveing + * mlx5_os_mac_addr_flush() uses ibdev_path for retrieving * ifindex if Netlink fails.
*/ mlx5_free_shared_dev_ctx(priv->sh); @@ -1962,7 +1962,7 @@ mlx5_args_check(const char *key, const char *val, void *opaque) if (tmp != MLX5_RCM_NONE && tmp != MLX5_RCM_LIGHT && tmp != MLX5_RCM_AGGR) { - DRV_LOG(ERR, "Unrecognize %s: \"%s\"", key, val); + DRV_LOG(ERR, "Unrecognized %s: \"%s\"", key, val); rte_errno = EINVAL; return -rte_errno; } @@ -2177,17 +2177,17 @@ mlx5_set_metadata_mask(struct rte_eth_dev *dev) break; } if (sh->dv_mark_mask && sh->dv_mark_mask != mark) - DRV_LOG(WARNING, "metadata MARK mask mismatche %08X:%08X", + DRV_LOG(WARNING, "metadata MARK mask mismatch %08X:%08X", sh->dv_mark_mask, mark); else sh->dv_mark_mask = mark; if (sh->dv_meta_mask && sh->dv_meta_mask != meta) - DRV_LOG(WARNING, "metadata META mask mismatche %08X:%08X", + DRV_LOG(WARNING, "metadata META mask mismatch %08X:%08X", sh->dv_meta_mask, meta); else sh->dv_meta_mask = meta; if (sh->dv_regc0_mask && sh->dv_regc0_mask != reg_c0) - DRV_LOG(WARNING, "metadata reg_c0 mask mismatche %08X:%08X", + DRV_LOG(WARNING, "metadata reg_c0 mask mismatch %08X:%08X", sh->dv_meta_mask, reg_c0); else sh->dv_regc0_mask = reg_c0; diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 84665310..61287800 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -73,7 +73,7 @@ enum mlx5_ipool_index { MLX5_IPOOL_HRXQ, /* Pool for hrxq resource. */ MLX5_IPOOL_MLX5_FLOW, /* Pool for mlx5 flow handle. */ MLX5_IPOOL_RTE_FLOW, /* Pool for rte_flow. */ - MLX5_IPOOL_RSS_EXPANTION_FLOW_ID, /* Pool for Queue/RSS flow ID. */ + MLX5_IPOOL_RSS_EXPANSION_FLOW_ID, /* Pool for Queue/RSS flow ID. */ MLX5_IPOOL_RSS_SHARED_ACTIONS, /* Pool for RSS shared actions. */ MLX5_IPOOL_MTR_POLICY, /* Pool for meter policy resource. */ MLX5_IPOOL_MAX, @@ -751,7 +751,7 @@ struct mlx5_flow_meter_policy { /* drop action for red color. */ uint16_t sub_policy_num; /* Count sub policy tables, 3 bits per domain. 
*/ - struct mlx5_flow_meter_sub_policy **sub_policys[MLX5_MTR_DOMAIN_MAX]; + struct mlx5_flow_meter_sub_policy **sub_policies[MLX5_MTR_DOMAIN_MAX]; /* Sub policy table array must be the end of struct. */ }; @@ -977,7 +977,7 @@ struct mlx5_flow_id_pool { uint32_t base_index; /**< The next index that can be used without any free elements. */ uint32_t *curr; /**< Pointer to the index to pop. */ - uint32_t *last; /**< Pointer to the last element in the empty arrray. */ + uint32_t *last; /**< Pointer to the last element in the empty array. */ uint32_t max_id; /**< Maximum id can be allocated from the pool. */ }; @@ -1014,7 +1014,7 @@ struct mlx5_dev_txpp { void *pp; /* Packet pacing context. */ uint16_t pp_id; /* Packet pacing context index. */ uint16_t ts_n; /* Number of captured timestamps. */ - uint16_t ts_p; /* Pointer to statisticks timestamp. */ + uint16_t ts_p; /* Pointer to statistics timestamp. */ struct mlx5_txpp_ts *tsa; /* Timestamps sliding window stats. */ struct mlx5_txpp_ts ts; /* Cached completion id/timestamp. */ uint32_t sync_lost:1; /* ci/timestamp synchronization lost. */ @@ -1118,7 +1118,7 @@ struct mlx5_flex_parser_devx { uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM]; }; -/* Pattern field dscriptor - how to translate flex pattern into samples. */ +/* Pattern field descriptor - how to translate flex pattern into samples. */ __extension__ struct mlx5_flex_pattern_field { uint16_t width:6; @@ -1169,7 +1169,7 @@ struct mlx5_dev_ctx_shared { /* Shared DV/DR flow data section. */ uint32_t dv_meta_mask; /* flow META metadata supported mask. */ uint32_t dv_mark_mask; /* flow MARK metadata supported mask. */ - uint32_t dv_regc0_mask; /* available bits of metatada reg_c[0]. */ + uint32_t dv_regc0_mask; /* available bits of metadata reg_c[0]. */ void *fdb_domain; /* FDB Direct Rules name space handle. */ void *rx_domain; /* RX Direct Rules name space handle. */ void *tx_domain; /* TX Direct Rules name space handle. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index f34e4b88..7e5ce5a2 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -1206,7 +1206,7 @@ flow_rxq_tunnel_ptype_update(struct mlx5_rxq_ctrl *rxq_ctrl) } /** - * Set the Rx queue flags (Mark/Flag and Tunnel Ptypes) according to the devive + * Set the Rx queue flags (Mark/Flag and Tunnel Ptypes) according to the device * flow. * * @param[in] dev @@ -3008,7 +3008,7 @@ mlx5_flow_validate_item_geneve_opt(const struct rte_flow_item *item, if ((uint32_t)spec->option_len > MLX5_GENEVE_OPTLEN_MASK) return rte_flow_error_set (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, - "Geneve TLV opt length exceeeds the limit (31)"); + "Geneve TLV opt length exceeds the limit (31)"); /* Check if class type and length masks are full. */ if (full_mask.option_class != mask->option_class || full_mask.option_type != mask->option_type || @@ -3957,7 +3957,7 @@ find_graph_root(uint32_t rss_level) * subflow. * * @param[in] dev_flow - * Pointer the created preifx subflow. + * Pointer to the created prefix subflow. * * @return * The layers get from prefix subflow. @@ -4284,7 +4284,7 @@ flow_dv_mreg_create_cb(void *tool_ctx, void *cb_ctx) [3] = { .type = RTE_FLOW_ACTION_TYPE_END, }, }; - /* Fill the register fileds in the flow. */ + /* Fill the register fields in the flow. */ ret = mlx5_flow_get_reg_id(dev, MLX5_FLOW_MARK, 0, error); if (ret < 0) return NULL; @@ -4353,7 +4353,7 @@ flow_dv_mreg_create_cb(void *tool_ctx, void *cb_ctx) /* * The copy Flows are not included in any list. There * ones are referenced from other Flows and can not - * be applied, removed, deleted in ardbitrary order + * be applied, removed, deleted in arbitrary order * by list traversing. 
*/ mcp_res->rix_flow = flow_list_create(dev, MLX5_FLOW_TYPE_MCP, @@ -4810,7 +4810,7 @@ flow_create_split_inner(struct rte_eth_dev *dev, /* * If dev_flow is as one of the suffix flow, some actions in suffix * flow may need some user defined item layer flags, and pass the - * Metadate rxq mark flag to suffix flow as well. + * Metadata rxq mark flag to suffix flow as well. */ if (flow_split_info->prefix_layers) dev_flow->handle->layers = flow_split_info->prefix_layers; @@ -4933,7 +4933,7 @@ get_meter_sub_policy(struct rte_eth_dev *dev, attr->transfer ? MLX5_MTR_DOMAIN_TRANSFER : (attr->egress ? MLX5_MTR_DOMAIN_EGRESS : MLX5_MTR_DOMAIN_INGRESS); - sub_policy = policy->sub_policys[mtr_domain][0]; + sub_policy = policy->sub_policies[mtr_domain][0]; } if (!sub_policy) rte_flow_error_set(error, EINVAL, @@ -5301,7 +5301,7 @@ flow_mreg_split_qrss_prep(struct rte_eth_dev *dev, * IDs. */ mlx5_ipool_malloc(priv->sh->ipool - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], &flow_id); + [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID], &flow_id); if (!flow_id) return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION, @@ -5359,7 +5359,7 @@ flow_mreg_split_qrss_prep(struct rte_eth_dev *dev, * @param[out] error * Perform verbose error reporting if not NULL. * @param[in] encap_idx - * The encap action inndex. + * The encap action index. * * @return * 0 on success, negative value otherwise @@ -5628,7 +5628,7 @@ flow_sample_split_prep(struct rte_eth_dev *dev, if (ret < 0) return ret; mlx5_ipool_malloc(priv->sh->ipool - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], &tag_id); + [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID], &tag_id); *set_tag = (struct mlx5_rte_flow_action_set_tag) { .id = ret, .data = tag_id, @@ -5899,7 +5899,7 @@ flow_create_split_metadata(struct rte_eth_dev *dev, * These ones are included into parent flow list and will be destroyed * by flow_drv_destroy. 
*/ - mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANSION_FLOW_ID], qrss_id); mlx5_free(ext_actions); return ret; @@ -6884,7 +6884,7 @@ flow_list_destroy(struct rte_eth_dev *dev, enum mlx5_flow_type type, * @param type * Flow type to be flushed. * @param active - * If flushing is called avtively. + * If flushing is called actively. */ void mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type, @@ -8531,7 +8531,7 @@ mlx5_flow_dev_dump_sh_all(struct rte_eth_dev *dev, * Perform verbose error reporting if not NULL. PMDs initialize this * structure in case of error only. * @return - * 0 on success, a nagative value otherwise. + * 0 on success, a negative value otherwise. */ int mlx5_flow_dev_dump(struct rte_eth_dev *dev, struct rte_flow *flow_idx, @@ -9009,7 +9009,7 @@ mlx5_get_tof(const struct rte_flow_item *item, } /** - * tunnel offload functionalilty is defined for DV environment only + * tunnel offload functionality is defined for DV environment only */ #ifdef HAVE_IBV_FLOW_DV_SUPPORT __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 1f54649c..8c131d61 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -598,7 +598,7 @@ struct mlx5_flow_tbl_data_entry { const struct mlx5_flow_tunnel *tunnel; uint32_t group_id; uint32_t external:1; - uint32_t tunnel_offload:1; /* Tunnel offlod table or not. */ + uint32_t tunnel_offload:1; /* Tunnel offload table or not. */ uint32_t is_egress:1; /**< Egress table. */ uint32_t is_transfer:1; /**< Transfer table. */ uint32_t dummy:1; /**< DR table. */ @@ -696,8 +696,8 @@ struct mlx5_flow_handle { /**< Bit-fields of present layers, see MLX5_FLOW_LAYER_*. */ void *drv_flow; /**< pointer to driver flow object. */ uint32_t split_flow_id:27; /**< Sub flow unique match flow id. */ - uint32_t is_meter_flow_id:1; /**< Indate if flow_id is for meter. 
*/ - uint32_t mark:1; /**< Metadate rxq mark flag. */ + uint32_t is_meter_flow_id:1; /**< Indicate if flow_id is for meter. */ + uint32_t mark:1; /**< Metadata rxq mark flag. */ uint32_t fate_action:3; /**< Fate action type. */ uint32_t flex_item; /**< referenced Flex Item bitmask. */ union { diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c index ddf4328d..cd01e0c3 100644 --- a/drivers/net/mlx5/mlx5_flow_aso.c +++ b/drivers/net/mlx5/mlx5_flow_aso.c @@ -981,13 +981,13 @@ mlx5_aso_ct_sq_enqueue_single(struct mlx5_dev_ctx_shared *sh, MLX5_SET(conn_track_aso, desg, sack_permitted, profile->selective_ack); MLX5_SET(conn_track_aso, desg, challenged_acked, profile->challenge_ack_passed); - /* Heartbeat, retransmission_counter, retranmission_limit_exceeded: 0 */ + /* Heartbeat, retransmission_counter, retransmission_limit_exceeded: 0 */ MLX5_SET(conn_track_aso, desg, heartbeat, 0); MLX5_SET(conn_track_aso, desg, max_ack_window, profile->max_ack_window); MLX5_SET(conn_track_aso, desg, retransmission_counter, 0); - MLX5_SET(conn_track_aso, desg, retranmission_limit_exceeded, 0); - MLX5_SET(conn_track_aso, desg, retranmission_limit, + MLX5_SET(conn_track_aso, desg, retransmission_limit_exceeded, 0); + MLX5_SET(conn_track_aso, desg, retransmission_limit, profile->retransmission_limit); MLX5_SET(conn_track_aso, desg, reply_direction_tcp_scale, profile->reply_dir.scale); @@ -1312,7 +1312,7 @@ mlx5_aso_ct_obj_analyze(struct rte_flow_action_conntrack *profile, profile->max_ack_window = MLX5_GET(conn_track_aso, wdata, max_ack_window); profile->retransmission_limit = MLX5_GET(conn_track_aso, wdata, - retranmission_limit); + retransmission_limit); profile->last_window = MLX5_GET(conn_track_aso, wdata, last_win); profile->last_direction = MLX5_GET(conn_track_aso, wdata, last_dir); profile->last_index = (enum rte_flow_conntrack_tcp_last_index) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 3da122cb..f43781f7 100644 
--- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -2032,7 +2032,7 @@ flow_dv_validate_item_meta(struct rte_eth_dev *dev __rte_unused, if (reg == REG_NON) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, - "unavalable extended metadata register"); + "unavailable extended metadata register"); if (reg == REG_B) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, item, @@ -3205,7 +3205,7 @@ flow_dv_validate_action_set_meta(struct rte_eth_dev *dev, if (reg == REG_NON) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, action, - "unavalable extended metadata register"); + "unavailable extended metadata register"); if (reg != REG_A && reg != REG_B) { struct mlx5_priv *priv = dev->data->dev_private; @@ -5145,7 +5145,7 @@ flow_dv_modify_hdr_action_max(struct rte_eth_dev *dev __rte_unused, * Pointer to error structure. * * @return - * 0 on success, a negative errno value otherwise and rte_ernno is set. + * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int mlx5_flow_validate_action_meter(struct rte_eth_dev *dev, @@ -7858,7 +7858,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, * - Explicit decap action is prohibited by the tunnel offload API. * - Drop action in tunnel steer rule is prohibited by the API. * - Application cannot use MARK action because it's value can mask - * tunnel default miss nitification. + * tunnel default miss notification. * - JUMP in tunnel match rule has no support in current PMD * implementation. * - TAG & META are reserved for future uses. @@ -9184,7 +9184,7 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, geneve_opt_v->option_type && geneve_opt_resource->length == geneve_opt_v->option_len) { - /* We already have GENVE TLV option obj allocated. */ + /* We already have GENEVE TLV option obj allocated. 
*/ __atomic_fetch_add(&geneve_opt_resource->refcnt, 1, __ATOMIC_RELAXED); } else { @@ -10226,7 +10226,7 @@ __flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria) * Check flow matching criteria first, subtract misc5/4 length if flow * doesn't own misc5/4 parameters. In some old rdma-core releases, * misc5/4 are not supported, and matcher creation failure is expected - * w/o subtration. If misc5 is provided, misc4 must be counted in since + * w/o subtraction. If misc5 is provided, misc4 must be counted in since * misc5 is right after misc4. */ if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) { @@ -11425,7 +11425,7 @@ flow_dv_dest_array_create_cb(void *tool_ctx __rte_unused, void *cb_ctx) goto error; } } - /* create a dest array actioin */ + /* create a dest array action */ ret = mlx5_os_flow_dr_create_flow_action_dest_array (domain, resource->num_of_dest, @@ -14538,7 +14538,7 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) else if (dev_handle->split_flow_id && !dev_handle->is_meter_flow_id) mlx5_ipool_free(priv->sh->ipool - [MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], + [MLX5_IPOOL_RSS_EXPANSION_FLOW_ID], dev_handle->split_flow_id); mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MLX5_FLOW], tmp_idx); @@ -15311,7 +15311,7 @@ flow_dv_destroy_policy_rules(struct rte_eth_dev *dev, (MLX5_MTR_SUB_POLICY_NUM_SHIFT * i)) & MLX5_MTR_SUB_POLICY_NUM_MASK; for (j = 0; j < sub_policy_num; j++) { - sub_policy = mtr_policy->sub_policys[i][j]; + sub_policy = mtr_policy->sub_policies[i][j]; if (sub_policy) __flow_dv_destroy_sub_policy_rules(dev, sub_policy); @@ -15649,7 +15649,7 @@ __flow_dv_create_domain_policy_acts(struct rte_eth_dev *dev, (1 << MLX5_SCALE_FLOW_GROUP_BIT), }; struct mlx5_flow_meter_sub_policy *sub_policy = - mtr_policy->sub_policys[domain][0]; + mtr_policy->sub_policies[domain][0]; if (i >= MLX5_MTR_RTE_COLORS) return -rte_mtr_error_set(error, @@ -16504,7 +16504,7 @@ __flow_dv_create_policy_acts_rules(struct rte_eth_dev *dev, 
next_fm->policy_id, NULL); MLX5_ASSERT(next_policy); next_sub_policy = - next_policy->sub_policys[domain][0]; + next_policy->sub_policies[domain][0]; } tbl_data = container_of(next_sub_policy->tbl_rsc, @@ -16559,7 +16559,7 @@ flow_dv_create_policy_rules(struct rte_eth_dev *dev, continue; /* Prepare actions list and create policy rules. */ if (__flow_dv_create_policy_acts_rules(dev, mtr_policy, - mtr_policy->sub_policys[i][0], i)) { + mtr_policy->sub_policies[i][0], i)) { DRV_LOG(ERR, "Failed to create policy action " "list per domain."); return -1; @@ -16898,7 +16898,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) { if (rss_desc[i] && hrxq_idx[i] != - mtr_policy->sub_policys[domain][j]->rix_hrxq[i]) + mtr_policy->sub_policies[domain][j]->rix_hrxq[i]) break; } if (i >= MLX5_MTR_RTE_COLORS) { @@ -16910,13 +16910,13 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, for (i = 0; i < MLX5_MTR_RTE_COLORS; i++) mlx5_hrxq_release(dev, hrxq_idx[i]); *is_reuse = true; - return mtr_policy->sub_policys[domain][j]; + return mtr_policy->sub_policies[domain][j]; } } /* Create sub policy. */ - if (!mtr_policy->sub_policys[domain][0]->rix_hrxq[0]) { + if (!mtr_policy->sub_policies[domain][0]->rix_hrxq[0]) { /* Reuse the first pre-allocated sub_policy. 
*/ - sub_policy = mtr_policy->sub_policys[domain][0]; + sub_policy = mtr_policy->sub_policies[domain][0]; sub_policy_idx = sub_policy->idx; } else { sub_policy = mlx5_ipool_zmalloc @@ -16967,7 +16967,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, "rules for ingress domain."); goto rss_sub_policy_error; } - if (sub_policy != mtr_policy->sub_policys[domain][0]) { + if (sub_policy != mtr_policy->sub_policies[domain][0]) { i = (mtr_policy->sub_policy_num >> (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & MLX5_MTR_SUB_POLICY_NUM_MASK; @@ -16975,7 +16975,7 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, DRV_LOG(ERR, "No free sub-policy slot."); goto rss_sub_policy_error; } - mtr_policy->sub_policys[domain][i] = sub_policy; + mtr_policy->sub_policies[domain][i] = sub_policy; i++; mtr_policy->sub_policy_num &= ~(MLX5_MTR_SUB_POLICY_NUM_MASK << (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)); @@ -16989,11 +16989,11 @@ __flow_dv_meter_get_rss_sub_policy(struct rte_eth_dev *dev, rss_sub_policy_error: if (sub_policy) { __flow_dv_destroy_sub_policy_rules(dev, sub_policy); - if (sub_policy != mtr_policy->sub_policys[domain][0]) { + if (sub_policy != mtr_policy->sub_policies[domain][0]) { i = (mtr_policy->sub_policy_num >> (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & MLX5_MTR_SUB_POLICY_NUM_MASK; - mtr_policy->sub_policys[domain][i] = NULL; + mtr_policy->sub_policies[domain][i] = NULL; mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], sub_policy->idx); } @@ -17078,11 +17078,11 @@ flow_dv_meter_sub_policy_rss_prepare(struct rte_eth_dev *dev, sub_policy = sub_policies[--j]; mtr_policy = sub_policy->main_policy; __flow_dv_destroy_sub_policy_rules(dev, sub_policy); - if (sub_policy != mtr_policy->sub_policys[domain][0]) { + if (sub_policy != mtr_policy->sub_policies[domain][0]) { sub_policy_num = (mtr_policy->sub_policy_num >> (MLX5_MTR_SUB_POLICY_NUM_SHIFT * domain)) & MLX5_MTR_SUB_POLICY_NUM_MASK; - mtr_policy->sub_policys[domain][sub_policy_num - 1] = + 
mtr_policy->sub_policies[domain][sub_policy_num - 1] = NULL; sub_policy_num--; mtr_policy->sub_policy_num &= @@ -17157,7 +17157,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev, if (!next_fm->drop_cnt) goto exit; color_reg_c_idx = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, error); - sub_policy = mtr_policy->sub_policys[domain][0]; + sub_policy = mtr_policy->sub_policies[domain][0]; for (i = 0; i < RTE_COLORS; i++) { bool rule_exist = false; struct mlx5_meter_policy_action_container *act_cnt; @@ -17184,7 +17184,7 @@ flow_dv_meter_hierarchy_rule_create(struct rte_eth_dev *dev, next_policy = mlx5_flow_meter_policy_find(dev, next_fm->policy_id, NULL); MLX5_ASSERT(next_policy); - next_sub_policy = next_policy->sub_policys[domain][0]; + next_sub_policy = next_policy->sub_policies[domain][0]; tbl_data = container_of(next_sub_policy->tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl); act_cnt = &mtr_policy->act_cnt[i]; @@ -17277,13 +17277,13 @@ flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev, new_policy_num = sub_policy_num; for (j = 0; j < sub_policy_num; j++) { sub_policy = - mtr_policy->sub_policys[domain][j]; + mtr_policy->sub_policies[domain][j]; if (sub_policy) { __flow_dv_destroy_sub_policy_rules(dev, sub_policy); if (sub_policy != - mtr_policy->sub_policys[domain][0]) { - mtr_policy->sub_policys[domain][j] = + mtr_policy->sub_policies[domain][0]) { + mtr_policy->sub_policies[domain][j] = NULL; mlx5_ipool_free (priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], @@ -17303,7 +17303,7 @@ flow_dv_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev, } break; case MLX5_FLOW_FATE_QUEUE: - sub_policy = mtr_policy->sub_policys[domain][0]; + sub_policy = mtr_policy->sub_policies[domain][0]; __flow_dv_destroy_sub_policy_rules(dev, sub_policy); break; diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c index 64867dc9..9413d4d8 100644 --- a/drivers/net/mlx5/mlx5_flow_flex.c +++ b/drivers/net/mlx5/mlx5_flow_flex.c @@ -205,7 +205,7 @@ 
mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v, * @param dev * Ethernet device to translate flex item on. * @param[in, out] matcher - * Flow matcher to confgiure + * Flow matcher to configure * @param[in, out] key * Flow matcher value. * @param[in] item @@ -457,7 +457,7 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr, if (field->offset_shift > 15 || field->offset_shift < 0) return rte_flow_error_set (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "header length field shift exceeeds limit"); + "header length field shift exceeds limit"); node->header_length_field_shift = field->offset_shift; node->header_length_field_offset = field->offset_base; } diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c index f4a7b697..be693e10 100644 --- a/drivers/net/mlx5/mlx5_flow_meter.c +++ b/drivers/net/mlx5/mlx5_flow_meter.c @@ -251,7 +251,7 @@ mlx5_flow_meter_xir_man_exp_calc(int64_t xir, uint8_t *man, uint8_t *exp) uint8_t _exp = 0; uint64_t m, e; - /* Special case xir == 0 ? both exp and matissa are 0. */ + /* Special case xir == 0 ? both exp and mantissa are 0. */ if (xir == 0) { *man = 0; *exp = 0; @@ -287,7 +287,7 @@ mlx5_flow_meter_xbs_man_exp_calc(uint64_t xbs, uint8_t *man, uint8_t *exp) int _exp; double _man; - /* Special case xbs == 0 ? both exp and matissa are 0. */ + /* Special case xbs == 0 ? both exp and mantissa are 0. */ if (xbs == 0) { *man = 0; *exp = 0; @@ -305,7 +305,7 @@ mlx5_flow_meter_xbs_man_exp_calc(uint64_t xbs, uint8_t *man, uint8_t *exp) * Fill the prm meter parameter. * * @param[in,out] fmp - * Pointer to meter profie to be converted. + * Pointer to meter profile to be converted. * @param[out] error * Pointer to the error structure. 
* @@ -696,7 +696,7 @@ __mlx5_flow_meter_policy_delete(struct rte_eth_dev *dev, MLX5_MTR_SUB_POLICY_NUM_MASK; if (sub_policy_num) { for (j = 0; j < sub_policy_num; j++) { - sub_policy = mtr_policy->sub_policys[i][j]; + sub_policy = mtr_policy->sub_policies[i][j]; if (sub_policy) mlx5_ipool_free (priv->sh->ipool[MLX5_IPOOL_MTR_POLICY], @@ -847,10 +847,10 @@ mlx5_flow_meter_policy_add(struct rte_eth_dev *dev, policy_idx = sub_policy_idx; sub_policy->main_policy_id = 1; } - mtr_policy->sub_policys[i] = + mtr_policy->sub_policies[i] = (struct mlx5_flow_meter_sub_policy **) ((uint8_t *)mtr_policy + policy_size); - mtr_policy->sub_policys[i][0] = sub_policy; + mtr_policy->sub_policies[i][0] = sub_policy; sub_policy_num = (mtr_policy->sub_policy_num >> (MLX5_MTR_SUB_POLICY_NUM_SHIFT * i)) & MLX5_MTR_SUB_POLICY_NUM_MASK; @@ -1101,7 +1101,7 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv, if (ret) return ret; } - /* Update succeedded modify meter parameters. */ + /* Update succeeded modify meter parameters. */ if (modify_bits & MLX5_FLOW_METER_OBJ_MODIFY_FIELD_ACTIVE) fm->active_state = !!active_state; } @@ -1615,7 +1615,7 @@ mlx5_flow_meter_profile_update(struct rte_eth_dev *dev, return -rte_mtr_error_set(error, -ret, RTE_MTR_ERROR_TYPE_MTR_PARAMS, NULL, "Failed to update meter" - " parmeters in hardware."); + " parameters in hardware."); } old_fmp->ref_cnt--; fmp->ref_cnt++; diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c index e8215f73..c8d2f407 100644 --- a/drivers/net/mlx5/mlx5_rx.c +++ b/drivers/net/mlx5/mlx5_rx.c @@ -178,7 +178,7 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, * Pointer to the device structure. * * @param rx_queue_id - * Rx queue identificatior. + * Rx queue identification. * * @param mode * Pointer to the burts mode information. 
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index f77d42de..be5f4da1 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -2152,7 +2152,7 @@ mlx5_rxq_get_hairpin_conf(struct rte_eth_dev *dev, uint16_t idx) * Number of queues in the array. * * @return - * 1 if all queues in indirection table match 0 othrwise. + * 1 if all queues in indirection table match 0 otherwise. */ static int mlx5_ind_table_obj_match_queues(const struct mlx5_ind_table_obj *ind_tbl, @@ -2586,7 +2586,7 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx, if (hrxq->standalone) { /* * Replacement of indirection table unsupported for - * stanalone hrxq objects (used by shared RSS). + * standalone hrxq objects (used by shared RSS). */ rte_errno = ENOTSUP; return -rte_errno; diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h index 423e2295..f6e434c1 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h @@ -1230,7 +1230,7 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, uint32_t mask = rxq->flow_meta_port_mask; uint32_t metadata; - /* This code is subject for futher optimization. */ + /* This code is subject for further optimization. */ metadata = rte_be_to_cpu_32 (cq[pos].flow_table_metadata) & mask; *RTE_MBUF_DYNFIELD(pkts[pos], offs, uint32_t *) = diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h index b1d16baa..f7bbde4e 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h @@ -839,7 +839,7 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, } } if (rxq->dynf_meta) { - /* This code is subject for futher optimization. */ + /* This code is subject for further optimization. 
*/ int32_t offs = rxq->flow_meta_offset; uint32_t mask = rxq->flow_meta_port_mask; diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h index f3d83838..185d2695 100644 --- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h +++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h @@ -772,7 +772,7 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq, } } if (rxq->dynf_meta) { - /* This code is subject for futher optimization. */ + /* This code is subject for further optimization. */ int32_t offs = rxq->flow_meta_offset; uint32_t mask = rxq->flow_meta_port_mask; diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c index 5492d64c..fd2cf209 100644 --- a/drivers/net/mlx5/mlx5_tx.c +++ b/drivers/net/mlx5/mlx5_tx.c @@ -728,7 +728,7 @@ mlx5_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id, * Pointer to the device structure. * * @param tx_queue_id - * Tx queue identificatior. + * Tx queue identification. * * @param mode * Pointer to the burts mode information. diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index cf3db894..e2dcbafc 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -55,7 +55,7 @@ extern int mlx5_logtype; /* * For the case which data is linked with sequence increased index, the - * array table will be more efficiect than hash table once need to serarch + * array table will be more efficient than hash table once need to search * one data entry in large numbers of entries. Since the traditional hash * tables has fixed table size, when huge numbers of data saved to the hash * table, it also comes lots of hash conflict. 
diff --git a/drivers/net/mlx5/windows/mlx5_flow_os.c b/drivers/net/mlx5/windows/mlx5_flow_os.c index c4d57907..7bb4c459 100644 --- a/drivers/net/mlx5/windows/mlx5_flow_os.c +++ b/drivers/net/mlx5/windows/mlx5_flow_os.c @@ -400,7 +400,7 @@ mlx5_flow_os_set_specific_workspace(struct mlx5_flow_workspace *data) /* * set_specific_workspace when current value is NULL * can happen only once per thread, mark this thread in - * linked list to be able to release reasorces later on. + * linked list to be able to release resources later on. */ err = mlx5_add_workspace_to_list(data); if (err) { diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c index dec4b923..f1437249 100644 --- a/drivers/net/mlx5/windows/mlx5_os.c +++ b/drivers/net/mlx5/windows/mlx5_os.c @@ -226,7 +226,7 @@ mlx5_os_free_shared_dr(struct mlx5_priv *priv) * Pointer to RQ channel object, which includes the channel fd * * @param[out] fd - * The file descriptor (representing the intetrrupt) used in this channel. + * The file descriptor (representing the interrupt) used in this channel. * * @return * 0 on successfully setting the fd to non-blocking, non-zero otherwise. diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c index 10fe6d82..eef016aa 100644 --- a/drivers/net/mvneta/mvneta_ethdev.c +++ b/drivers/net/mvneta/mvneta_ethdev.c @@ -247,7 +247,7 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) (mru + MRVL_NETA_PKT_OFFS > mbuf_data_size)) { mru = mbuf_data_size - MRVL_NETA_PKT_OFFS; mtu = MRVL_NETA_MRU_TO_MTU(mru); - MVNETA_LOG(WARNING, "MTU too big, max MTU possible limitted by" + MVNETA_LOG(WARNING, "MTU too big, max MTU possible limited by" " current mbuf size: %u. 
Set MTU to %u, MRU to %u", mbuf_data_size, mtu, mru); } diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c index 9c7fe13f..f86701d2 100644 --- a/drivers/net/mvpp2/mrvl_ethdev.c +++ b/drivers/net/mvpp2/mrvl_ethdev.c @@ -579,7 +579,7 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (mru - RTE_ETHER_CRC_LEN + MRVL_PKT_OFFS > mbuf_data_size) { mru = mbuf_data_size + RTE_ETHER_CRC_LEN - MRVL_PKT_OFFS; mtu = MRVL_PP2_MRU_TO_MTU(mru); - MRVL_LOG(WARNING, "MTU too big, max MTU possible limitted " + MRVL_LOG(WARNING, "MTU too big, max MTU possible limited " "by current mbuf size: %u. Set MTU to %u, MRU to %u", mbuf_data_size, mtu, mru); } diff --git a/drivers/net/mvpp2/mrvl_qos.c b/drivers/net/mvpp2/mrvl_qos.c index dbfc3b5d..99f0ee56 100644 --- a/drivers/net/mvpp2/mrvl_qos.c +++ b/drivers/net/mvpp2/mrvl_qos.c @@ -301,7 +301,7 @@ get_entry_values(const char *entry, uint8_t *tab, } /** - * Parse Traffic Class'es mapping configuration. + * Parse Traffic Classes mapping configuration. * * @param file Config file handle. * @param port Which port to look for. @@ -736,7 +736,7 @@ mrvl_get_cfg(const char *key __rte_unused, const char *path, void *extra_args) /* MRVL_TOK_START_HDR replaces MRVL_TOK_DSA_MODE parameter. * MRVL_TOK_DSA_MODE will be supported for backward - * compatibillity. + * compatibility. */ entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_START_HDR); diff --git a/drivers/net/netvsc/hn_nvs.c b/drivers/net/netvsc/hn_nvs.c index 89dbba6c..a29ac18f 100644 --- a/drivers/net/netvsc/hn_nvs.c +++ b/drivers/net/netvsc/hn_nvs.c @@ -229,7 +229,7 @@ hn_nvs_conn_rxbuf(struct hn_data *hv) hv->rxbuf_section_cnt = resp.nvs_sect[0].slotcnt; /* - * Pimary queue's rxbuf_info is not allocated at creation time. + * Primary queue's rxbuf_info is not allocated at creation time. * Now we can allocate it after we figure out the slotcnt. 
*/ hv->primary->rxbuf_info = rte_calloc("HN_RXBUF_INFO", diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c index 028f176c..50ca1710 100644 --- a/drivers/net/netvsc/hn_rxtx.c +++ b/drivers/net/netvsc/hn_rxtx.c @@ -578,7 +578,7 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb, rte_iova_t iova; /* - * Build an external mbuf that points to recveive area. + * Build an external mbuf that points to the receive area. * Use refcount to handle multiple packets in same * receive buffer section. */ @@ -1031,7 +1031,7 @@ hn_dev_rx_queue_count(void *rx_queue) * returns: * - -EINVAL - offset outside of ring * - RTE_ETH_RX_DESC_AVAIL - no data available yet - * - RTE_ETH_RX_DESC_DONE - data is waiting in stagin ring + * - RTE_ETH_RX_DESC_DONE - data is waiting in staging ring */ int hn_dev_rx_queue_status(void *arg, uint16_t offset) { diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c index fead8eba..ebb9c601 100644 --- a/drivers/net/netvsc/hn_vf.c +++ b/drivers/net/netvsc/hn_vf.c @@ -103,7 +103,7 @@ static void hn_remove_delayed(void *args) struct rte_device *dev = rte_eth_devices[port_id].device; int ret; - /* Tell VSP to switch data path to synthentic */ + /* Tell VSP to switch data path to synthetic */ hn_vf_remove(hv); PMD_DRV_LOG(NOTICE, "Start to remove port %d", port_id); diff --git a/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h b/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h index 0e03948e..394a7628 100644 --- a/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h +++ b/drivers/net/nfp/nfpcore/nfp-common/nfp_resid.h @@ -63,7 +63,7 @@ * Wildcard indicating a CPP read or write action * * The action used will be either read or write depending on whether a read or - * write instruction/call is performed on the NFP_CPP_ID. It is recomended that + * write instruction/call is performed on the NFP_CPP_ID. 
It is recommended that * the RW action is used even if all actions to be performed on a NFP_CPP_ID are * known to be only reads or writes. Doing so will in many cases save NFP CPP * internal software resources. @@ -405,7 +405,7 @@ int nfp_idstr2meid(int chip_family, const char *s, const char **endptr); * @param chip_family Chip family ID * @param s A string of format "iX.anything" or "iX" * @param endptr If non-NULL, *endptr will point to the trailing - * striong after the ME ID part of the string, which + * string after the ME ID part of the string, which * is either an empty string or the first character * after the separating period. * @return The island ID on succes, -1 on error. @@ -425,7 +425,7 @@ int nfp_idstr2island(int chip_family, const char *s, const char **endptr); * @param chip_family Chip family ID * @param s A string of format "meX.anything" or "meX" * @param endptr If non-NULL, *endptr will point to the trailing - * striong after the ME ID part of the string, which + * string after the ME ID part of the string, which * is either an empty string or the first character * after the separating period. * @return The ME number on succes, -1 on error. diff --git a/drivers/net/nfp/nfpcore/nfp_cppcore.c b/drivers/net/nfp/nfpcore/nfp_cppcore.c index f9104938..37799af5 100644 --- a/drivers/net/nfp/nfpcore/nfp_cppcore.c +++ b/drivers/net/nfp/nfpcore/nfp_cppcore.c @@ -202,7 +202,7 @@ nfp_cpp_area_alloc(struct nfp_cpp *cpp, uint32_t dest, * @address: start address on CPP target * @size: size of area * - * Allocate and initilizae a CPP area structure, and lock it down so + * Allocate and initialize a CPP area structure, and lock it down so * that it can be accessed directly. * * NOTE: @address and @size must be 32-bit aligned values. 
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.h b/drivers/net/nfp/nfpcore/nfp_nsp.h index c9c7b0d0..e74cdeb1 100644 --- a/drivers/net/nfp/nfpcore/nfp_nsp.h +++ b/drivers/net/nfp/nfpcore/nfp_nsp.h @@ -272,7 +272,7 @@ int __nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes); * @br_primary: branch id of primary bootloader * @br_secondary: branch id of secondary bootloader * @br_nsp: branch id of NSP - * @primary: version of primarary bootloader + * @primary: version of primary bootloader * @secondary: version id of secondary bootloader * @nsp: version id of NSP * @sensor_mask: mask of present sensors available on NIC diff --git a/drivers/net/nfp/nfpcore/nfp_resource.c b/drivers/net/nfp/nfpcore/nfp_resource.c index dd41fa4d..7b5630fd 100644 --- a/drivers/net/nfp/nfpcore/nfp_resource.c +++ b/drivers/net/nfp/nfpcore/nfp_resource.c @@ -207,7 +207,7 @@ nfp_resource_acquire(struct nfp_cpp *cpp, const char *name) * nfp_resource_release() - Release a NFP Resource handle * @res: NFP Resource handle * - * NOTE: This function implictly unlocks the resource handle + * NOTE: This function implicitly unlocks the resource handle */ void nfp_resource_release(struct nfp_resource *res) diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.c b/drivers/net/nfp/nfpcore/nfp_rtsym.c index cb7d83db..2feca2ed 100644 --- a/drivers/net/nfp/nfpcore/nfp_rtsym.c +++ b/drivers/net/nfp/nfpcore/nfp_rtsym.c @@ -236,7 +236,7 @@ nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl, const char *name) * nfp_rtsym_read_le() - Read a simple unsigned scalar value from symbol * @rtbl: NFP RTsym table * @name: Symbol name - * @error: Poniter to error code (optional) + * @error: Pointer to error code (optional) * * Lookup a symbol, map, read it and return it's value. Value of the symbol * will be interpreted as a simple little-endian unsigned value. 
Symbol can diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c index 981592f7..2534c175 100644 --- a/drivers/net/ngbe/ngbe_ethdev.c +++ b/drivers/net/ngbe/ngbe_ethdev.c @@ -594,12 +594,12 @@ ngbe_vlan_tpid_set(struct rte_eth_dev *dev, { struct ngbe_hw *hw = ngbe_dev_hw(dev); int ret = 0; - uint32_t portctrl, vlan_ext, qinq; + uint32_t portctl, vlan_ext, qinq; - portctrl = rd32(hw, NGBE_PORTCTL); + portctl = rd32(hw, NGBE_PORTCTL); - vlan_ext = (portctrl & NGBE_PORTCTL_VLANEXT); - qinq = vlan_ext && (portctrl & NGBE_PORTCTL_QINQ); + vlan_ext = (portctl & NGBE_PORTCTL_VLANEXT); + qinq = vlan_ext && (portctl & NGBE_PORTCTL_QINQ); switch (vlan_type) { case RTE_ETH_VLAN_TYPE_INNER: if (vlan_ext) { @@ -983,7 +983,7 @@ ngbe_dev_start(struct rte_eth_dev *dev) } } - /* confiugre MSI-X for sleep until Rx interrupt */ + /* configure MSI-X for sleep until Rx interrupt */ ngbe_configure_msix(dev); /* initialize transmission unit */ @@ -2641,7 +2641,7 @@ ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction, wr32(hw, NGBE_IVARMISC, tmp); } else { /* rx or tx causes */ - /* Workround for ICR lost */ + /* Workaround for ICR lost */ idx = ((16 * (queue & 1)) + (8 * direction)); tmp = rd32(hw, NGBE_IVAR(queue >> 1)); tmp &= ~(0xFF << idx); @@ -2893,7 +2893,7 @@ ngbe_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. */ wr32(hw, NGBE_ETFLT(NGBE_ETF_ID_1588), 0); - /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. 
*/ wr32(hw, NGBE_TSTIMEINC, 0); return 0; diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c index 7f9c04fb..12a18de3 100644 --- a/drivers/net/ngbe/ngbe_pf.c +++ b/drivers/net/ngbe/ngbe_pf.c @@ -163,7 +163,7 @@ int ngbe_pf_host_configure(struct rte_eth_dev *eth_dev) wr32(hw, NGBE_PSRCTL, NGBE_PSRCTL_LBENA); - /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ hw->mac.clear_vmdq(hw, 0, BIT_MASK32); /* clear VMDq map to scan rar 31 */ diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c index 4f1e368c..b47472eb 100644 --- a/drivers/net/octeontx/octeontx_ethdev.c +++ b/drivers/net/octeontx/octeontx_ethdev.c @@ -1090,7 +1090,7 @@ octeontx_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx, /* Verify queue index */ if (qidx >= dev->data->nb_rx_queues) { - octeontx_log_err("QID %d not supporteded (0 - %d available)\n", + octeontx_log_err("QID %d not supported (0 - %d available)\n", qidx, (dev->data->nb_rx_queues - 1)); return -ENOTSUP; } diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index cc573bb2..f56d5b2a 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -369,7 +369,7 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev) "rc=%d", rc); return rc; } - /* VFIO vector zero is resereved for misc interrupt so + /* VFIO vector zero is reserved for misc interrupt so * doing required adjustment. (b13bfab4cd) */ if (rte_intr_vec_list_index_set(handle, q, diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c index abb21305..974018f9 100644 --- a/drivers/net/octeontx2/otx2_ptp.c +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -440,7 +440,7 @@ otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock) /* This API returns the raw PTP HI clock value. Since LFs doesn't * have direct access to PTP registers and it requires mbox msg * to AF for this value. 
In fastpath reading this value for every - * packet (which involes mbox call) becomes very expensive, hence + * packet (which involves mbox call) becomes very expensive, hence * we should be able to derive PTP HI clock value from tsc by * using freq_mult and clk_delta calculated during configure stage. */ diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h index 4bbd5a39..a2fb7ce3 100644 --- a/drivers/net/octeontx2/otx2_tx.h +++ b/drivers/net/octeontx2/otx2_tx.h @@ -61,7 +61,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Retrieving the default desc values */ cmd[off] = send_mem_desc[6]; - /* Using compiler barier to avoid voilation of C + /* Using compiler barrier to avoid violation of C * aliasing rules. */ rte_compiler_barrier(); @@ -70,7 +70,7 @@ otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, /* Packets for which RTE_MBUF_F_TX_IEEE1588_TMST is not set, tx tstamp * should not be recorded, hence changing the alg type to * NIX_SENDMEMALG_SET and also changing send mem addr field to - * next 8 bytes as it corrpt the actual tx tstamp registered + * next 8 bytes as it corrupts the actual tx tstamp registered * address. 
*/ send_mem->alg = NIX_SENDMEMALG_SETTSTMP - (is_ol_tstamp); diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index cce643b7..359680de 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -953,7 +953,7 @@ static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev) struct vlan_entry *entry; int rc; - /* VLAN filters can't be set without setting filtern on */ + /* VLAN filters can't be set without setting filters on */ rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true); if (rc) { otx2_err("Failed to reinstall vlan filters"); diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c index 0716beb9..94e510ef 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.c +++ b/drivers/net/octeontx_ep/otx2_ep_vf.c @@ -104,7 +104,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no) iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr + SDP_VF_R_IN_CNTS(iq_no); - otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p", + otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p", iq_no, iq->doorbell_reg, iq->inst_cnt_reg); do { diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c index c9b91fef..ad7b1ea9 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.c +++ b/drivers/net/octeontx_ep/otx_ep_vf.c @@ -117,7 +117,7 @@ otx_ep_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no) iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr + OTX_EP_R_IN_CNTS(iq_no); - otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n", + otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p\n", iq_no, iq->doorbell_reg, iq->inst_cnt_reg); do { diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c index 047010e1..41159d6e 100644 --- a/drivers/net/pfe/pfe_ethdev.c +++ b/drivers/net/pfe/pfe_ethdev.c @@ -769,7 +769,7 @@ pfe_eth_init(struct rte_vdev_device *vdev, struct pfe *pfe, int id) if (eth_dev == NULL) return
-ENOMEM; - /* Extract pltform data */ + /* Extract platform data */ pfe_info = (struct ls1012a_pfe_platform_data *)&pfe->platform_data; if (!pfe_info) { PFE_PMD_ERR("pfe missing additional platform data"); @@ -845,7 +845,7 @@ pfe_eth_init(struct rte_vdev_device *vdev, struct pfe *pfe, int id) } static int -pfe_get_gemac_if_proprties(struct pfe *pfe, +pfe_get_gemac_if_properties(struct pfe *pfe, __rte_unused const struct device_node *parent, unsigned int port, unsigned int if_cnt, struct ls1012a_pfe_platform_data *pdata) @@ -1053,7 +1053,7 @@ pmd_pfe_probe(struct rte_vdev_device *vdev) g_pfe->platform_data.ls1012a_mdio_pdata[0].phy_mask = 0xffffffff; for (ii = 0; ii < interface_count; ii++) { - pfe_get_gemac_if_proprties(g_pfe, np, ii, interface_count, + pfe_get_gemac_if_properties(g_pfe, np, ii, interface_count, &g_pfe->platform_data); } diff --git a/drivers/net/pfe/pfe_hal.c b/drivers/net/pfe/pfe_hal.c index 41d783db..934dd122 100644 --- a/drivers/net/pfe/pfe_hal.c +++ b/drivers/net/pfe/pfe_hal.c @@ -187,7 +187,7 @@ gemac_set_mode(void *base, __rte_unused int mode) { u32 val = readl(base + EMAC_RCNTRL_REG); - /*Remove loopbank*/ + /*Remove loopback*/ val &= ~EMAC_RCNTRL_LOOP; /*Enable flow control and MII mode*/ diff --git a/drivers/net/pfe/pfe_hif.c b/drivers/net/pfe/pfe_hif.c index c4a7154b..69b1d0ed 100644 --- a/drivers/net/pfe/pfe_hif.c +++ b/drivers/net/pfe/pfe_hif.c @@ -114,9 +114,9 @@ pfe_hif_init_buffers(struct pfe_hif *hif) * results, eth id, queue id from PFE block along with data. * so we have to provide additional memory for each packet to * HIF rx rings so that PFE block can write its headers. - * so, we are giving the data pointor to HIF rings whose + * so, we are giving the data pointer to HIF rings whose * calculation is as below: - * mbuf->data_pointor - Required_header_size + * mbuf->data_pointer - Required_header_size * * We are utilizing the HEADROOM area to receive the PFE * block headers. 
On packet reception, HIF driver will use diff --git a/drivers/net/pfe/pfe_hif.h b/drivers/net/pfe/pfe_hif.h index 6aaf904b..e8d5ba10 100644 --- a/drivers/net/pfe/pfe_hif.h +++ b/drivers/net/pfe/pfe_hif.h @@ -8,7 +8,7 @@ #define HIF_CLIENT_QUEUES_MAX 16 #define HIF_RX_PKT_MIN_SIZE RTE_CACHE_LINE_SIZE /* - * HIF_TX_DESC_NT value should be always greter than 4, + * HIF_TX_DESC_NT value should be always greater than 4, * Otherwise HIF_TX_POLL_MARK will become zero. */ #define HIF_RX_DESC_NT 64 diff --git a/drivers/net/pfe/pfe_hif_lib.c b/drivers/net/pfe/pfe_hif_lib.c index 799050dc..6fe6d33d 100644 --- a/drivers/net/pfe/pfe_hif_lib.c +++ b/drivers/net/pfe/pfe_hif_lib.c @@ -38,7 +38,7 @@ pfe_hif_shm_clean(struct hif_shm *hif_shm) * This function should be called before initializing HIF driver. * * @param[in] hif_shm Shared memory address location in DDR - * @rerurn 0 - on succes, <0 on fail to initialize + * @return 0 - on success, <0 on failure to initialize */ int pfe_hif_shm_init(struct hif_shm *hif_shm, struct rte_mempool *mb_pool) @@ -109,9 +109,9 @@ hif_lib_client_release_rx_buffers(struct hif_client_s *client) for (ii = 0; ii < client->rx_q[qno].size; ii++) { buf = (void *)desc->data; if (buf) { - /* Data pointor to mbuf pointor calculation: + /* Data pointer to mbuf pointer calculation: * "Data - User private data - headroom - mbufsize" - * Actual data pointor given to HIF BDs was + * Actual data pointer given to HIF BDs was * "mbuf->data_offset - PFE_PKT_HEADER_SZ" */ buf = buf + PFE_PKT_HEADER_SZ @@ -477,7 +477,7 @@ hif_hdr_write(struct hif_hdr *pkt_hdr, unsigned int client_id, unsigned int qno, u32 client_ctrl) { - /* Optimize the write since the destinaton may be non-cacheable */ + /* Optimize the write since the destination may be non-cacheable */ if (!((unsigned long)pkt_hdr & 0x3)) { ((u32 *)pkt_hdr)[0] = (client_ctrl << 16) | (qno << 8) | client_id; diff --git a/drivers/net/qede/qede_debug.c b/drivers/net/qede/qede_debug.c index 2297d245..9a2f05ac 100644 ---
a/drivers/net/qede/qede_debug.c +++ b/drivers/net/qede/qede_debug.c @@ -457,7 +457,7 @@ struct split_type_defs { (MCP_REG_SCRATCH + \ offsetof(struct static_init, sections[SPAD_SECTION_TRACE])) -#define MAX_SW_PLTAFORM_STR_SIZE 64 +#define MAX_SW_PLATFORM_STR_SIZE 64 #define EMPTY_FW_VERSION_STR "???_???_???_???" #define EMPTY_FW_IMAGE_STR "???????????????" @@ -1227,13 +1227,13 @@ static u32 qed_dump_common_global_params(struct ecore_hwfn *p_hwfn, u8 num_specific_global_params) { struct dbg_tools_data *dev_data = &p_hwfn->dbg_info; - char sw_platform_str[MAX_SW_PLTAFORM_STR_SIZE]; + char sw_platform_str[MAX_SW_PLATFORM_STR_SIZE]; u32 offset = 0; u8 num_params; /* Fill platform string */ ecore_set_platform_str(p_hwfn, sw_platform_str, - MAX_SW_PLTAFORM_STR_SIZE); + MAX_SW_PLATFORM_STR_SIZE); /* Dump global params section header */ num_params = NUM_COMMON_GLOBAL_PARAMS + num_specific_global_params + @@ -5983,7 +5983,7 @@ static char *qed_get_buf_ptr(void *buf, u32 offset) /* Reads a param from the specified buffer. Returns the number of dwords read. * If the returned str_param is NULL, the param is numeric and its value is * returned in num_param. - * Otheriwise, the param is a string and its pointer is returned in str_param. + * Otherwise, the param is a string and its pointer is returned in str_param. 
*/ static u32 qed_read_param(u32 *dump_buf, const char **param_name, @@ -7441,11 +7441,11 @@ qed_print_idle_chk_results_wrapper(struct ecore_hwfn *p_hwfn, u32 num_dumped_dwords, char *results_buf) { - u32 num_errors, num_warnnings; + u32 num_errors, num_warnings; return qed_print_idle_chk_results(p_hwfn, dump_buf, num_dumped_dwords, results_buf, &num_errors, - &num_warnnings); + &num_warnings); } /* Feature meta data lookup table */ @@ -7558,7 +7558,7 @@ static enum dbg_status format_feature(struct ecore_hwfn *p_hwfn, text_buf[i] = '\n'; - /* Free the old dump_buf and point the dump_buf to the newly allocagted + /* Free the old dump_buf and point the dump_buf to the newly allocated * and formatted text buffer. */ OSAL_VFREE(p_hwfn, feature->dump_buf); diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c index 3e9aaeec..a1122a29 100644 --- a/drivers/net/qede/qede_ethdev.c +++ b/drivers/net/qede/qede_ethdev.c @@ -2338,7 +2338,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) if (fp->rxq != NULL) { bufsz = (uint16_t)rte_pktmbuf_data_room_size( fp->rxq->mb_pool) - RTE_PKTMBUF_HEADROOM; - /* cache align the mbuf size to simplfy rx_buf_size + /* cache align the mbuf size to simplify rx_buf_size * calculation */ bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz); diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c index c0eeea89..7088c57b 100644 --- a/drivers/net/qede/qede_rxtx.c +++ b/drivers/net/qede/qede_rxtx.c @@ -90,7 +90,7 @@ static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count) * (MTU + Maximum L2 Header Size + 2) / ETH_RX_MAX_BUFF_PER_PKT * 3) In regular mode - minimum rx_buf_size should be * (MTU + Maximum L2 Header Size + 2) - * In above cases +2 corrosponds to 2 bytes padding in front of L2 + * In above cases +2 corresponds to 2 bytes padding in front of L2 * header. * 4) rx_buf_size should be cacheline-size aligned. 
So considering * criteria 1, we need to adjust the size to floor instead of ceil, @@ -106,7 +106,7 @@ qede_calc_rx_buf_size(struct rte_eth_dev *dev, uint16_t mbufsz, if (dev->data->scattered_rx) { /* per HW limitation, only ETH_RX_MAX_BUFF_PER_PKT number of - * bufferes can be used for single packet. So need to make sure + * buffers can be used for single packet. So need to make sure * mbuf size is sufficient enough for this. */ if ((mbufsz * ETH_RX_MAX_BUFF_PER_PKT) < @@ -247,7 +247,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, /* Fix up RX buffer size */ bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM; - /* cache align the mbuf size to simplfy rx_buf_size calculation */ + /* cache align the mbuf size to simplify rx_buf_size calculation */ bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz); if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) || (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) { @@ -1745,7 +1745,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } } - /* Request number of bufferes to be allocated in next loop */ + /* Request number of buffers to be allocated in next loop */ rxq->rx_alloc_count = rx_alloc_count; rxq->rcv_pkts += rx_pkt; @@ -2042,7 +2042,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } } - /* Request number of bufferes to be allocated in next loop */ + /* Request number of buffers to be allocated in next loop */ rxq->rx_alloc_count = rx_alloc_count; rxq->rcv_pkts += rx_pkt; @@ -2506,7 +2506,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) /* Inner L2 header size in two byte words */ inner_l2_hdr_size = (mbuf->l2_len - MPLSINUDP_HDR_SIZE) / 2; - /* Inner L4 header offset from the beggining + /* Inner L4 header offset from the beginning * of inner packet in two byte words */ inner_l4_hdr_offset = (mbuf->l2_len - diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h index 754efe79..11ed1d9b 
100644 --- a/drivers/net/qede/qede_rxtx.h +++ b/drivers/net/qede/qede_rxtx.h @@ -225,7 +225,7 @@ struct qede_fastpath { struct qede_tx_queue *txq; }; -/* This structure holds the inforation of fast path queues +/* This structure holds the information of fast path queues * belonging to individual engines in CMT mode. */ struct qede_fastpath_cmt { diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c index ed714fe0..2cead4e0 100644 --- a/drivers/net/sfc/sfc.c +++ b/drivers/net/sfc/sfc.c @@ -371,7 +371,7 @@ sfc_set_drv_limits(struct sfc_adapter *sa) /* * Limits are strict since take into account initial estimation. - * Resource allocation stategy is described in + * Resource allocation strategy is described in * sfc_estimate_resource_limits(). */ lim.edl_min_evq_count = lim.edl_max_evq_count = diff --git a/drivers/net/sfc/sfc_dp.c b/drivers/net/sfc/sfc_dp.c index d4cd1625..da2d1603 100644 --- a/drivers/net/sfc/sfc_dp.c +++ b/drivers/net/sfc/sfc_dp.c @@ -68,7 +68,7 @@ sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry) { if (sfc_dp_find_by_name(head, entry->type, entry->name) != NULL) { SFC_GENERIC_LOG(ERR, - "sfc %s dapapath '%s' already registered", + "sfc %s datapath '%s' already registered", entry->type == SFC_DP_RX ? "Rx" : entry->type == SFC_DP_TX ? "Tx" : "unknown", diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h index 760540ba..246adbd8 100644 --- a/drivers/net/sfc/sfc_dp_rx.h +++ b/drivers/net/sfc/sfc_dp_rx.h @@ -158,7 +158,7 @@ typedef int (sfc_dp_rx_qcreate_t)(uint16_t port_id, uint16_t queue_id, struct sfc_dp_rxq **dp_rxqp); /** - * Free resources allocated for datapath recevie queue. + * Free resources allocated for datapath receive queue. */ typedef void (sfc_dp_rx_qdestroy_t)(struct sfc_dp_rxq *dp_rxq); @@ -191,7 +191,7 @@ typedef bool (sfc_dp_rx_qrx_ps_ev_t)(struct sfc_dp_rxq *dp_rxq, /** * Receive queue purge function called after queue flush. * - * Should be used to free unused recevie buffers. 
+ * Should be used to free unused receive buffers. */ typedef void (sfc_dp_rx_qpurge_t)(struct sfc_dp_rxq *dp_rxq); diff --git a/drivers/net/sfc/sfc_ef100.h b/drivers/net/sfc/sfc_ef100.h index 5e2052d1..e81847e7 100644 --- a/drivers/net/sfc/sfc_ef100.h +++ b/drivers/net/sfc/sfc_ef100.h @@ -19,7 +19,7 @@ extern "C" { * * @param evq_prime Global address of the prime register * @param evq_hw_index Event queue index - * @param evq_read_ptr Masked event qeueu read pointer + * @param evq_read_ptr Masked event queue read pointer */ static inline void sfc_ef100_evq_prime(volatile void *evq_prime, unsigned int evq_hw_index, diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c index 5d16bf28..45253ed7 100644 --- a/drivers/net/sfc/sfc_ef100_rx.c +++ b/drivers/net/sfc/sfc_ef100_rx.c @@ -851,7 +851,7 @@ sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr, unsup_rx_prefix_fields = efx_rx_prefix_layout_check(pinfo, &sfc_ef100_rx_prefix_layout); - /* LENGTH and CLASS filds must always be present */ + /* LENGTH and CLASS fields must always be present */ if ((unsup_rx_prefix_fields & ((1U << EFX_RX_PREFIX_FIELD_LENGTH) | (1U << EFX_RX_PREFIX_FIELD_CLASS))) != 0) diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c index 712c2076..78bd4303 100644 --- a/drivers/net/sfc/sfc_ef10_essb_rx.c +++ b/drivers/net/sfc/sfc_ef10_essb_rx.c @@ -630,7 +630,7 @@ sfc_ef10_essb_rx_qcreate(uint16_t port_id, uint16_t queue_id, rxq->block_size, rxq->buf_stride); sfc_ef10_essb_rx_info(&rxq->dp.dpq, "max fill level is %u descs (%u bufs), " - "refill threashold %u descs (%u bufs)", + "refill threshold %u descs (%u bufs)", rxq->max_fill_level, rxq->max_fill_level * rxq->block_size, rxq->refill_threshold, diff --git a/drivers/net/sfc/sfc_ef10_rx_ev.h b/drivers/net/sfc/sfc_ef10_rx_ev.h index 821e2227..412254e3 100644 --- a/drivers/net/sfc/sfc_ef10_rx_ev.h +++ b/drivers/net/sfc/sfc_ef10_rx_ev.h @@ -40,7 +40,7 @@ 
sfc_ef10_rx_ev_to_offloads(const efx_qword_t rx_ev, struct rte_mbuf *m, rte_cpu_to_le_64((1ull << ESF_DZ_RX_ECC_ERR_LBN) | (1ull << ESF_DZ_RX_ECRC_ERR_LBN) | (1ull << ESF_DZ_RX_PARSE_INCOMPLETE_LBN)))) { - /* Zero packet type is used as a marker to dicard bad packets */ + /* Zero packet type is used as a marker to discard bad packets */ goto done; } diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c index ab67aa92..ddddefad 100644 --- a/drivers/net/sfc/sfc_intr.c +++ b/drivers/net/sfc/sfc_intr.c @@ -8,7 +8,7 @@ */ /* - * At the momemt of writing DPDK v16.07 has notion of two types of + * At the moment of writing DPDK v16.07 has notion of two types of * interrupts: LSC (link status change) and RXQ (receive indication). * It allows to register interrupt callback for entire device which is * not intended to be used for receive indication (i.e. link status diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c index b34c9afd..9127c903 100644 --- a/drivers/net/sfc/sfc_mae.c +++ b/drivers/net/sfc/sfc_mae.c @@ -1805,7 +1805,7 @@ struct sfc_mae_field_locator { efx_mae_field_id_t field_id; size_t size; /* Field offset in the corresponding rte_flow_item_ struct */ - size_t ofst; + size_t offset; }; static void @@ -1820,8 +1820,8 @@ sfc_mae_item_build_supp_mask(const struct sfc_mae_field_locator *field_locators, for (i = 0; i < nb_field_locators; ++i) { const struct sfc_mae_field_locator *fl = &field_locators[i]; - SFC_ASSERT(fl->ofst + fl->size <= mask_size); - memset(RTE_PTR_ADD(mask_ptr, fl->ofst), 0xff, fl->size); + SFC_ASSERT(fl->offset + fl->size <= mask_size); + memset(RTE_PTR_ADD(mask_ptr, fl->offset), 0xff, fl->size); } } @@ -1843,8 +1843,8 @@ sfc_mae_parse_item(const struct sfc_mae_field_locator *field_locators, rc = efx_mae_match_spec_field_set(ctx->match_spec, fremap[fl->field_id], - fl->size, spec + fl->ofst, - fl->size, mask + fl->ofst); + fl->size, spec + fl->offset, + fl->size, mask + fl->offset); if (rc != 0) break; } @@ -2387,7 
+2387,7 @@ static const struct sfc_mae_field_locator flocs_tunnel[] = { * for Geneve and NVGRE, too. */ .size = RTE_SIZEOF_FIELD(struct rte_flow_item_vxlan, vni), - .ofst = offsetof(struct rte_flow_item_vxlan, vni), + .offset = offsetof(struct rte_flow_item_vxlan, vni), }, }; @@ -3297,7 +3297,7 @@ sfc_mae_rule_parse_action_of_set_vlan_pcp( struct sfc_mae_parsed_item { const struct rte_flow_item *item; - size_t proto_header_ofst; + size_t proto_header_offset; size_t proto_header_size; }; @@ -3316,20 +3316,20 @@ sfc_mae_header_force_item_masks(uint8_t *header_buf, const struct sfc_mae_parsed_item *parsed_item; const struct rte_flow_item *item; size_t proto_header_size; - size_t ofst; + size_t offset; parsed_item = &parsed_items[item_idx]; proto_header_size = parsed_item->proto_header_size; item = parsed_item->item; - for (ofst = 0; ofst < proto_header_size; - ofst += sizeof(rte_be16_t)) { - rte_be16_t *wp = RTE_PTR_ADD(header_buf, ofst); + for (offset = 0; offset < proto_header_size; + offset += sizeof(rte_be16_t)) { + rte_be16_t *wp = RTE_PTR_ADD(header_buf, offset); const rte_be16_t *w_maskp; const rte_be16_t *w_specp; - w_maskp = RTE_PTR_ADD(item->mask, ofst); - w_specp = RTE_PTR_ADD(item->spec, ofst); + w_maskp = RTE_PTR_ADD(item->mask, offset); + w_specp = RTE_PTR_ADD(item->spec, offset); *wp &= ~(*w_maskp); *wp |= (*w_specp & *w_maskp); @@ -3363,7 +3363,7 @@ sfc_mae_rule_parse_action_vxlan_encap( 1 /* VXLAN */]; unsigned int nb_parsed_items = 0; - size_t eth_ethertype_ofst = offsetof(struct rte_ether_hdr, ether_type); + size_t eth_ethertype_offset = offsetof(struct rte_ether_hdr, ether_type); uint8_t dummy_buf[RTE_MAX(sizeof(struct rte_ipv4_hdr), sizeof(struct rte_ipv6_hdr))]; struct rte_ipv4_hdr *ipv4 = (void *)dummy_buf; @@ -3371,8 +3371,8 @@ sfc_mae_rule_parse_action_vxlan_encap( struct rte_vxlan_hdr *vxlan = NULL; struct rte_udp_hdr *udp = NULL; unsigned int nb_vlan_tags = 0; - size_t next_proto_ofst = 0; - size_t ethertype_ofst = 0; + size_t 
next_proto_offset = 0; + size_t ethertype_offset = 0; uint64_t exp_items; int rc; @@ -3444,7 +3444,7 @@ sfc_mae_rule_parse_action_vxlan_encap( proto_header_size = sizeof(struct rte_ether_hdr); - ethertype_ofst = eth_ethertype_ofst; + ethertype_offset = eth_ethertype_offset; exp_items = RTE_BIT64(RTE_FLOW_ITEM_TYPE_VLAN) | RTE_BIT64(RTE_FLOW_ITEM_TYPE_IPV4) | @@ -3458,13 +3458,13 @@ sfc_mae_rule_parse_action_vxlan_encap( proto_header_size = sizeof(struct rte_vlan_hdr); - ethertypep = RTE_PTR_ADD(buf, eth_ethertype_ofst); + ethertypep = RTE_PTR_ADD(buf, eth_ethertype_offset); *ethertypep = RTE_BE16(RTE_ETHER_TYPE_QINQ); - ethertypep = RTE_PTR_ADD(buf, ethertype_ofst); + ethertypep = RTE_PTR_ADD(buf, ethertype_offset); *ethertypep = RTE_BE16(RTE_ETHER_TYPE_VLAN); - ethertype_ofst = + ethertype_offset = bounce_eh->size + offsetof(struct rte_vlan_hdr, eth_proto); @@ -3482,10 +3482,10 @@ sfc_mae_rule_parse_action_vxlan_encap( proto_header_size = sizeof(struct rte_ipv4_hdr); - ethertypep = RTE_PTR_ADD(buf, ethertype_ofst); + ethertypep = RTE_PTR_ADD(buf, ethertype_offset); *ethertypep = RTE_BE16(RTE_ETHER_TYPE_IPV4); - next_proto_ofst = + next_proto_offset = bounce_eh->size + offsetof(struct rte_ipv4_hdr, next_proto_id); @@ -3501,10 +3501,10 @@ sfc_mae_rule_parse_action_vxlan_encap( proto_header_size = sizeof(struct rte_ipv6_hdr); - ethertypep = RTE_PTR_ADD(buf, ethertype_ofst); + ethertypep = RTE_PTR_ADD(buf, ethertype_offset); *ethertypep = RTE_BE16(RTE_ETHER_TYPE_IPV6); - next_proto_ofst = bounce_eh->size + + next_proto_offset = bounce_eh->size + offsetof(struct rte_ipv6_hdr, proto); ipv6 = (struct rte_ipv6_hdr *)buf_cur; @@ -3519,7 +3519,7 @@ sfc_mae_rule_parse_action_vxlan_encap( proto_header_size = sizeof(struct rte_udp_hdr); - next_protop = RTE_PTR_ADD(buf, next_proto_ofst); + next_protop = RTE_PTR_ADD(buf, next_proto_offset); *next_protop = IPPROTO_UDP; udp = (struct rte_udp_hdr *)buf_cur; diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c index 
71042841..cd58d60a 100644 --- a/drivers/net/sfc/sfc_rx.c +++ b/drivers/net/sfc/sfc_rx.c @@ -1057,7 +1057,7 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool) /* Make sure that end padding does not write beyond the buffer */ if (buf_aligned < nic_align_end) { /* - * Estimate space which can be lost. If guarnteed buffer + * Estimate space which can be lost. If guaranteed buffer * size is odd, lost space is (nic_align_end - 1). More * accurate formula is below. */ @@ -1702,7 +1702,7 @@ sfc_rx_fini_queues(struct sfc_adapter *sa, unsigned int nb_rx_queues) /* * Finalize only ethdev queues since other ones are finalized only - * on device close and they may require additional deinitializaton. + * on device close and they may require additional deinitialization. */ ethdev_qid = sas->ethdev_rxq_count; while (--ethdev_qid >= (int)nb_rx_queues) { @@ -1775,7 +1775,7 @@ sfc_rx_configure(struct sfc_adapter *sa) reconfigure = true; - /* Do not ununitialize reserved queues */ + /* Do not uninitialize reserved queues */ if (nb_rx_queues < sas->ethdev_rxq_count) sfc_rx_fini_queues(sa, nb_rx_queues); diff --git a/drivers/net/sfc/sfc_tso.h b/drivers/net/sfc/sfc_tso.h index 9029ad15..f2fba304 100644 --- a/drivers/net/sfc/sfc_tso.h +++ b/drivers/net/sfc/sfc_tso.h @@ -53,21 +53,21 @@ sfc_tso_outer_udp_fix_len(const struct rte_mbuf *m, uint8_t *tsoh) static inline void sfc_tso_innermost_ip_fix_len(const struct rte_mbuf *m, uint8_t *tsoh, - size_t iph_ofst) + size_t iph_offset) { size_t ip_payload_len = m->l4_len + m->tso_segsz; - size_t field_ofst; + size_t field_offset; rte_be16_t len; if (m->ol_flags & RTE_MBUF_F_TX_IPV4) { - field_ofst = offsetof(struct rte_ipv4_hdr, total_length); + field_offset = offsetof(struct rte_ipv4_hdr, total_length); len = rte_cpu_to_be_16(m->l3_len + ip_payload_len); } else { - field_ofst = offsetof(struct rte_ipv6_hdr, payload_len); + field_offset = offsetof(struct rte_ipv6_hdr, payload_len); len = 
rte_cpu_to_be_16(ip_payload_len); } - rte_memcpy(tsoh + iph_ofst + field_ofst, &len, sizeof(len)); + rte_memcpy(tsoh + iph_offset + field_offset, &len, sizeof(len)); } unsigned int sfc_tso_prepare_header(uint8_t *tsoh, size_t header_len, diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c index 0dccf21f..cd927cf2 100644 --- a/drivers/net/sfc/sfc_tx.c +++ b/drivers/net/sfc/sfc_tx.c @@ -356,7 +356,7 @@ sfc_tx_fini_queues(struct sfc_adapter *sa, unsigned int nb_tx_queues) /* * Finalize only ethdev queues since other ones are finalized only - * on device close and they may require additional deinitializaton. + * on device close and they may require additional deinitialization. */ ethdev_qid = sas->ethdev_txq_count; while (--ethdev_qid >= (int)nb_tx_queues) { diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c index ca70eab6..ad96288e 100644 --- a/drivers/net/softnic/rte_eth_softnic_flow.c +++ b/drivers/net/softnic/rte_eth_softnic_flow.c @@ -930,7 +930,7 @@ flow_rule_match_acl_get(struct pmd_internals *softnic __rte_unused, * Both *tmask* and *fmask* are byte arrays of size *tsize* and *fsize* * respectively. * They are located within a larger buffer at offsets *toffset* and *foffset* - * respectivelly. Both *tmask* and *fmask* represent bitmasks for the larger + * respectively. Both *tmask* and *fmask* represent bitmasks for the larger * buffer. * Question: are the two masks equivalent? 
* diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c index f1b48cae..5bb472f1 100644 --- a/drivers/net/tap/rte_eth_tap.c +++ b/drivers/net/tap/rte_eth_tap.c @@ -525,7 +525,7 @@ tap_tx_l4_cksum(uint16_t *l4_cksum, uint16_t l4_phdr_cksum, } } -/* Accumaulate L4 raw checksums */ +/* Accumulate L4 raw checksums */ static void tap_tx_l4_add_rcksum(char *l4_data, unsigned int l4_len, uint16_t *l4_cksum, uint32_t *l4_raw_cksum) diff --git a/drivers/net/tap/tap_bpf_api.c b/drivers/net/tap/tap_bpf_api.c index 98f6a760..15283f89 100644 --- a/drivers/net/tap/tap_bpf_api.c +++ b/drivers/net/tap/tap_bpf_api.c @@ -96,7 +96,7 @@ static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, * Load BPF instructions to kernel * * @param[in] type - * BPF program type: classifieir or action + * BPF program type: classifier or action * * @param[in] insns * Array of BPF instructions (equivalent to BPF instructions) @@ -104,7 +104,7 @@ static inline int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, * @param[in] insns_cnt * Number of BPF instructions (size of array) * - * @param[in] lincense + * @param[in] license * License string that must be acknowledged by the kernel * * @return diff --git a/drivers/net/tap/tap_flow.c b/drivers/net/tap/tap_flow.c index c4f60ce9..76738239 100644 --- a/drivers/net/tap/tap_flow.c +++ b/drivers/net/tap/tap_flow.c @@ -961,7 +961,7 @@ add_action(struct rte_flow *flow, size_t *act_index, struct action_data *adata) } /** - * Helper function to send a serie of TC actions to the kernel + * Helper function to send a series of TC actions to the kernel * * @param[in] flow * Pointer to rte flow containing the netlink message @@ -2017,7 +2017,7 @@ static int bpf_rss_key(enum bpf_rss_key_e cmd, __u32 *key_idx) break; /* - * Subtract offest to restore real key index + * Subtract offset to restore real key index * If a non RSS flow is falsely trying to release map * entry 0 - the offset subtraction will calculate the real * map index as an 
out-of-range value and the release operation diff --git a/drivers/net/thunderx/nicvf_svf.c b/drivers/net/thunderx/nicvf_svf.c index bccf2905..1bcf73d9 100644 --- a/drivers/net/thunderx/nicvf_svf.c +++ b/drivers/net/thunderx/nicvf_svf.c @@ -21,7 +21,7 @@ nicvf_svf_push(struct nicvf *vf) entry = rte_zmalloc("nicvf", sizeof(*entry), RTE_CACHE_LINE_SIZE); if (entry == NULL) - rte_panic("Cannoc allocate memory for svf_entry\n"); + rte_panic("Cannot allocate memory for svf_entry\n"); entry->vf = vf; diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index 47d0e6ea..e617c9af 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -1026,12 +1026,12 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev, { struct txgbe_hw *hw = TXGBE_DEV_HW(dev); int ret = 0; - uint32_t portctrl, vlan_ext, qinq; + uint32_t portctl, vlan_ext, qinq; - portctrl = rd32(hw, TXGBE_PORTCTL); + portctl = rd32(hw, TXGBE_PORTCTL); - vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT); - qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ); + vlan_ext = (portctl & TXGBE_PORTCTL_VLANEXT); + qinq = vlan_ext && (portctl & TXGBE_PORTCTL_QINQ); switch (vlan_type) { case RTE_ETH_VLAN_TYPE_INNER: if (vlan_ext) { @@ -1678,7 +1678,7 @@ txgbe_dev_start(struct rte_eth_dev *dev) return -ENOMEM; } } - /* confiugre msix for sleep until rx interrupt */ + /* configure msix for sleep until rx interrupt */ txgbe_configure_msix(dev); /* initialize transmission unit */ @@ -3682,7 +3682,7 @@ txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction, wr32(hw, TXGBE_IVARMISC, tmp); } else { /* rx or tx causes */ - /* Workround for ICR lost */ + /* Workaround for ICR lost */ idx = ((16 * (queue & 1)) + (8 * direction)); tmp = rd32(hw, TXGBE_IVAR(queue >> 1)); tmp &= ~(0xFF << idx); @@ -4387,7 +4387,7 @@ txgbe_timesync_disable(struct rte_eth_dev *dev) /* Disable L2 filtering of IEEE1588/802.1AS Ethernet frame types. 
*/ wr32(hw, TXGBE_ETFLT(TXGBE_ETF_ID_1588), 0); - /* Stop incrementating the System Time registers. */ + /* Stop incrementing the System Time registers. */ wr32(hw, TXGBE_TSTIMEINC, 0); return 0; diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c index 84b960b8..f52cd8bc 100644 --- a/drivers/net/txgbe/txgbe_ethdev_vf.c +++ b/drivers/net/txgbe/txgbe_ethdev_vf.c @@ -961,7 +961,7 @@ txgbevf_set_ivar_map(struct txgbe_hw *hw, int8_t direction, wr32(hw, TXGBE_VFIVARMISC, tmp); } else { /* rx or tx cause */ - /* Workround for ICR lost */ + /* Workaround for ICR lost */ idx = ((16 * (queue & 1)) + (8 * direction)); tmp = rd32(hw, TXGBE_VFIVAR(queue >> 1)); tmp &= ~(0xFF << idx); @@ -997,7 +997,7 @@ txgbevf_configure_msix(struct rte_eth_dev *dev) /* Configure all RX queues of VF */ for (q_idx = 0; q_idx < dev->data->nb_rx_queues; q_idx++) { /* Force all queue use vector 0, - * as TXGBE_VF_MAXMSIVECOTR = 1 + * as TXGBE_VF_MAXMSIVECTOR = 1 */ txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx); rte_intr_vec_list_index_set(intr_handle, q_idx, @@ -1288,7 +1288,7 @@ txgbevf_dev_interrupt_get_status(struct rte_eth_dev *dev) /* only one misc vector supported - mailbox */ eicr &= TXGBE_VFICR_MASK; - /* Workround for ICR lost */ + /* Workaround for ICR lost */ intr->flags |= TXGBE_FLAG_MAILBOX; /* To avoid compiler warnings set eicr to used. 
*/ diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c index 445733f3..e2701063 100644 --- a/drivers/net/txgbe/txgbe_ipsec.c +++ b/drivers/net/txgbe/txgbe_ipsec.c @@ -288,7 +288,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev, return -1; } - /* Disable and clear Rx SPI and key table entryes*/ + /* Disable and clear Rx SPI and key table entries*/ reg_val = TXGBE_IPSRXIDX_WRITE | TXGBE_IPSRXIDX_TB_SPI | (sa_index << 3); wr32(hw, TXGBE_IPSRXSPI, 0); diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c index 30be2873..67d92bfa 100644 --- a/drivers/net/txgbe/txgbe_pf.c +++ b/drivers/net/txgbe/txgbe_pf.c @@ -236,7 +236,7 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev) wr32(hw, TXGBE_PSRCTL, TXGBE_PSRCTL_LBENA); - /* clear VMDq map to perment rar 0 */ + /* clear VMDq map to permanent rar 0 */ hw->mac.clear_vmdq(hw, 0, BIT_MASK32); /* clear VMDq map to scan rar 127 */ diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index c2588369..b317649d 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -2657,7 +2657,7 @@ virtio_dev_configure(struct rte_eth_dev *dev) hw->has_rx_offload = rx_offload_enabled(hw); if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) - /* Enable vector (0) for Link State Intrerrupt */ + /* Enable vector (0) for Link State Interrupt */ if (VIRTIO_OPS(hw)->set_config_irq(hw, 0) == VIRTIO_MSI_NO_VECTOR) { PMD_DRV_LOG(ERR, "failed to set config vector"); @@ -2775,7 +2775,7 @@ virtio_dev_start(struct rte_eth_dev *dev) } } - /* Enable uio/vfio intr/eventfd mapping: althrough we already did that + /* Enable uio/vfio intr/eventfd mapping: although we already did that * in device configure, but it could be unmapped when device is * stopped. 
*/ diff --git a/drivers/net/virtio/virtio_pci.c b/drivers/net/virtio/virtio_pci.c index 182cfc9e..632451dc 100644 --- a/drivers/net/virtio/virtio_pci.c +++ b/drivers/net/virtio/virtio_pci.c @@ -235,7 +235,7 @@ legacy_get_isr(struct virtio_hw *hw) return dst; } -/* Enable one vector (0) for Link State Intrerrupt */ +/* Enable one vector (0) for Link State Interrupt */ static uint16_t legacy_set_config_irq(struct virtio_hw *hw, uint16_t vec) { diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c index 2e115ded..b39dd92d 100644 --- a/drivers/net/virtio/virtio_rxtx.c +++ b/drivers/net/virtio/virtio_rxtx.c @@ -962,7 +962,7 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr) return -EINVAL; } - /* Update mss lengthes in mbuf */ + /* Update mss lengths in mbuf */ m->tso_segsz = hdr->gso_size; switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { case VIRTIO_NET_HDR_GSO_TCPV4: diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.h b/drivers/net/virtio/virtio_rxtx_packed_avx.h index 8cb71f3f..584ac72f 100644 --- a/drivers/net/virtio/virtio_rxtx_packed_avx.h +++ b/drivers/net/virtio/virtio_rxtx_packed_avx.h @@ -192,7 +192,7 @@ virtqueue_dequeue_batch_packed_vec(struct virtnet_rx *rxvq, /* * load len from desc, store into mbuf pkt_len and data_len - * len limiated by l6bit buf_len, pkt_len[16:31] can be ignored + * len limited by 16bit buf_len, pkt_len[16:31] can be ignored */ const __mmask16 mask = 0x6 | 0x6 << 4 | 0x6 << 8 | 0x6 << 12; __m512i values = _mm512_maskz_shuffle_epi32(mask, v_desc, 0xAA); diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c index 65bf792e..c98d696e 100644 --- a/drivers/net/virtio/virtqueue.c +++ b/drivers/net/virtio/virtqueue.c @@ -13,7 +13,7 @@ /* * Two types of mbuf to be cleaned: * 1) mbuf that has been consumed by backend but not used by virtio. - * 2) mbuf that hasn't been consued by backend. + * 2) mbuf that hasn't been consumed by backend.
*/ struct rte_mbuf * virtqueue_detach_unused(struct virtqueue *vq) diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h index 855f57a9..99c68cf6 100644 --- a/drivers/net/virtio/virtqueue.h +++ b/drivers/net/virtio/virtqueue.h @@ -227,7 +227,7 @@ struct virtio_net_ctrl_rss { * Control link announce acknowledgement * * The command VIRTIO_NET_CTRL_ANNOUNCE_ACK is used to indicate that - * driver has recevied the notification; device would clear the + * driver has received the notification; device would clear the * VIRTIO_NET_S_ANNOUNCE bit in the status field after it receives * this command. */ @@ -312,7 +312,7 @@ struct virtqueue { struct vq_desc_extra vq_descx[0]; }; -/* If multiqueue is provided by host, then we suppport it. */ +/* If multiqueue is provided by host, then we support it. */ #define VIRTIO_NET_CTRL_MQ 4 #define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET 0 diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c index de26d2ae..ebc2cd5d 100644 --- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c +++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c @@ -653,7 +653,7 @@ dpdmai_dev_dequeue_multijob_prefetch( rte_prefetch0((void *)(size_t)(dq_storage + 1)); /* Prepare next pull descriptor. This will give space for the - * prefething done on DQRR entries + * prefetching done on DQRR entries */ q_storage->toggle ^= 1; dq_storage1 = q_storage->dq_storage[q_storage->toggle]; diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h index d6f6bb55..1973d5d2 100644 --- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.h +++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.h @@ -82,7 +82,7 @@ struct qdma_device { /** total number of hw queues. */ uint16_t num_hw_queues; /** - * Maximum number of hw queues to be alocated per core. + * Maximum number of hw queues to be allocated per core. 
* This is limited by MAX_HW_QUEUE_PER_CORE */ uint16_t max_hw_queues_per_core; @@ -268,7 +268,7 @@ struct dpaa2_dpdmai_dev { struct fsl_mc_io dpdmai; /** HW ID for DPDMAI object */ uint32_t dpdmai_id; - /** Tocken of this device */ + /** Token of this device */ uint16_t token; /** Number of queue in this DPDMAI device */ uint8_t num_queues; diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c index 8d9db585..0eae0c94 100644 --- a/drivers/raw/ifpga/ifpga_rawdev.c +++ b/drivers/raw/ifpga/ifpga_rawdev.c @@ -382,7 +382,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev, if (HIGH_WARN(sensor, value) || LOW_WARN(sensor, value)) { - IFPGA_RAWDEV_PMD_INFO("%s reach theshold %d\n", + IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d\n", sensor->name, value); *gsd_start = true; break; @@ -393,7 +393,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev, if (!strcmp(sensor->name, "12V AUX Voltage")) { if (value < AUX_VOLTAGE_WARN) { IFPGA_RAWDEV_PMD_INFO( - "%s reach theshold %d mV\n", + "%s reach threshold %d mV\n", sensor->name, value); *gsd_start = true; break; @@ -441,12 +441,12 @@ static int set_surprise_link_check_aer( pos = ifpga_pci_find_ext_capability(fd, RTE_PCI_EXT_CAP_ID_ERR); if (!pos) goto end; - /* save previout ECAP_AER+0x08 */ + /* save previous ECAP_AER+0x08 */ ret = pread(fd, &data, sizeof(data), pos+0x08); if (ret == -1) goto end; ifpga_rdev->aer_old[0] = data; - /* save previout ECAP_AER+0x14 */ + /* save previous ECAP_AER+0x14 */ ret = pread(fd, &data, sizeof(data), pos+0x14); if (ret == -1) goto end; @@ -531,7 +531,7 @@ ifpga_monitor_start_func(void) ifpga_rawdev_gsd_handle, NULL); if (ret != 0) { IFPGA_RAWDEV_PMD_ERR( - "Fail to create ifpga nonitor thread"); + "Fail to create ifpga monitor thread"); return -1; } ifpga_monitor_start = 1; diff --git a/drivers/raw/ntb/ntb.h b/drivers/raw/ntb/ntb.h index cdf7667d..c9ff33aa 100644 --- a/drivers/raw/ntb/ntb.h +++ b/drivers/raw/ntb/ntb.h @@ -95,7 +95,7 @@ enum ntb_spad_idx { * @spad_write: Write val to local/peer spad register. * @db_read: Read doorbells status. * @db_clear: Clear local doorbells. - * @db_set_mask: Set bits in db mask, preventing db interrpts generated + * @db_set_mask: Set bits in db mask, preventing db interrupts generated * for those db bits. * @peer_db_set: Set doorbell bit to generate peer interrupt for that bit. * @vector_bind: Bind vector source [intr] to msix vector [msix]. diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c index 9a2db7e4..72464cad 100644 --- a/drivers/regex/mlx5/mlx5_regex_fastpath.c +++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c @@ -226,7 +226,7 @@ complete_umr_wqe(struct mlx5_regex_qp *qp, struct mlx5_regex_hw_qp *qp_obj, rte_cpu_to_be_32(mkey_job->imkey->id)); /* Set UMR WQE control seg.
*/ ucseg->mkey_mask |= rte_cpu_to_be_64(MLX5_WQE_UMR_CTRL_MKEY_MASK_LEN | - MLX5_WQE_UMR_CTRL_FLAG_TRNSLATION_OFFSET | + MLX5_WQE_UMR_CTRL_FLAG_TRANSLATION_OFFSET | MLX5_WQE_UMR_CTRL_MKEY_MASK_ACCESS_LOCAL_WRITE); ucseg->klm_octowords = rte_cpu_to_be_16(klm_align); /* Set mkey context seg. */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c index b1b9053b..130d201a 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_mem.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_mem.c @@ -160,7 +160,7 @@ mlx5_vdpa_vhost_mem_regions_prepare(int vid, uint8_t *mode, uint64_t *mem_size, * The target here is to group all the physical memory regions of the * virtio device in one indirect mkey. * For KLM Fixed Buffer Size mode (HW find the translation entry in one - * read according to the guest phisical address): + * read according to the guest physical address): * All the sub-direct mkeys of it must be in the same size, hence, each * one of them should be in the GCD size of all the virtio memory * regions and the holes between them. diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index db971bad..2f32aef6 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -403,7 +403,7 @@ mlx5_vdpa_features_validate(struct mlx5_vdpa_priv *priv) if (priv->features & (1ULL << VIRTIO_F_RING_PACKED)) { if (!(priv->caps.virtio_queue_type & (1 << MLX5_VIRTQ_TYPE_PACKED))) { - DRV_LOG(ERR, "Failed to configur PACKED mode for vdev " + DRV_LOG(ERR, "Failed to configure PACKED mode for vdev " "%d - it was not reported by HW/driver" " capability.", priv->vid); return -ENOTSUP; diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c index ecafc5e4..fc7e8b81 100644 --- a/examples/bbdev_app/main.c +++ b/examples/bbdev_app/main.c @@ -372,7 +372,7 @@ add_awgn(struct rte_mbuf **mbufs, uint16_t num_pkts) /* Encoder output to Decoder input adapter. 
The Decoder accepts only soft input * so each bit of the encoder output must be translated into one byte of LLR. If * Sub-block Deinterleaver is bypassed, which is the case, the padding bytes - * must additionally be insterted at the end of each sub-block. + * must additionally be inserted at the end of each sub-block. */ static inline void transform_enc_out_dec_in(struct rte_mbuf **mbufs, uint8_t *temp_buf, diff --git a/examples/bond/main.c b/examples/bond/main.c index 1087b0da..335bde5c 100644 --- a/examples/bond/main.c +++ b/examples/bond/main.c @@ -230,7 +230,7 @@ bond_port_init(struct rte_mempool *mbuf_pool) 0 /*SOCKET_ID_ANY*/); if (retval < 0) rte_exit(EXIT_FAILURE, - "Faled to create bond port\n"); + "Failed to create bond port\n"); BOND_PORT = retval; @@ -405,7 +405,7 @@ static int lcore_main(__rte_unused void *arg1) struct rte_ether_hdr *); ether_type = eth_hdr->ether_type; if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) - printf("VLAN taged frame, offset:"); + printf("VLAN tagged frame, offset:"); offset = get_vlan_offset(eth_hdr, ðer_type); if (offset > 0) printf("%d\n", offset); diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c index d074acc9..608487e3 100644 --- a/examples/dma/dmafwd.c +++ b/examples/dma/dmafwd.c @@ -87,7 +87,7 @@ static uint16_t nb_queues = 1; /* MAC updating enabled by default. */ static int mac_updating = 1; -/* hardare copy mode enabled by default. */ +/* hardware copy mode enabled by default. 
*/ static copy_mode_t copy_mode = COPY_MODE_DMA_NUM; /* size of descriptor ring for hardware copy mode or diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c index 86286d38..ffaad964 100644 --- a/examples/ethtool/lib/rte_ethtool.c +++ b/examples/ethtool/lib/rte_ethtool.c @@ -402,7 +402,7 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id) #endif } - /* Enable Rx vlan filter, VF unspport status is discard */ + /* Enable Rx vlan filter, VF unsupported status is discard */ ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK); if (ret != 0) return ret; diff --git a/examples/ethtool/lib/rte_ethtool.h b/examples/ethtool/lib/rte_ethtool.h index f1770966..d27e0102 100644 --- a/examples/ethtool/lib/rte_ethtool.h +++ b/examples/ethtool/lib/rte_ethtool.h @@ -189,7 +189,7 @@ int rte_ethtool_get_module_eeprom(uint16_t port_id, /** * Retrieve the Ethernet device pause frame configuration according to - * parameter attributes desribed by ethtool data structure, + * parameter attributes described by ethtool data structure, * ethtool_pauseparam. * * @param port_id @@ -209,7 +209,7 @@ int rte_ethtool_get_pauseparam(uint16_t port_id, /** * Setting the Ethernet device pause frame configuration according to - * parameter attributes desribed by ethtool data structure, ethtool_pauseparam. + * parameter attributes described by ethtool data structure, ethtool_pauseparam. * * @param port_id * The port identifier of the Ethernet device. diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c index fb3cac3b..1023bf6b 100644 --- a/examples/ip_reassembly/main.c +++ b/examples/ip_reassembly/main.c @@ -244,7 +244,7 @@ static struct rte_lpm6 *socket_lpm6[RTE_MAX_NUMA_NODES]; #endif /* RTE_LIBRTE_IP_FRAG_TBL_STAT */ /* - * If number of queued packets reached given threahold, then + * If number of queued packets reached given threshold, then * send burst of packets on an output interface. 
*/ static inline uint32_t @@ -877,7 +877,7 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue) * Plus, each TX queue can hold up to <max_flow_num> packets. */ - /* mbufs stored int the gragment table. 8< */ + /* mbufs stored in the fragment table. 8< */ nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM; nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE; diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index e8600f5e..24b210ad 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1353,7 +1353,7 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); sprintf(print_buf, - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", + "\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, adapter->eventdev_id); diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index bf3dbf6b..96916cd3 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -265,7 +265,7 @@ struct socket_ctx socket_ctx[NB_SOCKETS]; /* * Determine is multi-segment support required: * - either frame buffer size is smaller then mtu - * - or reassmeble support is requested + * - or reassemble support is requested */ static int multi_seg_required(void) @@ -2050,7 +2050,7 @@ add_mapping(struct rte_hash *map, const char *str, uint16_t cdev_id, ret = rte_hash_add_key_data(map, &key, (void *)i); if (ret < 0) { - printf("Faled to insert cdev mapping for (lcore %u, " + printf("Failed to insert cdev mapping for (lcore %u, " "cdev %u, qp %u), errno %d\n", key.lcore_id, ipsec_ctx->tbl[i].id, ipsec_ctx->tbl[i].qp, ret); @@ -2083,7 +2083,7 @@ add_cdev_mapping(struct rte_cryptodev_info *dev_info, uint16_t cdev_id, str = "Inbound"; } - /* Required cryptodevs with
operation chainning */ + /* Required cryptodevs with operation chaining */ if (!(dev_info->feature_flags & RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING)) return ret; @@ -2251,7 +2251,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) "Error during getting device (port %u) info: %s\n", portid, strerror(-ret)); - /* limit allowed HW offloafs, as user requested */ + /* limit allowed HW offloads, as user requested */ dev_info.rx_offload_capa &= dev_rx_offload; dev_info.tx_offload_capa &= dev_tx_offload; @@ -2298,7 +2298,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) local_port_conf.rxmode.offloads) rte_exit(EXIT_FAILURE, "Error: port %u required RX offloads: 0x%" PRIx64 - ", avaialbe RX offloads: 0x%" PRIx64 "\n", + ", available RX offloads: 0x%" PRIx64 "\n", portid, local_port_conf.rxmode.offloads, dev_info.rx_offload_capa); @@ -2306,7 +2306,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) local_port_conf.txmode.offloads) rte_exit(EXIT_FAILURE, "Error: port %u required TX offloads: 0x%" PRIx64 - ", avaialbe TX offloads: 0x%" PRIx64 "\n", + ", available TX offloads: 0x%" PRIx64 "\n", portid, local_port_conf.txmode.offloads, dev_info.tx_offload_capa); @@ -2317,7 +2317,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads) if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM; - printf("port %u configurng rx_offloads=0x%" PRIx64 + printf("port %u configuring rx_offloads=0x%" PRIx64 ", tx_offloads=0x%" PRIx64 "\n", portid, local_port_conf.rxmode.offloads, local_port_conf.txmode.offloads); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index 30bc693e..1839ac71 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -897,7 +897,7 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens, continue; } - /* unrecognizeable input */ + /* 
unrecognizable input */ APP_CHECK(0, status, "unrecognized input \"%s\"", tokens[ti]); return; @@ -1145,7 +1145,7 @@ get_spi_proto(uint32_t spi, enum rte_security_ipsec_sa_direction dir, if (rc4 >= 0) { if (rc6 >= 0) { RTE_LOG(ERR, IPSEC, - "%s: SPI %u used simultaeously by " + "%s: SPI %u used simultaneously by " "IPv4(%d) and IPv6 (%d) SP rules\n", __func__, spi, rc4, rc6); return -EINVAL; @@ -1550,7 +1550,7 @@ ipsec_sa_init(struct ipsec_sa *lsa, struct rte_ipsec_sa *sa, uint32_t sa_size) } /* - * Allocate space and init rte_ipsec_sa strcutures, + * Allocate space and init rte_ipsec_sa structures, * one per session. */ static int diff --git a/examples/ipsec-secgw/sp4.c b/examples/ipsec-secgw/sp4.c index beddd7bc..fc4101a4 100644 --- a/examples/ipsec-secgw/sp4.c +++ b/examples/ipsec-secgw/sp4.c @@ -410,7 +410,7 @@ parse_sp4_tokens(char **tokens, uint32_t n_tokens, continue; } - /* unrecognizeable input */ + /* unrecognizable input */ APP_CHECK(0, status, "unrecognized input \"%s\"", tokens[ti]); return; diff --git a/examples/ipsec-secgw/sp6.c b/examples/ipsec-secgw/sp6.c index 328e0852..cce4da78 100644 --- a/examples/ipsec-secgw/sp6.c +++ b/examples/ipsec-secgw/sp6.c @@ -515,7 +515,7 @@ parse_sp6_tokens(char **tokens, uint32_t n_tokens, continue; } - /* unrecognizeable input */ + /* unrecognizable input */ APP_CHECK(0, status, "unrecognized input \"%s\"", tokens[ti]); return; diff --git a/examples/ipsec-secgw/test/common_defs.sh b/examples/ipsec-secgw/test/common_defs.sh index f22eb3ab..3ef06bc7 100644 --- a/examples/ipsec-secgw/test/common_defs.sh +++ b/examples/ipsec-secgw/test/common_defs.sh @@ -20,7 +20,7 @@ REMOTE_MAC=`ssh ${REMOTE_HOST} ip addr show dev ${REMOTE_IFACE}` st=$? 
REMOTE_MAC=`echo ${REMOTE_MAC} | sed -e 's/^.*ether //' -e 's/ brd.*$//'` if [[ $st -ne 0 || -z "${REMOTE_MAC}" ]]; then - echo "coouldn't retrieve ether addr from ${REMOTE_IFACE}" + echo "couldn't retrieve ether addr from ${REMOTE_IFACE}" exit 127 fi @@ -40,7 +40,7 @@ DPDK_VARS="" # by default ipsec-secgw can't deal with multi-segment packets # make sure our local/remote host wouldn't generate fragmented packets -# if reassmebly option is not enabled +# if reassembly option is not enabled DEF_MTU_LEN=1400 DEF_PING_LEN=1200 diff --git a/examples/kni/main.c b/examples/kni/main.c index d324ee22..f5b20a7b 100644 --- a/examples/kni/main.c +++ b/examples/kni/main.c @@ -1039,7 +1039,7 @@ main(int argc, char** argv) pthread_t kni_link_tid; int pid; - /* Associate signal_hanlder function with USR signals */ + /* Associate signal_handler function with USR signals */ signal(SIGUSR1, signal_handler); signal(SIGUSR2, signal_handler); signal(SIGRTMIN, signal_handler); diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c index d9cf00c9..6e16705e 100644 --- a/examples/l2fwd-cat/l2fwd-cat.c +++ b/examples/l2fwd-cat/l2fwd-cat.c @@ -157,7 +157,7 @@ main(int argc, char *argv[]) int ret = rte_eal_init(argc, argv); if (ret < 0) rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); - /* >8 End of initializion the Environment Abstraction Layer (EAL). */ + /* >8 End of initialization the Environment Abstraction Layer (EAL). 
*/ argc -= ret; argv += ret; diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c index f31569a7..1977e232 100644 --- a/examples/l2fwd-event/l2fwd_event_generic.c +++ b/examples/l2fwd-event/l2fwd_event_generic.c @@ -42,7 +42,7 @@ l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc) ethdev_count++; } - /* Event device configurtion */ + /* Event device configuration */ rte_event_dev_info_get(event_d_id, &dev_info); /* Enable implicit release */ diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c index 86d772d8..717a7bce 100644 --- a/examples/l2fwd-event/l2fwd_event_internal_port.c +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c @@ -40,7 +40,7 @@ l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc) ethdev_count++; } - /* Event device configurtion */ + /* Event device configuration */ rte_event_dev_info_get(event_d_id, &dev_info); /* Enable implicit release */ diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c index d8eabe4c..9e71ba2d 100644 --- a/examples/l2fwd-jobstats/main.c +++ b/examples/l2fwd-jobstats/main.c @@ -468,7 +468,7 @@ l2fwd_flush_job(__rte_unused struct rte_timer *timer, __rte_unused void *arg) qconf->next_flush_time[portid] = rte_get_timer_cycles() + drain_tsc; } - /* Pass target to indicate that this job is happy of time interwal + /* Pass target to indicate that this job is happy of time interval * in which it was called. */ rte_jobstats_finish(&qconf->flush_job, qconf->flush_job.target); } diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c index 1fb18072..151cb8f6 100644 --- a/examples/l3fwd-acl/main.c +++ b/examples/l3fwd-acl/main.c @@ -801,8 +801,8 @@ send_packets(struct rte_mbuf **m, uint32_t *res, int num) } /* - * Parses IPV6 address, exepcts the following format: - * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X - is a hexedecimal digit). 
+ * Parses IPV6 address, expects the following format: + * XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX:XXXX (where X - is a hexadecimal digit). */ static int parse_ipv6_addr(const char *in, const char **end, uint32_t v[IPV6_ADDR_U32], @@ -1959,7 +1959,7 @@ check_all_ports_link_status(uint32_t port_mask) } /* - * build-up default vaues for dest MACs. + * build-up default values for dest MACs. */ static void set_default_dest_mac(void) diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c index b8b3be2b..20e5b59a 100644 --- a/examples/l3fwd-power/main.c +++ b/examples/l3fwd-power/main.c @@ -433,7 +433,7 @@ signal_exit_now(int sigtype) } -/* Freqency scale down timer callback */ +/* Frequency scale down timer callback */ static void power_timer_cb(__rte_unused struct rte_timer *tim, __rte_unused void *arg) @@ -2358,7 +2358,7 @@ update_telemetry(__rte_unused struct rte_timer *tim, ret = rte_metrics_update_values(RTE_METRICS_GLOBAL, telstats_index, values, RTE_DIM(values)); if (ret < 0) - RTE_LOG(WARNING, POWER, "failed to update metrcis\n"); + RTE_LOG(WARNING, POWER, "failed to update metrics\n"); } static int diff --git a/examples/l3fwd/l3fwd_common.h b/examples/l3fwd/l3fwd_common.h index 7d83ff64..cbaab79f 100644 --- a/examples/l3fwd/l3fwd_common.h +++ b/examples/l3fwd/l3fwd_common.h @@ -51,7 +51,7 @@ rfc1812_process(struct rte_ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype) #endif /* DO_RFC_1812_CHECKS */ /* - * We group consecutive packets with the same destionation port into one burst. + * We group consecutive packets with the same destination port into one burst. * To avoid extra latency this is done together with some other packet * processing, but after we made a final decision about packet's destination. * To do this we maintain: @@ -76,7 +76,7 @@ rfc1812_process(struct rte_ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype) static const struct { uint64_t pnum; /* prebuild 4 values for pnum[]. */ - int32_t idx; /* index for new last updated elemnet. 
+ int32_t idx; /* index for new last updated element. */ uint16_t lpv; /* add value to the last updated element. */ } gptbl[GRPSZ] = { { diff --git a/examples/l3fwd/l3fwd_neon.h b/examples/l3fwd/l3fwd_neon.h index 86ac5971..e3d33a52 100644 --- a/examples/l3fwd/l3fwd_neon.h +++ b/examples/l3fwd/l3fwd_neon.h @@ -64,7 +64,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP]) /* * Group consecutive packets with the same destination port in bursts of 4. - * Suppose we have array of destionation ports: + * Suppose we have array of destination ports: * dst_port[] = {a, b, c, d,, e, ... } * dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>. * We doing 4 comparisons at once and the result is 4 bit mask. diff --git a/examples/l3fwd/l3fwd_sse.h b/examples/l3fwd/l3fwd_sse.h index bb565ed5..d5a717e1 100644 --- a/examples/l3fwd/l3fwd_sse.h +++ b/examples/l3fwd/l3fwd_sse.h @@ -64,7 +64,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP]) /* * Group consecutive packets with the same destination port in bursts of 4. - * Suppose we have array of destionation ports: + * Suppose we have array of destination ports: * dst_port[] = {a, b, c, d,, e, ... } * dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>. * We doing 4 comparisons at once and the result is 4 bit mask.
diff --git a/examples/multi_process/hotplug_mp/commands.c b/examples/multi_process/hotplug_mp/commands.c index 48fd3295..41ea265e 100644 --- a/examples/multi_process/hotplug_mp/commands.c +++ b/examples/multi_process/hotplug_mp/commands.c @@ -175,7 +175,7 @@ static void cmd_dev_detach_parsed(void *parsed_result, cmdline_printf(cl, "detached device %s\n", da.name); else - cmdline_printf(cl, "failed to dettach device %s\n", + cmdline_printf(cl, "failed to detach device %s\n", da.name); rte_devargs_reset(&da); } diff --git a/examples/multi_process/simple_mp/main.c b/examples/multi_process/simple_mp/main.c index 5df2a390..9d5f1088 100644 --- a/examples/multi_process/simple_mp/main.c +++ b/examples/multi_process/simple_mp/main.c @@ -4,7 +4,7 @@ /* * This sample application is a simple multi-process application which - * demostrates sharing of queues and memory pools between processes, and + * demonstrates sharing of queues and memory pools between processes, and * using those queues/pools for communication between the processes. * * Application is designed to run with two processes, a primary and a diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c index b35886a7..05033776 100644 --- a/examples/multi_process/symmetric_mp/main.c +++ b/examples/multi_process/symmetric_mp/main.c @@ -3,7 +3,7 @@ */ /* - * Sample application demostrating how to do packet I/O in a multi-process + * Sample application demonstrating how to do packet I/O in a multi-process * environment. The same code can be run as a primary process and as a * secondary process, just with a different proc-id parameter in each case * (apart from the EAL flag to indicate a secondary process). diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c index f110fc12..81964d03 100644 --- a/examples/ntb/ntb_fwd.c +++ b/examples/ntb/ntb_fwd.c @@ -696,7 +696,7 @@ assign_stream_to_lcores(void) break; } - /* Print packet forwading config. 
*/ + /* Print packet forwarding config. */ RTE_LCORE_FOREACH_WORKER(lcore_id) { conf = &fwd_lcore_conf[lcore_id]; diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c index b01ac60f..99e67ef6 100644 --- a/examples/packet_ordering/main.c +++ b/examples/packet_ordering/main.c @@ -686,7 +686,7 @@ main(int argc, char **argv) if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid packet_ordering arguments\n"); - /* Check if we have enought cores */ + /* Check if we have enough cores */ if (rte_lcore_count() < 3) rte_exit(EXIT_FAILURE, "Error, This application needs at " "least 3 logical cores to run:\n" diff --git a/examples/performance-thread/common/lthread.c b/examples/performance-thread/common/lthread.c index 009374a8..b02e0fc1 100644 --- a/examples/performance-thread/common/lthread.c +++ b/examples/performance-thread/common/lthread.c @@ -178,7 +178,7 @@ lthread_create(struct lthread **new_lt, int lcore_id, bzero(lt, sizeof(struct lthread)); lt->root_sched = THIS_SCHED; - /* set the function args and exit handlder */ + /* set the function args and exit handler */ _lthread_init(lt, fun, arg, _lthread_exit_handler); /* put it in the ready queue */ @@ -384,7 +384,7 @@ void lthread_exit(void *ptr) } - /* wait until the joinging thread has collected the exit value */ + /* wait until the joining thread has collected the exit value */ while (lt->join != LT_JOIN_EXIT_VAL_READ) _reschedule(); @@ -410,7 +410,7 @@ int lthread_join(struct lthread *lt, void **ptr) /* invalid to join a detached thread, or a thread that is joined */ if ((lt_state & BIT(ST_LT_DETACH)) || (lt->join == LT_JOIN_THREAD_SET)) return POSIX_ERRNO(EINVAL); - /* pointer to the joining thread and a poingter to return a value */ + /* pointer to the joining thread and a pointer to return a value */ lt->lt_join = current; current->lt_exit_ptr = ptr; /* There is a race between lthread_join() and lthread_exit() diff --git a/examples/performance-thread/common/lthread_diag.c 
b/examples/performance-thread/common/lthread_diag.c index 57760a1e..b1bdf7a3 100644 --- a/examples/performance-thread/common/lthread_diag.c +++ b/examples/performance-thread/common/lthread_diag.c @@ -232,7 +232,7 @@ lthread_sched_stats_display(void) } /* - * Defafult diagnostic callback + * Default diagnostic callback */ static uint64_t _lthread_diag_default_cb(uint64_t time, struct lthread *lt, int diag_event, diff --git a/examples/performance-thread/common/lthread_int.h b/examples/performance-thread/common/lthread_int.h index d010126f..ec018e34 100644 --- a/examples/performance-thread/common/lthread_int.h +++ b/examples/performance-thread/common/lthread_int.h @@ -107,7 +107,7 @@ enum join_st { LT_JOIN_EXIT_VAL_READ, /* joining thread has collected ret val */ }; -/* defnition of an lthread stack object */ +/* definition of an lthread stack object */ struct lthread_stack { uint8_t stack[LTHREAD_MAX_STACK_SIZE]; size_t stack_size; diff --git a/examples/performance-thread/common/lthread_tls.c b/examples/performance-thread/common/lthread_tls.c index 4ab2e355..bae45f2a 100644 --- a/examples/performance-thread/common/lthread_tls.c +++ b/examples/performance-thread/common/lthread_tls.c @@ -215,7 +215,7 @@ void _lthread_tls_alloc(struct lthread *lt) tls->root_sched = (THIS_SCHED); lt->tls = tls; - /* allocate data for TLS varaiables using RTE_PER_LTHREAD macros */ + /* allocate data for TLS variables using RTE_PER_LTHREAD macros */ if (sizeof(void *) < (uint64_t)RTE_PER_LTHREAD_SECTION_SIZE) { lt->per_lthread_data = _lthread_objcache_alloc((THIS_SCHED)->per_lthread_cache); diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c index 8a350405..eda5e701 100644 --- a/examples/performance-thread/l3fwd-thread/main.c +++ b/examples/performance-thread/l3fwd-thread/main.c @@ -125,7 +125,7 @@ cb_parse_ptype(__rte_unused uint16_t port, __rte_unused uint16_t queue, } /* - * When set to zero, simple forwaring path is eanbled. 
+ * When set to zero, simple forwarding path is enabled. * When set to one, optimized forwarding path is enabled. * Note that LPM optimisation path uses SSE4.1 instructions. */ @@ -1529,7 +1529,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP]) } /* - * We group consecutive packets with the same destionation port into one burst. + * We group consecutive packets with the same destination port into one burst. * To avoid extra latency this is done together with some other packet * processing, but after we made a final decision about packet's destination. * To do this we maintain: @@ -1554,7 +1554,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP]) /* * Group consecutive packets with the same destination port in bursts of 4. - * Suppose we have array of destionation ports: + * Suppose we have array of destination ports: * dst_port[] = {a, b, c, d,, e, ... } * dp1 should contain: , dp2: . * We doing 4 comparisons at once and the result is 4 bit mask. @@ -1565,7 +1565,7 @@ port_groupx4(uint16_t pn[FWDSTEP + 1], uint16_t *lp, __m128i dp1, __m128i dp2) { static const struct { uint64_t pnum; /* prebuild 4 values for pnum[]. */ - int32_t idx; /* index for new last updated elemnet. */ + int32_t idx; /* index for new last updated element. */ uint16_t lpv; /* add value to the last updated element. */ } gptbl[GRPSZ] = { { @@ -1834,7 +1834,7 @@ process_burst(struct rte_mbuf *pkts_burst[MAX_PKT_BURST], int nb_rx, /* * Send packets out, through destination port. - * Consecuteve pacekts with the same destination port + * Consecutive packets with the same destination port * are already grouped together. * If destination port for the packet equals BAD_PORT, * then free the packet without sending it out. 
@@ -3514,7 +3514,7 @@ main(int argc, char **argv) ret = rte_timer_subsystem_init(); if (ret < 0) - rte_exit(EXIT_FAILURE, "Failed to initialize timer subystem\n"); + rte_exit(EXIT_FAILURE, "Failed to initialize timer subsystem\n"); /* pre-init dst MACs for all ports to 02:00:00:00:00:xx */ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) { diff --git a/examples/performance-thread/pthread_shim/pthread_shim.h b/examples/performance-thread/pthread_shim/pthread_shim.h index e90fb15f..ce51627a 100644 --- a/examples/performance-thread/pthread_shim/pthread_shim.h +++ b/examples/performance-thread/pthread_shim/pthread_shim.h @@ -41,7 +41,7 @@ * * The decision whether to invoke the real library function or the lthread * function is controlled by a per pthread flag that can be switched - * on of off by the pthread_override_set() API described below. Typcially + * on or off by the pthread_override_set() API described below. Typically * this should be done as the first action of the initial lthread. * * N.B In general it would be poor practice to revert to invoke a real diff --git a/examples/pipeline/examples/registers.spec b/examples/pipeline/examples/registers.spec index 74a014ad..59998fef 100644 --- a/examples/pipeline/examples/registers.spec +++ b/examples/pipeline/examples/registers.spec @@ -4,7 +4,7 @@ ; This program is setting up two register arrays called "pkt_counters" and "byte_counters". ; On every input packet (Ethernet/IPv4), the "pkt_counters" register at location indexed by ; the IPv4 header "Source Address" field is incremented, while the same location in the -; "byte_counters" array accummulates the value of the IPv4 header "Total Length" field. +; "byte_counters" array accumulates the value of the IPv4 header "Total Length" field. ; ; The "regrd" and "regwr" CLI commands can be used to read and write the current value of ; any register array location. 
diff --git a/examples/qos_sched/cmdline.c b/examples/qos_sched/cmdline.c index 257b87a7..6691b02d 100644 --- a/examples/qos_sched/cmdline.c +++ b/examples/qos_sched/cmdline.c @@ -41,7 +41,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result, " qavg port X subport Y pipe Z : Show average queue size per pipe.\n" " qavg port X subport Y pipe Z tc A : Show average queue size per pipe and TC.\n" " qavg port X subport Y pipe Z tc A q B : Show average queue size of a specific queue.\n" - " qavg [n|period] X : Set number of times and peiod (us).\n\n" + " qavg [n|period] X : Set number of times and period (us).\n\n" ); } diff --git a/examples/server_node_efd/node/node.c b/examples/server_node_efd/node/node.c index ba1c7e51..fc2aa5ff 100644 --- a/examples/server_node_efd/node/node.c +++ b/examples/server_node_efd/node/node.c @@ -296,7 +296,7 @@ handle_packets(struct rte_hash *h, struct rte_mbuf **bufs, uint16_t num_packets) } } } -/* >8 End of packets dequeueing. */ +/* >8 End of packets dequeuing. */ /* * Application main function - loops through diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c index 16435ee3..518cd721 100644 --- a/examples/skeleton/basicfwd.c +++ b/examples/skeleton/basicfwd.c @@ -179,7 +179,7 @@ main(int argc, char *argv[]) int ret = rte_eal_init(argc, argv); if (ret < 0) rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); - /* >8 End of initializion the Environment Abstraction Layer (EAL). */ + /* >8 End of initializing the Environment Abstraction Layer (EAL). */ argc -= ret; argv += ret; diff --git a/examples/vhost/main.c b/examples/vhost/main.c index 33d023aa..b65e80b7 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -107,7 +107,7 @@ static uint32_t burst_rx_retry_num = BURST_RX_RETRIES; static char *socket_files; static int nb_sockets; -/* empty vmdq configuration structure. Filled in programatically */ +/* empty vmdq configuration structure. 
Filled in programmatically */ static struct rte_eth_conf vmdq_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY, @@ -115,7 +115,7 @@ static struct rte_eth_conf vmdq_conf_default = { /* * VLAN strip is necessary for 1G NIC such as I350, * this fixes bug of ipv4 forwarding in guest can't - * forward pakets from one virtio dev to another virtio dev. + * forward packets from one virtio dev to another virtio dev. */ .offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP, }, @@ -463,7 +463,7 @@ us_vhost_usage(const char *prgname) " --nb-devices ND\n" " -p PORTMASK: Set mask for ports to be used by application\n" " --vm2vm [0|1|2]: disable/software(default)/hardware vm2vm comms\n" - " --rx-retry [0|1]: disable/enable(default) retries on rx. Enable retry if destintation queue is full\n" + " --rx-retry [0|1]: disable/enable(default) retries on rx. Enable retry if destination queue is full\n" " --rx-retry-delay [0-N]: timeout(in usecond) between retries on RX. This makes effect only if retries on rx enabled\n" " --rx-retry-num [0-N]: the number of retries on rx. This makes effect only if retries on rx enabled\n" " --mergeable [0|1]: disable(default)/enable RX mergeable buffers\n" @@ -1289,7 +1289,7 @@ switch_worker(void *arg __rte_unused) struct vhost_dev *vdev; struct mbuf_table *tx_q; - RTE_LOG(INFO, VHOST_DATA, "Procesing on Core %u started\n", lcore_id); + RTE_LOG(INFO, VHOST_DATA, "Processing on Core %u started\n", lcore_id); tx_q = &lcore_tx_queue[lcore_id]; for (i = 0; i < rte_lcore_count(); i++) { @@ -1333,7 +1333,7 @@ switch_worker(void *arg __rte_unused) /* * Remove a device from the specific data core linked list and from the - * main linked list. Synchonization occurs through the use of the + * main linked list. Synchronization occurs through the use of the * lcore dev_removal_flag. Device is made volatile here to avoid re-ordering * of dev->remove=1 which can cause an infinite loop in the rte_pause loop. 
*/ diff --git a/examples/vhost/virtio_net.c b/examples/vhost/virtio_net.c index 9064fc3a..1b646059 100644 --- a/examples/vhost/virtio_net.c +++ b/examples/vhost/virtio_net.c @@ -62,7 +62,7 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, struct rte_mbuf *m, uint16_t desc_idx) { uint32_t desc_avail, desc_offset; - uint64_t desc_chunck_len; + uint64_t desc_chunk_len; uint32_t mbuf_avail, mbuf_offset; uint32_t cpy_len; struct vring_desc *desc; @@ -72,10 +72,10 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, uint16_t nr_desc = 1; desc = &vr->desc[desc_idx]; - desc_chunck_len = desc->len; + desc_chunk_len = desc->len; desc_gaddr = desc->addr; desc_addr = rte_vhost_va_from_guest_pa( - dev->mem, desc_gaddr, &desc_chunck_len); + dev->mem, desc_gaddr, &desc_chunk_len); /* * Checking of 'desc_addr' placed outside of 'unlikely' macro to avoid * performance issue with some versions of gcc (4.8.4 and 5.3.0) which @@ -87,7 +87,7 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, rte_prefetch0((void *)(uintptr_t)desc_addr); /* write virtio-net header */ - if (likely(desc_chunck_len >= dev->hdr_len)) { + if (likely(desc_chunk_len >= dev->hdr_len)) { *(struct virtio_net_hdr *)(uintptr_t)desc_addr = virtio_hdr; desc_offset = dev->hdr_len; } else { @@ -112,11 +112,11 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, src += len; } - desc_chunck_len = desc->len - dev->hdr_len; + desc_chunk_len = desc->len - dev->hdr_len; desc_gaddr += dev->hdr_len; desc_addr = rte_vhost_va_from_guest_pa( dev->mem, desc_gaddr, - &desc_chunck_len); + &desc_chunk_len); if (unlikely(!desc_addr)) return -1; @@ -147,28 +147,28 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, return -1; desc = &vr->desc[desc->next]; - desc_chunck_len = desc->len; + desc_chunk_len = desc->len; desc_gaddr = desc->addr; desc_addr = rte_vhost_va_from_guest_pa( - dev->mem, desc_gaddr, &desc_chunck_len); + dev->mem, desc_gaddr, &desc_chunk_len); if 
(unlikely(!desc_addr)) return -1; desc_offset = 0; desc_avail = desc->len; - } else if (unlikely(desc_chunck_len == 0)) { - desc_chunck_len = desc_avail; + } else if (unlikely(desc_chunk_len == 0)) { + desc_chunk_len = desc_avail; desc_gaddr += desc_offset; desc_addr = rte_vhost_va_from_guest_pa(dev->mem, desc_gaddr, - &desc_chunck_len); + &desc_chunk_len); if (unlikely(!desc_addr)) return -1; desc_offset = 0; } - cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail); + cpy_len = RTE_MIN(desc_chunk_len, mbuf_avail); rte_memcpy((void *)((uintptr_t)(desc_addr + desc_offset)), rte_pktmbuf_mtod_offset(m, void *, mbuf_offset), cpy_len); @@ -177,7 +177,7 @@ enqueue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, mbuf_offset += cpy_len; desc_avail -= cpy_len; desc_offset += cpy_len; - desc_chunck_len -= cpy_len; + desc_chunk_len -= cpy_len; } return 0; @@ -246,7 +246,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, struct vring_desc *desc; uint64_t desc_addr, desc_gaddr; uint32_t desc_avail, desc_offset; - uint64_t desc_chunck_len; + uint64_t desc_chunk_len; uint32_t mbuf_avail, mbuf_offset; uint32_t cpy_len; struct rte_mbuf *cur = m, *prev = m; @@ -258,10 +258,10 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, (desc->flags & VRING_DESC_F_INDIRECT)) return -1; - desc_chunck_len = desc->len; + desc_chunk_len = desc->len; desc_gaddr = desc->addr; desc_addr = rte_vhost_va_from_guest_pa( - dev->mem, desc_gaddr, &desc_chunck_len); + dev->mem, desc_gaddr, &desc_chunk_len); if (unlikely(!desc_addr)) return -1; @@ -275,10 +275,10 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, * header. 
*/ desc = &vr->desc[desc->next]; - desc_chunck_len = desc->len; + desc_chunk_len = desc->len; desc_gaddr = desc->addr; desc_addr = rte_vhost_va_from_guest_pa( - dev->mem, desc_gaddr, &desc_chunck_len); + dev->mem, desc_gaddr, &desc_chunk_len); if (unlikely(!desc_addr)) return -1; rte_prefetch0((void *)(uintptr_t)desc_addr); @@ -290,7 +290,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, mbuf_offset = 0; mbuf_avail = m->buf_len - RTE_PKTMBUF_HEADROOM; while (1) { - cpy_len = RTE_MIN(desc_chunck_len, mbuf_avail); + cpy_len = RTE_MIN(desc_chunk_len, mbuf_avail); rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset), (void *)((uintptr_t)(desc_addr + desc_offset)), @@ -300,7 +300,7 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, mbuf_offset += cpy_len; desc_avail -= cpy_len; desc_offset += cpy_len; - desc_chunck_len -= cpy_len; + desc_chunk_len -= cpy_len; /* This desc reaches to its end, get the next one */ if (desc_avail == 0) { @@ -312,22 +312,22 @@ dequeue_pkt(struct vhost_dev *dev, struct rte_vhost_vring *vr, return -1; desc = &vr->desc[desc->next]; - desc_chunck_len = desc->len; + desc_chunk_len = desc->len; desc_gaddr = desc->addr; desc_addr = rte_vhost_va_from_guest_pa( - dev->mem, desc_gaddr, &desc_chunck_len); + dev->mem, desc_gaddr, &desc_chunk_len); if (unlikely(!desc_addr)) return -1; rte_prefetch0((void *)(uintptr_t)desc_addr); desc_offset = 0; desc_avail = desc->len; - } else if (unlikely(desc_chunck_len == 0)) { - desc_chunck_len = desc_avail; + } else if (unlikely(desc_chunk_len == 0)) { + desc_chunk_len = desc_avail; desc_gaddr += desc_offset; desc_addr = rte_vhost_va_from_guest_pa(dev->mem, desc_gaddr, - &desc_chunck_len); + &desc_chunk_len); if (unlikely(!desc_addr)) return -1; diff --git a/examples/vm_power_manager/channel_monitor.c b/examples/vm_power_manager/channel_monitor.c index d767423a..97b8def7 100644 --- a/examples/vm_power_manager/channel_monitor.c +++ 
b/examples/vm_power_manager/channel_monitor.c @@ -404,7 +404,7 @@ get_pcpu_to_control(struct policy *pol) /* * So now that we're handling virtual and physical cores, we need to - * differenciate between them when adding them to the branch monitor. + * differentiate between them when adding them to the branch monitor. * Virtual cores need to be converted to physical cores. */ if (pol->pkt.core_type == RTE_POWER_CORE_TYPE_VIRTUAL) { diff --git a/examples/vm_power_manager/power_manager.h b/examples/vm_power_manager/power_manager.h index d35f8cbe..d51039e2 100644 --- a/examples/vm_power_manager/power_manager.h +++ b/examples/vm_power_manager/power_manager.h @@ -224,7 +224,7 @@ int power_manager_enable_turbo_core(unsigned int core_num); int power_manager_disable_turbo_core(unsigned int core_num); /** - * Get the current freuency of the core specified by core_num + * Get the current frequency of the core specified by core_num * * @param core_num * The core number to get the current frequency diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c index 2c00a942..2a294635 100644 --- a/examples/vmdq/main.c +++ b/examples/vmdq/main.c @@ -62,7 +62,7 @@ static uint8_t rss_enable; /* Default structure for VMDq. 8< */ -/* empty vmdq configuration structure. Filled in programatically */ +/* empty vmdq configuration structure. Filled in programmatically */ static const struct rte_eth_conf vmdq_conf_default = { .rxmode = { .mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY, diff --git a/kernel/linux/kni/kni_fifo.h b/kernel/linux/kni/kni_fifo.h index 5c91b553..791552af 100644 --- a/kernel/linux/kni/kni_fifo.h +++ b/kernel/linux/kni/kni_fifo.h @@ -41,7 +41,7 @@ kni_fifo_put(struct rte_kni_fifo *fifo, void **data, uint32_t num) } /** - * Get up to num elements from the fifo. Return the number actully read + * Get up to num elements from the fifo. 
Return the number actually read */ static inline uint32_t kni_fifo_get(struct rte_kni_fifo *fifo, void **data, uint32_t num) diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c index f316d3e8..7ea30f41 100644 --- a/lib/acl/acl_bld.c +++ b/lib/acl/acl_bld.c @@ -885,7 +885,7 @@ acl_gen_range_trie(struct acl_build_context *context, return root; } - /* gather information about divirgent paths */ + /* gather information about divergent paths */ lo_00 = 0; hi_ff = UINT8_MAX; for (k = n - 1; k >= 0; k--) { diff --git a/lib/acl/acl_run_altivec.h b/lib/acl/acl_run_altivec.h index 2de6f27b..24a41eec 100644 --- a/lib/acl/acl_run_altivec.h +++ b/lib/acl/acl_run_altivec.h @@ -146,7 +146,7 @@ transition4(xmm_t next_input, const uint64_t *trans, dfa_ofs = vec_sub(t, r); - /* QUAD/SINGLE caluclations. */ + /* QUAD/SINGLE calculations. */ t = (xmm_t)vec_cmpgt((vector signed char)in, (vector signed char)tr_hi); t = (xmm_t)vec_sel( vec_sel( diff --git a/lib/acl/acl_run_avx512.c b/lib/acl/acl_run_avx512.c index 78fbe34f..3b879556 100644 --- a/lib/acl/acl_run_avx512.c +++ b/lib/acl/acl_run_avx512.c @@ -64,7 +64,7 @@ update_flow_mask(const struct acl_flow_avx512 *flow, uint32_t *fmsk, } /* - * Resolve matches for multiple categories (LE 8, use 128b instuctions/regs) + * Resolve matches for multiple categories (LE 8, use 128b instructions/regs) */ static inline void resolve_mcle8_avx512x1(uint32_t result[], diff --git a/lib/acl/acl_run_avx512x16.h b/lib/acl/acl_run_avx512x16.h index 48bb6fed..c8e6a124 100644 --- a/lib/acl/acl_run_avx512x16.h +++ b/lib/acl/acl_run_avx512x16.h @@ -10,7 +10,7 @@ */ /* - * This implementation uses 512-bit registers(zmm) and instrincts. + * This implementation uses 512-bit registers(zmm) and intrinsics. * So our main SIMD type is 512-bit width and each such variable can * process sizeof(__m512i) / sizeof(uint32_t) == 16 entries in parallel. 
*/ @@ -25,20 +25,20 @@ #define _F_(x) x##_avx512x16 /* - * Same instrincts have different syntaxis (depending on the bit-width), + * Same intrinsics have different syntax (depending on the bit-width), * so to overcome that few macros need to be defined. */ -/* Naming convention for generic epi(packed integers) type instrincts. */ +/* Naming convention for generic epi(packed integers) type intrinsics. */ #define _M_I_(x) _mm512_##x -/* Naming convention for si(whole simd integer) type instrincts. */ +/* Naming convention for si(whole simd integer) type intrinsics. */ #define _M_SI_(x) _mm512_##x##_si512 -/* Naming convention for masked gather type instrincts. */ +/* Naming convention for masked gather type intrinsics. */ #define _M_MGI_(x) _mm512_##x -/* Naming convention for gather type instrincts. */ +/* Naming convention for gather type intrinsics. */ #define _M_GI_(name, idx, base, scale) _mm512_##name(idx, base, scale) /* num/mask of transitions per SIMD regs */ @@ -239,7 +239,7 @@ _F_(gather_bytes)(__m512i zero, const __m512i p[2], const uint32_t m[2], } /* - * Resolve matches for multiple categories (GT 8, use 512b instuctions/regs) + * Resolve matches for multiple categories (GT 8, use 512b instructions/regs) */ static inline void resolve_mcgt8_avx512x1(uint32_t result[], diff --git a/lib/acl/acl_run_avx512x8.h b/lib/acl/acl_run_avx512x8.h index 61ac9d1b..edd5c554 100644 --- a/lib/acl/acl_run_avx512x8.h +++ b/lib/acl/acl_run_avx512x8.h @@ -10,7 +10,7 @@ */ /* - * This implementation uses 256-bit registers(ymm) and instrincts. + * This implementation uses 256-bit registers(ymm) and intrinsics. * So our main SIMD type is 256-bit width and each such variable can * process sizeof(__m256i) / sizeof(uint32_t) == 8 entries in parallel. 
*/ @@ -25,20 +25,20 @@ #define _F_(x) x##_avx512x8 /* - * Same instrincts have different syntaxis (depending on the bit-width), + * Same intrinsics have different syntax (depending on the bit-width), * so to overcome that few macros need to be defined. */ -/* Naming convention for generic epi(packed integers) type instrincts. */ +/* Naming convention for generic epi(packed integers) type intrinsics. */ #define _M_I_(x) _mm256_##x -/* Naming convention for si(whole simd integer) type instrincts. */ +/* Naming convention for si(whole simd integer) type intrinsics. */ #define _M_SI_(x) _mm256_##x##_si256 -/* Naming convention for masked gather type instrincts. */ +/* Naming convention for masked gather type intrinsics. */ #define _M_MGI_(x) _mm256_m##x -/* Naming convention for gather type instrincts. */ +/* Naming convention for gather type intrinsics. */ #define _M_GI_(name, idx, base, scale) _mm256_##name(base, idx, scale) /* num/mask of transitions per SIMD regs */ diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c index db84add7..9563274c 100644 --- a/lib/bpf/bpf_convert.c +++ b/lib/bpf/bpf_convert.c @@ -412,7 +412,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, BPF_EMIT_JMP; break; - /* ldxb 4 * ([14] & 0xf) is remaped into 6 insns. */ + /* ldxb 4 * ([14] & 0xf) is remapped into 6 insns. */ case BPF_LDX | BPF_MSH | BPF_B: /* tmp = A */ *insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_A); @@ -428,7 +428,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, *insn = BPF_MOV64_REG(BPF_REG_A, BPF_REG_TMP); break; - /* RET_K is remaped into 2 insns. RET_A case doesn't need an + /* RET_K is remapped into 2 insns. RET_A case doesn't need an * extra mov as EBPF_REG_0 is already mapped into BPF_REG_A. 
*/ case BPF_RET | BPF_A: diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 09331258..2426d57a 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -856,7 +856,7 @@ eval_mbuf_store(const struct bpf_reg_val *rv, uint32_t opsz) static const struct { size_t off; size_t sz; - } mbuf_ro_fileds[] = { + } mbuf_ro_fields[] = { { .off = offsetof(struct rte_mbuf, buf_addr), }, { .off = offsetof(struct rte_mbuf, refcnt), }, { .off = offsetof(struct rte_mbuf, nb_segs), }, @@ -866,13 +866,13 @@ eval_mbuf_store(const struct bpf_reg_val *rv, uint32_t opsz) { .off = offsetof(struct rte_mbuf, priv_size), }, }; - for (i = 0; i != RTE_DIM(mbuf_ro_fileds) && - (mbuf_ro_fileds[i].off + mbuf_ro_fileds[i].sz <= - rv->u.max || rv->u.max + opsz <= mbuf_ro_fileds[i].off); + for (i = 0; i != RTE_DIM(mbuf_ro_fields) && + (mbuf_ro_fields[i].off + mbuf_ro_fields[i].sz <= + rv->u.max || rv->u.max + opsz <= mbuf_ro_fields[i].off); i++) ; - if (i != RTE_DIM(mbuf_ro_fileds)) + if (i != RTE_DIM(mbuf_ro_fields)) return "store to the read-only mbuf field"; return NULL; diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h index 59ea5a54..5f5cd029 100644 --- a/lib/cryptodev/rte_cryptodev.h +++ b/lib/cryptodev/rte_cryptodev.h @@ -27,7 +27,7 @@ extern "C" { #include "rte_cryptodev_trace_fp.h" -extern const char **rte_cyptodev_names; +extern const char **rte_cryptodev_names; /* Logging Macros */ diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h index 9942c6ec..4abe79c5 100644 --- a/lib/dmadev/rte_dmadev.h +++ b/lib/dmadev/rte_dmadev.h @@ -533,7 +533,7 @@ struct rte_dma_port_param { * @note If some fields can not be supported by the * hardware/driver, then the driver ignores those fields. * Please check driver-specific documentation for limitations - * and capablites. + * and capabilities. */ __extension__ struct { @@ -731,7 +731,7 @@ enum rte_dma_status_code { /** The operation completed successfully. 
*/ RTE_DMA_STATUS_SUCCESSFUL, /** The operation failed to complete due abort by user. - * This is mainly used when processing dev_stop, user could modidy the + * This is mainly used when processing dev_stop, user could modify the * descriptors (e.g. change one bit to tell hardware abort this job), * it allows outstanding requests to be complete as much as possible, * so reduce the time to stop the device. diff --git a/lib/eal/arm/include/rte_cycles_32.h b/lib/eal/arm/include/rte_cycles_32.h index f79718ce..cec4d69e 100644 --- a/lib/eal/arm/include/rte_cycles_32.h +++ b/lib/eal/arm/include/rte_cycles_32.h @@ -30,7 +30,7 @@ extern "C" { /** * This call is easily portable to any architecture, however, - * it may require a system call and inprecise for some tasks. + * it may require a system call and imprecise for some tasks. */ static inline uint64_t __rte_rdtsc_syscall(void) diff --git a/lib/eal/common/eal_common_trace_ctf.c b/lib/eal/common/eal_common_trace_ctf.c index 33e419aa..8f245941 100644 --- a/lib/eal/common/eal_common_trace_ctf.c +++ b/lib/eal/common/eal_common_trace_ctf.c @@ -321,7 +321,7 @@ meta_fix_freq(struct trace *trace, char *meta) static void meta_fix_freq_offset(struct trace *trace, char *meta) { - uint64_t uptime_tickes_floor, uptime_ticks, freq, uptime_sec; + uint64_t uptime_ticks_floor, uptime_ticks, freq, uptime_sec; uint64_t offset, offset_s; char *str; int rc; @@ -329,12 +329,12 @@ meta_fix_freq_offset(struct trace *trace, char *meta) uptime_ticks = trace->uptime_ticks & ((1ULL << __RTE_TRACE_EVENT_HEADER_ID_SHIFT) - 1); freq = rte_get_tsc_hz(); - uptime_tickes_floor = RTE_ALIGN_MUL_FLOOR(uptime_ticks, freq); + uptime_ticks_floor = RTE_ALIGN_MUL_FLOOR(uptime_ticks, freq); - uptime_sec = uptime_tickes_floor / freq; + uptime_sec = uptime_ticks_floor / freq; offset_s = trace->epoch_sec - uptime_sec; - offset = uptime_ticks - uptime_tickes_floor; + offset = uptime_ticks - uptime_ticks_floor; offset += trace->epoch_nsec * (freq / NSEC_PER_SEC); str 
= RTE_PTR_ADD(meta, trace->ctf_meta_offset_freq_off_s); diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c index 10aa91cc..9f720bdc 100644 --- a/lib/eal/freebsd/eal_interrupts.c +++ b/lib/eal/freebsd/eal_interrupts.c @@ -234,7 +234,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, rte_spinlock_lock(&intr_lock); - /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; @@ -288,7 +288,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, rte_spinlock_lock(&intr_lock); - /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; diff --git a/lib/eal/include/generic/rte_pflock.h b/lib/eal/include/generic/rte_pflock.h index b9de063c..e7bb29b3 100644 --- a/lib/eal/include/generic/rte_pflock.h +++ b/lib/eal/include/generic/rte_pflock.h @@ -157,7 +157,7 @@ rte_pflock_write_lock(rte_pflock_t *pf) uint16_t ticket, w; /* Acquire ownership of write-phase. - * This is same as rte_tickelock_lock(). + * This is the same as rte_ticketlock_lock(). */ ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED); rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE); diff --git a/lib/eal/include/rte_malloc.h b/lib/eal/include/rte_malloc.h index ed02e151..3892519f 100644 --- a/lib/eal/include/rte_malloc.h +++ b/lib/eal/include/rte_malloc.h @@ -58,7 +58,7 @@ rte_malloc(const char *type, size_t size, unsigned align) __rte_alloc_size(2); /** - * Allocate zero'ed memory from the heap. + * Allocate zeroed memory from the heap. * * Equivalent to rte_malloc() except that the memory zone is * initialised with zeros. 
In NUMA systems, the memory allocated resides on the @@ -189,7 +189,7 @@ rte_malloc_socket(const char *type, size_t size, unsigned align, int socket) __rte_alloc_size(2); /** - * Allocate zero'ed memory from the heap. + * Allocate zeroed memory from the heap. * * Equivalent to rte_malloc() except that the memory zone is * initialised with zeros. diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index 6e3925ef..70060bf3 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -589,7 +589,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, rte_spinlock_lock(&intr_lock); - /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) { if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; @@ -639,7 +639,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, rte_spinlock_lock(&intr_lock); - /* check if the insterrupt source for the fd is existent */ + /* check if the interrupt source for the fd is existent */ TAILQ_FOREACH(src, &intr_sources, next) if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) break; diff --git a/lib/eal/linux/eal_vfio.h b/lib/eal/linux/eal_vfio.h index 6ebaca6a..c5d5f705 100644 --- a/lib/eal/linux/eal_vfio.h +++ b/lib/eal/linux/eal_vfio.h @@ -103,7 +103,7 @@ struct vfio_group { typedef int (*vfio_dma_func_t)(int); /* Custom memory region DMA mapping function prototype. - * Takes VFIO container fd, virtual address, phisical address, length and + * Takes VFIO container fd, virtual address, physical address, length and * operation type (0 to unmap 1 for map) as a parameters. * Returns 0 on success, -1 on error. 
**/ diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h index 23ead6d3..245aa603 100644 --- a/lib/eal/windows/eal_windows.h +++ b/lib/eal/windows/eal_windows.h @@ -63,7 +63,7 @@ unsigned int eal_socket_numa_node(unsigned int socket_id); * @param arg * Argument to the called function. * @return - * 0 on success, netagive error code on failure. + * 0 on success, negative error code on failure. */ int eal_intr_thread_schedule(void (*func)(void *arg), void *arg); diff --git a/lib/eal/windows/include/dirent.h b/lib/eal/windows/include/dirent.h index 869a5983..34eb077f 100644 --- a/lib/eal/windows/include/dirent.h +++ b/lib/eal/windows/include/dirent.h @@ -440,7 +440,7 @@ opendir(const char *dirname) * display correctly on console. The problem can be fixed in two ways: * (1) change the character set of console to 1252 using chcp utility * and use Lucida Console font, or (2) use _cprintf function when - * writing to console. The _cprinf() will re-encode ANSI strings to the + * writing to console. The _cprintf() will re-encode ANSI strings to the * console code page so many non-ASCII characters will display correctly. */ static struct dirent* @@ -579,7 +579,7 @@ dirent_mbstowcs_s( wcstr[n] = 0; } - /* Length of resuting multi-byte string WITH zero + /* Length of resulting multi-byte string WITH zero *terminator */ if (pReturnValue) diff --git a/lib/eal/windows/include/fnmatch.h b/lib/eal/windows/include/fnmatch.h index c272f65c..c6b226bd 100644 --- a/lib/eal/windows/include/fnmatch.h +++ b/lib/eal/windows/include/fnmatch.h @@ -26,14 +26,14 @@ extern "C" { #define FNM_PREFIX_DIRS 0x20 /** - * This function is used for searhing a given string source + * This function is used for searching a given string source * with the given regular expression pattern. 
* * @param pattern * regular expression notation describing the pattern to match * * @param string - * source string to searcg for the pattern + * source string to search for the pattern * * @param flag * containing information about the pattern diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h index 915afd9d..f2ee1a9c 100644 --- a/lib/eal/x86/include/rte_atomic.h +++ b/lib/eal/x86/include/rte_atomic.h @@ -60,7 +60,7 @@ extern "C" { * Basic idea is to use lock prefixed add with some dummy memory location * as the destination. From their experiments 128B(2 cache lines) below * current stack pointer looks like a good candidate. - * So below we use that techinque for rte_smp_mb() implementation. + * So below we use that technique for rte_smp_mb() implementation. */ static __rte_always_inline void diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 809416d9..3182b52c 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -3334,7 +3334,7 @@ handle_rxa_get_queue_conf(const char *cmd __rte_unused, token = strtok(NULL, "\0"); if (token != NULL) RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev" - " telemetry command, igrnoring"); + " telemetry command, ignoring"); if (rte_event_eth_rx_adapter_queue_conf_get(rx_adapter_id, eth_dev_id, rx_queue_id, &queue_conf)) { @@ -3398,7 +3398,7 @@ handle_rxa_get_queue_stats(const char *cmd __rte_unused, token = strtok(NULL, "\0"); if (token != NULL) RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev" - " telemetry command, igrnoring"); + " telemetry command, ignoring"); if (rte_event_eth_rx_adapter_queue_stats_get(rx_adapter_id, eth_dev_id, rx_queue_id, &q_stats)) { @@ -3460,7 +3460,7 @@ handle_rxa_queue_stats_reset(const char *cmd __rte_unused, token = strtok(NULL, "\0"); if (token != NULL) RTE_EDEV_LOG_ERR("Extra parameters passed to eventdev" - " telemetry command, igrnoring"); + " telemetry command, 
ignoring"); if (rte_event_eth_rx_adapter_queue_stats_reset(rx_adapter_id, eth_dev_id, diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c index 6ca180d7..ad0b85bc 100644 --- a/lib/fib/rte_fib.c +++ b/lib/fib/rte_fib.c @@ -40,10 +40,10 @@ EAL_REGISTER_TAILQ(rte_fib_tailq) struct rte_fib { char name[RTE_FIB_NAMESIZE]; enum rte_fib_type type; /**< Type of FIB struct */ - struct rte_rib *rib; /**< RIB helper datastruct */ + struct rte_rib *rib; /**< RIB helper datastructure */ void *dp; /**< pointer to the dataplane struct*/ rte_fib_lookup_fn_t lookup; /**< fib lookup function */ - rte_fib_modify_fn_t modify; /**< modify fib datastruct */ + rte_fib_modify_fn_t modify; /**< modify fib datastructure */ uint64_t def_nh; }; diff --git a/lib/fib/rte_fib.h b/lib/fib/rte_fib.h index b3c59dfa..e592d325 100644 --- a/lib/fib/rte_fib.h +++ b/lib/fib/rte_fib.h @@ -189,7 +189,7 @@ rte_fib_lookup_bulk(struct rte_fib *fib, uint32_t *ips, * FIB object handle * @return * Pointer on the dataplane struct on success - * NULL othervise + * NULL otherwise */ void * rte_fib_get_dp(struct rte_fib *fib); @@ -201,7 +201,7 @@ rte_fib_get_dp(struct rte_fib *fib); * FIB object handle * @return * Pointer on the RIB on success - * NULL othervise + * NULL otherwise */ struct rte_rib * rte_fib_get_rib(struct rte_fib *fib); diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c index be79efe0..4d35ea32 100644 --- a/lib/fib/rte_fib6.c +++ b/lib/fib/rte_fib6.c @@ -40,10 +40,10 @@ EAL_REGISTER_TAILQ(rte_fib6_tailq) struct rte_fib6 { char name[FIB6_NAMESIZE]; enum rte_fib6_type type; /**< Type of FIB struct */ - struct rte_rib6 *rib; /**< RIB helper datastruct */ + struct rte_rib6 *rib; /**< RIB helper datastructure */ void *dp; /**< pointer to the dataplane struct*/ rte_fib6_lookup_fn_t lookup; /**< fib lookup function */ - rte_fib6_modify_fn_t modify; /**< modify fib datastruct */ + rte_fib6_modify_fn_t modify; /**< modify fib datastructure */ uint64_t def_nh; }; diff --git a/lib/fib/rte_fib6.h 
b/lib/fib/rte_fib6.h index 95879af9..cb133719 100644 --- a/lib/fib/rte_fib6.h +++ b/lib/fib/rte_fib6.h @@ -184,7 +184,7 @@ rte_fib6_lookup_bulk(struct rte_fib6 *fib, * FIB6 object handle * @return * Pointer on the dataplane struct on success - * NULL othervise + * NULL otherwise */ void * rte_fib6_get_dp(struct rte_fib6 *fib); @@ -196,7 +196,7 @@ rte_fib6_get_dp(struct rte_fib6 *fib); * FIB object handle * @return * Pointer on the RIB6 on success - * NULL othervise + * NULL otherwise */ struct rte_rib6 * rte_fib6_get_rib(struct rte_fib6 *fib); diff --git a/lib/graph/graph_populate.c b/lib/graph/graph_populate.c index 093512ef..62d2d69c 100644 --- a/lib/graph/graph_populate.c +++ b/lib/graph/graph_populate.c @@ -46,7 +46,7 @@ graph_fp_mem_calc_size(struct graph *graph) } static void -graph_header_popluate(struct graph *_graph) +graph_header_populate(struct graph *_graph) { struct rte_graph *graph = _graph->graph; @@ -184,7 +184,7 @@ graph_fp_mem_populate(struct graph *graph) { int rc; - graph_header_popluate(graph); + graph_header_populate(graph); graph_nodes_populate(graph); rc = graph_node_nexts_populate(graph); rc |= graph_src_nodes_populate(graph); diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h index b4628cfc..6995b414 100644 --- a/lib/hash/rte_crc_arm64.h +++ b/lib/hash/rte_crc_arm64.h @@ -61,7 +61,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val) } /** - * Allow or disallow use of arm64 SIMD instrinsics for CRC32 hash + * Allow or disallow use of arm64 SIMD intrinsics for CRC32 hash * calculation. * * @param alg diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c index 6847e36f..e27ac8ac 100644 --- a/lib/hash/rte_thash.c +++ b/lib/hash/rte_thash.c @@ -27,7 +27,7 @@ static struct rte_tailq_elem rte_thash_tailq = { EAL_REGISTER_TAILQ(rte_thash_tailq) /** - * Table of some irreducible polinomials over GF(2). + * Table of some irreducible polynomials over GF(2). * For lfsr they are represented in BE bit order, and * x^0 is masked out. 
* For example, poly x^5 + x^2 + 1 will be represented diff --git a/lib/ip_frag/ip_frag_internal.c b/lib/ip_frag/ip_frag_internal.c index b436a4c9..01849284 100644 --- a/lib/ip_frag/ip_frag_internal.c +++ b/lib/ip_frag/ip_frag_internal.c @@ -172,7 +172,7 @@ ip_frag_process(struct ip_frag_pkt *fp, struct rte_ip_frag_death_row *dr, mb = ipv6_frag_reassemble(fp); } - /* errorenous set of fragments. */ + /* erroneous set of fragments. */ if (mb == NULL) { /* report an error. */ diff --git a/lib/ipsec/ipsec_sad.c b/lib/ipsec/ipsec_sad.c index 531e1e32..8548e2cf 100644 --- a/lib/ipsec/ipsec_sad.c +++ b/lib/ipsec/ipsec_sad.c @@ -69,14 +69,14 @@ add_specific(struct rte_ipsec_sad *sad, const void *key, int key_type, void *sa) { void *tmp_val; - int ret, notexist; + int ret, nonexistent; /* Check if the key is present in the table. - * Need for further accaunting in cnt_arr + * Need for further accounting in cnt_arr */ ret = rte_hash_lookup_with_hash(sad->hash[key_type], key, rte_hash_crc(key, sad->keysize[key_type], sad->init_val)); - notexist = (ret == -ENOENT); + nonexistent = (ret == -ENOENT); /* Add an SA to the corresponding table.*/ ret = rte_hash_add_key_with_hash_data(sad->hash[key_type], key, @@ -107,9 +107,9 @@ add_specific(struct rte_ipsec_sad *sad, const void *key, if (ret < 0) return ret; if (key_type == RTE_IPSEC_SAD_SPI_DIP) - sad->cnt_arr[ret].cnt_dip += notexist; + sad->cnt_arr[ret].cnt_dip += nonexistent; else - sad->cnt_arr[ret].cnt_dip_sip += notexist; + sad->cnt_arr[ret].cnt_dip_sip += nonexistent; return 0; } diff --git a/lib/ipsec/ipsec_telemetry.c b/lib/ipsec/ipsec_telemetry.c index b8b08404..9a91e471 100644 --- a/lib/ipsec/ipsec_telemetry.c +++ b/lib/ipsec/ipsec_telemetry.c @@ -236,7 +236,7 @@ RTE_INIT(rte_ipsec_telemetry_init) "Return list of IPsec SAs with telemetry enabled."); rte_telemetry_register_cmd("/ipsec/sa/stats", handle_telemetry_cmd_ipsec_sa_stats, - "Returns IPsec SA stastistics. Parameters: int sa_spi"); + "Returns IPsec SA statistics. 
Parameters: int sa_spi"); rte_telemetry_register_cmd("/ipsec/sa/details", handle_telemetry_cmd_ipsec_sa_details, "Returns IPsec SA configuration. Parameters: int sa_spi"); diff --git a/lib/ipsec/rte_ipsec_sad.h b/lib/ipsec/rte_ipsec_sad.h index b65d2958..a3ae57df 100644 --- a/lib/ipsec/rte_ipsec_sad.h +++ b/lib/ipsec/rte_ipsec_sad.h @@ -153,7 +153,7 @@ rte_ipsec_sad_destroy(struct rte_ipsec_sad *sad); * @param keys * Array of keys to be looked up in the SAD * @param sa - * Pointer assocoated with the keys. + * Pointer associated with the keys. * If the lookup for the given key failed, then corresponding sa * will be NULL * @param n diff --git a/lib/ipsec/sa.c b/lib/ipsec/sa.c index 1e51482c..cdb70af0 100644 --- a/lib/ipsec/sa.c +++ b/lib/ipsec/sa.c @@ -362,7 +362,7 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm) memcpy(sa->hdr, prm->tun.hdr, prm->tun.hdr_len); - /* insert UDP header if UDP encapsulation is inabled */ + /* insert UDP header if UDP encapsulation is enabled */ if (sa->type & RTE_IPSEC_SATP_NATT_ENABLE) { struct rte_udp_hdr *udph = (struct rte_udp_hdr *) &sa->hdr[prm->tun.hdr_len]; diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h index 321a419c..3d6ddd67 100644 --- a/lib/mbuf/rte_mbuf_core.h +++ b/lib/mbuf/rte_mbuf_core.h @@ -8,7 +8,7 @@ /** * @file - * This file contains definion of RTE mbuf structure itself, + * This file contains definition of RTE mbuf structure itself, * packet offload flags and some related macros. * For majority of DPDK entities, it is not recommended to include * this file directly, use include <rte_mbuf.h> instead. 
diff --git a/lib/meson.build b/lib/meson.build index 018976df..fbaa6ef7 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -3,7 +3,7 @@ # process all libraries equally, as far as possible -# "core" libs first, then others alphebetically as far as possible +# "core" libs first, then others alphabetically as far as possible # NOTE: for speed of meson runs, the dependencies in the subdirectories # sometimes skip deps that would be implied by others, e.g. if mempool is # given as a dep, no need to mention ring. This is especially true for the diff --git a/lib/net/rte_l2tpv2.h b/lib/net/rte_l2tpv2.h index b90e36cf..938a993b 100644 --- a/lib/net/rte_l2tpv2.h +++ b/lib/net/rte_l2tpv2.h @@ -143,7 +143,7 @@ struct rte_l2tpv2_msg_without_length { /** * L2TPv2 message Header contains all options except ns_nr(length, * offset size, offset padding). - * Ns and Nr MUST be toghter. + * Ns and Nr MUST be together. */ struct rte_l2tpv2_msg_without_ns_nr { rte_be16_t length; /**< length(16) */ @@ -155,7 +155,7 @@ struct rte_l2tpv2_msg_without_ns_nr { /** * L2TPv2 message Header contains all options except ns_nr(length, ns, nr). - * offset size and offset padding MUST be toghter. + * offset size and offset padding MUST be together. */ struct rte_l2tpv2_msg_without_offset { rte_be16_t length; /**< length(16) */ diff --git a/lib/pipeline/rte_swx_ctl.h b/lib/pipeline/rte_swx_ctl.h index 46d05823..82e62e70 100644 --- a/lib/pipeline/rte_swx_ctl.h +++ b/lib/pipeline/rte_swx_ctl.h @@ -369,7 +369,7 @@ struct rte_swx_table_stats { uint64_t n_pkts_miss; /** Number of packets (with either lookup hit or miss) per pipeline - * action. Array of pipeline *n_actions* elements indedex by the + * action. Array of pipeline *n_actions* elements indexed by the * pipeline-level *action_id*, therefore this array has the same size * for all the tables within the same pipeline. 
*/ @@ -629,7 +629,7 @@ struct rte_swx_learner_stats { uint64_t n_pkts_forget; /** Number of packets (with either lookup hit or miss) per pipeline action. Array of - * pipeline *n_actions* elements indedex by the pipeline-level *action_id*, therefore this + * pipeline *n_actions* elements indexed by the pipeline-level *action_id*, therefore this * array has the same size for all the tables within the same pipeline. */ uint64_t *n_pkts_action; diff --git a/lib/pipeline/rte_swx_pipeline_internal.h b/lib/pipeline/rte_swx_pipeline_internal.h index 1921fdcd..fa944c95 100644 --- a/lib/pipeline/rte_swx_pipeline_internal.h +++ b/lib/pipeline/rte_swx_pipeline_internal.h @@ -309,7 +309,7 @@ enum instruction_type { */ INSTR_ALU_CKADD_FIELD, /* src = H */ INSTR_ALU_CKADD_STRUCT20, /* src = h.header, with sizeof(header) = 20 */ - INSTR_ALU_CKADD_STRUCT, /* src = h.hdeader, with any sizeof(header) */ + INSTR_ALU_CKADD_STRUCT, /* src = h.header, with any sizeof(header) */ /* cksub dst src * dst = dst '- src @@ -1562,7 +1562,7 @@ emit_handler(struct thread *t) return; } - /* Header encapsulation (optionally, with prior header decasulation). */ + /* Header encapsulation (optionally, with prior header decapsulation). 
*/ if ((t->n_headers_out == 2) && (h1->ptr + h1->n_bytes == t->ptr) && (h0->ptr == h0->ptr0)) { diff --git a/lib/pipeline/rte_swx_pipeline_spec.c b/lib/pipeline/rte_swx_pipeline_spec.c index 8e9aa44e..07a7580a 100644 --- a/lib/pipeline/rte_swx_pipeline_spec.c +++ b/lib/pipeline/rte_swx_pipeline_spec.c @@ -2011,7 +2011,7 @@ rte_swx_pipeline_build_from_spec(struct rte_swx_pipeline *p, if (err_line) *err_line = 0; if (err_msg) - *err_msg = "Null pipeline arument."; + *err_msg = "Null pipeline argument."; status = -EINVAL; goto error; } diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index 6afd310e..25185a79 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -621,7 +621,7 @@ power_cppc_enable_turbo(unsigned int lcore_id) return -1; } - /* TODO: must set to max once enbling Turbo? Considering add condition: + /* TODO: must set to max once enabling Turbo? Considering add condition: * if ((pi->turbo_available) && (pi->curr_idx <= 1)) */ /* Max may have changed, so call to max function */ diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index 86f0b231..0bac46cd 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -298,14 +298,14 @@ rte_regexdev_get_dev_id(const char *name); * backtracking positions remembered by any tokens inside the group. * Example RegEx is `a(?>bc|b)c` if the given patterns are `abc` and `abcc` then * `a(bc|b)c` matches both where as `a(?>bc|b)c` matches only abcc because - * atomic groups don't allow backtracing back to `b`. + * atomic groups don't allow backtracking back to `b`. * * @see struct rte_regexdev_info::regexdev_capa */ #define RTE_REGEXDEV_SUPP_PCRE_BACKTRACKING_CTRL_F (1ULL << 3) /**< RegEx device support PCRE backtracking control verbs. - * Some examples of backtracing verbs are (*COMMIT), (*ACCEPT), (*FAIL), + * Some examples of backtracking verbs are (*COMMIT), (*ACCEPT), (*FAIL), * (*SKIP), (*PRUNE). 
* * @see struct rte_regexdev_info::regexdev_capa @@ -1015,7 +1015,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id, * @b EXPERIMENTAL: this API may change without prior notice. * * Compile local rule set and burn the complied result to the - * RegEx deive. + * RegEx device. * * @param dev_id * RegEx device identifier. diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h index 46ad584f..1252ca95 100644 --- a/lib/ring/rte_ring_core.h +++ b/lib/ring/rte_ring_core.h @@ -12,7 +12,7 @@ /** * @file - * This file contains definion of RTE ring structure itself, + * This file contains definition of RTE ring structure itself, * init flags and some related macros. * For majority of DPDK entities, it is not recommended to include * this file directly, use include <rte_ring.h> or * <rte_ring_elem.h> instead. diff --git a/lib/sched/rte_pie.h b/lib/sched/rte_pie.h index dfdf5723..02a987f5 100644 --- a/lib/sched/rte_pie.h +++ b/lib/sched/rte_pie.h @@ -252,7 +252,7 @@ _rte_pie_drop(const struct rte_pie_config *pie_cfg, } /** - * @brief Decides if new packet should be enqeued or dropped for non-empty queue + * @brief Decides if new packet should be enqueued or dropped for non-empty queue * * @param pie_cfg [in] config pointer to a PIE configuration parameter structure * @param pie [in,out] data pointer to PIE runtime data @@ -319,7 +319,7 @@ rte_pie_enqueue_nonempty(const struct rte_pie_config *pie_cfg, } /** - * @brief Decides if new packet should be enqeued or dropped + * @brief Decides if new packet should be enqueued or dropped * Updates run time data and gives verdict whether to enqueue or drop the packet. 
* * @param pie_cfg [in] config pointer to a PIE configuration parameter structure @@ -330,7 +330,7 @@ rte_pie_enqueue_nonempty(const struct rte_pie_config *pie_cfg, * * @return Operation status * @retval 0 enqueue the packet - * @retval 1 drop the packet based on drop probility criteria + * @retval 1 drop the packet based on drop probability criteria */ static inline int __rte_experimental diff --git a/lib/sched/rte_red.h b/lib/sched/rte_red.h index 36273cac..f5843dab 100644 --- a/lib/sched/rte_red.h +++ b/lib/sched/rte_red.h @@ -303,7 +303,7 @@ __rte_red_drop(const struct rte_red_config *red_cfg, struct rte_red *red) } /** - * @brief Decides if new packet should be enqeued or dropped in queue non-empty case + * @brief Decides if new packet should be enqueued or dropped in queue non-empty case * * @param red_cfg [in] config pointer to a RED configuration parameter structure * @param red [in,out] data pointer to RED runtime data @@ -361,7 +361,7 @@ rte_red_enqueue_nonempty(const struct rte_red_config *red_cfg, } /** - * @brief Decides if new packet should be enqeued or dropped + * @brief Decides if new packet should be enqueued or dropped * Updates run time data based on new queue size value. * Based on new queue average and RED configuration parameters * gives verdict whether to enqueue or drop the packet. 
diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index ed44808f..62b3d2e3 100644 --- a/lib/sched/rte_sched.c +++ b/lib/sched/rte_sched.c @@ -239,7 +239,7 @@ struct rte_sched_port { int socket; /* Timing */ - uint64_t time_cpu_cycles; /* Current CPU time measured in CPU cyles */ + uint64_t time_cpu_cycles; /* Current CPU time measured in CPU cycles */ uint64_t time_cpu_bytes; /* Current CPU time measured in bytes */ uint64_t time; /* Current NIC TX time measured in bytes */ struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */ diff --git a/lib/sched/rte_sched.h b/lib/sched/rte_sched.h index 484dbdcc..3c625ba1 100644 --- a/lib/sched/rte_sched.h +++ b/lib/sched/rte_sched.h @@ -360,7 +360,7 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, * * Hierarchical scheduler subport bandwidth profile add * Note that this function is safe to use in runtime for adding new - * subport bandwidth profile as it doesn't have any impact on hiearchical + * subport bandwidth profile as it doesn't have any impact on hierarchical * structure of the scheduler. * @param port * Handle to port scheduler instance diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h index f93e5f3f..c1383c2e 100644 --- a/lib/table/rte_swx_table.h +++ b/lib/table/rte_swx_table.h @@ -216,7 +216,7 @@ typedef int * operations into the same table. 
* * The typical reason an implementation may choose to split the table lookup - * operation into multiple steps is to hide the latency of the inherrent memory + * operation into multiple steps is to hide the latency of the inherent memory * read operations: before a read operation with the source data likely not in * the CPU cache, the source data prefetch is issued and the table lookup * operation is postponed in favor of some other unrelated work, which the CPU diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h index 62988d28..05863cc9 100644 --- a/lib/table/rte_swx_table_selector.h +++ b/lib/table/rte_swx_table_selector.h @@ -155,7 +155,7 @@ rte_swx_table_selector_group_set(void *table, * mechanism allows for multiple concurrent select operations into the same table. * * The typical reason an implementation may choose to split the operation into multiple steps is to - * hide the latency of the inherrent memory read operations: before a read operation with the + * hide the latency of the inherent memory read operations: before a read operation with the * source data likely not in the CPU cache, the source data prefetch is issued and the operation is * postponed in favor of some other unrelated work, which the CPU executes in parallel with the * source data being fetched into the CPU cache; later on, the operation is resumed, this time with diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index a7483167..e5ccfe47 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -534,7 +534,7 @@ telemetry_legacy_init(void) } rc = pthread_create(&t_old, NULL, socket_listener, &v1_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create legcay socket thread: %s\n", + TMTY_LOG(ERR, "Error with create legacy socket thread: %s\n", strerror(rc)); close(v1_socket.sock); v1_socket.sock = -1; diff --git a/lib/telemetry/telemetry_json.h b/lib/telemetry/telemetry_json.h index f02a12f5..db706902 100644 --- 
a/lib/telemetry/telemetry_json.h +++ b/lib/telemetry/telemetry_json.h @@ -23,7 +23,7 @@ /** * @internal * Copies a value into a buffer if the buffer has enough available space. - * Nothing written to buffer if an overflow ocurs. + * Nothing written to buffer if an overflow occurs. * This function is not for use for values larger than given buffer length. */ __rte_format_printf(3, 4) diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index a781346c..05ef70f6 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -1115,7 +1115,7 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, struct uffdio_register reg_struct; /* - * Let's register all the mmap'ed area to ensure + * Let's register all the mmapped area to ensure * alignment on page boundary. */ reg_struct.range.start = (uint64_t)(uintptr_t)reg->mmap_addr; @@ -1177,7 +1177,7 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, msg->fd_num = 0; send_vhost_reply(main_fd, msg); - /* Wait for qemu to acknolwedge it's got the addresses + /* Wait for qemu to acknowledge it's got the addresses * we've got to wait before we're allowed to generate faults. 
*/ if (read_vhost_message(main_fd, &ack_msg) <= 0) { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index b3d954aa..28a4dc1b 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -477,14 +477,14 @@ map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, while (desc_len) { uint64_t desc_addr; - uint64_t desc_chunck_len = desc_len; + uint64_t desc_chunk_len = desc_len; if (unlikely(vec_id >= BUF_VECTOR_MAX)) return -1; desc_addr = vhost_iova_to_vva(dev, vq, desc_iova, - &desc_chunck_len, + &desc_chunk_len, perm); if (unlikely(!desc_addr)) return -1; @@ -493,10 +493,10 @@ map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, buf_vec[vec_id].buf_iova = desc_iova; buf_vec[vec_id].buf_addr = desc_addr; - buf_vec[vec_id].buf_len = desc_chunck_len; + buf_vec[vec_id].buf_len = desc_chunk_len; - desc_len -= desc_chunck_len; - desc_iova += desc_chunck_len; + desc_len -= desc_chunk_len; + desc_iova += desc_chunk_len; vec_id++; } *vec_idx = vec_id;