From patchwork Fri Sep 11 13:19:21 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 77394
From: Qi Zhang
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Qi Zhang, Bruce Allan
Date: Fri, 11 Sep 2020 21:19:21 +0800
Message-Id: <20200911131954.15999-8-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20200911131954.15999-1-qi.z.zhang@intel.com>
References: <20200907112826.48493-1-qi.z.zhang@intel.com>
 <20200911131954.15999-1-qi.z.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 07/40] net/ice/base: cleanup misleading comment

The maximum Admin Queue buffer size and NVM shadow RAM sector size are
both 4 Kilobytes. Some comments refer to those as 4Kb which can be
confused with 4 Kilobits. Update the comments to use the commonly used
KB symbol instead.

Signed-off-by: Bruce Allan
Signed-off-by: Qi Zhang
Acked-by: Qiming Yang
---
 drivers/net/ice/base/ice_nvm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index bfeade6f9..2befe68d7 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -58,7 +58,7 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
  *
  * Reads a portion of the NVM, as a flat memory space. This function correctly
  * breaks read requests across Shadow RAM sectors and ensures that no single
- * read request exceeds the maximum 4Kb read for a single AdminQ command.
+ * read request exceeds the maximum 4KB read for a single AdminQ command.
  *
  * Returns a status code on failure. Note that the data pointer may be
  * partially updated if some reads succeed before a failure.
@@ -86,10 +86,10 @@ ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
 	do {
 		u32 read_size, sector_offset;
 
-		/* ice_aq_read_nvm cannot read more than 4Kb at a time.
+		/* ice_aq_read_nvm cannot read more than 4KB at a time.
 		 * Additionally, a read from the Shadow RAM may not cross over
 		 * a sector boundary. Conveniently, the sector size is also
-		 * 4Kb.
+		 * 4KB.
 		 */
 		sector_offset = offset % ICE_AQ_MAX_BUF_LEN;
 		read_size = MIN_T(u32, ICE_AQ_MAX_BUF_LEN - sector_offset,
@@ -164,7 +164,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
-	/* ice_read_flat_nvm takes into account the 4Kb AdminQ and Shadow RAM
+	/* ice_read_flat_nvm takes into account the 4KB AdminQ and Shadow RAM
 	 * sector restrictions necessary when reading from the NVM.
 	 */
 	status = ice_read_flat_nvm(hw, offset * 2, &bytes, (u8 *)data, true);
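
For context, the updated comments describe how ice_read_flat_nvm chunks a
flat read: each AdminQ request is capped at the 4 KB buffer and must not
cross a 4 KB Shadow RAM sector boundary. Below is a minimal stand-alone
sketch of that chunk-size arithmetic; the constant and helper names
(AQ_MAX_BUF_LEN, next_chunk_size) are assumptions for illustration, not
taken from the driver.

    /* Illustration only: how large the next read may be so that it neither
     * exceeds the 4 KB AdminQ buffer nor crosses a 4 KB sector boundary.
     */
    #include <stdint.h>

    #define AQ_MAX_BUF_LEN 4096u  /* assumed 4 KB, mirroring ICE_AQ_MAX_BUF_LEN */

    static uint32_t next_chunk_size(uint32_t offset, uint32_t bytes_left)
    {
    	/* How far into the current 4 KB sector the read starts. */
    	uint32_t sector_offset = offset % AQ_MAX_BUF_LEN;
    	/* Bytes that fit before the sector boundary. */
    	uint32_t in_sector = AQ_MAX_BUF_LEN - sector_offset;

    	return bytes_left < in_sector ? bytes_left : in_sector;
    }

A caller would loop, reading next_chunk_size() bytes and advancing the
offset each iteration, so every request stays within a single 4 KB sector
and under the 4 KB buffer limit, which is what the driver's do/while loop
achieves with MIN_T.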