[6/9] doc: reword log library section in prog guide

Message ID 20240513155911.31872-7-nandinipersad361@gmail.com (mailing list archive)
State Superseded
Delegated to: Thomas Monjalon
Headers
Series reword in prog guide |

Checks

Context Check Description
ci/checkpatch success coding style OK

Commit Message

Nandini Persad May 13, 2024, 3:59 p.m. UTC
Minor changes were made for syntax in the log library section and
section 7.1 of the programmer's guide. A couple of sentences at the
end of the trace library section were also edited.

Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
 doc/guides/prog_guide/cmdline.rst   | 24 +++++++++++-----------
 doc/guides/prog_guide/log_lib.rst   | 32 ++++++++++++++---------------
 doc/guides/prog_guide/trace_lib.rst | 22 ++++++++++----------
 3 files changed, 39 insertions(+), 39 deletions(-)
  

Patch

diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..6b10ab6c99 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -5,8 +5,8 @@  Command-line Library
 ====================
 
 Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
 This chapter covers the basics of the command-line library and how to use it in an application.
 
 Library Features
@@ -18,7 +18,7 @@  The DPDK command-line library supports the following features:
 
 * Ability to read and process commands taken from an input file, e.g. startup script
 
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
 
    * Strings
    * Signed/unsigned 16/32/64-bit integers
@@ -56,7 +56,7 @@  Creating a Command List File
 The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
 While these can be piped to it via standard input, using a list file is probably best.
 
-The format of the list file must be:
+The format of the list file must follow these requirements:
 
 * Comment lines start with '#' as first non-whitespace character
 
@@ -75,7 +75,7 @@  The format of the list file must be:
   * ``<IPv6>dst_ip6``
 
 * Variable fields, which take their values from a list of options,
-  have the comma-separated option list placed in braces, rather than a the type name.
+  have the comma-separated option list placed in braces, rather than the type name.
   For example,
 
   * ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@  and the callback stubs will be written to an equivalent ".c" file.
 Providing the Function Callbacks
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
 and named ``cmd_<cmdname>_parsed``.
 The function prototypes can be seen in the generated output header.
 
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
 So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
 the callback functions would be:
 
@@ -151,11 +151,11 @@  the callback functions would be:
         ...
    }
 
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
 stub functions may be generated by the script automatically using the ``--stubs`` parameter.
 
 The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To access the results structure for each command above,
 the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
 or ``struct cmd_show_port_stats_result`` respectively.
 
@@ -179,7 +179,7 @@  To integrate the script output with the application,
 we must ``#include`` the generated header into our applications C file,
 and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
 The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via script parameter).
 
 The callback functions may be in this same file, or in a separate one -
 they just need to be available to the linker at build-time.
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index ff9d1b54a2..05f032dfad 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -5,7 +5,7 @@  Log Library
 ===========
 
 The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
-By default, in a Linux application, logs are sent to syslog and also to the console.
+By default, in a Linux application, logs are sent to syslog and the console.
 On FreeBSD and Windows applications, logs are sent only to the console.
 However, the log function can be overridden by the user to use a different logging mechanism.
 
@@ -26,14 +26,14 @@  These levels, specified in ``rte_log.h`` are (from most to least important):
 
 At runtime, only messages of a configured level or above (i.e. of higher importance)
 will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
 or by the user passing the ``--log-level`` parameter to the EAL via the application.
 
 Setting Global Log Level
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
 To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
 For example::
 
 	/path/to/app --log-level=error
@@ -47,9 +47,9 @@  Within an application, the log level can be similarly set using the ``rte_log_se
 Setting Log Level for a Component
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases, such as when debugging,
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
 along with the desired level for that component.
 For example::
 
@@ -57,13 +57,13 @@  For example::
 
 	/path/to/app --log-level=lib.*:warning
 
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, you can achieve the same result using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
 
 Using Logging APIs to Generate Log Messages
 -------------------------------------------
 
-To output log messages, ``rte_log()`` API function should be used.
-As well as the log message, ``rte_log()`` takes two additional parameters:
+To output log messages, use the ``rte_log()`` API function.
+In addition to the log message itself, ``rte_log()`` takes two parameters:
 
 * The log level
 * The log component type
@@ -73,16 +73,16 @@  The component type is a unique id that identifies the particular DPDK component
 To get this id, each component needs to register itself at startup,
 using the macro ``RTE_LOG_REGISTER_DEFAULT``.
 This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
 In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
 They do this by:
 
 * Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
 * Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
-  place of the numeric log levels.
+  place of numeric log levels.
 
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
 and subsequent definition of a shortcut logging macro.
 It can be used as a template for any new components using DPDK logging.
 
@@ -97,10 +97,10 @@  It can be used as a template for any new components using DPDK logging.
 	it should be placed near the top of the C file using it.
 	If not, the logtype variable should be defined as an "extern int" near the top of the file.
 
-	Similarly, if logging is to be done by multiple files in a component,
-	only one file should register the logtype via the macro,
+	Similarly, if logging will be done by multiple files in a component,
+	only one file should register the logtype via the macro
 	and the logtype should be defined as an "extern int" in a common header file.
-	Any component-specific logging macro should similarly be defined in that header.
+	Any component-specific logging macro should be similarly defined in that header.
 
 Throughout the cfgfile library, all logging calls are therefore of the form:
 
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index e2983017d8..4177f8ba15 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -195,12 +195,12 @@  to babeltrace with no options::
 all their events, merging them in chronological order.
 
 You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. The following example greps the events for ``ethdev`` only::
 
     babeltrace /tmp/my-dpdk-trace | grep ethdev
 
 You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. Below is an example of counting the number of ``ethdev`` events::
 
     babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
 
@@ -210,14 +210,14 @@  Use the tracecompass GUI tool
 ``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
 a graphical view of events. Like ``babeltrace``, tracecompass also provides
 an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``tracecompass``, the following are the minimum required steps:
 
 - Install ``tracecompass`` to the localhost. Variants are available for Linux,
   Windows, and OS-X.
 - Launch ``tracecompass`` which will open a graphical window with trace
   management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
-  is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file
+  you want to view or analyze.
 
 For more details, refer
 `Trace Compass <https://www.eclipse.org/tracecompass/>`_.
@@ -225,7 +225,7 @@  For more details, refer
 Quick start
 -----------
 
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
 
 - Start the dpdk-test::
 
@@ -238,8 +238,8 @@  This section steps you through the details of generating trace and viewing it.
 Implementation details
 ----------------------
 
-As DPDK trace library is designed to generate traces that uses ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
+The DPDK trace library is designed to generate traces that use the ``Common Trace
+Format (CTF)``. The ``CTF`` specification consists of the following units to create
 a trace.
 
 - ``Stream`` Sequence of packets.
@@ -249,7 +249,7 @@  a trace.
 For detailed information, refer to
 `Common Trace Format <https://diamon.org/ctf/>`_.
 
-The implementation details broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
 
 Trace metadata creation
 ~~~~~~~~~~~~~~~~~~~~~~~
@@ -277,7 +277,7 @@  per thread to enable lock less trace-emit function.
 For non lcore threads, the trace memory is allocated on the first trace
 emission.
 
-For lcore threads, if trace points are enabled through a EAL option, the trace
+For lcore threads, if trace points are enabled through an EAL option, the trace
 memory is allocated when the threads are known of DPDK
 (``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
 Otherwise, when trace points are enabled later in the life of the application,
@@ -348,7 +348,7 @@  trace.header
   | timestamp [47:0]     |
   +----------------------+
 
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits. It consists of 48 bits of timestamp and 16 bits
 event ID.
 
 The ``packet.header`` and ``packet.context`` will be written in the slow path