[v6,0/3] Introduce event link profiles

Message ID 20231003094721.5115-1-pbhagavatula@marvell.com (mailing list archive)

Pavan Nikhilesh Bhagavatula Oct. 3, 2023, 9:47 a.m. UTC
  From: Pavan Nikhilesh <pbhagavatula@marvell.com>

A collection of event queues linked to an event port can be associated
with a unique identifier called a link profile. Multiple such profiles
can be configured, subject to the event device capability, using the
function `rte_event_port_profile_links_set`, which takes the same
arguments as `rte_event_port_link` plus a profile identifier.

The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.

By default, event ports are configured to use the link profile 0 on
initialization.

Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This takes effect from the
next `rte_event_dequeue_burst` call, where the event queues associated with
the newly active link profile participate in scheduling.

A rudimentary workflow would look something like this:

Config path:

    struct rte_event_dev_info info;
    uint8_t lq[4] = {4, 5, 6, 7};
    uint8_t hq[4] = {0, 1, 2, 3};

    rte_event_dev_info_get(0, &info);
    if (info.max_profiles_per_port < 2)
        return -ENOTSUP;

    /* Link high-priority queues to profile 0 and low-priority
     * queues to profile 1 on port 0 of device 0.
     */
    rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
    rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);

Worker path:

    empty_high_deq = 0;
    empty_low_deq = 0;
    low_deq_events = 0;
    is_low_deq = 0;
    while (1) {
        deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
        if (deq == 0) {
            /* Change the link profile based on work activity on the
             * currently active profile.
             */
            if (is_low_deq) {
                empty_low_deq++;
                if (empty_low_deq == MAX_LOW_RETRY) {
                    rte_event_port_profile_switch(0, 0, 0);
                    is_low_deq = 0;
                    empty_low_deq = 0;
                }
                continue;
            }

            empty_high_deq++;
            if (empty_high_deq == MAX_HIGH_RETRY) {
                rte_event_port_profile_switch(0, 0, 1);
                is_low_deq = 1;
                empty_high_deq = 0;
            }
            continue;
        }

        // Process the event received.

        /* Return to the high-priority profile once enough
         * low-priority events have been drained.
         */
        if (is_low_deq && ++low_deq_events == MAX_LOW_EVENTS) {
            rte_event_port_profile_switch(0, 0, 0);
            is_low_deq = 0;
            low_deq_events = 0;
        }
    }

An application could use heuristics about the load/activity of a given
event port and change its active profile to adapt to the traffic pattern.

An unlink function `rte_event_port_profile_unlink` is provided to
modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.

Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.

v6 Changes:
----------
- Fix compilation.

v5 Changes:
----------
- Rebase on next-event

v4 Changes:
----------
- Address review comments (Jerin).

v3 Changes:
----------
- Rebase to next-eventdev
- Rename testcase name to match API.

v2 Changes:
----------
- Fix compilation.

Pavan Nikhilesh (3):
  eventdev: introduce link profiles
  event/cnxk: implement event link profiles
  test/event: add event link profile test

 app/test/test_eventdev.c                  | 117 +++++++++++
 config/rte_config.h                       |   1 +
 doc/guides/eventdevs/cnxk.rst             |   1 +
 doc/guides/eventdevs/features/cnxk.ini    |   3 +-
 doc/guides/eventdevs/features/default.ini |   1 +
 doc/guides/prog_guide/eventdev.rst        |  40 ++++
 doc/guides/rel_notes/release_23_11.rst    |  13 ++
 drivers/common/cnxk/roc_nix_inl_dev.c     |   4 +-
 drivers/common/cnxk/roc_sso.c             |  18 +-
 drivers/common/cnxk/roc_sso.h             |   8 +-
 drivers/common/cnxk/roc_sso_priv.h        |   4 +-
 drivers/event/cnxk/cn10k_eventdev.c       |  45 +++--
 drivers/event/cnxk/cn10k_worker.c         |  11 ++
 drivers/event/cnxk/cn10k_worker.h         |   1 +
 drivers/event/cnxk/cn9k_eventdev.c        |  74 ++++---
 drivers/event/cnxk/cn9k_worker.c          |  22 +++
 drivers/event/cnxk/cn9k_worker.h          |   2 +
 drivers/event/cnxk/cnxk_eventdev.c        |  37 ++--
 drivers/event/cnxk/cnxk_eventdev.h        |  10 +-
 lib/eventdev/eventdev_pmd.h               |  59 +++++-
 lib/eventdev/eventdev_private.c           |   9 +
 lib/eventdev/eventdev_trace.h             |  32 +++
 lib/eventdev/eventdev_trace_points.c      |  12 ++
 lib/eventdev/rte_eventdev.c               | 150 +++++++++++---
 lib/eventdev/rte_eventdev.h               | 231 ++++++++++++++++++++++
 lib/eventdev/rte_eventdev_core.h          |   5 +
 lib/eventdev/rte_eventdev_trace_fp.h      |   8 +
 lib/eventdev/version.map                  |   4 +
 28 files changed, 813 insertions(+), 109 deletions(-)

--
2.25.1
  

Comments

Jerin Jacob Oct. 3, 2023, 10:36 a.m. UTC | #1
On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be associated
> with unique identifier called as a link profile, multiple such profiles
> can be configured based on the event device capability using the function
> `rte_event_port_profile_links_set` which takes arguments similar to
> `rte_event_port_link` in addition to the profile identifier.
>
> The maximum link profiles that are supported by an event device is
> advertised through the structure member

...

>
> v6 Changes:

Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks

[for-main]dell[dpdk-next-eventdev] $ git diff
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 8c15c678bf..e177ca6bdb 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -325,7 +325,7 @@ multiple link profile per port and change them run time depending up on heuristi
 Using Link profiles can reduce the overhead of linking/unlinking and wait for unlinks in progress
 in fast-path and gives applications the ability to switch between preset profiles on the fly.

-An Example use case could be as follows.
+An example use case could be as follows.

 Config path:

diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 66c4ddf37c..261594aacc 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -105,8 +105,7 @@ New Features

   * Added support for ``remaining_ticks_get`` timer adapter PMD callback
     to get the remaining ticks to expire for a given event timer.
-  * Added link profiles support for Marvell CNXK event device driver,
-    up to two link profiles are supported per event port.
+  * Added link profiles support, up to two link profiles are supported.
  
Bruce Richardson Oct. 3, 2023, 2:12 p.m. UTC | #2
On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > A collection of event queues linked to an event port can be associated
> > with unique identifier called as a link profile, multiple such profiles
> > can be configured based on the event device capability using the function
> > `rte_event_port_profile_links_set` which takes arguments similar to
> > `rte_event_port_link` in addition to the profile identifier.
> >
> > The maximum link profiles that are supported by an event device is
> > advertised through the structure member
> 
> ...
> 
> >
> > v6 Changes:
> 
> Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks
> 

I'm doing some investigation work on the software eventdev, using
eventdev_pipeline, and following these patches the eventdev_pipeline sample
no longer works for me. The error message is shown below:

    Config:
	ports: 2
	workers: 22
	packets: 33554432
	Queue-prio: 0
	qid0 type: ordered
	Cores available: 48
	Cores used: 24
	Eventdev 0: event_sw
    Stages:
	Stage 0, Type Ordered	Priority = 128

  EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
  Error setting up port 0

Parameters used when running the app:
  -l 24-47 --in-memory --vdev=event_sw0 -- \
	-r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000  -c 64 -W 500

Regards,
/Bruce
  
Jerin Jacob Oct. 3, 2023, 3:17 p.m. UTC | #3
On Tue, Oct 3, 2023 at 7:43 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> > On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> > >
> > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > >
> > > A collection of event queues linked to an event port can be associated
> > > with unique identifier called as a link profile, multiple such profiles
> > > can be configured based on the event device capability using the function
> > > `rte_event_port_profile_links_set` which takes arguments similar to
> > > `rte_event_port_link` in addition to the profile identifier.
> > >
> > > The maximum link profiles that are supported by an event device is
> > > advertised through the structure member
> >
> > ...
> >
> > >
> > > v6 Changes:
> >
> > Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks
> >
>
> I'm doing some investigation work on the software eventdev, using
> eventdev_pipeline, and following these patches the eventdev_pipeline sample
> no longer is working for me. Error message is as shown below:
>
>     Config:
>         ports: 2
>         workers: 22
>         packets: 33554432
>         Queue-prio: 0
>         qid0 type: ordered
>         Cores available: 48
>         Cores used: 24
>         Eventdev 0: event_sw
>     Stages:
>         Stage 0, Type Ordered   Priority = 128
>
>   EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
>   Error setting up port 0
>
> Parameters used when running the app:
>   -l 24-47 --in-memory --vdev=event_sw0 -- \
>         -r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000  -c 64 -W 500


The max_profiles_per_port = 1 default below is getting overridden in [1]. I
had suggested taking this path to avoid driver changes, but it looks like we
cannot rely on the common code. @Pavan Nikhilesh, could you change back to
your old version (where every driver adds max_profiles_per_port = 1)?
I will squash it.

diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 60509c6efb..5ee8bd665b 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
  return -EINVAL;

  memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+ dev_info->max_profiles_per_port = 1;

[1]
static void
sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
{
        RTE_SET_USED(dev);

        static const struct rte_event_dev_info evdev_sw_info = {
                        .driver_name = SW_PMD_NAME,
                        .max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
                        .max_event_queue_flows = SW_QID_NUM_FIDS,
                        .max_event_queue_priority_levels = SW_Q_PRIORITY_MAX,
                        .max_event_priority_levels = SW_IQS_MAX,
                        .max_event_ports = SW_PORTS_MAX,
                        .max_event_port_dequeue_depth = MAX_SW_CONS_Q_DEPTH,
                        .max_event_port_enqueue_depth = MAX_SW_PROD_Q_DEPTH,
                        .max_num_events = SW_INFLIGHT_EVENTS_TOTAL,
                        .event_dev_cap = (
                                RTE_EVENT_DEV_CAP_QUEUE_QOS |
                                RTE_EVENT_DEV_CAP_BURST_MODE |
                                RTE_EVENT_DEV_CAP_EVENT_QOS |
                                RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
                                RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
                                RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
                                RTE_EVENT_DEV_CAP_NONSEQ_MODE |
                                RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
                                RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
        };

        *info = evdev_sw_info;
}


>
> Regards,
> /Bruce
  
Pavan Nikhilesh Bhagavatula Oct. 3, 2023, 3:32 p.m. UTC | #4
> On Tue, Oct 3, 2023 at 7:43 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> > > On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> > > >
> > > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > >
> > > > A collection of event queues linked to an event port can be associated
> > > > with unique identifier called as a link profile, multiple such profiles
> > > > can be configured based on the event device capability using the
> function
> > > > `rte_event_port_profile_links_set` which takes arguments similar to
> > > > `rte_event_port_link` in addition to the profile identifier.
> > > >
> > > > The maximum link profiles that are supported by an event device is
> > > > advertised through the structure member
> > >
> > > ...
> > >
> > > >
> > > > v6 Changes:
> > >
> > > Series applied to dpdk-next-net-eventdev/for-main with following
> changes. Thanks
> > >
> >
> > I'm doing some investigation work on the software eventdev, using
> > eventdev_pipeline, and following these patches the eventdev_pipeline
> sample
> > no longer is working for me. Error message is as shown below:
> >
> >     Config:
> >         ports: 2
> >         workers: 22
> >         packets: 33554432
> >         Queue-prio: 0
> >         qid0 type: ordered
> >         Cores available: 48
> >         Cores used: 24
> >         Eventdev 0: event_sw
> >     Stages:
> >         Stage 0, Type Ordered   Priority = 128
> >
> >   EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
> >   Error setting up port 0
> >
> > Parameters used when running the app:
> >   -l 24-47 --in-memory --vdev=event_sw0 -- \
> >         -r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000  -c 64 -W 500
> 
> 
> Following max_profiles_per_port = 1 is getting overridden in [1]. I
> was suggested to take this path to avoid driver changes.
> Looks like we can not rely on common code. @Pavan Nikhilesh  Could you
> change to your old version(where every driver changes to add
> max_profiles_per_port = 1).
> I will squash it.
> 
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 60509c6efb..5ee8bd665b 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -96,6 +96,7 @@  rte_event_dev_info_get(uint8_t dev_id, struct
> rte_event_dev_info *dev_info)
>   return -EINVAL;
> 
>   memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> + dev_info->max_profiles_per_port = 1;


This should be fixed with the following patch; @Bruce Richardson, could you please verify?
https://patchwork.dpdk.org/project/dpdk/patch/20231003152535.10177-1-pbhagavatula@marvell.com/

> 
> [1]
> static void
> sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
> {
>         RTE_SET_USED(dev);
> 
>         static const struct rte_event_dev_info evdev_sw_info = {
>                         .driver_name = SW_PMD_NAME,
>                         .max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
>                         .max_event_queue_flows = SW_QID_NUM_FIDS,
>                         .max_event_queue_priority_levels = SW_Q_PRIORITY_MAX,
>                         .max_event_priority_levels = SW_IQS_MAX,
>                         .max_event_ports = SW_PORTS_MAX,
>                         .max_event_port_dequeue_depth =
> MAX_SW_CONS_Q_DEPTH,
>                         .max_event_port_enqueue_depth =
> MAX_SW_PROD_Q_DEPTH,
>                         .max_num_events = SW_INFLIGHT_EVENTS_TOTAL,
>                         .event_dev_cap = (
>                                 RTE_EVENT_DEV_CAP_QUEUE_QOS |
>                                 RTE_EVENT_DEV_CAP_BURST_MODE |
>                                 RTE_EVENT_DEV_CAP_EVENT_QOS |
>                                 RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
>                                 RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
>                                 RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>                                 RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>                                 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
>                                 RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
>         };
> 
>         *info = evdev_sw_info;
> }
> 
> 
> >
> > Regards,
> > /Bruce