Message ID | 20231003094721.5115-1-pbhagavatula@marvell.com (mailing list archive) |
---|---|
Headers |
Return-Path: <dev-bounces@dpdk.org> From: <pbhagavatula@marvell.com> To: <jerinj@marvell.com>, <pbhagavatula@marvell.com>, <sthotton@marvell.com>, <timothy.mcdaniel@intel.com>, <hemant.agrawal@nxp.com>, <sachin.saxena@nxp.com>, <mattias.ronnblom@ericsson.com>, <liangma@liangbit.com>, <peter.mccarthy@intel.com>, <harry.van.haaren@intel.com>, <erik.g.carrillo@intel.com>, <abhinandan.gujjar@intel.com>, <s.v.naga.harish.k@intel.com>, <anatoly.burakov@intel.com> CC: <dev@dpdk.org> Subject: [PATCH v6 0/3] Introduce event link profiles Date: Tue, 3 Oct 2023 15:17:18 +0530 Message-ID: <20231003094721.5115-1-pbhagavatula@marvell.com> In-Reply-To: <20231003075109.4309-1-pbhagavatula@marvell.com> References: <20231003075109.4309-1-pbhagavatula@marvell.com> List-Id: DPDK patches and discussions <dev.dpdk.org> Errors-To: dev-bounces@dpdk.org |
Series |
Introduce event link profiles
|
|
Message
Pavan Nikhilesh Bhagavatula
Oct. 3, 2023, 9:47 a.m. UTC
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
A collection of event queues linked to an event port can be associated
with a unique identifier called a link profile. Multiple such profiles
can be configured, subject to the event device's capabilities, using the
function `rte_event_port_profile_links_set`, which takes arguments similar
to `rte_event_port_link` with the addition of a profile identifier.
The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`.
By default, event ports are configured to use link profile 0 on
initialization.
Once multiple link profiles are set up and the event device is started, the
application can use the function `rte_event_port_profile_switch` to change
the currently active profile on an event port. This takes effect from the
next `rte_event_dequeue_burst` call, where the event queues associated with
the newly active link profile will participate in scheduling.
A rudimentary workflow would look something like this:
Config path:
	uint8_t lq[4] = {4, 5, 6, 7};
	uint8_t hq[4] = {0, 1, 2, 3};

	if (rte_event_dev_info.max_profiles_per_port < 2)
		return -ENOTSUP;

	rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
	rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
Worker path:
	struct rte_event ev;
	uint16_t deq;
	uint8_t empty_high_deq = 0;
	uint8_t empty_low_deq = 0;
	uint8_t is_low_deq = 0;

	while (1) {
		deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
		if (deq == 0) {
			/* Change the link profile based on work activity
			 * on the currently active profile. */
			if (is_low_deq) {
				empty_low_deq++;
				if (empty_low_deq == MAX_LOW_RETRY) {
					rte_event_port_profile_switch(0, 0, 0);
					is_low_deq = 0;
					empty_low_deq = 0;
				}
				continue;
			}
			empty_high_deq++;
			if (empty_high_deq == MAX_HIGH_RETRY) {
				rte_event_port_profile_switch(0, 0, 1);
				is_low_deq = 1;
				empty_high_deq = 0;
			}
			continue;
		}

		/* Process the event received. */
		if (is_low_deq++ == MAX_LOW_EVENTS) {
			rte_event_port_profile_switch(0, 0, 0);
			is_low_deq = 0;
		}
	}
An application could use heuristic data on the load/activity of a given
event port and change its active profile to adapt to the traffic pattern.
An unlink function, `rte_event_port_profile_unlink`, is provided to
modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve the links
associated with a profile.
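As a sketch only, editing and then inspecting a profile's links might look
as follows (the argument order shown here is inferred from the
`rte_event_port_profile_links_set` call above and should be checked against
the final rte_eventdev.h):

```c
	/* Sketch: drop queue 7 from link profile 1 on dev 0 / port 0,
	 * then read back the remaining links of that profile.
	 * Signatures are assumed by analogy with the non-profile APIs. */
	uint8_t q = 7;
	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
	uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
	int nb_links;

	if (rte_event_port_profile_unlink(0, 0, &q, 1, 1) != 1)
		return -1; /* unlink failed or is still in progress */

	/* Returns the number of queues linked to profile 1. */
	nb_links = rte_event_port_profile_links_get(0, 0, queues, priorities, 1);
```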
Using link profiles can reduce the overhead of linking/unlinking and of
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
v6 Changes:
----------
- Fix compilation.
v5 Changes:
----------
- Rebase on next-event
v4 Changes:
----------
- Address review comments (Jerin).
v3 Changes:
----------
- Rebase to next-eventdev
- Rename testcase name to match API.
v2 Changes:
----------
- Fix compilation.
Pavan Nikhilesh (3):
eventdev: introduce link profiles
event/cnxk: implement event link profiles
test/event: add event link profile test
app/test/test_eventdev.c | 117 +++++++++++
config/rte_config.h | 1 +
doc/guides/eventdevs/cnxk.rst | 1 +
doc/guides/eventdevs/features/cnxk.ini | 3 +-
doc/guides/eventdevs/features/default.ini | 1 +
doc/guides/prog_guide/eventdev.rst | 40 ++++
doc/guides/rel_notes/release_23_11.rst | 13 ++
drivers/common/cnxk/roc_nix_inl_dev.c | 4 +-
drivers/common/cnxk/roc_sso.c | 18 +-
drivers/common/cnxk/roc_sso.h | 8 +-
drivers/common/cnxk/roc_sso_priv.h | 4 +-
drivers/event/cnxk/cn10k_eventdev.c | 45 +++--
drivers/event/cnxk/cn10k_worker.c | 11 ++
drivers/event/cnxk/cn10k_worker.h | 1 +
drivers/event/cnxk/cn9k_eventdev.c | 74 ++++---
drivers/event/cnxk/cn9k_worker.c | 22 +++
drivers/event/cnxk/cn9k_worker.h | 2 +
drivers/event/cnxk/cnxk_eventdev.c | 37 ++--
drivers/event/cnxk/cnxk_eventdev.h | 10 +-
lib/eventdev/eventdev_pmd.h | 59 +++++-
lib/eventdev/eventdev_private.c | 9 +
lib/eventdev/eventdev_trace.h | 32 +++
lib/eventdev/eventdev_trace_points.c | 12 ++
lib/eventdev/rte_eventdev.c | 150 +++++++++++---
lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
lib/eventdev/rte_eventdev_core.h | 5 +
lib/eventdev/rte_eventdev_trace_fp.h | 8 +
lib/eventdev/version.map | 4 +
28 files changed, 813 insertions(+), 109 deletions(-)
--
2.25.1
Comments
On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> A collection of event queues linked to an event port can be associated
> with unique identifier called as a link profile, multiple such profiles
> can be configured based on the event device capability using the function
> `rte_event_port_profile_links_set` which takes arguments similar to
> `rte_event_port_link` in addition to the profile identifier.
>
> The maximum link profiles that are supported by an event device is
> advertised through the structure member
...
>
> v6 Changes:

Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks

[for-main]dell[dpdk-next-eventdev] $ git diff
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 8c15c678bf..e177ca6bdb 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -325,7 +325,7 @@ multiple link profile per port and change them run time depending up on heuristi
 Using Link profiles can reduce the overhead of linking/unlinking and wait for unlinks in progress
 in fast-path and gives applications the ability to switch between preset profiles on the fly.

-An Example use case could be as follows.
+An example use case could be as follows.

 Config path:

diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 66c4ddf37c..261594aacc 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -105,8 +105,7 @@ New Features
   * Added support for ``remaining_ticks_get`` timer adapter PMD callback
     to get the remaining ticks to expire for a given event timer.

-  * Added link profiles support for Marvell CNXK event device driver,
-    up to two link profiles are supported per event port.
+  * Added link profiles support, up to two link profiles are supported.
On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > A collection of event queues linked to an event port can be associated
> > with unique identifier called as a link profile, multiple such profiles
> > can be configured based on the event device capability using the function
> > `rte_event_port_profile_links_set` which takes arguments similar to
> > `rte_event_port_link` in addition to the profile identifier.
> >
> > The maximum link profiles that are supported by an event device is
> > advertised through the structure member
...
> >
> > v6 Changes:
>
> Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks
>

I'm doing some investigation work on the software eventdev, using
eventdev_pipeline, and following these patches the eventdev_pipeline sample
no longer is working for me. Error message is as shown below:

  Config:
	ports: 2
	workers: 22
	packets: 33554432
	Queue-prio: 0
	qid0 type: ordered
	Cores available: 48
	Cores used: 24
	Eventdev 0: event_sw
  Stages:
	Stage 0, Type Ordered	Priority = 128

  EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
  Error setting up port 0

Parameters used when running the app:

  -l 24-47 --in-memory --vdev=event_sw0 -- \
	-r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000 -c 64 -W 500

Regards,
/Bruce
On Tue, Oct 3, 2023 at 7:43 PM Bruce Richardson <bruce.richardson@intel.com> wrote:
>
> On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> > On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
> > >
> > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
...
> > Series applied to dpdk-next-net-eventdev/for-main with following changes. Thanks
> >
>
> I'm doing some investigation work on the software eventdev, using
> eventdev_pipeline, and following these patches the eventdev_pipeline sample
> no longer is working for me. Error message is as shown below:
>
>   EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
>   Error setting up port 0
>
> Parameters used when running the app:
>   -l 24-47 --in-memory --vdev=event_sw0 -- \
>	-r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000 -c 64 -W 500

Following max_profiles_per_port = 1 is getting overridden in [1]. I
was suggested to take this path to avoid driver changes.
Looks like we can not rely on common code. @Pavan Nikhilesh Could you
change to your old version (where every driver changes to add
max_profiles_per_port = 1). I will squash it.

diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 60509c6efb..5ee8bd665b 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 		return -EINVAL;

 	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+	dev_info->max_profiles_per_port = 1;

[1]
static void
sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
{
	RTE_SET_USED(dev);

	static const struct rte_event_dev_info evdev_sw_info = {
		.driver_name = SW_PMD_NAME,
		.max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
		.max_event_queue_flows = SW_QID_NUM_FIDS,
		.max_event_queue_priority_levels = SW_Q_PRIORITY_MAX,
		.max_event_priority_levels = SW_IQS_MAX,
		.max_event_ports = SW_PORTS_MAX,
		.max_event_port_dequeue_depth = MAX_SW_CONS_Q_DEPTH,
		.max_event_port_enqueue_depth = MAX_SW_PROD_Q_DEPTH,
		.max_num_events = SW_INFLIGHT_EVENTS_TOTAL,
		.event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
				  RTE_EVENT_DEV_CAP_BURST_MODE |
				  RTE_EVENT_DEV_CAP_EVENT_QOS |
				  RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE |
				  RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
				  RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
				  RTE_EVENT_DEV_CAP_NONSEQ_MODE |
				  RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
				  RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
	};

	*info = evdev_sw_info;
}

> Regards,
> /Bruce
> On Tue, Oct 3, 2023 at 7:43 PM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Tue, Oct 03, 2023 at 04:06:10PM +0530, Jerin Jacob wrote:
> > > On Tue, Oct 3, 2023 at 3:17 PM <pbhagavatula@marvell.com> wrote:
...
> > EVENTDEV: rte_event_port_profile_unlink() line 1092: Invalid profile_id=0
> > Error setting up port 0
> >
> > Parameters used when running the app:
> >   -l 24-47 --in-memory --vdev=event_sw0 -- \
> >	-r 1000000 -t 1000000 -e 2000000 -w FFFFFC000000 -c 64 -W 500
>
> Following max_profiles_per_port = 1 is getting overridden in [1]. I
> was suggested to take this path to avoid driver changes.
> Looks like we can not rely on common code. @Pavan Nikhilesh Could you
> change to your old version (where every driver changes to add
> max_profiles_per_port = 1).
> I will squash it.
>
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 60509c6efb..5ee8bd665b 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
>  		return -EINVAL;
>
>  	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> +	dev_info->max_profiles_per_port = 1;

Should be fixed with the following patch, @Bruce Richardson could you
please verify
https://patchwork.dpdk.org/project/dpdk/patch/20231003152535.10177-1-pbhagavatula@marvell.com/

> [1]
> static void
> sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
...
>
> > Regards,
> > /Bruce