eventdev: fix max link profiles info
Commit Message
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Since most of the drivers overwrite the info structure passed
from the common layer, it is not possible to set defaults in
the ``rte_event_dev_info_get`` API.
Initialize the default max_profiles_per_port in the driver layer instead.
Fixes: 162aa4e1b479 ("eventdev: introduce link profiles")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
Please squash to 162aa4e1b479
drivers/event/dlb2/dlb2.c | 1 +
drivers/event/dpaa/dpaa_eventdev.c | 1 +
drivers/event/dpaa2/dpaa2_eventdev.c | 2 +-
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/octeontx/ssovf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 1 +
drivers/event/skeleton/skeleton_eventdev.c | 1 +
drivers/event/sw/sw_evdev.c | 1 +
lib/eventdev/rte_eventdev.c | 1 -
9 files changed, 8 insertions(+), 3 deletions(-)
--
2.25.1
Comments
On Tue, Oct 03, 2023 at 08:55:35PM +0530, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Since most of the drivers overwrite the info structure passed
> from the common layer it is not possible to set defaults in
> ``rte_event_dev_info_get`` API.
> Initialize default max_profiles_per_port in the driver layer.
>
> Fixes: 162aa4e1b479 ("eventdev: introduce link profiles")
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> Please squash to 162aa4e1b479
>
Just wondering, is another valid approach to check the return value from
the driver callback and set max_profiles to 1 if it's set to zero by the
driver? That would save modifying all drivers and probably still fix any
issues. [I'm assuming that max_profiles == 0 is invalid, and that every
device by default should report "1" as supported]
/Bruce
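For context, the alternative Bruce describes can be sketched as below. This is a minimal mock, not DPDK code: `struct mock_event_dev_info` stands in for the much larger `struct rte_event_dev_info`, and `legacy_driver_infos_get` is a hypothetical PMD callback that overwrites the whole struct, which is the situation the patch describes.

```c
#include <stdint.h>
#include <string.h>

/* Mock stand-in for struct rte_event_dev_info; the real struct in
 * rte_eventdev.h carries many more capability fields. */
struct mock_event_dev_info {
	uint8_t max_profiles_per_port;
};

/* Hypothetical PMD callback that clobbers the struct and never touches
 * the new field, losing any default set by the common layer. */
static void
legacy_driver_infos_get(struct mock_event_dev_info *info)
{
	memset(info, 0, sizeof(*info));
}

/* The suggested common-layer fixup: after the callback returns, treat a
 * zero max_profiles_per_port as "one profile supported". */
static void
info_get_with_fixup(struct mock_event_dev_info *info)
{
	legacy_driver_infos_get(info);
	if (info->max_profiles_per_port == 0)
		info->max_profiles_per_port = 1;
}
```

This would keep the invariant in one place and avoid touching every driver, at the cost of silently papering over drivers that never report the field.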
> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Tuesday, October 3, 2023 10:19 PM
> To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Abdullah Sevincer
> <abdullah.sevincer@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
> Mattias Rönnblom <mattias.ronnblom@ericsson.com>; Liang Ma
> <liangma@liangbit.com>; Peter Mccarthy <peter.mccarthy@intel.com>; Harry
> van Haaren <harry.van.haaren@intel.com>; dev@dpdk.org
> Subject: [EXT] Re: [PATCH] eventdev: fix max link profiles info
>
> On Tue, Oct 03, 2023 at 08:55:35PM +0530, pbhagavatula@marvell.com
> wrote:
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Since most of the drivers overwrite the info structure passed from the
> > common layer it is not possible to set defaults in
> > ``rte_event_dev_info_get`` API.
> > Initialize default max_profiles_per_port in the driver layer.
> >
> > Fixes: 162aa4e1b479 ("eventdev: introduce link profiles")
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > ---
> > Please squash to 162aa4e1b479
> >
> Just wondering, is another valid approach to check the return value from the
> driver callback and set max_profiles to 1 if it's set to zero by the driver? That
> would save modifying all drivers and probably still fix any issues. [I'm assuming
> that max_profiles == 0 is invalid, and that every device by default should report
> "1" as supported]
I can think of three options:
1) Rename max_profile to max_profiles_minus_one.
2) In the generic info_get, fix up max_profiles as one if it is zero after the PMD callback, plus https://patches.dpdk.org/project/dpdk/patch/20231003150829.8257-1-pbhagavatula@marvell.com/
3) Keep this patch as-is.
Options (1) and (2) do not look very clean. I think we can keep (3) if you don't have a strong opinion.
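For context, option (1) can be sketched as follows. This is purely illustrative; the struct and helper names are hypothetical, not DPDK API. The idea is that if the field encoded "count minus one", a zero-filled info struct from an unmodified driver would already decode as one supported profile.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: with a "minus one" encoding, a driver that
 * memset()s the info struct and ignores the field still reports one
 * profile, the minimum every device supports. */
struct mock_dev_info {
	uint8_t max_profiles_minus_one;
};

static uint8_t
mock_max_profiles(const struct mock_dev_info *info)
{
	return (uint8_t)(info->max_profiles_minus_one + 1);
}
```

The cost is an off-by-one-prone field name in a public struct, which is one reason this option was judged not very clean.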
>
> /Bruce
On Tue, Oct 03, 2023 at 05:03:28PM +0000, Jerin Jacob Kollanukkaran wrote:
>
>
> > -----Original Message-----
> > From: Bruce Richardson <bruce.richardson@intel.com>
> > Sent: Tuesday, October 3, 2023 10:19 PM
> > To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Abdullah Sevincer
> > <abdullah.sevincer@intel.com>; Hemant Agrawal
> > <hemant.agrawal@nxp.com>; Sachin Saxena <sachin.saxena@nxp.com>;
> > Mattias Rönnblom <mattias.ronnblom@ericsson.com>; Liang Ma
> > <liangma@liangbit.com>; Peter Mccarthy <peter.mccarthy@intel.com>; Harry
> > van Haaren <harry.van.haaren@intel.com>; dev@dpdk.org
> > Subject: [EXT] Re: [PATCH] eventdev: fix max link profiles info
> >
> > On Tue, Oct 03, 2023 at 08:55:35PM +0530, pbhagavatula@marvell.com
> > wrote:
> > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > >
> > > Since most of the drivers overwrite the info structure passed from the
> > > common layer it is not possible to set defaults in
> > > ``rte_event_dev_info_get`` API.
> > > Initialize default max_profiles_per_port in the driver layer.
> > >
> > > Fixes: 162aa4e1b479 ("eventdev: introduce link profiles")
> > >
> > > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > ---
> > > Please squash to 162aa4e1b479
> > >
> > Just wondering, is another valid approach to check the return value from the
> > driver callback and set max_profiles to 1 if it's set to zero by the driver? That
> > would save modifying all drivers and probably still fix any issues. [I'm assuming
> > that max_profiles == 0 is invalid, and that every device by default should report
> > "1" as supported]
>
> I can think of three options:
>
> 1) Rename max_profile to max_profiles_minus_one.
Or call it "additional_profiles"?
> 2) In the generic info_get, fix up max_profiles as one if it is zero after the PMD callback, plus https://patches.dpdk.org/project/dpdk/patch/20231003150829.8257-1-pbhagavatula@marvell.com/
> 3) Keep this patch as-is.
>
> Options (1) and (2) do not look very clean. I think we can keep (3) if you don't have a strong opinion.
>
No, no strong opinions.
/Bruce
On Tue, Oct 03, 2023 at 08:55:35PM +0530, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Since most of the drivers overwrite the info structure passed
> from the common layer it is not possible to set defaults in
> ``rte_event_dev_info_get`` API.
> Initialize default max_profiles_per_port in the driver layer.
>
> Fixes: 162aa4e1b479 ("eventdev: introduce link profiles")
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> Please squash to 162aa4e1b479
>
Confirm that this patch allows the eventdev_pipeline sample to once again
run with the event_sw driver.
For this solution, if it's chosen over alternatives
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
On Tue, Oct 3, 2023 at 10:47 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Tue, Oct 03, 2023 at 08:55:35PM +0530, pbhagavatula@marvell.com wrote:
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Since most of the drivers overwrite the info structure passed
> > from the common layer it is not possible to set defaults in
> > ``rte_event_dev_info_get`` API.
> > Initialize default max_profiles_per_port in the driver layer.
> >
> > Fixes: 162aa4e1b479 ("eventdev: introduce link profiles")
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > ---
> > Please squash to 162aa4e1b479
> >
> Confirm that this patch allows the eventdev_pipeline sample to once again
> run with the event_sw driver.
>
> For this solution, if it's chosen over alternatives
>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Squashed/Rebased this patch and pushed to next-event tree.
@@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = {
RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
struct process_local_port_data
@@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
@@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
@@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
.max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH,
.max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH,
.max_num_events = DSW_MAX_EVENTS,
+ .max_profiles_per_port = 1,
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
RTE_EVENT_DEV_CAP_NONSEQ_MODE|
@@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-
+ dev_info->max_profiles_per_port = 1;
}
static int
@@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE,
+ .max_profiles_per_port = 1,
};
*info = evdev_opdl_info;
@@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
RTE_EVENT_DEV_CAP_EVENT_QOS |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+ dev_info->max_profiles_per_port = 1;
}
static int
@@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
RTE_EVENT_DEV_CAP_NONSEQ_MODE |
RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
RTE_EVENT_DEV_CAP_MAINTENANCE_FREE),
+ .max_profiles_per_port = 1,
};
*info = evdev_sw_info;
@@ -96,7 +96,6 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
return -EINVAL;
memset(dev_info, 0, sizeof(struct rte_event_dev_info));
- dev_info->max_profiles_per_port = 1;
if (*dev->dev_ops->dev_infos_get == NULL)
return -ENOTSUP;
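With the per-driver defaults above in place, an application can gate its use of link profiles on the reported value. A minimal sketch with a mock struct; in a real application the value would come from rte_event_dev_info_get(), and `profile_id_valid` is a hypothetical helper, not DPDK API.

```c
#include <stdint.h>

/* Mock of the one field this check needs from struct rte_event_dev_info. */
struct mock_event_dev_info {
	uint8_t max_profiles_per_port;
};

/* Profile 0 is always usable; higher profile ids are only valid when
 * the device reports more than one profile per port. */
static int
profile_id_valid(const struct mock_event_dev_info *info, uint8_t profile_id)
{
	return profile_id < info->max_profiles_per_port;
}
```

A device that only fills in the default of 1 thus accepts profile 0 and nothing else, which is the behavior every pre-existing driver had before link profiles were introduced.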