[v2] ethdev: fast path async flow API

Message ID 20240131093523.1553028-1-dsosnowski@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: [v2] ethdev: fast path async flow API

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/github-robot: build success github build: passed
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/Intel-compilation success Compilation OK
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-abi-testing success Testing PASS
ci/intel-Testing success Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/intel-Functional success Functional PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS

Commit Message

Dariusz Sosnowski Jan. 31, 2024, 9:35 a.m. UTC
  This patch reworks the async flow API functions called in the data path
to reduce the overhead of flow operations at the library level.
The main source of this overhead was the indirection and checks done
while the ethdev library fetched rte_flow_ops from a given driver.

This patch introduces the rte_flow_fp_ops struct, which holds callbacks
to the driver's implementation of the fast path async flow API functions.
Each driver implementing these functions must populate the flow_fp_ops
field inside the rte_eth_dev structure with a reference to
its own implementation.
By default, the ethdev library provides dummy callbacks whose
implementations return ENOSYS.
This design guarantees the following:

- The rte_flow_fp_ops struct for a given port is always available.
- Each callback is either:
    - the default provided by the library, or
    - set up by the driver.

As a result, no checks for the availability of the implementation
are needed at the library level in the data path.
Any library-level validation checks in the async flow API are compiled
if and only if the RTE_FLOW_DEBUG macro is defined.

These changes apply only to the following API functions:

- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_actions_update()
- rte_flow_async_destroy()
- rte_flow_push()
- rte_flow_pull()
- rte_flow_async_action_handle_create()
- rte_flow_async_action_handle_destroy()
- rte_flow_async_action_handle_update()
- rte_flow_async_action_handle_query()
- rte_flow_async_action_handle_query_update()
- rte_flow_async_action_list_handle_create()
- rte_flow_async_action_list_handle_destroy()
- rte_flow_async_action_list_handle_query_update()

This patch also adapts the mlx5 PMD to the introduced flow API changes.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
v2:
- Fixed mlx5 PMD build issue with older versions of rdma-core.
---
 doc/guides/rel_notes/release_24_03.rst |  37 ++
 drivers/net/mlx5/mlx5_flow.c           | 608 +------------------------
 drivers/net/mlx5/mlx5_flow_hw.c        |  25 +
 lib/ethdev/ethdev_driver.c             |   4 +
 lib/ethdev/ethdev_driver.h             |   4 +
 lib/ethdev/rte_flow.c                  | 518 ++++++++++++++++-----
 lib/ethdev/rte_flow_driver.h           | 277 ++++++-----
 lib/ethdev/version.map                 |   2 +
 8 files changed, 635 insertions(+), 840 deletions(-)
  

Comments

Ori Kam Jan. 31, 2024, 1:20 p.m. UTC | #1
Hi Dariusz,


Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori
  
Thomas Monjalon Feb. 5, 2024, 11:07 a.m. UTC | #2
31/01/2024 10:35, Dariusz Sosnowski:
> This patch reworks the async flow API functions called in data path,
> to reduce the overhead during flow operations at the library level.
> Main source of the overhead was indirection and checks done while
> ethdev library was fetching rte_flow_ops from a given driver.
> 
> This patch introduces rte_flow_fp_ops struct which holds callbacks
> to driver's implementation of fast path async flow API functions.
> Each driver implementing these functions must populate flow_fp_ops
> field inside rte_eth_dev structure with a reference to
> its own implementation.
> By default, ethdev library provides dummy callbacks with
> implementations returning ENOSYS.
> Such design provides a few assumptions:
> 
> - rte_flow_fp_ops struct for given port is always available.
> - Each callback is either:
>     - Default provided by library.
>     - Set up by driver.

It looks similar to what was done in the commit
c87d435a4d79 ("ethdev: copy fast-path API into separate structure"),
right?
Maybe worth mentioning in the commit log.

> As a result, no checks for availability of the implementation
> are needed at library level in data path.
> Any library-level validation checks in async flow API are compiled
> if and only if RTE_FLOW_DEBUG macro is defined.

How are we supposed to enable RTE_FLOW_DEBUG?
Could it be enabled automatically if another debug option is globally enabled?

One comment on the code style: please compare pointers explicitly with NULL
instead of treating them as booleans.
  
Dariusz Sosnowski Feb. 5, 2024, 1:14 p.m. UTC | #3
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, February 5, 2024 12:08
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Ferruh Yigit <ferruh.yigit@amd.com>; Andrew
> Rybchenko <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org
> Subject: Re: [PATCH v2] ethdev: fast path async flow API
> 
> 31/01/2024 10:35, Dariusz Sosnowski:
> > This patch reworks the async flow API functions called in data path,
> > to reduce the overhead during flow operations at the library level.
> > Main source of the overhead was indirection and checks done while
> > ethdev library was fetching rte_flow_ops from a given driver.
> >
> > This patch introduces rte_flow_fp_ops struct which holds callbacks to
> > driver's implementation of fast path async flow API functions.
> > Each driver implementing these functions must populate flow_fp_ops
> > field inside rte_eth_dev structure with a reference to its own
> > implementation.
> > By default, ethdev library provides dummy callbacks with
> > implementations returning ENOSYS.
> > Such design provides a few assumptions:
> >
> > - rte_flow_fp_ops struct for given port is always available.
> > - Each callback is either:
> >     - Default provided by library.
> >     - Set up by driver.
> 
> It looks similar to what was done in the commit
> c87d435a4d79 ("ethdev: copy fast-path API into separate structure") right?
> Maybe worth to mention in the commit log.

Right, the proposed design is based on what happens in the Rx/Tx data path.
I think it's worth mentioning the commit. I'll add a reference in v3.

> > As a result, no checks for availability of the implementation are
> > needed at library level in data path.
> > Any library-level validation checks in async flow API are compiled if
> > and only if RTE_FLOW_DEBUG macro is defined.
> 
> How are we supposed to enable RTE_FLOW_DEBUG?

I should document it, but the idea was that it must be explicitly enabled during build,
by adding -Dc_args=-DRTE_FLOW_DEBUG to the meson options.

Do you think doc/guides/nics/build_and_test.rst is a good place to document this option?
It would be documented alongside RTE_ETHDEV_DEBUG_RX and RTE_ETHDEV_DEBUG_TX.

> May it be enabled automatically if other debug option is globally enabled?

Do you mean that if buildtype is defined as debug, then RTE_FLOW_DEBUG is defined automatically?
I think that's a good idea.

> One comment on the code style: please compare pointers explicitly with NULL
> instead of considering them as boolean.

Ok. I'll fix it and I'll send v3.

Best regards,
Dariusz Sosnowski
  
Thomas Monjalon Feb. 5, 2024, 2:03 p.m. UTC | #4
05/02/2024 14:14, Dariusz Sosnowski:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 31/01/2024 10:35, Dariusz Sosnowski:
> > > As a result, no checks for availability of the implementation are
> > > needed at library level in data path.
> > > Any library-level validation checks in async flow API are compiled if
> > > and only if RTE_FLOW_DEBUG macro is defined.
> > 
> > How are we supposed to enable RTE_FLOW_DEBUG?
> 
> I should document it, but the idea was that it must be explicitly enabled during build,
> by adding -c_args=-DRTE_FLOW_DEBUG to meson options.
> 
> Do you think doc/guides/nics/build_and_test.rst is a good place to document this option?

Yes

> It would be documented alongside RTE_ETHDEV_DEBUG_RX and RTE_ETHDEV_DEBUG_TX.
> 
> > May it be enabled automatically if other debug option is globally enabled?
> 
> Do you mean that if buildtype is defined as debug, then RTE_FLOW_DEBUG is defined automatically?

Yes

> I think that's a good idea.

Another way of enabling it is to check
#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
  
Dariusz Sosnowski Feb. 6, 2024, 5:50 p.m. UTC | #5
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, February 5, 2024 15:03
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Cc: Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Ferruh Yigit <ferruh.yigit@amd.com>; Andrew
> Rybchenko <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org
> Subject: Re: [PATCH v2] ethdev: fast path async flow API
> 
> 05/02/2024 14:14, Dariusz Sosnowski:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 31/01/2024 10:35, Dariusz Sosnowski:
> > > > As a result, no checks for availability of the implementation are
> > > > needed at library level in data path.
> > > > Any library-level validation checks in async flow API are compiled
> > > > if and only if RTE_FLOW_DEBUG macro is defined.
> > >
> > > How are we supposed to enable RTE_FLOW_DEBUG?
> >
> > I should document it, but the idea was that it must be explicitly
> > enabled during build, by adding -Dc_args=-DRTE_FLOW_DEBUG to the meson
> > options.
> >
> > Do you think doc/guides/nics/build_and_test.rst is a good place to
> > document this option?
> 
> Yes
> 
> > It would be documented alongside RTE_ETHDEV_DEBUG_RX and
> > RTE_ETHDEV_DEBUG_TX.
> >
> > > May it be enabled automatically if other debug option is globally enabled?
> >
> > Do you mean that if buildtype is defined as debug, then RTE_FLOW_DEBUG
> > is defined automatically?
> 
> Yes
> 
> > I think that's a good idea.
> 
> Another way of enabling it is to check
> #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG

I think that defining RTE_FLOW_DEBUG based on buildtype would be more appropriate,
since the code under RTE_FLOW_DEBUG is not responsible for any additional logging or tracing;
it rather provides basic sanity checks for all async flow API functions.

Best regards,
Dariusz Sosnowski
  

Patch

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 84d3144215..55e7d57096 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -106,6 +106,43 @@  API Changes
 
 * gso: ``rte_gso_segment`` now returns -ENOTSUP for unknown protocols.
 
+* ethdev: PMDs implementing asynchronous flow operations are required to provide relevant functions
+  implementation through ``rte_flow_fp_ops`` struct, instead of ``rte_flow_ops`` struct.
+  Pointer to device-dependent ``rte_flow_fp_ops`` should be provided to ``rte_eth_dev.flow_fp_ops``.
+  This change applies to the following API functions:
+
+   * ``rte_flow_async_create``
+   * ``rte_flow_async_create_by_index``
+   * ``rte_flow_async_actions_update``
+   * ``rte_flow_async_destroy``
+   * ``rte_flow_push``
+   * ``rte_flow_pull``
+   * ``rte_flow_async_action_handle_create``
+   * ``rte_flow_async_action_handle_destroy``
+   * ``rte_flow_async_action_handle_update``
+   * ``rte_flow_async_action_handle_query``
+   * ``rte_flow_async_action_handle_query_update``
+   * ``rte_flow_async_action_list_handle_create``
+   * ``rte_flow_async_action_list_handle_destroy``
+   * ``rte_flow_async_action_list_handle_query_update``
+
+* ethdev: Removed the following fields from ``rte_flow_ops`` struct:
+
+   * ``async_create``
+   * ``async_create_by_index``
+   * ``async_actions_update``
+   * ``async_destroy``
+   * ``push``
+   * ``pull``
+   * ``async_action_handle_create``
+   * ``async_action_handle_destroy``
+   * ``async_action_handle_update``
+   * ``async_action_handle_query``
+   * ``async_action_handle_query_update``
+   * ``async_action_list_handle_create``
+   * ``async_action_list_handle_destroy``
+   * ``async_action_list_handle_query_update``
+
 
 ABI Changes
 -----------
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 85e8c77c81..0ff3b91596 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1055,98 +1055,13 @@  mlx5_flow_group_set_miss_actions(struct rte_eth_dev *dev,
 				 const struct rte_flow_group_attr *attr,
 				 const struct rte_flow_action actions[],
 				 struct rte_flow_error *error);
-static struct rte_flow *
-mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
-			    uint32_t queue,
-			    const struct rte_flow_op_attr *attr,
-			    struct rte_flow_template_table *table,
-			    const struct rte_flow_item items[],
-			    uint8_t pattern_template_index,
-			    const struct rte_flow_action actions[],
-			    uint8_t action_template_index,
-			    void *user_data,
-			    struct rte_flow_error *error);
-static struct rte_flow *
-mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev,
-			    uint32_t queue,
-			    const struct rte_flow_op_attr *attr,
-			    struct rte_flow_template_table *table,
-			    uint32_t rule_index,
-			    const struct rte_flow_action actions[],
-			    uint8_t action_template_index,
-			    void *user_data,
-			    struct rte_flow_error *error);
-static int
-mlx5_flow_async_flow_update(struct rte_eth_dev *dev,
-			     uint32_t queue,
-			     const struct rte_flow_op_attr *attr,
-			     struct rte_flow *flow,
-			     const struct rte_flow_action actions[],
-			     uint8_t action_template_index,
-			     void *user_data,
-			     struct rte_flow_error *error);
-static int
-mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev,
-			     uint32_t queue,
-			     const struct rte_flow_op_attr *attr,
-			     struct rte_flow *flow,
-			     void *user_data,
-			     struct rte_flow_error *error);
-static int
-mlx5_flow_pull(struct rte_eth_dev *dev,
-	       uint32_t queue,
-	       struct rte_flow_op_result res[],
-	       uint16_t n_res,
-	       struct rte_flow_error *error);
-static int
-mlx5_flow_push(struct rte_eth_dev *dev,
-	       uint32_t queue,
-	       struct rte_flow_error *error);
-
-static struct rte_flow_action_handle *
-mlx5_flow_async_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
-				 const struct rte_flow_op_attr *attr,
-				 const struct rte_flow_indir_action_conf *conf,
-				 const struct rte_flow_action *action,
-				 void *user_data,
-				 struct rte_flow_error *error);
-
-static int
-mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
-				 const struct rte_flow_op_attr *attr,
-				 struct rte_flow_action_handle *handle,
-				 const void *update,
-				 void *user_data,
-				 struct rte_flow_error *error);
 
 static int
-mlx5_flow_async_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
-				  const struct rte_flow_op_attr *attr,
-				  struct rte_flow_action_handle *handle,
-				  void *user_data,
-				  struct rte_flow_error *error);
-
-static int
-mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
-				 const struct rte_flow_op_attr *attr,
-				 const struct rte_flow_action_handle *handle,
-				 void *data,
-				 void *user_data,
-				 struct rte_flow_error *error);
-static int
 mlx5_action_handle_query_update(struct rte_eth_dev *dev,
 				struct rte_flow_action_handle *handle,
 				const void *update, void *query,
 				enum rte_flow_query_update_mode qu_mode,
 				struct rte_flow_error *error);
-static int
-mlx5_flow_async_action_handle_query_update
-	(struct rte_eth_dev *dev, uint32_t queue_id,
-	 const struct rte_flow_op_attr *op_attr,
-	 struct rte_flow_action_handle *action_handle,
-	 const void *update, void *query,
-	 enum rte_flow_query_update_mode qu_mode,
-	 void *user_data, struct rte_flow_error *error);
 
 static struct rte_flow_action_list_handle *
 mlx5_action_list_handle_create(struct rte_eth_dev *dev,
@@ -1159,20 +1074,6 @@  mlx5_action_list_handle_destroy(struct rte_eth_dev *dev,
 				struct rte_flow_action_list_handle *handle,
 				struct rte_flow_error *error);
 
-static struct rte_flow_action_list_handle *
-mlx5_flow_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue_id,
-					  const struct rte_flow_op_attr *attr,
-					  const struct
-					  rte_flow_indir_action_conf *conf,
-					  const struct rte_flow_action *actions,
-					  void *user_data,
-					  struct rte_flow_error *error);
-static int
-mlx5_flow_async_action_list_handle_destroy
-			(struct rte_eth_dev *dev, uint32_t queue_id,
-			 const struct rte_flow_op_attr *op_attr,
-			 struct rte_flow_action_list_handle *action_handle,
-			 void *user_data, struct rte_flow_error *error);
 static int
 mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
 					  const
@@ -1180,17 +1081,7 @@  mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
 					  const void **update, void **query,
 					  enum rte_flow_query_update_mode mode,
 					  struct rte_flow_error *error);
-static int
-mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
-						uint32_t queue_id,
-						const struct rte_flow_op_attr *attr,
-						const struct
-						rte_flow_action_list_handle *handle,
-						const void **update,
-						void **query,
-						enum rte_flow_query_update_mode mode,
-						void *user_data,
-						struct rte_flow_error *error);
+
 static int
 mlx5_flow_calc_table_hash(struct rte_eth_dev *dev,
 			  const struct rte_flow_template_table *table,
@@ -1232,26 +1123,8 @@  static const struct rte_flow_ops mlx5_flow_ops = {
 	.template_table_create = mlx5_flow_table_create,
 	.template_table_destroy = mlx5_flow_table_destroy,
 	.group_set_miss_actions = mlx5_flow_group_set_miss_actions,
-	.async_create = mlx5_flow_async_flow_create,
-	.async_create_by_index = mlx5_flow_async_flow_create_by_index,
-	.async_destroy = mlx5_flow_async_flow_destroy,
-	.pull = mlx5_flow_pull,
-	.push = mlx5_flow_push,
-	.async_action_handle_create = mlx5_flow_async_action_handle_create,
-	.async_action_handle_update = mlx5_flow_async_action_handle_update,
-	.async_action_handle_query_update =
-		mlx5_flow_async_action_handle_query_update,
-	.async_action_handle_query = mlx5_flow_async_action_handle_query,
-	.async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
-	.async_actions_update = mlx5_flow_async_flow_update,
-	.async_action_list_handle_create =
-		mlx5_flow_async_action_list_handle_create,
-	.async_action_list_handle_destroy =
-		mlx5_flow_async_action_list_handle_destroy,
 	.action_list_handle_query_update =
 		mlx5_flow_action_list_handle_query_update,
-	.async_action_list_handle_query_update =
-		mlx5_flow_async_action_list_handle_query_update,
 	.flow_calc_table_hash = mlx5_flow_calc_table_hash,
 };
 
@@ -9427,424 +9300,6 @@  mlx5_flow_group_set_miss_actions(struct rte_eth_dev *dev,
 	return fops->group_set_miss_actions(dev, group_id, attr, actions, error);
 }
 
-/**
- * Enqueue flow creation.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue_id
- *   The queue to create the flow.
- * @param[in] attr
- *   Pointer to the flow operation attributes.
- * @param[in] items
- *   Items with flow spec value.
- * @param[in] pattern_template_index
- *   The item pattern flow follows from the table.
- * @param[in] actions
- *   Action with flow spec value.
- * @param[in] action_template_index
- *   The action pattern flow follows from the table.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *    Flow pointer on success, NULL otherwise and rte_errno is set.
- */
-static struct rte_flow *
-mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
-			    uint32_t queue_id,
-			    const struct rte_flow_op_attr *attr,
-			    struct rte_flow_template_table *table,
-			    const struct rte_flow_item items[],
-			    uint8_t pattern_template_index,
-			    const struct rte_flow_action actions[],
-			    uint8_t action_template_index,
-			    void *user_data,
-			    struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-	struct rte_flow_attr fattr = {0};
-
-	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) {
-		rte_flow_error_set(error, ENOTSUP,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				NULL,
-				"flow_q create with incorrect steering mode");
-		return NULL;
-	}
-	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-	return fops->async_flow_create(dev, queue_id, attr, table,
-				       items, pattern_template_index,
-				       actions, action_template_index,
-				       user_data, error);
-}
-
-/**
- * Enqueue flow creation by index.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue_id
- *   The queue to create the flow.
- * @param[in] attr
- *   Pointer to the flow operation attributes.
- * @param[in] rule_index
- *   The item pattern flow follows from the table.
- * @param[in] actions
- *   Action with flow spec value.
- * @param[in] action_template_index
- *   The action pattern flow follows from the table.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *    Flow pointer on success, NULL otherwise and rte_errno is set.
- */
-static struct rte_flow *
-mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev,
-			    uint32_t queue_id,
-			    const struct rte_flow_op_attr *attr,
-			    struct rte_flow_template_table *table,
-			    uint32_t rule_index,
-			    const struct rte_flow_action actions[],
-			    uint8_t action_template_index,
-			    void *user_data,
-			    struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-	struct rte_flow_attr fattr = {0};
-
-	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) {
-		rte_flow_error_set(error, ENOTSUP,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				NULL,
-				"flow_q create with incorrect steering mode");
-		return NULL;
-	}
-	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-	return fops->async_flow_create_by_index(dev, queue_id, attr, table,
-				       rule_index, actions, action_template_index,
-				       user_data, error);
-}
-
-/**
- * Enqueue flow update.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   The queue to destroy the flow.
- * @param[in] attr
- *   Pointer to the flow operation attributes.
- * @param[in] flow
- *   Pointer to the flow to be destroyed.
- * @param[in] actions
- *   Action with flow spec value.
- * @param[in] action_template_index
- *   The action pattern flow follows from the table.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *    0 on success, negative value otherwise and rte_errno is set.
- */
-static int
-mlx5_flow_async_flow_update(struct rte_eth_dev *dev,
-			     uint32_t queue,
-			     const struct rte_flow_op_attr *attr,
-			     struct rte_flow *flow,
-			     const struct rte_flow_action actions[],
-			     uint8_t action_template_index,
-			     void *user_data,
-			     struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-	struct rte_flow_attr fattr = {0};
-
-	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW)
-		return rte_flow_error_set(error, ENOTSUP,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				NULL,
-				"flow_q update with incorrect steering mode");
-	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-	return fops->async_flow_update(dev, queue, attr, flow,
-					actions, action_template_index, user_data, error);
-}
-
-/**
- * Enqueue flow destruction.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   The queue to destroy the flow.
- * @param[in] attr
- *   Pointer to the flow operation attributes.
- * @param[in] flow
- *   Pointer to the flow to be destroyed.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *    0 on success, negative value otherwise and rte_errno is set.
- */
-static int
-mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev,
-			     uint32_t queue,
-			     const struct rte_flow_op_attr *attr,
-			     struct rte_flow *flow,
-			     void *user_data,
-			     struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-	struct rte_flow_attr fattr = {0};
-
-	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW)
-		return rte_flow_error_set(error, ENOTSUP,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				NULL,
-				"flow_q destroy with incorrect steering mode");
-	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-	return fops->async_flow_destroy(dev, queue, attr, flow,
-					user_data, error);
-}
-
-/**
- * Pull the enqueued flows.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   The queue to pull the result.
- * @param[in/out] res
- *   Array to save the results.
- * @param[in] n_res
- *   Available result with the array.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *    Result number on success, negative value otherwise and rte_errno is set.
- */
-static int
-mlx5_flow_pull(struct rte_eth_dev *dev,
-	       uint32_t queue,
-	       struct rte_flow_op_result res[],
-	       uint16_t n_res,
-	       struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-	struct rte_flow_attr attr = {0};
-
-	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
-		return rte_flow_error_set(error, ENOTSUP,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				NULL,
-				"flow_q pull with incorrect steering mode");
-	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-	return fops->pull(dev, queue, res, n_res, error);
-}
-
-/**
- * Push the enqueued flows.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   The queue to push the flows.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *    0 on success, negative value otherwise and rte_errno is set.
- */
-static int
-mlx5_flow_push(struct rte_eth_dev *dev,
-	       uint32_t queue,
-	       struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-	struct rte_flow_attr attr = {0};
-
-	if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW)
-		return rte_flow_error_set(error, ENOTSUP,
-				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				NULL,
-				"flow_q push with incorrect steering mode");
-	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-	return fops->push(dev, queue, error);
-}
-
-/**
- * Create shared action.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   Which queue to be used..
- * @param[in] attr
- *   Operation attribute.
- * @param[in] conf
- *   Indirect action configuration.
- * @param[in] action
- *   rte_flow action detail.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *   Action handle on success, NULL otherwise and rte_errno is set.
- */
-static struct rte_flow_action_handle *
-mlx5_flow_async_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
-				 const struct rte_flow_op_attr *attr,
-				 const struct rte_flow_indir_action_conf *conf,
-				 const struct rte_flow_action *action,
-				 void *user_data,
-				 struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops =
-			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-
-	return fops->async_action_create(dev, queue, attr, conf, action,
-					 user_data, error);
-}
-
-/**
- * Update shared action.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   Which queue to be used..
- * @param[in] attr
- *   Operation attribute.
- * @param[in] handle
- *   Action handle to be updated.
- * @param[in] update
- *   Update value.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *   0 on success, negative value otherwise and rte_errno is set.
- */
-static int
-mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
-				     const struct rte_flow_op_attr *attr,
-				     struct rte_flow_action_handle *handle,
-				     const void *update,
-				     void *user_data,
-				     struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops =
-			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-
-	return fops->async_action_update(dev, queue, attr, handle,
-					 update, user_data, error);
-}
-
-static int
-mlx5_flow_async_action_handle_query_update
-	(struct rte_eth_dev *dev, uint32_t queue_id,
-	 const struct rte_flow_op_attr *op_attr,
-	 struct rte_flow_action_handle *action_handle,
-	 const void *update, void *query,
-	 enum rte_flow_query_update_mode qu_mode,
-	 void *user_data, struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops =
-		flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-
-	if (!fops || !fops->async_action_query_update)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
-					  "async query_update not supported");
-	return fops->async_action_query_update
-			   (dev, queue_id, op_attr, action_handle,
-			    update, query, qu_mode, user_data, error);
-}
-
-/**
- * Query shared action.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   Which queue to be used..
- * @param[in] attr
- *   Operation attribute.
- * @param[in] handle
- *   Action handle to be updated.
- * @param[in] data
- *   Pointer query result data.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *   0 on success, negative value otherwise and rte_errno is set.
- */
-static int
-mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
-				    const struct rte_flow_op_attr *attr,
-				    const struct rte_flow_action_handle *handle,
-				    void *data,
-				    void *user_data,
-				    struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops =
-			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-
-	return fops->async_action_query(dev, queue, attr, handle,
-					data, user_data, error);
-}
-
-/**
- * Destroy shared action.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   Which queue to be used..
- * @param[in] attr
- *   Operation attribute.
- * @param[in] handle
- *   Action handle to be destroyed.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *   0 on success, negative value otherwise and rte_errno is set.
- */
-static int
-mlx5_flow_async_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
-				      const struct rte_flow_op_attr *attr,
-				      struct rte_flow_action_handle *handle,
-				      void *user_data,
-				      struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops =
-			flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
-
-	return fops->async_action_destroy(dev, queue, attr, handle,
-					  user_data, error);
-}
-
 /**
  * Allocate a new memory for the counter values wrapped by all the needed
  * management.
@@ -11015,41 +10470,6 @@  mlx5_action_list_handle_destroy(struct rte_eth_dev *dev,
 	return fops->action_list_handle_destroy(dev, handle, error);
 }
 
-static struct rte_flow_action_list_handle *
-mlx5_flow_async_action_list_handle_create(struct rte_eth_dev *dev,
-					  uint32_t queue_id,
-					  const struct
-					  rte_flow_op_attr *op_attr,
-					  const struct
-					  rte_flow_indir_action_conf *conf,
-					  const struct rte_flow_action *actions,
-					  void *user_data,
-					  struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-
-	MLX5_DRV_FOPS_OR_ERR(dev, fops, async_action_list_handle_create, NULL);
-	return fops->async_action_list_handle_create(dev, queue_id, op_attr,
-						     conf, actions, user_data,
-						     error);
-}
-
-static int
-mlx5_flow_async_action_list_handle_destroy
-	(struct rte_eth_dev *dev, uint32_t queue_id,
-	 const struct rte_flow_op_attr *op_attr,
-	 struct rte_flow_action_list_handle *action_handle,
-	 void *user_data, struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-
-	MLX5_DRV_FOPS_OR_ERR(dev, fops,
-			     async_action_list_handle_destroy, ENOTSUP);
-	return fops->async_action_list_handle_destroy(dev, queue_id, op_attr,
-						      action_handle, user_data,
-						      error);
-}
-
 static int
 mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
 					  const
@@ -11065,32 +10485,6 @@  mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev,
 	return fops->action_list_handle_query_update(dev, handle, update, query,
 						     mode, error);
 }
-
-static int
-mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
-						uint32_t queue_id,
-						const
-						struct rte_flow_op_attr *op_attr,
-						const struct
-						rte_flow_action_list_handle *handle,
-						const void **update,
-						void **query,
-						enum
-						rte_flow_query_update_mode mode,
-						void *user_data,
-						struct rte_flow_error *error)
-{
-	const struct mlx5_flow_driver_ops *fops;
-
-	MLX5_DRV_FOPS_OR_ERR(dev, fops,
-			     async_action_list_handle_query_update, ENOTSUP);
-	return fops->async_action_list_handle_query_update(dev, queue_id, op_attr,
-							   handle, update,
-							   query, mode,
-							   user_data, error);
-}
-
-
 static int
 mlx5_flow_calc_table_hash(struct rte_eth_dev *dev,
 			  const struct rte_flow_template_table *table,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index da873ae2e2..c65ebfbba2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3,6 +3,7 @@ 
  */
 
 #include <rte_flow.h>
+#include <rte_flow_driver.h>
 
 #include <mlx5_malloc.h>
 
@@ -14,6 +15,9 @@ 
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 #include "mlx5_hws_cnt.h"
 
+/** Fast path async flow API functions. */
+static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops;
+
 /* The maximum actions support in the flow. */
 #define MLX5_HW_MAX_ACTS 16
 
@@ -9543,6 +9547,7 @@  flow_hw_configure(struct rte_eth_dev *dev,
 		mlx5_free(_queue_attr);
 	if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE)
 		priv->hws_strict_queue = 1;
+	dev->flow_fp_ops = &mlx5_flow_hw_fp_ops;
 	return 0;
 err:
 	if (priv->hws_ctpool) {
@@ -9617,6 +9622,7 @@  flow_hw_resource_release(struct rte_eth_dev *dev)
 
 	if (!priv->dr_ctx)
 		return;
+	dev->flow_fp_ops = &rte_flow_fp_default_ops;
 	flow_hw_rxq_flag_set(dev, false);
 	flow_hw_flush_all_ctrl_flows(dev);
 	flow_hw_cleanup_tx_repr_tagging(dev);
@@ -12992,4 +12998,23 @@  mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
 	mlx5_free(handle);
 	return 0;
 }
+
+static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = {
+	.async_create = flow_hw_async_flow_create,
+	.async_create_by_index = flow_hw_async_flow_create_by_index,
+	.async_actions_update = flow_hw_async_flow_update,
+	.async_destroy = flow_hw_async_flow_destroy,
+	.push = flow_hw_push,
+	.pull = flow_hw_pull,
+	.async_action_handle_create = flow_hw_action_handle_create,
+	.async_action_handle_destroy = flow_hw_action_handle_destroy,
+	.async_action_handle_update = flow_hw_action_handle_update,
+	.async_action_handle_query = flow_hw_action_handle_query,
+	.async_action_handle_query_update = flow_hw_async_action_handle_query_update,
+	.async_action_list_handle_create = flow_hw_async_action_list_handle_create,
+	.async_action_list_handle_destroy = flow_hw_async_action_list_handle_destroy,
+	.async_action_list_handle_query_update =
+		flow_hw_async_action_list_handle_query_update,
+};
+
 #endif
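
The hunks above show the core driver-side contract: `flow_hw_configure()` installs the PMD's callback table into `dev->flow_fp_ops`, and `flow_hw_resource_release()` restores the library-provided ENOSYS defaults. The lifecycle can be sketched in isolation with simplified stand-in types; `struct fp_ops`, `struct demo_dev`, `demo_configure` and `demo_release` are illustrative names for this sketch, not part of the DPDK API:

```c
#include <errno.h>
#include <stddef.h>

/* Simplified stand-ins for rte_flow_fp_ops and rte_eth_dev. */
struct fp_ops {
	int (*push)(void *dev, unsigned int queue_id);
};

/* Library default: always present, returns -ENOSYS. */
static int default_push(void *dev, unsigned int queue_id)
{
	(void)dev; (void)queue_id;
	return -ENOSYS;
}

static const struct fp_ops default_ops = { .push = default_push };

/* Driver implementation installed once configuration succeeds. */
static int hw_push(void *dev, unsigned int queue_id)
{
	(void)dev; (void)queue_id;
	return 0;
}

static const struct fp_ops hw_ops = { .push = hw_push };

struct demo_dev {
	const struct fp_ops *flow_fp_ops;
};

/* Mirrors flow_hw_configure(): swap in the driver's table. */
static void demo_configure(struct demo_dev *dev)
{
	dev->flow_fp_ops = &hw_ops;
}

/* Mirrors flow_hw_resource_release(): restore the defaults. */
static void demo_release(struct demo_dev *dev)
{
	dev->flow_fp_ops = &default_ops;
}
```

Because the table pointer is swapped atomically between two always-valid structs, callers never observe a NULL `flow_fp_ops` at any point in the device lifecycle.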
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index bd917a15fc..34909a3018 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -10,6 +10,7 @@ 
 
 #include "ethdev_driver.h"
 #include "ethdev_private.h"
+#include "rte_flow_driver.h"
 
 /**
  * A set of values to describe the possible states of a switch domain.
@@ -110,6 +111,7 @@  rte_eth_dev_allocate(const char *name)
 	}
 
 	eth_dev = eth_dev_get(port_id);
+	eth_dev->flow_fp_ops = &rte_flow_fp_default_ops;
 	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
 	eth_dev->data->port_id = port_id;
 	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
@@ -245,6 +247,8 @@  rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 
 	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
 
+	eth_dev->flow_fp_ops = &rte_flow_fp_default_ops;
+
 	rte_spinlock_lock(rte_mcfg_ethdev_get_lock());
 
 	eth_dev->state = RTE_ETH_DEV_UNUSED;
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index f05f68a67c..2dfa8e238b 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -73,6 +73,10 @@  struct rte_eth_dev {
 	struct rte_eth_dev_data *data;
 	void *process_private; /**< Pointer to per-process device data */
 	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	/**
+	 * Fast path flow API functions exported by PMD.
+	 */
+	const struct rte_flow_fp_ops *flow_fp_ops;
 	struct rte_device *device; /**< Backing device */
 	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 3f58d792f9..7cd04c3637 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -2014,16 +2014,26 @@  rte_flow_async_create(uint16_t port_id,
 		      struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	struct rte_flow *flow;
 
-	flow = ops->async_create(dev, queue_id,
-				 op_attr, template_table,
-				 pattern, pattern_template_index,
-				 actions, actions_template_index,
-				 user_data, error);
-	if (flow == NULL)
-		flow_err(port_id, -rte_errno, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENODEV));
+		return NULL;
+	}
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_create) {
+		rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+#endif
+
+	flow = dev->flow_fp_ops->async_create(dev, queue_id,
+					      op_attr, template_table,
+					      pattern, pattern_template_index,
+					      actions, actions_template_index,
+					      user_data, error);
 
 	rte_flow_trace_async_create(port_id, queue_id, op_attr, template_table,
 				    pattern, pattern_template_index, actions,
@@ -2044,16 +2054,24 @@  rte_flow_async_create_by_index(uint16_t port_id,
 			       struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
-	struct rte_flow *flow;
 
-	flow = ops->async_create_by_index(dev, queue_id,
-					  op_attr, template_table, rule_index,
-					  actions, actions_template_index,
-					  user_data, error);
-	if (flow == NULL)
-		flow_err(port_id, -rte_errno, error);
-	return flow;
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENODEV));
+		return NULL;
+	}
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_create_by_index) {
+		rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+#endif
+
+	return dev->flow_fp_ops->async_create_by_index(dev, queue_id,
+						       op_attr, template_table, rule_index,
+						       actions, actions_template_index,
+						       user_data, error);
 }
 
 int
@@ -2065,14 +2083,20 @@  rte_flow_async_destroy(uint16_t port_id,
 		       struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	int ret;
 
-	ret = flow_err(port_id,
-		       ops->async_destroy(dev, queue_id,
-					  op_attr, flow,
-					  user_data, error),
-		       error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_destroy)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->async_destroy(dev, queue_id,
+					      op_attr, flow,
+					      user_data, error);
 
 	rte_flow_trace_async_destroy(port_id, queue_id, op_attr, flow,
 				     user_data, ret);
@@ -2091,15 +2115,21 @@  rte_flow_async_actions_update(uint16_t port_id,
 			      struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	int ret;
 
-	ret = flow_err(port_id,
-		       ops->async_actions_update(dev, queue_id, op_attr,
-						 flow, actions,
-						 actions_template_index,
-						 user_data, error),
-		       error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_actions_update)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->async_actions_update(dev, queue_id, op_attr,
+						     flow, actions,
+						     actions_template_index,
+						     user_data, error);
 
 	rte_flow_trace_async_actions_update(port_id, queue_id, op_attr, flow,
 					    actions, actions_template_index,
@@ -2114,12 +2144,18 @@  rte_flow_push(uint16_t port_id,
 	      struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	int ret;
 
-	ret = flow_err(port_id,
-		       ops->push(dev, queue_id, error),
-		       error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->push)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->push(dev, queue_id, error);
 
 	rte_flow_trace_push(port_id, queue_id, ret);
 
@@ -2134,16 +2170,22 @@  rte_flow_pull(uint16_t port_id,
 	      struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	int ret;
-	int rc;
 
-	ret = ops->pull(dev, queue_id, res, n_res, error);
-	rc = ret ? ret : flow_err(port_id, ret, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->pull)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->pull(dev, queue_id, res, n_res, error);
 
-	rte_flow_trace_pull(port_id, queue_id, res, n_res, rc);
+	rte_flow_trace_pull(port_id, queue_id, res, n_res, ret);
 
-	return rc;
+	return ret;
 }
 
 struct rte_flow_action_handle *
@@ -2156,13 +2198,24 @@  rte_flow_async_action_handle_create(uint16_t port_id,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	struct rte_flow_action_handle *handle;
 
-	handle = ops->async_action_handle_create(dev, queue_id, op_attr,
-					     indir_action_conf, action, user_data, error);
-	if (handle == NULL)
-		flow_err(port_id, -rte_errno, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENODEV));
+		return NULL;
+	}
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_handle_create) {
+		rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+#endif
+
+	handle = dev->flow_fp_ops->async_action_handle_create(dev, queue_id, op_attr,
+							      indir_action_conf, action,
+							      user_data, error);
 
 	rte_flow_trace_async_action_handle_create(port_id, queue_id, op_attr,
 						  indir_action_conf, action,
@@ -2180,12 +2233,19 @@  rte_flow_async_action_handle_destroy(uint16_t port_id,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	int ret;
 
-	ret = ops->async_action_handle_destroy(dev, queue_id, op_attr,
-					   action_handle, user_data, error);
-	ret = flow_err(port_id, ret, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_handle_destroy)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->async_action_handle_destroy(dev, queue_id, op_attr,
+							    action_handle, user_data, error);
 
 	rte_flow_trace_async_action_handle_destroy(port_id, queue_id, op_attr,
 						   action_handle, user_data, ret);
@@ -2203,12 +2263,19 @@  rte_flow_async_action_handle_update(uint16_t port_id,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	int ret;
 
-	ret = ops->async_action_handle_update(dev, queue_id, op_attr,
-					  action_handle, update, user_data, error);
-	ret = flow_err(port_id, ret, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_handle_update)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->async_action_handle_update(dev, queue_id, op_attr,
+							   action_handle, update, user_data, error);
 
 	rte_flow_trace_async_action_handle_update(port_id, queue_id, op_attr,
 						  action_handle, update,
@@ -2227,14 +2294,19 @@  rte_flow_async_action_handle_query(uint16_t port_id,
 		struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	int ret;
 
-	if (unlikely(!ops))
-		return -rte_errno;
-	ret = ops->async_action_handle_query(dev, queue_id, op_attr,
-					  action_handle, data, user_data, error);
-	ret = flow_err(port_id, ret, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_handle_query)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->async_action_handle_query(dev, queue_id, op_attr,
+							  action_handle, data, user_data, error);
 
 	rte_flow_trace_async_action_handle_query(port_id, queue_id, op_attr,
 						 action_handle, data, user_data,
@@ -2277,24 +2349,21 @@  rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id,
 					  void *user_data,
 					  struct rte_flow_error *error)
 {
-	int ret;
-	struct rte_eth_dev *dev;
-	const struct rte_flow_ops *ops;
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	if (!handle)
-		return -EINVAL;
-	if (!update && !query)
-		return -EINVAL;
-	dev = &rte_eth_devices[port_id];
-	ops = rte_flow_ops_get(port_id, error);
-	if (!ops || !ops->async_action_handle_query_update)
-		return -ENOTSUP;
-	ret = ops->async_action_handle_query_update(dev, queue_id, attr,
-						    handle, update,
-						    query, mode,
-						    user_data, error);
-	return flow_err(port_id, ret, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_handle_query_update)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	return dev->flow_fp_ops->async_action_handle_query_update(dev, queue_id, attr,
+								  handle, update,
+								  query, mode,
+								  user_data, error);
 }
 
 struct rte_flow_action_list_handle *
@@ -2354,24 +2423,28 @@  rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id,
 					 void *user_data,
 					 struct rte_flow_error *error)
 {
-	int ret;
-	struct rte_eth_dev *dev;
-	const struct rte_flow_ops *ops;
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	struct rte_flow_action_list_handle *handle;
+	int ret;
 
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL);
-	ops = rte_flow_ops_get(port_id, error);
-	if (!ops || !ops->async_action_list_handle_create) {
-		rte_flow_error_set(error, ENOTSUP,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "action_list handle not supported");
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENODEV));
 		return NULL;
 	}
-	dev = &rte_eth_devices[port_id];
-	handle = ops->async_action_list_handle_create(dev, queue_id, attr, conf,
-						      actions, user_data,
-						      error);
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_list_handle_create) {
+		rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+#endif
+
+	handle = dev->flow_fp_ops->async_action_list_handle_create(dev, queue_id, attr, conf,
+								   actions, user_data,
+								   error);
 	ret = flow_err(port_id, -rte_errno, error);
+
 	rte_flow_trace_async_action_list_handle_create(port_id, queue_id, attr,
 						       conf, actions, user_data,
 						       ret);
@@ -2384,20 +2457,21 @@  rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id,
 				 struct rte_flow_action_list_handle *handle,
 				 void *user_data, struct rte_flow_error *error)
 {
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	int ret;
-	struct rte_eth_dev *dev;
-	const struct rte_flow_ops *ops;
 
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	ops = rte_flow_ops_get(port_id, error);
-	if (!ops || !ops->async_action_list_handle_destroy)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					  "async action_list handle not supported");
-	dev = &rte_eth_devices[port_id];
-	ret = ops->async_action_list_handle_destroy(dev, queue_id, op_attr,
-						    handle, user_data, error);
-	ret = flow_err(port_id, ret, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_list_handle_destroy)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->async_action_list_handle_destroy(dev, queue_id, op_attr,
+								 handle, user_data, error);
+
 	rte_flow_trace_async_action_list_handle_destroy(port_id, queue_id,
 							op_attr, handle,
 							user_data, ret);
@@ -2438,22 +2512,23 @@  rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_
 			 enum rte_flow_query_update_mode mode,
 			 void *user_data, struct rte_flow_error *error)
 {
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	int ret;
-	struct rte_eth_dev *dev;
-	const struct rte_flow_ops *ops;
 
-	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-	ops = rte_flow_ops_get(port_id, error);
-	if (!ops || !ops->async_action_list_handle_query_update)
-		return rte_flow_error_set(error, ENOTSUP,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					  "action_list async query_update not supported");
-	dev = &rte_eth_devices[port_id];
-	ret = ops->async_action_list_handle_query_update(dev, queue_id, attr,
-							 handle, update, query,
-							 mode, user_data,
-							 error);
-	ret = flow_err(port_id, ret, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id))
+		return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENODEV));
+	if (!dev->flow_fp_ops || !dev->flow_fp_ops->async_action_list_handle_query_update)
+		return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  rte_strerror(ENOSYS));
+#endif
+
+	ret = dev->flow_fp_ops->async_action_list_handle_query_update(dev, queue_id, attr,
+								      handle, update, query,
+								      mode, user_data,
+								      error);
+
 	rte_flow_trace_async_action_list_handle_query_update(port_id, queue_id,
 							     attr, handle,
 							     update, query,
@@ -2482,3 +2557,216 @@  rte_flow_calc_table_hash(uint16_t port_id, const struct rte_flow_template_table
 					hash, error);
 	return flow_err(port_id, ret, error);
 }
+
+static struct rte_flow *
+rte_flow_dummy_async_create(struct rte_eth_dev *dev __rte_unused,
+			    uint32_t queue __rte_unused,
+			    const struct rte_flow_op_attr *attr __rte_unused,
+			    struct rte_flow_template_table *table __rte_unused,
+			    const struct rte_flow_item items[] __rte_unused,
+			    uint8_t pattern_template_index __rte_unused,
+			    const struct rte_flow_action actions[] __rte_unused,
+			    uint8_t action_template_index __rte_unused,
+			    void *user_data __rte_unused,
+			    struct rte_flow_error *error)
+{
+	rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   rte_strerror(ENOSYS));
+	return NULL;
+}
+
+static struct rte_flow *
+rte_flow_dummy_async_create_by_index(struct rte_eth_dev *dev __rte_unused,
+				     uint32_t queue __rte_unused,
+				     const struct rte_flow_op_attr *attr __rte_unused,
+				     struct rte_flow_template_table *table __rte_unused,
+				     uint32_t rule_index __rte_unused,
+				     const struct rte_flow_action actions[] __rte_unused,
+				     uint8_t action_template_index __rte_unused,
+				     void *user_data __rte_unused,
+				     struct rte_flow_error *error)
+{
+	rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   rte_strerror(ENOSYS));
+	return NULL;
+}
+
+static int
+rte_flow_dummy_async_actions_update(struct rte_eth_dev *dev __rte_unused,
+				    uint32_t queue_id __rte_unused,
+				    const struct rte_flow_op_attr *op_attr __rte_unused,
+				    struct rte_flow *flow __rte_unused,
+				    const struct rte_flow_action actions[] __rte_unused,
+				    uint8_t actions_template_index __rte_unused,
+				    void *user_data __rte_unused,
+				    struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static int
+rte_flow_dummy_async_destroy(struct rte_eth_dev *dev __rte_unused,
+			     uint32_t queue_id __rte_unused,
+			     const struct rte_flow_op_attr *op_attr __rte_unused,
+			     struct rte_flow *flow __rte_unused,
+			     void *user_data __rte_unused,
+			     struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static int
+rte_flow_dummy_push(struct rte_eth_dev *dev __rte_unused,
+		    uint32_t queue_id __rte_unused,
+		    struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static int
+rte_flow_dummy_pull(struct rte_eth_dev *dev __rte_unused,
+		    uint32_t queue_id __rte_unused,
+		    struct rte_flow_op_result res[] __rte_unused,
+		    uint16_t n_res __rte_unused,
+		    struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static struct rte_flow_action_handle *
+rte_flow_dummy_async_action_handle_create(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *op_attr __rte_unused,
+	const struct rte_flow_indir_action_conf *indir_action_conf __rte_unused,
+	const struct rte_flow_action *action __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   rte_strerror(ENOSYS));
+	return NULL;
+}
+
+static int
+rte_flow_dummy_async_action_handle_destroy(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *op_attr __rte_unused,
+	struct rte_flow_action_handle *action_handle __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static int
+rte_flow_dummy_async_action_handle_update(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *op_attr __rte_unused,
+	struct rte_flow_action_handle *action_handle __rte_unused,
+	const void *update __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static int
+rte_flow_dummy_async_action_handle_query(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *op_attr __rte_unused,
+	const struct rte_flow_action_handle *action_handle __rte_unused,
+	void *data __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static int
+rte_flow_dummy_async_action_handle_query_update(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *attr __rte_unused,
+	struct rte_flow_action_handle *handle __rte_unused,
+	const void *update __rte_unused,
+	void *query __rte_unused,
+	enum rte_flow_query_update_mode mode __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static struct rte_flow_action_list_handle *
+rte_flow_dummy_async_action_list_handle_create(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *attr __rte_unused,
+	const struct rte_flow_indir_action_conf *conf __rte_unused,
+	const struct rte_flow_action *actions __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			   rte_strerror(ENOSYS));
+	return NULL;
+}
+
+static int
+rte_flow_dummy_async_action_list_handle_destroy(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *op_attr __rte_unused,
+	struct rte_flow_action_list_handle *handle __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+static int
+rte_flow_dummy_async_action_list_handle_query_update(
+	struct rte_eth_dev *dev __rte_unused,
+	uint32_t queue_id __rte_unused,
+	const struct rte_flow_op_attr *attr __rte_unused,
+	const struct rte_flow_action_list_handle *handle __rte_unused,
+	const void **update __rte_unused,
+	void **query __rte_unused,
+	enum rte_flow_query_update_mode mode __rte_unused,
+	void *user_data __rte_unused,
+	struct rte_flow_error *error)
+{
+	return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				  rte_strerror(ENOSYS));
+}
+
+struct rte_flow_fp_ops rte_flow_fp_default_ops = {
+	.async_create = rte_flow_dummy_async_create,
+	.async_create_by_index = rte_flow_dummy_async_create_by_index,
+	.async_actions_update = rte_flow_dummy_async_actions_update,
+	.async_destroy = rte_flow_dummy_async_destroy,
+	.push = rte_flow_dummy_push,
+	.pull = rte_flow_dummy_pull,
+	.async_action_handle_create = rte_flow_dummy_async_action_handle_create,
+	.async_action_handle_destroy = rte_flow_dummy_async_action_handle_destroy,
+	.async_action_handle_update = rte_flow_dummy_async_action_handle_update,
+	.async_action_handle_query = rte_flow_dummy_async_action_handle_query,
+	.async_action_handle_query_update = rte_flow_dummy_async_action_handle_query_update,
+	.async_action_list_handle_create = rte_flow_dummy_async_action_list_handle_create,
+	.async_action_list_handle_destroy = rte_flow_dummy_async_action_list_handle_destroy,
+	.async_action_list_handle_query_update =
+		rte_flow_dummy_async_action_list_handle_query_update,
+};
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f35f659503..dd9d01045d 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -234,122 +234,12 @@  struct rte_flow_ops {
 		 const struct rte_flow_group_attr *attr,
 		 const struct rte_flow_action actions[],
 		 struct rte_flow_error *err);
-	/** See rte_flow_async_create() */
-	struct rte_flow *(*async_create)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow_template_table *template_table,
-		 const struct rte_flow_item pattern[],
-		 uint8_t pattern_template_index,
-		 const struct rte_flow_action actions[],
-		 uint8_t actions_template_index,
-		 void *user_data,
-		 struct rte_flow_error *err);
-	/** See rte_flow_async_create_by_index() */
-	struct rte_flow *(*async_create_by_index)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow_template_table *template_table,
-		 uint32_t rule_index,
-		 const struct rte_flow_action actions[],
-		 uint8_t actions_template_index,
-		 void *user_data,
-		 struct rte_flow_error *err);
-	/** See rte_flow_async_destroy() */
-	int (*async_destroy)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow *flow,
-		 void *user_data,
-		 struct rte_flow_error *err);
-	/** See rte_flow_push() */
-	int (*push)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 struct rte_flow_error *err);
-	/** See rte_flow_pull() */
-	int (*pull)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 struct rte_flow_op_result res[],
-		 uint16_t n_res,
-		 struct rte_flow_error *error);
-	/** See rte_flow_async_action_handle_create() */
-	struct rte_flow_action_handle *(*async_action_handle_create)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 const struct rte_flow_indir_action_conf *indir_action_conf,
-		 const struct rte_flow_action *action,
-		 void *user_data,
-		 struct rte_flow_error *err);
-	/** See rte_flow_async_action_handle_destroy() */
-	int (*async_action_handle_destroy)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow_action_handle *action_handle,
-		 void *user_data,
-		 struct rte_flow_error *error);
-	/** See rte_flow_async_action_handle_update() */
-	int (*async_action_handle_update)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow_action_handle *action_handle,
-		 const void *update,
-		 void *user_data,
-		 struct rte_flow_error *error);
-	/** See rte_flow_async_action_handle_query() */
-	int (*async_action_handle_query)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 const struct rte_flow_action_handle *action_handle,
-		 void *data,
-		 void *user_data,
-		 struct rte_flow_error *error);
-	/** See rte_flow_async_action_handle_query_update */
-	int (*async_action_handle_query_update)
-		(struct rte_eth_dev *dev, uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow_action_handle *action_handle,
-		 const void *update, void *query,
-		 enum rte_flow_query_update_mode qu_mode,
-		 void *user_data, struct rte_flow_error *error);
 	/** See rte_flow_actions_update(). */
 	int (*actions_update)
 		(struct rte_eth_dev *dev,
 		 struct rte_flow *flow,
 		 const struct rte_flow_action actions[],
 		 struct rte_flow_error *error);
-	/** See rte_flow_async_actions_update() */
-	int (*async_actions_update)
-		(struct rte_eth_dev *dev,
-		 uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow *flow,
-		 const struct rte_flow_action actions[],
-		 uint8_t actions_template_index,
-		 void *user_data,
-		 struct rte_flow_error *error);
-	/** @see rte_flow_async_action_list_handle_create() */
-	struct rte_flow_action_list_handle *
-	(*async_action_list_handle_create)
-		(struct rte_eth_dev *dev, uint32_t queue_id,
-		 const struct rte_flow_op_attr *attr,
-		 const struct rte_flow_indir_action_conf *conf,
-		 const struct rte_flow_action *actions,
-		 void *user_data, struct rte_flow_error *error);
-	/** @see rte_flow_async_action_list_handle_destroy() */
-	int (*async_action_list_handle_destroy)
-		(struct rte_eth_dev *dev, uint32_t queue_id,
-		 const struct rte_flow_op_attr *op_attr,
-		 struct rte_flow_action_list_handle *action_handle,
-		 void *user_data, struct rte_flow_error *error);
 	/** @see rte_flow_action_list_handle_query_update() */
 	int (*action_list_handle_query_update)
 		(struct rte_eth_dev *dev,
@@ -357,14 +247,6 @@  struct rte_flow_ops {
 		 const void **update, void **query,
 		 enum rte_flow_query_update_mode mode,
 		 struct rte_flow_error *error);
-	/** @see rte_flow_async_action_list_handle_query_update() */
-	int (*async_action_list_handle_query_update)
-		(struct rte_eth_dev *dev, uint32_t queue_id,
-		 const struct rte_flow_op_attr *attr,
-		 const struct rte_flow_action_list_handle *handle,
-		 const void **update, void **query,
-		 enum rte_flow_query_update_mode mode,
-		 void *user_data, struct rte_flow_error *error);
 	/** @see rte_flow_calc_table_hash() */
 	int (*flow_calc_table_hash)
 		(struct rte_eth_dev *dev, const struct rte_flow_template_table *table,
@@ -394,6 +276,165 @@  rte_flow_ops_get(uint16_t port_id, struct rte_flow_error *error);
 int
 rte_flow_restore_info_dynflag_register(void);
 
+/** @internal Enqueue rule creation operation. */
+typedef struct rte_flow *(*rte_flow_async_create_t)(struct rte_eth_dev *dev,
+						    uint32_t queue,
+						    const struct rte_flow_op_attr *attr,
+						    struct rte_flow_template_table *table,
+						    const struct rte_flow_item *items,
+						    uint8_t pattern_template_index,
+						    const struct rte_flow_action *actions,
+						    uint8_t action_template_index,
+						    void *user_data,
+						    struct rte_flow_error *error);
+
+/** @internal Enqueue rule creation by index operation. */
+typedef struct rte_flow *(*rte_flow_async_create_by_index_t)(struct rte_eth_dev *dev,
+							     uint32_t queue,
+							     const struct rte_flow_op_attr *attr,
+							     struct rte_flow_template_table *table,
+							     uint32_t rule_index,
+							     const struct rte_flow_action *actions,
+							     uint8_t action_template_index,
+							     void *user_data,
+							     struct rte_flow_error *error);
+
+/** @internal Enqueue rule update operation. */
+typedef int (*rte_flow_async_actions_update_t)(struct rte_eth_dev *dev,
+					       uint32_t queue_id,
+					       const struct rte_flow_op_attr *op_attr,
+					       struct rte_flow *flow,
+					       const struct rte_flow_action *actions,
+					       uint8_t actions_template_index,
+					       void *user_data,
+					       struct rte_flow_error *error);
+
+/** @internal Enqueue rule destruction operation. */
+typedef int (*rte_flow_async_destroy_t)(struct rte_eth_dev *dev,
+					uint32_t queue_id,
+					const struct rte_flow_op_attr *op_attr,
+					struct rte_flow *flow,
+					void *user_data,
+					struct rte_flow_error *error);
+
+/** @internal Push all internally stored rules to the HW. */
+typedef int (*rte_flow_push_t)(struct rte_eth_dev *dev,
+			       uint32_t queue_id,
+			       struct rte_flow_error *error);
+
+/** @internal Pull results of completed flow rule operations from the HW. */
+typedef int (*rte_flow_pull_t)(struct rte_eth_dev *dev,
+			       uint32_t queue_id,
+			       struct rte_flow_op_result *res,
+			       uint16_t n_res,
+			       struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action creation operation. */
+typedef struct rte_flow_action_handle *(*rte_flow_async_action_handle_create_t)(
+					struct rte_eth_dev *dev,
+					uint32_t queue_id,
+					const struct rte_flow_op_attr *op_attr,
+					const struct rte_flow_indir_action_conf *indir_action_conf,
+					const struct rte_flow_action *action,
+					void *user_data,
+					struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action destruction operation. */
+typedef int (*rte_flow_async_action_handle_destroy_t)(struct rte_eth_dev *dev,
+						      uint32_t queue_id,
+						      const struct rte_flow_op_attr *op_attr,
+						      struct rte_flow_action_handle *action_handle,
+						      void *user_data,
+						      struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action update operation. */
+typedef int (*rte_flow_async_action_handle_update_t)(struct rte_eth_dev *dev,
+						     uint32_t queue_id,
+						     const struct rte_flow_op_attr *op_attr,
+						     struct rte_flow_action_handle *action_handle,
+						     const void *update,
+						     void *user_data,
+						     struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action query operation. */
+typedef int (*rte_flow_async_action_handle_query_t)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 const struct rte_flow_action_handle *action_handle,
+		 void *data,
+		 void *user_data,
+		 struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action query and/or update operation. */
+typedef int (*rte_flow_async_action_handle_query_update_t)(struct rte_eth_dev *dev,
+							   uint32_t queue_id,
+							   const struct rte_flow_op_attr *attr,
+							   struct rte_flow_action_handle *handle,
+							   const void *update, void *query,
+							   enum rte_flow_query_update_mode mode,
+							   void *user_data,
+							   struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action list creation operation. */
+typedef struct rte_flow_action_list_handle *(*rte_flow_async_action_list_handle_create_t)(
+	struct rte_eth_dev *dev,
+	uint32_t queue_id,
+	const struct rte_flow_op_attr *attr,
+	const struct rte_flow_indir_action_conf *conf,
+	const struct rte_flow_action *actions,
+	void *user_data,
+	struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action list destruction operation. */
+typedef int (*rte_flow_async_action_list_handle_destroy_t)(
+	struct rte_eth_dev *dev,
+	uint32_t queue_id,
+	const struct rte_flow_op_attr *op_attr,
+	struct rte_flow_action_list_handle *handle,
+	void *user_data,
+	struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action list query and/or update operation. */
+typedef int (*rte_flow_async_action_list_handle_query_update_t)(
+	struct rte_eth_dev *dev,
+	uint32_t queue_id,
+	const struct rte_flow_op_attr *attr,
+	const struct rte_flow_action_list_handle *handle,
+	const void **update,
+	void **query,
+	enum rte_flow_query_update_mode mode,
+	void *user_data,
+	struct rte_flow_error *error);
+
+/**
+ * @internal
+ *
+ * Fast path async flow functions are held in a flat array, one entry per ethdev.
+ */
+struct rte_flow_fp_ops {
+	rte_flow_async_create_t async_create;
+	rte_flow_async_create_by_index_t async_create_by_index;
+	rte_flow_async_actions_update_t async_actions_update;
+	rte_flow_async_destroy_t async_destroy;
+	rte_flow_push_t push;
+	rte_flow_pull_t pull;
+	rte_flow_async_action_handle_create_t async_action_handle_create;
+	rte_flow_async_action_handle_destroy_t async_action_handle_destroy;
+	rte_flow_async_action_handle_update_t async_action_handle_update;
+	rte_flow_async_action_handle_query_t async_action_handle_query;
+	rte_flow_async_action_handle_query_update_t async_action_handle_query_update;
+	rte_flow_async_action_list_handle_create_t async_action_list_handle_create;
+	rte_flow_async_action_list_handle_destroy_t async_action_list_handle_destroy;
+	rte_flow_async_action_list_handle_query_update_t async_action_list_handle_query_update;
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * Default implementation of fast path flow API functions.
+ */
+extern struct rte_flow_fp_ops rte_flow_fp_default_ops;
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index a050baab0f..61e38a9f00 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -348,4 +348,6 @@  INTERNAL {
 	rte_eth_representor_id_get;
 	rte_eth_switch_domain_alloc;
 	rte_eth_switch_domain_free;
+
+	rte_flow_fp_default_ops;
 };