[v4,3/6] net: advertise no support for keeping flow rules

Message ID 20211021063503.3632732-4-dkozlyuk@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: Flow entities behavior on port restart

Checks

Context               Check    Description
ci/checkpatch         warning  coding style issues
ci/Intel-compilation  warning  apply issues

Commit Message

Dmitry Kozlyuk Oct. 21, 2021, 6:35 a.m. UTC
  When the RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP capability bit is zero,
the specified behavior is the same as it had been before
this bit was introduced. Explicitly reset it in all PMDs
supporting the rte_flow API in order to attract the attention
of maintainers, who should eventually choose whether to
advertise the new capability. It is already known that
mlx4 and mlx5 will not support this capability.

For RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP
no similar action is performed,
because no PMD except mlx5 supports indirect actions.
Any PMD that starts doing so will have to consider
all relevant APIs anyway, including this capability.

Suggested-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
 drivers/net/bnxt/bnxt_ethdev.c          | 1 +
 drivers/net/bnxt/bnxt_reps.c            | 1 +
 drivers/net/cnxk/cnxk_ethdev_ops.c      | 1 +
 drivers/net/cxgbe/cxgbe_ethdev.c        | 2 ++
 drivers/net/dpaa2/dpaa2_ethdev.c        | 1 +
 drivers/net/e1000/em_ethdev.c           | 2 ++
 drivers/net/e1000/igb_ethdev.c          | 1 +
 drivers/net/enic/enic_ethdev.c          | 1 +
 drivers/net/failsafe/failsafe_ops.c     | 1 +
 drivers/net/hinic/hinic_pmd_ethdev.c    | 2 ++
 drivers/net/hns3/hns3_ethdev.c          | 1 +
 drivers/net/hns3/hns3_ethdev_vf.c       | 1 +
 drivers/net/i40e/i40e_ethdev.c          | 1 +
 drivers/net/i40e/i40e_vf_representor.c  | 2 ++
 drivers/net/iavf/iavf_ethdev.c          | 1 +
 drivers/net/ice/ice_dcf_ethdev.c        | 1 +
 drivers/net/igc/igc_ethdev.c            | 1 +
 drivers/net/ipn3ke/ipn3ke_representor.c | 1 +
 drivers/net/mvpp2/mrvl_ethdev.c         | 2 ++
 drivers/net/octeontx2/otx2_ethdev_ops.c | 1 +
 drivers/net/qede/qede_ethdev.c          | 1 +
 drivers/net/sfc/sfc_ethdev.c            | 1 +
 drivers/net/softnic/rte_eth_softnic.c   | 1 +
 drivers/net/tap/rte_eth_tap.c           | 1 +
 drivers/net/txgbe/txgbe_ethdev.c        | 1 +
 drivers/net/txgbe/txgbe_ethdev_vf.c     | 1 +
 26 files changed, 31 insertions(+)
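
As context for how an application consumes these bits, below is a
minimal sketch (assuming the DPDK 21.11+ rte_eth_dev_info_get() API;
the helper name is hypothetical, not part of this patch): when a bit
is zero, the corresponding flow rules or indirect actions must be
re-created after the port is restarted.

#include <stdbool.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch: test a dev_capa bit. When RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP
 * (or RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP) is zero, the
 * application must re-create its flow rules (or indirect actions)
 * after rte_eth_dev_start(). */
static bool
port_has_capa(uint16_t port_id, uint64_t capa)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return false; /* be conservative on query failure */
	return (dev_info.dev_capa & capa) == capa;
}

For example, port_has_capa(port_id, RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP)
tells the application whether it can skip replaying its rule list
across a stop/start cycle.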
  

Comments

Ajit Khaparde Oct. 21, 2021, 6:26 p.m. UTC | #1
On Wed, Oct 20, 2021 at 11:36 PM Dmitry Kozlyuk <dkozlyuk@nvidia.com> wrote:
> [...]
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
  
Somnath Kotur Oct. 22, 2021, 1:38 a.m. UTC | #2
On Thu, 21 Oct 2021, 23:56 Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:

> On Wed, Oct 20, 2021 at 11:36 PM Dmitry Kozlyuk <dkozlyuk@nvidia.com> wrote:
> > [...]
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
  
Hyong Youb Kim (hyonkim) Oct. 27, 2021, 7:11 a.m. UTC | #3
> -----Original Message-----
> From: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Sent: Thursday, October 21, 2021 3:35 PM
[...]
> Subject: [PATCH v4 3/6] net: advertise no support for keeping flow rules
> 
> [...]

For net/enic,

Acked-by: Hyong Youb Kim <hyonkim@cisco.com>

-Hyong
  
Andrew Rybchenko Nov. 1, 2021, 3:06 p.m. UTC | #4
On 10/21/21 9:35 AM, Dmitry Kozlyuk wrote:
> [...]

I'm sorry, but I still think that the patch is confusing.
No strong opinion, but personally I'd go without it.
  
Ferruh Yigit Nov. 1, 2021, 4:59 p.m. UTC | #5
On 11/1/2021 3:06 PM, Andrew Rybchenko wrote:
> On 10/21/21 9:35 AM, Dmitry Kozlyuk wrote:
>> [...]
> 
> I'm sorry, but I still think that the patch is confusing.
> No strong opinion, but personally I'd go without it.

It is confusing, all right, to add a flag that has no impact.

But again, my concern is some PMD maintainers not being aware of
this new capability flag, which has an impact on user applications.
See the level of comments on the patch from PMD maintainers.

This way gives visibility to PMD maintainers that some action
is required, and gives visibility to application developers and
maintainers to track the support. In time, these explicit clears
should go away.

Updating ethdev without updating all drivers is hard to manage.
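
For contrast, a sketch of the eventual maintainer action (hypothetical
driver "foo", not part of this patch): once a PMD is verified to keep
flow rules across stop/start, its dev_info_get callback advertises the
bit instead of clearing it.

/* Hypothetical PMD that preserves flow rules across stop/start. */
static int
foo_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
	RTE_SET_USED(dev);
	dev_info->dev_capa |= RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
	/* ... fill in the remaining dev_info fields ... */
	return 0;
}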
  

Patch

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f..dbdcdb1ec4 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1008,6 +1008,7 @@  static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	dev_info->speed_capa = bnxt_get_speed_capabilities(bp);
 	dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			     RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index b7e88e013a..34b5df6018 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -526,6 +526,7 @@  int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	dev_info->max_tx_queues = max_rx_rings;
 	dev_info->reta_size = bnxt_rss_hash_tbl_size(parent_bp);
 	dev_info->hash_key_size = 40;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	/* MTU specifics */
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index d0924df761..b598512322 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -68,6 +68,7 @@  cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	devinfo->speed_capa = dev->speed_capa;
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			    RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	return 0;
 }
 
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index f77b297600..e654ccc854 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -131,6 +131,8 @@  int cxgbe_dev_info_get(struct rte_eth_dev *eth_dev,
 	device_info->max_vfs = adapter->params.arch.vfcount;
 	device_info->max_vmdq_pools = 0; /* XXX: For now no support for VMDQ */
 
+	device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	device_info->rx_queue_offload_capa = 0UL;
 	device_info->rx_offload_capa = CXGBE_RX_OFFLOADS;
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e7852..19f35262e5 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -254,6 +254,7 @@  dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->speed_capa = ETH_LINK_SPEED_1G |
 			ETH_LINK_SPEED_2_5G |
 			ETH_LINK_SPEED_10G;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6e..3d546c5517 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1106,6 +1106,8 @@  eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
 			ETH_LINK_SPEED_1G;
 
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
 	dev_info->default_txportconf.nb_queues = 1;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad..d1e61ea345 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -2174,6 +2174,7 @@  eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_queue_offload_capa = igb_get_tx_queue_offloads_capa(dev);
 	dev_info->tx_offload_capa = igb_get_tx_port_offloads_capa(dev) |
 				    dev_info->tx_queue_offload_capa;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	switch (hw->mac.type) {
 	case e1000_82575:
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8df7332bc5..4e8ccfd832 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -469,6 +469,7 @@  static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
 	device_info->rx_offload_capa = enic->rx_offload_capa;
 	device_info->tx_offload_capa = enic->tx_offload_capa;
 	device_info->tx_queue_offload_capa = enic->tx_queue_offload_capa;
+	device_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	device_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = ENIC_DEFAULT_RX_FREE_THRESH
 	};
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c..9e9c688961 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1220,6 +1220,7 @@  fs_dev_infos_get(struct rte_eth_dev *dev,
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	infos->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
 		struct rte_eth_dev_info sub_info;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb67..ff287321c5 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -751,6 +751,8 @@  hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 				DEV_TX_OFFLOAD_TCP_TSO |
 				DEV_TX_OFFLOAD_MULTI_SEGS;
 
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
 	info->flow_type_rss_offloads = HINIC_RSS_OFFLOAD_ALL;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f587..4177c0db41 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2707,6 +2707,7 @@  hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	if (hns3_dev_get_support(hw, PTP))
 		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798..b53e9be091 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -965,6 +965,7 @@  hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	if (hns3_dev_get_support(hw, INDEP_TXRX))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d..e472cee167 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3751,6 +3751,7 @@  i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->hash_key_size = (I40E_PFQF_HKEY_MAX_INDEX + 1) *
 						sizeof(uint32_t);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 12d5a2e48a..4d5a4af292 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -35,6 +35,8 @@  i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	/* get dev info for the vdev */
 	dev_info->device = ethdev->device;
 
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	dev_info->max_rx_queues = ethdev->data->nb_rx_queues;
 	dev_info->max_tx_queues = ethdev->data->nb_tx_queues;
 
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722..9bb5bdf465 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -960,6 +960,7 @@  iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = vf->vf_res->rss_lut_size;
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_offload_capa =
 		DEV_RX_OFFLOAD_VLAN_STRIP |
 		DEV_RX_OFFLOAD_QINQ_STRIP |
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b8a537cb85..05a7ccf71e 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -673,6 +673,7 @@  ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->hash_key_size = hw->vf_res->rss_key_size;
 	dev_info->reta_size = hw->vf_res->rss_lut_size;
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->rx_offload_capa =
 		DEV_RX_OFFLOAD_VLAN_STRIP |
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b64..7d4cc408ba 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1480,6 +1480,7 @@  eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen = MAX_RX_JUMBO_FRAME_SIZE;
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
 	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 063a9c6a6f..d40947162d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -96,6 +96,7 @@  ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	dev_info->switch_info.name = ethdev->device->name;
 	dev_info->switch_info.domain_id = rpst->switch_domain_id;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index a6458d2ce9..a6d67ea093 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -1709,6 +1709,8 @@  mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
+	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+
 	info->speed_capa = ETH_LINK_SPEED_10M |
 			   ETH_LINK_SPEED_100M |
 			   ETH_LINK_SPEED_1G |
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 22a8af5cba..cad5416ba2 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -583,6 +583,7 @@  otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	return 0;
 }
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc7..5bcc97d314 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1367,6 +1367,7 @@  qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
 	dev_info->rx_desc_lim = qede_rx_desc_lim;
 	dev_info->tx_desc_lim = qede_tx_desc_lim;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	if (IS_PF(edev))
 		dev_info->max_rx_queues = (uint16_t)RTE_MIN(
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f5986b610f..8951495841 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -186,6 +186,7 @@  sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 			     RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	if (mae->status == SFC_MAE_STATUS_SUPPORTED ||
 	    mae->status == SFC_MAE_STATUS_ADMIN) {
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035..3622049afa 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -93,6 +93,7 @@  pmd_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_rx_pktlen = UINT32_MAX;
 	dev_info->max_rx_queues = UINT16_MAX;
 	dev_info->max_tx_queues = UINT16_MAX;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	return 0;
 }
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad4521..5e19bd8d4b 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1006,6 +1006,7 @@  tap_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	 * functions together and not in partial combinations
 	 */
 	dev_info->flow_type_rss_offloads = ~TAP_RSS_HF_MASK;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 
 	return 0;
 }
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb686..6d64c657d9 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2603,6 +2603,7 @@  txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = pci_dev->max_vfs;
 	dev_info->max_vmdq_pools = ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b..0d464c5a4c 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -487,6 +487,7 @@  txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);