[dpdk-dev,15/18] net/ixgbe: parse flow director filter

Message ID 1480675394-59179-16-git-send-email-wei.zhao1@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Checks

Context                Check    Description
checkpatch/checkpatch  warning  coding style issues

Commit Message

Zhao1, Wei Dec. 2, 2016, 10:43 a.m. UTC
  From: wei zhao1 <wei.zhao1@intel.com>

Check if the rule is a flow director rule, and get the flow director info.

Signed-off-by: wei zhao1 <wei.zhao1@intel.com>
Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 823 +++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_ethdev.h |  32 +-
 drivers/net/ixgbe/ixgbe_fdir.c   | 247 ++++++++----
 3 files changed, 1019 insertions(+), 83 deletions(-)
  

Comments

Ferruh Yigit Dec. 20, 2016, 5 p.m. UTC | #1
On 12/2/2016 10:43 AM, Wei Zhao wrote:
> From: wei zhao1 <wei.zhao1@intel.com>
> 
> check if the rule is a flow director rule, and get the flow director info.
> 
> Signed-off-by: wei zhao1 <wei.zhao1@intel.com>
> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> ---

<...>

> +	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
> +			  RTE_FLOW_ERROR_TYPE_ITEM_NUM);
> +	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {

This gives a build error [1]; there are a few more instances of the same usage:

.../drivers/net/ixgbe/ixgbe_ethdev.c:9238:17: error: comparison of
constant 241 with expression of type 'const enum rte_flow_item_type' is
always true [-Werror,-Wtautological-constant-out-of-range-compare]
            item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
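
For context, clang's -Wtautological-constant-out-of-range-compare fires
whenever an enum-typed expression is compared against a constant that lies
outside the enum's declared range. A minimal illustration (names are made
up, not from the patch):

 enum demo_item_type { DEMO_TYPE_END, DEMO_TYPE_ETH }; /* max value is 1 */
 #define DEMO_TYPE_NVGRE 0xF1 /* macro defined outside the enum */

 static int
 demo_check(enum demo_item_type t)
 {
         /* always true, since no enumerator can equal 0xF1 (241) */
         return t != DEMO_TYPE_NVGRE;
 }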
  
Zhao1, Wei Dec. 22, 2016, 9:19 a.m. UTC | #2
Hi, Yigit

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, December 21, 2016 1:01 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 15/18] net/ixgbe: parse flow director filter
> 
> On 12/2/2016 10:43 AM, Wei Zhao wrote:
> > From: wei zhao1 <wei.zhao1@intel.com>
> >
> > check if the rule is a flow director rule, and get the flow director info.
> >
> > Signed-off-by: wei zhao1 <wei.zhao1@intel.com>
> > Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > ---
> 
> <...>
> 
> > +	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
> > +			  RTE_FLOW_ERROR_TYPE_ITEM_NUM);
> > +	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> > +	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> > +	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
> > +	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> > +	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
> > +	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> 
> This gives build error [1], there are a few more same usage:
> 
> .../drivers/net/ixgbe/ixgbe_ethdev.c:9238:17: error: comparison of constant
> 241 with expression of type 'const enum rte_flow_item_type' is always true
> [-Werror,-Wtautological-constant-out-of-range-compare]
>             item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> 
> 
> 

OK, I will add two type definitions, RTE_FLOW_ITEM_TYPE_NVGRE and RTE_FLOW_ITEM_TYPE_E_TAG, into enum rte_flow_item_type to eliminate this problem.
Thank you.
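
A sketch of the intended change (assuming the rte_flow.h enum of Adrien's
series; the exact position of the new entries may differ in v2):

 enum rte_flow_item_type {
         RTE_FLOW_ITEM_TYPE_END,
         /* ... existing items (VOID, ETH, VLAN, IPV4, ...) ... */
         RTE_FLOW_ITEM_TYPE_VXLAN,
         RTE_FLOW_ITEM_TYPE_E_TAG, /* new */
         RTE_FLOW_ITEM_TYPE_NVGRE, /* new */
 };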
  
Ferruh Yigit Dec. 22, 2016, 10:44 a.m. UTC | #3
On 12/22/2016 9:19 AM, Zhao1, Wei wrote:
> Hi, Yigit
> 
>> -----Original Message-----
>> From: Yigit, Ferruh
>> Sent: Wednesday, December 21, 2016 1:01 AM
>> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
>> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH 15/18] net/ixgbe: parse flow director filter
>>
>> On 12/2/2016 10:43 AM, Wei Zhao wrote:
>>> From: wei zhao1 <wei.zhao1@intel.com>
>>>
>>> check if the rule is a flow director rule, and get the flow director info.
>>>
>>> Signed-off-by: wei zhao1 <wei.zhao1@intel.com>
>>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
>>> ---
>>
>> <...>
>>
>>> +	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
>>> +			  RTE_FLOW_ERROR_TYPE_ITEM_NUM);
>>> +	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
>>> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
>>> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
>>> +	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
>>> +	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
>>> +	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
>>
>> This gives build error [1], there are a few more same usage:
>>
>> .../drivers/net/ixgbe/ixgbe_ethdev.c:9238:17: error: comparison of constant
>> 241 with expression of type 'const enum rte_flow_item_type' is always true
>> [-Werror,-Wtautological-constant-out-of-range-compare]
>>             item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
>>
>>
>>
> 
> Ok, I will add two type definition RTE_FLOW_ITEM_TYPE_NVGRE and RTE_FLOW_ITEM_TYPE_E_TAG  into const enum rte_flow_item_type to eliminate this problem.
> Thank you.
> 

CC: Adrien Mazarguil <adrien.mazarguil@6wind.com>

Yes, that is the right thing to do. Since the rte_flow patchset is not
merged yet, perhaps Adrien may want to include this in the next version of
his patchset?

What do you think Adrien?

Thanks,
ferruh
  
Adrien Mazarguil Dec. 23, 2016, 8:13 a.m. UTC | #4
Hi,

On Thu, Dec 22, 2016 at 10:44:32AM +0000, Ferruh Yigit wrote:
> On 12/22/2016 9:19 AM, Zhao1, Wei wrote:
> > Hi, Yigit
> > 
> >> -----Original Message-----
> >> From: Yigit, Ferruh
> >> Sent: Wednesday, December 21, 2016 1:01 AM
> >> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> >> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> >> Subject: Re: [dpdk-dev] [PATCH 15/18] net/ixgbe: parse flow director filter
> >>
> >> On 12/2/2016 10:43 AM, Wei Zhao wrote:
> >>> From: wei zhao1 <wei.zhao1@intel.com>
> >>>
> >>> check if the rule is a flow director rule, and get the flow director info.
> >>>
> >>> Signed-off-by: wei zhao1 <wei.zhao1@intel.com>
> >>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> >>> ---
> >>
> >> <...>
> >>
> >>> +	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
> >>> +			  RTE_FLOW_ERROR_TYPE_ITEM_NUM);
> >>> +	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> >>> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> >>> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
> >>> +	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> >>> +	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
> >>> +	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> >>
> >> This gives build error [1], there are a few more same usage:
> >>
> >> .../drivers/net/ixgbe/ixgbe_ethdev.c:9238:17: error: comparison of constant
> >> 241 with expression of type 'const enum rte_flow_item_type' is always true
> >> [-Werror,-Wtautological-constant-out-of-range-compare]
> >>             item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> >>
> >>
> >>
> > 
> > Ok, I will add two type definition RTE_FLOW_ITEM_TYPE_NVGRE and RTE_FLOW_ITEM_TYPE_E_TAG  into const enum rte_flow_item_type to eliminate this problem.
> > Thank you.
> > 
> 
> CC: Adrien Mazarguil <adrien.mazarguil@6wind.com>

Thanks, the original message did not catch my attention.

> Yes, that is what right thing to do, since rte_flow patchset not merged
> yet, perhaps Adrien may want to include this as next version of his
> patchset?
> 
> What do you think Adrien?

I think by now the rte_flow series is automatically categorized as spam by
half the community already, and that new items such as these can be added
subsequently on their own, ideally before the entire ixgbe/i40e series.

I have a few comments regarding these new items if we want to make them part
of rte_flow.h (see definitions below).

Unfortunately, even though they are super convenient to use and expose in the
testpmd flow command, we need to avoid C-style bit-field definitions such as
these for protocol header matching because they are not endian-safe,
particularly multi-byte fields. Only meta-data that should be interpreted
with host endianness can use them (e.g. struct rte_flow_attr, struct
rte_flow_action_vf, etc):

 struct rte_flow_item_nvgre {
        uint32_t flags0:1; /**< 0 */
        uint32_t rsvd1:1; /**< 1 bit not defined */
        uint32_t flags1:2; /**< 2 bits, 1 0 */
        uint32_t rsvd0:9; /**< Reserved0 */
        uint32_t ver:3; /**< version */
        uint32_t protocol:16; /**< protocol type, 0x6558 */
        uint32_t tni:24; /**< tenant network ID or virtual subnet ID */
        uint32_t flow_id:8; /**< flow ID or Reserved */
 };

For an example of how to avoid them, see the struct ipv6_hdr definition in
rte_ip.h, where field vtc_flow is 32 bit but covers three protocol fields
and is considered big-endian (Nelio's endianness series [1] would be really
handy to eliminate confusion here). Also see struct rte_flow_item_vxlan,
which covers 24-bit fields using uint8_t arrays.
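
Following that pattern, an endian-safe NVGRE item might look something like
this (a sketch only, field names are not taken from any existing header):

 struct rte_flow_item_nvgre {
        /**
         * C (1b), reserved (1b), K (1b), S (1b), reserved (9b),
         * version (3b). Big-endian, to be tested through masks.
         */
        uint16_t c_k_s_rsvd0_ver;
        uint16_t protocol; /**< Protocol type (0x6558). Big-endian. */
        uint8_t tni[3]; /**< Tenant network ID / virtual subnet ID. */
        uint8_t flow_id; /**< Flow ID. */
 };

The proposed E-tag item has the same problem: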

 struct rte_flow_item_e_tag {
        struct ether_addr dst; /**< Destination MAC. */
        struct ether_addr src; /**< Source MAC. */
        uint16_t e_tag_ethertype; /**< E-tag EtherType, 0x893F. */
        uint16_t e_pcp:3; /**<  E-PCP */
        uint16_t dei:1; /**< DEI */
        uint16_t in_e_cid_base:12; /**< Ingress E-CID base */
        uint16_t rsv:2; /**< reserved */
        uint16_t grp:2; /**< GRP */
        uint16_t e_cid_base:12; /**< E-CID base */
        uint16_t in_e_cid_ext:8; /**< Ingress E-CID extend */
        uint16_t e_cid_ext:8; /**< E-CID extend */
        uint16_t type; /**< MAC type. */
        unsigned int tags; /**< Number of 802.1Q/ad tags defined. */
        struct {
                uint16_t tpid; /**< Tag protocol identifier. */
                uint16_t tci; /**< Tag control information. */
        } tag[]; /**< 802.1Q/ad tag definitions, outermost first. */
 };

Besides the bit-field issue for this one, it looks like it should be split;
at least the initial part is already covered by rte_flow_item_eth.

I do not know much about E-Tag (IEEE 802.1BR right?) but it sort of looks
like a 2.5 layer protocol reminiscent of VLAN.

tags and tag[] fields seem based on the VLAN definition of the original
rte_flow RFC that has since been replaced with stacked rte_flow_item_vlan
items, much easier to program for. Perhaps this can be relied on instead of
having e_tag implement its own variant.
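
For instance, matching a double-tagged frame would just be a matter of
stacking items (sketch, spec/mask omitted):

 const struct rte_flow_item pattern[] = {
         { .type = RTE_FLOW_ITEM_TYPE_ETH },
         { .type = RTE_FLOW_ITEM_TYPE_VLAN }, /* outer tag */
         { .type = RTE_FLOW_ITEM_TYPE_VLAN }, /* inner tag */
         { .type = RTE_FLOW_ITEM_TYPE_END },
 };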

As this is a protocol-matching item and the E-Tag TCI is 6 bytes according
to IEEE 802.1BR, sizeof(struct rte_flow_item_e_tag) should ideally return 6
as well, likely comprising three big-endian uint16_t fields. See how
PCP/DEI/VID VLAN fields are interpreted in testpmd [2].
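
One possible reading of that suggestion (field names are hypothetical):

 struct rte_flow_item_e_tag {
         /** E-PCP (3b), DEI (1b), ingress E-CID base (12b). Big-endian. */
         uint16_t epcp_dei_in_ecid_b;
         /** Reserved (2b), GRP (2b), E-CID base (12b). Big-endian. */
         uint16_t rsvd_grp_ecid_b;
         /** Ingress E-CID ext (8b), E-CID ext (8b). */
         uint16_t in_ecid_e_ecid_e;
 };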

Again, these concerns only stand if you intend to include these definitions
into rte_flow.h.

[1] http://dpdk.org/ml/archives/dev/2016-November/050060.html
[2] http://dpdk.org/ml/archives/dev/2016-December/052976.html
  
Zhao1, Wei Dec. 27, 2016, 3:31 a.m. UTC | #5
Hi,  Adrien

> -----Original Message-----
> From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com]
> Sent: Friday, December 23, 2016 4:13 PM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 15/18] net/ixgbe: parse flow director filter
> 
> Hi,
> 
> On Thu, Dec 22, 2016 at 10:44:32AM +0000, Ferruh Yigit wrote:
> > On 12/22/2016 9:19 AM, Zhao1, Wei wrote:
> > > Hi, Yigit
> > >
> > >> -----Original Message-----
> > >> From: Yigit, Ferruh
> > >> Sent: Wednesday, December 21, 2016 1:01 AM
> > >> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> > >> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > >> Subject: Re: [dpdk-dev] [PATCH 15/18] net/ixgbe: parse flow
> > >> director filter
> > >>
> > >> On 12/2/2016 10:43 AM, Wei Zhao wrote:
> > >>> From: wei zhao1 <wei.zhao1@intel.com>
> > >>>
> > >>> check if the rule is a flow director rule, and get the flow director info.
> > >>>
> > >>> Signed-off-by: wei zhao1 <wei.zhao1@intel.com>
> > >>> Signed-off-by: Wenzhuo Lu <wenzhuo.lu@intel.com>
> > >>> ---
> > >>
> > >> <...>
> > >>
> > >>> +	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
> > >>> +			  RTE_FLOW_ERROR_TYPE_ITEM_NUM);
> > >>> +	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> > >>> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
> > >>> +	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
> > >>> +	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> > >>> +	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
> > >>> +	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> > >>
> > >> This gives build error [1], there are a few more same usage:
> > >>
> > >> .../drivers/net/ixgbe/ixgbe_ethdev.c:9238:17: error: comparison of
> > >> constant
> > >> 241 with expression of type 'const enum rte_flow_item_type' is
> > >> always true [-Werror,-Wtautological-constant-out-of-range-compare]
> > >>             item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> > >>
> > >>
> > >>
> > >
> > > Ok, I will add two type definition RTE_FLOW_ITEM_TYPE_NVGRE and
> RTE_FLOW_ITEM_TYPE_E_TAG  into const enum rte_flow_item_type to
> eliminate this problem.
> > > Thank you.
> > >
> >
> > CC: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> 
> Thanks, the original message did not catch my attention.
> 
> > Yes, that is what right thing to do, since rte_flow patchset not
> > merged yet, perhaps Adrien may want to include this as next version of
> > his patchset?
> >
> > What do you think Adrien?
> 
> I think by now the rte_flow series is automatically categorized as spam by
> half the community already, and that new items such as these can be added
> subsequently on their own, ideally before the entire ixgbe/i40e series.
> 
> I have a few comments regarding these new items if we want to make them
> part of rte_flow.h (see definitions below).
> 
OK, I will add these type definitions in v2 of my patch.

> Unfortunately, even though they are super convenient to use and expose in
> the testpmd flow command, we need to avoid C-style bit-field definitions
> such as these for protocol header matching because they are not endian-
> safe, particularly multi-byte fields. Only meta-data that should be interpreted
> with host endianness can use them (e.g. struct rte_flow_attr, struct
> rte_flow_action_vf, etc):
> 
>  struct rte_flow_item_nvgre {
>         uint32_t flags0:1; /**< 0 */
>         uint32_t rsvd1:1; /**< 1 bit not defined */
>         uint32_t flags1:2; /**< 2 bits, 1 0 */
>         uint32_t rsvd0:9; /**< Reserved0 */
>         uint32_t ver:3; /**< version */
>         uint32_t protocol:16; /**< protocol type, 0x6558 */
>         uint32_t tni:24; /**< tenant network ID or virtual subnet ID */
>         uint32_t flow_id:8; /**< flow ID or Reserved */  };
> 
> For an example how to avoid them, see struct ipv6_hdr definition in rte_ip.h,
> where field vtc_flow is 32 bit but covers three protocol fields and is
> considered big-endian (Nelio's endianness series [1] would be really handy to
> eliminate confusion here). Also see struct rte_flow_item_vxlan, which
> covers 24-bit fields using uint8_t arrays.
> 
>  struct rte_flow_item_e_tag {
>         struct ether_addr dst; /**< Destination MAC. */
>         struct ether_addr src; /**< Source MAC. */
>         uint16_t e_tag_ethertype; /**< E-tag EtherType, 0x893F. */
>         uint16_t e_pcp:3; /**<  E-PCP */
>         uint16_t dei:1; /**< DEI */
>         uint16_t in_e_cid_base:12; /**< Ingress E-CID base */
>         uint16_t rsv:2; /**< reserved */
>         uint16_t grp:2; /**< GRP */
>         uint16_t e_cid_base:12; /**< E-CID base */
>         uint16_t in_e_cid_ext:8; /**< Ingress E-CID extend */
>         uint16_t e_cid_ext:8; /**< E-CID extend */
>         uint16_t type; /**< MAC type. */
>         unsigned int tags; /**< Number of 802.1Q/ad tags defined. */
>         struct {
>                 uint16_t tpid; /**< Tag protocol identifier. */
>                 uint16_t tci; /**< Tag control information. */
>         } tag[]; /**< 802.1Q/ad tag definitions, outermost first. */  };
> 
> Besides the bit-field issue for this one, looks like it should be split, at least the
> initial part is already covered by rte_flow_item_eth.
> 
> I do not know much about E-Tag (IEEE 802.1BR right?) but it sort of looks like a
> 2.5 layer protocol reminiscent of VLAN.
> 
> tags and tag[] fields seem based on the VLAN definition of the original
> rte_flow RFC that has since been replaced with stacked rte_flow_item_vlan
> items, much easier to program for. Perhaps this can be relied on instead of
> having e_tag implement its own variant.
> 
> As a protocol-matching item and E-Tag TCI being 6 bytes according to IEEE
> 802.1BR, sizeof(struct rte_flow_item_e_tag) should ideally return 6 as well
> and would likely comprise three big-endian uint16_t fields. See how
> PCP/DEI/VID VLAN fields are interpreted in testpmd [2].
> 
> Again, these concerns only stand if you intend to include these definitions
> into rte_flow.h.
> 
> [1] http://dpdk.org/ml/archives/dev/2016-November/050060.html
> [2] http://dpdk.org/ml/archives/dev/2016-December/052976.html
> 
> --
> Adrien Mazarguil
> 6WIND
  

Patch

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 104277d..75fadb0 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -425,6 +425,21 @@  cons_parse_l2_tn_filter(const struct rte_flow_attr *attr,
 		const struct rte_flow_item pattern[],
 		const struct rte_flow_action actions[],
 		struct rte_eth_l2_tunnel_conf *filter);
+static enum rte_flow_error_type
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule);
+static enum rte_flow_error_type
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule);
+static enum rte_flow_error_type
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct ixgbe_fdir_rule *rule);
 enum rte_flow_error_type
 ixgbe_flow_rule_validate(__rte_unused struct rte_eth_dev *dev,
 				const struct rte_flow_attr *attr,
@@ -1390,6 +1405,7 @@  eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev)
 			     "Failed to allocate memory for fdir hash map!");
 		return -ENOMEM;
 	}
+	fdir_info->mask_added = FALSE;
 
 	/* initialize l2 tunnel filter list & hash */
 	TAILQ_INIT(&l2_tn_info->l2_tn_list);
@@ -8609,6 +8625,808 @@  cons_parse_syn_filter(const struct rte_flow_attr *attr,
 
 	return 0;
 }
+/* Parse to get the attr and action info of a flow director rule. */
+static int
+ixgbe_parse_fdir_act_attr(const struct rte_flow_attr *attr,
+			  const struct rte_flow_action actions[],
+			  struct ixgbe_fdir_rule *rule)
+{
+	const struct rte_flow_action *act;
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action_mark *mark;
+	uint32_t i;
+
+	/************************************************
+	 * parse attr
+	 ************************************************/
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ATTR_INGRESS;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ATTR_EGRESS;
+	}
+
+	/* not supported */
+	if (attr->priority) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY;
+	}
+
+	/************************************************
+	 * parse action
+	 ************************************************/
+	i = 0;
+
+	/* check if the first not void action is QUEUE or DROP. */
+	ACTION_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+			 RTE_FLOW_ERROR_TYPE_ACTION_NUM);
+	if (act->type != RTE_FLOW_ACTION_TYPE_QUEUE &&
+	    act->type != RTE_FLOW_ACTION_TYPE_DROP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ACTION;
+	}
+
+	if (act->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		rule->queue = act_q->index;
+	} else { /* drop */
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	}
+
+	/* check if the next not void action is MARK */
+	i++;
+	ACTION_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+			 RTE_FLOW_ERROR_TYPE_ACTION);
+	if (act->type != RTE_FLOW_ACTION_TYPE_MARK) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ACTION;
+	}
+
+	mark = (const struct rte_flow_action_mark *)act->conf;
+	rule->soft_id = mark->id;
+
+	/* check if the next not void action is END */
+	i++;
+	ACTION_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+			 RTE_FLOW_ERROR_TYPE_ACTION);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ACTION;
+	}
+
+	return 0;
+}
+
+/**
+ * Parse the rule to see if it is an IP or MAC VLAN flow director rule.
+ * And get the flow director filter info as well.
+ */
+static enum rte_flow_error_type
+ixgbe_parse_fdir_filter_normal(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+
+	uint32_t i, j;
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * values, so we need not handle the missing fields later.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/************************************************
+	 * parse pattern
+	 ************************************************/
+	i = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or TCP or UDP or SCTP.
+	 */
+	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+			  RTE_FLOW_ERROR_TYPE_ITEM_NUM);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ITEM;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT;
+
+	/* Get the MAC info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/**
+		 * Only support vlan and dst MAC address,
+		 * others should be masked.
+		 */
+		if (item->spec && !item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+			/* Get the dst MAC. */
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				rule->ixgbe_fdir.formatted.inner_mac[j] =
+					eth_spec->dst.addr_bytes[j];
+			}
+		}
+
+
+		if (item->mask) {
+			/* If ethernet has meaning, it means MAC VLAN mode. */
+			rule->mode = RTE_FDIR_MODE_PERFECT_MAC_VLAN;
+
+			rule->b_mask = TRUE;
+			eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+			/* Ether type should be masked. */
+			if (eth_mask->type) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				return RTE_FLOW_ERROR_TYPE_ITEM;
+			}
+
+			/**
+			* src MAC address must be fully masked, and the
+			* dst MAC address must be fully matched (mask all ones).
+			*/
+			for (j = 0; j < ETHER_ADDR_LEN; j++) {
+				if (eth_mask->src.addr_bytes[j] ||
+					eth_mask->dst.addr_bytes[j] != 0xFF) {
+					memset(rule, 0,
+					sizeof(struct ixgbe_fdir_rule));
+					return RTE_FLOW_ERROR_TYPE_ITEM;
+				}
+			}
+
+			/* When no VLAN item follows, treat this as a full mask. */
+			rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+		}
+		/* If both spec and mask are NULL,
+		 * it means we don't care about ETH.
+		 * Do nothing.
+		 */
+
+		/**
+		 * Check if the next not void item is vlan or ipv4.
+		 * IPv6 is not supported.
+		*/
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (rule->mode == RTE_FDIR_MODE_PERFECT_MAC_VLAN) {
+			if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				return RTE_FLOW_ERROR_TYPE_ITEM;
+			}
+		} else {
+			if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				return RTE_FLOW_ERROR_TYPE_ITEM;
+			}
+		}
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one tag is not supported. */
+
+		/**
+		* Check if the next not void item is not vlan.
+		*/
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	/* Get the IP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_IPV4;
+
+		/**
+		 * Only care about src & dst addresses,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->b_mask = TRUE;
+		ipv4_mask =
+			(const struct rte_flow_item_ipv4 *)item->mask;
+		if (ipv4_mask->hdr.version_ihl ||
+		    ipv4_mask->hdr.type_of_service ||
+		    ipv4_mask->hdr.total_length ||
+		    ipv4_mask->hdr.packet_id ||
+		    ipv4_mask->hdr.fragment_offset ||
+		    ipv4_mask->hdr.time_to_live ||
+		    ipv4_mask->hdr.next_proto_id ||
+		    ipv4_mask->hdr.hdr_checksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			ipv4_spec =
+				(const struct rte_flow_item_ipv4 *)item->spec;
+			rule->ixgbe_fdir.formatted.dst_ip[0] =
+				ipv4_spec->hdr.dst_addr;
+			rule->ixgbe_fdir.formatted.src_ip[0] =
+				ipv4_spec->hdr.src_addr;
+		}
+
+		/**
+		 * Check if the next not void item is
+		 * TCP or UDP or SCTP or END.
+		 */
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type != RTE_FLOW_ITEM_TYPE_TCP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	/* Get the TCP info. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_TCP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_TCPV4;
+
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->b_mask = TRUE;
+		tcp_mask = (const struct rte_flow_item_tcp *)item->mask;
+		if (tcp_mask->hdr.sent_seq ||
+		    tcp_mask->hdr.recv_ack ||
+		    tcp_mask->hdr.data_off ||
+		    tcp_mask->hdr.tcp_flags ||
+		    tcp_mask->hdr.rx_win ||
+		    tcp_mask->hdr.cksum ||
+		    tcp_mask->hdr.tcp_urp) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->mask.src_port_mask = tcp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = tcp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			tcp_spec = (const struct rte_flow_item_tcp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				tcp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				tcp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the UDP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_UDPV4;
+
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->b_mask = TRUE;
+		udp_mask = (const struct rte_flow_item_udp *)item->mask;
+		if (udp_mask->hdr.dgram_len ||
+		    udp_mask->hdr.dgram_cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->mask.src_port_mask = udp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = udp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			udp_spec = (const struct rte_flow_item_udp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				udp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				udp_spec->hdr.dst_port;
+		}
+	}
+
+	/* Get the SCTP info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) {
+		/**
+		 * Set the flow type even if there's no content
+		 * as we must have a flow type.
+		 */
+		rule->ixgbe_fdir.formatted.flow_type =
+			IXGBE_ATR_FLOW_TYPE_SCTPV4;
+
+		/**
+		 * Only care about src & dst ports,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->b_mask = TRUE;
+		sctp_mask =
+			(const struct rte_flow_item_sctp *)item->mask;
+		if (sctp_mask->hdr.tag ||
+		    sctp_mask->hdr.cksum) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->mask.src_port_mask = sctp_mask->hdr.src_port;
+		rule->mask.dst_port_mask = sctp_mask->hdr.dst_port;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			sctp_spec =
+				(const struct rte_flow_item_sctp *)item->spec;
+			rule->ixgbe_fdir.formatted.src_port =
+				sctp_spec->hdr.src_port;
+			rule->ixgbe_fdir.formatted.dst_port =
+				sctp_spec->hdr.dst_port;
+		}
+	}
+
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* check if the next not void item is END */
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule);
+}
+
+#define NVGRE_PROTOCOL 0x6558
+
+/**
+ * Parse the rule to see if it is a VxLAN or NVGRE flow director rule.
+ * And get the flow director filter info as well.
+ */
+static enum rte_flow_error_type
+ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
+			       const struct rte_flow_item pattern[],
+			       const struct rte_flow_action actions[],
+			       struct ixgbe_fdir_rule *rule)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_eth *eth_mask;
+	const struct rte_flow_item_vlan *vlan_spec;
+	const struct rte_flow_item_vlan *vlan_mask;
+	uint32_t i, j;
+
+	/**
+	 * Some fields may not be provided. Set spec to 0 and mask to default
+	 * values, so we need not handle the missing fields later.
+	 */
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
+	rule->mask.vlan_tci_mask = 0;
+
+	/************************************************
+	 * parse pattern
+	 ************************************************/
+	i = 0;
+
+	/**
+	 * The first not void item should be
+	 * MAC or IPv4 or IPv6 or UDP or VxLAN or NVGRE.
+	 */
+	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+			  RTE_FLOW_ERROR_TYPE_ITEM_NUM);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
+	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ITEM;
+	}
+
+	rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL;
+
+	/* Skip MAC. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		/* Check if the next not void item is IPv4 or IPv6. */
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	/* Skip IP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+	    item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		/* Check if the next not void item is UDP or NVGRE. */
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type != RTE_FLOW_ITEM_TYPE_UDP &&
+		    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	/* Skip UDP. */
+	if (item->type == RTE_FLOW_ITEM_TYPE_UDP) {
+		/* Only used to describe the protocol stack. */
+		if (item->spec || item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		/* Check if the next not void item is VxLAN. */
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	/* Get the VxLAN info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_VXLAN;
+
+		/* Only care about VNI, others should be masked. */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		vxlan_mask =
+			(const struct rte_flow_item_vxlan *)item->mask;
+		if (vxlan_mask->flags) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		/* VNI must be totally masked or not at all. */
+		if ((vxlan_mask->vni[0] || vxlan_mask->vni[1]
+			|| vxlan_mask->vni[2]) &&
+		    ((vxlan_mask->vni[0] != 0xFF) ||
+			(vxlan_mask->vni[1] != 0xFF) ||
+				(vxlan_mask->vni[2] != 0xFF))) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		rte_memcpy(&rule->mask.tunnel_id_mask, vxlan_mask->vni,
+			RTE_DIM(vxlan_mask->vni));
+		rule->mask.tunnel_id_mask <<= 8;
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			vxlan_spec = (const struct rte_flow_item_vxlan *)
+					item->spec;
+			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
+				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
+			rule->ixgbe_fdir.formatted.tni_vni <<= 8;
+		}
+	}
+
+	/* Get the NVGRE info */
+	if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE) {
+		rule->ixgbe_fdir.formatted.tunnel_type =
+			RTE_FDIR_TUNNEL_TYPE_NVGRE;
+
+		/**
+		 * Only care about flags0, flags1, protocol and TNI,
+		 * others should be masked.
+		 */
+		if (!item->mask) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->b_mask = TRUE;
+
+		/* Tunnel type is always meaningful. */
+		rule->mask.tunnel_type_mask = 1;
+
+		nvgre_mask =
+			(const struct rte_flow_item_nvgre *)item->mask;
+		if (nvgre_mask->ver ||
+		    nvgre_mask->flow_id) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		if (!nvgre_mask->flags0 ||
+		    nvgre_mask->flags1 != 0x3 ||
+		    nvgre_mask->protocol != 0xFFFF) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		/* TNI must be totally masked or not at all. */
+		if (nvgre_mask->tni &&
+		    nvgre_mask->tni != 0xFFFFFF) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		/* tni is a 24-bit field */
+		rule->mask.tunnel_id_mask = rte_be_to_cpu_24(nvgre_mask->tni);
+		rule->mask.tunnel_id_mask =
+			rte_cpu_to_be_32(rule->mask.tunnel_id_mask);
+
+		if (item->spec) {
+			rule->b_spec = TRUE;
+			nvgre_spec =
+				(const struct rte_flow_item_nvgre *)item->spec;
+			if (nvgre_spec->flags0 ||
+			    nvgre_spec->flags1 != 2 ||
+			    nvgre_spec->protocol !=
+			    rte_cpu_to_be_16(NVGRE_PROTOCOL)) {
+				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+				return RTE_FLOW_ERROR_TYPE_ITEM;
+			}
+			/* tni is a 24-bit field */
+			rule->ixgbe_fdir.formatted.tni_vni =
+				rte_be_to_cpu_24(nvgre_spec->tni);
+			rule->ixgbe_fdir.formatted.tni_vni =
+				rte_cpu_to_be_32(
+					rule->ixgbe_fdir.formatted.tni_vni);
+		}
+	}
+
+	/* check if the next not void item is MAC */
+	i++;
+	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+			  RTE_FLOW_ERROR_TYPE_ITEM);
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ITEM;
+	}
+
+	/**
+	 * Only support vlan and dst MAC address,
+	 * others should be masked.
+	 */
+
+	if (!item->mask) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ITEM;
+	}
+	rule->b_mask = TRUE;
+	eth_mask = (const struct rte_flow_item_eth *)item->mask;
+
+	/* Ether type should be masked. */
+	if (eth_mask->type) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ITEM;
+	}
+
+	/* src MAC address should be masked. */
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		if (eth_mask->src.addr_bytes[j]) {
+			memset(rule, 0,
+			       sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+	rule->mask.mac_addr_byte_mask = 0;
+	for (j = 0; j < ETHER_ADDR_LEN; j++) {
+		/* It's a per byte mask. */
+		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
+			rule->mask.mac_addr_byte_mask |= 0x1 << j;
+		} else if (eth_mask->dst.addr_bytes[j]) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	/* When no VLAN item follows, treat this as a full mask. */
+	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
+
+	if (item->spec) {
+		rule->b_spec = TRUE;
+		eth_spec = (const struct rte_flow_item_eth *)item->spec;
+
+		/* Get the dst MAC. */
+		for (j = 0; j < ETHER_ADDR_LEN; j++) {
+			rule->ixgbe_fdir.formatted.inner_mac[j] =
+				eth_spec->dst.addr_bytes[j];
+		}
+	}
+
+	/**
+	* Check if the next not void item is vlan or ipv4.
+	* IPv6 is not supported.
+	*/
+	i++;
+	PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+		RTE_FLOW_ERROR_TYPE_ITEM);
+	if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
+	    (item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
+		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+		return RTE_FLOW_ERROR_TYPE_ITEM;
+	}
+
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		if (!(item->spec && item->mask)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		vlan_spec = (const struct rte_flow_item_vlan *)item->spec;
+		vlan_mask = (const struct rte_flow_item_vlan *)item->mask;
+
+		if (vlan_spec->tpid != rte_cpu_to_be_16(ETHER_TYPE_VLAN)) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+
+		rule->ixgbe_fdir.formatted.vlan_id = vlan_spec->tci;
+
+		if (vlan_mask->tpid != (uint16_t)~0U) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		rule->mask.vlan_tci_mask = vlan_mask->tci;
+		/* More than one tag is not supported. */
+
+		/**
+		* Check if the next not void item is not vlan.
+		*/
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+				  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		} else if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+		/* check if the next not void item is END */
+		i++;
+		PATTERN_SKIP_VOID(rule, struct ixgbe_fdir_rule,
+			  RTE_FLOW_ERROR_TYPE_ITEM);
+		if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+			return RTE_FLOW_ERROR_TYPE_ITEM;
+		}
+	}
+
+	/**
+	 * If there is no VLAN item, it means we don't care about the VLAN.
+	 * Do nothing.
+	 */
+
+	return ixgbe_parse_fdir_act_attr(attr, actions, rule);
+}
+
+static enum rte_flow_error_type
+ixgbe_parse_fdir_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct ixgbe_fdir_rule *rule)
+{
+	int ret;
+
+	ret = ixgbe_parse_fdir_filter_normal(attr, pattern, actions, rule);
+
+	if (!ret)
+		return 0;
+
+	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern, actions, rule);
+
+	return ret;
+}
 
 /**
  * Parse the rule to see if it is a L2 tunnel rule.
@@ -8756,6 +9574,7 @@  ixgbe_flow_rule_validate(__rte_unused struct rte_eth_dev *dev,
 	struct rte_eth_ethertype_filter ethertype_filter;
 	struct rte_eth_syn_filter syn_filter;
 	struct rte_eth_l2_tunnel_conf l2_tn_filter;
+	struct ixgbe_fdir_rule fdir_rule;
 
 	memset(&ntuple_filter, 0, sizeof(struct rte_eth_ntuple_filter));
 	ret = ixgbe_parse_ntuple_filter(attr, pattern, actions, &ntuple_filter);
@@ -8772,6 +9591,10 @@  ixgbe_flow_rule_validate(__rte_unused struct rte_eth_dev *dev,
 	ret = cons_parse_syn_filter(attr, pattern, actions, &syn_filter);
 	if (!ret)
 		return RTE_FLOW_ERROR_TYPE_NONE;
+	memset(&fdir_rule, 0, sizeof(struct ixgbe_fdir_rule));
+	ret = ixgbe_parse_fdir_filter(attr, pattern, actions, &fdir_rule);
+	if (!ret)
+		return RTE_FLOW_ERROR_TYPE_NONE;
 
 	memset(&l2_tn_filter, 0, sizeof(struct rte_eth_l2_tunnel_conf));
 	ret = cons_parse_l2_tn_filter(attr, pattern, actions, &l2_tn_filter);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4aa5fd5..d23ad67 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -162,6 +162,17 @@  struct ixgbe_fdir_filter {
 /* list of fdir filters */
 TAILQ_HEAD(ixgbe_fdir_filter_list, ixgbe_fdir_filter);
 
+struct ixgbe_fdir_rule {
+	struct ixgbe_hw_fdir_mask mask;
+	union ixgbe_atr_input ixgbe_fdir; /* key of fdir filter */
+	bool b_spec; /* If TRUE, ixgbe_fdir, fdirflags, queue have meaning. */
+	bool b_mask; /* If TRUE, mask has meaning. */
+	enum rte_fdir_mode mode; /* IP, MAC VLAN, Tunnel */
+	uint32_t fdirflags; /* drop or forward */
+	uint32_t soft_id; /* a unique value for this rule */
+	uint8_t queue; /* assigned rx queue */
+};
+
 struct ixgbe_hw_fdir_info {
 	struct ixgbe_hw_fdir_mask mask;
 	uint8_t     flex_bytes_offset;
@@ -177,6 +188,7 @@  struct ixgbe_hw_fdir_info {
 	/* store the pointers of the filters, index is the hash value. */
 	struct ixgbe_fdir_filter **hash_map;
 	struct rte_hash *hash_handle; /* cuckoo hash handler */
+	bool mask_added; /* If already got mask from consistent filter */
 };
 
 /* structure for interrupt relative data */
@@ -473,6 +485,10 @@  bool ixgbe_rss_update_sp(enum ixgbe_mac_type mac_type);
  * Flow director function prototypes
  */
 int ixgbe_fdir_configure(struct rte_eth_dev *dev);
+int ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+int ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			      struct ixgbe_fdir_rule *rule,
+			      bool del, bool update);
 
 void ixgbe_configure_dcb(struct rte_eth_dev *dev);
 
@@ -553,9 +569,23 @@  ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
 	return idx;
 }
 
+
+#define RTE_FLOW_ITEM_TYPE_NVGRE 0xF1
 #define RTE_FLOW_ITEM_TYPE_E_TAG 0xF2
 
-/* E-tag Header.*/
+
+struct rte_flow_item_nvgre {
+	uint32_t flags0:1; /**< 0 */
+	uint32_t rsvd1:1; /**< 1 bit not defined */
+	uint32_t flags1:2; /**< 2 bits, 1 0 */
+	uint32_t rsvd0:9; /**< Reserved0 */
+	uint32_t ver:3; /**< version */
+	uint32_t protocol:16; /**< protocol type, 0x6558 */
+	uint32_t tni:24; /**< tenant network ID or virtual subnet ID */
+	uint32_t flow_id:8; /**< flow ID or Reserved */
+};
+
+/* E-tag Header. Need to move to RTE */
 struct rte_flow_item_e_tag {
 	struct ether_addr dst; /**< Destination MAC. */
 	struct ether_addr src; /**< Source MAC. */
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 7097dca..0c6464f 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -112,10 +112,8 @@ 
 static int fdir_erase_filter_82599(struct ixgbe_hw *hw, uint32_t fdirhash);
 static int fdir_set_input_mask(struct rte_eth_dev *dev,
 			       const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask);
-static int fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-				    const struct rte_eth_fdir_masks *input_mask);
+static int fdir_set_input_mask_82599(struct rte_eth_dev *dev);
+static int fdir_set_input_mask_x550(struct rte_eth_dev *dev);
 static int ixgbe_set_fdir_flex_conf(struct rte_eth_dev *dev,
 		const struct rte_eth_fdir_flex_conf *conf, uint32_t *fdirctrl);
 static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
@@ -295,8 +293,7 @@  reverse_fdir_bitmasks(uint16_t hi_dword, uint16_t lo_dword)
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_82599(struct rte_eth_dev *dev,
-		const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_82599(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -308,8 +305,6 @@  fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 | IXGBE_FDIRM_FLEX;
 	uint32_t fdirtcpm;  /* TCP source and destination port masks. */
 	uint32_t fdiripv6m; /* IPv6 source and destination masks. */
-	uint16_t dst_ipv6m = 0;
-	uint16_t src_ipv6m = 0;
 	volatile uint32_t *reg;
 
 	PMD_INIT_FUNC_TRACE();
@@ -320,31 +315,30 @@  fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	 * a VLAN of 0 is unspecified, so mask that out as well.  L4type
 	 * cannot be masked out in this implementation.
 	 */
-	if (input_mask->dst_port_mask == 0 && input_mask->src_port_mask == 0)
+	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0)
 		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
 		fdirm |= IXGBE_FDIRM_L4P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
 	/* store the TCP/UDP port masks, bit reversed from port layout */
 	fdirtcpm = reverse_fdir_bitmasks(
-			rte_be_to_cpu_16(input_mask->dst_port_mask),
-			rte_be_to_cpu_16(input_mask->src_port_mask));
+			rte_be_to_cpu_16(info->mask.dst_port_mask),
+			rte_be_to_cpu_16(info->mask.src_port_mask));
 
 	/* write all the same so that UDP, TCP and SCTP use the same mask
 	 * (little-endian)
@@ -352,30 +346,23 @@  fdir_set_input_mask_82599(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRTCPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRUDPM, ~fdirtcpm);
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
-	info->mask.src_port_mask = input_mask->src_port_mask;
-	info->mask.dst_port_mask = input_mask->dst_port_mask;
 
 	/* Store source and destination IPv4 masks (big-endian),
 	 * can not use IXGBE_WRITE_REG.
 	 */
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRSIP4M);
-	*reg = ~(input_mask->ipv4_mask.src_ip);
+	*reg = ~(info->mask.src_ipv4_mask);
 	reg = IXGBE_PCI_REG_ADDR(hw, IXGBE_FDIRDIP4M);
-	*reg = ~(input_mask->ipv4_mask.dst_ip);
-	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
-	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	*reg = ~(info->mask.dst_ipv4_mask);
 
 	if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_SIGNATURE) {
 		/*
 		 * Store source and destination IPv6 masks (bit reversed)
 		 */
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
-		IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
-		fdiripv6m = (dst_ipv6m << 16) | src_ipv6m;
+		fdiripv6m = (info->mask.dst_ipv6_mask << 16) |
+			    info->mask.src_ipv6_mask;
 
 		IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, ~fdiripv6m);
-		info->mask.src_ipv6_mask = src_ipv6m;
-		info->mask.dst_ipv6_mask = dst_ipv6m;
 	}
 
 	return IXGBE_SUCCESS;
@@ -386,8 +373,7 @@  fdir_set_input_mask_82599(struct rte_eth_dev *dev,
  * but makes use of the rte_fdir_masks structure to see which bits to set.
  */
 static int
-fdir_set_input_mask_x550(struct rte_eth_dev *dev,
-			 const struct rte_eth_fdir_masks *input_mask)
+fdir_set_input_mask_x550(struct rte_eth_dev *dev)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct ixgbe_hw_fdir_info *info =
@@ -410,20 +396,19 @@  fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 	/* some bits must be set for mac vlan or tunnel mode */
 	fdirm |= IXGBE_FDIRM_L4P | IXGBE_FDIRM_L3P;
 
-	if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
+	if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0x0FFF))
 		/* mask VLAN Priority */
 		fdirm |= IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask == rte_cpu_to_be_16(0xE000))
+	else if (info->mask.vlan_tci_mask == rte_cpu_to_be_16(0xE000))
 		/* mask VLAN ID */
 		fdirm |= IXGBE_FDIRM_VLANID;
-	else if (input_mask->vlan_tci_mask == 0)
+	else if (info->mask.vlan_tci_mask == 0)
 		/* mask VLAN ID and Priority */
 		fdirm |= IXGBE_FDIRM_VLANID | IXGBE_FDIRM_VLANP;
-	else if (input_mask->vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
+	else if (info->mask.vlan_tci_mask != rte_cpu_to_be_16(0xEFFF)) {
 		PMD_INIT_LOG(ERR, "invalid vlan_tci_mask");
 		return -EINVAL;
 	}
-	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
 
@@ -434,12 +419,11 @@  fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 				IXGBE_FDIRIP6M_TNI_VNI;
 
 	if (mode == RTE_FDIR_MODE_PERFECT_TUNNEL) {
-		mac_mask = input_mask->mac_addr_byte_mask;
+		mac_mask = info->mask.mac_addr_byte_mask;
 		fdiripv6m |= (mac_mask << IXGBE_FDIRIP6M_INNER_MAC_SHIFT)
 				& IXGBE_FDIRIP6M_INNER_MAC;
-		info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
 
-		switch (input_mask->tunnel_type_mask) {
+		switch (info->mask.tunnel_type_mask) {
 		case 0:
 			/* Mask turnnel type */
 			fdiripv6m |= IXGBE_FDIRIP6M_TUNNEL_TYPE;
@@ -450,10 +434,8 @@  fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_type_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_type_mask =
-			input_mask->tunnel_type_mask;
 
-		switch (rte_be_to_cpu_32(input_mask->tunnel_id_mask)) {
+		switch (rte_be_to_cpu_32(info->mask.tunnel_id_mask)) {
 		case 0x0:
 			/* Mask vxlan id */
 			fdiripv6m |= IXGBE_FDIRIP6M_TNI_VNI;
@@ -467,8 +449,6 @@  fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 			PMD_INIT_LOG(ERR, "invalid tunnel_id_mask");
 			return -EINVAL;
 		}
-		info->mask.tunnel_id_mask =
-			input_mask->tunnel_id_mask;
 	}
 
 	IXGBE_WRITE_REG(hw, IXGBE_FDIRIP6M, fdiripv6m);
@@ -482,22 +462,90 @@  fdir_set_input_mask_x550(struct rte_eth_dev *dev,
 }
 
 static int
-fdir_set_input_mask(struct rte_eth_dev *dev,
-		    const struct rte_eth_fdir_masks *input_mask)
+ixgbe_fdir_store_input_mask_82599(struct rte_eth_dev *dev,
+				  const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+	uint16_t dst_ipv6m = 0;
+	uint16_t src_ipv6m = 0;
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.src_port_mask = input_mask->src_port_mask;
+	info->mask.dst_port_mask = input_mask->dst_port_mask;
+	info->mask.src_ipv4_mask = input_mask->ipv4_mask.src_ip;
+	info->mask.dst_ipv4_mask = input_mask->ipv4_mask.dst_ip;
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.src_ip, src_ipv6m);
+	IPV6_ADDR_TO_MASK(input_mask->ipv6_mask.dst_ip, dst_ipv6m);
+	info->mask.src_ipv6_mask = src_ipv6m;
+	info->mask.dst_ipv6_mask = dst_ipv6m;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask_x550(struct rte_eth_dev *dev,
+				 const struct rte_eth_fdir_masks *input_mask)
+{
+	struct ixgbe_hw_fdir_info *info =
+		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
+
+	memset(&info->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
+	info->mask.vlan_tci_mask = input_mask->vlan_tci_mask;
+	info->mask.mac_addr_byte_mask = input_mask->mac_addr_byte_mask;
+	info->mask.tunnel_type_mask = input_mask->tunnel_type_mask;
+	info->mask.tunnel_id_mask = input_mask->tunnel_id_mask;
+
+	return IXGBE_SUCCESS;
+}
+
+static int
+ixgbe_fdir_store_input_mask(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_masks *input_mask)
+{
+	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
+
+	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
+	    mode <= RTE_FDIR_MODE_PERFECT)
+		return ixgbe_fdir_store_input_mask_82599(dev, input_mask);
+	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
+		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
+		return ixgbe_fdir_store_input_mask_x550(dev, input_mask);
+
+	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
+	return -ENOTSUP;
+}
+
+int
+ixgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 {
 	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
 
 	if (mode >= RTE_FDIR_MODE_SIGNATURE &&
 	    mode <= RTE_FDIR_MODE_PERFECT)
-		return fdir_set_input_mask_82599(dev, input_mask);
+		return fdir_set_input_mask_82599(dev);
 	else if (mode >= RTE_FDIR_MODE_PERFECT_MAC_VLAN &&
 		 mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
-		return fdir_set_input_mask_x550(dev, input_mask);
+		return fdir_set_input_mask_x550(dev);
 
 	PMD_DRV_LOG(ERR, "Not supported fdir mode - %d!", mode);
 	return -ENOTSUP;
 }
 
+static int
+fdir_set_input_mask(struct rte_eth_dev *dev,
+		    const struct rte_eth_fdir_masks *input_mask)
+{
+	int ret;
+
+	ret = ixgbe_fdir_store_input_mask(dev, input_mask);
+	if (ret)
+		return ret;
+
+	return ixgbe_fdir_set_input_mask(dev);
+}
+
 /*
  * ixgbe_check_fdir_flex_conf -check if the flex payload and mask configuration
  * arguments are valid
@@ -1135,23 +1183,40 @@  ixgbe_remove_fdir_filter(struct ixgbe_hw_fdir_info *fdir_info,
 	return 0;
 }
 
-/*
- * ixgbe_add_del_fdir_filter - add or remove a flow diretor filter.
- * @dev: pointer to the structure rte_eth_dev
- * @fdir_filter: fdir filter entry
- * @del: 1 - delete, 0 - add
- * @update: 1 - update
- */
 static int
-ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
-			  const struct rte_eth_fdir_filter *fdir_filter,
+ixgbe_interpret_fdir_filter(struct rte_eth_dev *dev,
+			    const struct rte_eth_fdir_filter *fdir_filter,
+			    struct ixgbe_fdir_rule *rule)
+{
+	enum rte_fdir_mode fdir_mode = dev->data->dev_conf.fdir_conf.mode;
+	int err;
+
+	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
+
+	err = ixgbe_fdir_filter_to_atr_input(fdir_filter,
+					     &rule->ixgbe_fdir,
+					     fdir_mode);
+	if (err)
+		return err;
+
+	rule->mode = fdir_mode;
+	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT)
+		rule->fdirflags = IXGBE_FDIRCMD_DROP;
+	rule->queue = fdir_filter->action.rx_queue;
+	rule->soft_id = fdir_filter->soft_id;
+
+	return 0;
+}
+
+int
+ixgbe_fdir_filter_program(struct rte_eth_dev *dev,
+			  struct ixgbe_fdir_rule *rule,
 			  bool del,
 			  bool update)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t fdircmd_flags;
 	uint32_t fdirhash;
-	union ixgbe_atr_input input;
 	uint8_t queue;
 	bool is_perfect = FALSE;
 	int err;
@@ -1161,7 +1226,8 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	struct ixgbe_fdir_filter *node;
 	bool add_node = FALSE;
 
-	if (fdir_mode == RTE_FDIR_MODE_NONE)
+	if (fdir_mode == RTE_FDIR_MODE_NONE ||
+	    fdir_mode != rule->mode)
 		return -ENOTSUP;
 
 	/*
@@ -1174,7 +1240,7 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    (hw->mac.type == ixgbe_mac_X550 ||
 	     hw->mac.type == ixgbe_mac_X550EM_x ||
 	     hw->mac.type == ixgbe_mac_X550EM_a) &&
-	    (fdir_filter->input.flow_type ==
+	    (rule->ixgbe_fdir.formatted.flow_type ==
 	     RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) &&
 	    (info->mask.src_port_mask != 0 ||
 	     info->mask.dst_port_mask != 0)) {
@@ -1188,29 +1254,23 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	    fdir_mode <= RTE_FDIR_MODE_PERFECT_TUNNEL)
 		is_perfect = TRUE;
 
-	memset(&input, 0, sizeof(input));
-
-	err = ixgbe_fdir_filter_to_atr_input(fdir_filter, &input,
-					     fdir_mode);
-	if (err)
-		return err;
-
 	if (is_perfect) {
-		if (input.formatted.flow_type & IXGBE_ATR_L4TYPE_IPV6_MASK) {
+		if (rule->ixgbe_fdir.formatted.flow_type &
+		    IXGBE_ATR_L4TYPE_IPV6_MASK) {
 			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
 				    " perfect mode!");
 			return -ENOTSUP;
 		}
-		fdirhash = atr_compute_perfect_hash_82599(&input,
+		fdirhash = atr_compute_perfect_hash_82599(&rule->ixgbe_fdir,
 							  dev->data->dev_conf.fdir_conf.pballoc);
-		fdirhash |= fdir_filter->soft_id <<
+		fdirhash |= rule->soft_id <<
 			IXGBE_FDIRHASH_SIG_SW_INDEX_SHIFT;
 	} else
-		fdirhash = atr_compute_sig_hash_82599(&input,
+		fdirhash = atr_compute_sig_hash_82599(&rule->ixgbe_fdir,
 						      dev->data->dev_conf.fdir_conf.pballoc);
 
 	if (del) {
-		err = ixgbe_remove_fdir_filter(info, &input);
+		err = ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 		if (err < 0)
 			return err;
 
@@ -1223,7 +1283,7 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 	/* add or update an fdir filter*/
 	fdircmd_flags = (update) ? IXGBE_FDIRCMD_FILTER_UPDATE : 0;
-	if (fdir_filter->action.behavior == RTE_ETH_FDIR_REJECT) {
+	if (rule->fdirflags & IXGBE_FDIRCMD_DROP) {
 		if (is_perfect) {
 			queue = dev->data->dev_conf.fdir_conf.drop_queue;
 			fdircmd_flags |= IXGBE_FDIRCMD_DROP;
@@ -1232,13 +1292,12 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 				    " signature mode.");
 			return -EINVAL;
 		}
-	} else if (fdir_filter->action.behavior == RTE_ETH_FDIR_ACCEPT &&
-		   fdir_filter->action.rx_queue < IXGBE_MAX_RX_QUEUE_NUM)
-		queue = (uint8_t)fdir_filter->action.rx_queue;
+	} else if (rule->queue < IXGBE_MAX_RX_QUEUE_NUM)
+		queue = (uint8_t)rule->queue;
 	else
 		return -EINVAL;
 
-	node = ixgbe_fdir_filter_lookup(info, &input);
+	node = ixgbe_fdir_filter_lookup(info, &rule->ixgbe_fdir);
 	if (node) {
 		if (update) {
 			node->fdirflags = fdircmd_flags;
@@ -1256,7 +1315,7 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 		if (!node)
 			return -ENOMEM;
 		(void)rte_memcpy(&node->ixgbe_fdir,
-				 &input,
+				 &rule->ixgbe_fdir,
 				 sizeof(union ixgbe_atr_input));
 		node->fdirflags = fdircmd_flags;
 		node->fdirhash = fdirhash;
@@ -1270,18 +1329,19 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	}
 
 	if (is_perfect) {
-		err = fdir_write_perfect_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash,
-						      fdir_mode);
+		err = fdir_write_perfect_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash, fdir_mode);
 	} else {
-		err = fdir_add_signature_filter_82599(hw, &input, queue,
-						      fdircmd_flags, fdirhash);
+		err = fdir_add_signature_filter_82599(hw, &rule->ixgbe_fdir,
+						      queue, fdircmd_flags,
+						      fdirhash);
 	}
 	if (err < 0) {
 		PMD_DRV_LOG(ERR, "Fail to add FDIR filter!");
 
 		if (add_node)
-			(void)ixgbe_remove_fdir_filter(info, &input);
+			(void)ixgbe_remove_fdir_filter(info, &rule->ixgbe_fdir);
 	} else {
 		PMD_DRV_LOG(DEBUG, "Success to add FDIR filter");
 	}
@@ -1289,6 +1349,29 @@  ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
 	return err;
 }
 
+/* ixgbe_add_del_fdir_filter - add or remove a flow director filter.
+ * @dev: pointer to the structure rte_eth_dev
+ * @fdir_filter: fdir filter entry
+ * @del: 1 - delete, 0 - add
+ * @update: 1 - update
+ */
+static int
+ixgbe_add_del_fdir_filter(struct rte_eth_dev *dev,
+			  const struct rte_eth_fdir_filter *fdir_filter,
+			  bool del,
+			  bool update)
+{
+	struct ixgbe_fdir_rule rule;
+	int err;
+
+	err = ixgbe_interpret_fdir_filter(dev, fdir_filter, &rule);
+
+	if (err)
+		return err;
+
+	return ixgbe_fdir_filter_program(dev, &rule, del, update);
+}
+
 static int
 ixgbe_fdir_flush(struct rte_eth_dev *dev)
 {