[0/7] net/mlx5: support for flow action on VLAN header

Message ID cover.1565072905.git.motih@mellanox.com (mailing list archive)
Message

Moti Haimovsky Aug. 6, 2019, 8:24 a.m. UTC
  Support for VLAN actions is implemented in librte_ethdev and in the
test-pmd application, based on the generic flow API [1].
These actions conform to the VLAN actions defined in
the OpenFlow Switch Specification [2].

rte_flow defines the following VLAN actions:
 1. OF_POP_VLAN
    Pop the outermost VLAN header from the packet.
 2. OF_PUSH_VLAN
    Push a new VLAN header onto the packet.
 3. OF_SET_VLAN_VID
    Set the VLAN ID of the outermost VLAN tag.
 4. OF_SET_VLAN_PCP
    Set the 3-bit priority field of the outermost VLAN tag.

This series of patches adds support for those VLAN actions
to the mlx5 PMD using the Direct Verbs interface.
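
For illustration only (not part of this series), a minimal sketch of how
these actions can be combined through the public rte_flow C API to push a
tagged header on egress; the helper name and the port, VLAN ID and priority
values below are arbitrary examples:

/* Illustrative sketch only - values are arbitrary examples. */
#include <rte_byteorder.h>
#include <rte_flow.h>

static struct rte_flow *
push_vlan_example(uint16_t port_id, struct rte_flow_error *error)
{
        struct rte_flow_attr attr = { .egress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_of_push_vlan push = {
                .ethertype = RTE_BE16(0x8100),  /* 802.1Q TPID */
        };
        struct rte_flow_action_of_set_vlan_vid vid = {
                .vlan_vid = RTE_BE16(100),      /* example VLAN ID */
        };
        struct rte_flow_action_of_set_vlan_pcp pcp = {
                .vlan_pcp = 3,                  /* example priority */
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push },
                { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
                { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP, .conf = &pcp },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}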

Moti Haimovsky (7):
  net/mlx5: support for an action search in a list
  net/mlx5: add VLAN push/pop DR commands to glue
  net/mlx5: support pop flow action on VLAN header
  net/mlx5: support push flow action on VLAN header
  net/mlx5: support modify VLAN priority on VLAN hdr
  net/mlx5: supp modify VLAN ID on new VLAN header
  net/mlx5: supp modify VLAN ID on existing VLAN hdr

 drivers/net/mlx5/Makefile       |   5 +
 drivers/net/mlx5/meson.build    |   2 +
 drivers/net/mlx5/mlx5.c         |   9 +
 drivers/net/mlx5/mlx5.h         |   3 +
 drivers/net/mlx5/mlx5_flow.c    |  23 ++
 drivers/net/mlx5/mlx5_flow.h    |  27 ++-
 drivers/net/mlx5/mlx5_flow_dv.c | 521 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_glue.c    |  29 +++
 drivers/net/mlx5/mlx5_glue.h    |   6 +
 drivers/net/mlx5/mlx5_prm.h     |   1 +
 10 files changed, 623 insertions(+), 3 deletions(-)
  

Comments

Hideyuki Yamashita Oct. 1, 2019, 12:17 p.m. UTC | #1
Hello Moti,

I have some questions on the patch.
Just want to know how to use it.

Q1. Is my understanding correct that this patch series will be reflected in
19.11 if it is approved?

Q2. Which action should I specify when I want to insert a VLAN tag
into a non-VLAN frame?

OF_PUSH_VLAN, OF_SET_VLAN_VID, and OF_SET_VLAN_PCP?

Q3. Is it possible to strip the VLAN tag when the NIC receives a VLAN-tagged
frame from outside the host?

Q4. Is it possible to add a VLAN tag to a non-VLAN frame when
sending packets outside the host?

Q5. Are there any restrictions on combining these with other ACTIONS like QUEUE?

Q6. Is it possible to apply rte_flow actions to a specific Tx queue
of a physical NIC?
(e.g. a VM connects with PHY:0 using Tx queue index 1; I want
to add VLAN 101 to the traffic from the VM to PHY:0. Is that possible?)

Thanks in advance!

BR,
Hideyuki Yamashita
NTT TechnoCross

> VLAN actions support is implemented in librte_ethdev, and in
> test-pmd application, based on [1] Generic flow API.
> These actions conform to the VLAN actions defined in
> [2] the OpenFlow Switch Specification.
> 
> rte_flow defines the following VLAN actions:
>  1. OF_POP_VLAN
>     Pop the outer-most VLAN header from the packet.
>  2. OF_PUSH_VLAN
>     Push a new VLAN header onto the packet.
>  3. OF_SET_VLAN_VID
>     Sets the ID of the outermost VLAN tag.
>  4. OF_SET_VLAN_PCP
>     Sets the 3-bit priority field of the outermost VLAN tag.
> 
> This series of patches adds support for those VLAN actions
> to the mlx5 PMD using the Direct Verbs interface.
> 
> Moti Haimovsky (7):
>   net/mlx5: support for an action search in a list
>   net/mlx5: add VLAN push/pop DR commands to glue
>   net/mlx5: support pop flow action on VLAN header
>   net/mlx5: support push flow action on VLAN header
>   net/mlx5: support modify VLAN priority on VLAN hdr
>   net/mlx5: supp modify VLAN ID on new VLAN header
>   net/mlx5: supp modify VLAN ID on existing VLAN hdr
> 
>  drivers/net/mlx5/Makefile       |   5 +
>  drivers/net/mlx5/meson.build    |   2 +
>  drivers/net/mlx5/mlx5.c         |   9 +
>  drivers/net/mlx5/mlx5.h         |   3 +
>  drivers/net/mlx5/mlx5_flow.c    |  23 ++
>  drivers/net/mlx5/mlx5_flow.h    |  27 ++-
>  drivers/net/mlx5/mlx5_flow_dv.c | 521 ++++++++++++++++++++++++++++++++++++++++
>  drivers/net/mlx5/mlx5_glue.c    |  29 +++
>  drivers/net/mlx5/mlx5_glue.h    |   6 +
>  drivers/net/mlx5/mlx5_prm.h     |   1 +
>  10 files changed, 623 insertions(+), 3 deletions(-)
> 
> -- 
> 1.8.3.1
  
Hideyuki Yamashita Oct. 4, 2019, 10:35 a.m. UTC | #2
Can somebody (Mellanox guys?) help me out?

> Hello Moti,
> 
> I have some questions on the patch.
> Just want to know how to use it.
> 
> Q1. Is it correct understanding that the patch will be reflected in
> 19.11 if it is approved?
> 
> Q2.Which action should I specify when I want to insert VLAN tag
> to non-VLAN frame?
> 
> OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?
> 
> Q3. Is it possible to detag VLAN when it receives VLAN tagged 
> frame from outside of the host?
> 
> Q4. Is it possible to entag VLAN to non-VLAN frame when 
> it sends packet to outside of host?
> 
> Q5.Are there any restriction to conbime other ACTIONS like QUEUE?
> 
> Q6. Is it possible to apply rte_flow actions for specified tx queue 
> of physical NIC?
> (e.g. VM connect with PHY:0 using tx queue index:1, I want
> to entag VLAN 101 to the traffic from VM to PHY:0 is it possible?)
> 
> Thanks in advance!
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > VLAN actions support is implemented in librte_ethdev, and in
> > test-pmd application, based on [1] Generic flow API.
> > These actions conform to the VLAN actions defined in
> > [2] the OpenFlow Switch Specification.
> > 
> > rte_flow defines the following VLAN actions:
> >  1. OF_POP_VLAN
> >     Pop the outer-most VLAN header from the packet.
> >  2. OF_PUSH_VLAN
> >     Push a new VLAN header onto the packet.
> >  3. OF_SET_VLAN_VID
> >     Sets the ID of the outermost VLAN tag.
> >  4. OF_SET_VLAN_PCP
> >     Sets the 3-bit priority field of the outermost VLAN tag.
> > 
> > This series of patches adds support for those VLAN actions
> > to the mlx5 PMD using the Direct Verbs interface.
> > 
> > Moti Haimovsky (7):
> >   net/mlx5: support for an action search in a list
> >   net/mlx5: add VLAN push/pop DR commands to glue
> >   net/mlx5: support pop flow action on VLAN header
> >   net/mlx5: support push flow action on VLAN header
> >   net/mlx5: support modify VLAN priority on VLAN hdr
> >   net/mlx5: supp modify VLAN ID on new VLAN header
> >   net/mlx5: supp modify VLAN ID on existing VLAN hdr
> > 
> >  drivers/net/mlx5/Makefile       |   5 +
> >  drivers/net/mlx5/meson.build    |   2 +
> >  drivers/net/mlx5/mlx5.c         |   9 +
> >  drivers/net/mlx5/mlx5.h         |   3 +
> >  drivers/net/mlx5/mlx5_flow.c    |  23 ++
> >  drivers/net/mlx5/mlx5_flow.h    |  27 ++-
> >  drivers/net/mlx5/mlx5_flow_dv.c | 521 ++++++++++++++++++++++++++++++++++++++++
> >  drivers/net/mlx5/mlx5_glue.c    |  29 +++
> >  drivers/net/mlx5/mlx5_glue.h    |   6 +
> >  drivers/net/mlx5/mlx5_prm.h     |   1 +
> >  10 files changed, 623 insertions(+), 3 deletions(-)
> > 
> > -- 
> > 1.8.3.1
>
  
Slava Ovsiienko Oct. 4, 2019, 10:51 a.m. UTC | #3
> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Friday, October 4, 2019 13:35
> To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Can somebody (Mellanox guys?) help me out?

Hi, Hideyuki

I'm sorry, there are long holidays in IL, so let me try to answer.

> 
> > Hello Moti,
> >
> > I have some questions on the patch.
> > Just want to know how to use it.
> >
> > Q1. Is it correct understanding that the patch will be reflected in
> > 19.11 if it is approved?

Yes, it is merged and should be reflected.

> >
> > Q2.Which action should I specify when I want to insert VLAN tag to
> > non-VLAN frame?
> >
> > OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?

All of them: OF_PUSH_VLAN inserts the VLAN header, while OF_SET_VLAN_VID and
OF_SET_VLAN_PCP fill its fields with the appropriate values.
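(As an untested illustration in testpmd syntax, the three actions would go into
a single rule, something along the lines of: flow create 0 egress pattern eth /
end actions of_push_vlan ethertype 0x8100 / of_set_vlan_vid vlan_vid 100 /
of_set_vlan_pcp vlan_pcp 3 / end; the direction and group attributes may need
adjusting for a particular setup.)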

> >
> > Q3. Is it possible to detag VLAN when it receives VLAN tagged frame
> > from outside of the host?
Do you mean some complex configuration with multiple VMs and the E-Switch
feature engaged? In any case, there are multiple ways to strip (untag) the VLAN header:
- with E-Switch rules (including a match on the specified port)
- with local port rules
- by stripping the VLAN in the Rx queue (see the sketch below)
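
As a sketch of the last option (a standard ethdev offload, nothing specific to
this patch series; the helper name is mine), VLAN stripping can be requested
when configuring the port, roughly like this:

/* Sketch only: enable HW VLAN stripping on the Rx side of a port. */
#include <rte_ethdev.h>

static int
enable_rx_vlan_strip(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf = { 0 };

        /* Capability should be checked against dev_info.rx_offload_capa. */
        conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
        /* The stripped tag is then reported in mbuf->vlan_tci with
         * PKT_RX_VLAN_STRIPPED set in mbuf->ol_flags.
         */
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}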

> >
> > Q4. Is it possible to entag VLAN to non-VLAN frame when it sends
> > packet to outside of host?
Yes.

> >
> > Q5.Are there any restriction to conbime other ACTIONS like QUEUE?
There should not be. The QUEUE action is in the Rx NIC namespace, and VLAN POP is supported there.

> >
> > Q6. Is it possible to apply rte_flow actions for specified tx queue of
> > physical NIC?
> > (e.g. VM connect with PHY:0 using tx queue index:1, I want to entag
> > VLAN 101 to the traffic from VM to PHY:0 is it possible?)
Not directly; there is no flow item to match on a specific Tx queue.

If setting the VLAN on a specific Tx queue is desired, we have two options
(a sketch of the first one follows below):

- engage the Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and provide the VLAN with
 each packet being passed to tx_burst

- engage the DEV_TX_OFFLOAD_MATCH_METADATA feature, and set specific
metadata for all packets on the specific queue. Then rules matching on this metadata
may be inserted.
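
Rough sketch of the first option (illustrative only, the helper names are
mine): request the offload when configuring the port, then mark each mbuf
sent on the chosen queue before calling tx_burst:

/* Sketch only: per-packet VLAN insertion via DEV_TX_OFFLOAD_VLAN_INSERT. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
configure_with_vlan_insert(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf = { 0 };

        /* Capability should be checked against dev_info.tx_offload_capa. */
        conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

/* Datapath: tag every packet sent on the chosen queue with VLAN 101. */
static uint16_t
send_with_vlan(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint16_t i;

        for (i = 0; i < nb_pkts; i++) {
                pkts[i]->vlan_tci = 101;          /* VID 101, PCP 0 */
                pkts[i]->ol_flags |= PKT_TX_VLAN; /* HW inserts the tag */
        }
        return rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);
}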

[snip]

With best regards, Slava
  
Hideyuki Yamashita Oct. 18, 2019, 10:55 a.m. UTC | #4
Dear Slava and experts,

Thanks for answering me.
Based on your answers, I tested using testpmd.
I have several questions about the outcome.


[1.Test environment]
OS:Ubuntu18.04
NIC1:MCX4121A-ACAT 25G
NIC2:MCX516A-CCAT 100G
Repo:dpdk-next-net

I checked that the following commits are shown by the git log command.
9f1e94469 net/mlx5: fix netlink rdma socket callback routine
50735012c net/mlx5: support reading module EEPROM data
f53a5f917 net/mlx5: support modify VLAN ID on existing VLAN header
9af8046a1 net/mlx5: support modify VLAN ID on new VLAN header
43184603e net/mlx5: support modifying VLAN priority on VLAN header
4f59ffbd8 net/mlx5: support push flow action on VLAN header
b4bd8f5da net/mlx5: support pop flow action on VLAN header
048e3e84c net/mlx5: add VLAN push/pop DR commands to glue

[2.Test result]
I tested the following flows with testpmd included in dpdk-next-net.

A.flow create 0 ingress pattern eth / vlan id is 100 / end actions OF_POP_VLAN / end 
B.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end 
C.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end actions OF_SET_VLAN_VID vlan_vid 200 / end 
D.flow create X ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end actions of_SET_VLAN_PCP vlan_pcp 3 / end 
E.flow create 0 egress pattern eth src is BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end

Flows A-D resulted in "Caught error type 16 (specific action): cause: 0x7ffcc711db48, action not supported: Operation not supported".
Flow E resulted in "Egress is not supported".

[3. Questions]
Q1. What is the appropriate flow to add/strip a VLAN tag using testpmd?
 I think the related commits are included, so it "should" work; my guess is that my flow is somehow wrong.
Q2. Is my understanding correct that "egress" is not supported by the mlx5 PMD?
Q3. If yes, is it possible to add a VLAN tag to outgoing packets from the physical NIC by using rte_flow?

BR,
Hideyuki Yamashita
NTT TechnoCross


> > -----Original Message-----
> > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Sent: Friday, October 4, 2019 13:35
> > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> > <viacheslavo@mellanox.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> > 
> > Can somebody (Mellanox guys?) help me out?
> 
> Hi, Hideyuki
> 
> I'm sorry, there are long holidays in IL, so let me try to answer.
> 
> > 
> > > Hello Moti,
> > >
> > > I have some questions on the patch.
> > > Just want to know how to use it.
> > >
> > > Q1. Is it correct understanding that the patch will be reflected in
> > > 19.11 if it is approved?
> 
> Yes, it is merged and should be reflected.
> 
> > >
> > > Q2.Which action should I specify when I want to insert VLAN tag to
> > > non-VLAN frame?
> > >
> > > OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?
> 
> All of them, OF_PUSH_VLAN inserts the VLAN header, OF_SET_VLAN_VID and
> OF_SET_VLAN_PCP fill the fields with appropriate values.
> 
> > >
> > > Q3. Is it possible to detag VLAN when it receives VLAN tagged frame
> > > from outside of the host?
> Do you mean some complex configuration with multiple VMs and engaged E-Switch
> feature? Anyway, there are multiple ways to strip (untag) VLAN header:
> - with E-Switch rules (including match on specified port)
> - with local port rules
> - stripping VLAN in Rx queue
> 
> > >
> > > Q4. Is it possible to entag VLAN to non-VLAN frame when it sends
> > > packet to outside of host?
> Yes.
> 
> > >
> > > Q5.Are there any restriction to conbime other ACTIONS like QUEUE?
> Should no be. Action QUEUE is on Rx NIC namespace, VLAN POP is supported there.
> 
> > >
> > > Q6. Is it possible to apply rte_flow actions for specified tx queue of
> > > physical NIC?
> > > (e.g. VM connect with PHY:0 using tx queue index:1, I want to entag
> > > VLAN 101 to the traffic from VM to PHY:0 is it possible?)
> Directly - no, there is no item to match with specific Tx queue.
> 
> If setting VLAN on specific Tx queue is desired we have two options:
> 
> - engage Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and provide VLAN with
>  each packet being transferred to tx_burst
> 
> - engage DEV_TX_OFFLOAD_MATCH_METADATA feature, and set specific
> metadata for all packets on specific queue. Then the rules matching with this metadata
> may be inserted.
> 
> [snip]
> 
> With best regards, Slava
  
Hideyuki Yamashita Oct. 21, 2019, 7:11 a.m. UTC | #5
Dear Slava, Moti and all,

Please let me know if you need more information.
A partial answer is acceptable for me.

Thanks in advance!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Dear Slava and experts,
> 
> Thanks for your answering me.
> Baased on your answer, I tested using testpmd.
> And about the outcome, I have several questions.
> 
> 
> [1.Test environment]
> OS:Ubuntu18.04
> NIC1:MCX4121A-ACAT 25G
> NIC2:MCX516A-CCAT 100G
> Repo:dpdk-next-net
> 
> I checked that the following is shown in git log command.
> 9f1e94469 net/mlx5: fix netlink rdma socket callback routine
> 50735012c net/mlx5: support reading module EEPROM data
> f53a5f917 net/mlx5: support modify VLAN ID on existing VLAN header
> 9af8046a1 net/mlx5: support modify VLAN ID on new VLAN header
> 43184603e net/mlx5: support modifying VLAN priority on VLAN header
> 4f59ffbd8 net/mlx5: support push flow action on VLAN header
> b4bd8f5da net/mlx5: support pop flow action on VLAN header
> 048e3e84c net/mlx5: add VLAN push/pop DR commands to glue
> 
> [2.Test result]
> I tested the follwoing flows with testpmd included in dpdk-next-net.
> 
> A.flow create 0 ingress pattern eth / vlan id is 100 / end actions OF_POP_VLAN / end 
> B.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end 
> C.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end actions OF_SET_VLAN_VID vlan_vid 200 / end 
> D.flow create X ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end actions of_SET_VLAN_PCP vlan_pcp 3 / end 
> E.flow create 0 egress pattern eth src is BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> 
> A-D, resulted in "Caught error type 16 (specific action): cause: 0x7ffcc711db48, action not supported: Operation not supported".
> E resulted in "Egress is not supported".
> 
> [3. Quetions]
> Q1. What is the appropriate flow to entag/detag VLAN using testpmd?
>  I think related commits are included so it "should" work and my guess is that my flow is somehow wrong.
> Q2. Is it correct understanding that "egress" is not supported for mlx5 PMD?
> Q3. If yes, is it possible to entag VLAN tag to the outgoing packet from physical NIC by using rte_flow?
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Sent: Friday, October 4, 2019 13:35
> > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> > > <viacheslavo@mellanox.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > > VLAN header
> > > 
> > > Can somebody (Mellanox guys?) help me out?
> > 
> > Hi, Hideyuki
> > 
> > I'm sorry, there are long holidays in IL, so let me try to answer.
> > 
> > > 
> > > > Hello Moti,
> > > >
> > > > I have some questions on the patch.
> > > > Just want to know how to use it.
> > > >
> > > > Q1. Is it correct understanding that the patch will be reflected in
> > > > 19.11 if it is approved?
> > 
> > Yes, it is merged and should be reflected.
> > 
> > > >
> > > > Q2.Which action should I specify when I want to insert VLAN tag to
> > > > non-VLAN frame?
> > > >
> > > > OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?
> > 
> > All of them, OF_PUSH_VLAN inserts the VLAN header, OF_SET_VLAN_VID and
> > OF_SET_VLAN_PCP fill the fields with appropriate values.
> > 
> > > >
> > > > Q3. Is it possible to detag VLAN when it receives VLAN tagged frame
> > > > from outside of the host?
> > Do you mean some complex configuration with multiple VMs and engaged E-Switch
> > feature? Anyway, there are multiple ways to strip (untag) VLAN header:
> > - with E-Switch rules (including match on specified port)
> > - with local port rules
> > - stripping VLAN in Rx queue
> > 
> > > >
> > > > Q4. Is it possible to entag VLAN to non-VLAN frame when it sends
> > > > packet to outside of host?
> > Yes.
> > 
> > > >
> > > > Q5.Are there any restriction to conbime other ACTIONS like QUEUE?
> > Should no be. Action QUEUE is on Rx NIC namespace, VLAN POP is supported there.
> > 
> > > >
> > > > Q6. Is it possible to apply rte_flow actions for specified tx queue of
> > > > physical NIC?
> > > > (e.g. VM connect with PHY:0 using tx queue index:1, I want to entag
> > > > VLAN 101 to the traffic from VM to PHY:0 is it possible?)
> > Directly - no, there is no item to match with specific Tx queue.
> > 
> > If setting VLAN on specific Tx queue is desired we have two options:
> > 
> > - engage Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and provide VLAN with
> >  each packet being transferred to tx_burst
> > 
> > - engage DEV_TX_OFFLOAD_MATCH_METADATA feature, and set specific
> > metadata for all packets on specific queue. Then the rules matching with this metadata
> > may be inserted.
> > 
> > [snip]
> > 
> > With best regards, Slava
>
  
Slava Ovsiienko Oct. 21, 2019, 7:29 a.m. UTC | #6
Hi, Hideyuki

> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Monday, October 21, 2019 10:12
> To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Moti Haimovsky
> <motih@mellanox.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Dear Slava, Moti and all,
> 
> Please let me know if you need more information.
> Partial answer is acceptable for me.
> 
> Thanks in advaince!

I'm sorry for the delay; your issue is still in progress.
I've tested your rules on my standard configuration: they are rejected by the FW/SW,
not by the DPDK code. Moti tested the flows on a custom setup (I suppose an experimental FW/kernel).
AFAIK, the VLAN feature was planned to reach GA with OFED 4.7.1;
please let me check it (hopefully in a few days, the holidays are still lasting in IL).

With best regards, Slava
> 
> BR,
> HIdeyuki Yamashita
> NTT TechnoCross
> 
> > Dear Slava and experts,
> >
> > Thanks for your answering me.
> > Baased on your answer, I tested using testpmd.
> > And about the outcome, I have several questions.
> >
> >
> > [1.Test environment]
> > OS:Ubuntu18.04
> > NIC1:MCX4121A-ACAT 25G
> > NIC2:MCX516A-CCAT 100G
> > Repo:dpdk-next-net
> >
> > I checked that the following is shown in git log command.
> > 9f1e94469 net/mlx5: fix netlink rdma socket callback routine 50735012c
> > net/mlx5: support reading module EEPROM data
> > f53a5f917 net/mlx5: support modify VLAN ID on existing VLAN header
> > 9af8046a1 net/mlx5: support modify VLAN ID on new VLAN header
> > 43184603e net/mlx5: support modifying VLAN priority on VLAN header
> > 4f59ffbd8 net/mlx5: support push flow action on VLAN header b4bd8f5da
> > net/mlx5: support pop flow action on VLAN header 048e3e84c net/mlx5:
> > add VLAN push/pop DR commands to glue
> >
> > [2.Test result]
> > I tested the follwoing flows with testpmd included in dpdk-next-net.
> >
> > A.flow create 0 ingress pattern eth / vlan id is 100 / end actions
> > OF_POP_VLAN / end B.flow create 0 ingress pattern eth dst is
> > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> > C.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end
> > actions OF_SET_VLAN_VID vlan_vid 200 / end D.flow create X ingress
> > pattern eth dst is BB:BB:BB:BB:BB:BB / end actions of_SET_VLAN_PCP
> > vlan_pcp 3 / end E.flow create 0 egress pattern eth src is
> > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> >
> > A-D, resulted in "Caught error type 16 (specific action): cause:
> 0x7ffcc711db48, action not supported: Operation not supported".
> > E resulted in "Egress is not supported".
> >
> > [3. Quetions]
> > Q1. What is the appropriate flow to entag/detag VLAN using testpmd?
> >  I think related commits are included so it "should" work and my guess is
> that my flow is somehow wrong.
> > Q2. Is it correct understanding that "egress" is not supported for mlx5
> PMD?
> > Q3. If yes, is it possible to entag VLAN tag to the outgoing packet from
> physical NIC by using rte_flow?
> >
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> >
> >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Sent: Friday, October 4, 2019 13:35
> > > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> > > > <viacheslavo@mellanox.com>; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Can somebody (Mellanox guys?) help me out?
> > >
> > > Hi, Hideyuki
> > >
> > > I'm sorry, there are long holidays in IL, so let me try to answer.
> > >
> > > >
> > > > > Hello Moti,
> > > > >
> > > > > I have some questions on the patch.
> > > > > Just want to know how to use it.
> > > > >
> > > > > Q1. Is it correct understanding that the patch will be reflected
> > > > > in
> > > > > 19.11 if it is approved?
> > >
> > > Yes, it is merged and should be reflected.
> > >
> > > > >
> > > > > Q2.Which action should I specify when I want to insert VLAN tag
> > > > > to non-VLAN frame?
> > > > >
> > > > > OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?
> > >
> > > All of them, OF_PUSH_VLAN inserts the VLAN header, OF_SET_VLAN_VID
> > > and OF_SET_VLAN_PCP fill the fields with appropriate values.
> > >
> > > > >
> > > > > Q3. Is it possible to detag VLAN when it receives VLAN tagged
> > > > > frame from outside of the host?
> > > Do you mean some complex configuration with multiple VMs and
> engaged
> > > E-Switch feature? Anyway, there are multiple ways to strip (untag) VLAN
> header:
> > > - with E-Switch rules (including match on specified port)
> > > - with local port rules
> > > - stripping VLAN in Rx queue
> > >
> > > > >
> > > > > Q4. Is it possible to entag VLAN to non-VLAN frame when it sends
> > > > > packet to outside of host?
> > > Yes.
> > >
> > > > >
> > > > > Q5.Are there any restriction to conbime other ACTIONS like QUEUE?
> > > Should no be. Action QUEUE is on Rx NIC namespace, VLAN POP is
> supported there.
> > >
> > > > >
> > > > > Q6. Is it possible to apply rte_flow actions for specified tx
> > > > > queue of physical NIC?
> > > > > (e.g. VM connect with PHY:0 using tx queue index:1, I want to
> > > > > entag VLAN 101 to the traffic from VM to PHY:0 is it possible?)
> > > Directly - no, there is no item to match with specific Tx queue.
> > >
> > > If setting VLAN on specific Tx queue is desired we have two options:
> > >
> > > - engage Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and provide VLAN
> > > with  each packet being transferred to tx_burst
> > >
> > > - engage DEV_TX_OFFLOAD_MATCH_METADATA feature, and set
> specific
> > > metadata for all packets on specific queue. Then the rules matching
> > > with this metadata may be inserted.
> > >
> > > [snip]
> > >
> > > With best regards, Slava
> >
> 
>
  
Hideyuki Yamashita Oct. 25, 2019, 4:48 a.m. UTC | #7
Hello Slava,

Thanks for your response.

While waiting for your final answer,
I am sending additional info from my side.

1.
I am using "MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64"
as the OFED version.

tx_h-yamashita@R730n10:~/dpdk-next-net$ pwd
/home/tx_h-yamashita/dpdk-next-net
tx_h-yamashita@R730n10:~/dpdk-next-net$ ls
app          MAINTAINERS
buildtools   Makefile
config       meson.build
devtools     meson_options.txt
doc          mk
drivers      MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64
examples     MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64.tgz
GNUmakefile  README
kernel       usertools
lib          VERSION
license      x86_64-native-linuxapp-gcc

2.
I am using ConnectX-4 and ConnectX-5 NICs.
I attach the result of running ethtool -i.

Bus info          Device        Class          Description
==========================================================
pci@0000:03:00.0  enp3s0f0      network        MT27710 Family [ConnectX-4 Lx]
pci@0000:03:00.1  enp3s0f1      network        MT27710 Family [ConnectX-4 Lx]
pci@0000:04:00.0  enp4s0f0      network        MT27800 Family [ConnectX-5]
pci@0000:04:00.1  enp4s0f1      network        MT27800 Family [ConnectX-5]

tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp3s0f0
driver: mlx5_core
version: 4.7-1.0.0
firmware-version: 14.25.1020 (MT_0000000266)
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp4s0f0
driver: mlx5_core
version: 4.7-1.0.0
firmware-version: 16.25.6000 (MT_0000000012)
expansion-rom-version:
bus-info: 0000:04:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

If you need more info from my side, please let me know.

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki
> 
> > -----Original Message-----
> > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Sent: Monday, October 21, 2019 10:12
> > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Moti Haimovsky
> > <motih@mellanox.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> > 
> > Dear Slava, Moti and all,
> > 
> > Please let me know if you need more information.
> > Partial answer is acceptable for me.
> > 
> > Thanks in advaince!
> 
> I'm sorry for delay, your issue is still in progress.
> I've tested your rules on my standard configuration - these ones are rejected by FW/SW,
> not by DPDK code. Moti tested the flows on custom setup (I suppose experimental FW/kernel).
> AFAIK, VLAN feature was planned to GA with OFED 4.7.1,
> please, let me check it (hope in few days, there are holidays still lasting in IL).
> 
> With best regards, Slava
> > 
> > BR,
> > HIdeyuki Yamashita
> > NTT TechnoCross
> > 
> > > Dear Slava and experts,
> > >
> > > Thanks for your answering me.
> > > Baased on your answer, I tested using testpmd.
> > > And about the outcome, I have several questions.
> > >
> > >
> > > [1.Test environment]
> > > OS:Ubuntu18.04
> > > NIC1:MCX4121A-ACAT 25G
> > > NIC2:MCX516A-CCAT 100G
> > > Repo:dpdk-next-net
> > >
> > > I checked that the following is shown in git log command.
> > > 9f1e94469 net/mlx5: fix netlink rdma socket callback routine 50735012c
> > > net/mlx5: support reading module EEPROM data
> > > f53a5f917 net/mlx5: support modify VLAN ID on existing VLAN header
> > > 9af8046a1 net/mlx5: support modify VLAN ID on new VLAN header
> > > 43184603e net/mlx5: support modifying VLAN priority on VLAN header
> > > 4f59ffbd8 net/mlx5: support push flow action on VLAN header b4bd8f5da
> > > net/mlx5: support pop flow action on VLAN header 048e3e84c net/mlx5:
> > > add VLAN push/pop DR commands to glue
> > >
> > > [2.Test result]
> > > I tested the follwoing flows with testpmd included in dpdk-next-net.
> > >
> > > A.flow create 0 ingress pattern eth / vlan id is 100 / end actions
> > > OF_POP_VLAN / end B.flow create 0 ingress pattern eth dst is
> > > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> > > C.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end
> > > actions OF_SET_VLAN_VID vlan_vid 200 / end D.flow create X ingress
> > > pattern eth dst is BB:BB:BB:BB:BB:BB / end actions of_SET_VLAN_PCP
> > > vlan_pcp 3 / end E.flow create 0 egress pattern eth src is
> > > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> > >
> > > A-D, resulted in "Caught error type 16 (specific action): cause:
> > 0x7ffcc711db48, action not supported: Operation not supported".
> > > E resulted in "Egress is not supported".
> > >
> > > [3. Quetions]
> > > Q1. What is the appropriate flow to entag/detag VLAN using testpmd?
> > >  I think related commits are included so it "should" work and my guess is
> > that my flow is somehow wrong.
> > > Q2. Is it correct understanding that "egress" is not supported for mlx5
> > PMD?
> > > Q3. If yes, is it possible to entag VLAN tag to the outgoing packet from
> > physical NIC by using rte_flow?
> > >
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> > >
> > >
> > > > > -----Original Message-----
> > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > Sent: Friday, October 4, 2019 13:35
> > > > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> > > > > <viacheslavo@mellanox.com>; dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > action on VLAN header
> > > > >
> > > > > Can somebody (Mellanox guys?) help me out?
> > > >
> > > > Hi, Hideyuki
> > > >
> > > > I'm sorry, there are long holidays in IL, so let me try to answer.
> > > >
> > > > >
> > > > > > Hello Moti,
> > > > > >
> > > > > > I have some questions on the patch.
> > > > > > Just want to know how to use it.
> > > > > >
> > > > > > Q1. Is it correct understanding that the patch will be reflected
> > > > > > in
> > > > > > 19.11 if it is approved?
> > > >
> > > > Yes, it is merged and should be reflected.
> > > >
> > > > > >
> > > > > > Q2.Which action should I specify when I want to insert VLAN tag
> > > > > > to non-VLAN frame?
> > > > > >
> > > > > > OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?
> > > >
> > > > All of them, OF_PUSH_VLAN inserts the VLAN header, OF_SET_VLAN_VID
> > > > and OF_SET_VLAN_PCP fill the fields with appropriate values.
> > > >
> > > > > >
> > > > > > Q3. Is it possible to detag VLAN when it receives VLAN tagged
> > > > > > frame from outside of the host?
> > > > Do you mean some complex configuration with multiple VMs and
> > engaged
> > > > E-Switch feature? Anyway, there are multiple ways to strip (untag) VLAN
> > header:
> > > > - with E-Switch rules (including match on specified port)
> > > > - with local port rules
> > > > - stripping VLAN in Rx queue
> > > >
> > > > > >
> > > > > > Q4. Is it possible to entag VLAN to non-VLAN frame when it sends
> > > > > > packet to outside of host?
> > > > Yes.
> > > >
> > > > > >
> > > > > > Q5.Are there any restriction to conbime other ACTIONS like QUEUE?
> > > > Should no be. Action QUEUE is on Rx NIC namespace, VLAN POP is
> > supported there.
> > > >
> > > > > >
> > > > > > Q6. Is it possible to apply rte_flow actions for specified tx
> > > > > > queue of physical NIC?
> > > > > > (e.g. VM connect with PHY:0 using tx queue index:1, I want to
> > > > > > entag VLAN 101 to the traffic from VM to PHY:0 is it possible?)
> > > > Directly - no, there is no item to match with specific Tx queue.
> > > >
> > > > If setting VLAN on specific Tx queue is desired we have two options:
> > > >
> > > > - engage Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and provide VLAN
> > > > with  each packet being transferred to tx_burst
> > > >
> > > > - engage DEV_TX_OFFLOAD_MATCH_METADATA feature, and set
> > specific
> > > > metadata for all packets on specific queue. Then the rules matching
> > > > with this metadata may be inserted.
> > > >
> > > > [snip]
> > > >
> > > > With best regards, Slava
> > >
> > 
> >
  
Slava Ovsiienko Oct. 29, 2019, 5:45 a.m. UTC | #8
Hi, Hideyuki.

Thanks for providing extra information. 

We rechecked the VLAN actions support in OFED 4.7.1; it should be supported.
There are some limitations:
- VLAN pop is supported in the ingress direction only
- VLAN push is supported in the egress direction only
- the actions are not supported in group 0 (this is the root table, which has
 some limitations); we should insert into group 0 a flow with a jump to group 1,
 and then insert the rule with the VLAN actions into group 1

I tried this flow (on my setup OFED 4.7.1.0.0.2):

flow create 0 ingress group 1 priority 0 pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue index 0 / end
It was created successfully.

With best regards, Slava

> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Friday, October 25, 2019 7:49
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: Moti Haimovsky <motih@mellanox.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Hello Slava,
> 
> Thanks for your response back.
> 
> While waiting your final response,
> I am sending additional info from my side.
> 
> 1
> I am using "MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64"
> as OFED.
> 
> tx_h-yamashita@R730n10:~/dpdk-next-net$ pwd /home/tx_h-
> yamashita/dpdk-next-net
> tx_h-yamashita@R730n10:~/dpdk-next-net$ ls
> app          MAINTAINERS
> buildtools   Makefile
> config       meson.build
> devtools     meson_options.txt
> doc          mk
> drivers      MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64
> examples     MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64.tgz
> GNUmakefile  README
> kernel       usertools
> lib          VERSION
> license      x86_64-native-linuxapp-gcc
> 
> 2.
> I am using ConnextX-4 and ConnectX-5.
> I attach the result of typing ethtool -i .
> 
> Bus info          Device        Class          Description
> ==========================================================
> pci@0000:03:00.0  enp3s0f0      network        MT27710 Family [ConnectX-4
> Lx]
> pci@0000:03:00.1  enp3s0f1      network        MT27710 Family [ConnectX-4
> Lx]
> pci@0000:04:00.0  enp4s0f0      network        MT27800 Family [ConnectX-5]
> pci@0000:04:00.1  enp4s0f1      network        MT27800 Family [ConnectX-5]
> 
> tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp3s0f0
> driver: mlx5_core
> version: 4.7-1.0.0
> firmware-version: 14.25.1020 (MT_0000000266)
> expansion-rom-version:
> bus-info: 0000:03:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: yes
> tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp4s0f0
> driver: mlx5_core
> version: 4.7-1.0.0
> firmware-version: 16.25.6000 (MT_0000000012)
> expansion-rom-version:
> bus-info: 0000:04:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: no
> supports-register-dump: no
> supports-priv-flags: yes
> 
> If you needs more info from my side, please let me know.
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > Hi, Hideyuki
> >
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Sent: Monday, October 21, 2019 10:12
> > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Moti Haimovsky
> > > <motih@mellanox.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > action on VLAN header
> > >
> > > Dear Slava, Moti and all,
> > >
> > > Please let me know if you need more information.
> > > Partial answer is acceptable for me.
> > >
> > > Thanks in advaince!
> >
> > I'm sorry for delay, your issue is still in progress.
> > I've tested your rules on my standard configuration - these ones are
> > rejected by FW/SW, not by DPDK code. Moti tested the flows on custom
> setup (I suppose experimental FW/kernel).
> > AFAIK, VLAN feature was planned to GA with OFED 4.7.1, please, let me
> > check it (hope in few days, there are holidays still lasting in IL).
> >
> > With best regards, Slava
> > >
> > > BR,
> > > HIdeyuki Yamashita
> > > NTT TechnoCross
> > >
> > > > Dear Slava and experts,
> > > >
> > > > Thanks for your answering me.
> > > > Baased on your answer, I tested using testpmd.
> > > > And about the outcome, I have several questions.
> > > >
> > > >
> > > > [1.Test environment]
> > > > OS:Ubuntu18.04
> > > > NIC1:MCX4121A-ACAT 25G
> > > > NIC2:MCX516A-CCAT 100G
> > > > Repo:dpdk-next-net
> > > >
> > > > I checked that the following is shown in git log command.
> > > > 9f1e94469 net/mlx5: fix netlink rdma socket callback routine
> > > > 50735012c
> > > > net/mlx5: support reading module EEPROM data
> > > > f53a5f917 net/mlx5: support modify VLAN ID on existing VLAN header
> > > > 9af8046a1 net/mlx5: support modify VLAN ID on new VLAN header
> > > > 43184603e net/mlx5: support modifying VLAN priority on VLAN header
> > > > 4f59ffbd8 net/mlx5: support push flow action on VLAN header
> > > > b4bd8f5da
> > > > net/mlx5: support pop flow action on VLAN header 048e3e84c
> net/mlx5:
> > > > add VLAN push/pop DR commands to glue
> > > >
> > > > [2.Test result]
> > > > I tested the follwoing flows with testpmd included in dpdk-next-net.
> > > >
> > > > A.flow create 0 ingress pattern eth / vlan id is 100 / end actions
> > > > OF_POP_VLAN / end B.flow create 0 ingress pattern eth dst is
> > > > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> > > > C.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end
> > > > actions OF_SET_VLAN_VID vlan_vid 200 / end D.flow create X ingress
> > > > pattern eth dst is BB:BB:BB:BB:BB:BB / end actions of_SET_VLAN_PCP
> > > > vlan_pcp 3 / end E.flow create 0 egress pattern eth src is
> > > > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> > > >
> > > > A-D, resulted in "Caught error type 16 (specific action): cause:
> > > 0x7ffcc711db48, action not supported: Operation not supported".
> > > > E resulted in "Egress is not supported".
> > > >
> > > > [3. Quetions]
> > > > Q1. What is the appropriate flow to entag/detag VLAN using testpmd?
> > > >  I think related commits are included so it "should" work and my
> > > > guess is
> > > that my flow is somehow wrong.
> > > > Q2. Is it correct understanding that "egress" is not supported for
> > > > mlx5
> > > PMD?
> > > > Q3. If yes, is it possible to entag VLAN tag to the outgoing
> > > > packet from
> > > physical NIC by using rte_flow?
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > >
> > > > > > -----Original Message-----
> > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > Sent: Friday, October 4, 2019 13:35
> > > > > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> > > > > > <viacheslavo@mellanox.com>; dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > action on VLAN header
> > > > > >
> > > > > > Can somebody (Mellanox guys?) help me out?
> > > > >
> > > > > Hi, Hideyuki
> > > > >
> > > > > I'm sorry, there are long holidays in IL, so let me try to answer.
> > > > >
> > > > > >
> > > > > > > Hello Moti,
> > > > > > >
> > > > > > > I have some questions on the patch.
> > > > > > > Just want to know how to use it.
> > > > > > >
> > > > > > > Q1. Is it correct understanding that the patch will be
> > > > > > > reflected in
> > > > > > > 19.11 if it is approved?
> > > > >
> > > > > Yes, it is merged and should be reflected.
> > > > >
> > > > > > >
> > > > > > > Q2.Which action should I specify when I want to insert VLAN
> > > > > > > tag to non-VLAN frame?
> > > > > > >
> > > > > > > OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?
> > > > >
> > > > > All of them, OF_PUSH_VLAN inserts the VLAN header,
> > > > > OF_SET_VLAN_VID and OF_SET_VLAN_PCP fill the fields with
> appropriate values.
> > > > >
> > > > > > >
> > > > > > > Q3. Is it possible to detag VLAN when it receives VLAN
> > > > > > > tagged frame from outside of the host?
> > > > > Do you mean some complex configuration with multiple VMs and
> > > engaged
> > > > > E-Switch feature? Anyway, there are multiple ways to strip
> > > > > (untag) VLAN
> > > header:
> > > > > - with E-Switch rules (including match on specified port)
> > > > > - with local port rules
> > > > > - stripping VLAN in Rx queue
> > > > >
> > > > > > >
> > > > > > > Q4. Is it possible to entag VLAN to non-VLAN frame when it
> > > > > > > sends packet to outside of host?
> > > > > Yes.
> > > > >
> > > > > > >
> > > > > > > Q5.Are there any restriction to conbime other ACTIONS like
> QUEUE?
> > > > > Should no be. Action QUEUE is on Rx NIC namespace, VLAN POP is
> > > supported there.
> > > > >
> > > > > > >
> > > > > > > Q6. Is it possible to apply rte_flow actions for specified
> > > > > > > tx queue of physical NIC?
> > > > > > > (e.g. VM connect with PHY:0 using tx queue index:1, I want
> > > > > > > to entag VLAN 101 to the traffic from VM to PHY:0 is it
> > > > > > > possible?)
> > > > > Directly - no, there is no item to match with specific Tx queue.
> > > > >
> > > > > If setting VLAN on specific Tx queue is desired we have two options:
> > > > >
> > > > > - engage Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and provide
> VLAN
> > > > > with  each packet being transferred to tx_burst
> > > > >
> > > > > - engage DEV_TX_OFFLOAD_MATCH_METADATA feature, and set
> > > specific
> > > > > metadata for all packets on specific queue. Then the rules
> > > > > matching with this metadata may be inserted.
> > > > >
> > > > > [snip]
> > > > >
> > > > > With best regards, Slava
> > > >
> > >
> > >
>
  
Hideyuki Yamashita Oct. 30, 2019, 10:04 a.m. UTC | #9
Hi Slava,

Thanks for your response and for letting me know the limitations.

I tried to input the flow you suggested,
but it returns an error.

testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue index 0 / end
Caught error type 3 (group field): groups is not supported: Operation not supported

Note that my OFED setup is NOT 4.7.1.0.0.2 but 4.7.1.0.0.1,
because that is the latest version that I can download from the
following web site.

https://jp.mellanox.com/page/products_dyn?product_family=26&ssn=u44h3rn8ngcmbdl6v0fvhqrgt3

Do you have any hints?

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki.
> 
> Thanks for providing extra information. 
> 
> We rechecked the VLAN actions support in OFED 4.7.1, it should be supported.
> There are some limitations:
> - VLAN pop is supported on ingress direction only
> - VLAN push is supported on egress direction only
> - not supported in group 0 (this is root table, has some limitations)
>  we should insert into group 0 flow with jump to group 1, and then insert
> the rule with VLAN actions to group 1
> 
> I tried this flow (on my setup OFED 4.7.1.0.0.2):
> 
> flow create 0 ingress group 1 priority 0 pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue index 0 / end
> It was created successfully.
> 
> With best regards, Slava
> 
> > -----Original Message-----
> > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Sent: Friday, October 25, 2019 7:49
> > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > Cc: Moti Haimovsky <motih@mellanox.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> > 
> > Hello Slava,
> > 
> > Thanks for your response back.
> > 
> > While waiting your final response,
> > I am sending additional info from my side.
> > 
> > 1
> > I am using "MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64"
> > as OFED.
> > 
> > tx_h-yamashita@R730n10:~/dpdk-next-net$ pwd /home/tx_h-
> > yamashita/dpdk-next-net
> > tx_h-yamashita@R730n10:~/dpdk-next-net$ ls
> > app          MAINTAINERS
> > buildtools   Makefile
> > config       meson.build
> > devtools     meson_options.txt
> > doc          mk
> > drivers      MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64
> > examples     MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64.tgz
> > GNUmakefile  README
> > kernel       usertools
> > lib          VERSION
> > license      x86_64-native-linuxapp-gcc
> > 
> > 2.
> > I am using ConnextX-4 and ConnectX-5.
> > I attach the result of typing ethtool -i .
> > 
> > Bus info          Device        Class          Description
> > ==========================================================
> > pci@0000:03:00.0  enp3s0f0      network        MT27710 Family [ConnectX-4
> > Lx]
> > pci@0000:03:00.1  enp3s0f1      network        MT27710 Family [ConnectX-4
> > Lx]
> > pci@0000:04:00.0  enp4s0f0      network        MT27800 Family [ConnectX-5]
> > pci@0000:04:00.1  enp4s0f1      network        MT27800 Family [ConnectX-5]
> > 
> > tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp3s0f0
> > driver: mlx5_core
> > version: 4.7-1.0.0
> > firmware-version: 14.25.1020 (MT_0000000266)
> > expansion-rom-version:
> > bus-info: 0000:03:00.0
> > supports-statistics: yes
> > supports-test: yes
> > supports-eeprom-access: no
> > supports-register-dump: no
> > supports-priv-flags: yes
> > tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp4s0f0
> > driver: mlx5_core
> > version: 4.7-1.0.0
> > firmware-version: 16.25.6000 (MT_0000000012)
> > expansion-rom-version:
> > bus-info: 0000:04:00.0
> > supports-statistics: yes
> > supports-test: yes
> > supports-eeprom-access: no
> > supports-register-dump: no
> > supports-priv-flags: yes
> > 
> > If you needs more info from my side, please let me know.
> > 
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> > 
> > > Hi, Hideyuki
> > >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Sent: Monday, October 21, 2019 10:12
> > > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Moti Haimovsky
> > > > <motih@mellanox.com>; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Dear Slava, Moti and all,
> > > >
> > > > Please let me know if you need more information.
> > > > Partial answer is acceptable for me.
> > > >
> > > > Thanks in advaince!
> > >
> > > I'm sorry for delay, your issue is still in progress.
> > > I've tested your rules on my standard configuration - these ones are
> > > rejected by FW/SW, not by DPDK code. Moti tested the flows on custom
> > setup (I suppose experimental FW/kernel).
> > > AFAIK, VLAN feature was planned to GA with OFED 4.7.1, please, let me
> > > check it (hope in few days, there are holidays still lasting in IL).
> > >
> > > With best regards, Slava
> > > >
> > > > BR,
> > > > HIdeyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > > > Dear Slava and experts,
> > > > >
> > > > > Thanks for your answering me.
> > > > > Baased on your answer, I tested using testpmd.
> > > > > And about the outcome, I have several questions.
> > > > >
> > > > >
> > > > > [1.Test environment]
> > > > > OS:Ubuntu18.04
> > > > > NIC1:MCX4121A-ACAT 25G
> > > > > NIC2:MCX516A-CCAT 100G
> > > > > Repo:dpdk-next-net
> > > > >
> > > > > I checked that the following is shown in git log command.
> > > > > 9f1e94469 net/mlx5: fix netlink rdma socket callback routine
> > > > > 50735012c
> > > > > net/mlx5: support reading module EEPROM data
> > > > > f53a5f917 net/mlx5: support modify VLAN ID on existing VLAN header
> > > > > 9af8046a1 net/mlx5: support modify VLAN ID on new VLAN header
> > > > > 43184603e net/mlx5: support modifying VLAN priority on VLAN header
> > > > > 4f59ffbd8 net/mlx5: support push flow action on VLAN header
> > > > > b4bd8f5da
> > > > > net/mlx5: support pop flow action on VLAN header 048e3e84c
> > net/mlx5:
> > > > > add VLAN push/pop DR commands to glue
> > > > >
> > > > > [2.Test result]
> > > > > I tested the follwoing flows with testpmd included in dpdk-next-net.
> > > > >
> > > > > A.flow create 0 ingress pattern eth / vlan id is 100 / end actions
> > > > > OF_POP_VLAN / end B.flow create 0 ingress pattern eth dst is
> > > > > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> > > > > C.flow create 0 ingress pattern eth dst is BB:BB:BB:BB:BB:BB / end
> > > > > actions OF_SET_VLAN_VID vlan_vid 200 / end D.flow create X ingress
> > > > > pattern eth dst is BB:BB:BB:BB:BB:BB / end actions of_SET_VLAN_PCP
> > > > > vlan_pcp 3 / end E.flow create 0 egress pattern eth src is
> > > > > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 / end
> > > > >
> > > > > A-D, resulted in "Caught error type 16 (specific action): cause:
> > > > 0x7ffcc711db48, action not supported: Operation not supported".
> > > > > E resulted in "Egress is not supported".
> > > > >
> > > > > [3. Quetions]
> > > > > Q1. What is the appropriate flow to entag/detag VLAN using testpmd?
> > > > >  I think related commits are included so it "should" work and my
> > > > > guess is
> > > > that my flow is somehow wrong.
> > > > > Q2. Is it correct understanding that "egress" is not supported for
> > > > > mlx5
> > > > PMD?
> > > > > Q3. If yes, is it possible to entag VLAN tag to the outgoing
> > > > > packet from
> > > > physical NIC by using rte_flow?
> > > > >
> > > > > BR,
> > > > > Hideyuki Yamashita
> > > > > NTT TechnoCross
> > > > >
> > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > Sent: Friday, October 4, 2019 13:35
> > > > > > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> > > > > > > <viacheslavo@mellanox.com>; dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > > action on VLAN header
> > > > > > >
> > > > > > > Can somebody (Mellanox guys?) help me out?
> > > > > >
> > > > > > Hi, Hideyuki
> > > > > >
> > > > > > I'm sorry, there are long holidays in IL, so let me try to answer.
> > > > > >
> > > > > > >
> > > > > > > > Hello Moti,
> > > > > > > >
> > > > > > > > I have some questions on the patch.
> > > > > > > > Just want to know how to use it.
> > > > > > > >
> > > > > > > > Q1. Is it correct understanding that the patch will be
> > > > > > > > reflected in
> > > > > > > > 19.11 if it is approved?
> > > > > >
> > > > > > Yes, it is merged and should be reflected.
> > > > > >
> > > > > > > >
> > > > > > > > Q2.Which action should I specify when I want to insert VLAN
> > > > > > > > tag to non-VLAN frame?
> > > > > > > >
> > > > > > > > OF_PUSH_VLAN and OF_SET_VLAN_VID and OF_SET_VLAN_PCP ?
> > > > > >
> > > > > > All of them, OF_PUSH_VLAN inserts the VLAN header,
> > > > > > OF_SET_VLAN_VID and OF_SET_VLAN_PCP fill the fields with
> > appropriate values.
> > > > > >
> > > > > > > >
> > > > > > > > Q3. Is it possible to detag VLAN when it receives VLAN
> > > > > > > > tagged frame from outside of the host?
> > > > > > Do you mean some complex configuration with multiple VMs and
> > > > engaged
> > > > > > E-Switch feature? Anyway, there are multiple ways to strip
> > > > > > (untag) VLAN
> > > > header:
> > > > > > - with E-Switch rules (including match on specified port)
> > > > > > - with local port rules
> > > > > > - stripping VLAN in Rx queue
> > > > > >
> > > > > > > >
> > > > > > > > Q4. Is it possible to entag VLAN to non-VLAN frame when it
> > > > > > > > sends packet to outside of host?
> > > > > > Yes.
> > > > > >
> > > > > > > >
> > > > > > > > Q5.Are there any restriction to conbime other ACTIONS like
> > QUEUE?
> > > > > > Should no be. Action QUEUE is on Rx NIC namespace, VLAN POP is
> > > > supported there.
> > > > > >
> > > > > > > >
> > > > > > > > Q6. Is it possible to apply rte_flow actions for specified
> > > > > > > > tx queue of physical NIC?
> > > > > > > > (e.g. VM connect with PHY:0 using tx queue index:1, I want
> > > > > > > > to entag VLAN 101 to the traffic from VM to PHY:0 is it
> > > > > > > > possible?)
> > > > > > Directly - no, there is no item to match with specific Tx queue.
> > > > > >
> > > > > > If setting VLAN on specific Tx queue is desired we have two options:
> > > > > >
> > > > > > - engage Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and provide
> > VLAN
> > > > > > with  each packet being transferred to tx_burst
> > > > > >
> > > > > > - engage DEV_TX_OFFLOAD_MATCH_METADATA feature, and set
> > > > specific
> > > > > > metadata for all packets on specific queue. Then the rules
> > > > > > matching with this metadata may be inserted.
> > > > > >
> > > > > > [snip]
> > > > > >
> > > > > > With best regards, Slava
> > > > >
> > > >
> > > >
> > 
>
  
Slava Ovsiienko Oct. 30, 2019, 10:08 a.m. UTC | #10
Hi, Hideyuki

> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Wednesday, October 30, 2019 12:05
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: Moti Haimovsky <motih@mellanox.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Hi Slava,
> 
> Thanks for your response back and letting me know the limitation.
> 
> I tried to input flow you suggested.
> But it returns error.

Did you specify the magic "dv_flow_en=1" devarg on the testpmd command line?
Something like this: "-w 82:00.0,dv_flow_en=1"

With best regards, Slava


> 
> testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> testpmd> 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan
> testpmd> / queue index 0 / end
> Caught error type 3 (group field): groups is not supported: Operation not
> supported
> 
> Note that my setup OFED is NOT 4.7.1.0.0.2, but 4.7.1.0.0.1 because that is
> the latest version which I can download from the following web site.
> 
> https://jp.mellanox.com/page/products_dyn?product_family=26&ssn=u44h3
> rn8ngcmbdl6v0fvhqrgt3
> 
> Do you have any hints?
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > Hi, Hideyuki.
> >
> > Thanks for providing extra information.
> >
> > We rechecked the VLAN actions support in OFED 4.7.1, it should be
> supported.
> > There are some limitations:
> > - VLAN pop is supported on ingress direction only
> > - VLAN push is supported on egress direction only
> > - not supported in group 0 (this is root table, has some limitations)
> > we should insert into group 0 flow with jump to group 1, and then
> > insert the rule with VLAN actions to group 1
> >
> > I tried this flow (on my setup OFED 4.7.1.0.0.2):
> >
> > flow create 0 ingress group 1 priority 0 pattern eth dst is
> > 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue
> index 0 / end It was created successfully.
> >
> > With best regards, Slava
> >
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Sent: Friday, October 25, 2019 7:49
> > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > Cc: Moti Haimovsky <motih@mellanox.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > action on VLAN header
> > >
> > > Hello Slava,
> > >
> > > Thanks for your response back.
> > >
> > > While waiting your final response,
> > > I am sending additional info from my side.
> > >
> > > 1
> > > I am using "MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64"
> > > as OFED.
> > >
> > > tx_h-yamashita@R730n10:~/dpdk-next-net$ pwd /home/tx_h-
> > > yamashita/dpdk-next-net tx_h-yamashita@R730n10:~/dpdk-next-net$ ls
> > > app          MAINTAINERS
> > > buildtools   Makefile
> > > config       meson.build
> > > devtools     meson_options.txt
> > > doc          mk
> > > drivers      MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64
> > > examples     MLNX_OFED_LINUX-4.7-1.0.0.1-ubuntu18.04-x86_64.tgz
> > > GNUmakefile  README
> > > kernel       usertools
> > > lib          VERSION
> > > license      x86_64-native-linuxapp-gcc
> > >
> > > 2.
> > > I am using ConnextX-4 and ConnectX-5.
> > > I attach the result of typing ethtool -i .
> > >
> > > Bus info          Device        Class          Description
> > > ==========================================================
> > > pci@0000:03:00.0  enp3s0f0      network        MT27710 Family [ConnectX-
> 4
> > > Lx]
> > > pci@0000:03:00.1  enp3s0f1      network        MT27710 Family [ConnectX-
> 4
> > > Lx]
> > > pci@0000:04:00.0  enp4s0f0      network        MT27800 Family [ConnectX-
> 5]
> > > pci@0000:04:00.1  enp4s0f1      network        MT27800 Family [ConnectX-
> 5]
> > >
> > > tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp3s0f0
> > > driver: mlx5_core
> > > version: 4.7-1.0.0
> > > firmware-version: 14.25.1020 (MT_0000000266)
> > > expansion-rom-version:
> > > bus-info: 0000:03:00.0
> > > supports-statistics: yes
> > > supports-test: yes
> > > supports-eeprom-access: no
> > > supports-register-dump: no
> > > supports-priv-flags: yes
> > > tx_h-yamashita@R730n10:~/dpdk-next-net$ ethtool -i enp4s0f0
> > > driver: mlx5_core
> > > version: 4.7-1.0.0
> > > firmware-version: 16.25.6000 (MT_0000000012)
> > > expansion-rom-version:
> > > bus-info: 0000:04:00.0
> > > supports-statistics: yes
> > > supports-test: yes
> > > supports-eeprom-access: no
> > > supports-register-dump: no
> > > supports-priv-flags: yes
> > >
> > > If you needs more info from my side, please let me know.
> > >
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> > >
> > > > Hi, Hideyuki
> > > >
> > > > > -----Original Message-----
> > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > Sent: Monday, October 21, 2019 10:12
> > > > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>; Moti Haimovsky
> > > > > <motih@mellanox.com>; dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > action on VLAN header
> > > > >
> > > > > Dear Slava, Moti and all,
> > > > >
> > > > > Please let me know if you need more information.
> > > > > Partial answer is acceptable for me.
> > > > >
> > > > > Thanks in advaince!
> > > >
> > > > I'm sorry for delay, your issue is still in progress.
> > > > I've tested your rules on my standard configuration - these ones
> > > > are rejected by FW/SW, not by DPDK code. Moti tested the flows on
> > > > custom
> > > setup (I suppose experimental FW/kernel).
> > > > AFAIK, VLAN feature was planned to GA with OFED 4.7.1, please, let
> > > > me check it (hope in few days, there are holidays still lasting in IL).
> > > >
> > > > With best regards, Slava
> > > > >
> > > > > BR,
> > > > > HIdeyuki Yamashita
> > > > > NTT TechnoCross
> > > > >
> > > > > > Dear Slava and experts,
> > > > > >
> > > > > > Thanks for your answering me.
> > > > > > Baased on your answer, I tested using testpmd.
> > > > > > And about the outcome, I have several questions.
> > > > > >
> > > > > >
> > > > > > [1.Test environment]
> > > > > > OS:Ubuntu18.04
> > > > > > NIC1:MCX4121A-ACAT 25G
> > > > > > NIC2:MCX516A-CCAT 100G
> > > > > > Repo:dpdk-next-net
> > > > > >
> > > > > > I checked that the following is shown in git log command.
> > > > > > 9f1e94469 net/mlx5: fix netlink rdma socket callback routine
> > > > > > 50735012c
> > > > > > net/mlx5: support reading module EEPROM data
> > > > > > f53a5f917 net/mlx5: support modify VLAN ID on existing VLAN
> > > > > > header
> > > > > > 9af8046a1 net/mlx5: support modify VLAN ID on new VLAN header
> > > > > > 43184603e net/mlx5: support modifying VLAN priority on VLAN
> > > > > > header
> > > > > > 4f59ffbd8 net/mlx5: support push flow action on VLAN header
> > > > > > b4bd8f5da
> > > > > > net/mlx5: support pop flow action on VLAN header 048e3e84c
> > > net/mlx5:
> > > > > > add VLAN push/pop DR commands to glue
> > > > > >
> > > > > > [2.Test result]
> > > > > > I tested the follwoing flows with testpmd included in dpdk-next-net.
> > > > > >
> > > > > > A.flow create 0 ingress pattern eth / vlan id is 100 / end
> > > > > > actions OF_POP_VLAN / end B.flow create 0 ingress pattern eth
> > > > > > dst is BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype
> > > > > > 1000 / end C.flow create 0 ingress pattern eth dst is
> > > > > > BB:BB:BB:BB:BB:BB / end actions OF_SET_VLAN_VID vlan_vid 200 /
> > > > > > end D.flow create X ingress pattern eth dst is
> > > > > > BB:BB:BB:BB:BB:BB / end actions of_SET_VLAN_PCP vlan_pcp 3 /
> > > > > > end E.flow create 0 egress pattern eth src is
> > > > > > BB:BB:BB:BB:BB:BB / end actions OF_PUSH_VLAN ethertype 1000 /
> > > > > > end
> > > > > >
> > > > > > A-D, resulted in "Caught error type 16 (specific action): cause:
> > > > > 0x7ffcc711db48, action not supported: Operation not supported".
> > > > > > E resulted in "Egress is not supported".
> > > > > >
> > > > > > [3. Quetions]
> > > > > > Q1. What is the appropriate flow to entag/detag VLAN using
> testpmd?
> > > > > >  I think related commits are included so it "should" work and
> > > > > > my guess is
> > > > > that my flow is somehow wrong.
> > > > > > Q2. Is it correct understanding that "egress" is not supported
> > > > > > for
> > > > > > mlx5
> > > > > PMD?
> > > > > > Q3. If yes, is it possible to entag VLAN tag to the outgoing
> > > > > > packet from
> > > > > physical NIC by using rte_flow?
> > > > > >
> > > > > > BR,
> > > > > > Hideyuki Yamashita
> > > > > > NTT TechnoCross
> > > > > >
> > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > > Sent: Friday, October 4, 2019 13:35
> > > > > > > > To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > > Cc: Moti Haimovsky <motih@mellanox.com>; Slava Ovsiienko
> > > > > > > > <viacheslavo@mellanox.com>; dev@dpdk.org
> > > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for
> > > > > > > > flow action on VLAN header
> > > > > > > >
> > > > > > > > Can somebody (Mellanox guys?) help me out?
> > > > > > >
> > > > > > > Hi, Hideyuki
> > > > > > >
> > > > > > > I'm sorry, there are long holidays in IL, so let me try to answer.
> > > > > > >
> > > > > > > >
> > > > > > > > > Hello Moti,
> > > > > > > > >
> > > > > > > > > I have some questions on the patch.
> > > > > > > > > Just want to know how to use it.
> > > > > > > > >
> > > > > > > > > Q1. Is it correct understanding that the patch will be
> > > > > > > > > reflected in
> > > > > > > > > 19.11 if it is approved?
> > > > > > >
> > > > > > > Yes, it is merged and should be reflected.
> > > > > > >
> > > > > > > > >
> > > > > > > > > Q2.Which action should I specify when I want to insert
> > > > > > > > > VLAN tag to non-VLAN frame?
> > > > > > > > >
> > > > > > > > > OF_PUSH_VLAN and OF_SET_VLAN_VID and
> OF_SET_VLAN_PCP ?
> > > > > > >
> > > > > > > All of them, OF_PUSH_VLAN inserts the VLAN header,
> > > > > > > OF_SET_VLAN_VID and OF_SET_VLAN_PCP fill the fields with
> > > appropriate values.
> > > > > > >
> > > > > > > > >
> > > > > > > > > Q3. Is it possible to detag VLAN when it receives VLAN
> > > > > > > > > tagged frame from outside of the host?
> > > > > > > Do you mean some complex configuration with multiple VMs and
> > > > > engaged
> > > > > > > E-Switch feature? Anyway, there are multiple ways to strip
> > > > > > > (untag) VLAN
> > > > > header:
> > > > > > > - with E-Switch rules (including match on specified port)
> > > > > > > - with local port rules
> > > > > > > - stripping VLAN in Rx queue
> > > > > > >
> > > > > > > > >
> > > > > > > > > Q4. Is it possible to entag VLAN to non-VLAN frame when
> > > > > > > > > it sends packet to outside of host?
> > > > > > > Yes.
> > > > > > >
> > > > > > > > >
> > > > > > > > > Q5.Are there any restriction to conbime other ACTIONS
> > > > > > > > > like
> > > QUEUE?
> > > > > > > Should no be. Action QUEUE is on Rx NIC namespace, VLAN POP
> > > > > > > is
> > > > > supported there.
> > > > > > >
> > > > > > > > >
> > > > > > > > > Q6. Is it possible to apply rte_flow actions for
> > > > > > > > > specified tx queue of physical NIC?
> > > > > > > > > (e.g. VM connect with PHY:0 using tx queue index:1, I
> > > > > > > > > want to entag VLAN 101 to the traffic from VM to PHY:0
> > > > > > > > > is it
> > > > > > > > > possible?)
> > > > > > > Directly - no, there is no item to match with specific Tx queue.
> > > > > > >
> > > > > > > If setting VLAN on specific Tx queue is desired we have two
> options:
> > > > > > >
> > > > > > > - engage Tx offload DEV_TX_OFFLOAD_VLAN_INSERT, and
> provide
> > > VLAN
> > > > > > > with  each packet being transferred to tx_burst
> > > > > > >
> > > > > > > - engage DEV_TX_OFFLOAD_MATCH_METADATA feature, and set
> > > > > specific
> > > > > > > metadata for all packets on specific queue. Then the rules
> > > > > > > matching with this metadata may be inserted.
> > > > > > >
> > > > > > > [snip]
> > > > > > >
> > > > > > > With best regards, Slava
> > > > > >
> > > > >
> > > > >
> > >
> >
> 
>
  
Hideyuki Yamashita Oct. 30, 2019, 10:46 a.m. UTC | #11
Hello Slava,

Thanks for your help.
I added the magic phrase, changing the PCI number to the proper one in my environment.
It changes the situation but still results in an error.

I used /usertools/dpdk-setup.sh to allocate hugepages dynamically.
Your help is appreciated.

I think it is getting closer.


tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem 512,512 --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1015 net_mlx5
net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device mlx5_3

Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: B8:59:9F:DB:22:20
Checking link statuses...
Done
testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue index 0 / end
Caught error type 1 (cause unspecified): cannot create table: Cannot allocate memory


BR,
Hideyuki Yamashita
  
Slava Ovsiienko Oct. 31, 2019, 7:11 a.m. UTC | #12
Hi, Hideyuki

> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Wednesday, October 30, 2019 12:46
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Hello Slava,
> 
> Thanks for your help.
> I added magic phrase. with chaging PCI number with proper one in my env.

> It changes situation but still result in error.
> 
> I used /usertools/dpdk-setup.sh to allocate hugepage dynamically.
> Your help is appreciated.
> 
> I think it is getting closer.
> tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> gcc/app$
> sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem 512,512 -
> -huge-dir=/mnt/h uge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2

mlx5 PMD supports two flow engines:
- Verbs, the legacy one; almost no new features are being added, just bug fixes, and it provides a slow rule insertion rate, etc.
- Direct Rules, the new one; all new features are being added here.

(We had one more intermediate engine - Direct Verbs; it was dropped, but the dv prefix in dv_flow_en remains 😊)

Verbs are supported on all NICs - ConnectX-4, ConnectX-4 Lx, ConnectX-5, ConnectX-6, etc.
Direct Rules is supported on NICs starting from ConnectX-5.
The "dv_flow_en=1" parameter engages Direct Rules, but I see you run testpmd
over 03:00.0, which is a ConnectX-4 and does not support Direct Rules.
Please run over the ConnectX-5 you have on your host.

As for the error - it is not related to memory; rdma-core just failed to create the group table
because ConnectX-4 does not support DR.
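
Something like this - just a sketch, with the PCI addresses taken from your earlier ethtool output (04:00.0 is the ConnectX-5, 03:00.0 is the ConnectX-4):

# Direct Rules engine on the ConnectX-5 (needed for the VLAN push/pop actions discussed here)
sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 -- -i
# legacy Verbs engine, works on the ConnectX-4 as well (no Direct Rules features)
sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=0 -- -i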

With best regards, Slava

> --txq=16 --rxq=16
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1015 net_mlx5
> net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device
> mlx5_3
> 
> Interactive-mode selected
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> 
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
> 
> Configuring Port 0 (socket 0)
> Port 0: B8:59:9F:DB:22:20
> Checking link statuses...
> Done
> testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> testpmd> 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan
> testpmd> / queue index 0 / end
> Caught error type 1 (cause unspecified): cannot create table: Cannot allocate
> memory
> 
> 
> BR,
> Hideyuki Yamashita
  
Hideyuki Yamashita Oct. 31, 2019, 9:51 a.m. UTC | #13
Dear Slava,

Your guess is correct.
When I put the flow into the ConnectX-5, it was successful.

General question.
Is there any way to input the flow to the ConnectX-4?
In other words, is there any way to activate Verbs?
And which types of flows are supported in Verbs?

-----------------------------------------------------------
tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$ sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-mem 512,512 --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
[sudo] password for tx_h-yamashita:
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1017 net_mlx5
net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device mlx5_1

Interactive-mode selected

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: B8:59:9F:C1:4A:CE
Checking link statuses...
Done
testpmd>
testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue index 0 / end
Flow rule #0 created
testpmd>
--------------------------------------------------------------------------------------------------------------

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki
> 
> > -----Original Message-----
> > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Sent: Wednesday, October 30, 2019 12:46
> > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> > 
> > Hello Slava,
> > 
> > Thanks for your help.
> > I added magic phrase. with chaging PCI number with proper one in my env.
> 
> > It changes situation but still result in error.
> > 
> > I used /usertools/dpdk-setup.sh to allocate hugepage dynamically.
> > Your help is appreciated.
> > 
> > I think it is getting closer.
> > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > gcc/app$
> > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem 512,512 -
> > -huge-dir=/mnt/h uge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> 
> mlx5 PMD supports two flow engines:
> - Verbs, this is legacy one, almost no new features are being added, just bug fixes,
>   provides slow rule insertion rate, etc.
> - Direct Rules, the new one, all new features are being added here.
> 
> (We had one more intermediate engine  - Direct Verbs, it was dropped,
> but prefix dv in dv_flow_en remains ??)
> 
> Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX, ConnectX-5, ConnectX-6, etc.
> Direct Rules is supported for NICs starting from ConnectX-5.
> "dv_flow_en=1" partameter engages Direct Rules, but I see you run testpmd
> over 03:00.0 which is ConnectX-4, not  supporting Direct Rules.
> Please, run over ConnectX-5 you have on your host.
> 
> As for error - it is not related to memory, rdma core just failed to create the group table,
> because ConnectX-4 does not support DR.
> 
> With best regards, Slava
> 
> > --txq=16 --rxq=16
> > EAL: Detected 48 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > EAL:   probe driver: 15b3:1015 net_mlx5
> > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device
> > mlx5_3
> > 
> > Interactive-mode selected
> > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > 
> > Warning! port-topology=paired and odd forward ports number, the last port
> > will pair with itself.
> > 
> > Configuring Port 0 (socket 0)
> > Port 0: B8:59:9F:DB:22:20
> > Checking link statuses...
> > Done
> > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > testpmd> 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan
> > testpmd> / queue index 0 / end
> > Caught error type 1 (cause unspecified): cannot create table: Cannot allocate
> > memory
> > 
> > 
> > BR,
> > Hideyuki Yamashita
>
  
Slava Ovsiienko Oct. 31, 2019, 10:36 a.m. UTC | #14
> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Thursday, October 31, 2019 11:52
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Dear Slava,
> 
> Your guess is corrrect.
> When I put flow into Connect-X5, it was successful.
Very nice.

> 
> General question.
As we know - general questions are the hardest ones to answer 😊.

> Are there any way to input flow to ConnectX-4?
As usual - with the RTE flow API. Just omit dv_flow_en, or specify dv_flow_en=0,
and the mlx5 PMD will handle the RTE flow API via the Verbs engine, which is supported by ConnectX-4.

> In another word, are there any way to activate Verb?
> And which type of flow is supported in Verb?
Please see the flow_verbs_validate() routine in mlx5_flow_verbs.c;
it shows which RTE flow items and actions are actually supported by Verbs.
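
For example - just a sketch; the VLAN pop/push actions from this series are Direct Rules only, so on the ConnectX-4 stick to items/actions that flow_verbs_validate() accepts, e.g. a plain match with the QUEUE action:

sudo ./testpmd -c 0xF -n 4 -w 03:00.0 -- -i
testpmd> flow create 0 ingress pattern eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions queue index 0 / end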

With best regards, Slava


> 
> -----------------------------------------------------------
> tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> gcc/app$ sudo ./te          stpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-
> mem 512,512 --huge-dir=/mnt/h
> uge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
> [sudo] password for tx_h-yamashita:
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device
> mlx5_          1
> 
> Interactive-mode selected
> 
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> size=2176, socke          t=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> size=2176, socke          t=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> 
> Warning! port-topology=paired and odd forward ports number, the last port
> will p          air with itself.
> 
> Configuring Port 0 (socket 0)
> Port 0: B8:59:9F:C1:4A:CE
> Checking link statuses...
> Done
> testpmd>
> testpmd>  flow create 0 ingress group 1 priority 0 pattern eth dst is
> 00:16:3e:2          e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan  / queue
> index 0 / end
> Flow rule #0 created
> testpmd>
> ---------------------------------------------------------------------------------------------
> -----------------
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > Hi, Hideyuki
> >
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Sent: Wednesday, October 30, 2019 12:46
> > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > action on VLAN header
> > >
> > > Hello Slava,
> > >
> > > Thanks for your help.
> > > I added magic phrase. with chaging PCI number with proper one in my
> env.
> >
> > > It changes situation but still result in error.
> > >
> > > I used /usertools/dpdk-setup.sh to allocate hugepage dynamically.
> > > Your help is appreciated.
> > >
> > > I think it is getting closer.
> > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > > gcc/app$
> > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem
> > > 512,512 - -huge-dir=/mnt/h uge1G --log-level port:8 -- -i
> > > --portmask=0x1 --nb-cores=2
> >
> > mlx5 PMD supports two flow engines:
> > - Verbs, this is legacy one, almost no new features are being added, just
> bug fixes,
> >   provides slow rule insertion rate, etc.
> > - Direct Rules, the new one, all new features are being added here.
> >
> > (We had one more intermediate engine  - Direct Verbs, it was dropped,
> > but prefix dv in dv_flow_en remains ??)
> >
> > Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX, ConnectX-5,
> ConnectX-6, etc.
> > Direct Rules is supported for NICs starting from ConnectX-5.
> > "dv_flow_en=1" partameter engages Direct Rules, but I see you run
> > testpmd over 03:00.0 which is ConnectX-4, not  supporting Direct Rules.
> > Please, run over ConnectX-5 you have on your host.
> >
> > As for error - it is not related to memory, rdma core just failed to
> > create the group table, because ConnectX-4 does not support DR.
> >
> > With best regards, Slava
> >
> > > --txq=16 --rxq=16
> > > EAL: Detected 48 lcore(s)
> > > EAL: Detected 2 NUMA nodes
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: Probing VFIO support...
> > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > > device
> > > mlx5_3
> > >
> > > Interactive-mode selected
> > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > size=2176, socket=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > size=2176, socket=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > >
> > > Warning! port-topology=paired and odd forward ports number, the last
> > > port will pair with itself.
> > >
> > > Configuring Port 0 (socket 0)
> > > Port 0: B8:59:9F:DB:22:20
> > > Checking link statuses...
> > > Done
> > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > testpmd> 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions
> > > testpmd> of_pop_vlan / queue index 0 / end
> > > Caught error type 1 (cause unspecified): cannot create table: Cannot
> > > allocate memory
> > >
> > >
> > > BR,
> > > Hideyuki Yamashita
> >
>
  
Hideyuki Yamashita Nov. 5, 2019, 10:26 a.m. UTC | #15
Dear Slava,

Thanks for your response.

Inputting other flows failed while some flows were created.
Please help with the following two cases.

1) I would like to detag the VLAN tag from packets that have a specific destination MAC
address, with no condition on the VLAN ID value.

testpmd> flow create 0 ingress group 1 pattern eth dst is AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan / queue index 1 / end
Caught error type 10 (item specification): VLAN cannot be empty: Invalid argument
testpmd> flow create 0 ingress group 1 pattern eth dst is AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions of_pop_vlan / queue index 1 / end
Flow rule #0 created

2) I would like to entag a VLAN tag.

testpmd> flow create 0 egress group 1 pattern eth src is BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp vlan_pcp 3 / end
Caught error type 16 (specific action): cause: 0x7ffdc9d98348, match on VLAN is required in order to set VLAN VID: Invalid argument

Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross



> > -----Original Message-----
> > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Sent: Thursday, October 31, 2019 11:52
> > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> > 
> > Dear Slava,
> > 
> > Your guess is corrrect.
> > When I put flow into Connect-X5, it was successful.
> Very nice.
> 
> > 
> > General question.
> As we know - general questions are the most hard ones to answer ??.
> 
> > Are there any way to input flow to ConnectX-4?
> As usual - with RTE flow API.  Just omit dv_flow_en, or specify dv_flow_en=0
> and mlx5 PMD will handle RTE flow API via Verbs engine, supported by ConnectX-4. 
> 
> > In another word, are there any way to activate Verb?
> > And which type of flow is supported in Verb?
> Please, see flow_verbs_validate() routine in the mlx5_flow_verbs.c,
> it shows which RTE flow items and actions are actually supported by Verbs.
> 
> With best regards, Slava
> 
> 
> > 
> > -----------------------------------------------------------
> > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > gcc/app$ sudo ./te          stpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-
> > mem 512,512 --huge-dir=/mnt/h
> > uge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
> > [sudo] password for tx_h-yamashita:
> > EAL: Detected 48 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > EAL:   probe driver: 15b3:1017 net_mlx5
> > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device
> > mlx5_          1
> > 
> > Interactive-mode selected
> > 
> > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > size=2176, socke          t=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > size=2176, socke          t=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > 
> > Warning! port-topology=paired and odd forward ports number, the last port
> > will p          air with itself.
> > 
> > Configuring Port 0 (socket 0)
> > Port 0: B8:59:9F:C1:4A:CE
> > Checking link statuses...
> > Done
> > testpmd>
> > testpmd>  flow create 0 ingress group 1 priority 0 pattern eth dst is
> > 00:16:3e:2          e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan  / queue
> > index 0 / end
> > Flow rule #0 created
> > testpmd>
> > ---------------------------------------------------------------------------------------------
> > -----------------
> > 
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> > 
> > > Hi, Hideyuki
> > >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Sent: Wednesday, October 30, 2019 12:46
> > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > Cc: dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Hello Slava,
> > > >
> > > > Thanks for your help.
> > > > I added magic phrase. with chaging PCI number with proper one in my
> > env.
> > >
> > > > It changes situation but still result in error.
> > > >
> > > > I used /usertools/dpdk-setup.sh to allocate hugepage dynamically.
> > > > Your help is appreciated.
> > > >
> > > > I think it is getting closer.
> > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > > > gcc/app$
> > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem
> > > > 512,512 - -huge-dir=/mnt/h uge1G --log-level port:8 -- -i
> > > > --portmask=0x1 --nb-cores=2
> > >
> > > mlx5 PMD supports two flow engines:
> > > - Verbs, this is legacy one, almost no new features are being added, just
> > bug fixes,
> > >   provides slow rule insertion rate, etc.
> > > - Direct Rules, the new one, all new features are being added here.
> > >
> > > (We had one more intermediate engine  - Direct Verbs, it was dropped,
> > > but prefix dv in dv_flow_en remains ??)
> > >
> > > Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX, ConnectX-5,
> > ConnectX-6, etc.
> > > Direct Rules is supported for NICs starting from ConnectX-5.
> > > "dv_flow_en=1" partameter engages Direct Rules, but I see you run
> > > testpmd over 03:00.0 which is ConnectX-4, not  supporting Direct Rules.
> > > Please, run over ConnectX-5 you have on your host.
> > >
> > > As for error - it is not related to memory, rdma core just failed to
> > > create the group table, because ConnectX-4 does not support DR.
> > >
> > > With best regards, Slava
> > >
> > > > --txq=16 --rxq=16
> > > > EAL: Detected 48 lcore(s)
> > > > EAL: Detected 2 NUMA nodes
> > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > EAL: Selected IOVA mode 'PA'
> > > > EAL: Probing VFIO support...
> > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > > > device
> > > > mlx5_3
> > > >
> > > > Interactive-mode selected
> > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > > size=2176, socket=0
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > > size=2176, socket=1
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > >
> > > > Warning! port-topology=paired and odd forward ports number, the last
> > > > port will pair with itself.
> > > >
> > > > Configuring Port 0 (socket 0)
> > > > Port 0: B8:59:9F:DB:22:20
> > > > Checking link statuses...
> > > > Done
> > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > > testpmd> 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions
> > > > testpmd> of_pop_vlan / queue index 0 / end
> > > > Caught error type 1 (cause unspecified): cannot create table: Cannot
> > > > allocate memory
> > > >
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > >
> > 
>
  
Hideyuki Yamashita Nov. 6, 2019, 11:03 a.m. UTC | #16
Dear Slava,

Additional question.
When I use testpmd from the dpdk-next-net repo, it works in general.
However, when I use DPDK 19.11-rc1, testpmd does not recognize the ConnectX-5
NIC.

Is it correct that ConnectX-5 will finally be recognized in the 19.11 release?
If yes, in which release candidate will the necessary change be merged and
available?

BR,
Hideyuki Yamashita
NTT TechnoCross


> Dear Slava,
> 
> Thanks for your response.
> 
> Inputting other flows failed while some flows are created.
> Please help on the following two cases.
> 
> 1) I would like to detag vlan tag which has specific destionation MAC
> address.  No condition about vlan id value.
> 
> testpmd> flow create 0 ingress group 1 pattern eth dst is AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan / queue index 1 / end
> Caught error type 10 (item specification): VLAN cannot be empty: Invalid argument
> testpmd> flow create 0 ingress group 1 pattern eth dst is AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions of_pop_vlan / queue index 1 / end
> Flow rule #0 created
> 
> 2) I would like to entag vlan tag
> 
> testpmd> flow create 0 egress group 1 pattern eth src is BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp vlan_pcp 3 / end
> Caught error type 16 (specific action): cause: 0x7ffdc9d98348, match on VLAN is required in order to set VLAN VID: Invalid argument
> 
> Thanks!
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 
> 
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Sent: Thursday, October 31, 2019 11:52
> > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > > VLAN header
> > > 
> > > Dear Slava,
> > > 
> > > Your guess is corrrect.
> > > When I put flow into Connect-X5, it was successful.
> > Very nice.
> > 
> > > 
> > > General question.
> > As we know - general questions are the most hard ones to answer ??.
> > 
> > > Are there any way to input flow to ConnectX-4?
> > As usual - with RTE flow API.  Just omit dv_flow_en, or specify dv_flow_en=0
> > and mlx5 PMD will handle RTE flow API via Verbs engine, supported by ConnectX-4. 
> > 
> > > In another word, are there any way to activate Verb?
> > > And which type of flow is supported in Verb?
> > Please, see flow_verbs_validate() routine in the mlx5_flow_verbs.c,
> > it shows which RTE flow items and actions are actually supported by Verbs.
> > 
> > With best regards, Slava
> > 
> > 
> > > 
> > > -----------------------------------------------------------
> > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > > gcc/app$ sudo ./te          stpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-
> > > mem 512,512 --huge-dir=/mnt/h
> > > uge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2 --txq=16 --rxq=16
> > > [sudo] password for tx_h-yamashita:
> > > EAL: Detected 48 lcore(s)
> > > EAL: Detected 2 NUMA nodes
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: Probing VFIO support...
> > > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > > EAL:   probe driver: 15b3:1017 net_mlx5
> > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device
> > > mlx5_          1
> > > 
> > > Interactive-mode selected
> > > 
> > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > size=2176, socke          t=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > size=2176, socke          t=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > 
> > > Warning! port-topology=paired and odd forward ports number, the last port
> > > will p          air with itself.
> > > 
> > > Configuring Port 0 (socket 0)
> > > Port 0: B8:59:9F:C1:4A:CE
> > > Checking link statuses...
> > > Done
> > > testpmd>
> > > testpmd>  flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > 00:16:3e:2          e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan  / queue
> > > index 0 / end
> > > Flow rule #0 created
> > > testpmd>
> > > ---------------------------------------------------------------------------------------------
> > > -----------------
> > > 
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> > > 
> > > > Hi, Hideyuki
> > > >
> > > > > -----Original Message-----
> > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > Sent: Wednesday, October 30, 2019 12:46
> > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > Cc: dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > action on VLAN header
> > > > >
> > > > > Hello Slava,
> > > > >
> > > > > Thanks for your help.
> > > > > I added magic phrase. with chaging PCI number with proper one in my
> > > env.
> > > >
> > > > > It changes situation but still result in error.
> > > > >
> > > > > I used /usertools/dpdk-setup.sh to allocate hugepage dynamically.
> > > > > Your help is appreciated.
> > > > >
> > > > > I think it is getting closer.
> > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > > > > gcc/app$
> > > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem
> > > > > 512,512 - -huge-dir=/mnt/h uge1G --log-level port:8 -- -i
> > > > > --portmask=0x1 --nb-cores=2
> > > >
> > > > mlx5 PMD supports two flow engines:
> > > > - Verbs, this is legacy one, almost no new features are being added, just
> > > bug fixes,
> > > >   provides slow rule insertion rate, etc.
> > > > - Direct Rules, the new one, all new features are being added here.
> > > >
> > > > (We had one more intermediate engine  - Direct Verbs, it was dropped,
> > > > but prefix dv in dv_flow_en remains ??)
> > > >
> > > > Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX, ConnectX-5,
> > > ConnectX-6, etc.
> > > > Direct Rules is supported for NICs starting from ConnectX-5.
> > > > "dv_flow_en=1" partameter engages Direct Rules, but I see you run
> > > > testpmd over 03:00.0 which is ConnectX-4, not  supporting Direct Rules.
> > > > Please, run over ConnectX-5 you have on your host.
> > > >
> > > > As for error - it is not related to memory, rdma core just failed to
> > > > create the group table, because ConnectX-4 does not support DR.
> > > >
> > > > With best regards, Slava
> > > >
> > > > > --txq=16 --rxq=16
> > > > > EAL: Detected 48 lcore(s)
> > > > > EAL: Detected 2 NUMA nodes
> > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > EAL: Selected IOVA mode 'PA'
> > > > > EAL: Probing VFIO support...
> > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > > > > device
> > > > > mlx5_3
> > > > >
> > > > > Interactive-mode selected
> > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > > > size=2176, socket=0
> > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > > > size=2176, socket=1
> > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > >
> > > > > Warning! port-topology=paired and odd forward ports number, the last
> > > > > port will pair with itself.
> > > > >
> > > > > Configuring Port 0 (socket 0)
> > > > > Port 0: B8:59:9F:DB:22:20
> > > > > Checking link statuses...
> > > > > Done
> > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > > > testpmd> 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions
> > > > > testpmd> of_pop_vlan / queue index 0 / end
> > > > > Caught error type 1 (cause unspecified): cannot create table: Cannot
> > > > > allocate memory
> > > > >
> > > > >
> > > > > BR,
> > > > > Hideyuki Yamashita
> > > >
> > > 
> > 
>
  
Slava Ovsiienko Nov. 6, 2019, 4:35 p.m. UTC | #17
Hi, Hideyuki

> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Wednesday, November 6, 2019 13:04
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Dear Slava,
> 
> Additional question.
> When I use testpmd in dpdk-next-net repo, it works in general.
> However when I use dpdk19.11-rc1,  testpmd does not recognize connectX-5
> NIC.

It is quite strange; it should be recognized - ConnectX-5 is the base Mellanox NIC now.
Could you please (see the sketch right after this list):
- configure "CONFIG_RTE_LIBRTE_MLX5_DEBUG=y" in ./config/common_base
- reconfigure DPDK and rebuild testpmd
- run testpmd with --log-level=99 --log-level=pmd.net.mlx5:8 (before the -- separator)
- see (and provide) the log, where it drops the eth_dev object spawning
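
Something like this - a rough sketch only, assuming the legacy make-based build tree you already use (adjust the target and paths to your setup):

# enable the mlx5 debug config option
sed -i 's/CONFIG_RTE_LIBRTE_MLX5_DEBUG=n/CONFIG_RTE_LIBRTE_MLX5_DEBUG=y/' config/common_base
# rebuild DPDK and testpmd
make install T=x86_64-native-linuxapp-gcc -j
# run testpmd with verbose PMD logging (the log-level options go before the -- separator)
sudo ./x86_64-native-linuxapp-gcc/app/testpmd --log-level=99 --log-level=pmd.net.mlx5:8 -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 -- -i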

> 
> Is it correct that ConnectX-5 will be recognized in 19.11 release finally?

It should be recognized in 19.11rc1; possibly we have some configuration issue,
let's have a look at it.

> If yes, which release candidate the necessary change will be mergerd and
> available?
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 
> > Dear Slava,
> >
> > Thanks for your response.
> >
> > Inputting other flows failed while some flows are created.
> > Please help on the following two cases.
> >
> > 1) I would like to detag vlan tag which has specific destionation MAC
> > address.  No condition about vlan id value.
> >
> > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > testpmd> AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan /
> > testpmd> queue index 1 / end
> > Caught error type 10 (item specification): VLAN cannot be empty:
> > Invalid argument
> > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > testpmd> AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions of_pop_vlan
> > testpmd> / queue index 1 / end
> > Flow rule #0 created

I'll check; possibly this validation reject is imposed by HW limitations - it requires the presence of the VLAN header
and (IIRC) a VID match. If possible, we'll fix it.

> >
> > 2) I would like to entag vlan tag
> >
> > testpmd> flow create 0 egress group 1 pattern eth src is
> > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > testpmd> vlan_pcp 3 / end
> > Caught error type 16 (specific action): cause: 0x7ffdc9d98348, match
> > on VLAN is required in order to set VLAN VID: Invalid argument
> >

It is fixed (and the patch is already merged - http://patches.dpdk.org/patch/62295/);
let's try the coming 19.11rc2. I inserted your flow successfully on the current upstream.

With best regards, Slava



> > Thanks!
> >
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> >
> >
> >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Sent: Thursday, October 31, 2019 11:52
> > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > Cc: dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Dear Slava,
> > > >
> > > > Your guess is corrrect.
> > > > When I put flow into Connect-X5, it was successful.
> > > Very nice.
> > >
> > > >
> > > > General question.
> > > As we know - general questions are the most hard ones to answer ??.
> > >
> > > > Are there any way to input flow to ConnectX-4?
> > > As usual - with RTE flow API.  Just omit dv_flow_en, or specify
> > > dv_flow_en=0 and mlx5 PMD will handle RTE flow API via Verbs engine,
> supported by ConnectX-4.
> > >
> > > > In another word, are there any way to activate Verb?
> > > > And which type of flow is supported in Verb?
> > > Please, see flow_verbs_validate() routine in the mlx5_flow_verbs.c,
> > > it shows which RTE flow items and actions are actually supported by
> Verbs.
> > >
> > > With best regards, Slava
> > >
> > >
> > > >
> > > > -----------------------------------------------------------
> > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > > > gcc/app$ sudo ./te          stpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --
> socket-
> > > > mem 512,512 --huge-dir=/mnt/h
> > > > uge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> > > > --txq=16 --rxq=16 [sudo] password for tx_h-yamashita:
> > > > EAL: Detected 48 lcore(s)
> > > > EAL: Detected 2 NUMA nodes
> > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > EAL: Selected IOVA mode 'PA'
> > > > EAL: Probing VFIO support...
> > > > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > > > EAL:   probe driver: 15b3:1017 net_mlx5
> > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> device
> > > > mlx5_          1
> > > >
> > > > Interactive-mode selected
> > > >
> > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > > size=2176, socke          t=0
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > > size=2176, socke          t=1
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > >
> > > > Warning! port-topology=paired and odd forward ports number, the last
> port
> > > > will p          air with itself.
> > > >
> > > > Configuring Port 0 (socket 0)
> > > > Port 0: B8:59:9F:C1:4A:CE
> > > > Checking link statuses...
> > > > Done
> > > > testpmd>
> > > > testpmd>  flow create 0 ingress group 1 priority 0 pattern eth dst
> > > > testpmd> is
> > > > 00:16:3e:2          e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan  /
> queue
> > > > index 0 / end
> > > > Flow rule #0 created
> > > > testpmd>
> > > > ------------------------------------------------------------------
> > > > ---------------------------
> > > > -----------------
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > > > Hi, Hideyuki
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > Sent: Wednesday, October 30, 2019 12:46
> > > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > > Cc: dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > action on VLAN header
> > > > > >
> > > > > > Hello Slava,
> > > > > >
> > > > > > Thanks for your help.
> > > > > > I added magic phrase. with chaging PCI number with proper one
> > > > > > in my
> > > > env.
> > > > >
> > > > > > It changes situation but still result in error.
> > > > > >
> > > > > > I used /usertools/dpdk-setup.sh to allocate hugepage dynamically.
> > > > > > Your help is appreciated.
> > > > > >
> > > > > > I think it is getting closer.
> > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-
> linuxapp-
> > > > > > gcc/app$
> > > > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1
> > > > > > --socket-mem
> > > > > > 512,512 - -huge-dir=/mnt/h uge1G --log-level port:8 -- -i
> > > > > > --portmask=0x1 --nb-cores=2
> > > > >
> > > > > mlx5 PMD supports two flow engines:
> > > > > - Verbs, this is legacy one, almost no new features are being
> > > > > added, just
> > > > bug fixes,
> > > > >   provides slow rule insertion rate, etc.
> > > > > - Direct Rules, the new one, all new features are being added here.
> > > > >
> > > > > (We had one more intermediate engine  - Direct Verbs, it was
> > > > > dropped, but prefix dv in dv_flow_en remains ??)
> > > > >
> > > > > Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX,
> > > > > ConnectX-5,
> > > > ConnectX-6, etc.
> > > > > Direct Rules is supported for NICs starting from ConnectX-5.
> > > > > "dv_flow_en=1" partameter engages Direct Rules, but I see you
> > > > > run testpmd over 03:00.0 which is ConnectX-4, not  supporting Direct
> Rules.
> > > > > Please, run over ConnectX-5 you have on your host.
> > > > >
> > > > > As for error - it is not related to memory, rdma core just
> > > > > failed to create the group table, because ConnectX-4 does not
> support DR.
> > > > >
> > > > > With best regards, Slava
> > > > >
> > > > > > --txq=16 --rxq=16
> > > > > > EAL: Detected 48 lcore(s)
> > > > > > EAL: Detected 2 NUMA nodes
> > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > EAL: Probing VFIO support...
> > > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port
> > > > > > 1 on device
> > > > > > mlx5_3
> > > > > >
> > > > > > Interactive-mode selected
> > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>:
> > > > > > n=171456, size=2176, socket=0
> > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>:
> > > > > > n=171456, size=2176, socket=1
> > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > >
> > > > > > Warning! port-topology=paired and odd forward ports number,
> > > > > > the last port will pair with itself.
> > > > > >
> > > > > > Configuring Port 0 (socket 0)
> > > > > > Port 0: B8:59:9F:DB:22:20
> > > > > > Checking link statuses...
> > > > > > Done
> > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth
> > > > > > testpmd> dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end
> > > > > > testpmd> actions of_pop_vlan / queue index 0 / end
> > > > > > Caught error type 1 (cause unspecified): cannot create table:
> > > > > > Cannot allocate memory
> > > > > >
> > > > > >
> > > > > > BR,
> > > > > > Hideyuki Yamashita
> > > > >
> > > >
> > >
> >
>
  
Hideyuki Yamashita Nov. 7, 2019, 4:46 a.m. UTC | #18
Hi Slava,

Thanks for your response.

1. As you pointed out, it was a configuration issue (CONFIG_RTE_LIBRTE_MLX5_DEBUG=y)!
When I turned on the configuration, 19.11-rc1 recognized the ConnectX-5
correctly.

Thanks for your help.

2. How about the question I put in my previous email
(how to create a flow that entags a VLAN tag onto a non-tagged packet)?

Thanks again.


BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki
> 
> > -----Original Message-----
> > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Sent: Wednesday, November 6, 2019 13:04
> > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> > 
> > Dear Slava,
> > 
> > Additional question.
> > When I use testpmd in dpdk-next-net repo, it works in general.
> > However when I use dpdk19.11-rc1,  testpmd does not recognize connectX-5
> > NIC.
> 
> It is quite strange, it should be, ConnectX-5 is base Mellanox NIC now.
> Could you, please:
> - configure "CONFIG_RTE_LIBRTE_MLX5_DEBUG=y" in ./config/common_base
> - reconfigure DPDK and rebuild testpmd
> - run testpmd with --log-level=99 --log-level=pmd.net.mlx5:8 (before -- separator)
> - see (and provide) the log, where it drops the eth_dev object spawning
> 
> > 
> > Is it correct that ConnectX-5 will be recognized in 19.11 release finally?
> 
> It should be recognized in 19.11rc1, possible we have some configuration issue,
> let's have a look at.
> 
> > If yes, which release candidate the necessary change will be mergerd and
> > available?
> > 
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> > 
> > 
> > > Dear Slava,
> > >
> > > Thanks for your response.
> > >
> > > Inputting other flows failed while some flows are created.
> > > Please help on the following two cases.
> > >
> > > 1) I would like to detag vlan tag which has specific destionation MAC
> > > address.  No condition about vlan id value.
> > >
> > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > testpmd> AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan /
> > > testpmd> queue index 1 / end
> > > Caught error type 10 (item specification): VLAN cannot be empty:
> > > Invalid argument
> > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > testpmd> AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions of_pop_vlan
> > > testpmd> / queue index 1 / end
> > > Flow rule #0 created
> 
> I'll check, possible this validation reject is imposed by HW limitations - it requires the VLAN header presence
> and (IIRC) VID match. If possible - we'll fix.
> 
> > >
> > > 2) I would like to entag vlan tag
> > >
> > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > testpmd> vlan_pcp 3 / end
> > > Caught error type 16 (specific action): cause: 0x7ffdc9d98348, match
> > > on VLAN is required in order to set VLAN VID: Invalid argument
> > >
> 
> It is fixed (and patch Is already merged - http://patches.dpdk.org/patch/62295/),
> let's try coming 19.11rc2. I inserted your Flow successfully on current Upstream..
> 
> With best regards, Slava
> 
> 
> 
> > > Thanks!
> > >
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> > >
> > >
> > >
> > > > > -----Original Message-----
> > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > Sent: Thursday, October 31, 2019 11:52
> > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > Cc: dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > action on VLAN header
> > > > >
> > > > > Dear Slava,
> > > > >
> > > > > Your guess is corrrect.
> > > > > When I put flow into Connect-X5, it was successful.
> > > > Very nice.
> > > >
> > > > >
> > > > > General question.
> > > > As we know - general questions are the most hard ones to answer ??.
> > > >
> > > > > Are there any way to input flow to ConnectX-4?
> > > > As usual - with RTE flow API.  Just omit dv_flow_en, or specify
> > > > dv_flow_en=0 and mlx5 PMD will handle RTE flow API via Verbs engine,
> > supported by ConnectX-4.
> > > >
> > > > > In another word, are there any way to activate Verb?
> > > > > And which type of flow is supported in Verb?
> > > > Please, see flow_verbs_validate() routine in the mlx5_flow_verbs.c,
> > > > it shows which RTE flow items and actions are actually supported by
> > Verbs.
> > > >
> > > > With best regards, Slava
> > > >
> > > >
> > > > >
> > > > > -----------------------------------------------------------
> > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-
> > > > > gcc/app$ sudo ./te          stpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --
> > socket-
> > > > > mem 512,512 --huge-dir=/mnt/h
> > > > > uge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> > > > > --txq=16 --rxq=16 [sudo] password for tx_h-yamashita:
> > > > > EAL: Detected 48 lcore(s)
> > > > > EAL: Detected 2 NUMA nodes
> > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > EAL: Selected IOVA mode 'PA'
> > > > > EAL: Probing VFIO support...
> > > > > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > > > > EAL:   probe driver: 15b3:1017 net_mlx5
> > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > device
> > > > > mlx5_          1
> > > > >
> > > > > Interactive-mode selected
> > > > >
> > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > > > size=2176, socke          t=0
> > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > > > size=2176, socke          t=1
> > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > >
> > > > > Warning! port-topology=paired and odd forward ports number, the last
> > port
> > > > > will p          air with itself.
> > > > >
> > > > > Configuring Port 0 (socket 0)
> > > > > Port 0: B8:59:9F:C1:4A:CE
> > > > > Checking link statuses...
> > > > > Done
> > > > > testpmd>
> > > > > testpmd>  flow create 0 ingress group 1 priority 0 pattern eth dst
> > > > > testpmd> is
> > > > > 00:16:3e:2          e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan  /
> > queue
> > > > > index 0 / end
> > > > > Flow rule #0 created
> > > > > testpmd>
> > > > > ------------------------------------------------------------------
> > > > > ---------------------------
> > > > > -----------------
> > > > >
> > > > > BR,
> > > > > Hideyuki Yamashita
> > > > > NTT TechnoCross
> > > > >
> > > > > > Hi, Hideyuki
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > Sent: Wednesday, October 30, 2019 12:46
> > > > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > > > Cc: dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > > action on VLAN header
> > > > > > >
> > > > > > > Hello Slava,
> > > > > > >
> > > > > > > Thanks for your help.
> > > > > > > I added magic phrase. with chaging PCI number with proper one
> > > > > > > in my
> > > > > env.
> > > > > >
> > > > > > > It changes situation but still result in error.
> > > > > > >
> > > > > > > I used /usertools/dpdk-setup.sh to allocate hugepage dynamically.
> > > > > > > Your help is appreciated.
> > > > > > >
> > > > > > > I think it is getting closer.
> > > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-
> > linuxapp-
> > > > > > > gcc/app$
> > > > > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1
> > > > > > > --socket-mem
> > > > > > > 512,512 - -huge-dir=/mnt/h uge1G --log-level port:8 -- -i
> > > > > > > --portmask=0x1 --nb-cores=2
> > > > > >
> > > > > > mlx5 PMD supports two flow engines:
> > > > > > - Verbs, this is legacy one, almost no new features are being
> > > > > > added, just
> > > > > bug fixes,
> > > > > >   provides slow rule insertion rate, etc.
> > > > > > - Direct Rules, the new one, all new features are being added here.
> > > > > >
> > > > > > (We had one more intermediate engine  - Direct Verbs, it was
> > > > > > dropped, but prefix dv in dv_flow_en remains ??)
> > > > > >
> > > > > > Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX,
> > > > > > ConnectX-5,
> > > > > ConnectX-6, etc.
> > > > > > Direct Rules is supported for NICs starting from ConnectX-5.
> > > > > > "dv_flow_en=1" partameter engages Direct Rules, but I see you
> > > > > > run testpmd over 03:00.0 which is ConnectX-4, not  supporting Direct
> > Rules.
> > > > > > Please, run over ConnectX-5 you have on your host.
> > > > > >
> > > > > > As for error - it is not related to memory, rdma core just
> > > > > > failed to create the group table, because ConnectX-4 does not
> > support DR.
> > > > > >
> > > > > > With best regards, Slava
> > > > > >
> > > > > > > --txq=16 --rxq=16
> > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > EAL: Probing VFIO support...
> > > > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port
> > > > > > > 1 on device
> > > > > > > mlx5_3
> > > > > > >
> > > > > > > Interactive-mode selected
> > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>:
> > > > > > > n=171456, size=2176, socket=0
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>:
> > > > > > > n=171456, size=2176, socket=1
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > >
> > > > > > > Warning! port-topology=paired and odd forward ports number,
> > > > > > > the last port will pair with itself.
> > > > > > >
> > > > > > > Configuring Port 0 (socket 0)
> > > > > > > Port 0: B8:59:9F:DB:22:20
> > > > > > > Checking link statuses...
> > > > > > > Done
> > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth
> > > > > > > testpmd> dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 / end
> > > > > > > testpmd> actions of_pop_vlan / queue index 0 / end
> > > > > > > Caught error type 1 (cause unspecified): cannot create table:
> > > > > > > Cannot allocate memory
> > > > > > >
> > > > > > >
> > > > > > > BR,
> > > > > > > Hideyuki Yamashita
> > > > > >
> > > > >
> > > >
> > >
> > 
>
  
Slava Ovsiienko Nov. 7, 2019, 6:01 a.m. UTC | #19
Hi, Hideyuki

> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Thursday, November 7, 2019 6:46
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Hi Slava,
> 
> Thanks for your response.
> 
> 1. As you pointed out, it was configuration issue
> (CONFIG_RTE_LIBRTE_MLX5_DEBUG=y)!
> When I turned out the configuration, 19.11 rc1 recognized Connect-X5
> corrcetly.
No-no, it is not the configuration; this just enables debug features and is helpful to locate
the reason why ConnectX-5 was not detected on your setup. In a release product, of course,
CONFIG_RTE_LIBRTE_MLX5_DEBUG must be "n".
Or was "CONFIG_RTE_LIBRTE_MLX5_PMD=y" just missing?
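
For reference, a minimal sketch of the relevant lines in ./config/common_base for a
make-based 19.11 build (assuming the other defaults are left untouched; check your
DPDK version for the exact option names):

  CONFIG_RTE_LIBRTE_MLX5_PMD=y
  CONFIG_RTE_LIBRTE_MLX5_DEBUG=n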

> 
> Thanks for your help.
> 
> 2. How about the question I put in my previouse email (how to create flow
> for entag VLAN tag on not-tagged packet)

I'm sorry, I did not express my answer in a clear way.
This issue is fixed; your entagging flow can now be created successfully, I rechecked.

Now it works:

> > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > > > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > testpmd> vlan_pcp 3 / end

Please take 19.11rc2 (coming on Friday) and try.

With best regards, Slava

> 
> Thanks again.
> 
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > Hi, Hideyuki
> >
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Sent: Wednesday, November 6, 2019 13:04
> > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > action on VLAN header
> > >
> > > Dear Slava,
> > >
> > > Additional question.
> > > When I use testpmd in dpdk-next-net repo, it works in general.
> > > However when I use dpdk19.11-rc1,  testpmd does not recognize
> > > connectX-5 NIC.
> >
> > It is quite strange, it should be, ConnectX-5 is base Mellanox NIC now.
> > Could you, please:
> > - configure "CONFIG_RTE_LIBRTE_MLX5_DEBUG=y" in
> ./config/common_base
> > - reconfigure DPDK and rebuild testpmd
> > - run testpmd with --log-level=99 --log-level=pmd.net.mlx5:8 (before
> > -- separator)
> > - see (and provide) the log, where it drops the eth_dev object
> > spawning
> >
> > >
> > > Is it correct that ConnectX-5 will be recognized in 19.11 release finally?
> >
> > It should be recognized in 19.11rc1, possible we have some
> > configuration issue, let's have a look at.
> >
> > > If yes, which release candidate the necessary change will be mergerd
> > > and available?
> > >
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> > >
> > >
> > > > Dear Slava,
> > > >
> > > > Thanks for your response.
> > > >
> > > > Inputting other flows failed while some flows are created.
> > > > Please help on the following two cases.
> > > >
> > > > 1) I would like to detag vlan tag which has specific destionation
> > > > MAC address.  No condition about vlan id value.
> > > >
> > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > testpmd> AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan
> > > > testpmd> / queue index 1 / end
> > > > Caught error type 10 (item specification): VLAN cannot be empty:
> > > > Invalid argument
> > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > testpmd> AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions
> > > > testpmd> of_pop_vlan / queue index 1 / end
> > > > Flow rule #0 created
> >
> > I'll check, possible this validation reject is imposed by HW
> > limitations - it requires the VLAN header presence and (IIRC) VID match. If
> possible - we'll fix.
> >
> > > >
> > > > 2) I would like to entag vlan tag
> > > >
> > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > > > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > testpmd> vlan_pcp 3 / end
> > > > Caught error type 16 (specific action): cause: 0x7ffdc9d98348,
> > > > match on VLAN is required in order to set VLAN VID: Invalid
> > > > argument
> > > >
> >
> > It is fixed (and patch Is already merged - http://patches.dpdk.org/patch/62295/),
> > let's try coming 19.11rc2. I inserted your Flow successfully on current Upstream..
> >
> > With best regards, Slava
> >
> >
> >
> > > > Thanks!
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > >
> > > >
> > > > > > -----Original Message-----
> > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > Sent: Thursday, October 31, 2019 11:52
> > > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > > Cc: dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > action on VLAN header
> > > > > >
> > > > > > Dear Slava,
> > > > > >
> > > > > > Your guess is corrrect.
> > > > > > When I put flow into Connect-X5, it was successful.
> > > > > Very nice.
> > > > >
> > > > > >
> > > > > > General question.
> > > > > As we know - general questions are the most hard ones to answer ??.
> > > > >
> > > > > > Are there any way to input flow to ConnectX-4?
> > > > > As usual - with RTE flow API.  Just omit dv_flow_en, or specify
> > > > > dv_flow_en=0 and mlx5 PMD will handle RTE flow API via Verbs
> > > > > engine,
> > > supported by ConnectX-4.
> > > > >
> > > > > > In another word, are there any way to activate Verb?
> > > > > > And which type of flow is supported in Verb?
> > > > > Please, see flow_verbs_validate() routine in the
> > > > > mlx5_flow_verbs.c, it shows which RTE flow items and actions are
> > > > > actually supported by
> > > Verbs.
> > > > >
> > > > > With best regards, Slava
> > > > >
> > > > >
> > > > > >
> > > > > > -----------------------------------------------------------
> > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > > > > > sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-mem 512,512
> > > > > > --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> > > > > > --txq=16 --rxq=16 [sudo] password for tx_h-yamashita:
> > > > > > EAL: Detected 48 lcore(s)
> > > > > > EAL: Detected 2 NUMA nodes
> > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > EAL: Probing VFIO support...
> > > > > > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > > > > > EAL:   probe driver: 15b3:1017 net_mlx5
> > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > > > > > device mlx5_1
> > > > > >
> > > > > > Interactive-mode selected
> > > > > >
> > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > > > > size=2176, socket=0
> > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > > > > size=2176, socket=1
> > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > >
> > > > > > Warning! port-topology=paired and odd forward ports number, the last
> > > > > > port will pair with itself.
> > > > > >
> > > > > > Configuring Port 0 (socket 0)
> > > > > > Port 0: B8:59:9F:C1:4A:CE
> > > > > > Checking link statuses...
> > > > > > Done
> > > > > > testpmd>
> > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > > > > 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue
> > > > > > index 0 / end
> > > > > > Flow rule #0 created
> > > > > > testpmd>
> > > > > > --------------------------------------------------------------
> > > > > > ----
> > > > > > ---------------------------
> > > > > > -----------------
> > > > > >
> > > > > > BR,
> > > > > > Hideyuki Yamashita
> > > > > > NTT TechnoCross
> > > > > >
> > > > > > > Hi, Hideyuki
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > > Sent: Wednesday, October 30, 2019 12:46
> > > > > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > > > > Cc: dev@dpdk.org
> > > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for
> > > > > > > > flow action on VLAN header
> > > > > > > >
> > > > > > > > Hello Slava,
> > > > > > > >
> > > > > > > > Thanks for your help.
> > > > > > > > I added magic phrase. with chaging PCI number with proper
> > > > > > > > one in my
> > > > > > env.
> > > > > > >
> > > > > > > > It changes situation but still result in error.
> > > > > > > >
> > > > > > > > I used /usertools/dpdk-setup.sh to allocate hugepage
> dynamically.
> > > > > > > > Your help is appreciated.
> > > > > > > >
> > > > > > > > I think it is getting closer.
> > > > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-
> > > linuxapp-
> > > > > > > > gcc/app$
> > > > > > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1
> > > > > > > > --socket-mem
> > > > > > > > 512,512 - -huge-dir=/mnt/h uge1G --log-level port:8 -- -i
> > > > > > > > --portmask=0x1 --nb-cores=2
> > > > > > >
> > > > > > > mlx5 PMD supports two flow engines:
> > > > > > > - Verbs, this is legacy one, almost no new features are
> > > > > > > being added, just
> > > > > > bug fixes,
> > > > > > >   provides slow rule insertion rate, etc.
> > > > > > > - Direct Rules, the new one, all new features are being added
> here.
> > > > > > >
> > > > > > > (We had one more intermediate engine  - Direct Verbs, it was
> > > > > > > dropped, but prefix dv in dv_flow_en remains ??)
> > > > > > >
> > > > > > > Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX,
> > > > > > > ConnectX-5,
> > > > > > ConnectX-6, etc.
> > > > > > > Direct Rules is supported for NICs starting from ConnectX-5.
> > > > > > > "dv_flow_en=1" partameter engages Direct Rules, but I see
> > > > > > > you run testpmd over 03:00.0 which is ConnectX-4, not
> > > > > > > supporting Direct
> > > Rules.
> > > > > > > Please, run over ConnectX-5 you have on your host.
> > > > > > >
> > > > > > > As for error - it is not related to memory, rdma core just
> > > > > > > failed to create the group table, because ConnectX-4 does
> > > > > > > not
> > > support DR.
> > > > > > >
> > > > > > > With best regards, Slava
> > > > > > >
> > > > > > > > --txq=16 --rxq=16
> > > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > > EAL: Probing VFIO support...
> > > > > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx
> > > > > > > > port
> > > > > > > > 1 on device
> > > > > > > > mlx5_3
> > > > > > > >
> > > > > > > > Interactive-mode selected
> > > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>:
> > > > > > > > n=171456, size=2176, socket=0
> > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>:
> > > > > > > > n=171456, size=2176, socket=1
> > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > >
> > > > > > > > Warning! port-topology=paired and odd forward ports
> > > > > > > > number, the last port will pair with itself.
> > > > > > > >
> > > > > > > > Configuring Port 0 (socket 0) Port 0: B8:59:9F:DB:22:20
> > > > > > > > Checking link statuses...
> > > > > > > > Done
> > > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern
> > > > > > > > testpmd> eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 /
> > > > > > > > testpmd> end actions of_pop_vlan / queue index 0 / end
> > > > > > > > Caught error type 1 (cause unspecified): cannot create table:
> > > > > > > > Cannot allocate memory
> > > > > > > >
> > > > > > > >
> > > > > > > > BR,
> > > > > > > > Hideyuki Yamashita
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
  
Hideyuki Yamashita Nov. 7, 2019, 11:02 a.m. UTC | #20
Hello Slava,

About 1, when I turned on "CONFIG_RTE_LIBRTE_MLX5_PMD=y" it worked.
About 2, I used the latest dpdk-next-net; creating a flow to entag a VLAN
was successful, as follows:

Configuring Port 0 (socket 0)
Port 0: B8:59:9F:C1:4A:CE
Configuring Port 1 (socket 0)
Port 1: B8:59:9F:C1:4A:CF
Checking link statuses...
Done
testpmd> flow create 0 egress group 1 pattern eth src is BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp vlan_pcp 3 / end
Flow rule #0 created
testpmd> flow create 0 egress group 0 pattern eth
 dst [TOKEN]: destination MAC
 src [TOKEN]: source MAC
 type [TOKEN]: EtherType
 / [TOKEN]: specify next pattern item
testpmd> flow create 0 egress group 0 pattern eth / a
 any [TOKEN]: match any protocol for the current layer
 arp_eth_ipv4 [TOKEN]: match ARP header for Ethernet/IPv4
testpmd> flow create 0 egress group 0 pattern eth / end actions jump group 1
Bad arguments
testpmd> flow create 0 egress group 0 pattern eth / end actions jump group 1 / end
Flow rule #1 created

In short, my questions are resolved!
Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki

> > 1. As you pointed out, it was configuration issue
> > (CONFIG_RTE_LIBRTE_MLX5_DEBUG=y)!
> > When I turned out the configuration, 19.11 rc1 recognized Connect-X5
> > corrcetly.
> No-no, it is not configuration, this just enables debug features and Is helpful to locate
> the reason why ConnectX-5 was not detected on your setup. In release product, of coarse,
> the CONFIG_RTE_LIBRTE_MLX5_DEBUG must be "n"
> Or was it just missed "CONFIG_RTE_LIBRTE_MLX5_PMD=y" ?
> 
> > 
> > Thanks for your help.
> > 
> > 2. How about the question I put in my previouse email (how to create flow
> > for entag VLAN tag on not-tagged packet)
> 
> I'm sorry, I did not express my answer in clear way.
> This issue is fixed, now you entagging Flow can be created successfully, I rechecked.
> 
> Now it works:
> 
> > > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > > > > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > > testpmd> vlan_pcp 3 / end
> 
> Please, take (coming on Friday) 19.11rc2 and try.
> 
> With best regards, Slava



> > 
> > Thanks again.
> > 
> > 
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> > 
> > > Hi, Hideyuki
> > >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Sent: Wednesday, November 6, 2019 13:04
> > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > Cc: dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Dear Slava,
> > > >
> > > > Additional question.
> > > > When I use testpmd in dpdk-next-net repo, it works in general.
> > > > However when I use dpdk19.11-rc1,  testpmd does not recognize
> > > > connectX-5 NIC.
> > >
> > > It is quite strange, it should be, ConnectX-5 is base Mellanox NIC now.
> > > Could you, please:
> > > - configure "CONFIG_RTE_LIBRTE_MLX5_DEBUG=y" in
> > ./config/common_base
> > > - reconfigure DPDK and rebuild testpmd
> > > - run testpmd with --log-level=99 --log-level=pmd.net.mlx5:8 (before
> > > -- separator)
> > > - see (and provide) the log, where it drops the eth_dev object
> > > spawning
> > >
> > > >
> > > > Is it correct that ConnectX-5 will be recognized in 19.11 release finally?
> > >
> > > It should be recognized in 19.11rc1, possible we have some
> > > configuration issue, let's have a look at.
> > >
> > > > If yes, which release candidate the necessary change will be mergerd
> > > > and available?
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > >
> > > > > Dear Slava,
> > > > >
> > > > > Thanks for your response.
> > > > >
> > > > > Inputting other flows failed while some flows are created.
> > > > > Please help on the following two cases.
> > > > >
> > > > > 1) I would like to detag vlan tag which has specific destionation
> > > > > MAC address.  No condition about vlan id value.
> > > > >
> > > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > > testpmd> AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan
> > > > > testpmd> / queue index 1 / end
> > > > > Caught error type 10 (item specification): VLAN cannot be empty:
> > > > > Invalid argument
> > > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > > testpmd> AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions
> > > > > testpmd> of_pop_vlan / queue index 1 / end
> > > > > Flow rule #0 created
> > >
> > > I'll check, possible this validation reject is imposed by HW
> > > limitations - it requires the VLAN header presence and (IIRC) VID match. If
> > possible - we'll fix.
> > >
> > > > >
> > > > > 2) I would like to entag vlan tag
> > > > >
> > > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > > > > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > > testpmd> vlan_pcp 3 / end
> > > > > Caught error type 16 (specific action): cause: 0x7ffdc9d98348,
> > > > > match on VLAN is required in order to set VLAN VID: Invalid
> > > > > argument
> > > > >
> > >
> > > It is fixed (and patch Is already merged - http://patches.dpdk.org/patch/62295/),
> > > let's try coming 19.11rc2. I inserted your Flow successfully on current Upstream..
> > >
> > > With best regards, Slava
> > >
> > >
> > >
> > > > > Thanks!
> > > > >
> > > > > BR,
> > > > > Hideyuki Yamashita
> > > > > NTT TechnoCross
> > > > >
> > > > >
> > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > Sent: Thursday, October 31, 2019 11:52
> > > > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > > > Cc: dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > > action on VLAN header
> > > > > > >
> > > > > > > Dear Slava,
> > > > > > >
> > > > > > > Your guess is corrrect.
> > > > > > > When I put flow into Connect-X5, it was successful.
> > > > > > Very nice.
> > > > > >
> > > > > > >
> > > > > > > General question.
> > > > > > As we know - general questions are the most hard ones to answer ??.
> > > > > >
> > > > > > > Are there any way to input flow to ConnectX-4?
> > > > > > As usual - with RTE flow API.  Just omit dv_flow_en, or specify
> > > > > > dv_flow_en=0 and mlx5 PMD will handle RTE flow API via Verbs
> > > > > > engine,
> > > > supported by ConnectX-4.
> > > > > >
> > > > > > > In another word, are there any way to activate Verb?
> > > > > > > And which type of flow is supported in Verb?
> > > > > > Please, see flow_verbs_validate() routine in the
> > > > > > mlx5_flow_verbs.c, it shows which RTE flow items and actions are
> > > > > > actually supported by
> > > > Verbs.
> > > > > >
> > > > > > With best regards, Slava
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > -----------------------------------------------------------
> > > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > > > > > > sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-mem 512,512
> > > > > > > --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> > > > > > > --txq=16 --rxq=16 [sudo] password for tx_h-yamashita:
> > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > EAL: Probing VFIO support...
> > > > > > > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > > > > > > EAL:   probe driver: 15b3:1017 net_mlx5
> > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > > > > > > device mlx5_1
> > > > > > >
> > > > > > > Interactive-mode selected
> > > > > > >
> > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > > > > > size=2176, socket=0
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > > > > > size=2176, socket=1
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > >
> > > > > > > Warning! port-topology=paired and odd forward ports number, the last
> > > > > > > port will pair with itself.
> > > > > > >
> > > > > > > Configuring Port 0 (socket 0)
> > > > > > > Port 0: B8:59:9F:C1:4A:CE
> > > > > > > Checking link statuses...
> > > > > > > Done
> > > > > > > testpmd>
> > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > > > > > 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan / queue
> > > > > > > index 0 / end
> > > > > > > Flow rule #0 created
> > > > > > > testpmd>
> > > > > > > --------------------------------------------------------------
> > > > > > > ----
> > > > > > > ---------------------------
> > > > > > > -----------------
> > > > > > >
> > > > > > > BR,
> > > > > > > Hideyuki Yamashita
> > > > > > > NTT TechnoCross
> > > > > > >
> > > > > > > > Hi, Hideyuki
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > > > > > > Sent: Wednesday, October 30, 2019 12:46
> > > > > > > > > To: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > > > > > > Cc: dev@dpdk.org
> > > > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for
> > > > > > > > > flow action on VLAN header
> > > > > > > > >
> > > > > > > > > Hello Slava,
> > > > > > > > >
> > > > > > > > > Thanks for your help.
> > > > > > > > > I added magic phrase. with chaging PCI number with proper
> > > > > > > > > one in my
> > > > > > > env.
> > > > > > > >
> > > > > > > > > It changes situation but still result in error.
> > > > > > > > >
> > > > > > > > > I used /usertools/dpdk-setup.sh to allocate hugepage
> > dynamically.
> > > > > > > > > Your help is appreciated.
> > > > > > > > >
> > > > > > > > > I think it is getting closer.
> > > > > > > > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-
> > > > linuxapp-
> > > > > > > > > gcc/app$
> > > > > > > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1
> > > > > > > > > --socket-mem
> > > > > > > > > 512,512 - -huge-dir=/mnt/h uge1G --log-level port:8 -- -i
> > > > > > > > > --portmask=0x1 --nb-cores=2
> > > > > > > >
> > > > > > > > mlx5 PMD supports two flow engines:
> > > > > > > > - Verbs, this is legacy one, almost no new features are
> > > > > > > > being added, just
> > > > > > > bug fixes,
> > > > > > > >   provides slow rule insertion rate, etc.
> > > > > > > > - Direct Rules, the new one, all new features are being added
> > here.
> > > > > > > >
> > > > > > > > (We had one more intermediate engine  - Direct Verbs, it was
> > > > > > > > dropped, but prefix dv in dv_flow_en remains ??)
> > > > > > > >
> > > > > > > > Verbs are supported over all NICs - ConnectX-4,ConnectX-4LX,
> > > > > > > > ConnectX-5,
> > > > > > > ConnectX-6, etc.
> > > > > > > > Direct Rules is supported for NICs starting from ConnectX-5.
> > > > > > > > "dv_flow_en=1" partameter engages Direct Rules, but I see
> > > > > > > > you run testpmd over 03:00.0 which is ConnectX-4, not
> > > > > > > > supporting Direct
> > > > Rules.
> > > > > > > > Please, run over ConnectX-5 you have on your host.
> > > > > > > >
> > > > > > > > As for error - it is not related to memory, rdma core just
> > > > > > > > failed to create the group table, because ConnectX-4 does
> > > > > > > > not
> > > > support DR.
> > > > > > > >
> > > > > > > > With best regards, Slava
> > > > > > > >
> > > > > > > > > --txq=16 --rxq=16
> > > > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > > > EAL: Probing VFIO support...
> > > > > > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > > > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx
> > > > > > > > > port
> > > > > > > > > 1 on device
> > > > > > > > > mlx5_3
> > > > > > > > >
> > > > > > > > > Interactive-mode selected
> > > > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>:
> > > > > > > > > n=171456, size=2176, socket=0
> > > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>:
> > > > > > > > > n=171456, size=2176, socket=1
> > > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > > >
> > > > > > > > > Warning! port-topology=paired and odd forward ports
> > > > > > > > > number, the last port will pair with itself.
> > > > > > > > >
> > > > > > > > > Configuring Port 0 (socket 0) Port 0: B8:59:9F:DB:22:20
> > > > > > > > > Checking link statuses...
> > > > > > > > > Done
> > > > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern
> > > > > > > > > testpmd> eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 /
> > > > > > > > > testpmd> end actions of_pop_vlan / queue index 0 / end
> > > > > > > > > Caught error type 1 (cause unspecified): cannot create table:
> > > > > > > > > Cannot allocate memory
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > BR,
> > > > > > > > > Hideyuki Yamashita
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > 
>
  
Hideyuki Yamashita Nov. 14, 2019, 5:01 a.m. UTC | #21
Hello Slava,

As I reported to you, creating the flow was successful with ConnectX-5.
However, when I sent packets to the NIC from outside
the host, I had a problem.


[Case 1]
Packet distribution on multi-queue based on dst MAC address.

NIC config:
04:00.0 Mellanox ConnectX-5
05:00.0 Intel XXV710

testpmd startup param:
sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect

flow command:
testpmd> flow create 0 ingress pattern eth dst is 11:22:33:44:55:66 / end actions queue index 1 / end
Flow rule #0 created
testpmd> flow create 1 ingress pattern eth dst is 11:22:33:44:55:66 type mask 0xffff / end actions queue index 1 / end
Flow rule #0 created

Packet reception: (no VLAN tag)
port 0/queue 0: received 1 packets
  src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
port 1/queue 0: sent 1 packets
  src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN

port 1/queue 1: received 1 packets
  src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
port 0/queue 1: sent 1 packets
  src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
  ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN

Result:
Matched packet queued to queue=0 port=0. Not queue=1, port=0.

Expectation:
A packet received with dst MAC 11:22:33:44:55:66 should be
received on queue=1, port=0.

Question:
Why is the matching packet NOT enqueued into queue=1 on port=0?


[Case 2]
Packet distribution on multi-queue based on VLAN tag

testpmd startup param:
sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect

flow command:
flow create 0 ingress group 1 pattern eth / vlan vid is 100 / end actions queue index 1 / of_pop_vlan / end
flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end

Packet Reception: (VLAN100)
port 0/queue 1: received 1 packets
  src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
  ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
port 1/queue 1: sent 1 packets
  src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
  ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN

Result:
Matched packet queued to queue=1, port=0
Other packet (VLAN101 packet) discarded.

Expectation:
Matched packet queued to queue=1, port=0
Non-matched packet queued to queue=0, port=0

Question:
Is the above behavior correct?
What is the default behavior for unmatched packets (queue to queue=0 or
discard the packet)?

BR,
Hideyuki Yamashita
NTT TechnoCross
  
Hideyuki Yamashita Nov. 14, 2019, 5:06 a.m. UTC | #22
Hello Slava,

Note that I am using the following DPDK release candidate:
DPDK 19.11-rc1

BR,
HIdeyuki Yamashita
NTT TechnoCross

> Hello Slava,
> 
> As I reported to you, creating flow was successful with Connect-X5.
> However when I sent packets to the NIC from outer side of
> the host, I have problem.
> 
> 
> [Case 1]
> Packet distribution on multi-queue based on dst MAC address.
> 
> NIC config:
> 04:00.0 Mellanox Connect-X5
> 0.5.00.0 Intel XXV710
> 
> testpmd startup param:
> sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
> 
> flow command:
> testpmd> flow create 0 ingress pattern eth dst is 11:22:33:44:55:66 / end actions queue index 1 / end
> Flow rule #0 created
> testpmd> flow create 1 ingress pattern eth dst is 11:22:33:44:55:66 type mask 0xffff / end actions queue index 1 / end
> Flow rule #0 created
> 
> Packet reception:(no VLAN tag)
> port 0/queue 0: received 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> port 1/queue 0: sent 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> 
> port 1/queue 1: received 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> port 0/queue 1: sent 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> 
> Result:
> Matched packet queued to queue=0 port=0. Not queue=1, port=0.
> 
> Expectation:
> When receiving packet which has dst MAC 11:22:33:44:55:66 should be
> received on queue=1 port=0.
> 
> Question:
> Why matching packet is NOT enqueued into queue=1 on port=0?
> 
> 
> [Case 2]
> Packet distribution on multi-queue based on VLAN tag
> 
> testpmd startup param:
> sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
> 
> flow command:
> flow create 0 ingress group 1 pattern eth / vlan vid is 100 / end actions queue index 1 / of_pop_vlan / end
> flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
> 
> Packet Reception: (VLAN100)
> port 0/queue 1: received 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> port 1/queue 1: sent 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  - sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> 
> Result:
> Matched packetd queued to queue=1, port=0
> Other packet(VLAN101 packet) discarded.
> 
> Expectation:
> Matched packet queued to queue =1, port=0
> Non Matched packet queued to queue=0, port=0
> 
> Question:
> Is above behavior collect?
> What is the default behavior of unmatchedd packets (queue to queue=0 or
> discard packet)
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 
> 
>  
>
  
Slava Ovsiienko Nov. 15, 2019, 7:16 a.m. UTC | #23
Hi, Hideyuki

The frame in your report is broadcast/multicast. Please try a unicast one.
For broadcast we have a ticket; the issue is currently under investigation.
Anyway, thanks for reporting.
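
As an illustration (a sketch, not a tested rule): an address is unicast when the
least-significant bit of its first octet is clear, so an assumed example destination
such as 10:22:33:44:55:66 could be used to retry the same rule:

  testpmd> flow create 0 ingress pattern eth dst is 10:22:33:44:55:66 / end actions queue index 1 / end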

With best regards, Slava

> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> Sent: Thursday, November 14, 2019 7:02
> To: dev@dpdk.org
> Cc: Slava Ovsiienko <viacheslavo@mellanox.com>
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Hello Slava,
> 
> As I reported to you, creating flow was successful with Connect-X5.
> However when I sent packets to the NIC from outer side of the host, I have
> problem.
> 
> 
> [Case 1]
> Packet distribution on multi-queue based on dst MAC address.
> 
> NIC config:
> 04:00.0 Mellanox Connect-X5
> 0.5.00.0 Intel XXV710
> 
> testpmd startup param:
> sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w
> 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-
> filter-mode=perfect
> 
> flow command:
> testpmd> flow create 0 ingress pattern eth dst is 11:22:33:44:55:66 /
> testpmd> end actions queue index 1 / end
> Flow rule #0 created
> testpmd> flow create 1 ingress pattern eth dst is 11:22:33:44:55:66 type
> testpmd> mask 0xffff / end actions queue index 1 / end
> Flow rule #0 created
> 
> Packet reception:(no VLAN tag)
> port 0/queue 0: received 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 0: sent 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> 
> port 1/queue 1: received 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw
> ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 0/queue 1: sent 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw
> ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> 
> Result:
> Matched packet queued to queue=0 port=0. Not queue=1, port=0.
> 
> Expectation:
> When receiving packet which has dst MAC 11:22:33:44:55:66 should be
> received on queue=1 port=0.
> 
> Question:
> Why matching packet is NOT enqueued into queue=1 on port=0?
> 
> 
> [Case 2]
> Packet distribution on multi-queue based on VLAN tag
> 
> testpmd startup param:
> sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w
> 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-
> filter-mode=perfect
> 
> flow command:
> flow create 0 ingress group 1 pattern eth / vlan vid is 100 / end actions queue
> index 1 / of_pop_vlan / end flow create 0 ingress group 0 pattern eth / end
> actions jump group 1 / end
> 
> Packet Reception: (VLAN100)
> port 0/queue 1: received 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56
> - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 1: sent 1 packets
>   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56
> - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
>   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> 
> Result:
> Matched packetd queued to queue=1, port=0 Other packet(VLAN101 packet)
> discarded.
> 
> Expectation:
> Matched packet queued to queue =1, port=0 Non Matched packet queued to
> queue=0, port=0
> 
> Question:
> Is above behavior collect?
> What is the default behavior of unmatchedd packets (queue to queue=0 or
> discard packet)
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 
> 
> 
>
  
Hideyuki Yamashita Nov. 18, 2019, 6:11 a.m. UTC | #24
Hi Slava,


Thanks for your response.

1. Is the bug number the following?
https://bugs.dpdk.org/show_bug.cgi?id=96

2. I've sent packets using scapy with the following script,
and I think it is unicast ICMP.
Why did you think the packets are broadcast/multicast?
Note that I am not familiar with the testpmd log.

----------------------------------------------------------------------------------------------
from scapy.all import *

vlan_vid = 100
vlan_prio = 0
vlan_id = 0
vlan_flg = True
src_mac = "CC:CC:CC:CC:CC:CC" 
dst_mac = "11:22:33:44:55:66" 
dst_ip = "192.168.200.101" 
iface = "p7p1" 
pps = 5
loop = 5

def icmp_send():
    ls(Dot1Q)
    if vlan_flg:
        pkt = Ether(dst=dst_mac, src=src_mac)/Dot1Q(vlan=vlan_vid, prio=vlan_prio, id=vlan_id)/IP(dst=dst_ip)/ICMP()
    else:
        pkt = Ether(dst=dst_mac, src=src_mac)/IP(dst=dst_ip)/ICMP()
    pkt.show()
    sendpfast(pkt, iface=iface, pps=pps, loop=loop, file_cache=True)

icmp_send()
-----------------------------------------------------------------------------

Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki
> 
> The frame in your report is broadcast/multicast. Please, try unicast one.
> For broadcast we have the ticket, currently issue is under investigation.
> Anyway, thanks for reporting.
> 
> With best regards, Slava
> 
> > -----Original Message-----
> > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > Sent: Thursday, November 14, 2019 7:02
> > To: dev@dpdk.org
> > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>
> > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> > VLAN header
> > 
> > Hello Slava,
> > 
> > As I reported to you, creating flow was successful with Connect-X5.
> > However when I sent packets to the NIC from outer side of the host, I have
> > problem.
> > 
> > 
> > [Case 1]
> > Packet distribution on multi-queue based on dst MAC address.
> > 
> > NIC config:
> > 04:00.0 Mellanox Connect-X5
> > 0.5.00.0 Intel XXV710
> > 
> > testpmd startup param:
> > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w
> > 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-
> > filter-mode=perfect
> > 
> > flow command:
> > testpmd> flow create 0 ingress pattern eth dst is 11:22:33:44:55:66 /
> > testpmd> end actions queue index 1 / end
> > Flow rule #0 created
> > testpmd> flow create 1 ingress pattern eth dst is 11:22:33:44:55:66 type
> > testpmd> mask 0xffff / end actions queue index 1 / end
> > Flow rule #0 created
> > 
> > Packet reception:(no VLAN tag)
> > port 0/queue 0: received 1 packets
> >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
> >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 0: sent 1 packets
> >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
> >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > 
> > port 1/queue 1: received 1 packets
> >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw
> > ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
> >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 0/queue 1: sent 1 packets
> >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=60
> > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  - sw
> > ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
> >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > 
> > Result:
> > Matched packet queued to queue=0 port=0. Not queue=1, port=0.
> > 
> > Expectation:
> > When receiving packet which has dst MAC 11:22:33:44:55:66 should be
> > received on queue=1 port=0.
> > 
> > Question:
> > Why matching packet is NOT enqueued into queue=1 on port=0?
> > 
> > 
> > [Case 2]
> > Packet distribution on multi-queue based on VLAN tag
> > 
> > testpmd startup param:
> > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w
> > 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --pkt-
> > filter-mode=perfect
> > 
> > flow command:
> > flow create 0 ingress group 1 pattern eth / vlan vid is 100 / end actions queue
> > index 1 / of_pop_vlan / end flow create 0 ingress group 0 pattern eth / end
> > actions jump group 1 / end
> > 
> > Packet Reception: (VLAN100)
> > port 0/queue 1: received 1 packets
> >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56
> > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
> >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 1: sent 1 packets
> >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 - length=56
> > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG  -
> > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
> >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > 
> > Result:
> > Matched packetd queued to queue=1, port=0 Other packet(VLAN101 packet)
> > discarded.
> > 
> > Expectation:
> > Matched packet queued to queue =1, port=0 Non Matched packet queued to
> > queue=0, port=0
> > 
> > Question:
> > Is above behavior collect?
> > What is the default behavior of unmatchedd packets (queue to queue=0 or
> > discard packet)
> > 
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> > 
> > 
> > 
> > 
> >
  
Matan Azrad Nov. 18, 2019, 10:03 a.m. UTC | #25
Hi

This bit being on in the dst MAC address (the "01:00:00:00:00:00" bit) means the packet is an L2 multicast packet.
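
A small sketch (illustration only, not PMD code) of how that group/multicast bit can
be checked in Python - it is the least-significant bit of the first MAC octet:

-----------------------------------------------------------------------------
def is_l2_multicast(mac):
    # Illustrative helper, not taken from the mlx5 PMD or this patch series.
    # The I/G (group) bit is bit 0 of the first octet of the MAC address.
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x01)

print(is_l2_multicast("11:22:33:44:55:66"))  # True  -> treated as L2 multicast
print(is_l2_multicast("10:22:33:44:55:66"))  # False -> unicast
-----------------------------------------------------------------------------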

When you run the testpmd application, the multicast configuration is forwarded to the device by default.

So, you have 2 rules:
The default one, which tries to match on the above dst MAC bit and does an RSS action over all the queues.
Your rule, which tries to match on dst MAC 11:22:33:44:55:66 (the multicast bit is on) plus more fields, and sends the packet to queue 1.

So, your flow is a sub-flow of the default flow.

Since the current behavior in our driver is to put all the multicast rules at the same priority, the behavior for this case is unpredictable:
1. you can get the packet twice, once for each of the 2 rules.
2. you can get the packet only for the default RSS action.
3. you can get the packet only on queue 1, as in your rule.

Unfortunately, here you got option 1, I think (you get the packet twice in your application).

This behavior of our driver, putting the 2 rules at the same priority, is under discussion by us - maybe it will be changed later.

To work around the issue:
1. do not configure the default rules (run testpmd with --flow-isolate-all on its command line; see the example below).
2. do not configure 2 different multicast rules (even with different priorities).
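
For workaround 1, a possible invocation (a sketch reusing the command line already
posted earlier in this thread, with only --flow-isolate-all added):

  sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w 04:00.0,dv_flow_en=1 -w 05:00.0 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect --flow-isolate-all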

Enjoy, let me know if you need more help....

Matan

From: Hideyuki Yamashita
> Hi Slava,
> 
> 
> Thanks for your response.
> 
> 1. Is the bug number is the follwoing?
> https://bugs.dpdk.org/show_bug.cgi?id=96
> 
> 2.I've sent packets using scapy with the follwing script and I think it is unicast
> ICMP.
> How did you thought the packets are broadcast/muticast?
> Note that I am not familiar with log of testpmd.
> 
> ----------------------------------------------------------------------------------------------
> from scapy.all import *
> 
> vlan_vid = 100
> vlan_prio = 0
> vlan_id = 0
> vlan_flg = True
> src_mac = "CC:CC:CC:CC:CC:CC"
> dst_mac = "11:22:33:44:55:66"
> dst_ip = "192.168.200.101"
> iface = "p7p1"
> pps = 5
> loop = 5
> 
> def icmp_send():
>     ls(Dot1Q)
>     if vlan_flg:
>         pkt = Ether(dst=dst_mac, src=src_mac)/Dot1Q(vlan=vlan_vid,
> prio=vlan_prio, id=vlan_id)/IP(dst=dst_ip)/ICMP()
>     else:
>         pkt = Ether(dst=dst_mac, src=src_mac)/IP(dst=dst_ip)/ICMP()
>     pkt.show()
>     sendpfast(pkt, iface=iface, pps=pps, loop=loop, file_cache=True)
> 
> icmp_send()
> -----------------------------------------------------------------------------
> 
> Thanks!
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > Hi, Hideyuki
> >
> > The frame in your report is broadcast/multicast. Please, try unicast one.
> > For broadcast we have the ticket, currently issue is under investigation.
> > Anyway, thanks for reporting.
> >
> > With best regards, Slava
> >
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > Sent: Thursday, November 14, 2019 7:02
> > > To: dev@dpdk.org
> > > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > action on VLAN header
> > >
> > > Hello Slava,
> > >
> > > As I reported to you, creating flow was successful with Connect-X5.
> > > However when I sent packets to the NIC from outer side of the host,
> > > I have problem.
> > >
> > >
> > > [Case 1]
> > > Packet distribution on multi-queue based on dst MAC address.
> > >
> > > NIC config:
> > > 04:00.0 Mellanox Connect-X5
> > > 0.5.00.0 Intel XXV710
> > >
> > > testpmd startup param:
> > > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w
> > > 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --
> pkt-
> > > filter-mode=perfect
> > >
> > > flow command:
> > > testpmd> flow create 0 ingress pattern eth dst is 11:22:33:44:55:66
> > > testpmd> / end actions queue index 1 / end
> > > Flow rule #0 created
> > > testpmd> flow create 1 ingress pattern eth dst is 11:22:33:44:55:66
> > > testpmd> type mask 0xffff / end actions queue index 1 / end
> > > Flow rule #0 created
> > >
> > > Packet reception:(no VLAN tag)
> > > port 0/queue 0: received 1 packets
> > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > length=60
> > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> L4_NONFRAG  -
> > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
> > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 0: sent 1 packets
> > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > length=60
> > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> L4_NONFRAG  -
> > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
> > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > >
> > > port 1/queue 1: received 1 packets
> > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > length=60
> > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  -
> sw
> > > ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
> > >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 0/queue 1: sent 1 packets
> > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > length=60
> > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  -
> sw
> > > ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
> > >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > >
> > > Result:
> > > Matched packet queued to queue=0 port=0. Not queue=1, port=0.
> > >
> > > Expectation:
> > > When receiving packet which has dst MAC 11:22:33:44:55:66 should be
> > > received on queue=1 port=0.
> > >
> > > Question:
> > > Why matching packet is NOT enqueued into queue=1 on port=0?
> > >
> > >
> > > [Case 2]
> > > Packet distribution on multi-queue based on VLAN tag
> > >
> > > testpmd startup param:
> > > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w
> > > 04:00.0,dv_flow_en=1  -w 05:00.0    -- -i --rxq=16 --txq=16 --disable-rss --
> pkt-
> > > filter-mode=perfect
> > >
> > > flow command:
> > > flow create 0 ingress group 1 pattern eth / vlan vid is 100 / end
> > > actions queue index 1 / of_pop_vlan / end flow create 0 ingress
> > > group 0 pattern eth / end actions jump group 1 / end
> > >
> > > Packet Reception: (VLAN100)
> > > port 0/queue 1: received 1 packets
> > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > length=56
> > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> L4_NONFRAG  -
> > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
> > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 1: sent 1 packets
> > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > length=56
> > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> L4_NONFRAG  -
> > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
> > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > >
> > > Result:
> > > Matched packet queued to queue=1, port=0. Other packet (VLAN101
> > > packet) discarded.
> > >
> > > Expectation:
> > > Matched packet queued to queue=1, port=0. Non-matched packet queued
> > > to queue=0, port=0.
> > >
> > > Question:
> > > Is the above behavior correct?
> > > What is the default behavior for unmatched packets (queue to queue=0,
> > > or discard the packet)?
> > >
> > > BR,
> > > Hideyuki Yamashita
> > > NTT TechnoCross
> > >
> > >
> > >
> > >
> > >
>
  
Hideyuki Yamashita Nov. 19, 2019, 11:36 a.m. UTC | #26
Hello Matan and Slava,

Thanks for your quick response.

1. What you were saying is correct.
When I create a flow with dst mac 10:22:33:44:55:66 instead of
11:22:33:44:55:66, received packets are queued to the specified queue.

Thanks for your advice!
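
For reference, the distinction comes down to the I/G bit, the least-significant
bit of the first octet of the destination MAC, as pointed out in the quoted reply
below. A minimal Python sketch (not part of the original report) showing why
11:22:33:44:55:66 is treated as multicast while 10:22:33:44:55:66 is not:

    # The I/G (multicast) bit is the least-significant bit of the first octet:
    # 0x11 (from 11:22:33:44:55:66) has it set, 0x10 (from 10:22:33:44:55:66) does not.
    def is_multicast_mac(mac: str) -> bool:
        first_octet = int(mac.split(":")[0], 16)
        return bool(first_octet & 0x01)

    print(is_multicast_mac("11:22:33:44:55:66"))  # True  -> hits the multicast rules
    print(is_multicast_mac("10:22:33:44:55:66"))  # False -> plain unicast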

Q1. What is the problem with a broadcast/multicast address?
Q2. What is the bug number on Bugzilla of DPDK?
Q3. What is the default behavior for unmatched packets?
Are they discarded, or queued to a default queue (e.g. queue=0)?

When I tested packet distribution with VLAN ID, unmatched packets
looked to be discarded.
I would like to know what the default handling is.
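
One way to avoid relying on the default handling is to add an explicit
lower-priority catch-all rule in the same group, so traffic that does not match
the VLAN rule is steered to queue 0. A hedged testpmd sketch (the priority value
is illustrative, not taken from this thread, and whether mlx5 honors it this way
would need to be verified):

    testpmd> flow create 0 ingress group 1 priority 1 pattern eth / end actions queue index 0 / end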

Thanks!

Best Regards,
Hideyuki Yamashita
NTT TechnoCross


> Hi
> 
> This bit being set in the dst mac address (mask "01:00:00:00:00:00") means the packet is an L2 multicast packet.
> 
> When you run the testpmd application, the multicast configuration is forwarded to the device by default.
> 
> So, you have 2 rules:
> The default one, which tries to match on the above dst mac bit and to do an RSS action over all the queues.
> Your rule, which tries to match on dst mac 11:22:33:44:55:66 (the multicast bit is on) and more, and to send it to queue 1.
> 
> So, your flow is a sub-flow of the default flow.
> 
> Since the current behavior in our driver is to put all the multicast rules at the same priority, the behavior for this case is unpredictable:
> 1. you can get the packet twice, once for each of the 2 rules.
> 2. you can get the packet only for the default RSS action.
> 3. you can get the packet only on queue 1, as in your rule.
> 
> Unfortunately, here, I think you got option 1 (you get the packet twice in your application).
> 
> This behavior in our driver of putting the 2 rules at the same priority is under discussion by us - it may be changed later.
> 
> To work around the issue:
> 1. do not configure the default rules (run with --flow-isolate-all on the testpmd cmdline; see the sketch below).
> 2. do not configure 2 different multicast rules (even with different priorities).
> 
> Enjoy, let me know if you need more help....
> 
> Matan
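
Workaround 1 maps directly onto the testpmd startup parameters quoted later in
this message; a sketch of the adjusted command line, adding only the
--flow-isolate-all flag to the parameters from the original report:

    sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 \
         -w 04:00.0,dv_flow_en=1 -w 05:00.0 -- -i --rxq=16 --txq=16 \
         --disable-rss --pkt-filter-mode=perfect --flow-isolate-all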
> 
> From: Hideyuki Yamashita
> > Hi Slava,
> > 
> > 
> > Thanks for your response.
> > 
> > 1. Is the bug number the following?
> > https://bugs.dpdk.org/show_bug.cgi?id=96
> > 
> > 2. I've sent packets using scapy with the following script and I think it is a unicast
> > ICMP packet.
> > Why did you think the packets are broadcast/multicast?
> > Note that I am not familiar with the log output of testpmd.
> > 
> > ----------------------------------------------------------------------------------------------
> > from scapy.all import *
> > 
> > vlan_vid = 100
> > vlan_prio = 0
> > vlan_id = 0
> > vlan_flg = True
> > src_mac = "CC:CC:CC:CC:CC:CC"
> > dst_mac = "11:22:33:44:55:66"
> > dst_ip = "192.168.200.101"
> > iface = "p7p1"
> > pps = 5
> > loop = 5
> > 
> > def icmp_send():
> >     ls(Dot1Q)
> >     if vlan_flg:
> >         pkt = Ether(dst=dst_mac, src=src_mac)/Dot1Q(vlan=vlan_vid,
> > prio=vlan_prio, id=vlan_id)/IP(dst=dst_ip)/ICMP()
> >     else:
> >         pkt = Ether(dst=dst_mac, src=src_mac)/IP(dst=dst_ip)/ICMP()
> >     pkt.show()
> >     sendpfast(pkt, iface=iface, pps=pps, loop=loop, file_cache=True)
> > 
> > icmp_send()
> > -----------------------------------------------------------------------------
> > 
> > Thanks!
> > 
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> > 
> > > Hi, Hideyuki
> > >
> > > > The frame in your report is broadcast/multicast. Please try a unicast one.
> > > > For broadcast we have a ticket; currently the issue is under investigation.
> > > Anyway, thanks for reporting.
> > >
> > > With best regards, Slava
> > >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
> > > > Sent: Thursday, November 14, 2019 7:02
> > > > To: dev@dpdk.org
> > > > Cc: Slava Ovsiienko <viacheslavo@mellanox.com>
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Hello Slava,
> > > >
> > > > > As I reported to you, creating the flow was successful with ConnectX-5.
> > > > > However, when I sent packets to the NIC from outside of the host,
> > > > > I had a problem.
> > > >
> > > >
> > > > [Case 1]
> > > > Packet distribution on multi-queue based on dst MAC address.
> > > >
> > > > NIC config:
> > > > > 04:00.0 Mellanox ConnectX-5
> > > > > 05:00.0 Intel XXV710
> > > >
> > > > testpmd startup param:
> > > > > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w 04:00.0,dv_flow_en=1 -w 05:00.0 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
> > > >
> > > > flow command:
> > > > > testpmd> flow create 0 ingress pattern eth dst is 11:22:33:44:55:66 / end actions queue index 1 / end
> > > > > Flow rule #0 created
> > > > > testpmd> flow create 1 ingress pattern eth dst is 11:22:33:44:55:66 type mask 0xffff / end actions queue index 1 / end
> > > > > Flow rule #0 created
> > > >
> > > > Packet reception:(no VLAN tag)
> > > > port 0/queue 0: received 1 packets
> > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > > length=60
> > > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> > L4_NONFRAG  -
> > > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x0
> > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 0: sent 1 packets
> > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > > length=60
> > > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> > L4_NONFRAG  -
> > > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x0
> > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > >
> > > > port 1/queue 1: received 1 packets
> > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > > length=60
> > > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  -
> > sw
> > > > ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
> > > >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> > > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 0/queue 1: sent 1 packets
> > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > > length=60
> > > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_ICMP  -
> > sw
> > > > ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
> > > >   ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
> > > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > >
> > > > Result:
> > > > Matched packet queued to queue=0 port=0. Not queue=1, port=0.
> > > >
> > > > Expectation:
> > > > > A received packet which has dst MAC 11:22:33:44:55:66 should be
> > > > > received on queue=1, port=0.
> > > >
> > > > Question:
> > > > > Why is the matching packet NOT enqueued into queue=1 on port=0?
> > > >
> > > >
> > > > [Case 2]
> > > > Packet distribution on multi-queue based on VLAN tag
> > > >
> > > > testpmd startup param:
> > > > > sudo ./testpmd -c 1ffff -n 4 --socket-mem=1024,1024 --log-level=10 -w 04:00.0,dv_flow_en=1 -w 05:00.0 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
> > > >
> > > > flow command:
> > > > > flow create 0 ingress group 1 pattern eth / vlan vid is 100 / end actions queue index 1 / of_pop_vlan / end
> > > > > flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
> > > >
> > > > Packet Reception: (VLAN100)
> > > > port 0/queue 1: received 1 packets
> > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > > length=56
> > > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> > L4_NONFRAG  -
> > > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Receive queue=0x1
> > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN port 1/queue 1: sent 1 packets
> > > >   src=CC:CC:CC:CC:CC:CC - dst=11:22:33:44:55:66 - type=0x0800 -
> > > > length=56
> > > > - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN
> > L4_NONFRAG  -
> > > > sw ptype: L2_ETHER L3_IPV4  - l2_len=14 - l3_len=20 - Send queue=0x1
> > > >   ol_flags: PKT_RX_L4_CKSUM_UNKNOWN PKT_RX_IP_CKSUM_GOOD
> > > > PKT_RX_OUTER_L4_CKSUM_UNKNOWN
> > > >
> > > > Result:
> > > > > Matched packet queued to queue=1, port=0. Other packet (VLAN101
> > > > > packet) discarded.
> > > >
> > > > Expectation:
> > > > > Matched packet queued to queue=1, port=0. Non-matched packet queued
> > > > > to queue=0, port=0.
> > > >
> > > > Question:
> > > > > Is the above behavior correct?
> > > > > What is the default behavior for unmatched packets (queue to queue=0,
> > > > > or discard the packet)?
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > >
> > > >
> > > >
> > > >
> > 
>
  
Hideyuki Yamashita Nov. 26, 2019, 7:10 a.m. UTC | #27
Hello Matan and Slava,

Thanks for your quick response.

How about the following?

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hello Matan and Slava,
> 
> Thanks for your quick response.
> 
> 1. What you were saying is correct.
> When I create a flow with dst mac 10:22:33:44:55:66 instead of
> 11:22:33:44:55:66, received packets are queued to the specified queue.
> 
> Thanks for your advice!
> 
> Q1. What is the problem with a broadcast/multicast address?
> Q2. What is the bug number on Bugzilla of DPDK?
> Q3. What is the default behavior for unmatched packets?
> Are they discarded, or queued to a default queue (e.g. queue=0)?
> 
> When I tested packet distribution with VLAN ID, unmatched packets
> looked to be discarded.
> I would like to know what the default handling is.
> 
> Thanks!
> 
> Best Regards,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 
  
Hideyuki Yamashita Dec. 4, 2019, 2:43 a.m. UTC | #28
Hello Matan and Slava,

Thanks for your quick response.

How about the following?

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hello Matan and Slava,
> 
> Thanks for your quick response.
> 
> 1. What you were saying is correct.
> When I create a flow with dst mac 10:22:33:44:55:66 instead of
> 11:22:33:44:55:66, received packets are queued to the specified queue.
> 
> Thanks for your advice!
> 
> Q1. What is the problem with a broadcast/multicast address?
> Q2. What is the bug number on Bugzilla of DPDK?
> Q3. What is the default behavior for unmatched packets?
> Are they discarded, or queued to a default queue (e.g. queue=0)?
> 
> When I tested packet distribution with VLAN ID, unmatched packets
> looked to be discarded.
> I would like to know what the default handling is.
> 
> Thanks!
> 
> Best Regards,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> 