
[v3,00/34] A new net PMD - ice

Message ID 1544598004-27099-1-git-send-email-wenzhuo.lu@intel.com (mailing list archive)

Message

Wenzhuo Lu Dec. 12, 2018, 6:59 a.m. UTC
This patch set adds support for a new net PMD for the
Intel® Ethernet Network Adapter E810, also
called ice.

Besides enabling this new NIC, the series also enables
some other features supported on this NIC,
listed below.

Basic features:
1, Basic device operations: probe, initialization, start/stop, configure, info get.
2, RX/TX queue operations: setup/release, start/stop, info get.
3, RX/TX.

HW Offload features:
1, CRC stripping/insertion.
2, L2/L3 checksum stripping/insertion.
3, PVID set.
4, TPID change.
5, TSO (LRO/RSC not supported).

Stats:
1, statistics (stats) & extended statistics (xstats).

Switch functions:
1, MAC Filter Add/Delete.
2, VLAN Filter Add/Delete.

Power saving:
1, RX interrupt mode.

Misc:
1, Interrupt for link status.
2, Firmware info query.
3, Jumbo frame support.
4, Packet type (ptype) check.
5, EEPROM check and set.

v2:
 - Fix shared lib compile issue.
 - Add meson build support.
 - Update documents.
 - Fix more checkpatch issues.

v3:
 - Removed the support of secondary process.
 - Split the base code into more patches.
 - Pass NULL to rte_zmalloc.
 - Changed some magic numbers to macros.
 - Fixed the wrong implementation of a specific bitmap.


Paul M Stillwell Jr (14):
  net/ice: Add registers for Intel(R) E800 Series NIC
  net/ice: Add basic structures
  net/ice: Add admin queue structures and commands
  net/ice: Add sideband queue info
  net/ice: Add device IDs for Intel(r) E800 Series NICs
  net/ice: Add control queue information
  net/ice: Add data center bridging (DCB)
  net/ice: Add basic transmit scheduler
  net/ice: Add virtual switch code
  net/ice: Add code to work with the NVM
  net/ice: Add common functions
  net/ice: Add various headers
  net/ice: Add protocol structures and defines
  net/ice: Add structures for RX/TX queues

Wenzhuo Lu (20):
  net/ice: add OS specific implementation
  net/ice: support device initialization
  net/ice: support device and queue ops
  net/ice: support getting device information
  net/ice: support packet type getting
  net/ice: support link update
  net/ice: support MTU setting
  net/ice: support MAC ops
  net/ice: support VLAN ops
  net/ice: support RSS
  net/ice: support RX queue interruption
  net/ice: support FW version getting
  net/ice: support EEPROM information getting
  net/ice: support statistics
  net/ice: support queue information getting
  net/ice: support basic RX/TX
  net/ice: support advance RX/TX
  net/ice: support descriptor ops
  doc: add ICE description and update release note
  net/ice: support meson build

 MAINTAINERS                              |    7 +
 config/common_base                       |    9 +
 doc/guides/nics/features/ice.ini         |   38 +
 doc/guides/nics/ice.rst                  |  101 +
 doc/guides/rel_notes/release_19_02.rst   |    4 +
 drivers/net/Makefile                     |    1 +
 drivers/net/ice/Makefile                 |   76 +
 drivers/net/ice/base/README              |   22 +
 drivers/net/ice/base/ice_adminq_cmd.h    | 1891 ++++++
 drivers/net/ice/base/ice_alloc.h         |   22 +
 drivers/net/ice/base/ice_common.c        | 3521 +++++++++++
 drivers/net/ice/base/ice_common.h        |  186 +
 drivers/net/ice/base/ice_controlq.c      | 1098 ++++
 drivers/net/ice/base/ice_controlq.h      |   97 +
 drivers/net/ice/base/ice_dcb.c           | 1385 +++++
 drivers/net/ice/base/ice_dcb.h           |  220 +
 drivers/net/ice/base/ice_devids.h        |   17 +
 drivers/net/ice/base/ice_flex_type.h     |   19 +
 drivers/net/ice/base/ice_flow.h          |    8 +
 drivers/net/ice/base/ice_hw_autogen.h    | 9815 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_lan_tx_rx.h     | 2291 +++++++
 drivers/net/ice/base/ice_nvm.c           |  387 ++
 drivers/net/ice/base/ice_osdep.h         |  524 ++
 drivers/net/ice/base/ice_protocol_type.h |  248 +
 drivers/net/ice/base/ice_sbq_cmd.h       |   93 +
 drivers/net/ice/base/ice_sched.c         | 5380 ++++++++++++++++
 drivers/net/ice/base/ice_sched.h         |  210 +
 drivers/net/ice/base/ice_status.h        |   45 +
 drivers/net/ice/base/ice_switch.c        | 2812 +++++++++
 drivers/net/ice/base/ice_switch.h        |  333 +
 drivers/net/ice/base/ice_type.h          |  869 +++
 drivers/net/ice/base/meson.build         |   30 +
 drivers/net/ice/ice_ethdev.c             | 3263 ++++++++++
 drivers/net/ice/ice_ethdev.h             |  318 +
 drivers/net/ice/ice_lan_rxtx.c           | 2898 +++++++++
 drivers/net/ice/ice_logs.h               |   45 +
 drivers/net/ice/ice_rxtx.h               |  155 +
 drivers/net/ice/meson.build              |   15 +
 drivers/net/ice/rte_pmd_ice_version.map  |    4 +
 drivers/net/meson.build                  |    1 +
 mk/rte.app.mk                            |    1 +
 41 files changed, 38459 insertions(+)
 create mode 100644 doc/guides/nics/features/ice.ini
 create mode 100644 doc/guides/nics/ice.rst
 create mode 100644 drivers/net/ice/Makefile
 create mode 100644 drivers/net/ice/base/README
 create mode 100644 drivers/net/ice/base/ice_adminq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_alloc.h
 create mode 100644 drivers/net/ice/base/ice_common.c
 create mode 100644 drivers/net/ice/base/ice_common.h
 create mode 100644 drivers/net/ice/base/ice_controlq.c
 create mode 100644 drivers/net/ice/base/ice_controlq.h
 create mode 100644 drivers/net/ice/base/ice_dcb.c
 create mode 100644 drivers/net/ice/base/ice_dcb.h
 create mode 100644 drivers/net/ice/base/ice_devids.h
 create mode 100644 drivers/net/ice/base/ice_flex_type.h
 create mode 100644 drivers/net/ice/base/ice_flow.h
 create mode 100644 drivers/net/ice/base/ice_hw_autogen.h
 create mode 100644 drivers/net/ice/base/ice_lan_tx_rx.h
 create mode 100644 drivers/net/ice/base/ice_nvm.c
 create mode 100644 drivers/net/ice/base/ice_osdep.h
 create mode 100644 drivers/net/ice/base/ice_protocol_type.h
 create mode 100644 drivers/net/ice/base/ice_sbq_cmd.h
 create mode 100644 drivers/net/ice/base/ice_sched.c
 create mode 100644 drivers/net/ice/base/ice_sched.h
 create mode 100644 drivers/net/ice/base/ice_status.h
 create mode 100644 drivers/net/ice/base/ice_switch.c
 create mode 100644 drivers/net/ice/base/ice_switch.h
 create mode 100644 drivers/net/ice/base/ice_type.h
 create mode 100644 drivers/net/ice/base/meson.build
 create mode 100644 drivers/net/ice/ice_ethdev.c
 create mode 100644 drivers/net/ice/ice_ethdev.h
 create mode 100644 drivers/net/ice/ice_lan_rxtx.c
 create mode 100644 drivers/net/ice/ice_logs.h
 create mode 100644 drivers/net/ice/ice_rxtx.h
 create mode 100644 drivers/net/ice/meson.build
 create mode 100644 drivers/net/ice/rte_pmd_ice_version.map

Comments

Varghese, Vipin Dec. 13, 2018, 6:02 a.m. UTC | #1
Hi,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Wenzhuo Lu
> Sent: Wednesday, December 12, 2018 12:30 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> This patch set adds the support of a new net PMD, Intel® Ethernet Network
> Adapters E810, also called ice.
> 
> Besides enabling this new NIC, also some other features supported on this NIC.

Can you mention the other features?

> [feature list snipped]
> 

Can you add a section to highlight the changes with "---"? This is part of http://doc.dpdk.org/guides/contributing/patches.html: 'This can be added to the cover letter or the annotations'.

> v2:
>  - Fix shared lib compile issue.
>  - Add meson build support.
>  - Update documents.
>  - Fix more checkpatch issues.
> 
> v3:
>  - Removed the support of secondary process.
>  - Splitted the base code to more patches.
>  - Pass NULL to rte_zmalloc.
>  - Changed some magic numbers to macros.
>  - Fixed the wrong implementation of a specific bitmap.

Not all comments from v2 are addressed or closed, so I have to assume you will be doing the same for v4.


Some of the items

[PATCH v2 03/20] net/ice: support device and queue ops
> > > > +
> > > > +static int
> > > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > > +	struct rte_eth_dev_data *data = dev->data;
> > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > >dev_private);
> > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > > >dev_private);
> > > > +	uint16_t nb_rxq = 0;
> > > > +	uint16_t nb_txq, i;
> > > > +	int ret;
> > > > +
> > > > +	ICE_PROC_SECONDARY_CHECK;
> > >
> > > Device start is not supported, but how is this differentiated from 
> > > primary configured device vs secondary configured device.
> > >
> > > Ie: primary uses black list '-b BB:DD:F' while secondary uses '-w 
> > > BB:DD:F'. In this case since we are checking process type this 
> > > will return without
> > start?
> Two updates with respect to your comment:
> 1. Tools and applications like dpdk-procinfo will no longer be able to pull data, since you are asking to blacklist the device.
> 2. If there are functions which need to be shared, like the primary using rx-0 and tx-0 and the secondary using rx-1 and tx-1, how do we make this work?


[PATCH v2 01/20] net/ice: add base code
> Note: In version 1 I enquired about unit or DTS validation for PMD. Is 
> this still holding good?
Yes, it's planned and ongoing.

[PATCH v2 02/20] net/ice: support device initialization
> +# Compile burst-oriented ICE PMD driver # CONFIG_RTE_LIBRTE_ICE_PMD=y

Based on https://patches.dpdk.org/patch/48488/ it is suggested to configure this option. But here it is already set to 'y'. Is this correct? If yes, can you update https://patches.dpdk.org/patch/48488/?

 [PATCH v2 20/20] net/ice: support meson build
> > > > Should not meson build option be add start. That is in patch 
> > > > 1/20 so compile options does not fail?
> > > It will not fail. Enabling the compile earlier only means the code 
> > > can be
> > compiled.
> > > But, to use this device we do need the whole patch set. From this 
> > > point of view, compiling it at the end maybe better.
> > Thanks for update, so will 'meson-build' success if apply 3 patches?
> Sure, meson build will not be broken by any one of these patches. Only 
> until this patch, what built by meson can support ice.
Thanks for confirmation that you have tried './devtools/test-meson-builds.sh' and the intermediate build for ICE_DSI PMD does not fail.

 [PATCH v2 16/20] net/ice: support basic RX/TX
 
 [PATCH v2 14/20] net/ice: support statistics
 > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> >
> > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN. 
> > Should we add VSI VLAN here?
> Don't need. They're different functions. We add crc length here 
> because of HW counts the packet length before crc is added.
So you are not fetching stats from the switch HW registers, is this correct? How will you get stats for what is actually transmitted in xstats? As I understand, xstats are the switch HW stats, right?
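For reference, the correction being debated amounts to removing one Ethernet CRC length per counted packet from the HW byte counter. A self-contained sketch mirroring the quoted driver line (the struct here is a reduced stand-in for the driver's eth stats fields, and `ETHER_CRC_LEN` is the standard 4-byte FCS):

```c
#include <stdint.h>

#define ETHER_CRC_LEN 4 /* Ethernet FCS, in bytes */

/* Reduced stand-in for the eth stats fields in the quoted snippet. */
struct eth_stats {
	uint64_t tx_bytes;
	uint64_t tx_unicast;
	uint64_t tx_multicast;
	uint64_t tx_broadcast;
};

/* Mirror of the quoted driver code: subtract one CRC length for each
 * transmitted packet, so the reported byte count and the HW counter
 * agree on whether the CRC is included. */
static void correct_tx_bytes(struct eth_stats *ns)
{
	ns->tx_bytes -= (ns->tx_unicast + ns->tx_multicast +
			 ns->tx_broadcast) * ETHER_CRC_LEN;
}
```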

 [PATCH v2 04/20] net/ice: support getting	device	information
 > > Does this mean per queue offload capability is not supported? If 
> > yes, can you mention this in release notes under 'support or limitation'
> No, it's not supported. We have a document, ice.ini, to list all the 
> features supported. All the others are not supported.
> BTW, I don't think anything not supported is limitation.
If I understand correctly, ICE_DSI_PMD is advertising that it has no offloads for RX and TX, yet in ice.ini you are listing offload support. So let me rephrase the question: 'If you support port-level offload capability, it will reflect for all RX and TX queues. But if you reflect queue-level offloads as 0 for RX and TX, then the APIs rte_eth_rx_queue_setup and rte_eth_tx_queue_setup should fail if a queue offload is enabled. Is this correct understanding?'

> > If device switch is not configured (default value from NVM) should 
> > we highlight the switch can support speed 10, 100, 1000, 1000 and so on?
> No, this's the capability getting from HW.
If the HW is supported or configured for 10, 100, 25G, then those should be returned correctly, I agree. But when the device is queried for capability, it should highlight all supported speeds of the switch. Am I right?

> > If speed is not true as stated above, can you please add this to 
> > release notes and documentation.
> Here listed all the case we can get from HW.
Please add to ice_dsi documentation also.

snipped
Wenzhuo Lu Dec. 13, 2018, 7:10 a.m. UTC | #2
Hi Vipin,


> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 13, 2018 2:02 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> Hi,
> 
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Wenzhuo Lu
> > Sent: Wednesday, December 12, 2018 12:30 PM
> > To: dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>
> > Subject: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> >
> > This patch set adds the support of a new net PMD, Intel® Ethernet
> > Network Adapters E810, also called ice.
> >
> > Besides enabling this new NIC, also some other features supported on this
> NIC.
> 
> Can you mention the other features
Sorry for misleading you. The other features are the features listed below. I'll reword it.

> 
> > [feature list snipped]
> 
> Can you add section to highlight the changes with "---". This is part of
> 'http://doc.dpdk.org/guides/contributing/patches.html' for 'This can be
> added to the cover letter or the annotations'
Will add it.

> 
> > v2:
> >  - Fix shared lib compile issue.
> >  - Add meson build support.
> >  - Update documents.
> >  - Fix more checkpatch issues.
> >
> > v3:
> >  - Removed the support of secondary process.
> >  - Splitted the base code to more patches.
> >  - Pass NULL to rte_zmalloc.
> >  - Changed some magic numbers to macros.
> >  - Fixed the wrong implementation of a specific bitmap.
> 
> Not all comments are addressed or closed from V2. So I have to assume you
> will be doing the same for v4.
> 
> 
> Some of the items
> 
> [PATCH v2 03/20] net/ice: support device and queue ops
> > > > > +
> > > > > +static int
> > > > > +ice_dev_start(struct rte_eth_dev *dev) {
> > > > > +	struct rte_eth_dev_data *data = dev->data;
> > > > > +	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data-
> > > > > >dev_private);
> > > > > +	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data-
> > > > >dev_private);
> > > > > +	uint16_t nb_rxq = 0;
> > > > > +	uint16_t nb_txq, i;
> > > > > +	int ret;
> > > > > +
> > > > > +	ICE_PROC_SECONDARY_CHECK;
> > > >
> > > > Device start is not supported, but how is this differentiated from
> > > > primary configured device vs secondary configured device.
> > > >
> > > > Ie: primary uses black list '-b BB:DD:F' while secondary uses '-w
> > > > BB:DD:F'. In this case since we are checking process type this
> > > > will return without
> > > start?
> > Two updates with respect to your comment, 1. tool and application like
> > dpdk-procinfo will no longer be able pull data since you are asking to black
> list.
> > 2. If there are functions which needs to shared, like primary using rx-0 and
> tx-0 then secondary rx-1 and tx-1 how to make this work?
> 
> 
> [PATCH v2 01/20] net/ice: add base code
> > Note: In version 1 I enquired about unit or DTS validation for PMD. Is
> > this still holding good?
> Yes, it's planned and on going.
I confirmed validation is ongoing for this device. But it does not impact this patch set unless we find a bug and need to fix it.

> 
> [PATCH v2 02/20] net/ice: support device initialization
> > +# Compile burst-oriented ICE PMD driver #
> CONFIG_RTE_LIBRTE_ICE_PMD=y
> 
> Based on ' https://patches.dpdk.org/patch/48488/' it is suggested to
> configure. But here is it already set to 'y'. Is this correct? If yes can you
> update ' https://patches.dpdk.org/patch/48488/'
We've discussed the setting. I don't know of anything left. If there is, please let me know.

> 
>  [PATCH v2 20/20] net/ice: support meson build
> > > > > Should not meson build option be add start. That is in patch
> > > > > 1/20 so compile options does not fail?
> > > > It will not fail. Enabling the compile earlier only means the code
> > > > can be
> > > compiled.
> > > > But, to use this device we do need the whole patch set. From this
> > > > point of view, compiling it at the end maybe better.
> > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > Sure, meson build will not be broken by any one of these patches. Only
> > until this patch, what built by meson can support ice.
> Thanks for confirmation that you have tried './devtools/test-meson-
> builds.sh' and the intermediate build for ICE_DSI PMD does not fail.
I said the meson build is working. But I don't know what ICE_DSI PMD is.

> 
>  [PATCH v2 16/20] net/ice: support basic RX/TX
> 
>  [PATCH v2 14/20] net/ice: support statistics
>  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > >
> > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > Should we add VSI VLAN here?
> > Don't need. They're different functions. We add crc length here
> > because of HW counts the packet length before crc is added.
> So you are not fetching stats from HW registers from switch is this correct?
> How will you get stats for actually transmitted in xstats? As I understand
> xstats is for switch HW stats right?
No. The stats are from the HW. We just correct them, because the HW doesn't have the chance to add the CRC length. But I don't understand why the switch is mentioned here.

> 
>  [PATCH v2 04/20] net/ice: support getting	device	information
>  > > Does this mean per queue offload capability is not supported? If
> > > yes, can you mention this in release notes under 'support or limitation'
> > No, it's not supported. We have a document, ice.ini, to list all the
> > features supported. All the others are not supported.
> > BTW, I don't think anything not supported is limitation.
> If I understand correctly,  ICE_DSI_PMD is advertising it has not offload for
> RX and TX. But you are stating in ice.ini you are listing offload supports. So
> let me rephrase the question 'if you support port level offload capability, it
> will reflect for all queues rx and tx. But if you reflect queue level offload as 0
> for rx and tx, then APIs rte_eth_rx_queue_setup and
> rte_eth_tx_queue_setup if queue offload enabled should fail. Is this correct
> understanding?'
Sorry, I don't know what ICE_DSI_PMD is.

> 
> > > If device switch is not configured (default value from NVM) should
> > > we highlight the switch can support speed 10, 100, 1000, 1000 and son
> on?
> > No, this's the capability getting from HW.
> If HW is supported or configured for 10, 100, 25G then those should be
> returned correctly this I agree. But when the device is queried for capability
> it should highlight all supported speeds of switch. Am I right?
No. This shows the result, not all the speeds supported. Like the speed after auto-negotiation.

> 
> > > If speed is not true as stated above, can you please add this to
> > > release notes and documentation.
> > Here listed all the case we can get from HW.
> Please add to ice_dsi documentation also.
Sorry, no idea about ice_dsi.

> 
> snipped
Varghese, Vipin Dec. 13, 2018, 1:09 p.m. UTC | #3
Hi Wenzhuo,

Thanks for the update, couple of suggestion in my opinion shared below

Snipped
> > [PATCH v2 02/20] net/ice: support device initialization
> > > +# Compile burst-oriented ICE PMD driver #
> > CONFIG_RTE_LIBRTE_ICE_PMD=y
> >
> > Based on ' https://patches.dpdk.org/patch/48488/' it is suggested to
> > configure. But here is it already set to 'y'. Is this correct? If yes
> > can you update ' https://patches.dpdk.org/patch/48488/'
> We've discussed the setting. I don’t know anything left. If there's, please let me
> know.
I think I poorly communicated, so let me try again. The document states to configure it as 'y', but your default config is already 'y'. So either make the default 'n' so the user can configure it, or reword the document as 'Ensure the config is set to y before building'.

> 
> >
> >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > Should not meson build option be add start. That is in patch
> > > > > > 1/20 so compile options does not fail?
> > > > > It will not fail. Enabling the compile earlier only means the
> > > > > code can be
> > > > compiled.
> > > > > But, to use this device we do need the whole patch set. From
> > > > > this point of view, compiling it at the end maybe better.
> > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > Sure, meson build will not be broken by any one of these patches.
> > > Only until this patch, what built by meson can support ice.
> > Thanks for confirmation that you have tried './devtools/test-meson-
> > builds.sh' and the intermediate build for ICE_DSI PMD does not fail.
> I said meson build is working. But don't know what's ICE_DSI PMD.

Once again, apologies if I poorly communicated; I am referring to ICE_PMD. My question is: 'If I take patch 1/20 in v2, can I build librte_pmd_ice?' Are we not expecting each additional layer of functionality, like MTU set and switch set, to compile, rather than just the final build?

> 
> >
> >  [PATCH v2 16/20] net/ice: support basic RX/TX
> >
> >  [PATCH v2 14/20] net/ice: support statistics
> >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast
> +
> > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > >
> > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > Should we add VSI VLAN here?
> > > Don't need. They're different functions. We add crc length here
> > > because of HW counts the packet length before crc is added.
> > So you are not fetching stats from HW registers from switch is this correct?
> > How will you get stats for actually transmitted in xstats? As I
> > understand xstats is for switch HW stats right?
> No. The stats is got from HW. We just correct it as HW doesn’t have the chance
> to add CRC length. But I don't understand why switch is mentioned here.
Will the ice PMD for CVL expose all the switch stats as 'xstats'? If yes, can you share in the patch comments what the switch-level stats are?

> 
> >
> >  [PATCH v2 04/20] net/ice: support getting	device	information
> >  > > Does this mean per queue offload capability is not supported? If
> > > > yes, can you mention this in release notes under 'support or limitation'
> > > No, it's not supported. We have a document, ice.ini, to list all the
> > > features supported. All the others are not supported.
> > > BTW, I don't think anything not supported is limitation.
> > If I understand correctly,  ICE_DSI_PMD is advertising it has not
> > offload for RX and TX. But you are stating in ice.ini you are listing
> > offload supports. So let me rephrase the question 'if you support port
> > level offload capability, it will reflect for all queues rx and tx.
> > But if you reflect queue level offload as 0 for rx and tx, then APIs
> > rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue offload
> > enabled should fail. Is this correct understanding?'
> Sorry, I don’t know what's ICE_DSI_PMD.
ICE_PMD 


> 
> >
> > > > If device switch is not configured (default value from NVM) should
> > > > we highlight the switch can support speed 10, 100, 1000, 1000 and
> > > > son
> > on?
> > > No, this's the capability getting from HW.
> > If HW is supported or configured for 10, 100, 25G then those should be
> > returned correctly this I agree. But when the device is queried for
> > capability it should highlight all supported speeds of switch. Am I right?
> No. Here shows the result not all the speeds supported. Like the speed after
> auto negotiation.
As per your current statement: "If the user calls rte_eth_dev_info_get to get speed_capa, the current speed will be returned as the auto-negotiated value and not 'ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G'". I will leave this to others to comment, since in my humble opinion this is not expected.

> 
> >
> > > > If speed is not true as stated above, can you please add this to
> > > > release notes and documentation.
> > > Here listed all the case we can get from HW.
> > Please add to ice_dsi documentation also.
> Sorry, no idea about ice_dsi.
My mistake ICE_PMD is what I am referring to.

> 
> >
> > snipped
Wenzhuo Lu Dec. 14, 2018, 1:11 a.m. UTC | #4
Hi Vipin,

> -----Original Message-----
> From: Varghese, Vipin
> Sent: Thursday, December 13, 2018 9:10 PM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> Hi Wenzhuo,
> 
> Thanks for the update, couple of suggestion in my opinion shared below
> 
> Snipped
> > > [PATCH v2 02/20] net/ice: support device initialization
> > > > +# Compile burst-oriented ICE PMD driver #
> > > CONFIG_RTE_LIBRTE_ICE_PMD=y
> > >
> > > Based on ' https://patches.dpdk.org/patch/48488/' it is suggested to
> > > configure. But here is it already set to 'y'. Is this correct? If
> > > yes can you update ' https://patches.dpdk.org/patch/48488/'
> > We've discussed the setting. I don’t know anything left. If there's,
> > please let me know.
> I think I poorly communicated, so let me try again. In document it is stated to
> configure as 'y'. But in your default config is 'y'. So either make the default as
> 'n' so user can configure or reword in document as 'Ensure the config is set
> to y for before building'
In the patch, I only see "``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``)". No conflict with the config file.

> 
> >
> > >
> > >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > > Should not meson build option be add start. That is in patch
> > > > > > > 1/20 so compile options does not fail?
> > > > > > It will not fail. Enabling the compile earlier only means the
> > > > > > code can be
> > > > > compiled.
> > > > > > But, to use this device we do need the whole patch set. From
> > > > > > this point of view, compiling it at the end maybe better.
> > > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > > Sure, meson build will not be broken by any one of these patches.
> > > > Only until this patch, what built by meson can support ice.
> > > Thanks for confirmation that you have tried './devtools/test-meson-
> > > builds.sh' and the intermediate build for ICE_DSI PMD does not fail.
> > I said meson build is working. But don't know what's ICE_DSI PMD.
> 
> Once again apologies if I poorly communicated ICE_PMD, my question is 'if I
> take patch 1/20 in v2 can I build for librte_pmd_ice'? is not we are expecting
> each additional layering for functionality like mtu set, switch set should be
> compiling and not just the last build?
Oh, you want meson build supported from the beginning. I can move it forward, although I recommend using it only after all the patches are applied.

> 
> >
> > >
> > >  [PATCH v2 16/20] net/ice: support basic RX/TX
> > >
> > >  [PATCH v2 14/20] net/ice: support statistics
> > >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns-
> >eth.tx_multicast
> > +
> > > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > > >
> > > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > > Should we add VSI VLAN here?
> > > > Don't need. They're different functions. We add crc length here
> > > > because of HW counts the packet length before crc is added.
> > > So you are not fetching stats from HW registers from switch is this
> correct?
> > > How will you get stats for actually transmitted in xstats? As I
> > > understand xstats is for switch HW stats right?
> > No. The stats is got from HW. We just correct it as HW doesn’t have
> > the chance to add CRC length. But I don't understand why switch is
> mentioned here.
> Will ice pmd  for CVL expose all the switch stats as 'xstats'? If yes, can you
> share in patch comments what are switch level stats?
No. xstats means the extended stats, which are not covered by the basic stats. They can be anything that is not in the basic stats.
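To illustrate the basic-vs-extended split described above: basic stats are a fixed struct, while xstats are an open-ended list of named counters looked up by name or index. A minimal sketch (the counter names here are illustrative only, not the ice driver's actual xstats):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Extended stats: an open-ended array of named counters, covering
 * anything not present in the fixed basic-stats struct. */
struct xstat {
	const char *name;
	uint64_t value;
};

/* Illustrative entries; a real driver fills these from HW registers. */
static const struct xstat example_xstats[] = {
	{ "rx_size_64_packets", 12 },
	{ "tx_link_down_dropped", 0 },
};

/* Look up a counter by name, in the spirit of the ethdev xstats API. */
static int xstat_lookup(const struct xstat *xs, size_t n,
			const char *name, uint64_t *out)
{
	for (size_t i = 0; i < n; i++) {
		if (strcmp(xs[i].name, name) == 0) {
			*out = xs[i].value;
			return 0;
		}
	}
	return -1; /* not an exposed extended stat */
}
```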

> 
> >
> > >
> > >  [PATCH v2 04/20] net/ice: support getting	device	information
> > >  > > Does this mean per queue offload capability is not supported?
> > > If
> > > > > yes, can you mention this in release notes under 'support or
> limitation'
> > > > No, it's not supported. We have a document, ice.ini, to list all
> > > > the features supported. All the others are not supported.
> > > > BTW, I don't think anything not supported is limitation.
> > > If I understand correctly,  ICE_DSI_PMD is advertising it has not
> > > offload for RX and TX. But you are stating in ice.ini you are
> > > listing offload supports. So let me rephrase the question 'if you
> > > support port level offload capability, it will reflect for all queues rx and tx.
> > > But if you reflect queue level offload as 0 for rx and tx, then APIs
> > > rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue offload
> > > enabled should fail. Is this correct understanding?'
> > Sorry, I don’t know what's ICE_DSI_PMD.
> ICE_PMD
Not exactly. We follow the rule for queue- and port-level offload settings. If the feature is enabled at the port level, we can ignore the queue setting. If the feature is not enabled at the port level, we can still enable it per queue.
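The rule stated above can be sketched with bitmasks: an offload is effective on a queue if it is enabled at the port level or at the queue level, i.e. the effective set is the OR of the two. The flag values below are placeholders, not DPDK's actual DEV_RX_OFFLOAD_* definitions:

```c
#include <stdint.h>

/* Placeholder offload flags; real drivers use DEV_RX_OFFLOAD_* /
 * DEV_TX_OFFLOAD_* bit values from rte_ethdev.h. */
#define OFFLOAD_CHECKSUM   (1ULL << 0)
#define OFFLOAD_VLAN_STRIP (1ULL << 1)

/* A port-level offload applies to every queue; a queue may additionally
 * enable an offload the port did not, so the effective set is the OR. */
static uint64_t effective_queue_offloads(uint64_t port_offloads,
					 uint64_t queue_offloads)
{
	return port_offloads | queue_offloads;
}
```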

> 
> 
> >
> > >
> > > > > If device switch is not configured (default value from NVM)
> > > > > should we highlight the switch can support speed 10, 100, 1000,
> > > > > 1000 and son
> > > on?
> > > > No, this's the capability getting from HW.
> > > If HW is supported or configured for 10, 100, 25G then those should
> > > be returned correctly this I agree. But when the device is queried
> > > for capability it should highlight all supported speeds of switch. Am I right?
> > No. Here shows the result not all the speeds supported. Like the speed
> > after auto negotiation.
> As per your current statement "If user uses API rte_eth_dev_info_get to get
> speed_capa, current speed will be returned as auto negotiated value and
> not ' ETH_LINK_SPEED_10M| ETH_LINK_SPEED_100M|
> ETH_LINK_SPEED_1G| ETH_LINK_SPEED_10G| ETH_LINK_SPEED_25G'". I will
> leave this to others to comment since in my humble opinion this is not
> expected.
OK. I'll change it to the bitmap.
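The fix agreed here is to report speed_capa as a bitmap of every supported speed, ORed together, rather than the single negotiated speed. A sketch with locally defined flags (the names mimic DPDK's ETH_LINK_SPEED_* macros, but the bit positions here are assumptions; the real values live in rte_ethdev.h):

```c
#include <stdint.h>

/* Locally defined speed flags modeled on the ETH_LINK_SPEED_* bitmap;
 * bit positions are illustrative, not rte_ethdev.h's actual values. */
#define LINK_SPEED_10M  (1u << 0)
#define LINK_SPEED_100M (1u << 1)
#define LINK_SPEED_1G   (1u << 2)
#define LINK_SPEED_10G  (1u << 3)
#define LINK_SPEED_25G  (1u << 4)

/* Capability reporting: OR together every speed the HW can do, instead
 * of returning only the currently negotiated speed. */
static uint32_t speed_capa_bitmap(void)
{
	return LINK_SPEED_10M | LINK_SPEED_100M | LINK_SPEED_1G |
	       LINK_SPEED_10G | LINK_SPEED_25G;
}
```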

> 
> >
> > >
> > > > > If speed is not true as stated above, can you please add this to
> > > > > release notes and documentation.
> > > > Here listed all the case we can get from HW.
> > > Please add to ice_dsi documentation also.
> > Sorry, no idea about ice_dsi.
> My mistake ICE_PMD is what I am referring to.
> 
> >
> > >
> > > snipped
Varghese, Vipin Dec. 14, 2018, 3:26 a.m. UTC | #5
Hi Wenzhuo,

snipped
> >
> > Thanks for the update, couple of suggestion in my opinion shared below
> >
> > Snipped
> > > > [PATCH v2 02/20] net/ice: support device initialization
> > > > > +# Compile burst-oriented ICE PMD driver #
> > > > CONFIG_RTE_LIBRTE_ICE_PMD=y
> > > >
> > > > Based on ' https://patches.dpdk.org/patch/48488/' it is suggested
> > > > to configure. But here is it already set to 'y'. Is this correct?
> > > > If yes can you update ' https://patches.dpdk.org/patch/48488/'
> > > We've discussed the setting. I don’t know anything left. If there's,
> > > please let me know.
> > I think I poorly communicated, so let me try again. In document it is
> > stated to configure as 'y'. But in your default config is 'y'. So
> > either make the default as 'n' so user can configure or reword in
> > document as 'Ensure the config is set to y for before building'
> In the patch, I only see "``CONFIG_RTE_LIBRTE_ICE_PMD`` (default ``y``) ". No
> conflict with the configure file.
I think I am failing to convey my point even after multiple attempts. Hence I humbly leave this to others on the mailing list to get it sorted.

> 
> >
> > >
> > > >
> > > >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > > > Should not meson build option be add start. That is in
> > > > > > > > patch
> > > > > > > > 1/20 so compile options does not fail?
> > > > > > > It will not fail. Enabling the compile earlier only means
> > > > > > > the code can be
> > > > > > compiled.
> > > > > > > But, to use this device we do need the whole patch set. From
> > > > > > > this point of view, compiling it at the end maybe better.
> > > > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > > > Sure, meson build will not be broken by any one of these patches.
> > > > > Only until this patch, what built by meson can support ice.
> > > > Thanks for confirmation that you have tried
> > > > './devtools/test-meson- builds.sh' and the intermediate build for ICE_DSI
> PMD does not fail.
> > > I said meson build is working. But don't know what's ICE_DSI PMD.
> >
> > Once again apologies if I poorly communicated ICE_PMD, my question is
> > 'if I take patch 1/20 in v2 can I build for librte_pmd_ice'? is not we
> > are expecting each additional layering for functionality like mtu set,
> > switch set should be compiling and not just the last build?
> O, you want to support meson build from the beginning. I can move it forward,
> although I recommend to use it after all the patches.
Thank you for the consideration. I think this approach helps a lot, since we are systematically adding code to ICE_PMD.

> 
> >
> > >
> > > >
> > > >  [PATCH v2 16/20] net/ice: support basic RX/TX
> > > >
> > > >  [PATCH v2 14/20] net/ice: support statistics
> > > >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns-
> > >eth.tx_multicast
> > > +
> > > > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > > > >
> > > > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > > > Should we add VSI VLAN here?
> > > > > Don't need. They're different functions. We add crc length here
> > > > > because of HW counts the packet length before crc is added.
> > > > So you are not fetching stats from HW registers from switch is
> > > > this
> > correct?
> > > > How will you get stats for actually transmitted in xstats? As I
> > > > understand xstats is for switch HW stats right?
> > > No. The stats is got from HW. We just correct it as HW doesn’t have
> > > the chance to add CRC length. But I don't understand why switch is
> > mentioned here.
> > Will ice pmd  for CVL expose all the switch stats as 'xstats'? If yes,
> > can you share in patch comments what are switch level stats?
> No. xstats means the extend stats which is not covered by the basic stats. It can be
> anything which is not in basic stats.
Thanks for confirming. So do xstats cover the ICE switch stats that are not covered by the basic PMD stats? If yes, can you list the stats under xstats that are fetched specifically from the ICE switch?

> 
> >
> > >
> > > >
> > > >  [PATCH v2 04/20] net/ice: support getting	device	information
> > > >  > > Does this mean per queue offload capability is not supported?
> > > > If
> > > > > > yes, can you mention this in release notes under 'support or
> > limitation'
> > > > > No, it's not supported. We have a document, ice.ini, to list all
> > > > > the features supported. All the others are not supported.
> > > > > BTW, I don't think anything not supported is limitation.
> > > > If I understand correctly,  ICE_DSI_PMD is advertising it has not
> > > > offload for RX and TX. But you are stating in ice.ini you are
> > > > listing offload supports. So let me rephrase the question 'if you
> > > > support port level offload capability, it will reflect for all queues rx and tx.
> > > > But if you reflect queue level offload as 0 for rx and tx, then
> > > > APIs rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue
> > > > offload enabled should fail. Is this correct understanding?'
> > > Sorry, I don’t know what's ICE_DSI_PMD.
> > ICE_PMD
> No exactly. We follow the rule of queue and port offload setting. If the feature is
> enabled in port level, we can ignore the queue setting. If the feature is not
> enabled in port level, we still can enable it per queue.
Thanks for confirming. So what will rte_eth_dev_info_get return for the port- and queue-level offloads of a non-configured ICE port?

> 
> >
> >
> > >
> > > >
> > > > > > If device switch is not configured (default value from NVM)
> > > > > > should we highlight the switch can support speed 10, 100,
> > > > > > 1000,
> > > > > > 1000 and son
> > > > on?
> > > > > No, this's the capability getting from HW.
> > > > If HW is supported or configured for 10, 100, 25G then those
> > > > should be returned correctly this I agree. But when the device is
> > > > queried for capability it should highlight all supported speeds of switch. Am I
> right?
> > > No. Here shows the result not all the speeds supported. Like the
> > > speed after auto negotiation.
> > As per your current statement "If user uses API rte_eth_dev_info_get
> > to get speed_capa, current speed will be returned as auto negotiated
> > value and not ' ETH_LINK_SPEED_10M| ETH_LINK_SPEED_100M|
> > ETH_LINK_SPEED_1G| ETH_LINK_SPEED_10G| ETH_LINK_SPEED_25G'". I will
> > leave this to others to comment since in my humble opinion this is not
> > expected.
> OK. I'll change it to the bitmap.
Thanks, appreciate the understanding

> 
> >
> > >
> > > >
> > > > > > If speed is not true as stated above, can you please add this
> > > > > > to release notes and documentation.
> > > > > Here listed all the case we can get from HW.
> > > > Please add to ice_dsi documentation also.
> > > Sorry, no idea about ice_dsi.
> > My mistake ICE_PMD is what I am referring to.
> >
> > >
> > > >
> > > > snipped
Wenzhuo Lu Dec. 14, 2018, 8:20 a.m. UTC | #6
> -----Original Message-----
> From: Varghese, Vipin
> Sent: Friday, December 14, 2018 11:26 AM
> To: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v3 00/34] A new net PMD - ice
> 
> Hi Wenzhuo,
> 
> snipped
> > >
> > > Thanks for the update, couple of suggestion in my opinion shared
> > > below
> > >
> > > Snipped
> > > > > [PATCH v2 02/20] net/ice: support device initialization
> > > > > > +# Compile burst-oriented ICE PMD driver #
> > > > > CONFIG_RTE_LIBRTE_ICE_PMD=y
> > > > >
> > > > > Based on ' https://patches.dpdk.org/patch/48488/' it is
> > > > > suggested to configure. But here is it already set to 'y'. Is this correct?
> > > > > If yes can you update ' https://patches.dpdk.org/patch/48488/'
> > > > We've discussed the setting. I don’t know anything left. If
> > > > there's, please let me know.
> > > I think I poorly communicated, so let me try again. In document it
> > > is stated to configure as 'y'. But in your default config is 'y'. So
> > > either make the default as 'n' so user can configure or reword in
> > > document as 'Ensure the config is set to y for before building'
> > In the patch, I only see "``CONFIG_RTE_LIBRTE_ICE_PMD`` (default
> > ``y``) ". No conflict with the configure file.
> I think I am failing you to convey even after multiple attempts. Hence I
> humbly leave this to other from mailing list to get it sorted.
I am also confused. The compile config is set to 'y' and the document says the default value is 'y'. I just don't see the problem.

> 
> >
> > >
> > > >
> > > > >
> > > > >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > > > > Should not meson build option be add start. That is in
> > > > > > > > > patch
> > > > > > > > > 1/20 so compile options does not fail?
> > > > > > > > It will not fail. Enabling the compile earlier only means
> > > > > > > > the code can be
> > > > > > > compiled.
> > > > > > > > But, to use this device we do need the whole patch set.
> > > > > > > > From this point of view, compiling it at the end maybe better.
> > > > > > > Thanks for update, so will 'meson-build' success if apply 3 patches?
> > > > > > Sure, meson build will not be broken by any one of these patches.
> > > > > > Only until this patch, what built by meson can support ice.
> > > > > Thanks for confirmation that you have tried
> > > > > './devtools/test-meson- builds.sh' and the intermediate build
> > > > > for ICE_DSI
> > PMD does not fail.
> > > > I said meson build is working. But don't know what's ICE_DSI PMD.
> > >
> > > Once again apologies if I poorly communicated ICE_PMD, my question
> > > is 'if I take patch 1/20 in v2 can I build for librte_pmd_ice'? is
> > > not we are expecting each additional layering for functionality like
> > > mtu set, switch set should be compiling and not just the last build?
> > O, you want to support meson build from the beginning. I can move it
> > forward, although I recommend to use it after all the patches.
> Thank you for consideration, I think this approach helps a lot since we are
> systematically adding code to ICE_PMD
> 
> >
> > >
> > > >
> > > > >
> > > > >  [PATCH v2 16/20] net/ice: support basic RX/TX
> > > > >
> > > > >  [PATCH v2 14/20] net/ice: support statistics
> > > > >  > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns-
> > > >eth.tx_multicast
> > > > +
> > > > > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > > > > >
> > > > > > > In earlier patch for 'mtu set check' we added VSI SWITCH VLAN.
> > > > > > > Should we add VSI VLAN here?
> > > > > > Don't need. They're different functions. We add crc length
> > > > > > here because of HW counts the packet length before crc is added.
> > > > > So you are not fetching stats from HW registers from switch is
> > > > > this
> > > correct?
> > > > > How will you get stats for actually transmitted in xstats? As I
> > > > > understand xstats is for switch HW stats right?
> > > > No. The stats is got from HW. We just correct it as HW doesn’t
> > > > have the chance to add CRC length. But I don't understand why
> > > > switch is
> > > mentioned here.
> > > Will ice pmd  for CVL expose all the switch stats as 'xstats'? If
> > > yes, can you share in patch comments what are switch level stats?
> > No. xstats means the extend stats which is not covered by the basic
> > stats. It can be anything which is not in basic stats.
> Thanks for confirming, hence is xstats covering for ICE Switch stats which is
> not covered in PMD stats? If yes, can you mention the list of stats covered
> under xstats that is fetched specifically from ICE switch?
Sorry, I cannot do that. The HW is somewhat of a black box to us; I cannot tell you something I'm not sure about.

> 
> >
> > >
> > > >
> > > > >
> > > > >  [PATCH v2 04/20] net/ice: support getting	device	information
> > > > >  > > Does this mean per queue offload capability is not supported?
> > > > > If
> > > > > > > yes, can you mention this in release notes under 'support or
> > > limitation'
> > > > > > No, it's not supported. We have a document, ice.ini, to list
> > > > > > all the features supported. All the others are not supported.
> > > > > > BTW, I don't think anything not supported is limitation.
> > > > > If I understand correctly,  ICE_DSI_PMD is advertising it has
> > > > > not offload for RX and TX. But you are stating in ice.ini you
> > > > > are listing offload supports. So let me rephrase the question
> > > > > 'if you support port level offload capability, it will reflect for all
> queues rx and tx.
> > > > > But if you reflect queue level offload as 0 for rx and tx, then
> > > > > APIs rte_eth_rx_queue_setup and rte_eth_tx_queue_setup if queue
> > > > > offload enabled should fail. Is this correct understanding?'
> > > > Sorry, I don’t know what's ICE_DSI_PMD.
> > > ICE_PMD
> > No exactly. We follow the rule of queue and port offload setting. If
> > the feature is enabled in port level, we can ignore the queue setting.
> > If the feature is not enabled in port level, we still can enable it per queue.
> Thanks for confirming, so what will be result for non-configured ICE port for
> offload for port and queue using rte_eth_dev_get_info?
rte_eth_dev_info_get also shows the default values. The offloads are set to 0 by default.

> 
> >
> > >
> > >
> > > >
> > > > >
> > > > > > > If device switch is not configured (default value from NVM)
> > > > > > > should we highlight the switch can support speed 10, 100,
> > > > > > > 1000,
> > > > > > > 1000 and son
> > > > > on?
> > > > > > No, this's the capability getting from HW.
> > > > > If HW is supported or configured for 10, 100, 25G then those
> > > > > should be returned correctly this I agree. But when the device
> > > > > is queried for capability it should highlight all supported
> > > > > speeds of switch. Am I
> > right?
> > > > No. Here shows the result not all the speeds supported. Like the
> > > > speed after auto negotiation.
> > > As per your current statement "If user uses API rte_eth_dev_info_get
> > > to get speed_capa, current speed will be returned as auto negotiated
> > > value and not ' ETH_LINK_SPEED_10M| ETH_LINK_SPEED_100M|
> > > ETH_LINK_SPEED_1G| ETH_LINK_SPEED_10G| ETH_LINK_SPEED_25G'". I
> will
> > > leave this to others to comment since in my humble opinion this is
> > > not expected.
> > OK. I'll change it to the bitmap.
> Thanks, appreciate the understanding
> 
> >
> > >
> > > >
> > > > >
> > > > > > > If speed is not true as stated above, can you please add
> > > > > > > this to release notes and documentation.
> > > > > > Here listed all the case we can get from HW.
> > > > > Please add to ice_dsi documentation also.
> > > > Sorry, no idea about ice_dsi.
> > > My mistake ICE_PMD is what I am referring to.
> > >
> > > >
> > > > >
> > > > > snipped