[dpdk-dev,4/4] pmd_hw_support.py: Add tool to query binaries for hw support information

Message ID 1463431287-4551-5-git-send-email-nhorman@tuxdriver.com
State Superseded, archived
Delegated to: Thomas Monjalon

Commit Message

Neil Horman May 16, 2016, 8:41 p.m. UTC
This tool searches for the primer string PMD_DRIVER_INFO= in any ELF binary,
and, if found, parses the remainder of the string as a JSON-encoded string,
outputting the results in either a human-readable or raw, script-parseable
format.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Bruce Richardson <bruce.richardson@intel.com>
CC: Thomas Monjalon <thomas.monjalon@6wind.com>
CC: Stephen Hemminger <stephen@networkplumber.org>
CC: Panu Matilainen <pmatilai@redhat.com>
---
 tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 174 insertions(+)
 create mode 100755 tools/pmd_hw_support.py
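(For reference, the extraction the commit message describes can be sketched in
a few lines of Python. This is a simplified illustration that scans raw bytes
rather than walking ELF sections the way the actual script may; the JSON field
names "name" and "pci_ids" are invented for the example.)

```python
import json

MARKER = b"PMD_DRIVER_INFO="

def extract_pmd_info(blob: bytes):
    """Find every PMD_DRIVER_INFO= marker in a binary blob and parse the
    JSON string that follows it (up to the terminating NUL byte)."""
    infos = []
    start = 0
    while True:
        idx = blob.find(MARKER, start)
        if idx < 0:
            break
        end = blob.find(b"\0", idx)
        if end < 0:
            end = len(blob)
        payload = blob[idx + len(MARKER):end]
        infos.append(json.loads(payload.decode("utf-8")))
        start = end
    return infos

# A fake "binary" with one embedded info string, for illustration only.
fake = (b"\x7fELF...junk..." + MARKER +
        b'{"name": "rte_pmd_ixgbe", "pci_ids": [[32902, 5546]]}\0more')
print(extract_pmd_info(fake)[0]["name"])  # rte_pmd_ixgbe
```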

Comments

Panu Matilainen May 18, 2016, 11:48 a.m. UTC | #1
On 05/16/2016 11:41 PM, Neil Horman wrote:
> This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
> and, if found parses the remainder of the string as a json encoded string,
> outputting the results in either a human readable or raw, script parseable
> format
>
> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
> CC: Bruce Richardson <bruce.richardson@intel.com>
> CC: Thomas Monjalon <thomas.monjalon@6wind.com>
> CC: Stephen Hemminger <stephen@networkplumber.org>
> CC: Panu Matilainen <pmatilai@redhat.com>
> ---
>  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 174 insertions(+)
>  create mode 100755 tools/pmd_hw_support.py
>
> diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
> new file mode 100755
> index 0000000..0669aca
> --- /dev/null
> +++ b/tools/pmd_hw_support.py
> @@ -0,0 +1,174 @@
> +#!/usr/bin/python3

I think this should use /usr/bin/python to be consistent with the other 
python scripts, and like the others work with python 2 and 3. I only 
tested it with python2 after changing this and it seemed to work fine, so 
the compatibility side should be fine as-is.

On the whole, AFAICT the patch series does what it promises, and works 
for both static and shared linkage. Using JSON formatted strings in an 
ELF section is a sound working technical solution for the storage of the 
data. But the difference between the two cases makes me wonder about 
this all...

For static library build, you'd query the application executable, eg 
testpmd, to get the data out. For a shared library build, that method 
gives absolutely nothing because the data is scattered around in 
individual libraries which might be just about wherever, and you need to 
somehow discover the location + correct library files to be able to 
query that. For the shared case, perhaps the script could be taught to 
walk files in CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark 
correct/identical results when querying the executable as with static 
builds. If identical operation between static and shared versions is a 
requirement (without running the app in question) then query through the 
executable itself is practically the only option. Unless some kind of 
(auto-generated) external config file system ala kernel depmod / 
modules.dep etc is brought into the picture.
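(The CONFIG_RTE_EAL_PMD_PATH walk suggested above could look roughly like the
sketch below. The librte_pmd_*.so naming filter is an assumption of mine, not
something the patch defines.)

```python
import os

def walk_pmd_path(pmd_path):
    """Yield candidate PMD shared objects under a plugin directory such as
    CONFIG_RTE_EAL_PMD_PATH, approximating what EAL would dlopen at startup."""
    for entry in sorted(os.listdir(pmd_path)):
        # Assumed naming convention for PMD plugins; adjust to the real layout.
        if entry.startswith("librte_pmd_") and entry.endswith(".so"):
            yield os.path.join(pmd_path, entry)
```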

For shared library configurations, having the data in the individual 
pmds is valuable as one could for example have rpm autogenerate provides 
from the data to ease/automate installation (in case of split packaging 
and/or 3rd party drivers). And no doubt other interesting possibilities. 
With static builds that kind of thing is not possible.

Calling up on the list of requirements from 
http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of 
technical requirements but perhaps we should stop for a moment to think 
about the use-cases first?

To name some from the top of my head:
- user wants to know whether the hardware on the system is supported
- user wants to know which package(s) need to be installed to support 
the system hardware
- user wants to list all supported hardware before going shopping
- [what else?]

...and then think how these things would look like from the user 
perspective, in the light of the two quite dramatically differing cases 
of static vs shared linkage.

P.S. Sorry for being late to this party, I'm having some health issues 
so my level of participation is a bit on-and-off at the moment.

	- Panu -
Neil Horman May 18, 2016, 12:03 p.m. UTC | #2
On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
> On 05/16/2016 11:41 PM, Neil Horman wrote:
> > This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
> > and, if found parses the remainder of the string as a json encoded string,
> > outputting the results in either a human readable or raw, script parseable
> > format
> > 
> > Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
> > CC: Bruce Richardson <bruce.richardson@intel.com>
> > CC: Thomas Monjalon <thomas.monjalon@6wind.com>
> > CC: Stephen Hemminger <stephen@networkplumber.org>
> > CC: Panu Matilainen <pmatilai@redhat.com>
> > ---
> >  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 174 insertions(+)
> >  create mode 100755 tools/pmd_hw_support.py
> > 
> > diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
> > new file mode 100755
> > index 0000000..0669aca
> > --- /dev/null
> > +++ b/tools/pmd_hw_support.py
> > @@ -0,0 +1,174 @@
> > +#!/usr/bin/python3
> 
> I think this should use /usr/bin/python to be consistent with the other
> python scripts, and like the others work with python 2 and 3. I only tested
> it with python2 after changing this and it seemed to work fine so the
> compatibility side should be fine as-is.
> 
Sure, I can change the python executable, that makes sense.

> On the whole, AFAICT the patch series does what it promises, and works for
> both static and shared linkage. Using JSON formatted strings in an ELF
> section is a sound working technical solution for the storage of the data.
> But the difference between the two cases makes me wonder about this all...
You mean the difference between checking static binaries and dynamic binaries?
Yes, there is some functional difference there.

> 
> For static library build, you'd query the application executable, eg
Correct.

> testpmd, to get the data out. For a shared library build, that method gives
> absolutely nothing because the data is scattered around in individual
> libraries which might be just about wherever, and you need to somehow
Correct, I figured that users would be smart enough to realize that with
dynamically linked executables, they would need to look at DSOs, but I agree,
it's a glaring difference.

> discover the location + correct library files to be able to query that. For
> the shared case, perhaps the script could be taught to walk files in
> CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
My initial thought would be to run ldd on the executable, and use a heuristic to
determine relevant PMD DSOs, and then feed each of those through the Python
script.  I didn't want to go to that trouble unless there was consensus on it
though.
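(That heuristic might be prototyped like this; parse_ldd_output and the
'rte_pmd' name filter are illustrative assumptions, not part of the posted
script.)

```python
import subprocess

def pmd_deps_from_ldd(executable):
    """Run ldd on an executable and return resolved paths of linked
    DPDK PMD libraries (heuristic: the soname contains 'rte_pmd')."""
    out = subprocess.check_output(["ldd", executable], text=True)
    return parse_ldd_output(out)

def parse_ldd_output(out):
    deps = []
    for line in out.splitlines():
        # typical line: "\tlibrte_pmd_x.so => /usr/lib/librte_pmd_x.so (0x...)"
        if "rte_pmd" not in line or "=>" not in line:
            continue
        path = line.split("=>", 1)[1].split()[0]
        if path != "not":  # skip "=> not found" entries
            deps.append(path)
    return deps
```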


> when querying the executable as with static builds. If identical operation
> between static and shared versions is a requirement (without running the app
> in question) then query through the executable itself is practically the
> only option. Unless some kind of (auto-generated) external config file
> system ala kernel depmod / modules.dep etc is brought into the picture.
Yeah, I'm really trying to avoid that, as I think it's really not a typical part
of how user space libraries are interacted with.

> 
> For shared library configurations, having the data in the individual pmds is
> valuable as one could for example have rpm autogenerate provides from the
> data to ease/automate installation (in case of split packaging and/or 3rd
> party drivers). And no doubt other interesting possibilities. With static
> builds that kind of thing is not possible.
Right.

Note, this also leaves out PMDs that are loaded dynamically (i.e. via dlopen).
For those situations I don't think we have any way of 'knowing' that the
application intends to use them.

> 
> Calling up on the list of requirements from
> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
> technical requirements but perhaps we should stop for a moment to think
> about the use-cases first?

To enumerate the list:

- query all drivers in static binary or shared library (works)
- stripping resiliency (works)
- human friendly (works)
- script friendly (works)
- show driver name (works)
- list supported device id / name (works)
- list driver options (not yet, but possible)
- show driver version if available (nope, but possible)
- show dpdk version (nope, but possible)
- show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
- room for extra information? (works)

Of the items that are missing, I've already got a V2 started that can do driver
options, and is easier to expand.  Adding in the DPDK and PMD version should
be easy (though I think they can be left out, as there's currently no globally
defined DPDK release version, it's all just implicit, and driver versions aren't
really there either).  I'm also hesitant to include kernel dependencies without
defining exactly what they mean (just module dependencies, or feature
enablement, or something else?).  Once we define it though, adding it can be
easy.

I'll have a v2 posted soon, with the consensus corrections you have above, as
well as some other cleanups.

Best
Neil

> 
> To name some from the top of my head:
> - user wants to know whether the hardware on the system is supported
> - user wants to know which package(s) need to be installed to support the
> system hardware
> - user wants to list all supported hardware before going shopping
> - [what else?]
> 
> ...and then think how these things would look like from the user
> perspective, in the light of the two quite dramatically differing cases of
> static vs shared linkage.
> 
> P.S. Sorry for being late to this party, I'm having some health issues so my
> level of participation is a bit on-and-off at the moment.
> 
> 	- Panu -
>
Thomas Monjalon May 18, 2016, 12:38 p.m. UTC | #3
2016-05-18 14:48, Panu Matilainen:
> Calling up on the list of requirements from 
> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of 
> technical requirements but perhaps we should stop for a moment to think 
> about the use-cases first?
> 
> To name some from the top of my head:
> - user wants to know whether the hardware on the system is supported

Supported by what?
* by a statically linked app
* by a DPDK he has downloaded and built
* by a DPDK provided as a shared library by his Linux vendor
In the first 2 cases he knows where the files are.
In the Linux distribution case, there can be a default directory set
by the Linux vendor for the script that looks at the info. Only the Linux
vendor knows where the PMD files are.

> - user wants to know which package(s) need to be installed to support 
> the system hardware

You mean "which DPDK packages"?
Is some information shown when doing "packager search dpdk"
or "packager show dpdk-driverX"?
Do you want to show the PCI IDs in the description of the packages?

> - user wants to list all supported hardware before going shopping

Why go shopping? For DPDK usage or for a specific application?
The application should mention the supported hardware.
For more general DPDK information, there is this page:
	http://dpdk.org/doc/nics
But it may not be accurate enough for some PCI id exceptions.
For more details, he must use a listing tool.
Panu Matilainen May 18, 2016, 12:48 p.m. UTC | #4
On 05/18/2016 03:03 PM, Neil Horman wrote:
> On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
>> On 05/16/2016 11:41 PM, Neil Horman wrote:
>>> This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
>>> and, if found parses the remainder of the string as a json encoded string,
>>> outputting the results in either a human readable or raw, script parseable
>>> format
>>>
>>> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
>>> CC: Bruce Richardson <bruce.richardson@intel.com>
>>> CC: Thomas Monjalon <thomas.monjalon@6wind.com>
>>> CC: Stephen Hemminger <stephen@networkplumber.org>
>>> CC: Panu Matilainen <pmatilai@redhat.com>
>>> ---
>>>  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 174 insertions(+)
>>>  create mode 100755 tools/pmd_hw_support.py
>>>
>>> diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
>>> new file mode 100755
>>> index 0000000..0669aca
>>> --- /dev/null
>>> +++ b/tools/pmd_hw_support.py
>>> @@ -0,0 +1,174 @@
>>> +#!/usr/bin/python3
>>
>> I think this should use /usr/bin/python to be consistent with the other
>> python scripts, and like the others work with python 2 and 3. I only tested
>> it with python2 after changing this and it seemed to work fine so the
>> compatibility side should be fine as-is.
>>
> Sure, I can change the python executable, that makes sense.
>
>> On the whole, AFAICT the patch series does what it promises, and works for
>> both static and shared linkage. Using JSON formatted strings in an ELF
>> section is a sound working technical solution for the storage of the data.
>> But the difference between the two cases makes me wonder about this all...
> You mean the difference between checking static binaries and dynamic binaries?
> yes, there is some functional difference there
>
>>
>> For static library build, you'd query the application executable, eg
> Correct.
>
>> testpmd, to get the data out. For a shared library build, that method gives
>> absolutely nothing because the data is scattered around in individual
>> libraries which might be just about wherever, and you need to somehow
> Correct, I figured that users would be smart enough to realize that with
> dynamically linked executables, they would need to look at DSO's, but I agree,
> its a glaring diffrence.

Being able to look at DSOs is good, but expecting the user to figure out 
which DSOs might or might not be loaded, and where to look, is going to be 
well above many users. At the very least it's not what I would call 
user-friendly.

>> discover the location + correct library files to be able to query that. For
>> the shared case, perhaps the script could be taught to walk files in
>> CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
> My initial thought would be to run ldd on the executable, and use a heuristic to
> determine relevant pmd DSO's, and then feed each of those through the python
> script.  I didn't want to go to that trouble unless there was consensus on it
> though.

Problem is, ldd doesn't know about them either because the pmds are not 
linked to the executables at all anymore. They could be force-linked of 
course, but that means giving up the flexibility of plugins, which IMO 
is a no-go. Except maybe as an option, but then that would be a third 
case to support.


>
>> when querying the executable as with static builds. If identical operation
>> between static and shared versions is a requirement (without running the app
>> in question) then query through the executable itself is practically the
>> only option. Unless some kind of (auto-generated) external config file
>> system ala kernel depmod / modules.dep etc is brought into the picture.
> Yeah, I'm really trying to avoid that, as I think its really not a typical part
> of how user space libraries are interacted with.
>
>>
>> For shared library configurations, having the data in the individual pmds is
>> valuable as one could for example have rpm autogenerate provides from the
>> data to ease/automate installation (in case of split packaging and/or 3rd
>> party drivers). And no doubt other interesting possibilities. With static
>> builds that kind of thing is not possible.
> Right.
>
> Note, this also leaves out PMD's that are loaded dynamically (i.e. via dlopen).
> For those situations I don't think we have any way of 'knowing' that the
> application intends to use them.

Hence my comment about CONFIG_RTE_EAL_PMD_PATH above, it at least 
provides a reasonable heuristic of what would be loaded by the app when 
run. But ultimately the only way to know what hardware is supported at a 
given time is to run an app which calls rte_eal_init() to load all the 
drivers that are present and work from there, because besides 
CONFIG_RTE_EAL_PMD_PATH this can be affected by runtime commandline 
switches and applies to both shared and static builds.

>>
>> Calling up on the list of requirements from
>> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
>> technical requirements but perhaps we should stop for a moment to think
>> about the use-cases first?
>
> To ennumerate the list:
>
> - query all drivers in static binary or shared library (works)
> - stripping resiliency (works)
> - human friendly (works)
> - script friendly (works)
> - show driver name (works)
> - list supported device id / name (works)
> - list driver options (not yet, but possible)
> - show driver version if available (nope, but possible)
> - show dpdk version (nope, but possible)
> - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
> - room for extra information? (works)
>
> Of the items that are missing, I've already got a V2 started that can do driver
> options, and is easier to expand.  Adding in the the DPDK and PMD version should
> be easy (though I think they can be left out, as theres currently no globaly
> defined DPDK release version, its all just implicit, and driver versions aren't
> really there either).  I'm also hesitant to include kernel dependencies without
> defining exactly what they mean (just module dependencies, or feature
> enablement, or something else?).  Once we define it though, adding it can be
> easy.

Yup. I just think the shared/static difference needs to be sorted out 
somehow, e.g. requiring the user to know about DSOs is not human-friendly 
at all. That's why I called for the higher level use-cases in my previous 
email.

>
> I'll have a v2 posted soon, with the consensus corrections you have above, as
> well as some other cleanups
>
> Best
> Neil
>
>>
>> To name some from the top of my head:
>> - user wants to know whether the hardware on the system is supported
>> - user wants to know which package(s) need to be installed to support the
>> system hardware
>> - user wants to list all supported hardware before going shopping
>> - [what else?]
>>
>> ...and then think how these things would look like from the user
>> perspective, in the light of the two quite dramatically differing cases of
>> static vs shared linkage.


	- Panu -
Panu Matilainen May 18, 2016, 1:09 p.m. UTC | #5
On 05/18/2016 03:38 PM, Thomas Monjalon wrote:
> 2016-05-18 14:48, Panu Matilainen:
>> Calling up on the list of requirements from
>> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
>> technical requirements but perhaps we should stop for a moment to think
>> about the use-cases first?
>>
>> To name some from the top of my head:
>> - user wants to know whether the hardware on the system is supported
>
> supported by what?
> * by a statically linked app
> * by a DPDK he has downloaded and built
> * by a DPDK provided as shared library by its Linux vendor

All three?

> In the first 2 cases he knows where the files are.
> In the Linux distribution case, there can be a default directory set
> by the Linux vendor for the script looking at the infos. Only the Linux
> vendor knows where the PMDs files are.

For case 3), EAL and the DPDK build system know where the PMDs are via 
CONFIG_RTE_EAL_PMD_PATH (if set of course, otherwise there's not much hope)

>
>> - user wants to know which package(s) need to be installed to support
>> the system hardware
>
> You mean "which DPDK packages"?

Yes. This is of course only relevant if PMDs are split across several 
different packages (splitting might not make much sense yet, but as the 
number grows that might well change)

> Are some informations showed when doing "packager search dpdk"?
> or "packager show dpdk-driverX"?
> Do you want to show the PCI ids in the description of the packages?

Something along those lines - such things are being done by distros for 
eg firmware, printer drivers, kernel drivers by modalias etc.

>> - user wants to list all supported hardware before going shopping
>
> Why doing shopping? For a DPDK usage or for a specific application?

To buy hardware which is supported by DPDK, in a general case.

> The application should mentions the supported hardware.
> For more general DPDK information, there is this this page:
> 	http://dpdk.org/doc/nics
> But it may be not enough accurate for some PCI id exceptions.
> For more details, he must use a listing tool.

Yes. The point is, what kind of tool/solution can be made to behave 
identically between shared and static configs, in a user-friendly way. I 
just listed a few obvious (to me at least) use-cases, and was asking for 
others that I didn't think of.

	- Panu -
Thomas Monjalon May 18, 2016, 1:26 p.m. UTC | #6
2016-05-18 16:09, Panu Matilainen:
> On 05/18/2016 03:38 PM, Thomas Monjalon wrote:
> > 2016-05-18 14:48, Panu Matilainen:
> >> Calling up on the list of requirements from
> >> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
> >> technical requirements but perhaps we should stop for a moment to think
> >> about the use-cases first?
> >>
> >> To name some from the top of my head:
> >> - user wants to know whether the hardware on the system is supported
> >
> > supported by what?
> > * by a statically linked app
> > * by a DPDK he has downloaded and built
> > * by a DPDK provided as shared library by its Linux vendor
> 
> All three?

Not at the same time ;)

> > In the first 2 cases he knows where the files are.
> > In the Linux distribution case, there can be a default directory set
> > by the Linux vendor for the script looking at the infos. Only the Linux
> > vendor knows where the PMDs files are.
> 
> For case 3), EAL and the DPDK build system know where the PMDs are via 
> CONFIG_RTE_EAL_PMD_PATH (if set of course, otherwise there's not much hope)

In case 3 (DPDK packaged in a distribution), I would rely on the packager (you)
who knows where the libraries are installed.
You can even have a script calling system tools (lspci or others from your
distribution) to get hardware info and then check whether it matches the PCI
IDs listed by the DPDK tool.
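(That lspci cross-check could be as simple as intersecting id sets. This is an
illustrative sketch only; the data shapes, a list of (vendor, device) pairs
from `lspci -n` and a per-driver id table from the DPDK tool, are assumptions.)

```python
def match_hardware(lspci_ids, driver_tables):
    """Cross-reference (vendor, device) pairs reported by `lspci -n` against
    per-driver PCI id tables extracted from the DPDK libraries.

    lspci_ids:     list of (vendor, device) tuples present on the system
    driver_tables: dict mapping driver name -> set of supported (vendor, device)
    Returns a dict of drivers that match at least one present device.
    """
    supported = {}
    for drv, ids in driver_tables.items():
        hits = [pair for pair in lspci_ids if pair in ids]
        if hits:
            supported[drv] = hits
    return supported
```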

> >> - user wants to know which package(s) need to be installed to support
> >> the system hardware
> >
> > You mean "which DPDK packages"?
> 
> Yes. This is of course only relevant if PMDs are split across several 
> different packages (splitting might not make much sense yet, but as the 
> number grows that might well change)
> 
> > Are some informations showed when doing "packager search dpdk"?
> > or "packager show dpdk-driverX"?
> > Do you want to show the PCI ids in the description of the packages?
> 
> Something along those lines - such things are being done by distros for 
> eg firmware, printer drivers, kernel drivers by modalias etc.

So the packager would call the DPDK tool listing PCI ids of compiled libs.

> >> - user wants to list all supported hardware before going shopping
> >
> > Why doing shopping? For a DPDK usage or for a specific application?
> 
> To buy hardware which is supported by DPDK, in a general case.
> 
> > The application should mentions the supported hardware.
> > For more general DPDK information, there is this this page:
> > 	http://dpdk.org/doc/nics
> > But it may be not enough accurate for some PCI id exceptions.
> > For more details, he must use a listing tool.
> 
> Yes. The point is, what kind of tool/solution can be made to behave 
> identically between shared and static configs, in a user-friendly way. I 
> just listed a few obvious (to me at least) use-cases, and was asking for 
> others that I didn't think of.

For a user-friendly output, we should export not only PCI IDs but also
the commercial names.

About the static/shared case, we can have a script which looks at testpmd
plus the shared libs. In a dev workspace, it is easy to find the files.
In a packaged system, the script can get some configuration variables from
the distribution.
Neil Horman May 18, 2016, 1:48 p.m. UTC | #7
On Wed, May 18, 2016 at 03:48:12PM +0300, Panu Matilainen wrote:
> On 05/18/2016 03:03 PM, Neil Horman wrote:
> > On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
> > > On 05/16/2016 11:41 PM, Neil Horman wrote:
> > > > This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
> > > > and, if found parses the remainder of the string as a json encoded string,
> > > > outputting the results in either a human readable or raw, script parseable
> > > > format
> > > > 
> > > > Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
> > > > CC: Bruce Richardson <bruce.richardson@intel.com>
> > > > CC: Thomas Monjalon <thomas.monjalon@6wind.com>
> > > > CC: Stephen Hemminger <stephen@networkplumber.org>
> > > > CC: Panu Matilainen <pmatilai@redhat.com>
> > > > ---
> > > >  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 174 insertions(+)
> > > >  create mode 100755 tools/pmd_hw_support.py
> > > > 
> > > > diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
> > > > new file mode 100755
> > > > index 0000000..0669aca
> > > > --- /dev/null
> > > > +++ b/tools/pmd_hw_support.py
> > > > @@ -0,0 +1,174 @@
> > > > +#!/usr/bin/python3
> > > 
> > > I think this should use /usr/bin/python to be consistent with the other
> > > python scripts, and like the others work with python 2 and 3. I only tested
> > > it with python2 after changing this and it seemed to work fine so the
> > > compatibility side should be fine as-is.
> > > 
> > Sure, I can change the python executable, that makes sense.
> > 
> > > On the whole, AFAICT the patch series does what it promises, and works for
> > > both static and shared linkage. Using JSON formatted strings in an ELF
> > > section is a sound working technical solution for the storage of the data.
> > > But the difference between the two cases makes me wonder about this all...
> > You mean the difference between checking static binaries and dynamic binaries?
> > yes, there is some functional difference there
> > 
> > > 
> > > For static library build, you'd query the application executable, eg
> > Correct.
> > 
> > > testpmd, to get the data out. For a shared library build, that method gives
> > > absolutely nothing because the data is scattered around in individual
> > > libraries which might be just about wherever, and you need to somehow
> > Correct, I figured that users would be smart enough to realize that with
> > dynamically linked executables, they would need to look at DSO's, but I agree,
> > its a glaring diffrence.
> 
> Being able to look at DSOs is good, but expecting the user to figure out
> which DSOs might be loaded and not and where to look is going to be well
> above many users. At very least it's not what I would call user-friendly.
> 
I disagree; there is no exportable linkage between an application and the DSOs
it opens via dlopen.  The only way to handle that is to have a
standard search path for the pmd_hw_info python script.  That's just how modinfo
works (i.e. "modinfo bnx2" finds the bnx2 module for the running kernel).  We
can of course do something similar, but we have no existing implicit path
information to draw from to do that (because you can have multiple dpdk installs
side by side).  The only way around that is to explicitly call out the path on
the command line.
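(A modinfo-style lookup, with the search path given explicitly rather than
inferred, might look like the hypothetical sketch below; resolve_pmd is my
name, not a function from the patch.)

```python
import os

def resolve_pmd(name, search_paths):
    """modinfo-style lookup: an absolute path is used as given; a bare
    library name is tried against each directory on an explicit search path.
    Returns the resolved path, or None if nothing matches."""
    if os.path.isabs(name):
        return name if os.path.exists(name) else None
    for d in search_paths:
        cand = os.path.join(d, name)
        if os.path.exists(cand):
            return cand
    return None
```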

> > > discover the location + correct library files to be able to query that. For
> > > the shared case, perhaps the script could be taught to walk files in
> > > CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
> > My initial thought would be to run ldd on the executable, and use a heuristic to
> > determine relevant pmd DSO's, and then feed each of those through the python
> > script.  I didn't want to go to that trouble unless there was consensus on it
> > though.
> 
> Problem is, ldd doesn't know about them either because the pmds are not
> linked to the executables at all anymore. They could be force-linked of
> course, but that means giving up the flexibility of plugins, which IMO is a
> no-go. Except maybe as an option, but then that would be a third case to
> support.
> 
That's not true at all; it's perfectly valid to link the DSOs
in at link time via -lrte_pmd_<driver>.  It's really just the dlopen case we
need to worry about.  I would argue that, if they're not explicitly linked in
like that, then it's correct to indicate that an application supports no
hardware, because it actually doesn't; it only supports the PMDs that it
chooses to list on the command line.  And if a user is savvy enough to specify
a PMD on the application command line, then they are perfectly capable of
specifying the same path to the hw_info script.

> 
> > 
> > > when querying the executable as with static builds. If identical operation
> > > between static and shared versions is a requirement (without running the app
> > > in question) then query through the executable itself is practically the
> > > only option. Unless some kind of (auto-generated) external config file
> > > system ala kernel depmod / modules.dep etc is brought into the picture.
> > Yeah, I'm really trying to avoid that, as I think its really not a typical part
> > of how user space libraries are interacted with.
> > 
> > > 
> > > For shared library configurations, having the data in the individual pmds is
> > > valuable as one could for example have rpm autogenerate provides from the
> > > data to ease/automate installation (in case of split packaging and/or 3rd
> > > party drivers). And no doubt other interesting possibilities. With static
> > > builds that kind of thing is not possible.
> > Right.
> > 
> > Note, this also leaves out PMD's that are loaded dynamically (i.e. via dlopen).
> > For those situations I don't think we have any way of 'knowing' that the
> > application intends to use them.
> 
> Hence my comment about CONFIG_RTE_EAL_PMD_PATH above, it at least provides a
> reasonable heuristic of what would be loaded by the app when run. But
> ultimately the only way to know what hardware is supported at a given time
> is to run an app which calls rte_eal_init() to load all the drivers that are
> present and work from there, because besides CONFIG_RTE_EAL_PMD_PATH this
> can be affected by runtime commandline switches and applies to both shared
> and static builds.
> 
I'm not sure I agree with that.  It's clearly tempting to use, but it's not
at all guaranteed to be accurate (the default is just set to "", and there is no
promise anyone will set it properly).  And it also requires that the binary
be tied to a specific release.  I really think that, given the fact that
distributions generally try to package dpdk in such a way that multiple dpdk
versions might be available, the right solution is to just require a full path
specification if you want to get hw info for a DSO that is dynamically loaded
via dlopen from the command line.  Otherwise you're going to fall into this trap
where you might be looking implicitly at an older version of the PMD while your
application may use a newer version.

> > > 
> > > Calling up on the list of requirements from
> > > http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
> > > technical requirements but perhaps we should stop for a moment to think
> > > about the use-cases first?
> > 
> > To ennumerate the list:
> > 
> > - query all drivers in static binary or shared library (works)
> > - stripping resiliency (works)
> > - human friendly (works)
> > - script friendly (works)
> > - show driver name (works)
> > - list supported device id / name (works)
> > - list driver options (not yet, but possible)
> > - show driver version if available (nope, but possible)
> > - show dpdk version (nope, but possible)
> > - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
> > - room for extra information? (works)
> > 
> > Of the items that are missing, I've already got a V2 started that can do driver
> > options, and is easier to expand.  Adding in the the DPDK and PMD version should
> > be easy (though I think they can be left out, as theres currently no globaly
> > defined DPDK release version, its all just implicit, and driver versions aren't
> > really there either).  I'm also hesitant to include kernel dependencies without
> > defining exactly what they mean (just module dependencies, or feature
> > enablement, or something else?).  Once we define it though, adding it can be
> > easy.
> 
> Yup. I just think the shared/static difference needs to be sorted out
> somehow, eg requiring user to know about DSOs is not human-friendly at all.
> That's why I called for the higher level use-cases in my previous email.
> 

I disagree with that.  While it's reasonable to give users the convenience of
scanning the DT_NEEDED entries of a binary and checking those DSO's, if a user
has to specify the PMD to load in an application (either on the command line or
via a configuration file), then it's reasonable to assume that they (a) know where
to find that pmd and (b) are savvy enough to pass that same path to a hardware
info tool.  That's exactly how modinfo works (save for the fact that
modinfo can implicitly check the running kernel version to find the appropriate
path for a modular driver).

The only other thing that seems reasonable to me would be to scan
LD_LIBRARY_PATH.  I would assume that, if an application is linked dynamically,
the individual DSO's (librte_sched.so, etc.) need to be in LD_LIBRARY_PATH.  If
that's the case, then we can assume that the appropriate PMD DSO's are there too,
and we can search there.  We can also check the standard /usr/lib and /lib paths
with that.  I think that would make fairly good sense.
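Something like the following untested sketch; the librte_pmd_* naming pattern and the fixed /usr/lib and /lib fallbacks are my assumptions, not anything the build system guarantees:

```python
# Sketch of the search heuristic described above: look for candidate PMD
# DSOs in LD_LIBRARY_PATH plus the standard library directories.
import glob
import os


def find_pmd_candidates():
    paths = os.environ.get("LD_LIBRARY_PATH", "").split(":")
    paths += ["/usr/lib", "/lib"]
    candidates = []
    for path in paths:
        if not path:
            continue  # skip empty entries from "::" or an unset variable
        candidates.extend(glob.glob(os.path.join(path, "librte_pmd_*.so*")))
    return candidates
```

Each candidate would then be fed through the hw info extraction just like an explicitly named DSO.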


> > 
> > I'll have a v2 posted soon, with the consensus corrections you have above, as
> > well as some other cleanups
> > 
> > Best
> > Neil
> > 
> > > 
> > > To name some from the top of my head:
> > > - user wants to know whether the hardware on the system is supported
> > > - user wants to know which package(s) need to be installed to support the
> > > system hardware
> > > - user wants to list all supported hardware before going shopping
> > > - [what else?]
> > > 
> > > ...and then think how these things would look like from the user
> > > perspective, in the light of the two quite dramatically differing cases of
> > > static vs shared linkage.
> 
> 
> 	- Panu -
>
Neil Horman May 18, 2016, 1:54 p.m. UTC | #8
On Wed, May 18, 2016 at 03:26:42PM +0200, Thomas Monjalon wrote:
> 2016-05-18 16:09, Panu Matilainen:
> > On 05/18/2016 03:38 PM, Thomas Monjalon wrote:
> > > 2016-05-18 14:48, Panu Matilainen:
> > >> Calling up on the list of requirements from
> > >> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
> > >> technical requirements but perhaps we should stop for a moment to think
> > >> about the use-cases first?
> > >>
> > >> To name some from the top of my head:
> > >> - user wants to know whether the hardware on the system is supported
> > >
> > > supported by what?
> > > * by a statically linked app
> > > * by a DPDK he has downloaded and built
> > > * by a DPDK provided as shared library by its Linux vendor
> > 
> > All three?
> 
> Not at the same time ;)
> 
> > > In the first 2 cases he knows where the files are.
> > > In the Linux distribution case, there can be a default directory set
> > > by the Linux vendor for the script looking at the infos. Only the Linux
> > > vendor knows where the PMDs files are.
> > 
> > For case 3), EAL and the DPDK build system know where the PMDs are via 
> > CONFIG_RTE_EAL_PMD_PATH (if set of course, otherwise there's not much hope)
> 
> In case 3 (DPDK packaged in distribution), I would rely on the packager (you)
> who knows where the libraries are installed.
> You can even have a script calling system tools (lspci or other from your
> distribution) to get hardware infos and then check if it matches the PCI ids
> listed by the DPDK tool.
> 

I think the only sane solution here is to scan for the file in
/lib:/usr/lib:$LD_LIBRARY_PATH.  That's the only way that we can come close to
mimicking what the application will do when linking.

Truthfully, the RTE_EAL_PMD_PATH variable should be dropped and set to that
anyway to ensure that the system admin can point to the right libraries when
installing an application.

> > >> - user wants to know which package(s) need to be installed to support
> > >> the system hardware
> > >
> > > You mean "which DPDK packages"?
> > 
> > Yes. This is of course only relevant if PMDs are split across several 
> > different packages (splitting might not make much sense yet, but as the 
> > number grows that might well change)
> > 
> > > Are some informations showed when doing "packager search dpdk"?
> > > or "packager show dpdk-driverX"?
> > > Do you want to show the PCI ids in the description of the packages?
> > 
> > Something along those lines - such things are being done by distros for 
> > eg firmware, printer drivers, kernel drivers by modalias etc.
> 
> So the packager would call the DPDK tool listing PCI ids of compiled libs.
> 
> > >> - user wants to list all supported hardware before going shopping
> > >
> > > Why doing shopping? For a DPDK usage or for a specific application?
> > 
> > To buy hardware which is supported by DPDK, in a general case.
> > 
> > > The application should mentions the supported hardware.
> > > For more general DPDK information, there is this this page:
> > > 	http://dpdk.org/doc/nics
> > > But it may be not enough accurate for some PCI id exceptions.
> > > For more details, he must use a listing tool.
> > 
> > Yes. The point is, what kind of tool/solution can be made to behave 
> > identically between shared and static configs, in a user-friendly way. I 
> > just listed a few obvious (to me at least) use-cases, and was asking for 
> > others that I didn't think of.
> 
> For a user-friendly output, we should not export only PCI ids but also
> the commercial names.
> 
That's not something that the DSO's should export explicitly.  If you want
commercial names, the hw_info tool should incorporate the use of the pci.ids
file from the PCI hardware database project (that's how lspci works).  That seems
like a nice bell and whistle to add later though.  Let's get the initial
functionality working first before we start adding features like that.
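For reference, the lookup itself is simple enough; a rough untested sketch, where the file location (/usr/share/hwdata/pci.ids on many distributions) and the format handling are my assumptions:

```python
# Map a PCI vendor/device id pair to a commercial name via pci.ids.
def lookup_pci_name(vendor, device, ids_file="/usr/share/hwdata/pci.ids"):
    vendor_name = None
    with open(ids_file, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue  # skip comments and blank lines
            if not line.startswith("\t"):
                # Vendor lines are unindented: "8086  Intel Corporation"
                if vendor_name is not None:
                    break  # walked past our vendor's block
                if line[:4] == vendor:
                    vendor_name = line[4:].strip()
            elif vendor_name is not None and not line.startswith("\t\t"):
                # Device lines use one tab: "\t1572  Ethernet Controller ..."
                if line[1:5] == device:
                    return "%s %s" % (vendor_name, line[5:].strip())
    return vendor_name
```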

> About the static/shared case, we can have a script which look at testpmd
> plus the shared libs. In a dev space, it is easy to find the files.
> In a packaged system, the script can get some configuration variables from
> the distribution.
> 
>
Panu Matilainen May 19, 2016, 6:08 a.m. UTC | #9
On 05/18/2016 04:48 PM, Neil Horman wrote:
> On Wed, May 18, 2016 at 03:48:12PM +0300, Panu Matilainen wrote:
>> On 05/18/2016 03:03 PM, Neil Horman wrote:
>>> On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
>>>> On 05/16/2016 11:41 PM, Neil Horman wrote:
>>>>> This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
>>>>> and, if found parses the remainder of the string as a json encoded string,
>>>>> outputting the results in either a human readable or raw, script parseable
>>>>> format
>>>>>
>>>>> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
>>>>> CC: Bruce Richardson <bruce.richardson@intel.com>
>>>>> CC: Thomas Monjalon <thomas.monjalon@6wind.com>
>>>>> CC: Stephen Hemminger <stephen@networkplumber.org>
>>>>> CC: Panu Matilainen <pmatilai@redhat.com>
>>>>> ---
>>>>>  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>  1 file changed, 174 insertions(+)
>>>>>  create mode 100755 tools/pmd_hw_support.py
>>>>>
>>>>> diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
>>>>> new file mode 100755
>>>>> index 0000000..0669aca
>>>>> --- /dev/null
>>>>> +++ b/tools/pmd_hw_support.py
>>>>> @@ -0,0 +1,174 @@
>>>>> +#!/usr/bin/python3
>>>>
>>>> I think this should use /usr/bin/python to be consistent with the other
>>>> python scripts, and like the others work with python 2 and 3. I only tested
>>>> it with python2 after changing this and it seemed to work fine so the
>>>> compatibility side should be fine as-is.
>>>>
>>> Sure, I can change the python executable, that makes sense.
>>>
>>>> On the whole, AFAICT the patch series does what it promises, and works for
>>>> both static and shared linkage. Using JSON formatted strings in an ELF
>>>> section is a sound working technical solution for the storage of the data.
>>>> But the difference between the two cases makes me wonder about this all...
>>> You mean the difference between checking static binaries and dynamic binaries?
>>> yes, there is some functional difference there
>>>
>>>>
>>>> For static library build, you'd query the application executable, eg
>>> Correct.
>>>
>>>> testpmd, to get the data out. For a shared library build, that method gives
>>>> absolutely nothing because the data is scattered around in individual
>>>> libraries which might be just about wherever, and you need to somehow
>>> Correct, I figured that users would be smart enough to realize that with
>>> dynamically linked executables, they would need to look at DSO's, but I agree,
>>> its a glaring diffrence.
>>
>> Being able to look at DSOs is good, but expecting the user to figure out
>> which DSOs might be loaded and not and where to look is going to be well
>> above many users. At very least it's not what I would call user-friendly.
>>
> I disagree, there is no linkage between an application and the dso's it opens
> via dlopen that is exportable.  The only way to handle that is to have a
> standard search path for the pmd_hw_info python script.  Thats just like modinfo
> works (i.e. "modinfo bnx2" finds the bnx2 module for the running kernel).  We
> can of course do something simmilar, but we have no existing implicit path
> information to draw from to do that (because you can have multiple dpdk installs
> side by side).  The only way around that is to explicitly call out the path on
> the command line.

There's no telling what libraries a user might load at runtime with -D;
that is true for both static and shared libraries.

When CONFIG_RTE_EAL_PMD_PATH is set, as it is likely to be on distro 
builds, you *know* that everything in that path will be loaded at 
runtime regardless of what commandline options there might be, so the 
situation is actually on par with static builds. Of course you still 
don't know about ones added with -D, but that's a limitation of any 
solution that works without actually running the app.

>
>>>> discover the location + correct library files to be able to query that. For
>>>> the shared case, perhaps the script could be taught to walk files in
>>>> CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
>>> My initial thought would be to run ldd on the executable, and use a heuristic to
>>> determine relevant pmd DSO's, and then feed each of those through the python
>>> script.  I didn't want to go to that trouble unless there was consensus on it
>>> though.
>>
>> Problem is, ldd doesn't know about them either because the pmds are not
>> linked to the executables at all anymore. They could be force-linked of
>> course, but that means giving up the flexibility of plugins, which IMO is a
>> no-go. Except maybe as an option, but then that would be a third case to
>> support.
>>
> Thats not true at all, or at least its a perfectly valid way to link the DSO's
> in at link time via -lrte_pmd_<driver>.  Its really just the dlopen case we need
> to worry about.  I would argue that, if they're not explicitly linked in like
> that, then its correct to indicate that an application supports no hardware,
> because it actually doesn't, it only supports the pmds that it chooses to list
> on the command line.  And if a user is savy enough to specify a pmd on the
> application command line, then they are perfectly capable of specifying the same
> path to the hw_info script.

Yes you can force-link apps to every driver in existence, but it 
requires not just linking but using --whole-archive. The apps in DPDK 
itself don't in a shared link setup (take a look at testpmd), and I think 
it's for a damn good reason - the drivers are plugins and that's how 
plugins are expected to work: they are not linked to, they reside in a 
specific path which is scanned at runtime and loaded to provide 
extra functionality.

>>
>>>
>>>> when querying the executable as with static builds. If identical operation
>>>> between static and shared versions is a requirement (without running the app
>>>> in question) then query through the executable itself is practically the
>>>> only option. Unless some kind of (auto-generated) external config file
>>>> system ala kernel depmod / modules.dep etc is brought into the picture.
>>> Yeah, I'm really trying to avoid that, as I think its really not a typical part
>>> of how user space libraries are interacted with.
>>>
>>>>
>>>> For shared library configurations, having the data in the individual pmds is
>>>> valuable as one could for example have rpm autogenerate provides from the
>>>> data to ease/automate installation (in case of split packaging and/or 3rd
>>>> party drivers). And no doubt other interesting possibilities. With static
>>>> builds that kind of thing is not possible.
>>> Right.
>>>
>>> Note, this also leaves out PMD's that are loaded dynamically (i.e. via dlopen).
>>> For those situations I don't think we have any way of 'knowing' that the
>>> application intends to use them.
>>
>> Hence my comment about CONFIG_RTE_EAL_PMD_PATH above, it at least provides a
>> reasonable heuristic of what would be loaded by the app when run. But
>> ultimately the only way to know what hardware is supported at a given time
>> is to run an app which calls rte_eal_init() to load all the drivers that are
>> present and work from there, because besides CONFIG_RTE_EAL_PMD_PATH this
>> can be affected by runtime commandline switches and applies to both shared
>> and static builds.
>>
> I'm not sure I agree with that.  Its clearly tempting to use, but its not
> at all guaranteed to be accurate (the default is just set to "", and there is no
> promise anyone will set it properly).

The promise is that shared builds are barely functional unless it's set 
correctly, because zero drivers are linked to testpmd in a shared config. 
So you're kinda likely to notice if it's not set.

It defaults to empty because at the time there was no standard 
installation available. Setting a reasonable default is still tricky 
because it needs to be set before build, whereas the install path is set 
at install time.

> And it also requires that the binary will
> be tied to a specific release.  I really think that, given the fact that
> distributions generally try to package dpdk in such a way that multiple dpdk
> versions might be available, the right solution is to just require a full path
> specification if you want to get hw info for a DSO that is dynamically loaded
> via dlopen from the command line.  Otherwise you're going to fall into this trap
> where you might be looking implicitly at an older version of the PMD while your
> application may use a newer version.

If there are multiple dpdk versions available then they just need to 
have separate PMD paths, but that's not a problem.

>>>>
>>>> Calling up on the list of requirements from
>>>> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
>>>> technical requirements but perhaps we should stop for a moment to think
>>>> about the use-cases first?
>>>
>>> To ennumerate the list:
>>>
>>> - query all drivers in static binary or shared library (works)
>>> - stripping resiliency (works)
>>> - human friendly (works)
>>> - script friendly (works)
>>> - show driver name (works)
>>> - list supported device id / name (works)
>>> - list driver options (not yet, but possible)
>>> - show driver version if available (nope, but possible)
>>> - show dpdk version (nope, but possible)
>>> - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
>>> - room for extra information? (works)
>>>
>>> Of the items that are missing, I've already got a V2 started that can do driver
>>> options, and is easier to expand.  Adding in the the DPDK and PMD version should
>>> be easy (though I think they can be left out, as theres currently no globaly
>>> defined DPDK release version, its all just implicit, and driver versions aren't
>>> really there either).  I'm also hesitant to include kernel dependencies without
>>> defining exactly what they mean (just module dependencies, or feature
>>> enablement, or something else?).  Once we define it though, adding it can be
>>> easy.
>>
>> Yup. I just think the shared/static difference needs to be sorted out
>> somehow, eg requiring user to know about DSOs is not human-friendly at all.
>> That's why I called for the higher level use-cases in my previous email.
>>
>
> I disagree with that.  While its reasonable to give users the convienience of
> scanning the DT_NEEDED entries of a binary and scanning those DSO's.  If a user

Scanning DT_NEEDED is of course the sane and right thing to do, it's just 
not sufficient.

> has to specify the PMD to load in an application (either on the command line or
> via a configuration file), then its reasonable assume that they (a) know where

But when the PMD path is set (as it should be on a distro build), this 
is all automatic with zero action or extra config required from the user.

> to find that pmd and (b) are savy enough to pass that same path to a hardware
> info tool.  Thats the exact same way that modinfo works (save for the fact that
> modinfo can implicitly check the running kernel version to find the appropriate
> path for a modular driver).
>
> The only other thing that seems reasonable to me would be to scan
> LD_LIBRARY_PATH.  I would assume that, if an application is linked dynamically,
> the individual DSO's (librte_sched.so, etc), need to be in LD_LIBRARY_PATH.  If
> thats the case, then we can assume that the appropriate PMD DSO's are there too,
> and we can search there.  We can also check the standard /usr/lib and /lib paths
> with that.  I think that would make fairly good sense.

You really don't want to go crashing through the potentially thousands of 
libraries in the standard library path going "is it a pmd, no, is it a 
pmd, no..."

	- Panu -

>
>>>
>>> I'll have a v2 posted soon, with the consensus corrections you have above, as
>>> well as some other cleanups
>>>
>>> Best
>>> Neil
>>>
>>>>
>>>> To name some from the top of my head:
>>>> - user wants to know whether the hardware on the system is supported
>>>> - user wants to know which package(s) need to be installed to support the
>>>> system hardware
>>>> - user wants to list all supported hardware before going shopping
>>>> - [what else?]
>>>>
>>>> ...and then think how these things would look like from the user
>>>> perspective, in the light of the two quite dramatically differing cases of
>>>> static vs shared linkage.
>>
>>
>> 	- Panu -
>>
Neil Horman May 19, 2016, 1:26 p.m. UTC | #10
On Thu, May 19, 2016 at 09:08:52AM +0300, Panu Matilainen wrote:
> On 05/18/2016 04:48 PM, Neil Horman wrote:
> > On Wed, May 18, 2016 at 03:48:12PM +0300, Panu Matilainen wrote:
> > > On 05/18/2016 03:03 PM, Neil Horman wrote:
> > > > On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
> > > > > On 05/16/2016 11:41 PM, Neil Horman wrote:
> > > > > > This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
> > > > > > and, if found parses the remainder of the string as a json encoded string,
> > > > > > outputting the results in either a human readable or raw, script parseable
> > > > > > format
> > > > > > 
> > > > > > Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
> > > > > > CC: Bruce Richardson <bruce.richardson@intel.com>
> > > > > > CC: Thomas Monjalon <thomas.monjalon@6wind.com>
> > > > > > CC: Stephen Hemminger <stephen@networkplumber.org>
> > > > > > CC: Panu Matilainen <pmatilai@redhat.com>
> > > > > > ---
> > > > > >  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
> > > > > >  1 file changed, 174 insertions(+)
> > > > > >  create mode 100755 tools/pmd_hw_support.py
> > > > > > 
> > > > > > diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
> > > > > > new file mode 100755
> > > > > > index 0000000..0669aca
> > > > > > --- /dev/null
> > > > > > +++ b/tools/pmd_hw_support.py
> > > > > > @@ -0,0 +1,174 @@
> > > > > > +#!/usr/bin/python3
> > > > > 
> > > > > I think this should use /usr/bin/python to be consistent with the other
> > > > > python scripts, and like the others work with python 2 and 3. I only tested
> > > > > it with python2 after changing this and it seemed to work fine so the
> > > > > compatibility side should be fine as-is.
> > > > > 
> > > > Sure, I can change the python executable, that makes sense.
> > > > 
> > > > > On the whole, AFAICT the patch series does what it promises, and works for
> > > > > both static and shared linkage. Using JSON formatted strings in an ELF
> > > > > section is a sound working technical solution for the storage of the data.
> > > > > But the difference between the two cases makes me wonder about this all...
> > > > You mean the difference between checking static binaries and dynamic binaries?
> > > > yes, there is some functional difference there
> > > > 
> > > > > 
> > > > > For static library build, you'd query the application executable, eg
> > > > Correct.
> > > > 
> > > > > testpmd, to get the data out. For a shared library build, that method gives
> > > > > absolutely nothing because the data is scattered around in individual
> > > > > libraries which might be just about wherever, and you need to somehow
> > > > Correct, I figured that users would be smart enough to realize that with
> > > > dynamically linked executables, they would need to look at DSO's, but I agree,
> > > > its a glaring diffrence.
> > > 
> > > Being able to look at DSOs is good, but expecting the user to figure out
> > > which DSOs might be loaded and not and where to look is going to be well
> > > above many users. At very least it's not what I would call user-friendly.
> > > 
> > I disagree, there is no linkage between an application and the dso's it opens
> > via dlopen that is exportable.  The only way to handle that is to have a
> > standard search path for the pmd_hw_info python script.  Thats just like modinfo
> > works (i.e. "modinfo bnx2" finds the bnx2 module for the running kernel).  We
> > can of course do something simmilar, but we have no existing implicit path
> > information to draw from to do that (because you can have multiple dpdk installs
> > side by side).  The only way around that is to explicitly call out the path on
> > the command line.
> 
> There's no telling what libraries user might load at runtime with -D, that
> is true for both static and shared libraries.
> 
I agree.

> When CONFIG_RTE_EAL_PMD_PATH is set, as it is likely to be on distro builds,
> you *know* that everything in that path will be loaded on runtime regardless
> of what commandline options there might be so the situation is actually on
> par with static builds. Of course you still dont know about ones added with
> -D but that's a limitation of any solution that works without actually
> running the app.
> 
It's not on par on ours, as the pmd libraries get placed in the same directory as
every other dpdk library, and no one wants to try (and fail) to load
rte_sched/rte_acl/etc. twice, deal with the fallout of trying to do so,
adjust the packaging so that pmds are placed in their own subdirectory, or
handle the need for multiuser support.

Using CONFIG_RTE_EAL_PMD_PATH also doesn't account for directory changes.  This
use case:
1) run pmdinfo <app>
2) remove DSOs from RTE_EAL_PMD_PATH
3) execute <app>

leads to erroneous results, as hardware support that was reported in (1) is no
longer available at (3)

It also completely misses any libraries that we load via the -d option on the
command line, which won't be included in RTE_EAL_PMD_PATH, so following that
path is a half measure at best, and I think that leads to erroneous results.

> > 
> > > > > discover the location + correct library files to be able to query that. For
> > > > > the shared case, perhaps the script could be taught to walk files in
> > > > > CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
> > > > My initial thought would be to run ldd on the executable, and use a heuristic to
> > > > determine relevant pmd DSO's, and then feed each of those through the python
> > > > script.  I didn't want to go to that trouble unless there was consensus on it
> > > > though.
> > > 
> > > Problem is, ldd doesn't know about them either because the pmds are not
> > > linked to the executables at all anymore. They could be force-linked of
> > > course, but that means giving up the flexibility of plugins, which IMO is a
> > > no-go. Except maybe as an option, but then that would be a third case to
> > > support.
> > > 
> > Thats not true at all, or at least its a perfectly valid way to link the DSO's
> > in at link time via -lrte_pmd_<driver>.  Its really just the dlopen case we need
> > to worry about.  I would argue that, if they're not explicitly linked in like
> > that, then its correct to indicate that an application supports no hardware,
> > because it actually doesn't, it only supports the pmds that it chooses to list
> > on the command line.  And if a user is savy enough to specify a pmd on the
> > application command line, then they are perfectly capable of specifying the same
> > path to the hw_info script.
> 
> Yes you can force-link apps to every driver on existence, but it requires
> not just linking but using --whole-archive.
For the static case, yes, and that's what DPDK does, and likely will in
perpetuity, unless there is a major architectural change in the project (see
commit 20afd76a504155e947c770783ef5023e87136ad8).

> The apps in DPDK itself dont in
> shared link setup (take a look at testpmd) and I think its for a damn good
> reason - the drivers are plugins and that's how plugins are expected to
> work: they are not linked to, they reside in a specific path which is

I know, I'm the one that made that change when we introduced the
PMD_REGISTER_DRIVER macro :).  That doesn't mean it's not a valid case when
building apps, and one that we can take advantage of opportunistically.  There
are three cases we have to handle:

1) Static linking - This is taken care of
2) Dynamic linking via DT_NEEDED entries - this is taken care of
3) Dynamic linking via dlopen - This is what we're discussing here
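Case 2 can be handled without ldd at all; a rough sketch, using the third-party pyelftools package (an assumption on my part; any ELF parser would do), where the librte_pmd_* naming filter is likewise an assumption:

```python
# Walk a binary's DT_NEEDED entries and pick out the PMD libraries it is
# hard-linked against.  needed_libraries() does the ELF work;
# pmd_dependencies() is a pure filter over the resulting names.
def needed_libraries(binary_path):
    from elftools.elf.elffile import ELFFile  # requires pyelftools
    with open(binary_path, "rb") as f:
        elf = ELFFile(f)
        dynamic = elf.get_section_by_name(".dynamic")
        if dynamic is None:
            return []  # statically linked binaries carry no DT_NEEDED
        return [tag.needed for tag in dynamic.iter_tags()
                if tag.entry.d_tag == "DT_NEEDED"]


def pmd_dependencies(needed):
    # PMD DSOs follow the librte_pmd_* naming convention
    return [lib for lib in needed if lib.startswith("librte_pmd_")]
```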

> scanned at runtime and plugins loaded to provide extra functionality.
It's the runtime part that makes this non-functional.  Just because you scan
all the DSO's in the RTE_EAL_PMD_PATH doesn't mean they will be there when the app
is run, nor does it mean you will get a comprehensive list of hardware support,
because it doesn't include additional paths/DSO's added via -d.  I would much
rather have users understand that an app has _no_ hardware support if it uses
DSO's, because the hardware support is included with the DSO's themselves, not the
application (save for the DT_NEEDED case above, where application execution is
predicated on the availability of those shared objects).

> 
> > > 
> > > > 
> > > > > when querying the executable as with static builds. If identical operation
> > > > > between static and shared versions is a requirement (without running the app
> > > > > in question) then query through the executable itself is practically the
> > > > > only option. Unless some kind of (auto-generated) external config file
> > > > > system ala kernel depmod / modules.dep etc is brought into the picture.
> > > > Yeah, I'm really trying to avoid that, as I think its really not a typical part
> > > > of how user space libraries are interacted with.
> > > > 
> > > > > 
> > > > > For shared library configurations, having the data in the individual pmds is
> > > > > valuable as one could for example have rpm autogenerate provides from the
> > > > > data to ease/automate installation (in case of split packaging and/or 3rd
> > > > > party drivers). And no doubt other interesting possibilities. With static
> > > > > builds that kind of thing is not possible.
> > > > Right.
> > > > 
> > > > Note, this also leaves out PMD's that are loaded dynamically (i.e. via dlopen).
> > > > For those situations I don't think we have any way of 'knowing' that the
> > > > application intends to use them.
> > > 
> > > Hence my comment about CONFIG_RTE_EAL_PMD_PATH above, it at least provides a
> > > reasonable heuristic of what would be loaded by the app when run. But
> > > ultimately the only way to know what hardware is supported at a given time
> > > is to run an app which calls rte_eal_init() to load all the drivers that are
> > > present and work from there, because besides CONFIG_RTE_EAL_PMD_PATH this
> > > can be affected by runtime commandline switches and applies to both shared
> > > and static builds.
> > > 
> > I'm not sure I agree with that.  Its clearly tempting to use, but its not
> > at all guaranteed to be accurate (the default is just set to "", and there is no
> > promise anyone will set it properly).
> 
> The promise is that shared builds are barely functional unless its set
> correctly, because zero drivers are linked to testpmd in shared config. So
> you're kinda likely to notice if its not set.
> 
You're twisting the meaning of 'barely functional' here.  I agree that shared
builds are barely functional, because they have no self-contained hardware
support, and as such, running pmdinfo.py on such an application should report
exactly that.

That said, in order to run, any DPDK application built to use shared libraries
has to use one of two methods to obtain hardware support:

A) direct shared linking (the DT_NEEDED case) - This case is handled, and we
report hardware support when found, as the application won't run unless those
libraries are resolved

B) dynamic loading via dlopen - This case shouldn't be handled, because the
application in reality doesn't support any hardware.  Hardware support is
garnered at run time when the EAL_PMD_PATH (and any other paths added via the -d
option) are scanned.  In this case, pmdinfo shouldn't report any hardware
support; it should only do so if the pmd DSO is queried directly.
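Querying a pmd DSO directly is cheap; something along these lines (untested sketch - a raw byte scan for the primer string rather than the proper ELF symbol walk the real tool does):

```python
# Scan a DSO for "PMD_DRIVER_INFO=" primer strings and decode the JSON
# records that follow them.  Strings in ELF string data are
# NUL-terminated, which is what delimits each record here.
import json


def extract_pmd_info(path):
    marker = b"PMD_DRIVER_INFO="
    records = []
    with open(path, "rb") as f:
        data = f.read()
    start = data.find(marker)
    while start != -1:
        end = data.find(b"\0", start)
        blob = data[start + len(marker):end if end != -1 else None]
        records.append(json.loads(blob.decode("utf-8")))
        start = data.find(marker, start + 1)
    return records
```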

> It defaults to empty because at the time there was no standard installation
> available at that time. Setting a reasonable default is tricky still because
> it needs to be set before build whereas install path is set at install time.
> 
Exactly, which is why distros don't use it.  It also neglects the multiuser case
(in which different users may want to load different hardware support).

> > And it also requires that the binary will
> > be tied to a specific release.  I really think that, given the fact that
> > distributions generally try to package dpdk in such a way that multiple dpdk
> > versions might be available, the right solution is to just require a full path
> > specification if you want to get hw info for a DSO that is dynamically loaded
> > via dlopen from the command line.  Otherwise you're going to fall into this trap
> > where you might be looking implicitly at an older version of the PMD while your
> > application may use a newer version.
> 
> If there are multiple dpdk versions available then they just need to have
> separate PMD paths, but that's not a problem.
> 
> > > > > 
> > > > > Calling up on the list of requirements from
> > > > > http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
> > > > > technical requirements but perhaps we should stop for a moment to think
> > > > > about the use-cases first?
> > > > 
> > > > To ennumerate the list:
> > > > 
> > > > - query all drivers in static binary or shared library (works)
> > > > - stripping resiliency (works)
> > > > - human friendly (works)
> > > > - script friendly (works)
> > > > - show driver name (works)
> > > > - list supported device id / name (works)
> > > > - list driver options (not yet, but possible)
> > > > - show driver version if available (nope, but possible)
> > > > - show dpdk version (nope, but possible)
> > > > - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
> > > > - room for extra information? (works)
> > > > 
> > > > Of the items that are missing, I've already got a V2 started that can do driver
> > > > options, and is easier to expand.  Adding in the the DPDK and PMD version should
> > > > be easy (though I think they can be left out, as theres currently no globaly
> > > > defined DPDK release version, its all just implicit, and driver versions aren't
> > > > really there either).  I'm also hesitant to include kernel dependencies without
> > > > defining exactly what they mean (just module dependencies, or feature
> > > > enablement, or something else?).  Once we define it though, adding it can be
> > > > easy.
> > > 
> > > Yup. I just think the shared/static difference needs to be sorted out
> > > somehow, eg requiring user to know about DSOs is not human-friendly at all.
> > > That's why I called for the higher level use-cases in my previous email.
> > > 
> > 
> > I disagree with that.  While its reasonable to give users the convienience of
> > scanning the DT_NEEDED entries of a binary and scanning those DSO's.  If a user
> 
> Scanning DT_NEEDED is of course ok sane and right thing to do, its just not
> sufficient.
> 
But it's the only sane thing we can do implicitly in the shared case, because we
know those drivers have to be resolved for the app to run.  In the
RTE_EAL_PMD_PATH or -d cases, we don't know until run time which drivers that will
include, and so reporting on hardware support prior to run time via scanning of
the application is erroneous.  The sane thing to do is scan the pmd DSO, which
is where the hardware support resides, and make it clear that, for an
application to get that hardware support, it needs to either link dynamically
(via a DT_NEEDED entry), specify the pmd on the command line, or make sure it's in
RTE_EAL_PMD_PATH (if the distribution set it, which, so far, no one does).
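Querying the pmd DSO directly amounts to finding each PMD_DRIVER_INFO= marker in the file and parsing the JSON payload that follows it. A hedged sketch of that core step (the function name and the NUL-terminator handling are illustrative assumptions, not the tool's actual code):

```python
import json

MARKER = b"PMD_DRIVER_INFO="

def extract_pmd_info(blob):
    """Scan a binary blob for PMD_DRIVER_INFO= markers and parse the
    JSON payload that follows each one.  This works on stripped
    binaries too, since the marker lives in data, not the symbol table."""
    decoder = json.JSONDecoder()
    infos = []
    start = 0
    while True:
        idx = blob.find(MARKER, start)
        if idx < 0:
            break
        payload = blob[idx + len(MARKER):]
        nul = payload.find(b"\x00")          # ELF strings are NUL-terminated
        end = nul if nul >= 0 else len(payload)
        text = payload[:end].decode("utf-8", "replace")
        try:
            obj, _ = decoder.raw_decode(text)
            infos.append(obj)
        except ValueError:
            pass                             # marker without valid JSON: skip
        start = idx + len(MARKER)
    return infos
```

Running this over a pmd DSO reports exactly the hardware that DSO provides, independent of whatever application eventually loads it.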

> > has to specify the PMD to load in an application (either on the command line or
> > via a configuration file), then its reasonable assume that they (a) know where
> 
> But when the PMD path is set (as it should be on a distro build), this is
> all automatic with zero action or extra config required from the user.
> 
It's not: RHEL doesn't do it, Fedora doesn't do it, Ubuntu doesn't do it.  Even
if it is, it doesn't imply an application will get that support, as the
directory contents may change between scanning and application run time.

> > to find that pmd and (b) are savy enough to pass that same path to a hardware
> > info tool.  Thats the exact same way that modinfo works (save for the fact that
> > modinfo can implicitly check the running kernel version to find the appropriate
> > path for a modular driver).
> > 
> > The only other thing that seems reasonable to me would be to scan
> > LD_LIBRARY_PATH.  I would assume that, if an application is linked dynamically,
> > the individual DSO's (librte_sched.so, etc), need to be in LD_LIBRARY_PATH.  If
> > thats the case, then we can assume that the appropriate PMD DSO's are there too,
> > and we can search there.  We can also check the standard /usr/lib and /lib paths
> > with that.  I think that would make fairly good sense.
> 
> You really don't want go crashing through the potentially thousands of
> libraries in the standard library path going "is it a pmd, no, is it a pmd,
> no..."
What?  No.  You misunderstand.  In the above, all I'm saying is that if you
specify an application, you can scan LD_LIBRARY_PATH for the libraries in its
DT_NEEDED list.  I don't mean to say that we should check _every_ library in
LD_LIBRARY_PATH, that would be crazy.  Of course, the same problem exists with
RTE_EAL_PMD_PATH.  In the current installation, all pmd libraries are co-located
with the rest of the dpdk library set, so RTE_EAL_PMD_PATH would have to be set
to whatever that installation path was.  And once that's done, we would have to
go crashing through every DPDK library to see if it was a pmd.  That's just as
insane.
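Resolving a DT_NEEDED soname roughly the way the dynamic linker would is simple to sketch. Note this deliberately ignores RPATH/RUNPATH and /etc/ld.so.conf, which the real loader also consults, so it is an approximation rather than ld.so's actual algorithm:

```python
import os

def resolve_needed(soname, extra_paths=()):
    """Locate a DT_NEEDED soname roughly the way the dynamic linker
    would: LD_LIBRARY_PATH first, then standard system library dirs.
    (Real ld.so also honors RPATH/RUNPATH and /etc/ld.so.conf,
    omitted here for brevity.)"""
    ld_path = os.environ.get("LD_LIBRARY_PATH", "")
    search = [p for p in ld_path.split(":") if p]
    search += list(extra_paths) + ["/usr/lib64", "/usr/lib", "/lib64", "/lib"]
    for d in search:
        candidate = os.path.join(d, soname)
        if os.path.isfile(candidate):
            return candidate
    return None
```

Each resolved path could then be fed back through the marker scan, giving the DT_NEEDED convenience without touching unrelated libraries.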

Neil
Panu Matilainen May 20, 2016, 7:30 a.m. UTC | #11
On 05/19/2016 04:26 PM, Neil Horman wrote:
> On Thu, May 19, 2016 at 09:08:52AM +0300, Panu Matilainen wrote:
>> On 05/18/2016 04:48 PM, Neil Horman wrote:
>>> On Wed, May 18, 2016 at 03:48:12PM +0300, Panu Matilainen wrote:
>>>> On 05/18/2016 03:03 PM, Neil Horman wrote:
>>>>> On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
>>>>>> On 05/16/2016 11:41 PM, Neil Horman wrote:
>>>>>>> This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
>>>>>>> and, if found parses the remainder of the string as a json encoded string,
>>>>>>> outputting the results in either a human readable or raw, script parseable
>>>>>>> format
>>>>>>>
>>>>>>> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
>>>>>>> CC: Bruce Richardson <bruce.richardson@intel.com>
>>>>>>> CC: Thomas Monjalon <thomas.monjalon@6wind.com>
>>>>>>> CC: Stephen Hemminger <stephen@networkplumber.org>
>>>>>>> CC: Panu Matilainen <pmatilai@redhat.com>
>>>>>>> ---
>>>>>>>  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>>  1 file changed, 174 insertions(+)
>>>>>>>  create mode 100755 tools/pmd_hw_support.py
>>>>>>>
>>>>>>> diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
>>>>>>> new file mode 100755
>>>>>>> index 0000000..0669aca
>>>>>>> --- /dev/null
>>>>>>> +++ b/tools/pmd_hw_support.py
>>>>>>> @@ -0,0 +1,174 @@
>>>>>>> +#!/usr/bin/python3
>>>>>>
>>>>>> I think this should use /usr/bin/python to be consistent with the other
>>>>>> python scripts, and like the others work with python 2 and 3. I only tested
>>>>>> it with python2 after changing this and it seemed to work fine so the
>>>>>> compatibility side should be fine as-is.
>>>>>>
>>>>> Sure, I can change the python executable, that makes sense.
>>>>>
>>>>>> On the whole, AFAICT the patch series does what it promises, and works for
>>>>>> both static and shared linkage. Using JSON formatted strings in an ELF
>>>>>> section is a sound working technical solution for the storage of the data.
>>>>>> But the difference between the two cases makes me wonder about this all...
>>>>> You mean the difference between checking static binaries and dynamic binaries?
>>>>> yes, there is some functional difference there
>>>>>
>>>>>>
>>>>>> For static library build, you'd query the application executable, eg
>>>>> Correct.
>>>>>
>>>>>> testpmd, to get the data out. For a shared library build, that method gives
>>>>>> absolutely nothing because the data is scattered around in individual
>>>>>> libraries which might be just about wherever, and you need to somehow
>>>>> Correct, I figured that users would be smart enough to realize that with
>>>>> dynamically linked executables, they would need to look at DSO's, but I agree,
>>>>> its a glaring diffrence.
>>>>
>>>> Being able to look at DSOs is good, but expecting the user to figure out
>>>> which DSOs might be loaded and not and where to look is going to be well
>>>> above many users. At very least it's not what I would call user-friendly.
>>>>
>>> I disagree, there is no linkage between an application and the dso's it opens
>>> via dlopen that is exportable.  The only way to handle that is to have a
>>> standard search path for the pmd_hw_info python script.  Thats just like modinfo
>>> works (i.e. "modinfo bnx2" finds the bnx2 module for the running kernel).  We
>>> can of course do something simmilar, but we have no existing implicit path
>>> information to draw from to do that (because you can have multiple dpdk installs
>>> side by side).  The only way around that is to explicitly call out the path on
>>> the command line.
>>
>> There's no telling what libraries user might load at runtime with -D, that
>> is true for both static and shared libraries.
>>
> I agree.
>
>> When CONFIG_RTE_EAL_PMD_PATH is set, as it is likely to be on distro builds,
>> you *know* that everything in that path will be loaded on runtime regardless
>> of what commandline options there might be so the situation is actually on
>> par with static builds. Of course you still dont know about ones added with
>> -D but that's a limitation of any solution that works without actually
>> running the app.
>>
> Its not on ours, as the pmd libraries get placed in the same directory as every
> other dpdk library, and no one wants to try (and fail to load
> rte_sched/rte_acl/etc twice, or deal with the fallout of trying to do so, or
> adjust the packaging so that pmds are placed in their own subdirectory, or
> handle the need for multiuser support.

Err. I suggest you actually look at the package.

>
> Using CONFIG_RTE_EAL_PMD_PATH also doesn't account for directory changes.  This
> use case:
> 1) run pmdinfo <app>
> 2) remove DSOs from RTE_EAL_PMD_PATH
> 3) execute <app>
>
> leads to erroneous results, as hardware support that was reported in (1) is no
> longer available at (3)

Yes, and in place of 2) you could also add DSOs there if you found
something missing. Just like updating a statically linked <app> at 2) could
change it. RTE_EAL_PMD_PATH is not expected to point to a /tmp-like
location where stuff randomly appears and disappears.

>
> It also completely misses any libraries that we load via the -d option on the
> command line, which won't be included in RTE_EAL_PMD_PATH, so following that
> path is a half measure at best, and I think that leads to erroneous results.

This same problem with -d exists for statically linked apps, as you 
actually agreed earlier in your email. So that's hardly an argument here.



>>>
>>>>>> discover the location + correct library files to be able to query that. For
>>>>>> the shared case, perhaps the script could be taught to walk files in
>>>>>> CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
>>>>> My initial thought would be to run ldd on the executable, and use a heuristic to
>>>>> determine relevant pmd DSO's, and then feed each of those through the python
>>>>> script.  I didn't want to go to that trouble unless there was consensus on it
>>>>> though.
>>>>
>>>> Problem is, ldd doesn't know about them either because the pmds are not
>>>> linked to the executables at all anymore. They could be force-linked of
>>>> course, but that means giving up the flexibility of plugins, which IMO is a
>>>> no-go. Except maybe as an option, but then that would be a third case to
>>>> support.
>>>>
>>> Thats not true at all, or at least its a perfectly valid way to link the DSO's
>>> in at link time via -lrte_pmd_<driver>.  Its really just the dlopen case we need
>>> to worry about.  I would argue that, if they're not explicitly linked in like
>>> that, then its correct to indicate that an application supports no hardware,
>>> because it actually doesn't, it only supports the pmds that it chooses to list
>>> on the command line.  And if a user is savy enough to specify a pmd on the
>>> application command line, then they are perfectly capable of specifying the same
>>> path to the hw_info script.
>>
>> Yes you can force-link apps to every driver on existence, but it requires
>> not just linking but using --whole-archive.
> For the static case, yes, and thats what DPDK does, and likely will in
> perpituity, unless there is a major architectural change in the project (see
> commit 20afd76a504155e947c770783ef5023e87136ad8).
>
>> The apps in DPDK itself dont in
>> shared link setup (take a look at testpmd) and I think its for a damn good
>> reason - the drivers are plugins and that's how plugins are expected to
>> work: they are not linked to, they reside in a specific path which is
>
> I know, I'm the one that made that change when we introduced the
> PMD_REGISTER_DRIVER macro :).  That doesn't mean its not a valid case when
> building apps, and one that we can take advantage of opportunistically.  There
> are three cases we have to handle:
>
> 1) Static linking - This is taken care of
> 2) Dynamic linking via DT_NEEDED entries - this is taken care of
> 3) Dynamic linking via dlopen - This is what we're discussing here
>
>> scanned at runtime and plugins loaded to provide extra functionality.
> Its the runtime part that makes this non-functional.  Just because you scan
> all the DSO's in the RTE_EAL_PATH, doesn't mean they will be there when the app
> is run, nor does it mean you will get a comprehensive list of hardware support,
> because it doesn't include additional paths/DSO's added via -d.  I would much
> rather have users understand that an app has _no_ hardware support if it uses
> DSO's, because the hardware support is included with the DSO's themself, not the
> application (saving for the DT_NEEDED case above, where application execution is
> predicated on the availability of those shared objects)

I'm not going to repeat all the earlier arguments from above, but -d is
different because it's specified by the user at run time.

RTE_EAL_PMD_PATH is built into the EAL library and you can't disable it
at run time. So an app linked to EAL with RTE_EAL_PMD_PATH configured is
guaranteed to load everything from that path, regardless of what the
user specifies at run time. I agree it is somewhat different from the
static case because it's, well, dynamic, by design. Note that I fully
agree there is value in being able to query *just* the binary with no
magic lookups, because for some uses you want exactly that, so it'd need
to be possible to disable such lookup in the tool.
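The "guaranteed to load everything" behavior is just a directory scan at rte_eal_init() time. A rough sketch of which files such a scan would pick up (the `.so` filter is an illustrative assumption; EAL's actual plugin code may try every regular file in the directory):

```python
import os

def autoload_candidates(pmd_path):
    """List the shared objects an EAL-style plugin scan would try to
    load from RTE_EAL_PMD_PATH at init time.  The path is baked in at
    build time, so this set is fixed regardless of -d options.
    Filtering on '.so' is an assumption for illustration."""
    if not pmd_path or not os.path.isdir(pmd_path):
        return []
    return sorted(
        os.path.join(pmd_path, name)
        for name in os.listdir(pmd_path)
        if name.endswith(".so")
    )
```

A query tool walking the same directory would therefore approximate the runtime plugin set, which is exactly the heuristic being debated here.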

Anyway, unless something really new turns up in this discussion I'm 
going to shut up now since I would hope I've made my opinion and point 
clear by now.

>>
>>>>
>>>>>
>>>>>> when querying the executable as with static builds. If identical operation
>>>>>> between static and shared versions is a requirement (without running the app
>>>>>> in question) then query through the executable itself is practically the
>>>>>> only option. Unless some kind of (auto-generated) external config file
>>>>>> system ala kernel depmod / modules.dep etc is brought into the picture.
>>>>> Yeah, I'm really trying to avoid that, as I think its really not a typical part
>>>>> of how user space libraries are interacted with.
>>>>>
>>>>>>
>>>>>> For shared library configurations, having the data in the individual pmds is
>>>>>> valuable as one could for example have rpm autogenerate provides from the
>>>>>> data to ease/automate installation (in case of split packaging and/or 3rd
>>>>>> party drivers). And no doubt other interesting possibilities. With static
>>>>>> builds that kind of thing is not possible.
>>>>> Right.
>>>>>
>>>>> Note, this also leaves out PMD's that are loaded dynamically (i.e. via dlopen).
>>>>> For those situations I don't think we have any way of 'knowing' that the
>>>>> application intends to use them.
>>>>
>>>> Hence my comment about CONFIG_RTE_EAL_PMD_PATH above, it at least provides a
>>>> reasonable heuristic of what would be loaded by the app when run. But
>>>> ultimately the only way to know what hardware is supported at a given time
>>>> is to run an app which calls rte_eal_init() to load all the drivers that are
>>>> present and work from there, because besides CONFIG_RTE_EAL_PMD_PATH this
>>>> can be affected by runtime commandline switches and applies to both shared
>>>> and static builds.
>>>>
>>> I'm not sure I agree with that.  Its clearly tempting to use, but its not
>>> at all guaranteed to be accurate (the default is just set to "", and there is no
>>> promise anyone will set it properly).
>>
>> The promise is that shared builds are barely functional unless it's set
>> correctly, because zero drivers are linked to testpmd in a shared config. So
>> you're kinda likely to notice if it's not set.
>>
> You're twisting the meaning of 'barely functional' here.  I agree that shared
> builds are barely functional, because they have no self-contained hardware
> support, and as such, running pmdinfo.py on such an application should report
> exactly that.
>
> That said, in order to run, a DPDK application built to use shared libraries
> has to use one of two methods to obtain hardware support:
>
> A) direct shared linking (the DT_NEEDED case) - This case is handled, and we
> report hardware support when found, as the application won't run unless those
> libraries are resolved.
>
> B) dynamic loading via dlopen - This case shouldn't be handled, because the
> application in reality doesn't support any hardware.  Hardware support is
> garnered at run time when EAL_PMD_PATH (and any other paths added via the -d
> option) are scanned.  In this case, pmdinfo shouldn't report any hardware
> support; it should only do so if the pmd DSO is queried directly.
>
>> It defaults to empty because at the time there was no standard installation
>> available at that time. Setting a reasonable default is tricky still because
>> it needs to be set before build whereas install path is set at install time.
>>
> Exactly, which is why distros don't use it.  It also neglects the multiuser case
> (in which different users may want to load different hardware support).

Ehh? It exists *primarily* for distro needs. I suggest you take a look at
how it all works in the current Fedora and RHEL packages. The packaging is
monolithic at the moment, but thanks to the plugin autoloading it would
be possible to split the drivers into different subpackages, to e.g. provide
a minimal driver package for use in virtual guests where every byte of
space counts, and to allow 3rd-party drivers to be dropped in, and so on.

>>> And it also requires that the binary will
>>> be tied to a specific release.  I really think that, given the fact that
>>> distributions generally try to package dpdk in such a way that multiple dpdk
>>> versions might be available, the right solution is to just require a full path
>>> specification if you want to get hw info for a DSO that is dynamically loaded
>>> via dlopen from the command line.  Otherwise you're going to fall into this trap
>>> where you might be looking implicitly at an older version of the PMD while your
>>> application may use a newer version.
>>
>> If there are multiple dpdk versions available then they just need to have
>> separate PMD paths, but that's not a problem.
>>
>>>>>>
>>>>>> Calling up on the list of requirements from
>>>>>> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
>>>>>> technical requirements but perhaps we should stop for a moment to think
>>>>>> about the use-cases first?
>>>>>
>>>>> To ennumerate the list:
>>>>>
>>>>> - query all drivers in static binary or shared library (works)
>>>>> - stripping resiliency (works)
>>>>> - human friendly (works)
>>>>> - script friendly (works)
>>>>> - show driver name (works)
>>>>> - list supported device id / name (works)
>>>>> - list driver options (not yet, but possible)
>>>>> - show driver version if available (nope, but possible)
>>>>> - show dpdk version (nope, but possible)
>>>>> - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
>>>>> - room for extra information? (works)
>>>>>
>>>>> Of the items that are missing, I've already got a V2 started that can do driver
>>>>> options, and is easier to expand.  Adding in the the DPDK and PMD version should
>>>>> be easy (though I think they can be left out, as theres currently no globaly
>>>>> defined DPDK release version, its all just implicit, and driver versions aren't
>>>>> really there either).  I'm also hesitant to include kernel dependencies without
>>>>> defining exactly what they mean (just module dependencies, or feature
>>>>> enablement, or something else?).  Once we define it though, adding it can be
>>>>> easy.
>>>>
>>>> Yup. I just think the shared/static difference needs to be sorted out
>>>> somehow, eg requiring user to know about DSOs is not human-friendly at all.
>>>> That's why I called for the higher level use-cases in my previous email.
>>>>
>>>
>>> I disagree with that.  While its reasonable to give users the convienience of
>>> scanning the DT_NEEDED entries of a binary and scanning those DSO's.  If a user
>>
>> Scanning DT_NEEDED is of course ok sane and right thing to do, its just not
>> sufficient.
>>
> But it's the only sane thing we can do implicitly in the shared case, because we
> know those drivers have to be resolved for the app to run.  In the
> RTE_EAL_PMD_PATH or -d cases, we don't know until run time which drivers that will
> include, and so reporting on hardware support prior to run time via scanning of
> the application is erroneous.  The sane thing to do is scan the pmd DSO, which
> is where the hardware support resides, and make it clear that, for an
> application to get that hardware support, it needs to either link dynamically
> (via a DT_NEEDED entry), specify the pmd on the command line, or make sure it's in
> RTE_EAL_PMD_PATH (if the distribution set it, which, so far, no one does).
>
>>> has to specify the PMD to load in an application (either on the command line or
>>> via a configuration file), then its reasonable assume that they (a) know where
>>
>> But when the PMD path is set (as it should be on a distro build), this is
>> all automatic with zero action or extra config required from the user.
>>
> It's not: RHEL doesn't do it, Fedora doesn't do it, Ubuntu doesn't do it.  Even

Again, check your facts, please. I don't know about Ubuntu, but Fedora and
RHEL do set it to make the damn thing actually work out of the box
without requiring the user to figure out magic -d arguments.

> if it is, it doesn't imply an application will get that support, as the
> directory contents may change between scanning and application run time.



>>> to find that pmd and (b) are savy enough to pass that same path to a hardware
>>> info tool.  Thats the exact same way that modinfo works (save for the fact that
>>> modinfo can implicitly check the running kernel version to find the appropriate
>>> path for a modular driver).
>>>
>>> The only other thing that seems reasonable to me would be to scan
>>> LD_LIBRARY_PATH.  I would assume that, if an application is linked dynamically,
>>> the individual DSO's (librte_sched.so, etc), need to be in LD_LIBRARY_PATH.  If
>>> thats the case, then we can assume that the appropriate PMD DSO's are there too,
>>> and we can search there.  We can also check the standard /usr/lib and /lib paths
>>> with that.  I think that would make fairly good sense.
>>
>> You really don't want go crashing through the potentially thousands of
>> libraries in the standard library path going "is it a pmd, no, is it a pmd,
>> no..."
> What?  No.  You misunderstand.  In the above, all I'm saying is that if you
> specify an application, you can scan LD_LIBRARY_PATH for the libraries in its
> DT_NEEDED list.  I don't mean to say that we should check _every_ library in
> LD_LIBRARY_PATH, that would be crazy.  Of course, the same problem exists with
> RTE_EAL_PMD_PATH.  In the current installation, all pmd libraries are co-located
> with the rest of the dpdk library set, so RTE_EAL_PMD_PATH would have to be set
> to whatever that installation path was.  And once that's done, we would have to
> go crashing through every DPDK library to see if it was a pmd.  That's just as
> insane.

Yes, I misunderstood what you meant by looking through LD_LIBRARY_PATH,
and you misunderstood what I meant by it. I guess we can agree on having
misunderstood each other :) RTE_EAL_PMD_PATH needs to point to a
directory where nothing but PMDs exist. That is the standard practice
with plugins in all userland software.

	- Panu -

>
> Neil
>
Neil Horman May 20, 2016, 2:06 p.m. UTC | #12
On Fri, May 20, 2016 at 10:30:27AM +0300, Panu Matilainen wrote:
> On 05/19/2016 04:26 PM, Neil Horman wrote:
> > On Thu, May 19, 2016 at 09:08:52AM +0300, Panu Matilainen wrote:
> > > On 05/18/2016 04:48 PM, Neil Horman wrote:
> > > > On Wed, May 18, 2016 at 03:48:12PM +0300, Panu Matilainen wrote:
> > > > > On 05/18/2016 03:03 PM, Neil Horman wrote:
> > > > > > On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
> > > > > > > On 05/16/2016 11:41 PM, Neil Horman wrote:
> > > > > > > > This tool searches for the primer sting PMD_DRIVER_INFO= in any ELF binary,
> > > > > > > > and, if found parses the remainder of the string as a json encoded string,
> > > > > > > > outputting the results in either a human readable or raw, script parseable
> > > > > > > > format
> > > > > > > > 
> > > > > > > > Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
> > > > > > > > CC: Bruce Richardson <bruce.richardson@intel.com>
> > > > > > > > CC: Thomas Monjalon <thomas.monjalon@6wind.com>
> > > > > > > > CC: Stephen Hemminger <stephen@networkplumber.org>
> > > > > > > > CC: Panu Matilainen <pmatilai@redhat.com>
> > > > > > > > ---
> > > > > > > >  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
> > > > > > > >  1 file changed, 174 insertions(+)
> > > > > > > >  create mode 100755 tools/pmd_hw_support.py
> > > > > > > > 
> > > > > > > > diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
> > > > > > > > new file mode 100755
> > > > > > > > index 0000000..0669aca
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/tools/pmd_hw_support.py
> > > > > > > > @@ -0,0 +1,174 @@
> > > > > > > > +#!/usr/bin/python3
> > > > > > > 
> > > > > > > I think this should use /usr/bin/python to be consistent with the other
> > > > > > > python scripts, and like the others work with python 2 and 3. I only tested
> > > > > > > it with python2 after changing this and it seemed to work fine so the
> > > > > > > compatibility side should be fine as-is.
> > > > > > > 
> > > > > > Sure, I can change the python executable, that makes sense.
> > > > > > 
> > > > > > > On the whole, AFAICT the patch series does what it promises, and works for
> > > > > > > both static and shared linkage. Using JSON formatted strings in an ELF
> > > > > > > section is a sound working technical solution for the storage of the data.
> > > > > > > But the difference between the two cases makes me wonder about this all...
> > > > > > You mean the difference between checking static binaries and dynamic binaries?
> > > > > > yes, there is some functional difference there
> > > > > > 
> > > > > > > 
> > > > > > > For static library build, you'd query the application executable, eg
> > > > > > Correct.
> > > > > > 
> > > > > > > testpmd, to get the data out. For a shared library build, that method gives
> > > > > > > absolutely nothing because the data is scattered around in individual
> > > > > > > libraries which might be just about wherever, and you need to somehow
> > > > > > Correct, I figured that users would be smart enough to realize that with
> > > > > > dynamically linked executables, they would need to look at DSO's, but I agree,
> > > > > > its a glaring diffrence.
> > > > > 
> > > > > Being able to look at DSOs is good, but expecting the user to figure out
> > > > > which DSOs might be loaded and not and where to look is going to be well
> > > > > above many users. At very least it's not what I would call user-friendly.
> > > > > 
> > > > I disagree, there is no linkage between an application and the dso's it opens
> > > > via dlopen that is exportable.  The only way to handle that is to have a
> > > > standard search path for the pmd_hw_info python script.  Thats just like modinfo
> > > > works (i.e. "modinfo bnx2" finds the bnx2 module for the running kernel).  We
> > > > can of course do something simmilar, but we have no existing implicit path
> > > > information to draw from to do that (because you can have multiple dpdk installs
> > > > side by side).  The only way around that is to explicitly call out the path on
> > > > the command line.
> > > 
> > > There's no telling what libraries user might load at runtime with -D, that
> > > is true for both static and shared libraries.
> > > 
> > I agree.
> > 
> > > When CONFIG_RTE_EAL_PMD_PATH is set, as it is likely to be on distro builds,
> > > you *know* that everything in that path will be loaded on runtime regardless
> > > of what commandline options there might be so the situation is actually on
> > > par with static builds. Of course you still dont know about ones added with
> > > -D but that's a limitation of any solution that works without actually
> > > running the app.
> > > 
> > Its not on ours, as the pmd libraries get placed in the same directory as every
> > other dpdk library, and no one wants to try (and fail to load
> > rte_sched/rte_acl/etc twice, or deal with the fallout of trying to do so, or
> > adjust the packaging so that pmds are placed in their own subdirectory, or
> > handle the need for multiuser support.
> 
> Err. I suggest you actually look at the package.
> 
Oh, you're right, you did make a change to set it.  I never saw a bugzilla for
that.  Sorry.

> > 
> > Using CONFIG_RTE_EAL_PMD_PATH also doesn't account for directory changes.  This
> > use case:
> > 1) run pmdinfo <app>
> > 2) remove DSOs from RTE_EAL_PMD_PATH
> > 3) execute <app>
> > 
> > leads to erroneous results, as hardware support that was reported in (1) is no
> > longer available at (3)
> 
> Yes, and in place of 2) you could also add DSOs there if you found
> something missing. Just like updating a statically linked <app> at 2) could
> change it. RTE_EAL_PMD_PATH is not expected to point to a /tmp-like
> location where stuff randomly appears and disappears.
> 
So... for this hypothetical admin that we're speaking of, you expect them to be
smart enough to understand that adding additional hardware support requires them
to build an extra DSO and place it (or a symlink to it) in a distro-determined
directory, but then not have the wherewithal to understand that querying
hardware support requires querying that specific binary, rather than
assuming the tool will automagically glean that path from the core
set of libraries?

> > 
> > It also completely misses any libraries that we load via the -d option on the
> > command line, which won't be included in RTE_EAL_PMD_PATH, so following that
> > path is a half measure at best, and I think that leads to erroneous results.
> 
> This same problem with -d exists for statically linked apps, as you actually
> agreed earlier in your email. So that's hardly an argument here.
> 
Yes, I did agree, but you're twisting my argument.  I'm not arguing that I have
some other solution for this problem.  I'm saying that, as a matter of
consistency, for libraries that offer hardware support and are loaded at run
time (which applies equally to those found in RTE_EAL_PMD_PATH and those loaded
via -d), we should not automatically report on those libraries, because we
don't know until run time if they will be available.  For both of these cases we
should only report on hardware support if the user queries those binary DSO's
directly.  I don't care if the application automatically loads from that
directory or not; until run time, we don't know what the contents of that
directory will be, and so we aren't guaranteed that the output of pmdinfo, if it
automatically searches that path, will be accurate.

> 
> 
> > > > 
> > > > > > > discover the location + correct library files to be able to query that. For
> > > > > > > the shared case, perhaps the script could be taught to walk files in
> > > > > > > CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
> > > > > > My initial thought would be to run ldd on the executable, and use a heuristic to
> > > > > > determine relevant pmd DSO's, and then feed each of those through the python
> > > > > > script.  I didn't want to go to that trouble unless there was consensus on it
> > > > > > though.
> > > > > 
> > > > > Problem is, ldd doesn't know about them either because the pmds are not
> > > > > linked to the executables at all anymore. They could be force-linked of
> > > > > course, but that means giving up the flexibility of plugins, which IMO is a
> > > > > no-go. Except maybe as an option, but then that would be a third case to
> > > > > support.
> > > > > 
> > > > That's not true at all, or at least it's a perfectly valid way to link the DSO's
> > > > in at link time via -lrte_pmd_<driver>.  It's really just the dlopen case we need
> > > > to worry about.  I would argue that, if they're not explicitly linked in like
> > > > that, then it's correct to indicate that an application supports no hardware,
> > > > because it actually doesn't; it only supports the pmds that it chooses to list
> > > > on the command line.  And if a user is savvy enough to specify a pmd on the
> > > > application command line, then they are perfectly capable of specifying the same
> > > > path to the hw_info script.
> > > 
> > > Yes you can force-link apps to every driver on existence, but it requires
> > > not just linking but using --whole-archive.
> > For the static case, yes, and that's what DPDK does, and likely will in
> > perpetuity, unless there is a major architectural change in the project (see
> > commit 20afd76a504155e947c770783ef5023e87136ad8).
> > 
> > > The apps in DPDK itself dont in
> > > shared link setup (take a look at testpmd) and I think its for a damn good
> > > reason - the drivers are plugins and that's how plugins are expected to
> > > work: they are not linked to, they reside in a specific path which is
> > 
> > I know, I'm the one that made that change when we introduced the
> > PMD_REGISTER_DRIVER macro :).  That doesn't mean it's not a valid case when
> > building apps, and one that we can take advantage of opportunistically.  There
> > are three cases we have to handle:
> > 
> > 1) Static linking - This is taken care of
> > 2) Dynamic linking via DT_NEEDED entries - this is taken care of
> > 3) Dynamic linking via dlopen - This is what we're discussing here
> > 
> > > scanned at runtime and plugins loaded to provide extra functionality.
> > It's the runtime part that makes this non-functional.  Just because you scan
> > all the DSO's in the RTE_EAL_PMD_PATH, doesn't mean they will be there when the app
> > is run, nor does it mean you will get a comprehensive list of hardware support,
> > because it doesn't include additional paths/DSO's added via -d.  I would much
> > rather have users understand that an app has _no_ hardware support if it uses
> > DSO's, because the hardware support is included with the DSO's themselves, not the
> > application (saving for the DT_NEEDED case above, where application execution is
> > predicated on the availability of those shared objects)
> 
> I'm not going to repeat all the earlier arguments from above, but -d is
> different because its specified by the user at runtime.
> 
So are the contents of the directory pointed to by RTE_EAL_PMD_PATH.

> RTE_EAL_PMD_PATH is built into the EAL library and you can't disable it at
> runtime. So an app linked to EAL with RTE_EAL_PMD_PATH configured is
> guaranteed to load everything from that path, regardless of what the user
> specifies at the runtime. I agree it is somewhat different from the static
> case because it's, well, dynamic, by design. Note that I fully agree there is
> value in being able to query *just* the binary and no magic lookups, because
> for some uses you want just that so it'd need to be possible to disable such
> lookup in the tool.
> 
> Anyway, unless something really new turns up in this discussion I'm going to
> shut up now since I would hope I've made my opinion and point clear by now.
> 
Yes, you've made yourself very clear, and I hope I've done the same.  We're just
not going to agree on this.

> > > 
> > > > > 
> > > > > > 
> > > > > > > when querying the executable as with static builds. If identical operation
> > > > > > > between static and shared versions is a requirement (without running the app
> > > > > > > in question) then query through the executable itself is practically the
> > > > > > > only option. Unless some kind of (auto-generated) external config file
> > > > > > > system ala kernel depmod / modules.dep etc is brought into the picture.
> > > > > > Yeah, I'm really trying to avoid that, as I think it's really not a typical part
> > > > > > of how user space libraries are interacted with.
> > > > > > 
> > > > > > > 
> > > > > > > For shared library configurations, having the data in the individual pmds is
> > > > > > > valuable as one could for example have rpm autogenerate provides from the
> > > > > > > data to ease/automate installation (in case of split packaging and/or 3rd
> > > > > > > party drivers). And no doubt other interesting possibilities. With static
> > > > > > > builds that kind of thing is not possible.
> > > > > > Right.
> > > > > > 
> > > > > > Note, this also leaves out PMD's that are loaded dynamically (i.e. via dlopen).
> > > > > > For those situations I don't think we have any way of 'knowing' that the
> > > > > > application intends to use them.
> > > > > 
> > > > > Hence my comment about CONFIG_RTE_EAL_PMD_PATH above, it at least provides a
> > > > > reasonable heuristic of what would be loaded by the app when run. But
> > > > > ultimately the only way to know what hardware is supported at a given time
> > > > > is to run an app which calls rte_eal_init() to load all the drivers that are
> > > > > present and work from there, because besides CONFIG_RTE_EAL_PMD_PATH this
> > > > > can be affected by runtime commandline switches and applies to both shared
> > > > > and static builds.
> > > > > 
> > > > I'm not sure I agree with that.  It's clearly tempting to use, but it's not
> > > > at all guaranteed to be accurate (the default is just set to "", and there is no
> > > > promise anyone will set it properly).
> > > 
> > > The promise is that shared builds are barely functional unless it's set
> > > correctly, because zero drivers are linked to testpmd in shared config. So
> > > you're kinda likely to notice if it's not set.
> > > 
> > You're twisting the meaning of 'barely functional' here.  I agree that shared
> > builds are barely functional, because they have no self-contained hardware
> > support, and as such, running pmdinfo.py on such an application should report
> > exactly that.
> > 
> > That said, in order to run, any DPDK application built to use shared libraries
> > has to use one of two methods to obtain hardware support:
> > 
> > A) direct shared linking (the DT_NEEDED case) - This case is handled, and we
> > report hardware support when found, as the application won't run unless those
> > libraries are resolved
> > 
> > B) Dynamic loading via dlopen - This case shouldn't be handled, because the
> > application in reality doesn't support any hardware.  Hardware support is
> > garnered at run time when the RTE_EAL_PMD_PATH (and any other paths added via the -d
> > option) are scanned.  In this case, pmdinfo shouldn't report any hardware
> > support, it should only do so if the pmd DSO is queried directly.
> > 
> > > It defaults to empty because there was no standard installation
> > > available at that time. Setting a reasonable default is tricky still because
> > > it needs to be set before build whereas install path is set at install time.
> > > 
> > Exactly, which is why distros don't use it.  It also neglects the multiuser case
> > (in which different users may want to load different hardware support).
> 
> Ehh? It exists *primarily* for distro needs. I suggest you take a look how
> it all works in current Fedora and RHEL packages. The packaging is
> monolithic at the moment but thanks to the plugin autoloading, it would be
> possible to split drivers into different subpackages to eg provide minimal
> driver package for use in virtual guests where all space counts etc, and to
> allow 3rd party drivers to be dropped in and so on.
> 
And if you run pmdinfo, then remove a package that provides a 3rd party driver
which it previously reported support for?

The bottom line is, the application you scan with pmdinfo doesn't actually
support any hardware (save for what's statically linked or dynamically linked via
DT_NEEDED entries).  It has no support for any hardware provided by the plugin
interface until run time when those directories are queried and loaded.  As such
the only sane consistent thing to do is not report on that hardware support.

> > > > And it also requires that the binary will
> > > > be tied to a specific release.  I really think that, given the fact that
> > > > distributions generally try to package dpdk in such a way that multiple dpdk
> > > > versions might be available, the right solution is to just require a full path
> > > > specification if you want to get hw info for a DSO that is dynamically loaded
> > > > via dlopen from the command line.  Otherwise you're going to fall into this trap
> > > > where you might be looking implicitly at an older version of the PMD while your
> > > > application may use a newer version.
> > > 
> > > If there are multiple dpdk versions available then they just need to have
> > > separate PMD paths, but that's not a problem.
> > > 
> > > > > > > 
> > > > > > > Calling up on the list of requirements from
> > > > > > > http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
> > > > > > > technical requirements but perhaps we should stop for a moment to think
> > > > > > > about the use-cases first?
> > > > > > 
> > > > > > To enumerate the list:
> > > > > > 
> > > > > > - query all drivers in static binary or shared library (works)
> > > > > > - stripping resiliency (works)
> > > > > > - human friendly (works)
> > > > > > - script friendly (works)
> > > > > > - show driver name (works)
> > > > > > - list supported device id / name (works)
> > > > > > - list driver options (not yet, but possible)
> > > > > > - show driver version if available (nope, but possible)
> > > > > > - show dpdk version (nope, but possible)
> > > > > > - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
> > > > > > - room for extra information? (works)
> > > > > > 
> > > > > > Of the items that are missing, I've already got a V2 started that can do driver
> > > > > > options, and is easier to expand.  Adding in the DPDK and PMD version should
> > > > > > be easy (though I think they can be left out, as there's currently no globally
> > > > > > defined DPDK release version, it's all just implicit, and driver versions aren't
> > > > > > really there either).  I'm also hesitant to include kernel dependencies without
> > > > > > defining exactly what they mean (just module dependencies, or feature
> > > > > > enablement, or something else?).  Once we define it though, adding it can be
> > > > > > easy.
> > > > > 
> > > > > Yup. I just think the shared/static difference needs to be sorted out
> > > > > somehow; e.g. requiring the user to know about DSOs is not human-friendly at all.
> > > > > That's why I called for the higher level use-cases in my previous email.
> > > > > 
> > > > 
> > > > I disagree with that.  While it's reasonable to give users the convenience of
> > > > scanning the DT_NEEDED entries of a binary and scanning those DSO's.  If a user
> > > 
> > > Scanning DT_NEEDED is of course the sane and right thing to do, it's just not
> > > sufficient.
> > > 
> > But it's the only sane thing we can do implicitly in the shared case, because we
> > know those drivers have to be resolved for the app to run.  In the
> > RTE_EAL_PMD_PATH or -d cases, we don't know until runtime which drivers that will
> > include, and so reporting on hardware support prior to run time via scanning of
> > the application is erroneous.  The sane thing to do is scan the pmd DSO, which
> > is where the hardware support resides, and make it clear that, for an
> > application to get that hardware support, they need to either link dynamically
> > (via a DT_NEEDED entry), or specify it on the command line, or make sure its in
> > RTE_EAL_PMD_PATH (if the distribution set it, which so far, no one does).
> > 
> > > > has to specify the PMD to load in an application (either on the command line or
> > > > via a configuration file), then its reasonable assume that they (a) know where
> > > 
> > > But when the PMD path is set (as it should be on a distro build), this is
> > > all automatic with zero action or extra config required from the user.
> > > 
> > It's not; RHEL doesn't do it, Fedora doesn't do it, Ubuntu doesn't do it.  Even
> 
> Again, check your facts please. I dont know about Ubuntu but Fedora and RHEL
> do set it to make the damn thing actually work out of the box without
> requiring the user to figure out magic -d arguments.
> 
Yes, I stipulated to your quiet change above, though I take issue with your
referring to a command line argument as being 'magic' while being perfectly fine
with plugin loading happening automatically due to some internal library
configuration that is invisible to the end user.

> > if it is, it doesn't imply an application will get that support, as the
> > directory contents may change between scanning and application run time.
> 
> 
> 
> > > > to find that pmd and (b) are savy enough to pass that same path to a hardware
> > > > info tool.  That's the exact same way that modinfo works (save for the fact that
> > > > modinfo can implicitly check the running kernel version to find the appropriate
> > > > path for a modular driver).
> > > > 
> > > > The only other thing that seems reasonable to me would be to scan
> > > > LD_LIBRARY_PATH.  I would assume that, if an application is linked dynamically,
> > > > the individual DSO's (librte_sched.so, etc), need to be in LD_LIBRARY_PATH.  If
> > > > that's the case, then we can assume that the appropriate PMD DSO's are there too,
> > > > and we can search there.  We can also check the standard /usr/lib and /lib paths
> > > > with that.  I think that would make fairly good sense.
> > > 
> > > You really don't want go crashing through the potentially thousands of
> > > libraries in the standard library path going "is it a pmd, no, is it a pmd,
> > > no..."
> > What?  No.  You misunderstand.  In the above, all I'm saying is that if you
> > specify an application, you can scan LD_LIBRARY_PATH for libraries in the
> > DT_NEEDED list. I don't mean to say that we should check _every_ library in
> > LD_LIBRARY_PATH, that would be crazy.  Of course, the same problem exists with
> > RTE_EAL_PMD_PATH.  In the current installation, all pmd libraries are co-located
> > with the rest of the dpdk library set, so RTE_EAL_PMD_PATH would have to be set
> > to whatever that installation path was.  And once that's done, we would have to
> > go crashing through every DPDK library to see if it was a pmd.  Thats just as
> > insane.
> 
> Yes I misunderstood what you meant by looking through LD_LIBRARY_PATH, and
> you misunderstood what I meant by it. I guess we can agree on having
> misunderstood each other :) RTE_EAL_PMD_PATH needs to point to a directory
> where nothing but PMDs exist. That is the standard practice with plugins on
> all userland software.
> 
I suppose we did misunderstand each other.

Look, I think we're simply not going to agree on this issue at all.  What about
this in the way of compromise.  I simply am not comfortable with automatically
trying to guess what hardware support will exist in an application based on the
transient contents of a plugin directory, because of all the reasons we've
already gone over, but I do understand the desire to get information about what
_might_ be automatically loaded for an application.  what if we added a 'plugin
mode' to pmdinfo. In this mode you would specify a dpdk installation directory
and an appropriate mode option.  When specified pmdinfo would scan librte_eal in
the specified directory, looking for an exported json string that informs us of
the configured plugin directory.  If found, we iterate through all the libraries
there displaying hw support.  That allows you to query the plugin directory for
available hardware support, while not implying that the application is
guaranteed to get it (because you didn't specifically state on the command line
that you wanted to know about the application's hardware support).
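
As a rough illustration, the marker-scan step that proposal relies on could look
something like the sketch below. The marker name, the JSON key, and the paths are
invented for illustration only; the real export string is whatever the patch
series defines, and this is not the actual pmdinfo implementation.

```python
import json

# Hypothetical marker name, for illustration only.
MARKER = b"DPDK_PLUGIN_INFO="

def find_plugin_paths(blob):
    """Scan a raw binary blob (e.g. the bytes of a librte_eal DSO) for MARKER
    and parse the NUL-terminated JSON payload following each occurrence."""
    paths = []
    start = 0
    while True:
        idx = blob.find(MARKER, start)
        if idx < 0:
            return paths
        end = blob.find(b"\0", idx)
        if end < 0:
            end = len(blob)
        payload = blob[idx + len(MARKER):end].decode("utf-8", "replace")
        info = json.loads(payload)
        paths.append(info["plugin_path"])  # hypothetical key name
        start = end

# Synthetic stand-in for a real DSO's contents:
fake_eal = (b"\x7fELF\x02\x01\x01...code...\0"
            + MARKER + b'{"plugin_path": "/usr/lib64/dpdk-pmds"}\0'
            + b"...more data...")
print(find_plugin_paths(fake_eal))
# -> ['/usr/lib64/dpdk-pmds']
```

Once the plugin directory is recovered this way, the tool would simply iterate
over the DSOs found there and report on each one individually.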

Neil

> 	- Panu -
> 
> > 
> > Neil
> > 
> 
>
Panu Matilainen May 23, 2016, 11:56 a.m. UTC | #13
On 05/20/2016 05:06 PM, Neil Horman wrote:
[...]
> Look, I think we're simply not going to agree on this issue at all.  What about
> this in the way of compromise.  I simply am not comfortable with automatically
> trying to guess what hardware support will exist in an application based on the
> transient contents of a plugin directory, because of all the reasons we've
> already gone over, but I do understand the desire to get information about what
> _might_ be automatically loaded for an application.  what if we added a 'plugin
> mode' to pmdinfo. In this mode you would specify a dpdk installation directory
> and an appropriate mode option.  When specified pmdinfo would scan librte_eal in
> the specified directory, looking for an exported json string that informs us of
> the configured plugin directory.  If found, we iterate through all the libraries
> there displaying hw support.  That allows you to query the plugin directory for
> available hardware support, while not implying that the application is
> guaranteed to get it (because you didn't specifically state on the command line
> that you wanted to know about the application's hardware support).

That brings it all to one tiny step away from what I've been asking: 
have the plugin mode automatically locate librte_eal from an executable. 
So I'm not quite sure where the compromise is supposed to be here :)

I do appreciate wanting to differentiate between "physically" linked-in 
and runtime-loaded hardware support, they obviously *are* different from 
a technical POV. But it's also a difference an average user might not 
even know about or understand, let alone care about; they just want to 
know "will it work?"

	- Panu -
Neil Horman May 23, 2016, 1:55 p.m. UTC | #14
On Mon, May 23, 2016 at 02:56:18PM +0300, Panu Matilainen wrote:
> On 05/20/2016 05:06 PM, Neil Horman wrote:
> [...]
> > Look, I think we're simply not going to agree on this issue at all.  What about
> > this in the way of compromise.  I simply am not comfortable with automatically
> > trying to guess what hardware support will exist in an application based on the
> > transient contents of a plugin directory, because of all the reasons we've
> > already gone over, but I do understand the desire to get information about what
> > _might_ be automatically loaded for an application.  what if we added a 'plugin
> > mode' to pmdinfo. In this mode you would specify a dpdk installation directory
> > and an appropriate mode option.  When specified pmdinfo would scan librte_eal in
> > the specified directory, looking for an exported json string that informs us of
> > the configured plugin directory.  If found, we iterate through all the libraries
> > there displaying hw support.  That allows you to query the plugin directory for
> > available hardware support, while not implying that the application is
> > guaranteed to get it (because you didn't specifically state on the command line
> > that you wanted to know about the application's hardware support).
> 
> That brings it all to one tiny step away from what I've been asking: have
> the plugin mode automatically locate librte_eal from an executable. So I'm
> not quite sure where the compromise is supposed to be here :)
The compromise is that I'm not willing to quietly assume that a given
application linked to the dpdk library in /usr/lib64/dpdk-<version>, will get
hardware support for the cxgb4, mlx5 and ixgbe pmds, because those DSO's are in
the exported RTE_EAL_PMD_PATH.  With this method, you at least have to tell the
pmdinfo application that I wish to scan that path for pmds and report on
hardware support for whatever is found there.  That's a different operation from
getting a report on what hardware an application supports, i.e. it's the
difference between asking the questions:

"What hardware does the application /usr/bin/foo support"
and
"What hardware support is available via the plugin DSO's pointed to by the dpdk
version in /usr/lib64/dpdk-2.2"

I feel it's important for users to understand that autoloading doesn't
guarantee support for the hardware that is autoloaded to any application.  My
compromise is to provide what you're asking for, but doing so in a way that
attempts to make that differentiation clear.

> 
> I do appreciate wanting to differentiate between "physically" linked-in and
> runtime-loaded hardware support, they obviously *are* different from a
> technical POV. But its also a difference an average user might not even know
> about or understand, let alone care about, they just want to know "will it
> work?"
> 

Which average user are we talking about here?  Keep in mind the DPDK is
anything but mainstream.  Its a toolkit to implement high speed network
communications for niche or custom purpose applications.  The users of tools
like this are going to be people who understand the nuance of how
applications are built and want to tune them to work at their most efficient
point, not some person just trying to get firefox to connect to digg.  I think
it's reasonable to assume that people who are running the tool have sufficient
knowledge to understand that DSO's and static binaries may embody hardware
support differently, and that things which are loaded at run time may not be
reported at scan time (by way of corresponding example, though it's not perfect,
people seem to understand that iscsid supports lots of different HBA's, but the
specific hardware support for those HBA's isn't codified in /usr/sbin/iscsid,
but rather in the modules under /lib/modules/<kversion>/...).  I understand that's
a lousy comparison, as the notion of static linkage of the kernel doesn't really
compare to static application linkage, but still, people can figure out what's
going on there pretty readily, and I think they can here too.

As to your other comment, yes, the end users just want to know "will it work".
But from a philosophical standpoint, I don't agree with your assertion that
we can answer this question by doing what you're asking me to do.  The answer to
"Will my application support the hardware on this system with the plugins found
in RTE_EAL_PMD_PATH?" is "I don't know".  That's because you don't know what the
contents of that directory will be when the application is run later.  The only
things we can tell at the time we run pmdinfo are:

1) What the hardware support of a static binary is for its linked in libraries

2) What the hardware support of a dynamic binary is via its DT_NEEDED entries

3) What the hardware support of a specific PMD DSO is

I am fundamentally opposed to trying to guess what hardware support will be
loaded dynamically via dlopen methods when an application is run at a later
time, and firmly believe that it is more consistent to simply not report that
information in both the static and dynamic case, and educate the user about how
to determine hardware support for dynamically loaded PMD's (perhaps a man page
would be worthwhile here)
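
For case 2) in the list above, the DT_NEEDED scan amounts to parsing the dynamic
section of the ELF binary. A rough sketch of that parsing stage follows, run
against canned `readelf -d` style output rather than a real binary; the
librte_pmd_* name heuristic is an assumption for illustration, not something the
tool is guaranteed to use.

```python
import re

def parse_dt_needed(readelf_dyn_output):
    """Pull shared-library names out of `readelf -d` style lines such as:
       0x0000000000000001 (NEEDED)  Shared library: [librte_eal.so.2]"""
    return re.findall(r"\(NEEDED\)[^\[]*\[([^\]]+)\]", readelf_dyn_output)

def pmd_candidates(needed):
    # Crude illustrative heuristic: DPDK PMDs are conventionally named librte_pmd_*.
    return [lib for lib in needed if "rte_pmd" in lib]

sample = """\
 0x0000000000000001 (NEEDED)             Shared library: [librte_pmd_ixgbe.so.1]
 0x0000000000000001 (NEEDED)             Shared library: [librte_eal.so.2]
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.6]
"""
needed = parse_dt_needed(sample)
print(pmd_candidates(needed))
# -> ['librte_pmd_ixgbe.so.1']
```

Each candidate found this way could then be fed through the per-DSO query, which
is exactly the "hardware support via DT_NEEDED" case that both sides agree on.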

Neil

> 	- Panu -
> 
>
Panu Matilainen May 24, 2016, 6:15 a.m. UTC | #15
On 05/23/2016 04:55 PM, Neil Horman wrote:
> On Mon, May 23, 2016 at 02:56:18PM +0300, Panu Matilainen wrote:
>> On 05/20/2016 05:06 PM, Neil Horman wrote:
>> [...]
>>> Look, I think we're simply not going to agree on this issue at all.  What about
>>> this in the way of compromise.  I simply am not comfortable with automatically
>>> trying to guess what hardware support will exist in an application based on the
>>> transient contents of a plugin directory, because of all the reasons we've
>>> already gone over, but I do understand the desire to get information about what
>>> _might_ be automatically loaded for an application.  what if we added a 'plugin
>>> mode' to pmdinfo. In this mode you would specify a dpdk installation directory
>>> and an appropriate mode option.  When specified pmdinfo would scan librte_eal in
>>> the specified directory, looking for an exported json string that informs us of
>>> the configured plugin directory.  If found, we iterate through all the libraries
>>> there displaying hw support.  That allows you to query the plugin directory for
>>> available hardware support, while not implying that the application is
>>> guaranteed to get it (because you didn't specifically state on the command line
>>> that you wanted to know about the application's hardware support).
>>
>> That brings it all to one tiny step away from what I've been asking: have
>> the plugin mode automatically locate librte_eal from an executable. So I'm
>> not quite sure where the compromise is supposed to be here :)
> The compromise is that I'm not willing to quietly assume that a given
> application linked to the dpdk library in /usr/lib64/dpdk-<version>, will get
> hardware support for the cxgb4, mlx5 and ixgbe pmds, because those DSO's are in
> the exported RTE_EAL_PMD_PATH.

Why not? I dont get it.

> With this method, you at least have to tell the
> pmdinfo application that I wish to scan that path for pmds and report on
> hardware support for whatever is found there.  Thats a different operation from
> getting a report on what hardware an application supports.  i.e. its the
> difference between asking the questions:
>
> "What hardware does the application /usr/bin/foo support"
> and
> "What hardware support is available via the plugin DSO's pointed to by the dpdk
> version in /usr/lib64/dpdk-2.2"

Well, for the application to be able to load any PMDs, it will have to 
be linked to some version of librte_eal, which will have to be somewhere 
in the library search path (or use rpath).


> I feel it's important for users to understand that autoloading doesn't
> guarantee support for the hardware that is autoloaded to any application.  My
> compromise is to provide what you're asking for, but doing so in a way that
> attempts to make that differentiation clear.

I would think requiring a specific option to enable the plugin scan 
should be quite enough to make that point clear.

>>
>> I do appreciate wanting to differentiate between "physically" linked-in and
>> runtime-loaded hardware support, they obviously *are* different from a
>> technical POV. But its also a difference an average user might not even know
>> about or understand, let alone care about, they just want to know "will it
>> work?"
>>
>
> Which average user are we talking about here?  Keep in mind the DPDK is
> anything but mainstream.  Its a toolkit to implement high speed network
> communications for niche or custom purpose applications.  The users of tools
> like this are going to be people who understand the nuance of how
> applications are built and want to tune them to work at their most efficient
> point, not some person just trying to get firefox to connect to digg.  I think
> it's reasonable to assume that people who are running the tool have sufficient
> knowledge to understand that DSO's and static binaries may embody hardware
> support differently, and that things which are loaded at run time may not be
> reported at scan time (by way of corresponding example, though it's not perfect,
> people seem to understand that iscsid supports lots of different HBA's, but the
> specific hardware support for those HBA's isn't codified in /usr/sbin/iscsid,
> but rather in the modules under /lib/modules/<kversion>/...).  I understand that's
> a lousy comparison, as the notion of static linkage of the kernel doesn't really
> compare to static application linkage, but still, people can figure out what's
> going on there pretty readily, and I think they can here too.
>
> As to your other comment, yes, the end users just want to know "will it work".
> But from a philosophical standpoint, I don't agree with your assertion that
> we can answer this question by doing what you're asking me to do.  The answer to
> "Will my application support the hardware on this system with the plugins found
> in RTE_EAL_PMD_PATH?" is "I don't know".  That's because you don't know what the
> contents of that directory will be when the application is run later.

Come on. RTE_EAL_PMD_PATH is expected to point to a system directory 
owned by root or such, stuff just doesn't randomly come and go. 
Everything is subject to root changing system configuration between now 
and some later time.

> The only things we can tell at the time we run pmdinfo are:
>
> 1) What the hardware support of a static binary is for its linked in libraries
>
> 2) What the hardware support of a dynamic binary is via its DT_NEEDED entries
>
> 3) What the hardware support of a specific PMD DSO is
>
> I am fundamentally opposed to trying to guess what hardware support will be
> loaded dynamically via dlopen methods when an application is run at a later
> time, and firmly believe that it is more consistent to simply not report that
> information in both the static and dynamic case, and educate the user about how
> to determine hardware support for dynamically loaded PMD's (perhaps a man page
> would be worthwhile here)

Well, with the plugin mode and pmd path export in your v3 patches 
(thanks for that!) all the necessary pieces are there so fishing out the 
information is a rather trivial one-liner of a script now:

---
#!/bin/sh

/usr/share/dpdk/tools/pmdinfo.py -p $(ldd "$1" | awk '/librte_eal/{print $3}')
---
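
For what it's worth, the ldd/awk stage of that one-liner behaves like this on
typical ldd output; the library paths in the sample below are fabricated for
illustration.

```shell
# Extract the resolved librte_eal path from ldd-style output, as the
# one-liner above does.
parse_eal_path() {
    awk '/librte_eal/{print $3}'
}

# Fabricated sample of what `ldd <app>` might print:
sample='linux-vdso.so.1 (0x00007ffd5a9f2000)
librte_eal.so.2 => /usr/lib64/dpdk/librte_eal.so.2 (0x00007f93a1c00000)
libc.so.6 => /lib64/libc.so.6 (0x00007f93a1800000)'

printf '%s\n' "$sample" | parse_eal_path
# prints: /usr/lib64/dpdk/librte_eal.so.2
```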

I fail to understand how having a separate script to handle this is 
better than having pmdinfo do it directly when run in the plugin mode, 
but as you wish.

	- Panu -
Neil Horman May 24, 2016, 2:55 p.m. UTC | #16
On Tue, May 24, 2016 at 09:15:41AM +0300, Panu Matilainen wrote:
> On 05/23/2016 04:55 PM, Neil Horman wrote:
> > On Mon, May 23, 2016 at 02:56:18PM +0300, Panu Matilainen wrote:
> > > On 05/20/2016 05:06 PM, Neil Horman wrote:
> > > [...]
> > > > Look, I think we're simply not going to agree on this issue at all.  What about
> > > > this in the way of compromise.  I simply am not comfortable with automatically
> > > > trying to guess what hardware support will exist in an application based on the
> > > > transient contents of a plugin directory, because of all the reasons we've
> > > > already gone over, but I do understand the desire to get information about what
> > > > _might_ be automatically loaded for an application.  what if we added a 'plugin
> > > > mode' to pmdinfo. In this mode you would specify a dpdk installation directory
> > > > and an appropriate mode option.  When specified pmdinfo would scan librte_eal in
> > > > the specified directory, looking for an exported json string that informs us of
> > > > the configured plugin directory.  If found, we iterate through all the libraries
> > > > there displaying hw support.  That allows you to query the plugin directory for
> > > > available hardware support, while not implying that the application is
> > > > guaranteed to get it (because you didn't specifically state on the command line
> > > > that you wanted to know about the application's hardware support).
> > > 
> > > That brings it all to one tiny step away from what I've been asking: have
> > > the plugin mode automatically locate librte_eal from an executable. So I'm
> > > not quite sure where the compromise is supposed to be here :)
> > The compromise is that I'm not willing to quietly assume that a given
> > application linked to the dpdk library in /usr/lib64/dpdk-<version> will get
> > hardware support for the cxgb4, mlx5 and ixgbe pmds, because those DSOs are in
> > the exported RTE_EAL_PMD_PATH.
> 
> Why not? I dont get it.
> 
For all the reasons I've stated in the I-don't-even-know-how-many emails we've
sent back and forth in this thread.  Because, once again, just because a pmd
exists in the autoload path at the time you do the scan doesn't mean it will
be there at the time you run the application.  By creating this plugin mode, the
user has to recognize implicitly that there is something different about
autoloaded pmds, namely that they are not expressly bound to the application
itself.  By specifying a special option they can still figure out what hardware
support exists in the autoload path at that time, but they have to realize that
these pmds are 'different'.
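[Editorial note: to make the distinction concrete, here is a rough sketch of what a plugin-mode scan amounts to. Everything below is illustrative only: the helper name `scan_plugin_dir` and the naive byte search are my own invention (the actual tool parses ELF sections with pyelftools), and it deliberately reports only what is present in the directory at scan time.]

```python
import glob
import json
import os

# The primer string the patch embeds ahead of the json payload.
PRIMER = b'PMD_INFO_STRING= '


def scan_plugin_dir(pmd_path):
    """Report hw support for every DSO sitting in the autoload path.

    Note: this says what is there *right now*; it guarantees nothing
    about the directory's contents when an application later runs.
    """
    results = {}
    for lib in glob.glob(os.path.join(pmd_path, '*.so')):
        with open(lib, 'rb') as f:
            data = f.read()
        idx = data.find(PRIMER)
        if idx == -1:
            continue  # not a PMD, or built without the info string
        # The info string is NUL-terminated; json follows the primer.
        end = data.index(b'\x00', idx)
        results[os.path.basename(lib)] = json.loads(
            data[idx + len(PRIMER):end].decode())
    return results
```

Running this against a hypothetical RTE_EAL_PMD_PATH would list the PMDs available for autoload at that instant, which is exactly the question the plugin mode answers.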
 
> > With this method, you at least have to tell the
> > pmdinfo application that I wish to scan that path for pmds and report on
> > hardware support for whatever is found there.  Thats a different operation from
> > getting a report on what hardware an application supports.  i.e. its the
> > difference between asking the questions:
> > 
> > "What hardware does the application /usr/bin/foo support"
> > and
> > "What hardware support is available via the plugin DSO's pointed to by the dpdk
> > version in /usr/lib64/dpdk-2.2"
> 
> Well, for the application to be able to load any PMDs, it will have to be
> linked to some version of librte_eal, which will have to be somewhere in the
> library search path (or use rpath).
> 
Yes, but see my complaint above.  Implicitly tying the application to the PMD
autoload path is trivial; I'm not making that argument.  What I'm arguing is
that the autoload path is transient, and so telling users that application X
supports all the hardware supported by all the PMDs in the autoload path is
erroneous, because what's there the moment you scan isn't guaranteed to be there
the moment the application runs.  That's the compromise I've been trying to offer
you.  By creating a separate 'plugin' mode we offer the user the chance to ask
each question above independently, and thereby allow them to draw the conclusion
that tying hardware support to the autoload path is an extra step that they need
to manage (by linking to the right dpdk version, or by setting LD_LIBRARY_PATH
correctly, etc.).

> 
> > I feel it's important for users to understand that autoloading does not
> > guarantee support for the hardware that is autoloaded to any application.  My
> > compromise is to provide what you're asking for, but doing so in a way that
> > attempts to make that differentiation clear.
> 
> I would think requiring a specific option to enable the plugin scan should
> be quite enough to make that point clear.
> 
Yes, that's why I wrote the option. So... are we in agreement here?

> > > 
> > > I do appreciate wanting to differentiate between "physically" linked-in and
> > > runtime-loaded hardware support; they obviously *are* different from a
> > > technical POV. But it's also a difference an average user might not even know
> > > about or understand, let alone care about, they just want to know "will it
> > > work?"
> > > 
> > 
> > Which average user are we talking about here?  Keep in mind the DPDK is
> > anything but mainstream.  It's a toolkit to implement high speed network
> > communications for niche or custom purpose applications.  The users of tools
> > like this are going to be people who understand the nuance of how
> > applications are built and want to tune them to work at their most efficient
> > point, not some person just trying to get firefox to connect to digg.  I think
> > it's reasonable to assume that people who are running the tool have sufficient
> > knowledge to understand that DSOs and static binaries may embody hardware
> > support differently, and that things which are loaded at run time may not be
> > reported at scan time (by way of a corresponding example, though it's not perfect,
> > people seem to understand that iscsid supports lots of different HBAs, but the
> > specific hardware support for those HBAs isn't codified in /usr/sbin/iscsid,
> > but rather in the modules under /lib/modules/<kversion>/...).  I understand that's
> > a lousy comparison, as the notion of static linkage of the kernel doesn't really
> > compare to static application linkage, but still, people can figure out what's
> > going on there pretty readily, and I think they can here too.
> > 
> > As to your other comment, yes, the end users just want to know "will it work".
> > But from a philosophical standpoint, I don't agree with your assertion that
> > we can answer this question by doing what you're asking me to do.  The answer to
> > "Will my application support the hardware on this system with the plugins found
> > in RTE_EAL_PMD_PATH?" is "I don't know".  That's because you don't know what the
> > contents of that directory will be when the application is run later.
> 
> Come on. RTE_EAL_PMD_PATH is expected to point to a system directory owned
> by root or such, stuff just doesn't randomly come and go. Everything is
> subject to root changing system configuration between now and some later
> time.
> 
Sure it can: rpms, debs, etc. are added and removed regularly.  The method of
change doesn't matter; it's the result that counts.

> > The only things we can tell at the time we run pmdinfo are:
> > 
> > 1) What the hardware support of a static binary is for its linked in libraries
> > 
> > 2) What the hardware support of a dynamic binary is via its DT_NEEDED entries
> > 
> > 3) What the hardware support of a specific PMD DSO is
> > 
> > I am fundamentally opposed to trying to guess what hardware support will be
> > loaded dynamically via dlopen methods when an application is run at a later
> > time, and firmly believe that it is more consistent to simply not report that
> > information in both the static and dynamic case, and educate the user about how
> > to determine hardware support for dynamically loaded PMD's (perhaps a man page
> > would be worthwhile here)
> 
> Well, with the plugin mode and pmd path export in your v3 patches (thanks
> for that!) all the necessary pieces are there so fishing out the information
> is a rather trivial one-liner of a script now:
> 
> ---
> #!/bin/sh
> 
> /usr/share/dpdk/tools/pmdinfo.py -p $(ldd $1 |awk '/librte_eal/{print $3}')
> ---
> 
> I fail to understand how having a separate script to handle this is better
> than having pmdinfo do it directly when run in the plugin mode, but as you
> wish.
> 
You don't even need a script; you just need the rudimentary understanding
that DPDK is architected in such a way that hardware support may be embodied
in multiple binaries, and so depending on your load method (link time vs. run
time), you may need to check a different location.  It's really not that
hard.  You have to understand that that's the case anyway if you use -d on
the command line.

> 	- Panu -
> 
>

Patch
diff mbox

diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
new file mode 100755
index 0000000..0669aca
--- /dev/null
+++ b/tools/pmd_hw_support.py
@@ -0,0 +1,174 @@
+#!/usr/bin/python3
+#-------------------------------------------------------------------------------
+# scripts/pmd_hw_support.py
+#
+# Utility to dump PMD_INFO_STRING support from an object file
+#
+#-------------------------------------------------------------------------------
+import os, sys
+from optparse import OptionParser
+import string
+import json
+
+# For running from development directory. It should take precedence over the
+# installed pyelftools.
+sys.path.insert(0, '.')
+
+
+from elftools import __version__
+from elftools.common.exceptions import ELFError
+from elftools.common.py3compat import (
+        ifilter, byte2int, bytes2str, itervalues, str2bytes)
+from elftools.elf.elffile import ELFFile
+from elftools.elf.dynamic import DynamicSection, DynamicSegment
+from elftools.elf.enums import ENUM_D_TAG
+from elftools.elf.segments import InterpSegment
+from elftools.elf.sections import SymbolTableSection
+from elftools.elf.gnuversions import (
+    GNUVerSymSection, GNUVerDefSection,
+    GNUVerNeedSection,
+    )
+from elftools.elf.relocation import RelocationSection
+from elftools.elf.descriptions import (
+    describe_ei_class, describe_ei_data, describe_ei_version,
+    describe_ei_osabi, describe_e_type, describe_e_machine,
+    describe_e_version_numeric, describe_p_type, describe_p_flags,
+    describe_sh_type, describe_sh_flags,
+    describe_symbol_type, describe_symbol_bind, describe_symbol_visibility,
+    describe_symbol_shndx, describe_reloc_type, describe_dyn_tag,
+    describe_ver_flags,
+    )
+from elftools.elf.constants import E_FLAGS
+from elftools.dwarf.dwarfinfo import DWARFInfo
+from elftools.dwarf.descriptions import (
+    describe_reg_name, describe_attr_value, set_global_machine_arch,
+    describe_CFI_instructions, describe_CFI_register_rule,
+    describe_CFI_CFA_rule,
+    )
+from elftools.dwarf.constants import (
+    DW_LNS_copy, DW_LNS_set_file, DW_LNE_define_file)
+from elftools.dwarf.callframe import CIE, FDE
+
+raw_output = False
+
+class ReadElf(object):
+    """ display_* methods are used to emit output into the output stream
+    """
+    def __init__(self, file, output):
+        """ file:
+                stream object with the ELF file to read
+
+            output:
+                output stream to write to
+        """
+        self.elffile = ELFFile(file)
+        self.output = output
+
+        # Lazily initialized if a debug dump is requested
+        self._dwarfinfo = None
+
+        self._versioninfo = None
+
+    def _section_from_spec(self, spec):
+        """ Retrieve a section given a "spec" (either number or name).
+            Return None if no such section exists in the file.
+        """
+        try:
+            num = int(spec)
+            if num < self.elffile.num_sections():
+                return self.elffile.get_section(num)
+            else:
+                return None
+        except ValueError:
+            # Not a number. Must be a name then
+            return self.elffile.get_section_by_name(str2bytes(spec))
+
+    def parse_pmd_info_string(self, mystring):
+        global raw_output
+        i = mystring.index("=")
+        mystring = mystring[i+2:]
+        pmdinfo = json.loads(mystring)
+
+        if raw_output:
+            print(pmdinfo)
+            return
+
+        print("PMD NAME: " + pmdinfo["name"])
+        print("PMD TYPE: " + pmdinfo["type"])
+        if (pmdinfo["type"] == "PMD_PDEV"):
+            print("PMD HW SUPPORT:")
+            print("VENDOR\t DEVICE\t SUBVENDOR\t SUBDEVICE")
+            for i in pmdinfo["pci_ids"]:
+                print("0x%04x\t 0x%04x\t 0x%04x\t\t 0x%04x" % (i[0], i[1], i[2], i[3]))
+
+        print("")
+
+
+    def display_pmd_info_strings(self, section_spec):
+        """ Display a strings dump of a section. section_spec is either a
+            section number or a name.
+        """
+        section = self._section_from_spec(section_spec)
+        if section is None:
+            return
+
+
+        found = False
+        data = section.data()
+        dataptr = 0
+
+        while dataptr < len(data):
+            while ( dataptr < len(data) and
+                    not (32 <= byte2int(data[dataptr]) <= 127)):
+                dataptr += 1
+
+            if dataptr >= len(data):
+                break
+
+            endptr = dataptr
+            while endptr < len(data) and byte2int(data[endptr]) != 0:
+                endptr += 1
+
+            found = True
+            mystring = bytes2str(data[dataptr:endptr])
+            rc = mystring.find("PMD_INFO_STRING")
+            if (rc != -1):
+                self.parse_pmd_info_string(mystring)
+
+            dataptr = endptr
+
+
+def main(stream=None):
+    global raw_output
+
+    optparser = OptionParser(
+            usage='usage: %prog [-h|-r] <elf-file>',
+            description="Dump pmd hardware support info",
+            add_help_option=True,
+            prog='pmd_hw_support.py')
+    optparser.add_option('-r', '--raw',
+            action='store_true', dest='raw_output',
+            help='Dump raw json strings')
+
+    options, args = optparser.parse_args()
+
+    if options.raw_output:
+        raw_output = True
+
+    with open(args[0], 'rb') as file:
+        try:
+            readelf = ReadElf(file, stream or sys.stdout)
+
+            readelf.display_pmd_info_strings(".rodata")
+            sys.exit(0)
+
+        except ELFError as ex:
+            sys.stderr.write('ELF error: %s\n' % ex)
+            sys.exit(1)
+
+
+#-------------------------------------------------------------------------------
+if __name__ == '__main__':
+    main()
+
+
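[Editorial note: the core of `display_pmd_info_strings()` above walks the section data for printable, NUL-terminated runs and json-decodes any run carrying the primer. A standalone sketch of that same idea follows; it uses Python 3 bytes indexing (the patch supports Python 2 via pyelftools' `byte2int`), and the function name is invented for illustration.]

```python
import json


def extract_pmd_info(data):
    # Walk a bytes blob the way the patch walks .rodata: skip unprintable
    # bytes, collect each NUL-terminated printable run, and decode the
    # json payload of any run carrying the PMD_INFO_STRING primer.
    infos = []
    ptr = 0
    while ptr < len(data):
        while ptr < len(data) and not (32 <= data[ptr] <= 127):
            ptr += 1
        end = ptr
        while end < len(data) and data[end] != 0:
            end += 1
        run = data[ptr:end].decode('ascii', 'replace')
        if 'PMD_INFO_STRING' in run:
            # skip past "PMD_INFO_STRING= " (the '=' plus one space),
            # matching parse_pmd_info_string()'s index("=") + 2 slice
            infos.append(json.loads(run[run.index('=') + 2:]))
        ptr = end + 1
    return infos
```

Applied to the raw bytes of a .rodata section, this yields the same decoded dicts the tool pretty-prints.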