[v12,04/16] baseband/acc: introduce PMD for ACC200

Message ID 20221012175930.7560-5-nicolas.chautru@intel.com (mailing list archive)
State Accepted, archived
Delegated to: akhil goyal
Headers
Series bbdev ACC200 PMD |

Checks

Context Check Description
ci/checkpatch success coding style OK

Commit Message

Chautru, Nicolas Oct. 12, 2022, 5:59 p.m. UTC
  From: Nic Chautru <nicolas.chautru@intel.com>

Introduce stubs for the device driver for the ACC200
integrated vRAN accelerator on SPR-EEC.

Signed-off-by: Nic Chautru <nicolas.chautru@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 MAINTAINERS                            |   3 +
 doc/guides/bbdevs/acc200.rst           | 257 +++++++++++++++++++++++++
 doc/guides/bbdevs/features/acc200.ini  |  14 ++
 doc/guides/bbdevs/features/default.ini |   1 +
 doc/guides/bbdevs/index.rst            |   1 +
 doc/guides/rel_notes/release_22_11.rst |   6 +
 drivers/baseband/acc/acc200_pmd.h      |  32 +++
 drivers/baseband/acc/meson.build       |   2 +-
 drivers/baseband/acc/rte_acc200_pmd.c  | 143 ++++++++++++++
 9 files changed, 458 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/bbdevs/acc200.rst
 create mode 100644 doc/guides/bbdevs/features/acc200.ini
 create mode 100644 drivers/baseband/acc/acc200_pmd.h
 create mode 100644 drivers/baseband/acc/rte_acc200_pmd.c
  

Comments

Akhil Goyal Oct. 13, 2022, 9:11 a.m. UTC | #1
> From: Nic Chautru <nicolas.chautru@intel.com>
> 
> Introduced stubs for device driver for the ACC200
> integrated VRAN accelerator on SPR-EEC
> 
> Signed-off-by: Nic Chautru <nicolas.chautru@intel.com>
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  MAINTAINERS                            |   3 +
>  doc/guides/bbdevs/acc200.rst           | 257 +++++++++++++++++++++++++
>  doc/guides/bbdevs/features/acc200.ini  |  14 ++
>  doc/guides/bbdevs/features/default.ini |   1 +
>  doc/guides/bbdevs/index.rst            |   1 +
>  doc/guides/rel_notes/release_22_11.rst |   6 +
>  drivers/baseband/acc/acc200_pmd.h      |  32 +++
>  drivers/baseband/acc/meson.build       |   2 +-
>  drivers/baseband/acc/rte_acc200_pmd.c  | 143 ++++++++++++++
>  9 files changed, 458 insertions(+), 1 deletion(-)
>  create mode 100644 doc/guides/bbdevs/acc200.rst
>  create mode 100644 doc/guides/bbdevs/features/acc200.ini
>  create mode 100644 drivers/baseband/acc/acc200_pmd.h
>  create mode 100644 drivers/baseband/acc/rte_acc200_pmd.c
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 31597139c7..5c095b45d1 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1339,6 +1339,9 @@ F: drivers/baseband/acc/
>  F: doc/guides/bbdevs/acc100.rst
>  F: doc/guides/bbdevs/features/acc100.ini
>  F: doc/guides/bbdevs/features/acc101.ini
> +F: drivers/baseband/acc200/

Acc200 is not a folder anymore. I removed it.

> +F: doc/guides/bbdevs/acc200.rst
> +F: doc/guides/bbdevs/features/acc200.ini
  
Thomas Monjalon Oct. 30, 2022, 4:02 p.m. UTC | #2
12/10/2022 19:59, Nicolas Chautru:
> +Bind PF UIO driver(s)
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +Install the DPDK igb_uio driver, bind it with the PF PCI device ID and use
> +``lspci`` to confirm the PF device is under use by ``igb_uio`` DPDK UIO driver.

igb_uio is not recommended.
Please focus on VFIO first.

> +The igb_uio driver may be bound to the PF PCI device using one of two methods
> +for ACC200:
> +
> +
> +1. PCI functions (physical or virtual, depending on the use case) can be bound
> +to the UIO driver by repeating this command for every function.
> +
> +.. code-block:: console
> +
> +  cd <dpdk-top-level-directory>
> +  insmod ./build/kmod/igb_uio.ko
> +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> +  lspci -vd8086:57c0
> +
> +
> +2. Another way to bind PF with DPDK UIO driver is by using the ``dpdk-devbind.py`` tool
> +
> +.. code-block:: console
> +
> +  cd <dpdk-top-level-directory>
> +  ./usertools/dpdk-devbind.py -b igb_uio 0000:f7:00.0
> +
> +where the PCI device ID (example: 0000:f7:00.0) is obtained using lspci -vd8086:57c0

This binding is not specific to the driver.
It would be better to refer to the Linux guide
instead of duplicating it again and again.

> +In a similar way the PF may be bound with vfio-pci as any PCIe device.

You could mention igb_uio here.
Is there any advantage in using igb_uio?
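
For readers following the thread, the vfio-pci flow recommended here can be sketched as below. The device ID ``8086:57c0`` and the example address ``0000:f7:00.0`` are taken from the quoted doc; treat this as a sketch, not the patch's final text.

```shell
# Load the in-tree VFIO driver (shipped with upstream Linux, no extra build)
modprobe vfio-pci
# Bind the ACC200 PF to vfio-pci with the DPDK helper script
cd <dpdk-top-level-directory>
./usertools/dpdk-devbind.py -b vfio-pci 0000:f7:00.0
# Confirm the device is now listed under "drv=vfio-pci"
./usertools/dpdk-devbind.py --status
```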
  
Chautru, Nicolas Oct. 31, 2022, 3:43 p.m. UTC | #3
Hi Thomas, 

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Sunday, October 30, 2022 9:03 AM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>
> Cc: dev@dpdk.org; gakhil@marvell.com; maxime.coquelin@redhat.com;
> trix@redhat.com; Richardson, Bruce <bruce.richardson@intel.com>;
> hemant.agrawal@nxp.com; david.marchand@redhat.com;
> stephen@networkplumber.org; Vargas, Hernan <hernan.vargas@intel.com>
> Subject: Re: [PATCH v12 04/16] baseband/acc: introduce PMD for ACC200
> 
> 12/10/2022 19:59, Nicolas Chautru:
> > +Bind PF UIO driver(s)
> > +~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Install the DPDK igb_uio driver, bind it with the PF PCI device ID
> > +and use ``lspci`` to confirm the PF device is under use by ``igb_uio`` DPDK
> UIO driver.
> 
> igb_uio is not recommended.
> Please focus on VFIO first.
> 
> > +The igb_uio driver may be bound to the PF PCI device using one of two
> > +methods for ACC200:
> > +
> > +
> > +1. PCI functions (physical or virtual, depending on the use case) can
> > +be bound to the UIO driver by repeating this command for every function.
> > +
> > +.. code-block:: console
> > +
> > +  cd <dpdk-top-level-directory>
> > +  insmod ./build/kmod/igb_uio.ko
> > +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > +  lspci -vd8086:57c0
> > +
> > +
> > +2. Another way to bind PF with DPDK UIO driver is by using the
> > +``dpdk-devbind.py`` tool
> > +
> > +.. code-block:: console
> > +
> > +  cd <dpdk-top-level-directory>
> > +  ./usertools/dpdk-devbind.py -b igb_uio 0000:f7:00.0
> > +
> > +where the PCI device ID (example: 0000:f7:00.0) is obtained using
> > +lspci -vd8086:57c0
> 
> This binding is not specific to the driver.
> It would be better to refer to the Linux guide instead of duplicating it again
> and again.
> 
> > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> 
> You could mention igb_uio here.
> Is there any advantage in using igb_uio?
> 

igb_uio is arguably easier to use; new users or specific ecosystems tend to start with it. It is typically the entry point (no IOMMU, no FLR under the bonnet, no VFIO token...), hence it is good to have a bit of handholding with a couple of lines capturing how to easily run a few tests. I don't believe these few lines are too redundant, compared with the help they bring in sparing the user from second-guessing their steps.
More generally, there are a number of module/driver combinations that are supported, based on different deployments. We don't document these in too much detail, since they are not too ACC-specific, and there is more documentation on the pf_bb_config repo for using the PMD from the VF.

Basically, Thomas, let us know more explicitly what you are suggesting as a documentation update. Do you just want more emphasis on the vfio-pci flow (which is fair; some of it is documented on pf_bb_config, including the VFIO token passing, but we can reproduce it here as well), or something else?

Thanks!
Nic
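
As a sketch of the "vfio token passing" mentioned above (the pf_bb_config invocation and the EAL ``--vfio-vf-token`` option are assumptions based on their public docs; the UUID and config file name are placeholders):

```shell
# On the PF: configure the device with pf_bb_config, supplying a VFIO token
./pf_bb_config ACC200 -v 00112233-4455-6677-8899-aabbccddeeff -c acc200_config.cfg
# A DPDK application on the VF then passes the same token to EAL
./dpdk-test-bbdev --vfio-vf-token=00112233-4455-6677-8899-aabbccddeeff
```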
  
Thomas Monjalon Oct. 31, 2022, 3:53 p.m. UTC | #4
31/10/2022 16:43, Chautru, Nicolas:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 12/10/2022 19:59, Nicolas Chautru:
> > > +Bind PF UIO driver(s)
> > > +~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +Install the DPDK igb_uio driver, bind it with the PF PCI device ID
> > > +and use ``lspci`` to confirm the PF device is under use by ``igb_uio`` DPDK
> > UIO driver.
> > 
> > igb_uio is not recommended.
> > Please focus on VFIO first.
> > 
> > > +The igb_uio driver may be bound to the PF PCI device using one of two
> > > +methods for ACC200:
> > > +
> > > +
> > > +1. PCI functions (physical or virtual, depending on the use case) can
> > > +be bound to the UIO driver by repeating this command for every function.
> > > +
> > > +.. code-block:: console
> > > +
> > > +  cd <dpdk-top-level-directory>
> > > +  insmod ./build/kmod/igb_uio.ko
> > > +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > > +  lspci -vd8086:57c0
> > > +
> > > +
> > > +2. Another way to bind PF with DPDK UIO driver is by using the
> > > +``dpdk-devbind.py`` tool
> > > +
> > > +.. code-block:: console
> > > +
> > > +  cd <dpdk-top-level-directory>
> > > +  ./usertools/dpdk-devbind.py -b igb_uio 0000:f7:00.0
> > > +
> > > +where the PCI device ID (example: 0000:f7:00.0) is obtained using
> > > +lspci -vd8086:57c0
> > 
> > This binding is not specific to the driver.
> > It would be better to refer to the Linux guide instead of duplicating it again
> > and again.
> > 
> > > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> > 
> > You could mention igb_uio here.
> > Is there any advantage in using igb_uio?
> > 
> 
> Igb_uio is arguably easier to use to new user tend to start with it or specific ecosystem. This is typically the entry point (no iommu, no flr below the bonnet, no vfio token...) hence good to have a bit of handholding with a couple of lines capturing how to easily run a few tests. I don't believe this is too redundant to have these few lines compared to the help in bring to the user not having to double guess their steps. 
> More generally there are a number of module drivers combinations that are supported based on different deployments. We don't document in too much details for the details since that is not too ACC specific and there is more documentation no pf_bb_config repo for using the PMD from the VF.. 
> 
> Basically Thomas let us know more explicitly what you are suggesting as documentation update. You just want more emphasis on vfio-pci flow (which is fair, some of it documented on pf_bb_config including the vfio token passing but we can reproduce here as well) or something else? 

There are 2 things to change:
1/ igb_uio is going to be deprecated, so we must emphasize VFIO
2/ for doc maintenance, it is better to have common steps described in one place.
If needed, you can change the common doc and refer to it.
  
Chautru, Nicolas Oct. 31, 2022, 9:41 p.m. UTC | #5
Hi Thomas, 

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> 31/10/2022 16:43, Chautru, Nicolas:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 12/10/2022 19:59, Nicolas Chautru:
> > > > +Bind PF UIO driver(s)
> > > > +~~~~~~~~~~~~~~~~~~~~~
> > > > +
> > > > +Install the DPDK igb_uio driver, bind it with the PF PCI device
> > > > +ID and use ``lspci`` to confirm the PF device is under use by
> > > > +``igb_uio`` DPDK
> > > UIO driver.
> > >
> > > igb_uio is not recommended.
> > > Please focus on VFIO first.
> > >
> > > > +The igb_uio driver may be bound to the PF PCI device using one of
> > > > +two methods for ACC200:
> > > > +
> > > > +
> > > > +1. PCI functions (physical or virtual, depending on the use case)
> > > > +can be bound to the UIO driver by repeating this command for every
> function.
> > > > +
> > > > +.. code-block:: console
> > > > +
> > > > +  cd <dpdk-top-level-directory>
> > > > +  insmod ./build/kmod/igb_uio.ko
> > > > +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > > > +  lspci -vd8086:57c0
> > > > +
> > > > +
> > > > +2. Another way to bind PF with DPDK UIO driver is by using the
> > > > +``dpdk-devbind.py`` tool
> > > > +
> > > > +.. code-block:: console
> > > > +
> > > > +  cd <dpdk-top-level-directory>
> > > > +  ./usertools/dpdk-devbind.py -b igb_uio 0000:f7:00.0
> > > > +
> > > > +where the PCI device ID (example: 0000:f7:00.0) is obtained using
> > > > +lspci -vd8086:57c0
> > >
> > > This binding is not specific to the driver.
> > > It would be better to refer to the Linux guide instead of
> > > duplicating it again and again.
> > >
> > > > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> > >
> > > You could mention igb_uio here.
> > > Is there any advantage in using igb_uio?
> > >
> >
> > Igb_uio is arguably easier to use to new user tend to start with it or specific
> ecosystem. This is typically the entry point (no iommu, no flr below the bonnet,
> no vfio token...) hence good to have a bit of handholding with a couple of lines
> capturing how to easily run a few tests. I don't believe this is too redundant to
> have these few lines compared to the help in bring to the user not having to
> double guess their steps.
> > More generally there are a number of module drivers combinations that are
> supported based on different deployments. We don't document in too much
> details for the details since that is not too ACC specific and there is more
> documentation no pf_bb_config repo for using the PMD from the VF..
> >
> > Basically Thomas let us know more explicitly what you are suggesting as
> documentation update. You just want more emphasis on vfio-pci flow (which is
> fair, some of it documented on pf_bb_config including the vfio token passing
> but we can reproduce here as well) or something else?
> 
> There are 2 things to change:
> 1/ igb_uio is going to be deprecated, so we must emphasize on VFIO

Is there a date for the deprecation? Do you mean to EOL the dpdk-kmods repository itself, something more specific in the DPDK code like removing RTE_PCI_KDRV_IGB_UIO, or just taking it out of the documentation?
It tends to be historical, but UIO has value, notably for ease of use.

> 2/ for doc maintenance, it is better to have common steps described in one place.
> If needed, you can change the common doc and refer to it.

Do you mean to remove these sections and just add a pointer to https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html instead, in all these bbdev PMDs?
Please kindly confirm. I see specific steps for binding in many other PMD docs in DPDK; it is a bit redundant, but it provides simple PMD-specific steps in one place. I don't mind either way.

Thanks
Nic
  
Chautru, Nicolas Nov. 7, 2022, 11:52 p.m. UTC | #6
Hi Thomas, 
Reminder: would you mind kindly clarifying/confirming below? Then we can update the docs accordingly. Thanks.

> -----Original Message-----
> From: Chautru, Nicolas
> Sent: Monday, October 31, 2022 2:41 PM
> To: Thomas Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org; gakhil@marvell.com; maxime.coquelin@redhat.com;
> trix@redhat.com; Richardson, Bruce <bruce.richardson@intel.com>;
> hemant.agrawal@nxp.com; david.marchand@redhat.com;
> stephen@networkplumber.org; Vargas, Hernan <Hernan.Vargas@intel.com>
> Subject: RE: [PATCH v12 04/16] baseband/acc: introduce PMD for ACC200
> 
> Hi Thomas,
> 
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > 31/10/2022 16:43, Chautru, Nicolas:
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > 12/10/2022 19:59, Nicolas Chautru:
> > > > > +Bind PF UIO driver(s)
> > > > > +~~~~~~~~~~~~~~~~~~~~~
> > > > > +
> > > > > +Install the DPDK igb_uio driver, bind it with the PF PCI device
> > > > > +ID and use ``lspci`` to confirm the PF device is under use by
> > > > > +``igb_uio`` DPDK
> > > > UIO driver.
> > > >
> > > > igb_uio is not recommended.
> > > > Please focus on VFIO first.
> > > >
> > > > > +The igb_uio driver may be bound to the PF PCI device using one
> > > > > +of two methods for ACC200:
> > > > > +
> > > > > +
> > > > > +1. PCI functions (physical or virtual, depending on the use
> > > > > +case) can be bound to the UIO driver by repeating this command
> > > > > +for every
> > function.
> > > > > +
> > > > > +.. code-block:: console
> > > > > +
> > > > > +  cd <dpdk-top-level-directory>  insmod ./build/kmod/igb_uio.ko
> > > > > + echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > > > > +  lspci -vd8086:57c0
> > > > > +
> > > > > +
> > > > > +2. Another way to bind PF with DPDK UIO driver is by using the
> > > > > +``dpdk-devbind.py`` tool
> > > > > +
> > > > > +.. code-block:: console
> > > > > +
> > > > > +  cd <dpdk-top-level-directory>  ./usertools/dpdk-devbind.py -b
> > > > > + igb_uio 0000:f7:00.0
> > > > > +
> > > > > +where the PCI device ID (example: 0000:f7:00.0) is obtained
> > > > > +using lspci -vd8086:57c0
> > > >
> > > > This binding is not specific to the driver.
> > > > It would be better to refer to the Linux guide instead of
> > > > duplicating it again and again.
> > > >
> > > > > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> > > >
> > > > You could mention igb_uio here.
> > > > Is there any advantage in using igb_uio?
> > > >
> > >
> > > Igb_uio is arguably easier to use to new user tend to start with it
> > > or specific
> > ecosystem. This is typically the entry point (no iommu, no flr below
> > the bonnet, no vfio token...) hence good to have a bit of handholding
> > with a couple of lines capturing how to easily run a few tests. I
> > don't believe this is too redundant to have these few lines compared
> > to the help in bring to the user not having to double guess their steps.
> > > More generally there are a number of module drivers combinations
> > > that are
> > supported based on different deployments. We don't document in too
> > much details for the details since that is not too ACC specific and
> > there is more documentation no pf_bb_config repo for using the PMD from
> the VF..
> > >
> > > Basically Thomas let us know more explicitly what you are suggesting
> > > as
> > documentation update. You just want more emphasis on vfio-pci flow
> > (which is fair, some of it documented on pf_bb_config including the
> > vfio token passing but we can reproduce here as well) or something else?
> >
> > There are 2 things to change:
> > 1/ igb_uio is going to be deprecated, so we must emphasize on VFIO
> 
> Is there a date for deprecation? Do you mean to EOL the dpdk-kmods
> repository itself; or something more specific for DPDK code like removing
> RTE_PCI_KDRV_IGB_UIO; or last to just take out from documentation?
> It tends to be historical but uio has value notably for ease of use.
> 
> > 2/ for doc maintenance, it is better to have common steps described in one place.
> > If needed, you can change the common doc and refer to it.
> 
> Do you mean to remove these sections and just add a pointer to
> https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html instead in all these
> bbdev PMDS?
> Please kindly confirm. I see specific steps for binding in many other PMDs docs
> in DPDK, a bit redundant but provides simple steps specific to a PMD in one
> place. I don't mind either way.
> 
> Thanks
> Nic
>
  
Thomas Monjalon Nov. 8, 2022, 8:56 a.m. UTC | #7
08/11/2022 00:52, Chautru, Nicolas:
> Hi Thomas, 
> Reminder : do you mind kindly clarifying/confirming below. Then we can update the docs accordingly. Thanks. 
> 
> From: Chautru, Nicolas
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 31/10/2022 16:43, Chautru, Nicolas:
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > 12/10/2022 19:59, Nicolas Chautru:
> > > > > > +Bind PF UIO driver(s)
> > > > > > +~~~~~~~~~~~~~~~~~~~~~
> > > > > > +
> > > > > > +Install the DPDK igb_uio driver, bind it with the PF PCI device
> > > > > > +ID and use ``lspci`` to confirm the PF device is under use by
> > > > > > +``igb_uio`` DPDK
> > > > > UIO driver.
> > > > >
> > > > > igb_uio is not recommended.
> > > > > Please focus on VFIO first.
> > > > >
> > > > > > +The igb_uio driver may be bound to the PF PCI device using one
> > > > > > +of two methods for ACC200:
> > > > > > +
> > > > > > +
> > > > > > +1. PCI functions (physical or virtual, depending on the use
> > > > > > +case) can be bound to the UIO driver by repeating this command
> > > > > > +for every
> > > function.
> > > > > > +
> > > > > > +.. code-block:: console
> > > > > > +
> > > > > > +  cd <dpdk-top-level-directory>  insmod ./build/kmod/igb_uio.ko
> > > > > > + echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > > > > > +  lspci -vd8086:57c0
> > > > > > +
> > > > > > +
> > > > > > +2. Another way to bind PF with DPDK UIO driver is by using the
> > > > > > +``dpdk-devbind.py`` tool
> > > > > > +
> > > > > > +.. code-block:: console
> > > > > > +
> > > > > > +  cd <dpdk-top-level-directory>  ./usertools/dpdk-devbind.py -b
> > > > > > + igb_uio 0000:f7:00.0
> > > > > > +
> > > > > > +where the PCI device ID (example: 0000:f7:00.0) is obtained
> > > > > > +using lspci -vd8086:57c0
> > > > >
> > > > > This binding is not specific to the driver.
> > > > > It would be better to refer to the Linux guide instead of
> > > > > duplicating it again and again.
> > > > >
> > > > > > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> > > > >
> > > > > You could mention igb_uio here.
> > > > > Is there any advantage in using igb_uio?
> > > > >
> > > >
> > > > Igb_uio is arguably easier to use to new user tend to start with it
> > > > or specific
> > > ecosystem. This is typically the entry point (no iommu, no flr below
> > > the bonnet, no vfio token...) hence good to have a bit of handholding
> > > with a couple of lines capturing how to easily run a few tests. I
> > > don't believe this is too redundant to have these few lines compared
> > > to the help in bring to the user not having to double guess their steps.
> > > > More generally there are a number of module drivers combinations
> > > > that are
> > > supported based on different deployments. We don't document in too
> > > much details for the details since that is not too ACC specific and
> > > there is more documentation no pf_bb_config repo for using the PMD from
> > the VF..
> > > >
> > > > Basically Thomas let us know more explicitly what you are suggesting
> > > > as
> > > documentation update. You just want more emphasis on vfio-pci flow
> > > (which is fair, some of it documented on pf_bb_config including the
> > > vfio token passing but we can reproduce here as well) or something else?
> > >
> > > There are 2 things to change:
> > > 1/ igb_uio is going to be deprecated, so we must emphasize on VFIO
> > 
> > Is there a date for deprecation? Do you mean to EOL the dpdk-kmods
> > repository itself; or something more specific for DPDK code like removing
> > RTE_PCI_KDRV_IGB_UIO; or last to just take out from documentation?

There is no final decision yet, but the techboard wishes
that we focus more on VFIO, which is in Linux upstream.
Out-of-tree modules (like igb_uio) are not recommended.

> > It tends to be historical but uio has value notably for ease of use.

I don't think it is easy to use an out-of-tree module.
It needs to be compiled and installed.
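
For reference, the compile-and-install flow for the out-of-tree module looks roughly like this (assuming the igb_uio sources live in the dpdk-kmods repository and kernel headers are installed for the running kernel):

```shell
# igb_uio lives in the separate dpdk-kmods repository, outside the DPDK tree
git clone git://dpdk.org/dpdk-kmods
cd dpdk-kmods/linux/igb_uio
make                # builds igb_uio.ko against the running kernel's headers
sudo modprobe uio   # igb_uio depends on the in-tree uio module
sudo insmod ./igb_uio.ko
```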

> > > 2/ for doc maintenance, it is better to have common steps described in one place.
> > > If needed, you can change the common doc and refer to it.
> > 
> > Do you mean to remove these sections and just add a pointer to
> > https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html instead in all these
> > bbdev PMDS?

Yes.
If the Linux guide is not convenient, I suggest improving it.

> > Please kindly confirm. I see specific steps for binding in many other PMDs docs
> > in DPDK, a bit redundant but provides simple steps specific to a PMD in one
> > place. I don't mind either way.

The other PMD docs should point to a common doc.

Redundant docs make updates very hard.
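
A minimal sketch of what "point to a common doc" could look like in an RST PMD guide (the cross-reference target is an assumption based on the guide layout, not the actual patch text):

```rst
Bind PF UIO/VFIO driver(s)
~~~~~~~~~~~~~~~~~~~~~~~~~~

The ACC200 PF (PCI device ID ``8086:57c0``) is bound to ``vfio-pci``
(or, if needed, ``igb_uio``) in the same way as any other PCIe device.
See :doc:`../linux_gsg/linux_drivers` for the common binding steps.
```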
  
Chautru, Nicolas Nov. 8, 2022, 11:47 p.m. UTC | #8
Hi Thomas, 

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> 08/11/2022 00:52, Chautru, Nicolas:
> > Hi Thomas,
> > Reminder : do you mind kindly clarifying/confirming below. Then we can
> update the docs accordingly. Thanks.
> >
> > From: Chautru, Nicolas
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > 31/10/2022 16:43, Chautru, Nicolas:
> > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > 12/10/2022 19:59, Nicolas Chautru:
> > > > > > > +Bind PF UIO driver(s)
> > > > > > > +~~~~~~~~~~~~~~~~~~~~~
> > > > > > > +
> > > > > > > +Install the DPDK igb_uio driver, bind it with the PF PCI
> > > > > > > +device ID and use ``lspci`` to confirm the PF device is
> > > > > > > +under use by ``igb_uio`` DPDK
> > > > > > UIO driver.
> > > > > >
> > > > > > igb_uio is not recommended.
> > > > > > Please focus on VFIO first.
> > > > > >
> > > > > > > +The igb_uio driver may be bound to the PF PCI device using
> > > > > > > +one of two methods for ACC200:
> > > > > > > +
> > > > > > > +
> > > > > > > +1. PCI functions (physical or virtual, depending on the use
> > > > > > > +case) can be bound to the UIO driver by repeating this
> > > > > > > +command for every
> > > > function.
> > > > > > > +
> > > > > > > +.. code-block:: console
> > > > > > > +
> > > > > > > +  cd <dpdk-top-level-directory>  insmod
> > > > > > > + ./build/kmod/igb_uio.ko echo "8086 57c0" >
> > > > > > > + /sys/bus/pci/drivers/igb_uio/new_id
> > > > > > > +  lspci -vd8086:57c0
> > > > > > > +
> > > > > > > +
> > > > > > > +2. Another way to bind PF with DPDK UIO driver is by using
> > > > > > > +the ``dpdk-devbind.py`` tool
> > > > > > > +
> > > > > > > +.. code-block:: console
> > > > > > > +
> > > > > > > +  cd <dpdk-top-level-directory>
> > > > > > > + ./usertools/dpdk-devbind.py -b igb_uio 0000:f7:00.0
> > > > > > > +
> > > > > > > +where the PCI device ID (example: 0000:f7:00.0) is obtained
> > > > > > > +using lspci -vd8086:57c0
> > > > > >
> > > > > > This binding is not specific to the driver.
> > > > > > It would be better to refer to the Linux guide instead of
> > > > > > duplicating it again and again.
> > > > > >
> > > > > > > +In a similar way the PF may be bound with vfio-pci as any PCIe
> device.
> > > > > >
> > > > > > You could mention igb_uio here.
> > > > > > Is there any advantage in using igb_uio?
> > > > > >
> > > > >
> > > > > Igb_uio is arguably easier to use to new user tend to start with
> > > > > it or specific
> > > > ecosystem. This is typically the entry point (no iommu, no flr
> > > > below the bonnet, no vfio token...) hence good to have a bit of
> > > > handholding with a couple of lines capturing how to easily run a
> > > > few tests. I don't believe this is too redundant to have these few
> > > > lines compared to the help in bring to the user not having to double guess
> their steps.
> > > > > More generally there are a number of module drivers combinations
> > > > > that are
> > > > supported based on different deployments. We don't document in too
> > > > much details for the details since that is not too ACC specific
> > > > and there is more documentation no pf_bb_config repo for using the
> > > > PMD from
> > > the VF..
> > > > >
> > > > > Basically Thomas let us know more explicitly what you are
> > > > > suggesting as
> > > > documentation update. You just want more emphasis on vfio-pci flow
> > > > (which is fair, some of it documented on pf_bb_config including
> > > > the vfio token passing but we can reproduce here as well) or something
> else?
> > > >
> > > > There are 2 things to change:
> > > > 1/ igb_uio is going to be deprecated, so we must emphasize on VFIO
> > >
> > > Is there a date for deprecation? Do you mean to EOL the dpdk-kmods
> > > repository itself; or something more specific for DPDK code like
> > > removing RTE_PCI_KDRV_IGB_UIO; or last to just take out from
> documentation?
> 
> There is no final decision yet, but the techboard wishes we focus more on VFIO
> which is in Linux upstream.
> Out-of-tree module (like igb_uio) is not recommended.
> 
> > > It tends to be historical but uio has value notably for ease of use.
> 
> I don't think it is easy to use an out-of-tree module.
> It needs to be compiled and installed.

That is more up to the user. For some users/ecosystems it can be genuinely, significantly easier, and there are also historical deployments that still need to be supported,
even if vfio-pci is the focus for most deployments.
But since you mention that you are thinking about removing support, I see value in keeping it for a bit still.

> 
> > > > 2/ for doc maintenance, it is better to have common steps described in one place.
> > > > If needed, you can change the common doc and refer to it.
> > >
> > > Do you mean to remove these sections and just add a pointer to
> > > https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html instead in
> > > all these bbdev PMDS?
> 
> Yes
> If the Linux guide is not convenient, I suggest to improve it.
> 
> > > Please kindly confirm. I see specific steps for binding in many
> > > other PMDs docs in DPDK, a bit redundant but provides simple steps
> > > specific to a PMD in one place. I don't mind either way.
> 
> The other PMD docs should point to a common doc.
> 
> Redundant docs make very hard to update.
> 

OK, I did an update in this v1 for your review:
https://patches.dpdk.org/project/dpdk/patch/20221108234325.45589-2-nicolas.chautru@intel.com/
The warning from checkpatch is a false alarm.

Thanks
Nic
  
Chautru, Nicolas Oct. 24, 2023, 7:22 a.m. UTC | #9
Hi Thomas, 
With regard to your statement that "igb_uio is going to be deprecated": can you please clarify whether this intent is documented or captured anywhere, in any technical meeting minutes or any other DPDK doc or communication? I could not find it.
Much appreciated,
Nic

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, October 31, 2022 4:54 PM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>
> Cc: dev@dpdk.org; gakhil@marvell.com; maxime.coquelin@redhat.com;
> trix@redhat.com; Richardson, Bruce <bruce.richardson@intel.com>;
> hemant.agrawal@nxp.com; david.marchand@redhat.com;
> stephen@networkplumber.org; Vargas, Hernan <hernan.vargas@intel.com>
> Subject: Re: [PATCH v12 04/16] baseband/acc: introduce PMD for ACC200
> 
> 31/10/2022 16:43, Chautru, Nicolas:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 12/10/2022 19:59, Nicolas Chautru:
> > > > +Bind PF UIO driver(s)
> > > > +~~~~~~~~~~~~~~~~~~~~~
> > > > +
> > > > +Install the DPDK igb_uio driver, bind it with the PF PCI device
> > > > +ID and use ``lspci`` to confirm the PF device is under use by
> > > > +``igb_uio`` DPDK
> > > UIO driver.
> > >
> > > igb_uio is not recommended.
> > > Please focus on VFIO first.
> > >
> > > > +The igb_uio driver may be bound to the PF PCI device using one of
> > > > +two methods for ACC200:
> > > > +
> > > > +
> > > > +1. PCI functions (physical or virtual, depending on the use case)
> > > > +can be bound to the UIO driver by repeating this command for every
> function.
> > > > +
> > > > +.. code-block:: console
> > > > +
> > > > +  cd <dpdk-top-level-directory>
> > > > +  insmod ./build/kmod/igb_uio.ko
> > > > +  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
> > > > +  lspci -vd8086:57c0
> > > > +
> > > > +
> > > > +2. Another way to bind PF with DPDK UIO driver is by using the
> > > > +``dpdk-devbind.py`` tool
> > > > +
> > > > +.. code-block:: console
> > > > +
> > > > +  cd <dpdk-top-level-directory>
> > > > +  ./usertools/dpdk-devbind.py -b igb_uio 0000:f7:00.0
> > > > +
> > > > +where the PCI device ID (example: 0000:f7:00.0) is obtained using
> > > > +lspci -vd8086:57c0
> > >
> > > This binding is not specific to the driver.
> > > It would be better to refer to the Linux guide instead of
> > > duplicating it again and again.
> > >
> > > > +In a similar way the PF may be bound with vfio-pci as any PCIe device.
> > >
> > > You could mention igb_uio here.
> > > Is there any advantage in using igb_uio?
> > >
> >
> > Igb_uio is arguably easier to use to new user tend to start with it or specific
> ecosystem. This is typically the entry point (no iommu, no flr below the bonnet,
> no vfio token...) hence good to have a bit of handholding with a couple of lines
> capturing how to easily run a few tests. I don't believe this is too redundant to
> have these few lines compared to the help in bring to the user not having to
> double guess their steps.
> > More generally, there are a number of module/driver combinations that
> > are supported depending on the deployment. We don't document these in
> > too much detail since they are not ACC-specific, and there is more
> > documentation in the pf_bb_config repo for using the PMD from the VF.
> > Basically, Thomas, let us know more explicitly what you are suggesting
> > as a documentation update. Do you just want more emphasis on the
> > vfio-pci flow (which is fair; some of it is documented in pf_bb_config,
> > including the VFIO token passing, but we can reproduce it here as
> > well), or something else?
> 
> There are 2 things to change:
> 1/ igb_uio is going to be deprecated, so we must emphasize VFIO.
> 2/ For doc maintenance, it is better to have common steps described in one
> place. If needed, you can change the common doc and refer to it.
> 
>
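For reference, the vfio-pci flow being discussed can be sketched as below. This is an untested sketch: the PCI address, VF count, and token value are examples, and the `-v` VFIO-token option of `pf_bb_config` is assumed from the pf-bb-config repo documentation. Since the real commands need root and the actual hardware, `DRY_RUN=1` (the default) only prints them.

```shell
# Sketch of the vfio-pci flow for the ACC200 PF (assumptions: example PCI
# address and VF count; pf_bb_config's -v option taken from its repo docs).
DRY_RUN=${DRY_RUN:-1}
# Print commands in dry-run mode instead of executing them.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

PF_PCI_ADDR=0000:f7:00.0   # example, as obtained from: lspci -vd8086:57c0
run modprobe vfio-pci
run ./usertools/dpdk-devbind.py -b vfio-pci "$PF_PCI_ADDR"
# With vfio-pci, VFs are created through the standard sysfs interface:
run sh -c "echo 2 > /sys/bus/pci/devices/$PF_PCI_ADDR/sriov_numvfs"
# A VFIO token (a UUID) is then shared between pf_bb_config and the
# DPDK application running on the VF:
run ./pf_bb_config ACC200 -v 00112233-4455-6677-8899-aabbccddeeff \
    -c ./acc200/acc200_config_vf_5g.cfg
```

Running with `DRY_RUN=0` would execute the same sequence on a machine that actually has the device.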
  

Patch

diff --git a/MAINTAINERS b/MAINTAINERS
index 31597139c7..5c095b45d1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1339,6 +1339,9 @@  F: drivers/baseband/acc/
 F: doc/guides/bbdevs/acc100.rst
 F: doc/guides/bbdevs/features/acc100.ini
 F: doc/guides/bbdevs/features/acc101.ini
+F: drivers/baseband/acc200/
+F: doc/guides/bbdevs/acc200.rst
+F: doc/guides/bbdevs/features/acc200.ini
 
 Null baseband
 M: Nicolas Chautru <nicolas.chautru@intel.com>
diff --git a/doc/guides/bbdevs/acc200.rst b/doc/guides/bbdevs/acc200.rst
new file mode 100644
index 0000000000..33c4fa9544
--- /dev/null
+++ b/doc/guides/bbdevs/acc200.rst
@@ -0,0 +1,257 @@ 
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2022 Intel Corporation
+
+Intel(R) ACC200 vRAN Dedicated Accelerator Poll Mode Driver
+===========================================================
+
+The Intel® vRAN Dedicated Accelerator ACC200 peripheral enables cost-effective
+4G and 5G next-generation virtualized Radio Access Network (vRAN) solutions
+integrated on the Sapphire Rapids Edge Enhanced Processor (SPR-EE)
+Intel(R) 7 based Xeon(R) multi-core server processor.
+
+Features
+--------
+
+The ACC200 includes a 5G Low Density Parity Check (LDPC) encoder/decoder,
+rate match/dematch, Hybrid Automatic Repeat Request (HARQ) with access to DDR
+memory for buffer management, a 4G Turbo encoder/decoder, a
+Fast Fourier Transform (FFT) block providing DFT/iDFT processing offload
+for the 5G Sounding Reference Signal (SRS), a Queue Manager (QMGR), and
+a DMA subsystem.
+There is no dedicated on-card memory for HARQ; coherent memory
+on the CPU side is used instead.
+
+These correspond to the following features exposed by the PMD:
+
+- LDPC Encode in the Downlink (5GNR)
+- LDPC Decode in the Uplink (5GNR)
+- Turbo Encode in the Downlink (4G)
+- Turbo Decode in the Uplink (4G)
+- FFT processing
+- SR-IOV with 16 VFs per PF
+- Maximum of 256 queues per VF
+- MSI
+
+ACC200 PMD supports the following BBDEV capabilities:
+
+* For the LDPC encode operation:
+   - ``RTE_BBDEV_LDPC_CRC_24B_ATTACH`` :  set to attach CRC24B to CB(s).
+   - ``RTE_BBDEV_LDPC_RATE_MATCH`` :  if set then do not do Rate Match bypass.
+   - ``RTE_BBDEV_LDPC_INTERLEAVER_BYPASS`` : if set then bypass interleaver.
+
+* For the LDPC decode operation:
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK`` :  check CRC24B from CB(s).
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP`` :  drops CRC24B bits appended while decoding.
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK`` :  check CRC24A from CB(s).
+   - ``RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK`` :  check CRC16 from CB(s).
+   - ``RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE`` :  provides an input for HARQ combining.
+   - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` :  provides an output for HARQ combining.
+   - ``RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE`` :  disable early termination.
+   - ``RTE_BBDEV_LDPC_DEC_SCATTER_GATHER`` :  supports scatter-gather for input/output data.
+   - ``RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION`` :  supports compression of the HARQ input/output.
+   - ``RTE_BBDEV_LDPC_LLR_COMPRESSION`` :  supports LLR input compression.
+
+* For the turbo encode operation:
+   - ``RTE_BBDEV_TURBO_CRC_24B_ATTACH`` :  set to attach CRC24B to CB(s).
+   - ``RTE_BBDEV_TURBO_RATE_MATCH`` :  if set then do not do Rate Match bypass.
+   - ``RTE_BBDEV_TURBO_ENC_INTERRUPTS`` :  set for encoder dequeue interrupts.
+   - ``RTE_BBDEV_TURBO_RV_INDEX_BYPASS`` :  set to bypass RV index.
+   - ``RTE_BBDEV_TURBO_ENC_SCATTER_GATHER`` :  supports scatter-gather for input/output data.
+
+* For the turbo decode operation:
+   - ``RTE_BBDEV_TURBO_CRC_TYPE_24B`` :  check CRC24B from CB(s).
+   - ``RTE_BBDEV_TURBO_SUBBLOCK_DEINTERLEAVE`` :  perform subblock de-interleave.
+   - ``RTE_BBDEV_TURBO_DEC_INTERRUPTS`` :  set for decoder dequeue interrupts.
+   - ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_IN`` :  set if negative LLR input is supported.
+   - ``RTE_BBDEV_TURBO_DEC_TB_CRC_24B_KEEP`` :  keep CRC24B bits appended while decoding.
+   - ``RTE_BBDEV_TURBO_DEC_CRC_24B_DROP`` : option to drop the code block CRC after decoding.
+   - ``RTE_BBDEV_TURBO_EARLY_TERMINATION`` :  set early termination feature.
+   - ``RTE_BBDEV_TURBO_DEC_SCATTER_GATHER`` :  supports scatter-gather for input/output data.
+   - ``RTE_BBDEV_TURBO_HALF_ITERATION_EVEN`` :  set half iteration granularity.
+   - ``RTE_BBDEV_TURBO_SOFT_OUTPUT`` :  set the APP LLR soft output.
+   - ``RTE_BBDEV_TURBO_EQUALIZER`` :  set the turbo equalizer feature.
+   - ``RTE_BBDEV_TURBO_SOFT_OUT_SATURATE`` :  set the soft output saturation.
+   - ``RTE_BBDEV_TURBO_CONTINUE_CRC_MATCH`` :  set to run an extra odd iteration after CRC match.
+   - ``RTE_BBDEV_TURBO_NEG_LLR_1_BIT_SOFT_OUT`` :  set if negative APP LLR output supported.
+   - ``RTE_BBDEV_TURBO_MAP_DEC`` :  supports flexible parallel MAP engine decoding.
+
+* For the FFT operation:
+   - ``RTE_BBDEV_FFT_WINDOWING`` :  flexible windowing capability.
+   - ``RTE_BBDEV_FFT_CS_ADJUSTMENT`` :  flexible adjustment of Cyclic Shift time offset.
+   - ``RTE_BBDEV_FFT_DFT_BYPASS`` :  set to bypass the DFT and feed the iDFT input directly.
+   - ``RTE_BBDEV_FFT_IDFT_BYPASS`` :  set to bypass the iDFT and get the DFT output directly.
+   - ``RTE_BBDEV_FFT_WINDOWING_BYPASS`` : set to bypass time-domain windowing.
+
+Installation
+------------
+
+Section 3 of the DPDK manual provides instructions on installing and compiling DPDK.
+
+DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual.
+The bbdev test application has been tested with a configuration of 40 x 1GB hugepages.
+The hugepage configuration of a server may be examined using:
+
+.. code-block:: console
+
+   grep Huge* /proc/meminfo
+
+
+Initialization
+--------------
+
+When the device first powers up, its PCI Physical Function (PF) can be
+listed through this command for ACC200:
+
+.. code-block:: console
+
+  sudo lspci -vd8086:57c0
+
+The physical and virtual functions are compatible with the Linux userspace
+drivers ``vfio-pci`` and ``igb_uio``. However, before it can work, the 5G/4G
+FEC device first needs to be bound to one of these Linux drivers through DPDK.
+
+
+Bind PF UIO driver(s)
+~~~~~~~~~~~~~~~~~~~~~
+
+Install the DPDK igb_uio driver, bind it with the PF PCI device ID and use
+``lspci`` to confirm the PF device is under use by ``igb_uio`` DPDK UIO driver.
+
+The igb_uio driver may be bound to the PF PCI device using one of two methods
+for ACC200:
+
+
+1. PCI functions (physical or virtual, depending on the use case) can be bound
+to the UIO driver by repeating this command for every function.
+
+.. code-block:: console
+
+  cd <dpdk-top-level-directory>
+  insmod ./build/kmod/igb_uio.ko
+  echo "8086 57c0" > /sys/bus/pci/drivers/igb_uio/new_id
+  lspci -vd8086:57c0
+
+
+2. Another way to bind PF with DPDK UIO driver is by using the ``dpdk-devbind.py`` tool
+
+.. code-block:: console
+
+  cd <dpdk-top-level-directory>
+  ./usertools/dpdk-devbind.py -b igb_uio 0000:f7:00.0
+
+where the PCI device ID (example: 0000:f7:00.0) is obtained using ``lspci -vd8086:57c0``.
+
+
+In a similar way the PF may be bound with vfio-pci as any PCIe device.
+
+
+Enable Virtual Functions
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The printouts should now show that the PCI PF is under igb_uio control:
+"``Kernel driver in use: igb_uio``".
+
+To show the number of available VFs on the device, read the ``sriov_totalvfs`` file:
+
+.. code-block:: console
+
+  cat /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_totalvfs
+
+  where 0000\:<b>\:<d>.<f> is the PCI device ID
+
+
+To enable VFs via igb_uio, write the number of virtual functions intended
+to be enabled to the ``max_vfs`` file:
+
+.. code-block:: console
+
+  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/max_vfs
+
+
+Afterwards, all VFs must be bound to the appropriate UIO drivers as required,
+in the same way as was done for the physical function previously.
+
+Enabling SR-IOV via the vfio driver works in much the same way, except that
+the file name is different:
+
+.. code-block:: console
+
+  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_numvfs
+
+
+Configure the VFs through PF
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The PCI virtual functions must be configured before working or getting assigned
+to VMs/Containers. The configuration involves allocating the number of hardware
+queues, priorities, load balance, bandwidth and other settings necessary for the
+device to perform FEC functions.
+
+This configuration needs to be executed at least once after reboot or PCI FLR,
+and can be achieved by using the function ``rte_acc200_configure()``,
+which sets up the parameters defined in the compatible ``acc200_conf`` structure.
+
+Test Application
+----------------
+
+BBDEV provides a test application, ``test-bbdev.py``, and a range of test data
+for testing the functionality of the device, depending on the device's
+capabilities. The test application is located under the ``app/test-bbdev``
+folder and has the following options:
+
+.. code-block:: console
+
+  "-p", "--testapp-path": specifies path to the bbdev test app.
+  "-e", "--eal-params"	: EAL arguments which are passed to the test app.
+  "-t", "--timeout"	: Timeout in seconds (default=300).
+  "-c", "--test-cases"	: Defines test cases to run. Run all if not specified.
+  "-v", "--test-vector"	: Test vector path.
+  "-n", "--num-ops"	: Number of operations to process on device (default=32).
+  "-b", "--burst-size"	: Operations enqueue/dequeue burst size (default=32).
+  "-s", "--snr"		: SNR in dB used when generating LLRs for bler tests.
+  "-s", "--iter_max"	: Number of iterations for LDPC decoder.
+  "-l", "--num-lcores"	: Number of lcores to run (default=16).
+  "-i", "--init-device" : Initialise PF device with default values.
+
+
+To execute the test application tool using simple decode or encode data,
+type one of the following:
+
+.. code-block:: console
+
+  ./test-bbdev.py -c validation -n 64 -b 1 -v ./ldpc_dec_default.data
+  ./test-bbdev.py -c validation -n 64 -b 1 -v ./ldpc_enc_default.data
+
+
+The test application ``test-bbdev.py`` supports the ability to configure the
+PF device with a default set of values, if the "-i" or "--init-device" option
+is included. The default values are defined in test_bbdev_perf.c.
+
+
+Test Vectors
+~~~~~~~~~~~~
+
+In addition to the simple LDPC decoder and LDPC encoder tests,
+bbdev also provides a range of additional tests under the test_vectors folder,
+which may be useful.
+The results of these tests will depend on the device capabilities, which may
+cause some test cases to be skipped, but no failure should be reported.
+
+
+Alternate Baseband Device configuration tool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On top of the embedded configuration feature supported in test-bbdev through
+the "--init-device" option mentioned above, device configuration can also be
+performed using a companion application.
+The ``pf_bb_config`` application notably enables running bbdev-test from
+the VF, rather than only from the PF as captured above.
+
+See for more details: https://github.com/intel/pf-bb-config
+
+Specifically for the BBDEV ACC200 PMD, the command below can be used:
+
+.. code-block:: console
+
+  ./pf_bb_config ACC200 -c ./acc200/acc200_config_vf_5g.cfg
+  ./test-bbdev.py -e="-c 0xff0 -a${VF_PCI_ADDR}" -c validation -n 64 -b 64 -l 1 -v ./ldpc_dec_default.data
diff --git a/doc/guides/bbdevs/features/acc200.ini b/doc/guides/bbdevs/features/acc200.ini
new file mode 100644
index 0000000000..7319aea726
--- /dev/null
+++ b/doc/guides/bbdevs/features/acc200.ini
@@ -0,0 +1,14 @@ 
+;
+; Supported features of the 'acc200' bbdev driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Turbo Decoder (4G)     = Y
+Turbo Encoder (4G)     = Y
+LDPC Decoder (5G)      = Y
+LDPC Encoder (5G)      = Y
+LLR/HARQ Compression   = Y
+FFT/SRS                = Y
+External DDR Access    = N
+HW Accelerated         = Y
diff --git a/doc/guides/bbdevs/features/default.ini b/doc/guides/bbdevs/features/default.ini
index 494be5e400..428ea6a0de 100644
--- a/doc/guides/bbdevs/features/default.ini
+++ b/doc/guides/bbdevs/features/default.ini
@@ -11,5 +11,6 @@  Turbo Encoder (4G)     =
 LDPC Decoder (5G)      =
 LDPC Encoder (5G)      =
 LLR/HARQ Compression   =
+FFT/SRS                =
 External DDR Access    =
 HW Accelerated         =
diff --git a/doc/guides/bbdevs/index.rst b/doc/guides/bbdevs/index.rst
index cedd706fa6..4e9dea8e4c 100644
--- a/doc/guides/bbdevs/index.rst
+++ b/doc/guides/bbdevs/index.rst
@@ -14,4 +14,5 @@  Baseband Device Drivers
     fpga_lte_fec
     fpga_5gnr_fec
     acc100
+    acc200
     la12xx
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index da69689c41..3aa95b7bac 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -241,6 +241,12 @@  New Features
   Added support for lookaside sessions in event mode.
   See the :doc:`../sample_app_ug/ipsec_secgw` for more details.
 
+* **Added Intel ACC200 bbdev PMD.**
+
+  Added a new ``acc200`` bbdev driver for the Intel\ |reg| ACC200 accelerator
+  integrated on SPR-EE.  See the
+  :doc:`../bbdevs/acc200` BBDEV guide for more details on this new driver.
+
 
 Removed Items
 -------------
diff --git a/drivers/baseband/acc/acc200_pmd.h b/drivers/baseband/acc/acc200_pmd.h
new file mode 100644
index 0000000000..aaa6b7753c
--- /dev/null
+++ b/drivers/baseband/acc/acc200_pmd.h
@@ -0,0 +1,32 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _RTE_ACC200_PMD_H_
+#define _RTE_ACC200_PMD_H_
+
+#include "acc_common.h"
+
+/* Helper macro for logging */
+#define rte_bbdev_log(level, fmt, ...) \
+	rte_log(RTE_LOG_ ## level, acc200_logtype, fmt "\n", \
+		##__VA_ARGS__)
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+#define rte_bbdev_log_debug(fmt, ...) \
+		rte_bbdev_log(DEBUG, "acc200_pmd: " fmt, \
+		##__VA_ARGS__)
+#else
+#define rte_bbdev_log_debug(fmt, ...)
+#endif
+
+/* ACC200 PF and VF driver names */
+#define ACC200PF_DRIVER_NAME           intel_acc200_pf
+#define ACC200VF_DRIVER_NAME           intel_acc200_vf
+
+/* ACC200 PCI vendor & device IDs */
+#define RTE_ACC200_VENDOR_ID           (0x8086)
+#define RTE_ACC200_PF_DEVICE_ID        (0x57C0)
+#define RTE_ACC200_VF_DEVICE_ID        (0x57C1)
+
+#endif /* _RTE_ACC200_PMD_H_ */
diff --git a/drivers/baseband/acc/meson.build b/drivers/baseband/acc/meson.build
index 9a1a3b8b07..63912f0621 100644
--- a/drivers/baseband/acc/meson.build
+++ b/drivers/baseband/acc/meson.build
@@ -3,6 +3,6 @@ 
 
 deps += ['bbdev', 'bus_vdev', 'ring', 'pci', 'bus_pci']
 
-sources = files('rte_acc100_pmd.c')
+sources = files('rte_acc100_pmd.c', 'rte_acc200_pmd.c')
 
 headers = files('rte_acc100_cfg.h')
diff --git a/drivers/baseband/acc/rte_acc200_pmd.c b/drivers/baseband/acc/rte_acc200_pmd.c
new file mode 100644
index 0000000000..c59cad1d26
--- /dev/null
+++ b/drivers/baseband/acc/rte_acc200_pmd.c
@@ -0,0 +1,143 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_byteorder.h>
+#include <rte_errno.h>
+#include <rte_branch_prediction.h>
+#include <rte_hexdump.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#ifdef RTE_BBDEV_OFFLOAD_COST
+#include <rte_cycles.h>
+#endif
+
+#include <rte_bbdev.h>
+#include <rte_bbdev_pmd.h>
+#include "acc200_pmd.h"
+
+#ifdef RTE_LIBRTE_BBDEV_DEBUG
+RTE_LOG_REGISTER_DEFAULT(acc200_logtype, DEBUG);
+#else
+RTE_LOG_REGISTER_DEFAULT(acc200_logtype, NOTICE);
+#endif
+
+static int
+acc200_dev_close(struct rte_bbdev *dev)
+{
+	RTE_SET_USED(dev);
+	/* Ensure all in flight HW transactions are completed. */
+	usleep(ACC_LONG_WAIT);
+	return 0;
+}
+
+
+static const struct rte_bbdev_ops acc200_bbdev_ops = {
+	.close = acc200_dev_close,
+};
+
+/* ACC200 PCI PF address map. */
+static struct rte_pci_id pci_id_acc200_pf_map[] = {
+	{
+		RTE_PCI_DEVICE(RTE_ACC200_VENDOR_ID, RTE_ACC200_PF_DEVICE_ID)
+	},
+	{.device_id = 0},
+};
+
+/* ACC200 PCI VF address map. */
+static struct rte_pci_id pci_id_acc200_vf_map[] = {
+	{
+		RTE_PCI_DEVICE(RTE_ACC200_VENDOR_ID, RTE_ACC200_VF_DEVICE_ID)
+	},
+	{.device_id = 0},
+};
+
+/* Initialization Function. */
+static void
+acc200_bbdev_init(struct rte_bbdev *dev, struct rte_pci_driver *drv)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
+
+	dev->dev_ops = &acc200_bbdev_ops;
+
+	((struct acc_device *) dev->data->dev_private)->pf_device =
+			!strcmp(drv->driver.name,
+					RTE_STR(ACC200PF_DRIVER_NAME));
+	((struct acc_device *) dev->data->dev_private)->mmio_base =
+			pci_dev->mem_resource[0].addr;
+
+	rte_bbdev_log_debug("Init device %s [%s] @ vaddr %p paddr %#"PRIx64"",
+			drv->driver.name, dev->data->name,
+			(void *)pci_dev->mem_resource[0].addr,
+			pci_dev->mem_resource[0].phys_addr);
+}
+
+static int acc200_pci_probe(struct rte_pci_driver *pci_drv,
+	struct rte_pci_device *pci_dev)
+{
+	struct rte_bbdev *bbdev = NULL;
+	char dev_name[RTE_BBDEV_NAME_MAX_LEN];
+
+	if (pci_dev == NULL) {
+		rte_bbdev_log(ERR, "NULL PCI device");
+		return -EINVAL;
+	}
+
+	rte_pci_device_name(&pci_dev->addr, dev_name, sizeof(dev_name));
+
+	/* Allocate memory to be used privately by drivers. */
+	bbdev = rte_bbdev_allocate(pci_dev->device.name);
+	if (bbdev == NULL)
+		return -ENODEV;
+
+	/* allocate device private memory. */
+	bbdev->data->dev_private = rte_zmalloc_socket(dev_name,
+			sizeof(struct acc_device), RTE_CACHE_LINE_SIZE,
+			pci_dev->device.numa_node);
+
+	if (bbdev->data->dev_private == NULL) {
+		rte_bbdev_log(CRIT,
+				"Allocation of %zu bytes for device \"%s\" failed",
+				sizeof(struct acc_device), dev_name);
+		rte_bbdev_release(bbdev);
+		return -ENOMEM;
+	}
+
+	/* Fill HW specific part of device structure. */
+	bbdev->device = &pci_dev->device;
+	bbdev->intr_handle = pci_dev->intr_handle;
+	bbdev->data->socket_id = pci_dev->device.numa_node;
+
+	/* Invoke ACC200 device initialization function. */
+	acc200_bbdev_init(bbdev, pci_drv);
+
+	rte_bbdev_log_debug("Initialised bbdev %s (id = %u)",
+			dev_name, bbdev->data->dev_id);
+	return 0;
+}
+
+static struct rte_pci_driver acc200_pci_pf_driver = {
+		.probe = acc200_pci_probe,
+		.remove = acc_pci_remove,
+		.id_table = pci_id_acc200_pf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING
+};
+
+static struct rte_pci_driver acc200_pci_vf_driver = {
+		.probe = acc200_pci_probe,
+		.remove = acc_pci_remove,
+		.id_table = pci_id_acc200_vf_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING
+};
+
+RTE_PMD_REGISTER_PCI(ACC200PF_DRIVER_NAME, acc200_pci_pf_driver);
+RTE_PMD_REGISTER_PCI_TABLE(ACC200PF_DRIVER_NAME, pci_id_acc200_pf_map);
+RTE_PMD_REGISTER_PCI(ACC200VF_DRIVER_NAME, acc200_pci_vf_driver);
+RTE_PMD_REGISTER_PCI_TABLE(ACC200VF_DRIVER_NAME, pci_id_acc200_vf_map);