[v7,05/12] net/nfp: add flower PF setup and mempool init logic

Message ID 1660299750-10668-6-git-send-email-chaoyong.he@corigine.com (mailing list archive)
State Changes Requested, archived
Delegated to: Ferruh Yigit
Series preparation for the rte_flow offload of nfp PMD

Checks

Context Check Description
ci/checkpatch success coding style OK

Commit Message

Chaoyong He Aug. 12, 2022, 10:22 a.m. UTC
  Adds the vNIC initialization logic for the flower PF vNIC.  The flower
firmware exposes this vNIC for the purposes of fallback traffic in the
switchdev use-case. The logic of setting up this vNIC is similar to the
logic seen in nfp_net_init() and nfp_net_start().

Adds minimal dev_ops for this PF device. Because the device is exposed
externally to DPDK it should also be configured using DPDK helpers like
rte_eth_dev_configure(). For these helpers to work, the flower logic
needs to implement a minimal set of dev_ops. The Rx and Tx logic for
this vNIC will be added in a subsequent commit.

OVS expects packets entering the OVS datapath to be allocated from a
mempool that contains objects of type "struct dp_packet". The PF, which
handles the slowpath into OVS, should therefore use a mempool that is
compatible with OVS. This commit adds the logic to create such an
OVS-compatible mempool, along with the OVS-specific structs needed to
instantiate it.
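
To make the layout concrete: the pool is an ordinary pktmbuf pool whose
per-mbuf private area is sized to hold the dp_packet fields that follow the
embedded rte_mbuf. Below is a minimal sketch of that idea only (the helper
name is hypothetical, and struct dp_packet plus MEMPOOL_CACHE_SIZE are
assumed to come from this patch); the real code in the patch additionally
rounds the buffer up to a cache line and presets each object's dp_packet
fields via rte_mempool_obj_iter().

```
#include <rte_mbuf.h>

/* Sketch only: size the mbuf private area so a dp_packet wraps each mbuf. */
static struct rte_mempool *
ovs_compat_pool_create(uint32_t nb_mbufs, unsigned int socket_id)
{
	/*
	 * dp_packet embeds the rte_mbuf as its first member; the remaining
	 * fields live in the per-mbuf private area.
	 */
	uint16_t priv_size = sizeof(struct dp_packet) - sizeof(struct rte_mbuf);

	return rte_pktmbuf_pool_create("flower_pf_mbuf_pool", nb_mbufs,
			MEMPOOL_CACHE_SIZE, priv_size,
			RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
}
```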

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c            | 351 ++++++++++++++++++++++++-
 drivers/net/nfp/flower/nfp_flower.h            |   9 +
 drivers/net/nfp/flower/nfp_flower_ovs_compat.h |  37 +++
 drivers/net/nfp/nfp_common.h                   |  11 +
 4 files changed, 404 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/nfp/flower/nfp_flower_ovs_compat.h
  

Comments

Ferruh Yigit Sept. 5, 2022, 3:42 p.m. UTC | #1
On 8/12/2022 11:22 AM, Chaoyong He wrote:
> Adds the vNIC initialization logic for the flower PF vNIC.  The flower
> firmware exposes this vNIC for the purposes of fallback traffic in the
> switchdev use-case. The logic of setting up this vNIC is similar to the
> logic seen in nfp_net_init() and nfp_net_start().
> 
> Adds minimal dev_ops for this PF device. Because the device is being
> exposed externally to DPDK it should also be configured using DPDK
> helpers like rte_eth_configure(). For these helpers to work the flower
> logic needs to implements a minimal set of dev_ops. The Rx and Tx
> logic for this vNIC will be added in a subsequent commit.
> 
> OVS expects incoming packets coming into the OVS datapath to be
> allocated from a mempool that contains objects of type "struct
> dp_packet". For the PF handling the slowpath into OVS it should
> use a mempool that is compatible with OVS. This commit adds the logic
> to create the OVS compatible mempool. It adds certain OVS specific
> structs to be able to instantiate the mempool.
> 

Can you please elaborate on what an OVS compatible mempool is?

<...>

> +static inline struct nfp_app_flower *
> +nfp_app_flower_priv_get(struct nfp_pf_dev *pf_dev)
> +{
> +	if (pf_dev == NULL)
> +		return NULL;
> +	else if (pf_dev->app_id != NFP_APP_FLOWER_NIC)
> +		return NULL;
> +	else
> +		return (struct nfp_app_flower *)pf_dev->app_priv;
> +}
> +

What do you think about unifying the functions that get the private data?
Instead of having a function for each FW, would it be possible to have a
single one?
  
Chaoyong He Sept. 6, 2022, 8:45 a.m. UTC | #2
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@xilinx.com>
> Sent: Monday, September 5, 2022 11:42 PM
> To: Chaoyong He <chaoyong.he@corigine.com>; dev@dpdk.org
> Cc: oss-drivers <oss-drivers@corigine.com>; Niklas Soderlund
> <niklas.soderlund@corigine.com>
> Subject: Re: [PATCH v7 05/12] net/nfp: add flower PF setup and mempool
> init logic
> 
> On 8/12/2022 11:22 AM, Chaoyong He wrote:
> > Adds the vNIC initialization logic for the flower PF vNIC.  The flower
> > firmware exposes this vNIC for the purposes of fallback traffic in the
> > switchdev use-case. The logic of setting up this vNIC is similar to
> > the logic seen in nfp_net_init() and nfp_net_start().
> >
> > Adds minimal dev_ops for this PF device. Because the device is being
> > exposed externally to DPDK it should also be configured using DPDK
> > helpers like rte_eth_configure(). For these helpers to work the flower
> > logic needs to implements a minimal set of dev_ops. The Rx and Tx
> > logic for this vNIC will be added in a subsequent commit.
> >
> > OVS expects incoming packets coming into the OVS datapath to be
> > allocated from a mempool that contains objects of type "struct
> > dp_packet". For the PF handling the slowpath into OVS it should use a
> > mempool that is compatible with OVS. This commit adds the logic to
> > create the OVS compatible mempool. It adds certain OVS specific
> > structs to be able to instantiate the mempool.
> >
> 
> Can you please elaborate what is OVS compatible mempool?
> 
> <...>
> 
> > +static inline struct nfp_app_flower * nfp_app_flower_priv_get(struct
> > +nfp_pf_dev *pf_dev) {
> > +	if (pf_dev == NULL)
> > +		return NULL;
> > +	else if (pf_dev->app_id != NFP_APP_FLOWER_NIC)
> > +		return NULL;
> > +	else
> > +		return (struct nfp_app_flower *)pf_dev->app_priv; }
> > +
> 
> What do you think to unify functions to get private data, instead of having a
> function for each FW, it can be possible to have single one?
> 

At first we used two macros for this, and Andrew advised changing them to functions.
```
#define NFP_APP_PRIV_TO_APP_NIC(app_priv)\
	((struct nfp_app_nic *)app_priv)

#define NFP_APP_PRIV_TO_APP_FLOWER(app_priv)\
	((struct nfp_app_flower *)app_priv)
```
So your advice is that we unify the functions into:
```
static inline struct nfp_app_nic *
nfp_app_priv_get(struct nfp_pf_dev *pf_dev)
{
	if (pf_dev == NULL)
		return NULL;
	else if (pf_dev->app_id == NFP_APP_CORE_NIC ||
			pf_dev->app_id == NFP_APP_FLOWER_NIC)
		return pf_dev->app_priv;
	else
		return NULL;
}
```
and convert the pointer type where this function is called?
  
Ferruh Yigit Sept. 6, 2022, 10:18 a.m. UTC | #3
On 9/6/2022 9:45 AM, Chaoyong He wrote:
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@xilinx.com>
>> Sent: Monday, September 5, 2022 11:42 PM
>> To: Chaoyong He <chaoyong.he@corigine.com>; dev@dpdk.org
>> Cc: oss-drivers <oss-drivers@corigine.com>; Niklas Soderlund
>> <niklas.soderlund@corigine.com>
>> Subject: Re: [PATCH v7 05/12] net/nfp: add flower PF setup and mempool
>> init logic
>>
>> On 8/12/2022 11:22 AM, Chaoyong He wrote:
>>> Adds the vNIC initialization logic for the flower PF vNIC.  The flower
>>> firmware exposes this vNIC for the purposes of fallback traffic in the
>>> switchdev use-case. The logic of setting up this vNIC is similar to
>>> the logic seen in nfp_net_init() and nfp_net_start().
>>>
>>> Adds minimal dev_ops for this PF device. Because the device is being
>>> exposed externally to DPDK it should also be configured using DPDK
>>> helpers like rte_eth_configure(). For these helpers to work the flower
>>> logic needs to implements a minimal set of dev_ops. The Rx and Tx
>>> logic for this vNIC will be added in a subsequent commit.
>>>
>>> OVS expects incoming packets coming into the OVS datapath to be
>>> allocated from a mempool that contains objects of type "struct
>>> dp_packet". For the PF handling the slowpath into OVS it should use a
>>> mempool that is compatible with OVS. This commit adds the logic to
>>> create the OVS compatible mempool. It adds certain OVS specific
>>> structs to be able to instantiate the mempool.
>>>
>>
>> Can you please elaborate what is OVS compatible mempool?
>>
>> <...>
>>
>>> +static inline struct nfp_app_flower * nfp_app_flower_priv_get(struct
>>> +nfp_pf_dev *pf_dev) {
>>> +   if (pf_dev == NULL)
>>> +           return NULL;
>>> +   else if (pf_dev->app_id != NFP_APP_FLOWER_NIC)
>>> +           return NULL;
>>> +   else
>>> +           return (struct nfp_app_flower *)pf_dev->app_priv; }
>>> +
>>
>> What do you think to unify functions to get private data, instead of having a
>> function for each FW, it can be possible to have single one?
>>
> 
> At first, we use two macros for this, and Andrew advice change them to functions.
> ```
> #define NFP_APP_PRIV_TO_APP_NIC(app_priv)\
>          ((struct nfp_app_nic *)app_priv)
> 
> #define NFP_APP_PRIV_TO_APP_FLOWER(app_priv)\
>          ((struct nfp_app_flower *)app_priv)
> ```
> So your advice is we unify the functions into:
> ```
> static inline struct nfp_app_nic *
> nfp_app_priv_get(struct nfp_pf_dev *pf_dev)
> {
>          if (pf_dev == NULL)
>                  return NULL;
>          else if (pf_dev->app_id == NFP_APP_CORE_NIC ||
>                             pf_dev->app_id == NFP_APP_FLOWER_NIC)
>                  return pf_dev->app_priv;
>                else
>                             return NULL;
> }
> ```
> and convert the pointer type at where this function been called?


Since the return pointer types are different, it should return "void *":

```
static inline void *
nfp_app_priv_get(struct nfp_pf_dev *pf_dev)
{
	if (pf_dev == NULL)
		return NULL;
	else if (pf_dev->app_id == NFP_APP_CORE_NIC ||
			pf_dev->app_id == NFP_APP_FLOWER_NIC)
		return pf_dev->app_priv;
	else
		return NULL;
}
```

And when assigning a pointer from "void *", no explicit cast is required.

```
struct nfp_app_flower *app_flower;

app_flower = nfp_app_priv_get(pf_dev);
```

I think it is better to have a single function instead of a different
helper function for each FW, but I would like to get @Andrew's comment too.


Btw, since 'nfp_app_nic*_priv_get' can return 'NULL' now, should callers
check for NULL? This may introduce too many checks, and if the checks are
not necessary, what is the benefit of the function over the macro?
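
For illustration only (not part of the patch), a hypothetical call site with
the function-based getter needs a check, while the earlier macro variant is a
plain cast:

```
/* Hypothetical caller: the getter can return NULL for a wrong app_id. */
struct nfp_app_flower *app_flower;

app_flower = nfp_app_flower_priv_get(pf_dev);
if (app_flower == NULL)
	return -EINVAL;

/* Earlier macro variant: unconditional cast, nothing to check. */
app_flower = NFP_APP_PRIV_TO_APP_FLOWER(pf_dev->app_priv);
```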
  

Patch

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 1e12f49..6f6af19 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -14,7 +14,36 @@ 
 #include "../nfp_logs.h"
 #include "../nfp_ctrl.h"
 #include "../nfp_cpp_bridge.h"
+#include "../nfp_rxtx.h"
+#include "../nfpcore/nfp_mip.h"
+#include "../nfpcore/nfp_rtsym.h"
+#include "../nfpcore/nfp_nsp.h"
 #include "nfp_flower.h"
+#include "nfp_flower_ovs_compat.h"
+
+#define MAX_PKT_BURST 32
+#define MEMPOOL_CACHE_SIZE 512
+#define DEFAULT_FLBUF_SIZE 9216
+
+#define PF_VNIC_NB_DESC 1024
+
+static const struct rte_eth_rxconf rx_conf = {
+	.rx_free_thresh = DEFAULT_RX_FREE_THRESH,
+	.rx_drop_en = 1,
+};
+
+static const struct rte_eth_txconf tx_conf = {
+	.tx_thresh = {
+		.pthresh  = DEFAULT_TX_PTHRESH,
+		.hthresh = DEFAULT_TX_HTHRESH,
+		.wthresh = DEFAULT_TX_WTHRESH,
+	},
+	.tx_free_thresh = DEFAULT_TX_FREE_THRESH,
+};
+
+static const struct eth_dev_ops nfp_flower_pf_vnic_ops = {
+	.dev_infos_get          = nfp_net_infos_get,
+};
 
 static struct rte_service_spec flower_services[NFP_FLOWER_SERVICE_MAX] = {
 };
@@ -49,6 +78,271 @@ 
 	return ret;
 }
 
+static void
+nfp_flower_pf_mp_init(__rte_unused struct rte_mempool *mp,
+		__rte_unused void *opaque_arg,
+		void *packet,
+		__rte_unused unsigned int i)
+{
+	struct dp_packet *pkt = packet;
+	pkt->source      = DPBUF_DPDK;
+	pkt->l2_pad_size = 0;
+	pkt->l2_5_ofs    = UINT16_MAX;
+	pkt->l3_ofs      = UINT16_MAX;
+	pkt->l4_ofs      = UINT16_MAX;
+	pkt->packet_type = 0; /* PT_ETH */
+}
+
+static struct rte_mempool *
+nfp_flower_pf_mp_create(void)
+{
+	uint32_t nb_mbufs;
+	uint32_t pkt_size;
+	unsigned int numa_node;
+	uint32_t aligned_mbuf_size;
+	uint32_t mbuf_priv_data_len;
+	struct rte_mempool *pktmbuf_pool;
+	uint32_t n_rxd = PF_VNIC_NB_DESC;
+	uint32_t n_txd = PF_VNIC_NB_DESC;
+
+	nb_mbufs = RTE_MAX(n_rxd + n_txd + MAX_PKT_BURST + MEMPOOL_CACHE_SIZE, 81920U);
+
+	/*
+	 * The size of the mbuf's private area (i.e. area that holds OvS'
+	 * dp_packet data)
+	 */
+	mbuf_priv_data_len = sizeof(struct dp_packet) - sizeof(struct rte_mbuf);
+	/* The size of the entire dp_packet. */
+	pkt_size = sizeof(struct dp_packet) + RTE_MBUF_DEFAULT_BUF_SIZE;
+	/* mbuf size, rounded up to cacheline size. */
+	aligned_mbuf_size = RTE_CACHE_LINE_ROUNDUP(pkt_size);
+	mbuf_priv_data_len += (aligned_mbuf_size - pkt_size);
+
+	numa_node = rte_socket_id();
+	pktmbuf_pool = rte_pktmbuf_pool_create("flower_pf_mbuf_pool", nb_mbufs,
+			MEMPOOL_CACHE_SIZE, mbuf_priv_data_len,
+			RTE_MBUF_DEFAULT_BUF_SIZE, numa_node);
+	if (pktmbuf_pool == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot init pf vnic mbuf pool");
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(pktmbuf_pool, nfp_flower_pf_mp_init, NULL);
+
+	return pktmbuf_pool;
+}
+
+static void
+nfp_flower_cleanup_pf_vnic(struct nfp_net_hw *hw)
+{
+	uint16_t i;
+	struct nfp_app_flower *app_flower;
+
+	app_flower = nfp_app_flower_priv_get(hw->pf_dev);
+
+	for (i = 0; i < hw->max_tx_queues; i++)
+		nfp_net_tx_queue_release(hw->eth_dev, i);
+
+	for (i = 0; i < hw->max_tx_queues; i++)
+		nfp_net_rx_queue_release(hw->eth_dev, i);
+
+	rte_free(hw->eth_dev->data->mac_addrs);
+	rte_mempool_free(app_flower->pf_pktmbuf_pool);
+	rte_eth_dev_release_port(hw->eth_dev);
+}
+
+static int
+nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
+{
+	uint32_t start_q;
+	uint64_t rx_bar_off;
+	uint64_t tx_bar_off;
+	const int stride = 4;
+	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
+
+	pf_dev = hw->pf_dev;
+	pci_dev = hw->pf_dev->pci_dev;
+
+	/* NFP can not handle DMA addresses requiring more than 40 bits */
+	if (rte_mem_check_dma_mask(40)) {
+		PMD_INIT_LOG(ERR, "Device %s can not be used: restricted dma mask to 40 bits!\n",
+				pci_dev->device.name);
+		return -ENODEV;
+	}
+
+	hw->device_id = pci_dev->id.device_id;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	PMD_INIT_LOG(DEBUG, "%s vNIC ctrl bar: %p", vnic_type, hw->ctrl_bar);
+
+	/* Read the number of available rx/tx queues from hardware */
+	hw->max_rx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_RXRINGS);
+	hw->max_tx_queues = nn_cfg_readl(hw, NFP_NET_CFG_MAX_TXRINGS);
+
+	/* Work out where in the BAR the queues start */
+	start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
+	tx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ;
+	start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
+	rx_bar_off = (uint64_t)start_q * NFP_QCP_QUEUE_ADDR_SZ;
+
+	hw->tx_bar = pf_dev->hw_queues + tx_bar_off;
+	hw->rx_bar = pf_dev->hw_queues + rx_bar_off;
+
+	/* Get some of the read-only fields from the config BAR */
+	hw->ver = nn_cfg_readl(hw, NFP_NET_CFG_VERSION);
+	hw->cap = nn_cfg_readl(hw, NFP_NET_CFG_CAP);
+	hw->max_mtu = nn_cfg_readl(hw, NFP_NET_CFG_MAX_MTU);
+	/* Set the current MTU to the maximum supported */
+	hw->mtu = hw->max_mtu;
+	hw->flbufsz = DEFAULT_FLBUF_SIZE;
+
+	/* read the Rx offset configured from firmware */
+	if (NFD_CFG_MAJOR_VERSION_of(hw->ver) < 2)
+		hw->rx_offset = NFP_NET_RX_OFFSET;
+	else
+		hw->rx_offset = nn_cfg_readl(hw, NFP_NET_CFG_RX_OFFSET_ADDR);
+
+	hw->ctrl = 0;
+	hw->stride_rx = stride;
+	hw->stride_tx = stride;
+
+	/* Reuse cfg queue setup function */
+	nfp_net_cfg_queue_setup(hw);
+
+	PMD_INIT_LOG(INFO, "%s vNIC max_rx_queues: %u, max_tx_queues: %u",
+			vnic_type, hw->max_rx_queues, hw->max_tx_queues);
+
+	/* Initializing spinlock for reconfigs */
+	rte_spinlock_init(&hw->reconfig_lock);
+
+	return 0;
+}
+
+static int
+nfp_flower_init_pf_vnic(struct nfp_net_hw *hw)
+{
+	int ret;
+	uint16_t i;
+	uint16_t n_txq;
+	uint16_t n_rxq;
+	uint16_t port_id;
+	unsigned int numa_node;
+	struct rte_mempool *mp;
+	struct nfp_pf_dev *pf_dev;
+	struct rte_eth_dev *eth_dev;
+	struct nfp_app_flower *app_flower;
+
+	static const struct rte_eth_conf port_conf = {
+		.rxmode = {
+			.mq_mode  = RTE_ETH_MQ_RX_RSS,
+			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
+		},
+		.txmode = {
+			.mq_mode = RTE_ETH_MQ_TX_NONE,
+		},
+	};
+
+	/* Set up some pointers here for ease of use */
+	pf_dev = hw->pf_dev;
+	app_flower = nfp_app_flower_priv_get(pf_dev);
+
+	/*
+	 * Perform the "common" part of setting up a flower vNIC.
+	 * Mostly reading configuration from hardware.
+	 */
+	ret = nfp_flower_init_vnic_common(hw, "pf_vnic");
+	if (ret != 0)
+		goto done;
+
+	hw->eth_dev = rte_eth_dev_allocate("nfp_pf_vnic");
+	if (hw->eth_dev == NULL) {
+		ret = -ENOMEM;
+		goto done;
+	}
+
+	/* Grab the pointer to the newly created rte_eth_dev here */
+	eth_dev = hw->eth_dev;
+
+	numa_node = rte_socket_id();
+
+	/* Fill in some of the eth_dev fields */
+	eth_dev->device = &pf_dev->pci_dev->device;
+	eth_dev->data->dev_private = hw;
+
+	/* Create a mbuf pool for the PF */
+	app_flower->pf_pktmbuf_pool = nfp_flower_pf_mp_create();
+	if (app_flower->pf_pktmbuf_pool == NULL) {
+		ret = -ENOMEM;
+		goto port_release;
+	}
+
+	mp = app_flower->pf_pktmbuf_pool;
+
+	/* Add Rx/Tx functions */
+	eth_dev->dev_ops = &nfp_flower_pf_vnic_ops;
+
+	/* PF vNIC gets a random MAC */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		ret = -ENOMEM;
+		goto mempool_cleanup;
+	}
+
+	rte_eth_random_addr(eth_dev->data->mac_addrs->addr_bytes);
+	rte_eth_dev_probing_finish(eth_dev);
+
+	/* Configure the PF device now */
+	n_rxq = hw->max_rx_queues;
+	n_txq = hw->max_tx_queues;
+	port_id = hw->eth_dev->data->port_id;
+
+	ret = rte_eth_dev_configure(port_id, n_rxq, n_txq, &port_conf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Could not configure PF device %d", ret);
+		goto mac_cleanup;
+	}
+
+	/* Set up the Rx queues */
+	for (i = 0; i < n_rxq; i++) {
+		ret = nfp_net_rx_queue_setup(eth_dev, i, PF_VNIC_NB_DESC, numa_node,
+				&rx_conf, mp);
+		if (ret) {
+			PMD_INIT_LOG(ERR, "Configure flower PF vNIC Rx queue %d failed", i);
+			goto rx_queue_cleanup;
+		}
+	}
+
+	/* Set up the Tx queues */
+	for (i = 0; i < n_txq; i++) {
+		ret = nfp_net_nfd3_tx_queue_setup(eth_dev, i, PF_VNIC_NB_DESC, numa_node,
+				&tx_conf);
+		if (ret) {
+			PMD_INIT_LOG(ERR, "Configure flower PF vNIC Tx queue %d failed", i);
+			goto tx_queue_cleanup;
+		}
+	}
+
+	return 0;
+
+tx_queue_cleanup:
+	for (i = 0; i < n_txq; i++)
+		nfp_net_tx_queue_release(eth_dev, i);
+rx_queue_cleanup:
+	for (i = 0; i < n_rxq; i++)
+		nfp_net_rx_queue_release(eth_dev, i);
+mac_cleanup:
+	rte_free(eth_dev->data->mac_addrs);
+mempool_cleanup:
+	rte_mempool_free(mp);
+port_release:
+	rte_eth_dev_release_port(hw->eth_dev);
+done:
+	return ret;
+}
+
 int
 nfp_init_app_flower(struct nfp_pf_dev *pf_dev)
 {
@@ -77,15 +371,50 @@ 
 		goto app_cleanup;
 	}
 
+	/* Grab the number of physical ports present on hardware */
+	app_flower->nfp_eth_table = nfp_eth_read_ports(pf_dev->cpp);
+	if (app_flower->nfp_eth_table == NULL) {
+		PMD_INIT_LOG(ERR, "error reading nfp ethernet table");
+		ret = -EIO;
+		goto vnic_cleanup;
+	}
+
+	/* Map the PF ctrl bar */
+	pf_dev->ctrl_bar = nfp_rtsym_map(pf_dev->sym_tbl, "_pf0_net_bar0",
+			32768, &pf_dev->ctrl_area);
+	if (pf_dev->ctrl_bar == NULL) {
+		PMD_INIT_LOG(ERR, "Could not map the PF vNIC ctrl bar");
+		ret = -ENODEV;
+		goto eth_tbl_cleanup;
+	}
+
+	/* Fill in the PF vNIC and populate app struct */
+	app_flower->pf_hw = pf_hw;
+	pf_hw->ctrl_bar = pf_dev->ctrl_bar;
+	pf_hw->pf_dev = pf_dev;
+	pf_hw->cpp = pf_dev->cpp;
+
+	ret = nfp_flower_init_pf_vnic(app_flower->pf_hw);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Could not initialize flower PF vNIC");
+		goto pf_cpp_area_cleanup;
+	}
+
 	/* Start up flower services */
 	ret = nfp_flower_enable_services(app_flower);
 	if (ret != 0) {
 		ret = -ESRCH;
-		goto vnic_cleanup;
+		goto pf_vnic_cleanup;
 	}
 
 	return 0;
 
+pf_vnic_cleanup:
+	nfp_flower_cleanup_pf_vnic(app_flower->pf_hw);
+pf_cpp_area_cleanup:
+	nfp_cpp_area_free(pf_dev->ctrl_area);
+eth_tbl_cleanup:
+	free(app_flower->nfp_eth_table);
 vnic_cleanup:
 	rte_free(pf_hw);
 app_cleanup:
@@ -95,8 +424,22 @@ 
 }
 
 int
-nfp_secondary_init_app_flower(__rte_unused struct nfp_cpp *cpp)
+nfp_secondary_init_app_flower(struct nfp_cpp *cpp)
 {
-	PMD_INIT_LOG(ERR, "Flower firmware not supported");
-	return -ENOTSUP;
+	struct rte_eth_dev *eth_dev;
+	const char *port_name = "pf_vnic_eth_dev";
+
+	PMD_INIT_LOG(DEBUG, "Secondary attaching to port %s", port_name);
+
+	eth_dev = rte_eth_dev_attach_secondary(port_name);
+	if (eth_dev == NULL) {
+		PMD_INIT_LOG(ERR, "Secondary process attach to port %s failed", port_name);
+		return -ENODEV;
+	}
+
+	eth_dev->process_private = cpp;
+	eth_dev->dev_ops = &nfp_flower_pf_vnic_ops;
+	rte_eth_dev_probing_finish(eth_dev);
+
+	return 0;
 }
diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 4a9b302..f6fd4eb 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -14,6 +14,15 @@  enum nfp_flower_service {
 struct nfp_app_flower {
 	/* List of rte_service ID's for the flower app */
 	uint32_t flower_services_ids[NFP_FLOWER_SERVICE_MAX];
+
+	/* Pointer to a mempool for the PF vNIC */
+	struct rte_mempool *pf_pktmbuf_pool;
+
+	/* Pointer to the PF vNIC */
+	struct nfp_net_hw *pf_hw;
+
+	/* the eth table as reported by firmware */
+	struct nfp_eth_table *nfp_eth_table;
 };
 
 int nfp_init_app_flower(struct nfp_pf_dev *pf_dev);
diff --git a/drivers/net/nfp/flower/nfp_flower_ovs_compat.h b/drivers/net/nfp/flower/nfp_flower_ovs_compat.h
new file mode 100644
index 0000000..085de8a
--- /dev/null
+++ b/drivers/net/nfp/flower/nfp_flower_ovs_compat.h
@@ -0,0 +1,37 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 Corigine, Inc.
+ * All rights reserved.
+ */
+
+#ifndef _NFP_FLOWER_OVS_COMPAT_H_
+#define _NFP_FLOWER_OVS_COMPAT_H_
+
+enum dp_packet_source {
+	DPBUF_MALLOC,              /* Obtained via malloc(). */
+	DPBUF_STACK,               /* Un-movable stack space or static buffer. */
+	DPBUF_STUB,                /* Starts on stack, may expand into heap. */
+	DPBUF_DPDK,                /* buffer data is from DPDK allocated memory. */
+	DPBUF_AFXDP,               /* Buffer data from XDP frame. */
+};
+
+#define DP_PACKET_CONTEXT_SIZE 64
+
+/*
+ * Buffer for holding packet data.  A dp_packet is automatically reallocated
+ * as necessary if it grows too large for the available memory.
+ * By default the packet type is set to Ethernet (0).
+ */
+struct dp_packet {
+	struct rte_mbuf mbuf;          /* DPDK mbuf */
+	enum dp_packet_source source;  /* Source of memory allocated as 'base'. */
+
+	uint16_t l2_pad_size;          /* Detected l2 padding size. Padding is non-pullable. */
+	uint16_t l2_5_ofs;             /* MPLS label stack offset, or UINT16_MAX */
+	uint16_t l3_ofs;               /* Network-level header offset, or UINT16_MAX. */
+	uint16_t l4_ofs;               /* Transport-level header offset, or UINT16_MAX. */
+	uint32_t cutlen;               /* length in bytes to cut from the end. */
+	uint32_t packet_type;          /* Packet type as defined in OpenFlow */
+	uint64_t data[DP_PACKET_CONTEXT_SIZE / 8];
+};
+
+#endif /* _NFP_FLOWER_OVS_COMPAT_H_ */
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 8d721bd..86d2292 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -402,6 +402,17 @@  static inline void nn_writeq(uint64_t val, volatile void *addr)
 		return (struct nfp_app_nic *)pf_dev->app_priv;
 }
 
+static inline struct nfp_app_flower *
+nfp_app_flower_priv_get(struct nfp_pf_dev *pf_dev)
+{
+	if (pf_dev == NULL)
+		return NULL;
+	else if (pf_dev->app_id != NFP_APP_FLOWER_NIC)
+		return NULL;
+	else
+		return (struct nfp_app_flower *)pf_dev->app_priv;
+}
+
 /* Prototypes for common NFP functions */
 int nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update);
 int nfp_net_configure(struct rte_eth_dev *dev);