
[dpdk-dev,24/38] net/dpaa: add support for Tx and Rx queue setup

Message ID 1497591668-3320-25-git-send-email-shreyansh.jain@nxp.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/Intel-compilation success Compilation OK

Commit Message

Shreyansh Jain June 16, 2017, 5:40 a.m. UTC
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
---
 doc/guides/nics/features/dpaa.ini |   1 +
 drivers/net/dpaa/Makefile         |   4 +
 drivers/net/dpaa/dpaa_ethdev.c    | 279 ++++++++++++++++++++++++++++++++-
 drivers/net/dpaa/dpaa_ethdev.h    |   6 +
 drivers/net/dpaa/dpaa_rxtx.c      | 313 ++++++++++++++++++++++++++++++++++++++
 drivers/net/dpaa/dpaa_rxtx.h      |  61 ++++++++
 mk/rte.app.mk                     |   1 +
 7 files changed, 660 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.c
 create mode 100644 drivers/net/dpaa/dpaa_rxtx.h

Comments

Ferruh Yigit June 28, 2017, 3:45 p.m. UTC | #1
On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
> ---
>  doc/guides/nics/features/dpaa.ini |   1 +
>  drivers/net/dpaa/Makefile         |   4 +
>  drivers/net/dpaa/dpaa_ethdev.c    | 279 ++++++++++++++++++++++++++++++++-
>  drivers/net/dpaa/dpaa_ethdev.h    |   6 +
>  drivers/net/dpaa/dpaa_rxtx.c      | 313 ++++++++++++++++++++++++++++++++++++++
>  drivers/net/dpaa/dpaa_rxtx.h      |  61 ++++++++

This patch adds initial Rx/Tx support, as well as the Rx/Tx queue setup
mentioned in the patch subject.

I would be in favor of splitting the patch, but even if it is not split,
I suggest updating the patch subject and commit log to cover the patch
content.

<...>
> --- a/doc/guides/nics/features/dpaa.ini
> +++ b/doc/guides/nics/features/dpaa.ini
> @@ -4,5 +4,6 @@
>  ; Refer to default.ini for the full list of available PMD features.
>  ;
>  [Features]
> +Queue start/stop     = Y

This requires the following dev_ops to be implemented:
rx_queue_start, rx_queue_stop, tx_queue_start, tx_queue_stop

>  ARMv8                = Y
>  Usage doc            = Y

<...>

> +
> +	/* Initialize Rx FQ's */
> +	if (getenv("DPAA_NUM_RX_QUEUES"))

I think this was discussed before: should a PMD get config options from
an environment variable? Although this works, I am in favor of a more
explicit method, like dev_args.

<...>
> +
> +	dpaa_intf->rx_queues = rte_zmalloc(NULL,
> +		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);

A NULL check, perhaps?

And if multi-process support is desired, this should be done only for
the primary process.

<...>
> +	/* Allocate memory for storing MAC addresses */
> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
> +		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
> +	if (eth_dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
> +						"store MAC addresses",
> +				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);

Anything to clean up before exit?

> +		return -ENOMEM;
> +	}

<...>
> +uint16_t dpaa_eth_queue_rx(void *q,
> +			   struct rte_mbuf **bufs,
> +			   uint16_t nb_bufs)
> +{
> +	struct qman_fq *fq = q;
> +	struct qm_dqrr_entry *dq;
> +	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
> +	int ret;
> +
> +	ret = rte_dpaa_portal_init((void *)0);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, "Failure in affining portal");
> +		return 0;
> +	}

This is the rx_pkt_burst function, right? Is it OK to call
rte_dpaa_portal_init() in the Rx data path?

<...>
> +	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
> +	if (!buf)
> +		goto out;

The goto is not required here.

> +
> +out:
> +	return (void *)buf;
> +}
> +
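For illustration, the simplified flow without the goto could look like this minimal, self-contained sketch (plain C, with a hypothetical stand-in for `bman_acquire()`; the names are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for bman_acquire(): returns a raw buffer
 * address, or 0 on failure. */
static uint64_t fake_acquire(int fail)
{
	static uint8_t backing[256];

	return fail ? 0 : (uint64_t)(uintptr_t)&backing[64];
}

/* Simplified dpaa_get_pktbuf() flow without the goto: translate the
 * address and return it directly. */
static void *get_pktbuf_sketch(int fail, size_t meta_data_size)
{
	uint64_t addr = fake_acquire(fail);

	if (!addr)
		return NULL;	/* acquire failed: nothing to translate */

	/* step back over the per-buffer metadata, as the driver does */
	return (void *)(uintptr_t)(addr - meta_data_size);
}
```

Returning directly on both paths keeps the function flat and makes the failure case obvious.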

<...>
> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
> +			      struct rte_mbuf **bufs __rte_unused,
> +		uint16_t nb_bufs __rte_unused)
> +{
> +	PMD_TX_LOG(DEBUG, "Drop all packets");

Should mbufs be freed here?

> +
> +	/* Drop all incoming packets. No need to free packets here
> +	 * because the rte_eth f/w frees up the packets through tx_buffer
> +	 * callback in case this functions returns count less than nb_bufs
> +	 */
> +	return 0;
> +}

<...>
Shreyansh Jain June 29, 2017, 2:55 p.m. UTC | #2
On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>> ---
>>   doc/guides/nics/features/dpaa.ini |   1 +
>>   drivers/net/dpaa/Makefile         |   4 +
>>   drivers/net/dpaa/dpaa_ethdev.c    | 279 ++++++++++++++++++++++++++++++++-
>>   drivers/net/dpaa/dpaa_ethdev.h    |   6 +
>>   drivers/net/dpaa/dpaa_rxtx.c      | 313 ++++++++++++++++++++++++++++++++++++++
>>   drivers/net/dpaa/dpaa_rxtx.h      |  61 ++++++++
> 
> This patch adds initial Rx/Tx support, as well as the Rx/Tx queue setup
> mentioned in the patch subject.
> 
> I would be in favor of splitting the patch, but even if it is not split,
> I suggest updating the patch subject and commit log to cover the patch
> content.

Ok. I will fix this (splitting if possible, else updating the commit message).

> 
> <...>
>> --- a/doc/guides/nics/features/dpaa.ini
>> +++ b/doc/guides/nics/features/dpaa.ini
>> @@ -4,5 +4,6 @@
>>   ; Refer to default.ini for the full list of available PMD features.
>>   ;
>>   [Features]
>> +Queue start/stop     = Y
> 
> This requires the following dev_ops to be implemented:
> rx_queue_start, rx_queue_stop, tx_queue_start, tx_queue_stop

Ok. My understanding here was wrong - I incorrectly matched this to
queue setup/teardown. I will remove this feature listing (and a couple
more, as per your review comments on other patches).

> 
>>   ARMv8                = Y
>>   Usage doc            = Y
> 
> <...>
> 
>> +
>> +	/* Initialize Rx FQ's */
>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
> 
> I think this was discussed before: should a PMD get config options from
> an environment variable? Although this works, I am in favor of a more
> explicit method, like dev_args.

Well, I do remember that discussion and still continued with it because
1) I am not done with the dev_args changes, and 2) I think this is less
intrusive, as it is specific to DPAA and avoids expanding it towards
dev_args (and impacting the application's argument list).
Do you think this is a no-go? If so, I will fix this.

> 
> <...>
>> +
>> +	dpaa_intf->rx_queues = rte_zmalloc(NULL,
>> +		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
> 
> A NULL check, perhaps?
> 
> And if multi-process support is desired, this should be done only for
> the primary process.

I will fix both of the above.

> 
> <...>
>> +	/* Allocate memory for storing MAC addresses */
>> +	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
>> +		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
>> +	if (eth_dev->data->mac_addrs == NULL) {
>> +		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
>> +						"store MAC addresses",
>> +				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
> 
> Anything to clean up before exit?

A bad miss on my side - the *_queues allocations should be released; I
will fix this. (I will run a static analyzer and fix any other similar
issues before the next version.)
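For reference, a minimal sketch of the cleanup being discussed: if a later allocation fails, release the queue arrays allocated earlier before returning. `calloc()`/`free()` stand in for `rte_zmalloc()`/`rte_free()`, and the struct is an illustrative stand-in for `dpaa_if`:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for the parts of dpaa_if involved here. */
struct dpaa_if_sketch {
	void *rx_queues;
	void *tx_queues;
};

static int alloc_mac_table(struct dpaa_if_sketch *intf, void **mac_addrs,
			   size_t bytes)
{
	*mac_addrs = calloc(1, bytes);	/* may fail like rte_zmalloc() */
	if (*mac_addrs == NULL) {
		/* undo the earlier allocations instead of leaking them */
		free(intf->rx_queues);
		free(intf->tx_queues);
		intf->rx_queues = NULL;
		intf->tx_queues = NULL;
		return -1;	/* -ENOMEM in the real driver */
	}
	return 0;
}
```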

> 
>> +		return -ENOMEM;
>> +	}
> 
> <...>
>> +uint16_t dpaa_eth_queue_rx(void *q,
>> +			   struct rte_mbuf **bufs,
>> +			   uint16_t nb_bufs)
>> +{
>> +	struct qman_fq *fq = q;
>> +	struct qm_dqrr_entry *dq;
>> +	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
>> +	int ret;
>> +
>> +	ret = rte_dpaa_portal_init((void *)0);
>> +	if (ret) {
>> +		PMD_DRV_LOG(ERR, "Failure in affining portal");
>> +		return 0;
>> +	}
> 
> This is the rx_pkt_burst function, right? Is it OK to call
> rte_dpaa_portal_init() in the Rx data path?

Yes - a portal needs to be initialized (if it is not already) for any
I/O operation to succeed. rte_dpaa_portal_init() short-circuits
repeated calls: the actual initialization runs only the first time.

rte_dpaa_portal_init
  `-> _dpaa_portal_init() if not already initialized
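A self-contained sketch of that idempotent pattern (the flag and counter stand in for the real per-thread DPAA portal state; in the driver the flag would be thread-local):

```c
#include <assert.h>

/* Stand-ins for the real portal state: the "ready" flag would be
 * thread-local in the driver, and the counter only demonstrates how
 * often the slow path actually runs. */
static int portal_ready;
static int real_inits;

static int portal_init_sketch(void)
{
	if (portal_ready)
		return 0;	/* already affined: cheap fast path */
	real_inits++;		/* one-time expensive setup goes here */
	portal_ready = 1;
	return 0;
}
```

After the first call, invoking this from the Rx burst path is just a flag check, which is why calling it per burst is tolerable.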

> 
> <...>
>> +	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
>> +	if (!buf)
>> +		goto out;
> 
> The goto is not required here.

:) Yes, I will remove it - a careless miss.

> 
>> +
>> +out:
>> +	return (void *)buf;
>> +}
>> +
> 
> <...>
>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>> +			      struct rte_mbuf **bufs __rte_unused,
>> +		uint16_t nb_bufs __rte_unused)
>> +{
>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
> 
> Should mbufs be freed here?
> 
>> +
>> +	/* Drop all incoming packets. No need to free packets here
>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>> +	 * callback in case this functions returns count less than nb_bufs
>> +	 */

Ah, actually I was banking on the logic that, if a driver doesn't
release the memory, the API caller (on getting fewer than nb_bufs back)
would do so. That is the case for a stopped interface.

But I agree, this is a dirty fix. I will change it.
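A sketch of the alternative being discussed: a drop-all Tx handler that frees the packets itself and reports them all as consumed, so the caller neither retries nor frees them again. `struct pkt` and `free()` are stand-ins for `rte_mbuf` and `rte_pktmbuf_free()`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-in for struct rte_mbuf. */
struct pkt {
	int len;
};

static uint16_t tx_drop_all_sketch(struct pkt **bufs, uint16_t nb_bufs)
{
	uint16_t i;

	for (i = 0; i < nb_bufs; i++) {
		free(bufs[i]);	/* drop: release the buffer ourselves */
		bufs[i] = NULL;
	}
	return nb_bufs;	/* all consumed: caller frees nothing */
}
```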

>> +	return 0;
>> +}
> 
> <...>
> 
>
Ferruh Yigit June 29, 2017, 3:41 p.m. UTC | #3
On 6/29/2017 3:55 PM, Shreyansh Jain wrote:
> On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
>> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>> ---

<...>

>>
>>> +
>>> +	/* Initialize Rx FQ's */
>>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
>>
>> I think this was discussed before: should a PMD get config options from
>> an environment variable? Although this works, I am in favor of a more
>> explicit method, like dev_args.
> 
> Well, I do remember that discussion and still continued with it because
> 1) I am not done with the dev_args changes, and 2) I think this is less
> intrusive, as it is specific to DPAA and avoids expanding it towards
> dev_args (and impacting the application's argument list).
> Do you think this is a no-go? If so, I will fix this.

Providing an argument looks clearer to me: it is more visible, and if,
for example, multiple processes are run, environment variables can be
confusing.

But this is not a no-go; I would like to hear other comments. I also
noticed that the mlx and ark drivers use this approach.

However this is implemented, it should be clearly documented; right now
it is a hidden config.
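To illustrate the explicit alternative: take the Rx queue count from a devargs-style "key=value" string instead of getenv(). A real PMD would go through rte_kvargs; the key name and helper below are hypothetical, and the limit stands in for DPAA_PCD_FQID_MULTIPLIER:

```c
#include <stdlib.h>
#include <string.h>

#define DEFAULT_NUM_RX_QUEUES 1
#define MAX_NUM_RX_QUEUES     32	/* stands in for the per-device limit */

/* Hypothetical parser: find "num_rx_queues=<n>" in a devargs-style
 * string, falling back to the default when the option is absent and
 * rejecting out-of-range values. */
static int parse_num_rx_queues(const char *devargs)
{
	const char *key = "num_rx_queues=";
	const char *p = devargs ? strstr(devargs, key) : NULL;
	long n;

	if (p == NULL)
		return DEFAULT_NUM_RX_QUEUES;	/* option absent */

	n = strtol(p + strlen(key), NULL, 10);
	if (n <= 0 || n > MAX_NUM_RX_QUEUES)
		return -1;	/* reject bad values explicitly */
	return (int)n;
}
```

Unlike an environment variable, the option then travels with the device (e.g. on the application's command line), so two processes can configure the same driver differently.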

<...>
>>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>>> +			      struct rte_mbuf **bufs __rte_unused,
>>> +		uint16_t nb_bufs __rte_unused)
>>> +{
>>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
>>
>> Should mbufs be freed here?
>>
>>> +
>>> +	/* Drop all incoming packets. No need to free packets here
>>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>>> +	 * callback in case this functions returns count less than nb_bufs
>>> +	 */
> 
> Ah, actually I was banking on the logic that, if a driver doesn't
> release the memory, the API caller (on getting fewer than nb_bufs back)
> would do so. That is the case for a stopped interface.
> 
> But I agree, this is a dirty fix. I will change it.

Indeed, I missed your logic here; this looks like a valid option too.
It's your call.

> 
>>> +	return 0;
>>> +}
>>
>> <...>
>>
>>
>
Shreyansh Jain June 30, 2017, 11:48 a.m. UTC | #4
On Thursday 29 June 2017 09:11 PM, Ferruh Yigit wrote:
> On 6/29/2017 3:55 PM, Shreyansh Jain wrote:
>> On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
>>> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>> ---
> 
> <...>
> 
>>>
>>>> +
>>>> +	/* Initialize Rx FQ's */
>>>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
>>>
>>> I think this was discussed before: should a PMD get config options from
>>> an environment variable? Although this works, I am in favor of a more
>>> explicit method, like dev_args.
>>
>> Well, I do remember that discussion and still continued with it because
>> 1) I am not done with the dev_args changes, and 2) I think this is less
>> intrusive, as it is specific to DPAA and avoids expanding it towards
>> dev_args (and impacting the application's argument list).
>> Do you think this is a no-go? If so, I will fix this.
> 
> Providing an argument looks clearer to me: it is more visible, and if,
> for example, multiple processes are run, environment variables can be
> confusing.
> 
> But this is not a no-go; I would like to hear other comments. I also
> noticed that the mlx and ark drivers use this approach.
> 
> However this is implemented, it should be clearly documented; right now
> it is a hidden config.

Agreed - I will fix the documentation and add information about this.

> 
> <...>
>>>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>>>> +			      struct rte_mbuf **bufs __rte_unused,
>>>> +		uint16_t nb_bufs __rte_unused)
>>>> +{
>>>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
>>>
>>> Should mbufs be freed here?
>>>
>>>> +
>>>> +	/* Drop all incoming packets. No need to free packets here
>>>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>>>> +	 * callback in case this functions returns count less than nb_bufs
>>>> +	 */
>>
>> Ah, actually I was banking on the logic that, if a driver doesn't
>> release the memory, the API caller (on getting fewer than nb_bufs back)
>> would do so. That is the case for a stopped interface.
>> 
>> But I agree, this is a dirty fix. I will change it.
> 
> Indeed, I missed your logic here; this looks like a valid option too.
> It's your call.
> 
>>
>>>> +	return 0;
>>>> +}
>>>
>>> <...>
>>>
>>>
>>
> 
>
Shreyansh Jain July 4, 2017, 2:50 p.m. UTC | #5
On Thursday 29 June 2017 09:11 PM, Ferruh Yigit wrote:
> On 6/29/2017 3:55 PM, Shreyansh Jain wrote:
>> On Wednesday 28 June 2017 09:15 PM, Ferruh Yigit wrote:
>>> On 6/16/2017 6:40 AM, Shreyansh Jain wrote:
>>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>>> Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
>>>> ---
> 
> <...>
> 
>>>
>>>> +
>>>> +	/* Initialize Rx FQ's */
>>>> +	if (getenv("DPAA_NUM_RX_QUEUES"))
>>>
>>> I think this was discussed before: should a PMD get config options from
>>> an environment variable? Although this works, I am in favor of a more
>>> explicit method, like dev_args.
>>
>> Well, I do remember that discussion and still continued with it because
>> 1) I am not done with the dev_args changes, and 2) I think this is less
>> intrusive, as it is specific to DPAA and avoids expanding it towards
>> dev_args (and impacting the application's argument list).
>> Do you think this is a no-go? If so, I will fix this.
> 
> Providing an argument looks clearer to me: it is more visible, and if,
> for example, multiple processes are run, environment variables can be
> confusing.
> 
> But this is not a no-go; I would like to hear other comments. I also
> noticed that the mlx and ark drivers use this approach.
> 
> However this is implemented, it should be clearly documented; right now
> it is a hidden config.

I have updated the documentation in v2 to show this environment option.
Thanks for highlighting it.

> 
> <...>
>>>> +uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
>>>> +			      struct rte_mbuf **bufs __rte_unused,
>>>> +		uint16_t nb_bufs __rte_unused)
>>>> +{
>>>> +	PMD_TX_LOG(DEBUG, "Drop all packets");
>>>
>>> Should mbufs be freed here?
>>>
>>>> +
>>>> +	/* Drop all incoming packets. No need to free packets here
>>>> +	 * because the rte_eth f/w frees up the packets through tx_buffer
>>>> +	 * callback in case this functions returns count less than nb_bufs
>>>> +	 */
>>
>> Ah, actually I was banking on the logic that, if a driver doesn't
>> release the memory, the API caller (on getting fewer than nb_bufs back)
>> would do so. That is the case for a stopped interface.
>> 
>> But I agree, this is a dirty fix. I will change it.
> 
> Indeed, I missed your logic here; this looks like a valid option too.
> It's your call.
> 
>>
>>>> +	return 0;
>>>> +}
>>>
>>> <...>
>>>
>>>
>>
> 
>

Patch

diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 9e8befc..29ba47e 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -4,5 +4,6 @@ 
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Queue start/stop     = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index 8fcde26..06b63fc 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -44,11 +44,13 @@  else
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 endif
+CFLAGS +=-Wno-pointer-arith
 
 CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
 
@@ -58,7 +60,9 @@  LIBABIVER := 1
 
 # Interfaces with DPDK
 SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_DPAA_PMD) += dpaa_rxtx.c
 
 LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 2401058..5a8d8af 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -62,8 +62,15 @@ 
 
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <fsl_fman.h>
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -79,20 +86,104 @@  dpaa_eth_dev_configure(struct rte_eth_dev *dev __rte_unused)
 
 static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
 {
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
 	PMD_INIT_FUNC_TRACE();
 
 	/* Change tx callback to the real one */
-	dev->tx_pkt_burst = NULL;
+	dev->tx_pkt_burst = dpaa_eth_queue_tx;
+	fman_if_enable_rx(dpaa_intf->fif);
 
 	return 0;
 }
 
 static void dpaa_eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->tx_pkt_burst = NULL;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	fman_if_disable_rx(dpaa_intf->fif);
+	dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+}
+
+static void dpaa_eth_dev_close(struct rte_eth_dev *dev)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	dpaa_eth_dev_stop(dev);
+}
+
+static
+int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+			    unsigned int socket_id __rte_unused,
+			    const struct rte_eth_rxconf *rx_conf __rte_unused,
+			    struct rte_mempool *mp)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	PMD_DRV_LOG(INFO, "Rx queue setup for queue index: %d", queue_idx);
+
+	if (!dpaa_intf->bp_info || dpaa_intf->bp_info->mp != mp) {
+		struct fman_if_ic_params icp;
+		uint32_t fd_offset;
+		uint32_t bp_size;
+
+		if (!mp->pool_data) {
+			PMD_DRV_LOG(ERR, "not an offloaded buffer pool");
+			return -1;
+		}
+		dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+
+		memset(&icp, 0, sizeof(icp));
+		/* set ICEOF to the default value, which is 0 */
+		icp.iciof = DEFAULT_ICIOF;
+		icp.iceof = DEFAULT_RX_ICEOF;
+		icp.icsz = DEFAULT_ICSZ;
+		fman_if_set_ic_params(dpaa_intf->fif, &icp);
+
+		fd_offset = RTE_PKTMBUF_HEADROOM + DPAA_HW_BUF_RESERVE;
+		fman_if_set_fdoff(dpaa_intf->fif, fd_offset);
+
+		/* Buffer pool size should be equal to the dataroom size */
+		bp_size = rte_pktmbuf_data_room_size(mp);
+		fman_if_set_bp(dpaa_intf->fif, mp->size,
+			       dpaa_intf->bp_info->bpid, bp_size);
+		dpaa_intf->valid = 1;
+		PMD_DRV_LOG(INFO, "if =%s - fd_offset = %d offset = %d",
+			    dpaa_intf->name, fd_offset,
+			fman_if_get_fdoff(dpaa_intf->fif));
+	}
+	dev->data->rx_queues[queue_idx] = &dpaa_intf->rx_queues[queue_idx];
+
+	return 0;
+}
+
+static
+void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
 }
 
-static void dpaa_eth_dev_close(struct rte_eth_dev *dev __rte_unused)
+static
+int dpaa_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			    uint16_t nb_desc __rte_unused,
+		unsigned int socket_id __rte_unused,
+		const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	PMD_DRV_LOG(INFO, "Tx queue setup for queue index: %d", queue_idx);
+	dev->data->tx_queues[queue_idx] = &dpaa_intf->tx_queues[queue_idx];
+	return 0;
+}
+
+static void dpaa_eth_tx_queue_release(void *txq __rte_unused)
 {
 	PMD_INIT_FUNC_TRACE();
 }
@@ -102,28 +193,206 @@  static struct eth_dev_ops dpaa_devops = {
 	.dev_start		  = dpaa_eth_dev_start,
 	.dev_stop		  = dpaa_eth_dev_stop,
 	.dev_close		  = dpaa_eth_dev_close,
+
+	.rx_queue_setup		  = dpaa_eth_rx_queue_setup,
+	.tx_queue_setup		  = dpaa_eth_tx_queue_setup,
+	.rx_queue_release	  = dpaa_eth_rx_queue_release,
+	.tx_queue_release	  = dpaa_eth_tx_queue_release,
 };
 
+/* Initialise an Rx FQ */
+static int dpaa_rx_queue_init(struct qman_fq *fq,
+			      uint32_t fqid)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_reserve_fqid(fqid);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "reserve rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return -EINVAL;
+	}
+	PMD_DRV_LOG(DEBUG, "creating rx fq %p, fqid %d", fq, fqid);
+	ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "create rx fqid %d failed with ret: %d",
+			fqid, ret);
+		return ret;
+	}
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA;
+
+	opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			   QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_a.stashing.exclusive = 0;
+	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
+	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	/*Enable tail drop */
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_TDTHRESH;
+	opts.fqd.fq_ctrl = opts.fqd.fq_ctrl | QM_FQCTRL_TDE;
+	qm_fqd_taildrop_set(&opts.fqd.td, CONG_THRESHOLD_RX_Q, 1);
+
+	ret = qman_init_fq(fq, 0, &opts);
+	if (ret)
+		PMD_DRV_LOG(ERR, "init rx fqid %d failed with ret: %d",
+			fqid, ret);
+	return ret;
+}
+
+/* Initialise a Tx FQ */
+static int dpaa_tx_queue_init(struct qman_fq *fq,
+			      struct fman_if *fman_intf)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID |
+			     QMAN_FQ_FLAG_TO_DCPORTAL, fq);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "create tx fq failed with ret: %d", ret);
+		return ret;
+	}
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTB | QM_INITFQ_WE_CONTEXTA;
+	opts.fqd.dest.channel = fman_intf->tx_channel_id;
+	opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
+	opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
+	opts.fqd.context_b = 0;
+	/* no tx-confirmation */
+	opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+	opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+	PMD_DRV_LOG(DEBUG, "init tx fq %p, fqid %d", fq, fq->fqid);
+	ret = qman_init_fq(fq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (ret)
+		PMD_DRV_LOG(ERR, "init tx fqid %d failed %d", fq->fqid, ret);
+	return ret;
+}
+
 /* Initialise a network interface */
-static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev __rte_unused)
+static int dpaa_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
+	int num_cores, num_rx_fqs, fqid;
+	int loop, ret = 0;
 	int dev_id;
 	struct rte_dpaa_device *dpaa_device;
 	struct dpaa_if *dpaa_intf;
+	struct fm_eth_port_cfg *cfg;
+	struct fman_if *fman_intf;
+	struct fman_if_bpool *bp, *tmp_bp;
 
 	PMD_INIT_FUNC_TRACE();
 
 	dpaa_device = DEV_TO_DPAA_DEVICE(eth_dev->device);
 	dev_id = dpaa_device->id.dev_id;
 	dpaa_intf = eth_dev->data->dev_private;
+	cfg = &dpaa_netcfg->port_cfg[dev_id];
+	fman_intf = cfg->fman_if;
 
 	dpaa_intf->name = dpaa_device->name;
 
+	/* save fman_if & cfg in the interface structure */
+	dpaa_intf->fif = fman_intf;
 	dpaa_intf->ifid = dev_id;
+	dpaa_intf->cfg = cfg;
+
+	/* Initialize Rx FQ's */
+	if (getenv("DPAA_NUM_RX_QUEUES"))
+		num_rx_fqs = atoi(getenv("DPAA_NUM_RX_QUEUES"));
+	else
+		num_rx_fqs = DPAA_DEFAULT_NUM_PCD_QUEUES;
+
+	/* Each device cannot have more than DPAA_PCD_FQID_MULTIPLIER RX queues */
+	if (num_rx_fqs <= 0 || num_rx_fqs > DPAA_PCD_FQID_MULTIPLIER) {
+		PMD_INIT_LOG(ERR, "Invalid number of RX queues\n");
+		return -EINVAL;
+	}
+
+	dpaa_intf->rx_queues = rte_zmalloc(NULL,
+		sizeof(struct qman_fq) * num_rx_fqs, MAX_CACHELINE);
+	for (loop = 0; loop < num_rx_fqs; loop++) {
+		fqid = DPAA_PCD_FQID_START + dpaa_intf->ifid *
+			DPAA_PCD_FQID_MULTIPLIER + loop;
+		ret = dpaa_rx_queue_init(&dpaa_intf->rx_queues[loop], fqid);
+		if (ret)
+			return ret;
+		dpaa_intf->rx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_rx_queues = num_rx_fqs;
+
+	/* Initialise Tx FQs. Have as many Tx FQ's as number of cores */
+	num_cores = rte_lcore_count();
+	dpaa_intf->tx_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+		num_cores, MAX_CACHELINE);
+	if (!dpaa_intf->tx_queues)
+		return -ENOMEM;
+
+	for (loop = 0; loop < num_cores; loop++) {
+		ret = dpaa_tx_queue_init(&dpaa_intf->tx_queues[loop],
+					 fman_intf);
+		if (ret)
+			return ret;
+		dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+	}
+	dpaa_intf->nb_tx_queues = num_cores;
 
+	PMD_DRV_LOG(DEBUG, "all fqs created");
+
+	/* reset bpool list, initialize bpool dynamically */
+	list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
+		list_del(&bp->node);
+		rte_free(bp);
+	}
+
+	/* Populate ethdev structure */
 	eth_dev->dev_ops = &dpaa_devops;
+	eth_dev->data->nb_rx_queues = dpaa_intf->nb_rx_queues;
+	eth_dev->data->nb_tx_queues = dpaa_intf->nb_tx_queues;
+	eth_dev->rx_pkt_burst = dpaa_eth_queue_rx;
+	eth_dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
+		ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to "
+						"store MAC addresses",
+				ETHER_ADDR_LEN * DPAA_MAX_MAC_FILTER);
+		return -ENOMEM;
+	}
 
-	return -1;
+	/* copy the primary mac address */
+	memcpy(eth_dev->data->mac_addrs[0].addr_bytes,
+		fman_intf->mac_addr.addr_bytes,
+		ETHER_ADDR_LEN);
+
+	PMD_DRV_LOG(DEBUG, "interface %s macaddr:", dpaa_device->name);
+	for (loop = 0; loop < ETHER_ADDR_LEN; loop++) {
+		if (loop != (ETHER_ADDR_LEN - 1))
+			printf("%02x:", fman_intf->mac_addr.addr_bytes[loop]);
+		else
+			printf("%02x\n", fman_intf->mac_addr.addr_bytes[loop]);
+	}
+
+	/* Disable RX mode */
+	fman_if_discard_rx_errors(fman_intf);
+	fman_if_disable_rx(fman_intf);
+	/* Disable promiscuous mode */
+	fman_if_promiscuous_disable(fman_intf);
+	/* Disable multicast */
+	fman_if_reset_mcast_filter_table(fman_intf);
+	/* Reset interface statistics */
+	fman_if_stats_reset(fman_intf);
+
+	return 0;
 }
 
 static int
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 8aeaebf..da7f3be 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -38,7 +38,13 @@ 
 #include <rte_ethdev.h>
 
 #include <rte_dpaa_logs.h>
+#include <dpaa_mempool.h>
 
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
 
 #define DPAA_MBUF_HW_ANNOTATION		64
 #define DPAA_FD_PTA_SIZE		64
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
new file mode 100644
index 0000000..d2ef513
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -0,0 +1,313 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <limits.h>
+#include <sched.h>
+#include <pthread.h>
+
+#include <rte_config.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#include "dpaa_ethdev.h"
+#include "dpaa_rxtx.h"
+#include <rte_dpaa_bus.h>
+#include <dpaa_mempool.h>
+
+#include <fsl_usd.h>
+#include <fsl_qman.h>
+#include <fsl_bman.h>
+#include <of.h>
+#include <netcfg.h>
+
+#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
+	do { \
+		(_fd)->cmd = 0; \
+		(_fd)->opaque_addr = 0; \
+		(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
+		(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
+		(_fd)->opaque |= (_mbuf)->pkt_len; \
+		(_fd)->addr = (_mbuf)->buf_physaddr; \
+		(_fd)->bpid = _bpid; \
+	} while (0)
+
+static inline struct rte_mbuf *dpaa_eth_fd_to_mbuf(struct qm_fd *fd,
+							uint32_t ifid)
+{
+	struct pool_info_entry *bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
+	struct rte_mbuf *mbuf;
+	void *ptr;
+	uint16_t offset =
+		(fd->opaque & DPAA_FD_OFFSET_MASK) >> DPAA_FD_OFFSET_SHIFT;
+	uint32_t length = fd->opaque & DPAA_FD_LENGTH_MASK;
+
+	PMD_RX_LOG(DEBUG, " FD--->MBUF");
+
+	/* Ignoring case when format != qm_fd_contig */
+	ptr = rte_dpaa_mem_ptov(fd->addr);
+	/* Ignoring the case when ptr would be NULL. That is only possible
+	 * in case of a corrupted packet
+	 */
+
+	mbuf = (struct rte_mbuf *)((char *)ptr - bp_info->meta_data_size);
+	/* Prefetch the Parse results and packet data to L1 */
+	rte_prefetch0((void *)((uint8_t *)ptr + DEFAULT_RX_ICEOF));
+	rte_prefetch0((void *)((uint8_t *)ptr + offset));
+
+	mbuf->data_off = offset;
+	mbuf->data_len = length;
+	mbuf->pkt_len = length;
+
+	mbuf->port = ifid;
+	mbuf->nb_segs = 1;
+	mbuf->ol_flags = 0;
+	mbuf->next = NULL;
+	rte_mbuf_refcnt_set(mbuf, 1);
+
+	return mbuf;
+}
+
+uint16_t dpaa_eth_queue_rx(void *q,
+			   struct rte_mbuf **bufs,
+			   uint16_t nb_bufs)
+{
+	struct qman_fq *fq = q;
+	struct qm_dqrr_entry *dq;
+	uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failure in affining portal");
+		return 0;
+	}
+
+	ret = qman_set_vdq(fq, (nb_bufs > DPAA_MAX_DEQUEUE_NUM_FRAMES) ?
+				DPAA_MAX_DEQUEUE_NUM_FRAMES : nb_bufs);
+	if (ret)
+		return 0;
+
+	do {
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+		bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return num_rx;
+}
+
+static void *dpaa_get_pktbuf(struct pool_info_entry *bp_info)
+{
+	int ret;
+	uint64_t buf = 0;
+	struct bm_buffer bufs;
+
+	ret = bman_acquire(bp_info->bp, &bufs, 1, 0);
+	if (ret <= 0) {
+		PMD_DRV_LOG(WARNING, "Failed to allocate buffers %d", ret);
+		return (void *)buf;
+	}
+
+	PMD_RX_LOG(DEBUG, "got buffer 0x%llx from pool %d",
+		    bufs.addr, bufs.bpid);
+
+	buf = (uint64_t)rte_dpaa_mem_ptov(bufs.addr) - bp_info->meta_data_size;
+
+	return (void *)buf;
+}
+
+static struct rte_mbuf *dpaa_get_dmable_mbuf(struct rte_mbuf *mbuf,
+					     struct dpaa_if *dpaa_intf)
+{
+	struct rte_mbuf *dpaa_mbuf;
+
+	/* allocate pktbuffer on bpid for dpaa port */
+	dpaa_mbuf = dpaa_get_pktbuf(dpaa_intf->bp_info);
+	if (!dpaa_mbuf)
+		return NULL;
+
+	memcpy((uint8_t *)(dpaa_mbuf->buf_addr) + mbuf->data_off, (void *)
+		((uint8_t *)(mbuf->buf_addr) + mbuf->data_off), mbuf->pkt_len);
+
+	/* Copy only the required fields */
+	dpaa_mbuf->data_off = mbuf->data_off;
+	dpaa_mbuf->pkt_len = mbuf->pkt_len;
+	dpaa_mbuf->ol_flags = mbuf->ol_flags;
+	dpaa_mbuf->packet_type = mbuf->packet_type;
+	dpaa_mbuf->tx_offload = mbuf->tx_offload;
+	rte_pktmbuf_free(mbuf);
+	return dpaa_mbuf;
+}
+
+uint16_t
+dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	struct rte_mbuf *mbuf, *mi = NULL;
+	struct rte_mempool *mp;
+	struct pool_info_entry *bp_info;
+	struct qm_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, loop, i = 0;
+	int ret;
+
+	ret = rte_dpaa_portal_init((void *)0);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failure in affining portal");
+		return 0;
+	}
+
+	PMD_TX_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+
+	while (nb_bufs) {
+		/* Send in bursts of at most MAX_TX_RING_SLOTS frames */
+		frames_to_send = (nb_bufs >> 3) ? MAX_TX_RING_SLOTS : nb_bufs;
+		for (loop = 0; loop < frames_to_send; loop++, i++) {
+			mbuf = bufs[i];
+			if (RTE_MBUF_DIRECT(mbuf)) {
+				mp = mbuf->pool;
+			} else {
+				mi = rte_mbuf_from_indirect(mbuf);
+				mp = mi->pool;
+			}
+
+			bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
+			if (mp->ops_index == bp_info->dpaa_ops_index) {
+				PMD_TX_LOG(DEBUG, "BMAN offloaded buffer, "
+					"mbuf: %p", mbuf);
+				if (mbuf->nb_segs == 1) {
+					if (RTE_MBUF_DIRECT(mbuf)) {
+						if (rte_mbuf_refcnt_read(mbuf) > 1) {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], 0xff);
+							rte_mbuf_refcnt_update(mbuf, -1);
+						} else {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], bp_info->bpid);
+						}
+					} else {
+						if (rte_mbuf_refcnt_read(mi) > 1) {
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], 0xff);
+						} else {
+							rte_mbuf_refcnt_update(mi, 1);
+							DPAA_MBUF_TO_CONTIG_FD(mbuf,
+								&fd_arr[loop], bp_info->bpid);
+						}
+						rte_pktmbuf_free(mbuf);
+					}
+				} else {
+					PMD_DRV_LOG(DEBUG, "Number of Segments not supported");
+					/* Set frames_to_send & nb_bufs so that
+					 * packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+			} else {
+				struct qman_fq *txq = q;
+				struct dpaa_if *dpaa_intf = txq->dpaa_intf;
+
+				PMD_TX_LOG(DEBUG, "Non-BMAN offloaded buffer. "
+					"Allocating an offloaded buffer");
+				mbuf = dpaa_get_dmable_mbuf(mbuf, dpaa_intf);
+				if (!mbuf) {
+					PMD_DRV_LOG(DEBUG, "no dpaa buffers.");
+					/* Set frames_to_send & nb_bufs so that
+					 * packets are transmitted till
+					 * previous frame.
+					 */
+					frames_to_send = loop;
+					nb_bufs = loop;
+					goto send_pkts;
+				}
+
+				DPAA_MBUF_TO_CONTIG_FD(mbuf, &fd_arr[loop],
+						dpaa_intf->bp_info->bpid);
+			}
+		}
+
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi(q, &fd_arr[loop],
+					frames_to_send - loop);
+		}
+		nb_bufs -= frames_to_send;
+	}
+
+	PMD_TX_LOG(DEBUG, "Transmitted %d buffers on queue: %p", i, q);
+
+	return i;
+}
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+		uint16_t nb_bufs __rte_unused)
+{
+	PMD_TX_LOG(DEBUG, "Drop all packets");
+
+	/* Drop all incoming packets. No need to free packets here
+	 * because the rte_eth f/w frees up the packets through tx_buffer
+	 * callback in case this functions returns count less than nb_bufs
+	 */
+	return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
new file mode 100644
index 0000000..09f1aa4
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -0,0 +1,61 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
+ *   Copyright 2017 NXP. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of  Freescale Semiconductor, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __DPAA_RXTX_H__
+#define __DPAA_RXTX_H__
+
+/* internal offset from where IC is copied to packet buffer */
+#define DEFAULT_ICIOF          32
+/* IC transfer size */
+#define DEFAULT_ICSZ	48
+
+/* IC offsets from buffer header address */
+#define DEFAULT_RX_ICEOF	16
+
+#define DPAA_MAX_DEQUEUE_NUM_FRAMES    63
+/**< Maximum number of frames to be dequeued in a single Rx call */
+/* FD structure masks and offset */
+#define DPAA_FD_FORMAT_MASK 0xE0000000
+#define DPAA_FD_OFFSET_MASK 0x1FF00000
+#define DPAA_FD_LENGTH_MASK 0xFFFFF
+#define DPAA_FD_FORMAT_SHIFT 29
+#define DPAA_FD_OFFSET_SHIFT 20
+
+uint16_t dpaa_eth_queue_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+
+uint16_t dpaa_eth_tx_drop_all(void *q  __rte_unused,
+			      struct rte_mbuf **bufs __rte_unused,
+			      uint16_t nb_bufs __rte_unused);
+#endif
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 80e5530..6939bc5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -181,6 +181,7 @@  endif # CONFIG_RTE_LIBRTE_DPAA2_PMD
 
 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_PMD),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_bus_dpaa
+_LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_PMD)       += -lrte_mempool_dpaa
 endif
 
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS