[dpdk-dev,v2,3/3] virtio: Add a new layer to abstract pci access method

Message ID 1453973612-8599-4-git-send-email-mukawa@igel.co.jp (mailing list archive)
State Changes Requested, archived

Commit Message

Tetsuya Mukawa Jan. 28, 2016, 9:33 a.m. UTC
  This patch adds function pointers to abstract the PCI access method.
This abstraction layer will be used when the virtio-net PMD supports
the container extension.

The functions below abstract how to access the PCI configuration space.

struct virtio_pci_cfg_ops {
        int   (*map)(...);
        void  (*unmap)(...);
        void *(*get_mapped_addr)(...);
        int   (*read)(...);
};
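
For a physical PCI device, these callbacks simply wrap the existing EAL
helpers; a simplified sketch of what the patch below installs (names as
in the diff):

static int
phys_map_pci_cfg(struct virtio_hw *hw)
{
	/* delegate the mapping to the EAL */
	return rte_eal_pci_map_device(hw->dev);
}

static const struct virtio_pci_cfg_ops phys_cfg_ops = {
	.map			= phys_map_pci_cfg,
	.unmap			= phys_unmap_pci_cfg,
	.get_mapped_addr	= phys_get_mapped_addr,
	.read			= phys_read_pci_cfg,
};

A container backend can later install its own table without touching
any caller.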

The PCI configuration space holds information about how to access the
virtio device registers. Basically, there are two ways to access the
registers: one is port I/O and the other is mapped memory. The
functions below abstract this access method.

struct virtio_pci_dev_ops {
        uint8_t  (*read8)(...);
        uint16_t (*read16)(...);
        uint32_t (*read32)(...);
        void     (*write8)(...);
        void     (*write16)(...);
        void     (*write32)(...);
};
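
For example, the 16-bit read for each method looks like this in the
patch (simplified); callers go through hw->vtpci_dev_ops->read16()
without knowing which table is installed:

static uint16_t
phys_legacy_read16(struct virtio_hw *hw, uint16_t *addr)
{
	/* port I/O: 'addr' carries a register offset from io_base */
	return inw((unsigned short)(hw->io_base + (uint64_t)addr));
}

static uint16_t
phys_modern_read16(struct virtio_hw *hw __rte_unused, uint16_t *addr)
{
	/* mapped memory: 'addr' points into the mapped BAR */
	return *(volatile uint16_t *)addr;
}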

Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
---
 drivers/net/virtio/virtio_ethdev.c |   4 +-
 drivers/net/virtio/virtio_pci.c    | 531 +++++++++++++++++++++++++------------
 drivers/net/virtio/virtio_pci.h    |  24 +-
 3 files changed, 386 insertions(+), 173 deletions(-)
  

Comments

Yuanhan Liu Jan. 29, 2016, 9:17 a.m. UTC | #1
On Thu, Jan 28, 2016 at 06:33:32PM +0900, Tetsuya Mukawa wrote:
> This patch adds function pointers to abstract the PCI access method.
> This abstraction layer will be used when the virtio-net PMD supports
> the container extension.
> 
> The functions below abstract how to access the PCI configuration space.
> 
> struct virtio_pci_cfg_ops {
>         int   (*map)(...);
>         void  (*unmap)(...);
>         void *(*get_mapped_addr)(...);
>         int   (*read)(...);
> };
> 
> The PCI configuration space holds information about how to access the
> virtio device registers. Basically, there are two ways to access the
> registers: one is port I/O and the other is mapped memory. The
> functions below abstract this access method.

One question: is there a way to map PCI memory with Qtest? I'm wondering
if we can keep io_read/write() for Qtest as well; if so, the code could
be simplified a lot, IMO.

	--yliu
  
Tetsuya Mukawa Feb. 1, 2016, 1:50 a.m. UTC | #2
On 2016/01/29 18:17, Yuanhan Liu wrote:
> On Thu, Jan 28, 2016 at 06:33:32PM +0900, Tetsuya Mukawa wrote:
>> This patch adds function pointers to abstract the PCI access method.
>> This abstraction layer will be used when the virtio-net PMD supports
>> the container extension.
>>
>> The functions below abstract how to access the PCI configuration space.
>>
>> struct virtio_pci_cfg_ops {
>>         int   (*map)(...);
>>         void  (*unmap)(...);
>>         void *(*get_mapped_addr)(...);
>>         int   (*read)(...);
>> };
>>
>> The PCI configuration space holds information about how to access the
>> virtio device registers. Basically, there are two ways to access the
>> registers: one is port I/O and the other is mapped memory. The
>> functions below abstract this access method.
> One question: is there a way to map PCI memory with Qtest? I'm wondering
> if we can keep io_read/write() for Qtest as well; if so, the code could
> be simplified a lot, IMO.
>

Yes, I agree with you.
But AFAIK, we don't have a way to mmap it from a DPDK application.

We may be able to map the PCI configuration space to a memory address
space that the guest CPU can handle.
But even in that case, I guess we cannot access the memory without qtest
messaging.

Thanks,
Tetsuya
  
Yuanhan Liu Feb. 1, 2016, 1:15 p.m. UTC | #3
On Mon, Feb 01, 2016 at 10:50:00AM +0900, Tetsuya Mukawa wrote:
> On 2016/01/29 18:17, Yuanhan Liu wrote:
> > On Thu, Jan 28, 2016 at 06:33:32PM +0900, Tetsuya Mukawa wrote:
> >> This patch adds function pointers to abstract the PCI access method.
> >> This abstraction layer will be used when the virtio-net PMD supports
> >> the container extension.
> >>
> >> The functions below abstract how to access the PCI configuration space.
> >>
> >> struct virtio_pci_cfg_ops {
> >>         int   (*map)(...);
> >>         void  (*unmap)(...);
> >>         void *(*get_mapped_addr)(...);
> >>         int   (*read)(...);
> >> };
> >>
> >> The PCI configuration space holds information about how to access the
> >> virtio device registers. Basically, there are two ways to access the
> >> registers: one is port I/O and the other is mapped memory. The
> >> functions below abstract this access method.
> > One question: is there a way to map PCI memory with Qtest? I'm wondering
> > if we can keep io_read/write() for Qtest as well; if so, the code could
> > be simplified a lot, IMO.
> >
> 
> Yes, I agree with you.
> But AFAIK, we don't have a way to mmap it from a DPDK application.
> 
> We may be able to map the PCI configuration space to a memory address
> space that the guest CPU can handle.
> But even in that case, I guess we cannot access the memory without qtest
> messaging.

Actually, I have a concern about this access abstraction, which makes
those simple functions not inline. It won't be an issue for most of them,
as most of them are invoked during the init stage, where there is no
impact on performance.

notify_queue(), however, is a bit different. I was thinking the "inline
to callback (not inline)" conversion might have some impact on
performance. Would you do a test for me?
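
For reference, the kick path in question after this patch, as it
appears in legacy_notify_queue() in the diff; the old
VIRTIO_WRITE_REG_2() macro compiled down to a direct outw_p(), while
this is one indirect call per notify:

static void
legacy_notify_queue(struct virtio_hw *hw, struct virtqueue *vq)
{
	/* indirect call on the datapath, once per queue kick */
	hw->vtpci_dev_ops->write16(hw,
			(uint16_t *)VIRTIO_PCI_QUEUE_NOTIFY,
			vq->vq_queue_index);
}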

Another off-topic reminder: I guess you might need to send a new
version of your vhost-pmd patchset, the sooner the better. Chinese New
Year is coming; I'm on vacation from the end of this week (and
Huawei has been on vacation since the end of last week). I hope we can
make it into v2.3.

	--yliu
  
Tetsuya Mukawa Feb. 2, 2016, 2:19 a.m. UTC | #4
On 2016/02/01 22:15, Yuanhan Liu wrote:
> On Mon, Feb 01, 2016 at 10:50:00AM +0900, Tetsuya Mukawa wrote:
>> On 2016/01/29 18:17, Yuanhan Liu wrote:
>>> On Thu, Jan 28, 2016 at 06:33:32PM +0900, Tetsuya Mukawa wrote:
>>>> This patch adds function pointers to abstract the PCI access method.
>>>> This abstraction layer will be used when the virtio-net PMD supports
>>>> the container extension.
>>>>
>>>> The functions below abstract how to access the PCI configuration space.
>>>>
>>>> struct virtio_pci_cfg_ops {
>>>>         int   (*map)(...);
>>>>         void  (*unmap)(...);
>>>>         void *(*get_mapped_addr)(...);
>>>>         int   (*read)(...);
>>>> };
>>>>
>>>> The PCI configuration space holds information about how to access the
>>>> virtio device registers. Basically, there are two ways to access the
>>>> registers: one is port I/O and the other is mapped memory. The
>>>> functions below abstract this access method.
>>> One question: is there a way to map PCI memory with Qtest? I'm wondering
>>> if we can keep io_read/write() for Qtest as well; if so, the code could
>>> be simplified a lot, IMO.
>>>
>> Yes, I agree with you.
>> But AFAIK, we don't have a way to mmap it from a DPDK application.
>>
>> We may be able to map the PCI configuration space to a memory address
>> space that the guest CPU can handle.
>> But even in that case, I guess we cannot access the memory without qtest
>> messaging.
> Actually, I have a concern about this access abstraction, which makes
> those simple functions not inline. It won't be an issue for most of them,
> as most of them are invoked during the init stage, where there is no
> impact on performance.
>
> notify_queue(), however, is a bit different. I was thinking the "inline
> to callback (not inline)" conversion might have some impact on
> performance. Would you do a test for me?

Sure, I can do that.
But if we are concerned about it, I guess it's also nice to implement
the PMD on your vtpci abstraction.
(That means we don't use the access abstraction.)
Probably that would make our merging process faster.
What do you think?
Also, I guess Jianfeng will implement his PMD on your abstraction.
If so, I will follow him as well.
 
>
> Another off-topic reminder: I guess you might need to send a new
> version of your vhost-pmd patchset, the sooner the better. Chinese New
> Year is coming; I'm on vacation from the end of this week (and
> Huawei has been on vacation since the end of last week). I hope we can
> make it into v2.3.

Thanks for the notification. Sure, I will submit it soon.

Tetsuya
  
Yuanhan Liu Feb. 2, 2016, 2:45 a.m. UTC | #5
On Tue, Feb 02, 2016 at 11:19:50AM +0900, Tetsuya Mukawa wrote:
> On 2016/02/01 22:15, Yuanhan Liu wrote:
> > On Mon, Feb 01, 2016 at 10:50:00AM +0900, Tetsuya Mukawa wrote:
> >> On 2016/01/29 18:17, Yuanhan Liu wrote:
> >>> On Thu, Jan 28, 2016 at 06:33:32PM +0900, Tetsuya Mukawa wrote:
> >>>> This patch adds function pointers to abstract the PCI access method.
> >>>> This abstraction layer will be used when the virtio-net PMD supports
> >>>> the container extension.
> >>>>
> >>>> The functions below abstract how to access the PCI configuration space.
> >>>>
> >>>> struct virtio_pci_cfg_ops {
> >>>>         int   (*map)(...);
> >>>>         void  (*unmap)(...);
> >>>>         void *(*get_mapped_addr)(...);
> >>>>         int   (*read)(...);
> >>>> };
> >>>>
> >>>> The PCI configuration space holds information about how to access the
> >>>> virtio device registers. Basically, there are two ways to access the
> >>>> registers: one is port I/O and the other is mapped memory. The
> >>>> functions below abstract this access method.
> >>> One question: is there a way to map PCI memory with Qtest? I'm wondering
> >>> if we can keep io_read/write() for Qtest as well; if so, the code could
> >>> be simplified a lot, IMO.
> >>>
> >> Yes, I agree with you.
> >> But AFAIK, we don't have a way to mmap it from a DPDK application.
> >>
> >> We may be able to map the PCI configuration space to a memory address
> >> space that the guest CPU can handle.
> >> But even in that case, I guess we cannot access the memory without qtest
> >> messaging.
> > Actually, I have a concern about this access abstraction, which makes
> > those simple functions not inline. It won't be an issue for most of them,
> > as most of them are invoked during the init stage, where there is no
> > impact on performance.
> >
> > notify_queue(), however, is a bit different. I was thinking the "inline
> > to callback (not inline)" conversion might have some impact on
> > performance. Would you do a test for me?
> 
> Sure, I can do that.

Thanks.

> But if we are concerned about it, I guess it's also nice to implement
> the PMD on your vtpci abstraction.
> (That means we don't use the access abstraction.)
> Probably that would make our merging process faster.
> What do you think?

Another standalone PMD driver? (Sorry, I didn't follow the
discussion.) If so, won't it introduce too much duplicated code?

	--yliu
  
Tetsuya Mukawa Feb. 2, 2016, 3:55 a.m. UTC | #6
On 2016/02/02 11:45, Yuanhan Liu wrote:
> On Tue, Feb 02, 2016 at 11:19:50AM +0900, Tetsuya Mukawa wrote:
>> On 2016/02/01 22:15, Yuanhan Liu wrote:
>>> On Mon, Feb 01, 2016 at 10:50:00AM +0900, Tetsuya Mukawa wrote:
>>>> On 2016/01/29 18:17, Yuanhan Liu wrote:
>>>>> On Thu, Jan 28, 2016 at 06:33:32PM +0900, Tetsuya Mukawa wrote:
>>>>>> This patch adds function pointers to abstract the PCI access method.
>>>>>> This abstraction layer will be used when the virtio-net PMD supports
>>>>>> the container extension.
>>>>>>
>>>>>> The functions below abstract how to access the PCI configuration space.
>>>>>>
>>>>>> struct virtio_pci_cfg_ops {
>>>>>>         int   (*map)(...);
>>>>>>         void  (*unmap)(...);
>>>>>>         void *(*get_mapped_addr)(...);
>>>>>>         int   (*read)(...);
>>>>>> };
>>>>>>
>>>>>> The PCI configuration space holds information about how to access the
>>>>>> virtio device registers. Basically, there are two ways to access the
>>>>>> registers: one is port I/O and the other is mapped memory. The
>>>>>> functions below abstract this access method.
>>>>> One question: is there a way to map PCI memory with Qtest? I'm wondering
>>>>> if we can keep io_read/write() for Qtest as well; if so, the code could
>>>>> be simplified a lot, IMO.
>>>>>
>>>> Yes, I agree with you.
>>>> But AFAIK, we don't have a way to mmap it from a DPDK application.
>>>>
>>>> We may be able to map the PCI configuration space to a memory address
>>>> space that the guest CPU can handle.
>>>> But even in that case, I guess we cannot access the memory without qtest
>>>> messaging.
>>> Actually, I have a concern about this access abstraction, which makes
>>> those simple functions not inline. It won't be an issue for most of them,
>>> as most of them are invoked during the init stage, where there is no
>>> impact on performance.
>>>
>>> notify_queue(), however, is a bit different. I was thinking the "inline
>>> to callback (not inline)" conversion might have some impact on
>>> performance. Would you do a test for me?
>> Sure, I can do that.
> Thanks.
>
>> But if we are concerned about it, I guess it's also nice to implement
>> the PMD on your vtpci abstraction.
>> (That means we don't use the access abstraction.)
>> Probably that would make our merging process faster.
>> What do you think?
> Another standalone PMD driver? (Sorry, I didn't follow the
> discussion.)

Yes, Jianfeng will submit another virtual virtio-net PMD.

>  If so, won't it introduce too much duplicated code?

From a quick look, I guess we won't have that much duplicated code.

Tetsuya
  

Patch

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 37833a8..c477b05 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1037,7 +1037,7 @@  eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 
 	pci_dev = eth_dev->pci_dev;
 
-	if (vtpci_init(pci_dev, hw) < 0)
+	if (vtpci_init(eth_dev, hw) < 0)
 		return -1;
 
 	/* Reset the device although not necessary at startup */
@@ -1177,7 +1177,7 @@  eth_virtio_dev_uninit(struct rte_eth_dev *eth_dev)
 		rte_intr_callback_unregister(&pci_dev->intr_handle,
 						virtio_interrupt_handler,
 						eth_dev);
-	vtpci_uninit(pci_dev, hw);
+	vtpci_uninit(eth_dev, hw);
 
 	PMD_INIT_LOG(DEBUG, "dev_uninit completed");
 
diff --git a/drivers/net/virtio/virtio_pci.c b/drivers/net/virtio/virtio_pci.c
index 3e6be8c..c6d72f9 100644
--- a/drivers/net/virtio/virtio_pci.c
+++ b/drivers/net/virtio/virtio_pci.c
@@ -49,24 +49,198 @@ 
 #define PCI_CAPABILITY_LIST	0x34
 #define PCI_CAP_ID_VNDR		0x09
 
+static uint8_t
+phys_legacy_read8(struct virtio_hw *hw, uint8_t *addr)
+{
+	return inb((unsigned short)(hw->io_base + (uint64_t)addr));
+}
 
-#define VIRTIO_PCI_REG_ADDR(hw, reg) \
-	(unsigned short)((hw)->io_base + (reg))
+static uint16_t
+phys_legacy_read16(struct virtio_hw *hw, uint16_t *addr)
+{
+	return inw((unsigned short)(hw->io_base + (uint64_t)addr));
+}
 
-#define VIRTIO_READ_REG_1(hw, reg) \
-	inb((VIRTIO_PCI_REG_ADDR((hw), (reg))))
-#define VIRTIO_WRITE_REG_1(hw, reg, value) \
-	outb_p((unsigned char)(value), (VIRTIO_PCI_REG_ADDR((hw), (reg))))
+static uint32_t
+phys_legacy_read32(struct virtio_hw *hw, uint32_t *addr)
+{
+	return inl((unsigned short)(hw->io_base + (uint64_t)addr));
+}
 
-#define VIRTIO_READ_REG_2(hw, reg) \
-	inw((VIRTIO_PCI_REG_ADDR((hw), (reg))))
-#define VIRTIO_WRITE_REG_2(hw, reg, value) \
-	outw_p((unsigned short)(value), (VIRTIO_PCI_REG_ADDR((hw), (reg))))
+static void
+phys_legacy_write8(struct virtio_hw *hw, uint8_t *addr, uint8_t val)
+{
+	return outb_p((unsigned char)val,
+			(unsigned short)(hw->io_base + (uint64_t)addr));
+}
 
-#define VIRTIO_READ_REG_4(hw, reg) \
-	inl((VIRTIO_PCI_REG_ADDR((hw), (reg))))
-#define VIRTIO_WRITE_REG_4(hw, reg, value) \
-	outl_p((unsigned int)(value), (VIRTIO_PCI_REG_ADDR((hw), (reg))))
+static void
+phys_legacy_write16(struct virtio_hw *hw, uint16_t *addr, uint16_t val)
+{
+	return outw_p((unsigned short)val,
+			(unsigned short)(hw->io_base + (uint64_t)addr));
+}
+
+static void
+phys_legacy_write32(struct virtio_hw *hw, uint32_t *addr, uint32_t val)
+{
+	return outl_p((unsigned int)val,
+			(unsigned short)(hw->io_base + (uint64_t)addr));
+}
+
+static const struct virtio_pci_dev_ops phys_legacy_dev_ops = {
+	.read8		= phys_legacy_read8,
+	.read16		= phys_legacy_read16,
+	.read32		= phys_legacy_read32,
+	.write8		= phys_legacy_write8,
+	.write16	= phys_legacy_write16,
+	.write32	= phys_legacy_write32,
+};
+
+static uint8_t
+phys_modern_read8(struct virtio_hw *hw __rte_unused, uint8_t *addr)
+{
+	return *(volatile uint8_t *)addr;
+}
+
+static uint16_t
+phys_modern_read16(struct virtio_hw *hw __rte_unused, uint16_t *addr)
+{
+	return *(volatile uint16_t *)addr;
+}
+
+static uint32_t
+phys_modern_read32(struct virtio_hw *hw __rte_unused, uint32_t *addr)
+{
+	return *(volatile uint32_t *)addr;
+}
+
+static void
+phys_modern_write8(struct virtio_hw *hw __rte_unused,
+		uint8_t *addr, uint8_t val)
+{
+	*(volatile uint8_t *)addr = val;
+}
+
+static void
+phys_modern_write16(struct virtio_hw *hw __rte_unused,
+		uint16_t *addr, uint16_t val)
+{
+	*(volatile uint16_t *)addr = val;
+}
+
+static void
+phys_modern_write32(struct virtio_hw *hw __rte_unused,
+		uint32_t *addr, uint32_t val)
+{
+	*(volatile uint32_t *)addr = val;
+}
+
+static const struct virtio_pci_dev_ops phys_modern_dev_ops = {
+	.read8		= phys_modern_read8,
+	.read16		= phys_modern_read16,
+	.read32		= phys_modern_read32,
+	.write8		= phys_modern_write8,
+	.write16	= phys_modern_write16,
+	.write32	= phys_modern_write32,
+};
+
+static int
+vtpci_dev_init(struct rte_eth_dev *dev, struct virtio_hw *hw)
+{
+	if (dev->dev_type == RTE_ETH_DEV_PCI) {
+		if (hw->modern == 1)
+			hw->vtpci_dev_ops = &phys_modern_dev_ops;
+		else
+			hw->vtpci_dev_ops = &phys_legacy_dev_ops;
+		return 0;
+	}
+
+	PMD_DRV_LOG(ERR, "Unknown virtio-net device.");
+	return -1;
+}
+
+static void
+vtpci_dev_uninit(struct rte_eth_dev *dev __rte_unused, struct virtio_hw *hw)
+{
+	hw->vtpci_dev_ops = NULL;
+}
+
+static int
+phys_map_pci_cfg(struct virtio_hw *hw)
+{
+	return rte_eal_pci_map_device(hw->dev);
+}
+
+static void
+phys_unmap_pci_cfg(struct virtio_hw *hw)
+{
+	rte_eal_pci_unmap_device(hw->dev);
+}
+
+static int
+phys_read_pci_cfg(struct virtio_hw *hw, void *buf, size_t len, off_t offset)
+{
+	return rte_eal_pci_read_config(hw->dev, buf, len, offset);
+}
+
+static void *
+phys_get_mapped_addr(struct virtio_hw *hw, uint8_t bar,
+		     uint32_t offset, uint32_t length)
+{
+	uint8_t *base;
+
+	if (bar > 5) {
+		PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+		return NULL;
+	}
+
+	if (offset + length < offset) {
+		PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows",
+			offset, length);
+		return NULL;
+	}
+
+	if (offset + length > hw->dev->mem_resource[bar].len) {
+		PMD_INIT_LOG(ERR,
+			"invalid cap: overflows bar space: %u > %"PRIu64,
+			offset + length, hw->dev->mem_resource[bar].len);
+		return NULL;
+	}
+
+	base = hw->dev->mem_resource[bar].addr;
+	if (base == NULL) {
+		PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+		return NULL;
+	}
+
+	return base + offset;
+}
+
+static const struct virtio_pci_cfg_ops phys_cfg_ops = {
+	.map			= phys_map_pci_cfg,
+	.unmap			= phys_unmap_pci_cfg,
+	.get_mapped_addr	= phys_get_mapped_addr,
+	.read			= phys_read_pci_cfg,
+};
+
+static int
+vtpci_cfg_init(struct rte_eth_dev *dev, struct virtio_hw *hw)
+{
+	if (dev->dev_type == RTE_ETH_DEV_PCI) {
+		hw->vtpci_cfg_ops = &phys_cfg_ops;
+		return 0;
+	}
+
+	PMD_DRV_LOG(ERR, "Unknown virtio-net device.");
+	return -1;
+}
+
+static void
+vtpci_cfg_uninit(struct rte_eth_dev *dev __rte_unused, struct virtio_hw *hw)
+{
+	hw->vtpci_cfg_ops = NULL;
+}
 
 static void
 legacy_read_dev_config(struct virtio_hw *hw, size_t offset,
@@ -80,13 +254,16 @@  legacy_read_dev_config(struct virtio_hw *hw, size_t offset,
 	for (d = dst; length > 0; d += size, off += size, length -= size) {
 		if (length >= 4) {
 			size = 4;
-			*(uint32_t *)d = VIRTIO_READ_REG_4(hw, off);
+			*(uint32_t *)d = hw->vtpci_dev_ops->read32(
+						hw, (uint32_t *)off);
 		} else if (length >= 2) {
 			size = 2;
-			*(uint16_t *)d = VIRTIO_READ_REG_2(hw, off);
+			*(uint16_t *)d = hw->vtpci_dev_ops->read16(
+						hw, (uint16_t *)off);
 		} else {
 			size = 1;
-			*d = VIRTIO_READ_REG_1(hw, off);
+			*d = hw->vtpci_dev_ops->read8(
+						hw, (uint8_t *)off);
 		}
 	}
 }
@@ -103,13 +280,15 @@  legacy_write_dev_config(struct virtio_hw *hw, size_t offset,
 	for (s = src; length > 0; s += size, off += size, length -= size) {
 		if (length >= 4) {
 			size = 4;
-			VIRTIO_WRITE_REG_4(hw, off, *(const uint32_t *)s);
+			hw->vtpci_dev_ops->write32(hw,
+					(uint32_t *)off, *(const uint32_t *)s);
 		} else if (length >= 2) {
 			size = 2;
-			VIRTIO_WRITE_REG_2(hw, off, *(const uint16_t *)s);
+			hw->vtpci_dev_ops->write16(hw,
+					(uint16_t *)off, *(const uint16_t *)s);
 		} else {
 			size = 1;
-			VIRTIO_WRITE_REG_1(hw, off, *s);
+			hw->vtpci_dev_ops->write8(hw, (uint8_t *)off, *s);
 		}
 	}
 }
@@ -117,7 +296,8 @@  legacy_write_dev_config(struct virtio_hw *hw, size_t offset,
 static uint64_t
 legacy_get_features(struct virtio_hw *hw)
 {
-	return VIRTIO_READ_REG_4(hw, VIRTIO_PCI_HOST_FEATURES);
+	return hw->vtpci_dev_ops->read32(hw,
+			(uint32_t *)VIRTIO_PCI_HOST_FEATURES);
 }
 
 static void
@@ -128,19 +308,20 @@  legacy_set_features(struct virtio_hw *hw, uint64_t features)
 			"only 32 bit features are allowed for legacy virtio!");
 		return;
 	}
-	VIRTIO_WRITE_REG_4(hw, VIRTIO_PCI_GUEST_FEATURES, features);
+	hw->vtpci_dev_ops->write32(hw,
+			(uint32_t *)VIRTIO_PCI_GUEST_FEATURES, features);
 }
 
 static uint8_t
 legacy_get_status(struct virtio_hw *hw)
 {
-	return VIRTIO_READ_REG_1(hw, VIRTIO_PCI_STATUS);
+	return hw->vtpci_dev_ops->read8(hw, (uint8_t *)VIRTIO_PCI_STATUS);
 }
 
 static void
 legacy_set_status(struct virtio_hw *hw, uint8_t status)
 {
-	VIRTIO_WRITE_REG_1(hw, VIRTIO_PCI_STATUS, status);
+	hw->vtpci_dev_ops->write8(hw, (uint8_t *)VIRTIO_PCI_STATUS, status);
 }
 
 static void
@@ -152,45 +333,55 @@  legacy_reset(struct virtio_hw *hw)
 static uint8_t
 legacy_get_isr(struct virtio_hw *hw)
 {
-	return VIRTIO_READ_REG_1(hw, VIRTIO_PCI_ISR);
+	return hw->vtpci_dev_ops->read8(hw, (uint8_t *)VIRTIO_PCI_ISR);
 }
 
 /* Enable one vector (0) for Link State Intrerrupt */
 static uint16_t
 legacy_set_config_irq(struct virtio_hw *hw, uint16_t vec)
 {
-	VIRTIO_WRITE_REG_2(hw, VIRTIO_MSI_CONFIG_VECTOR, vec);
-	return VIRTIO_READ_REG_2(hw, VIRTIO_MSI_CONFIG_VECTOR);
+	hw->vtpci_dev_ops->write16(hw,
+			(uint16_t *)VIRTIO_MSI_CONFIG_VECTOR, vec);
+	return hw->vtpci_dev_ops->read16(hw,
+			(uint16_t *)VIRTIO_MSI_CONFIG_VECTOR);
 }
 
 static uint16_t
 legacy_get_queue_num(struct virtio_hw *hw, uint16_t queue_id)
 {
-	VIRTIO_WRITE_REG_2(hw, VIRTIO_PCI_QUEUE_SEL, queue_id);
-	return VIRTIO_READ_REG_2(hw, VIRTIO_PCI_QUEUE_NUM);
+	hw->vtpci_dev_ops->write16(hw,
+			(uint16_t *)VIRTIO_PCI_QUEUE_SEL, queue_id);
+	return hw->vtpci_dev_ops->read16(hw,
+			(uint16_t *)VIRTIO_PCI_QUEUE_NUM);
 }
 
 static void
 legacy_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
 {
-	VIRTIO_WRITE_REG_2(hw, VIRTIO_PCI_QUEUE_SEL, vq->vq_queue_index);
+	hw->vtpci_dev_ops->write16(hw,
+			(uint16_t *)VIRTIO_PCI_QUEUE_SEL, vq->vq_queue_index);
 
-	VIRTIO_WRITE_REG_4(hw, VIRTIO_PCI_QUEUE_PFN,
-		vq->mz->phys_addr >> VIRTIO_PCI_QUEUE_ADDR_SHIFT);
+	hw->vtpci_dev_ops->write32(hw,
+			(uint32_t *)VIRTIO_PCI_QUEUE_PFN,
+			vq->vq_ring_mem >> VIRTIO_PCI_QUEUE_ADDR_SHIFT);
 }
 
 static void
 legacy_del_queue(struct virtio_hw *hw, struct virtqueue *vq)
 {
-	VIRTIO_WRITE_REG_2(hw, VIRTIO_PCI_QUEUE_SEL, vq->vq_queue_index);
+	hw->vtpci_dev_ops->write16(hw,
+			(uint16_t *)VIRTIO_PCI_QUEUE_SEL, vq->vq_queue_index);
 
-	VIRTIO_WRITE_REG_4(hw, VIRTIO_PCI_QUEUE_PFN, 0);
+	hw->vtpci_dev_ops->write32(hw,
+			(uint32_t *)VIRTIO_PCI_QUEUE_PFN, 0);
 }
 
 static void
 legacy_notify_queue(struct virtio_hw *hw, struct virtqueue *vq)
 {
-	VIRTIO_WRITE_REG_2(hw, VIRTIO_PCI_QUEUE_NOTIFY, vq->vq_queue_index);
+	hw->vtpci_dev_ops->write16(hw,
+			(uint16_t *)VIRTIO_PCI_QUEUE_NOTIFY,
+			vq->vq_queue_index);
 }
 
 #ifdef RTE_EXEC_ENV_LINUXAPP
@@ -470,47 +661,12 @@  static const struct virtio_pci_ops legacy_ops = {
 
 
 
-static inline uint8_t
-io_read8(uint8_t *addr)
-{
-	return *(volatile uint8_t *)addr;
-}
-
-static inline void
-io_write8(uint8_t *addr, uint8_t val)
-{
-	*(volatile uint8_t *)addr = val;
-}
-
-static inline uint16_t
-io_read16(uint16_t *addr)
-{
-	return *(volatile uint16_t *)addr;
-}
-
-static inline void
-io_write16(uint16_t *addr, uint16_t val)
-{
-	*(volatile uint16_t *)addr = val;
-}
-
-static inline uint32_t
-io_read32(uint32_t *addr)
-{
-	return *(volatile uint32_t *)addr;
-}
-
-static inline void
-io_write32(uint32_t *addr, uint32_t val)
-{
-	*(volatile uint32_t *)addr = val;
-}
-
 static inline void
-io_write64_twopart(uint32_t *lo, uint32_t *hi, uint64_t val)
+io_write64_twopart(struct virtio_hw *hw,
+		uint32_t *lo, uint32_t *hi, uint64_t val)
 {
-	io_write32(lo, val & ((1ULL << 32) - 1));
-	io_write32(hi, val >> 32);
+	hw->vtpci_dev_ops->write32(hw, lo, val & ((1ULL << 32) - 1));
+	hw->vtpci_dev_ops->write32(hw, hi, val >> 32);
 }
 
 static void
@@ -522,13 +678,16 @@  modern_read_dev_config(struct virtio_hw *hw, size_t offset,
 	uint8_t old_gen, new_gen;
 
 	do {
-		old_gen = io_read8(&hw->common_cfg->config_generation);
+		old_gen = hw->vtpci_dev_ops->read8(hw,
+				&hw->common_cfg->config_generation);
 
 		p = dst;
 		for (i = 0;  i < length; i++)
-			*p++ = io_read8((uint8_t *)hw->dev_cfg + offset + i);
+			*p++ = hw->vtpci_dev_ops->read8(hw,
+					(uint8_t *)hw->dev_cfg + offset + i);
 
-		new_gen = io_read8(&hw->common_cfg->config_generation);
+		new_gen = hw->vtpci_dev_ops->read8(hw,
+				&hw->common_cfg->config_generation);
 	} while (old_gen != new_gen);
 }
 
@@ -540,7 +699,8 @@  modern_write_dev_config(struct virtio_hw *hw, size_t offset,
 	const uint8_t *p = src;
 
 	for (i = 0;  i < length; i++)
-		io_write8((uint8_t *)hw->dev_cfg + offset + i, *p++);
+		hw->vtpci_dev_ops->write8(hw,
+				(uint8_t *)hw->dev_cfg + offset + i, *p++);
 }
 
 static uint64_t
@@ -548,11 +708,15 @@  modern_get_features(struct virtio_hw *hw)
 {
 	uint32_t features_lo, features_hi;
 
-	io_write32(&hw->common_cfg->device_feature_select, 0);
-	features_lo = io_read32(&hw->common_cfg->device_feature);
+	hw->vtpci_dev_ops->write32(hw,
+			&hw->common_cfg->device_feature_select, 0);
+	features_lo = hw->vtpci_dev_ops->read32(hw,
+			&hw->common_cfg->device_feature);
 
-	io_write32(&hw->common_cfg->device_feature_select, 1);
-	features_hi = io_read32(&hw->common_cfg->device_feature);
+	hw->vtpci_dev_ops->write32(hw,
+			&hw->common_cfg->device_feature_select, 1);
+	features_hi = hw->vtpci_dev_ops->read32(hw,
+			&hw->common_cfg->device_feature);
 
 	return ((uint64_t)features_hi << 32) | features_lo;
 }
@@ -560,25 +724,30 @@  modern_get_features(struct virtio_hw *hw)
 static void
 modern_set_features(struct virtio_hw *hw, uint64_t features)
 {
-	io_write32(&hw->common_cfg->guest_feature_select, 0);
-	io_write32(&hw->common_cfg->guest_feature,
-		   features & ((1ULL << 32) - 1));
+	hw->vtpci_dev_ops->write32(hw,
+			&hw->common_cfg->guest_feature_select, 0);
+	hw->vtpci_dev_ops->write32(hw,
+			&hw->common_cfg->guest_feature,
+			features & ((1ULL << 32) - 1));
 
-	io_write32(&hw->common_cfg->guest_feature_select, 1);
-	io_write32(&hw->common_cfg->guest_feature,
-		   features >> 32);
+	hw->vtpci_dev_ops->write32(hw,
+			&hw->common_cfg->guest_feature_select, 1);
+	hw->vtpci_dev_ops->write32(hw,
+			&hw->common_cfg->guest_feature, features >> 32);
 }
 
 static uint8_t
 modern_get_status(struct virtio_hw *hw)
 {
-	return io_read8(&hw->common_cfg->device_status);
+	return hw->vtpci_dev_ops->read8(hw,
+			&hw->common_cfg->device_status);
 }
 
 static void
 modern_set_status(struct virtio_hw *hw, uint8_t status)
 {
-	io_write8(&hw->common_cfg->device_status, status);
+	hw->vtpci_dev_ops->write8(hw,
+			&hw->common_cfg->device_status, status);
 }
 
 static void
@@ -591,21 +760,25 @@  modern_reset(struct virtio_hw *hw)
 static uint8_t
 modern_get_isr(struct virtio_hw *hw)
 {
-	return io_read8(hw->isr);
+	return hw->vtpci_dev_ops->read8(hw, hw->isr);
 }
 
 static uint16_t
 modern_set_config_irq(struct virtio_hw *hw, uint16_t vec)
 {
-	io_write16(&hw->common_cfg->msix_config, vec);
-	return io_read16(&hw->common_cfg->msix_config);
+	hw->vtpci_dev_ops->write16(hw,
+			&hw->common_cfg->msix_config, vec);
+	return hw->vtpci_dev_ops->read16(hw,
+			&hw->common_cfg->msix_config);
 }
 
 static uint16_t
 modern_get_queue_num(struct virtio_hw *hw, uint16_t queue_id)
 {
-	io_write16(&hw->common_cfg->queue_select, queue_id);
-	return io_read16(&hw->common_cfg->queue_size);
+	hw->vtpci_dev_ops->write16(hw,
+			&hw->common_cfg->queue_select, queue_id);
+	return hw->vtpci_dev_ops->read16(hw,
+			&hw->common_cfg->queue_size);
 }
 
 static void
@@ -620,20 +793,23 @@  modern_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
 							 ring[vq->vq_nentries]),
 				   VIRTIO_PCI_VRING_ALIGN);
 
-	io_write16(&hw->common_cfg->queue_select, vq->vq_queue_index);
+	hw->vtpci_dev_ops->write16(hw,
+			&hw->common_cfg->queue_select, vq->vq_queue_index);
 
-	io_write64_twopart(&hw->common_cfg->queue_desc_lo,
+	io_write64_twopart(hw, &hw->common_cfg->queue_desc_lo,
 			   &hw->common_cfg->queue_desc_hi, desc_addr);
-	io_write64_twopart(&hw->common_cfg->queue_avail_lo,
+	io_write64_twopart(hw, &hw->common_cfg->queue_avail_lo,
 			   &hw->common_cfg->queue_avail_hi, avail_addr);
-	io_write64_twopart(&hw->common_cfg->queue_used_lo,
+	io_write64_twopart(hw, &hw->common_cfg->queue_used_lo,
 			   &hw->common_cfg->queue_used_hi, used_addr);
 
-	notify_off = io_read16(&hw->common_cfg->queue_notify_off);
+	notify_off = hw->vtpci_dev_ops->read16(hw,
+				&hw->common_cfg->queue_notify_off);
 	vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
 				notify_off * hw->notify_off_multiplier);
 
-	io_write16(&hw->common_cfg->queue_enable, 1);
+	hw->vtpci_dev_ops->write16(hw,
+			&hw->common_cfg->queue_enable, 1);
 
 	PMD_INIT_LOG(DEBUG, "queue %u addresses:", vq->vq_queue_index);
 	PMD_INIT_LOG(DEBUG, "\t desc_addr: %"PRIx64, desc_addr);
@@ -646,22 +822,24 @@  modern_setup_queue(struct virtio_hw *hw, struct virtqueue *vq)
 static void
 modern_del_queue(struct virtio_hw *hw, struct virtqueue *vq)
 {
-	io_write16(&hw->common_cfg->queue_select, vq->vq_queue_index);
+	hw->vtpci_dev_ops->write16(hw,
+			&hw->common_cfg->queue_select, vq->vq_queue_index);
 
-	io_write64_twopart(&hw->common_cfg->queue_desc_lo,
+	io_write64_twopart(hw, &hw->common_cfg->queue_desc_lo,
 			   &hw->common_cfg->queue_desc_hi, 0);
-	io_write64_twopart(&hw->common_cfg->queue_avail_lo,
+	io_write64_twopart(hw, &hw->common_cfg->queue_avail_lo,
 			   &hw->common_cfg->queue_avail_hi, 0);
-	io_write64_twopart(&hw->common_cfg->queue_used_lo,
+	io_write64_twopart(hw, &hw->common_cfg->queue_used_lo,
 			   &hw->common_cfg->queue_used_hi, 0);
 
-	io_write16(&hw->common_cfg->queue_enable, 0);
+	hw->vtpci_dev_ops->write16(hw,
+			&hw->common_cfg->queue_enable, 0);
 }
 
 static void
-modern_notify_queue(struct virtio_hw *hw __rte_unused, struct virtqueue *vq)
+modern_notify_queue(struct virtio_hw *hw, struct virtqueue *vq)
 {
-	io_write16(vq->notify_addr, 1);
+	hw->vtpci_dev_ops->write16(hw, vq->notify_addr, 1);
 }
 
 static const struct virtio_pci_ops modern_ops = {
@@ -680,7 +858,6 @@  static const struct virtio_pci_ops modern_ops = {
 	.notify_queue	= modern_notify_queue,
 };
 
-
 void
 vtpci_read_dev_config(struct virtio_hw *hw, size_t offset,
 		      void *dst, int length)
@@ -753,61 +930,26 @@  vtpci_irq_config(struct virtio_hw *hw, uint16_t vec)
 	return hw->vtpci_ops->set_config_irq(hw, vec);
 }
 
-static void *
-get_cfg_addr(struct rte_pci_device *dev, struct virtio_pci_cap *cap)
-{
-	uint8_t  bar    = cap->bar;
-	uint32_t length = cap->length;
-	uint32_t offset = cap->offset;
-	uint8_t *base;
-
-	if (bar > 5) {
-		PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
-		return NULL;
-	}
-
-	if (offset + length < offset) {
-		PMD_INIT_LOG(ERR, "offset(%u) + lenght(%u) overflows",
-			offset, length);
-		return NULL;
-	}
-
-	if (offset + length > dev->mem_resource[bar].len) {
-		PMD_INIT_LOG(ERR,
-			"invalid cap: overflows bar space: %u > %"PRIu64,
-			offset + length, dev->mem_resource[bar].len);
-		return NULL;
-	}
-
-	base = dev->mem_resource[bar].addr;
-	if (base == NULL) {
-		PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
-		return NULL;
-	}
-
-	return base + offset;
-}
-
 static int
-virtio_read_caps(struct rte_pci_device *dev, struct virtio_hw *hw)
+virtio_read_caps(struct virtio_hw *hw)
 {
 	uint8_t pos;
 	struct virtio_pci_cap cap;
 	int ret;
 
-	if (rte_eal_pci_map_device(dev) < 0) {
+	if (hw->vtpci_cfg_ops->map(hw) < 0) {
 		PMD_INIT_LOG(DEBUG, "failed to map pci device!");
 		return -1;
 	}
 
-	ret = rte_eal_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+	ret = hw->vtpci_cfg_ops->read(hw, &pos, 1, PCI_CAPABILITY_LIST);
 	if (ret < 0) {
 		PMD_INIT_LOG(DEBUG, "failed to read pci capability list");
 		return -1;
 	}
 
 	while (pos) {
-		ret = rte_eal_pci_read_config(dev, &cap, sizeof(cap), pos);
+		ret = hw->vtpci_cfg_ops->read(hw, &cap, sizeof(cap), pos);
 		if (ret < 0) {
 			PMD_INIT_LOG(ERR,
 				"failed to read pci cap at pos: %x", pos);
@@ -827,18 +969,25 @@  virtio_read_caps(struct rte_pci_device *dev, struct virtio_hw *hw)
 
 		switch (cap.cfg_type) {
 		case VIRTIO_PCI_CAP_COMMON_CFG:
-			hw->common_cfg = get_cfg_addr(dev, &cap);
+			hw->common_cfg =
+				hw->vtpci_cfg_ops->get_mapped_addr(
+					hw, cap.bar, cap.offset, cap.length);
 			break;
 		case VIRTIO_PCI_CAP_NOTIFY_CFG:
-			rte_eal_pci_read_config(dev, &hw->notify_off_multiplier,
+			hw->vtpci_cfg_ops->read(hw, &hw->notify_off_multiplier,
 						4, pos + sizeof(cap));
-			hw->notify_base = get_cfg_addr(dev, &cap);
+			hw->notify_base =
+				hw->vtpci_cfg_ops->get_mapped_addr(
+					hw, cap.bar, cap.offset, cap.length);
 			break;
 		case VIRTIO_PCI_CAP_DEVICE_CFG:
-			hw->dev_cfg = get_cfg_addr(dev, &cap);
+			hw->dev_cfg =
+				hw->vtpci_cfg_ops->get_mapped_addr(
+					hw, cap.bar, cap.offset, cap.length);
 			break;
 		case VIRTIO_PCI_CAP_ISR_CFG:
-			hw->isr = get_cfg_addr(dev, &cap);
+			hw->isr = hw->vtpci_cfg_ops->get_mapped_addr(
+					hw, cap.bar, cap.offset, cap.length);
 			break;
 		}
 
@@ -863,43 +1012,87 @@  virtio_read_caps(struct rte_pci_device *dev, struct virtio_hw *hw)
 	return 0;
 }
 
+static int
+vtpci_modern_init(struct rte_eth_dev *dev, struct virtio_hw *hw)
+{
+	struct rte_pci_device *pci_dev = dev->pci_dev;
+
+	PMD_INIT_LOG(INFO, "modern virtio pci detected.");
+
+	if (dev->dev_type == RTE_ETH_DEV_PCI)
+		pci_dev->driver->drv_flags |= RTE_PCI_DRV_INTR_LSC;
+
+	hw->vtpci_ops = &modern_ops;
+	hw->modern = 1;
+
+	return 0;
+}
+
+static int
+vtpci_legacy_init(struct rte_eth_dev *dev, struct virtio_hw *hw)
+{
+	struct rte_pci_device *pci_dev = dev->pci_dev;
+
+	PMD_INIT_LOG(INFO, "trying with legacy virtio pci.");
+	if (dev->dev_type == RTE_ETH_DEV_PCI) {
+		if (legacy_virtio_resource_init(pci_dev) < 0)
+			return -1;
+
+		hw->use_msix = legacy_virtio_has_msix(&pci_dev->addr);
+	}
+
+	hw->io_base = (uint32_t)(uintptr_t)
+		hw->vtpci_cfg_ops->get_mapped_addr(hw, 0, 0, 0);
+	hw->vtpci_ops = &legacy_ops;
+	hw->modern = 0;
+
+	return 0;
+}
+
 int
-vtpci_init(struct rte_pci_device *dev, struct virtio_hw *hw)
+vtpci_init(struct rte_eth_dev *eth_dev, struct virtio_hw *hw)
 {
-	hw->dev = dev;
+	struct rte_pci_device *pci_dev = eth_dev->pci_dev;
+	int ret;
+
+	hw->dev = pci_dev;
+
+	if ((eth_dev->dev_type == RTE_ETH_DEV_PCI) && (pci_dev == NULL)) {
+		PMD_INIT_LOG(INFO, "No pci device specified.");
+		return -1;
+	}
+
+	if (vtpci_cfg_init(eth_dev, hw) < 0)
+		return -1;
 
 	/*
 	 * Try if we can succeed reading virtio pci caps, which exists
 	 * only on modern pci device. If failed, we fallback to legacy
 	 * virtio handling.
 	 */
-	if (virtio_read_caps(dev, hw) == 0) {
-		PMD_INIT_LOG(INFO, "modern virtio pci detected.");
-		hw->vtpci_ops = &modern_ops;
-		hw->modern    = 1;
-		dev->driver->drv_flags |= RTE_PCI_DRV_INTR_LSC;
-		return 0;
-	}
+	if (virtio_read_caps(hw) == 0)
+		ret = vtpci_modern_init(eth_dev, hw);
+	else
+		ret = vtpci_legacy_init(eth_dev, hw);
 
-	PMD_INIT_LOG(INFO, "trying with legacy virtio pci.");
-	if (legacy_virtio_resource_init(dev) < 0)
+	if (ret < 0)
 		return -1;
 
-	hw->vtpci_ops = &legacy_ops;
-	hw->use_msix = legacy_virtio_has_msix(&dev->addr);
-	hw->io_base  = (uint32_t)(uintptr_t)dev->mem_resource[0].addr;
-	hw->modern   = 0;
+	if (vtpci_dev_init(eth_dev, hw) < 0)
+		return -1;
 
 	return 0;
 }
 
 void
-vtpci_uninit(struct rte_pci_device *dev, struct virtio_hw *hw)
+vtpci_uninit(struct rte_eth_dev *eth_dev, struct virtio_hw *hw)
 {
 	hw->dev  = NULL;
 	hw->vtpci_ops = NULL;
 	hw->use_msix = 0;
 	hw->io_base  = 0;
 	hw->modern   = 0;
-	rte_eal_pci_unmap_device(dev);
+	hw->vtpci_cfg_ops->unmap(hw);
+	vtpci_dev_uninit(eth_dev, hw);
+	vtpci_cfg_uninit(eth_dev, hw);
 }
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 17c7972..7b5ad54 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -222,6 +222,24 @@  struct virtio_pci_common_cfg {
 
 struct virtio_hw;
 
+/* Functions to access pci configuration space */
+struct virtio_pci_cfg_ops {
+	int (*map)(struct virtio_hw *hw);
+	void (*unmap)(struct virtio_hw *hw);
+	void *(*get_mapped_addr)(struct virtio_hw *hw, uint8_t bar, uint32_t offset, uint32_t length);
+	int (*read)(struct virtio_hw *hw, void *buf, size_t len, off_t offset);
+};
+
+/* Functions to access pci device registers */
+struct virtio_pci_dev_ops {
+	uint8_t (*read8)(struct virtio_hw *hw, uint8_t *addr);
+	uint16_t (*read16)(struct virtio_hw *hw, uint16_t *addr);
+	uint32_t (*read32)(struct virtio_hw *hw, uint32_t *addr);
+	void (*write8)(struct virtio_hw *hw, uint8_t *addr, uint8_t val);
+	void (*write16)(struct virtio_hw *hw, uint16_t *addr, uint16_t val);
+	void (*write32)(struct virtio_hw *hw, uint32_t *addr, uint32_t val);
+};
+
 struct virtio_pci_ops {
 	void (*read_dev_cfg)(struct virtio_hw *hw, size_t offset,
 			     void *dst, int len);
@@ -266,6 +284,8 @@  struct virtio_hw {
 	struct virtio_pci_common_cfg *common_cfg;
 	struct virtio_net_config *dev_cfg;
 	const struct virtio_pci_ops *vtpci_ops;
+	const struct virtio_pci_cfg_ops *vtpci_cfg_ops;
+	const struct virtio_pci_dev_ops *vtpci_dev_ops;
 };
 
 /*
@@ -327,8 +347,8 @@  vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
 /*
  * Function declaration from virtio_pci.c
  */
-int vtpci_init(struct rte_pci_device *, struct virtio_hw *);
-void vtpci_uninit(struct rte_pci_device *dev, struct virtio_hw *);
+int vtpci_init(struct rte_eth_dev *, struct virtio_hw *);
+void vtpci_uninit(struct rte_eth_dev *, struct virtio_hw *);
 void vtpci_reset(struct virtio_hw *);
 
 void vtpci_reinit_complete(struct virtio_hw *);