Message ID | 1450098032-21198-2-git-send-email-sshukla@mvista.com (mailing list archive)
---|---
State | Superseded, archived
From | Santosh Shukla <sshukla@mvista.com>
Date | Mon, 14 Dec 2015 18:30:20 +0530
Subject | [dpdk-dev] [PATCH v2 01/13] virtio: Introduce config RTE_VIRTIO_INC_VECTOR
Commit Message
Santosh Shukla
Dec. 14, 2015, 1 p.m. UTC
virtio_recv_pkts_vec and the other virtio vector APIs are written for SSE/AVX
instructions. For arm64 in particular, a virtio vector implementation does not
exist yet (TODO).

So the virtio PMD driver won't build for targets like i686 and arm64. By setting
RTE_VIRTIO_INC_VECTOR=n, the driver can be built for non-SSE/AVX targets and will
work in non-vectored virtio mode.
Signed-off-by: Santosh Shukla <sshukla@mvista.com>
---
config/common_linuxapp | 1 +
drivers/net/virtio/Makefile | 2 +-
drivers/net/virtio/virtio_rxtx.c | 7 +++++++
3 files changed, 9 insertions(+), 1 deletion(-)
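[Editor's note: the option works purely at compile time. Below is a minimal, self-contained sketch of the gating pattern the patch applies; the function names are illustrative, not the PMD's real symbols.]

#include <stdio.h>

/* Build with -DRTE_VIRTIO_INC_VECTOR to get the vector path; without
 * it, only the scalar path is compiled and used. */

static void recv_pkts_scalar(void) { puts("scalar rx burst"); }

#ifdef RTE_VIRTIO_INC_VECTOR
static void recv_pkts_vec(void) { puts("vector (SSE/AVX) rx burst"); }
#endif

int main(void)
{
	void (*rx_burst)(void) = recv_pkts_scalar;

#ifdef RTE_VIRTIO_INC_VECTOR
	/* Reachable only when the config option is enabled at build time,
	 * mirroring how the patch guards virtio_recv_pkts_vec(). */
	rx_burst = recv_pkts_vec;
#endif

	rx_burst();
	return 0;
}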
Comments
On Mon, Dec 14, 2015 at 6:30 PM, Santosh Shukla <sshukla@mvista.com> wrote:
> virtio_recv_pkts_vec and other virtio vector friend apis are written for
> sse/avx instructions. For arm64 in particular, virtio vector implementation
> does not exist(todo).
>
> So virtio pmd driver wont build for targets like i686, arm64. By making
> RTE_VIRTIO_INC_VECTOR=n, Driver can build for non-sse/avx targets and will
> work in non-vectored virtio mode.
>
> Signed-off-by: Santosh Shukla <sshukla@mvista.com>
> ---

Ping? Any review / comment on this patch is much appreciated. Thanks.

> [the diffstat and full patch body were quoted here; they are identical to
> the diff at the bottom of this page]
2015-12-17 17:32, Santosh Shukla:
> On Mon, Dec 14, 2015 at 6:30 PM, Santosh Shukla <sshukla@mvista.com> wrote:
> > [commit message quoted above]
> >
> > Signed-off-by: Santosh Shukla <sshukla@mvista.com>
> > ---
>
> Ping?
>
> any review / comment on this patch much appreciated. Thanks

Why not check for SSE/AVX support instead of adding yet another config option?
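[Editor's note: for reference, a hedged sketch of the runtime alternative Thomas is suggesting, using DPDK's CPU-flag API. rte_cpu_get_flag_enabled() exists in DPDK of this era, but the SSE flags are only defined on x86 builds, which is exactly the portability catch raised in the reply below.]

#include <rte_cpuflags.h>
#include <rte_ethdev.h>

#include "virtio_ethdev.h" /* assumed: declares virtio_recv_pkts{,_vec}() */

/* Sketch only: select the rx burst function from the CPU's actual
 * capabilities instead of a build-time option. RTE_CPUFLAG_SSE4_2 is
 * only defined for x86 targets, so this alone would not fix the arm64
 * build problem the patch is addressing. */
static void
virtio_select_rx_burst(struct rte_eth_dev *dev)
{
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_2))
		dev->rx_pkt_burst = virtio_recv_pkts_vec; /* SSE path */
	else
		dev->rx_pkt_burst = virtio_recv_pkts;     /* scalar path */
}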
On Thu, Dec 17, 2015 at 5:33 PM, Thomas Monjalon <thomas.monjalon@6wind.com> wrote:
> 2015-12-17 17:32, Santosh Shukla:
> > [commit message and ping quoted above]
>
> Why not check for SSE/AVX support instead of adding yet another config option?

Ok, but a check for SSE/AVX across the patch won't hold true for future virtio vector implementations, say an sse2neon-style port for the arm/arm64 case. Users would then have to keep appending checks (for sse2neon, for example, and so forth). On the other hand, the motivation for the INC_VECTOR config option was inspired by IXGBE and the other PMD drivers, which support a vectored SSE/AVX RX path but can also work without vectored mode. Current virtio is missing such support, and arm has no sse2neon-style vector implementation right now, so this is a blocker for the arm case. Keeping the virtio PMD flexible enough to work in non-vectored mode is also a requirement / a feature.
On Thu, 17 Dec 2015 17:32:38 +0530 Santosh Shukla <sshukla@mvista.com> wrote:
> On Mon, Dec 14, 2015 at 6:30 PM, Santosh Shukla <sshukla@mvista.com> wrote:
> > [commit message quoted above]
>
> Ping?
>
> any review / comment on this patch much appreciated. Thanks

The patches I posted (and which were ignored by Intel) to support indirect and any layout should have a much bigger performance gain than all this low-level SSE bit twiddling.
On Thu, Dec 17, 2015 at 03:24:35PM -0800, Stephen Hemminger wrote:
> On Thu, 17 Dec 2015 17:32:38 +0530 Santosh Shukla <sshukla@mvista.com> wrote:
> > [commit message and ping quoted above]
>
> The patches I posted (and were ignored by Intel) to support indirect

Sorry, I thought it was reviewed and got applied (and I just started to review patches for virtio recently). So, would you please send it out again? I will have a review and test ASAP.

--yliu

> and any layout should have much bigger performance gain than all this
> low level SSE bit twiddling.
On 12/18/2015 7:25 AM, Stephen Hemminger wrote:
> On Thu, 17 Dec 2015 17:32:38 +0530 Santosh Shukla <sshukla@mvista.com> wrote:
> > [commit message and ping quoted above]
>
> The patches I posted (and were ignored by Intel) to support indirect
> and any layout should have much bigger performance gain than all this
> low level SSE bit twiddling.

Hi Stephen:

We only did SSE twiddling for RX, which almost doubles the performance compared to the normal path in the virtio/vhost performance test case. Enabling the indirect and any-layout features is mostly for TX. We also did some optimization for the single-segment and non-offload case in TX, without using SSE, which also gives ~60% performance improvement in Qian's results. My optimization is mostly for the single-segment and non-offload case, which I call simple rx/tx. I plan to add a virtio/vhost performance benchmark so that we can easily measure the performance difference for each patch.

The indirect and any-layout features are useful for multi-segment transmitted packet mbufs. I had acked your patch at the first opportunity and thought it was applied. I don't understand why you say it was ignored by Intel.
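[Editor's note: as a concrete anchor for the "simple rx/tx" case Huawei describes, here is a hedged sketch of how an application of this era opts into that path via the TX queue flags the diff tests. VIRTIO_SIMPLE_FLAGS is (ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOOFFLOADS); port and queue numbers are illustrative.]

#include <rte_ethdev.h>

/* Sketch: request the single-segment, no-offload TX setup that lets the
 * virtio PMD pick virtio_xmit_pkts_simple()/virtio_recv_pkts_vec(),
 * provided mergeable RX buffers are not negotiated. */
static int
setup_simple_txq(uint8_t port_id, uint16_t queue_id, uint16_t nb_desc)
{
	struct rte_eth_txconf txconf = {
		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
			     ETH_TXQ_FLAGS_NOOFFLOADS,
	};

	return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      &txconf);
}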
2015-12-18 09:52, Xie, Huawei:
> [Huawei's reply quoted in full above]

There was an error and Stephen never replied nor pinged about it:
http://dpdk.org/ml/archives/dev/2015-October/026984.html
It happens.

Reminder: it is the responsibility of the author to get patches reviewed and accepted. Please let's avoid useless blaming.
On Fri, Dec 18, 2015 at 4:54 AM, Stephen Hemminger <stephen@networkplumber.org> wrote:
> On Thu, 17 Dec 2015 17:32:38 +0530 Santosh Shukla <sshukla@mvista.com> wrote:
> > [commit message and ping quoted above]
>
> The patches I posted (and were ignored by Intel) to support indirect
> and any layout should have much bigger performance gain than all this
> low level SSE bit twiddling.

I am a little confused - do we still care about this patch?
On Fri, 18 Dec 2015 09:52:29 +0000 "Xie, Huawei" <huawei.xie@intel.com> wrote:
> > low level SSE bit twiddling.
> Hi Stephen:
> We only did SSE twiddling for RX, which almost doubles the performance
> [rest of Huawei's reply quoted above]

Sorry, I did not mean to blame Intel... more that I wondered why it didn't get into 2.2. It turns out any-layout/indirect helps all transmits, because packets can then take a single TX descriptor rather than multiple ones.
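[Editor's note: to make the single-descriptor point concrete: with VIRTIO_RING_F_INDIRECT_DESC negotiated, all segments of a chained mbuf go into a side table and the ring itself is charged one slot. A hedged, self-contained sketch follows; the struct layout is per the virtio spec and matches struct vring_desc in the PMD's virtio_ring.h, but the PMD's real code differs.]

#include <stdint.h>

/* Descriptor layout per the virtio spec. */
struct vring_desc {
	uint64_t addr;   /* guest-physical buffer address */
	uint32_t len;    /* buffer length in bytes */
	uint16_t flags;  /* F_NEXT: chained; F_INDIRECT: points at a table */
	uint16_t next;   /* index of the next descriptor in the chain */
};

#define VRING_DESC_F_NEXT     1
#define VRING_DESC_F_INDIRECT 4

/* One ring slot points at a table holding header + N data segments, so
 * a multi-segment packet consumes a single descriptor from the ring. */
static void
post_indirect(struct vring_desc *ring_slot, uint64_t table_phys,
	      uint16_t nb_segs)
{
	ring_slot->addr  = table_phys;
	ring_slot->len   = nb_segs * sizeof(struct vring_desc);
	ring_slot->flags = VRING_DESC_F_INDIRECT;
	ring_slot->next  = 0; /* unused: the chain lives in the table */
}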
2015-12-18 09:33, Stephen Hemminger:
> On Fri, 18 Dec 2015 09:52:29 +0000 "Xie, Huawei" <huawei.xie@intel.com> wrote:
> > [Huawei's reply quoted above]
>
> Sorry, did not mean to blame Intel, ... more that why didn't it get in 2.2?

I've already answered this question:
http://dpdk.org/ml/archives/dev/2015-December/030540.html
There was a compilation error and you have not followed up.
On Fri, Dec 18, 2015 at 06:16:36PM +0530, Santosh Shukla wrote:
> On Fri, Dec 18, 2015 at 4:54 AM, Stephen Hemminger <stephen@networkplumber.org> wrote:
> > [commit message, ping, and Stephen's reply quoted above]
>
> I am a little confused - do we still care about this patch?

Santosh,

As a reviewer who still has a lot of work to do, I don't have the bandwidth to review _all_ your patches carefully at once. That is to say, I will only comment when I find something that should be commented on, from time to time, as I put more thought into them. For the patches I have not commented on, it could mean that they are okay to me so far, or that I'm not quite sure they are okay but don't find anything obviously wrong; hence no comments so far. Later, when I get time, I will revisit them, think more, and either ACK them or comment on them.

So, you can simply keep those patches unchanged if they received no comments, fix the other comments, and send out a new version at any time that works for you.

--yliu
diff --git a/config/common_linuxapp b/config/common_linuxapp
index ba9e55d..275fb40 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -273,6 +273,7 @@ CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_TX=n
 CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DRIVER=n
 CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_DUMP=n
+CONFIG_RTE_VIRTIO_INC_VECTOR=y
 
 #
 # Compile burst-oriented VMXNET3 PMD driver
diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
index 43835ba..25a842d 100644
--- a/drivers/net/virtio/Makefile
+++ b/drivers/net/virtio/Makefile
@@ -50,7 +50,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtqueue.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_pci.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_ethdev.c
-SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_rxtx_simple.c
+SRCS-$(CONFIG_RTE_VIRTIO_INC_VECTOR) += virtio_rxtx_simple.c
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_eal lib/librte_ether
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 74b39ef..23be1ff 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -438,7 +438,9 @@ virtio_dev_rx_queue_setup(struct rte_eth_dev *dev,
 
 	dev->data->rx_queues[queue_idx] = vq;
 
+#ifdef RTE_VIRTIO_INC_VECTOR
 	virtio_rxq_vec_setup(vq);
+#endif
 
 	return 0;
 }
@@ -464,7 +466,10 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			const struct rte_eth_txconf *tx_conf)
 {
 	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
+
+#ifdef RTE_VIRTIO_INC_VECTOR
 	struct virtio_hw *hw = dev->data->dev_private;
+#endif
 	struct virtqueue *vq;
 	uint16_t tx_free_thresh;
 	int ret;
@@ -477,6 +482,7 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
+#ifdef RTE_VIRTIO_INC_VECTOR
 	/* Use simple rx/tx func if single segment and no offloads */
 	if ((tx_conf->txq_flags & VIRTIO_SIMPLE_FLAGS) == VIRTIO_SIMPLE_FLAGS &&
 	    !vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
 		dev->tx_pkt_burst = virtio_xmit_pkts_simple;
@@ -485,6 +491,7 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		dev->rx_pkt_burst = virtio_recv_pkts_vec;
 		use_simple_rxtx = 1;
 	}
+#endif
 
 	ret = virtio_dev_queue_setup(dev, VTNET_TQ, queue_idx, vtpci_queue_idx,
 			nb_desc, socket_id, &vq);
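[Editor's note: for completeness, disabling the option on a non-SSE target would then be a one-line defconfig override. The arm64 defconfig file name below is an assumption; arm64 target support was still being added to DPDK by this same patch series.]

# Hypothetical override, e.g. in config/defconfig_arm64-armv8a-linuxapp-gcc:
CONFIG_RTE_VIRTIO_INC_VECTOR=n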