Message ID | 1426848383-15764-1-git-send-email-haifeng.lin@huawei.com (mailing list archive) |
---|---|
State | Superseded, archived |
Headers |
Return-Path: <dev-bounces@dpdk.org>
From: linhaifeng <haifeng.lin@huawei.com>
To: <dev@dpdk.org>
Date: Fri, 20 Mar 2015 18:46:23 +0800
Message-ID: <1426848383-15764-1-git-send-email-haifeng.lin@huawei.com>
X-Mailer: git-send-email 1.8.5.2.msysgit.0
Subject: [dpdk-dev] [PATCH] lib/librte_pmd_virtio fix can't receive packets after rx_q is empty
List-Id: patches and discussions about DPDK <dev.dpdk.org>
Commit Message
Linhaifeng
March 20, 2015, 10:46 a.m. UTC
From: Linhaifeng <haifeng.lin@huawei.com>

If mbuf allocation fails ring_size times, the rx_q may become empty and can never receive packets again, because nb_used stays 0 forever. So we should try to refill even when nb_used is 0; after someone else frees an mbuf, we can resume receiving packets.

Signed-off-by: Linhaifeng <haifeng.lin@huawei.com>
---
 lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
Comments
On 3/20/2015 6:47 PM, linhaifeng wrote:
> From: Linhaifeng <haifeng.lin@huawei.com>
>
> If failed to alloc mbuf ring_size times the rx_q may be empty and can't
> receive any packets forever because nb_used is 0 forever.

Agreed. In the current implementation, once the VQ becomes empty, we have no chance to refill it again. The simple fix is: receive one and then refill one, as other PMDs do. We need to consider which strategy is best in terms of performance in the future. How did you find this? Through code review or a real workload?

> so we should try to refill when nb_used is 0.After otherone free mbuf
> we can restart to receive packets.
>
> Signed-off-by: Linhaifeng <haifeng.lin@huawei.com>
> ---
>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
> index 1d74b34..5c7e0cd 100644
> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  	num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>
>  	if (num == 0)
> -		return 0;
> +		goto refill;
>
>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>
>  	rxvq->packets += nb_rx;
>
> +refill:
>  	/* Allocate new mbuf for the used descriptor */
>  	error = ENOSPC;
>  	while (likely(!virtqueue_full(rxvq))) {
On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng <haifeng.lin@huawei.com>
>>
>> If failed to alloc mbuf ring_size times the rx_q may be empty and can't
>> receive any packets forever because nb_used is 0 forever.
> Agreed. In current implementation, once VQ becomes empty, we have no
> chance to refill it again.
> The simple fix is, receive one and then refill one as other PMDs. Need
> to consider which is best strategy in terms of performance in future.
> How did you find this? through code review or real workload?

We found this through a real workload that uses vhost_net + virtio_pmd to forward packets in a VM.

>> so we should try to refill when nb_used is 0.After otherone free mbuf
>> we can restart to receive packets.
>>
>> Signed-off-by: Linhaifeng <haifeng.lin@huawei.com>
>> ---
>>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
>> index 1d74b34..5c7e0cd 100644
>> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
>> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
>> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>  	num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>>
>>  	if (num == 0)
>> -		return 0;
>> +		goto refill;
>>
>>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>
>>  	rxvq->packets += nb_rx;
>>
>> +refill:
>>  	/* Allocate new mbuf for the used descriptor */
>>  	error = ENOSPC;
>>  	while (likely(!virtqueue_full(rxvq))) {
On 2015/3/21 0:54, Xie, Huawei wrote:
> On 3/20/2015 6:47 PM, linhaifeng wrote:
>> From: Linhaifeng <haifeng.lin@huawei.com>
>>
>> If failed to alloc mbuf ring_size times the rx_q may be empty and can't
>> receive any packets forever because nb_used is 0 forever.
> Agreed. In current implementation, once VQ becomes empty, we have no
> chance to refill it again.
> The simple fix is, receive one and then refill one as other PMDs. Need

"Receive one and then refill one" also has this problem. If the refill also fails, the VQ would become empty too.

> to consider which is best strategy in terms of performance in future.
> How did you find this? through code review or real workload?
>> so we should try to refill when nb_used is 0.After otherone free mbuf
>> we can restart to receive packets.
>>
>> Signed-off-by: Linhaifeng <haifeng.lin@huawei.com>
>> ---
>>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
>> index 1d74b34..5c7e0cd 100644
>> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
>> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
>> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>  	num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>>
>>  	if (num == 0)
>> -		return 0;
>> +		goto refill;
>>
>>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>
>>  	rxvq->packets += nb_rx;
>>
>> +refill:
>>  	/* Allocate new mbuf for the used descriptor */
>>  	error = ENOSPC;
>>  	while (likely(!virtqueue_full(rxvq))) {
On 3/21/2015 10:25 AM, Linhaifeng wrote:
>
> On 2015/3/21 0:54, Xie, Huawei wrote:
>> On 3/20/2015 6:47 PM, linhaifeng wrote:
>>> From: Linhaifeng <haifeng.lin@huawei.com>
>>>
>>> If failed to alloc mbuf ring_size times the rx_q may be empty and can't
>>> receive any packets forever because nb_used is 0 forever.
>> Agreed. In current implementation, once VQ becomes empty, we have no
>> chance to refill it again.
>> The simple fix is, receive one and then refill one as other PMDs. Need
> "Receive one and then refill one" also have this problem.If refill also
> failed the VQ would be empty too.

Correction: refill one, and receive one on success of the refill.

>
>> to consider which is best strategy in terms of performance in future.
>> How did you find this? through code review or real workload?
>>> so we should try to refill when nb_used is 0.After otherone free mbuf
>>> we can restart to receive packets.
>>>
>>> Signed-off-by: Linhaifeng <haifeng.lin@huawei.com>
>>> ---
>>>  lib/librte_pmd_virtio/virtio_rxtx.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
>>> index 1d74b34..5c7e0cd 100644
>>> --- a/lib/librte_pmd_virtio/virtio_rxtx.c
>>> +++ b/lib/librte_pmd_virtio/virtio_rxtx.c
>>> @@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>>  	num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
>>>
>>>  	if (num == 0)
>>> -		return 0;
>>> +		goto refill;
>>>
>>>  	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
>>>  	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
>>> @@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>>
>>>  	rxvq->packets += nb_rx;
>>>
>>> +refill:
>>>  	/* Allocate new mbuf for the used descriptor */
>>>  	error = ENOSPC;
>>>  	while (likely(!virtqueue_full(rxvq))) {
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
index 1d74b34..5c7e0cd 100644
--- a/lib/librte_pmd_virtio/virtio_rxtx.c
+++ b/lib/librte_pmd_virtio/virtio_rxtx.c
@@ -495,7 +495,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	num = num - ((rxvq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
 
 	if (num == 0)
-		return 0;
+		goto refill;
 
 	num = virtqueue_dequeue_burst_rx(rxvq, rcv_pkts, len, num);
 	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
@@ -536,6 +536,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	rxvq->packets += nb_rx;
 
+refill:
 	/* Allocate new mbuf for the used descriptor */
 	error = ENOSPC;
 	while (likely(!virtqueue_full(rxvq))) {