| Message ID | 1516185726-31797-1-git-send-email-junjie.j.chen@intel.com (mailing list archive) |
|---|---|
| State | Superseded, archived |
| Context | Check | Description |
|---|---|---|
| ci/checkpatch | success | coding style OK |
| ci/Intel-compilation | fail | Compilation issues |
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Junjie Chen
> Sent: Wednesday, January 17, 2018 6:42 PM
> To: yliu@fridaylinux.org; maxime.coquelin@redhat.com
> Cc: dev@dpdk.org; Chen, Junjie J
> Subject: [dpdk-dev] [PATCH] vhost: dequeue zero copy should restore mbuf before return to pool
>
> dequeue zero copy change buf_addr and buf_iova of mbuf, and return
> to mbuf pool without restore them, it breaks vm memory if others allocate
> mbuf from same pool since mbuf reset doesn't reset buf_addr and buf_iova.
>
> Signed-off-by: Junjie Chen <junjie.j.chen@intel.com>
> ---
>  lib/librte_vhost/virtio_net.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 568ad0e..e9aaf6d 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1158,6 +1158,26 @@ mbuf_is_consumed(struct rte_mbuf *m)
>  	return true;
>  }
>
> +
> +static __rte_always_inline void
> +restore_mbuf(struct rte_mbuf *m)
> +{
> +	uint32_t mbuf_size, priv_size;
> +
> +	while (m) {
> +		priv_size = rte_pktmbuf_priv_size(m->pool);
> +		mbuf_size = sizeof(struct rte_mbuf) + priv_size;
> +		/* start of buffer is after mbuf structure and priv data */
> +		m->priv_size = priv_size;

I don't think we need to restore priv_size. Refer to its definition in rte_mbuf:
"Size of the application private data. In case of an indirect mbuf, it
stores the direct mbuf private data size."

Thanks,
Jianfeng

> +
> +		m->buf_addr = (char *)m + mbuf_size;
> +		m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
> +		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM,
> +				      (uint16_t)m->buf_len);
> +		m = m->next;
> +	}
> +}
> +
>  uint16_t
>  rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> @@ -1209,6 +1229,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>  			nr_updated += 1;
>
>  			TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
> +			restore_mbuf(zmbuf->mbuf);
>  			rte_pktmbuf_free(zmbuf->mbuf);
>  			put_zmbuf(zmbuf);
>  			vq->nr_zmbuf -= 1;
> --
> 2.0.1
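For reference, the fields restore_mbuf() recomputes are the same ones that are derived once when the pktmbuf pool is populated. The sketch below paraphrases the default rte_pktmbuf_init() pool callback (it is not the exact library code); it shows that priv_size is a pool-wide constant set at population time and never touched by the zero-copy dequeue path, which is the point of the review comment above.

```c
#include <rte_mbuf.h>

/*
 * Simplified sketch of per-mbuf initialization at pool-population time,
 * paraphrasing the default rte_pktmbuf_init() callback (not the exact
 * library code). restore_mbuf() recomputes the same buffer fields;
 * priv_size is fixed per pool, so restoring it is redundant.
 */
static void
pktmbuf_init_sketch(struct rte_mempool *mp, struct rte_mbuf *m)
{
	uint32_t priv_size = rte_pktmbuf_priv_size(mp);
	uint32_t mbuf_size = sizeof(struct rte_mbuf) + priv_size;

	m->priv_size = priv_size;                    /* fixed per pool */
	m->buf_addr = (char *)m + mbuf_size;         /* data room follows mbuf + priv */
	m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
	m->buf_len = rte_pktmbuf_data_room_size(mp);
	m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
}
```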
> >
> > dequeue zero copy change buf_addr and buf_iova of mbuf, and return to
> > mbuf pool without restore them, it breaks vm memory if others allocate
> > mbuf from same pool since mbuf reset doesn't reset buf_addr and buf_iova.
> >
> > Signed-off-by: Junjie Chen <junjie.j.chen@intel.com>
> > ---
> >  lib/librte_vhost/virtio_net.c | 21 +++++++++++++++++++++
> >  1 file changed, 21 insertions(+)
> >
> > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > index 568ad0e..e9aaf6d 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -1158,6 +1158,26 @@ mbuf_is_consumed(struct rte_mbuf *m)
> >  	return true;
> >  }
> >
> > +
> > +static __rte_always_inline void
> > +restore_mbuf(struct rte_mbuf *m)
> > +{
> > +	uint32_t mbuf_size, priv_size;
> > +
> > +	while (m) {
> > +		priv_size = rte_pktmbuf_priv_size(m->pool);
> > +		mbuf_size = sizeof(struct rte_mbuf) + priv_size;
> > +		/* start of buffer is after mbuf structure and priv data */
> > +		m->priv_size = priv_size;
>
> I don't think we need to restore priv_size. Refer to its definition in rte_mbuf:
> "Size of the application private data. In case of an indirect mbuf, it
> stores the direct mbuf private data size."
>
> Thanks,
> Jianfeng

You are right. I also removed the restore for data_len, since it is reset when
allocating. Please see v2. Thanks.

> > +
> > +		m->buf_addr = (char *)m + mbuf_size;
> > +		m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
> > +		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM,
> > +				      (uint16_t)m->buf_len);
> > +		m = m->next;
> > +	}
> > +}
> > +
> >  uint16_t
> >  rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
> >  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> > @@ -1209,6 +1229,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
> >  			nr_updated += 1;
> >
> >  			TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
> > +			restore_mbuf(zmbuf->mbuf);
> >  			rte_pktmbuf_free(zmbuf->mbuf);
> >  			put_zmbuf(zmbuf);
> >  			vq->nr_zmbuf -= 1;
> > --
> > 2.0.1
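Based on this exchange, a trimmed helper that only recomputes the fields the zero-copy path actually clobbers might look like the sketch below. This restore_mbuf_trimmed() is purely an illustration of the suggested change; it is not the actual v2 patch, which is not reproduced here.

```c
#include <rte_mbuf.h>

/*
 * Hypothetical trimmed variant of restore_mbuf() following the review
 * comment: the priv_size assignment is dropped, and only buf_addr,
 * buf_iova and data_off are recomputed. Illustration only, not the v2 patch.
 */
static __rte_always_inline void
restore_mbuf_trimmed(struct rte_mbuf *m)
{
	uint32_t mbuf_size, priv_size;

	while (m) {
		priv_size = rte_pktmbuf_priv_size(m->pool);
		mbuf_size = sizeof(struct rte_mbuf) + priv_size;

		/* start of buffer is after mbuf structure and priv data */
		m->buf_addr = (char *)m + mbuf_size;
		m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM,
				      (uint16_t)m->buf_len);
		m = m->next;
	}
}
```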
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 568ad0e..e9aaf6d 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1158,6 +1158,26 @@ mbuf_is_consumed(struct rte_mbuf *m)
 	return true;
 }
 
+
+static __rte_always_inline void
+restore_mbuf(struct rte_mbuf *m)
+{
+	uint32_t mbuf_size, priv_size;
+
+	while (m) {
+		priv_size = rte_pktmbuf_priv_size(m->pool);
+		mbuf_size = sizeof(struct rte_mbuf) + priv_size;
+		/* start of buffer is after mbuf structure and priv data */
+		m->priv_size = priv_size;
+
+		m->buf_addr = (char *)m + mbuf_size;
+		m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
+		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM,
+				      (uint16_t)m->buf_len);
+		m = m->next;
+	}
+}
+
 uint16_t
 rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
@@ -1209,6 +1229,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 			nr_updated += 1;
 
 			TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
+			restore_mbuf(zmbuf->mbuf);
 			rte_pktmbuf_free(zmbuf->mbuf);
 			put_zmbuf(zmbuf);
 			vq->nr_zmbuf -= 1;
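For context, the fields this diff restores are the ones the zero-copy dequeue path overwrites when it points the mbuf directly at the guest buffer instead of copying the payload into the mbuf's own data room. The sketch below illustrates that repointing under assumed parameter names (guest_va, guest_iova, len); it is not the upstream copy path.

```c
#include <rte_mbuf.h>

/*
 * Illustration only (not upstream code): the zero-copy dequeue path
 * redirects the mbuf's buffer fields at the guest buffer. guest_va,
 * guest_iova and len stand in for values derived from the vring
 * descriptor. restore_mbuf() has to undo exactly this repointing before
 * the mbuf is returned to its pool.
 */
static inline void
attach_mbuf_to_guest_buf(struct rte_mbuf *m, void *guest_va,
			 rte_iova_t guest_iova, uint32_t len)
{
	m->buf_addr = guest_va;    /* was: (char *)m + mbuf_size           */
	m->buf_iova = guest_iova;  /* was: rte_mempool_virt2iova(m) + ...  */
	m->data_off = 0;           /* payload starts at the guest buffer   */
	m->data_len = (uint16_t)len;
	m->pkt_len = len;
}
```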
Dequeue zero copy changes buf_addr and buf_iova of the mbuf and returns it
to the mbuf pool without restoring them. This breaks VM memory if others
allocate mbufs from the same pool, since mbuf reset does not reset buf_addr
and buf_iova.

Signed-off-by: Junjie Chen <junjie.j.chen@intel.com>
---
 lib/librte_vhost/virtio_net.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
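To make the failure mode described in this commit message concrete, the sketch below (illustrative only, not part of the patch) shows how a stale zero-copy mbuf leaks back out of the shared pool; `pool` is assumed to be the pktmbuf pool that the vhost zero-copy queue uses.

```c
#include <string.h>
#include <rte_mbuf.h>

/*
 * Illustration of the bug (not code from the patch): a zero-copy mbuf was
 * freed with buf_addr/buf_iova still pointing into guest memory, and the
 * pool is shared with other users.
 */
static void
stale_mbuf_corruption_example(struct rte_mempool *pool)
{
	/* May hand back the stale mbuf; allocation resets offsets and lengths
	 * via rte_pktmbuf_reset(), but buf_addr/buf_iova are left untouched. */
	struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
	if (m == NULL)
		return;

	/* The returned pointer is derived from buf_addr, so this write lands
	 * in the guest's memory rather than in the mbuf's own data room. */
	char *p = rte_pktmbuf_append(m, 64);
	if (p != NULL)
		memset(p, 0, 64);

	rte_pktmbuf_free(m);
}
```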