[v1,3/4] vhost: fix async vec buf overrun
Checks
Commit Message
Add check on the async vec buffer usage to prevent the buf overrun.
If vec buf is not sufficient to prepare for next packet's iov
creation, an async transfer will be triggered immediately to free
the vec buf.
Fixes: 78639d54563a ("vhost: introduce async enqueue registration API")
Signed-off-by: Patrick Fu <patrick.fu@intel.com>
---
lib/librte_vhost/vhost.h | 2 +-
lib/librte_vhost/virtio_net.c | 7 ++++++-
2 files changed, 7 insertions(+), 2 deletions(-)
Comments
s/buf/buffer/
On 9/11/20 3:53 AM, Patrick Fu wrote:
> Add check on the async vec buffer usage to prevent the buf overrun.
s/vec/vector/
s/buf/buffer/
> If vec buf is not sufficient to prepare for next packet's iov
same here
> creation, an async transfer will be triggered immediately to free
> the vec buf.
>
> Fixes: 78639d54563a ("vhost: introduce async enqueue registration API")
>
> Signed-off-by: Patrick Fu <patrick.fu@intel.com>
> ---
> lib/librte_vhost/vhost.h | 2 +-
> lib/librte_vhost/virtio_net.c | 7 ++++++-
> 2 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 0af0ac23d..5e669e28f 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -47,7 +47,7 @@
> #define MAX_PKT_BURST 32
>
> #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
> -#define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
> +#define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 4)
>
> #define PACKED_DESC_ENQUEUE_USED_FLAG(w) \
> ((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED | VRING_DESC_F_WRITE) : \
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 70c3377fb..18b91836a 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1492,6 +1492,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> struct rte_vhost_iov_iter *dst_it = it_pool + 1;
> uint16_t n_free_slot, slot_idx;
> uint16_t pkt_err = 0;
> + uint16_t segs_await = 0;
> struct async_inflight_info *pkts_info = vq->async_pkts_info;
> int n_pkts = 0;
>
> @@ -1540,6 +1541,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> dst_iovec += dst_it->nr_segs;
> src_it += 2;
> dst_it += 2;
> + segs_await += src_it->nr_segs;
> } else {
> pkts_info[slot_idx].info = num_buffers;
> vq->async_pkts_inflight_n++;
> @@ -1548,13 +1550,16 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> vq->last_avail_idx += num_buffers;
>
> if (pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
> - (pkt_idx == count - 1 && pkt_burst_idx)) {
> + (pkt_idx == count - 1 && pkt_burst_idx) ||
> + VHOST_MAX_ASYNC_VEC / 2 - segs_await <
> + BUF_VECTOR_MAX) {
Parentheses may help.
Or better, refactor the code for readability.
> n_pkts = vq->async_ops.transfer_data(dev->vid,
> queue_id, tdes, 0, pkt_burst_idx);
> src_iovec = vec_pool;
> dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
> src_it = it_pool;
> dst_it = it_pool + 1;
> + segs_await = 0;
> vq->async_pkts_inflight_n += n_pkts;
>
> if (unlikely(n_pkts < (int)pkt_burst_idx)) {
>
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, September 23, 2020 5:22 PM
> To: Fu, Patrick <patrick.fu@intel.com>; dev@dpdk.org; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: Wang, Zhihong <zhihong.wang@intel.com>; Jiang, Cheng1
> <cheng1.jiang@intel.com>
> Subject: Re: [PATCH v1 3/4] vhost: fix async vec buf overrun
>
> s/buf/buffer/
>
> On 9/11/20 3:53 AM, Patrick Fu wrote:
> > Add check on the async vec buffer usage to prevent the buf overrun.
>
> s/vec/vector/
>
> s/buf/buffer/
>
>
> > If vec buf is not sufficient to prepare for next packet's iov
>
> same here
>
Will fix in v2.
> > if (pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
> > - (pkt_idx == count - 1 && pkt_burst_idx)) {
> > + (pkt_idx == count - 1 && pkt_burst_idx) ||
> > + VHOST_MAX_ASYNC_VEC / 2 - segs_await <
> > + BUF_VECTOR_MAX) {
>
> Parentheses may help.
>
> Or better, refactor the code for readability.
>
Refactoring the code seems to be a little tricky. I will add parentheses together with some comments to improve readability.
Thanks,
Patrick