[v2,2/2] vhost: add new mbuf allocation failure statistic
Commit Message
This patch introduces a new, per-virtqueue, mbuf allocation
failure statistic. It can be useful to troubleshoot packet
drops due to insufficient mempool size or memory leaks.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vhost.c | 1 +
lib/vhost/vhost.h | 1 +
lib/vhost/virtio_net.c | 17 +++++++++++++----
3 files changed, 15 insertions(+), 4 deletions(-)
Comments
On Wed, Jan 31, 2024 at 8:53 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
>
> This patch introduces a new, per virtqueue, mbuf allocation
> failure statistic. It can be useful to troubleshoot packets
> drops due to insufficient mempool size or memory leaks.
>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Having a stat for such situation will be useful.
I just have one comment, though it is not really related to this change itself.
[snip]
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index 9951842b9f..1359c5fb1f 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -2996,6 +2996,7 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
> if (mbuf_avail == 0) {
> cur = rte_pktmbuf_alloc(mbuf_pool);
> if (unlikely(cur == NULL)) {
> + vq->stats.mbuf_alloc_failed++;
> VHOST_DATA_LOG(dev->ifname, ERR,
> "failed to allocate memory for mbuf.");
This error log is scary, as it means the datapath can be slowed
down for each multisegment mbuf in the event of a (maybe
temporary) mbuf shortage.
Besides, no other mbuf allocation in the vhost library datapath
generates such a log.
I would remove it, probably in a separate patch.
WDYT?
> goto error;
On 2/1/24 09:10, David Marchand wrote:
> On Wed, Jan 31, 2024 at 8:53 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>
> [snip]
>
> This error log here is scary as it means the datapath can be slowed
> down for each multisegment mbuf in the event of a mbuf (maybe
> temporary) shortage.
> Besides no other mbuf allocation in the vhost library datapath
> generates such log.
>
> I would remove it, probably in a separate patch.
> WDYT?
Agree, we should not have such a log in the datapath.
And now that we have the stat, it is even less useful.
Regards,
Maxime
On Thu, Feb 1, 2024 at 9:29 AM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
> On 2/1/24 09:10, David Marchand wrote:
> [snip]
> > I would remove it, probably in a separate patch.
> > WDYT?
>
> Agree, we should not have such log in the datapath.
> And now that we have the stat, it is even less useful.
Ok, nevertheless, you can add my:
Reviewed-by: David Marchand <david.marchand@redhat.com>
On 1/31/24 20:53, Maxime Coquelin wrote:
> This patch introduces a new, per virtqueue, mbuf allocation
> failure statistic.
> [snip]
Applied to next-virtio tree.
Thanks,
Maxime
@@ -55,6 +55,7 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
{"inflight_submitted", offsetof(struct vhost_virtqueue, stats.inflight_submitted)},
{"inflight_completed", offsetof(struct vhost_virtqueue, stats.inflight_completed)},
+ {"mbuf_alloc_failed", offsetof(struct vhost_virtqueue, stats.mbuf_alloc_failed)},
};
#define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
@@ -156,6 +156,7 @@ struct virtqueue_stats {
uint64_t iotlb_misses;
uint64_t inflight_submitted;
uint64_t inflight_completed;
+ uint64_t mbuf_alloc_failed;
uint64_t guest_notifications_suppressed;
/* Counters below are atomic, and should be incremented as such. */
RTE_ATOMIC(uint64_t) guest_notifications;
@@ -2996,6 +2996,7 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
if (mbuf_avail == 0) {
cur = rte_pktmbuf_alloc(mbuf_pool);
if (unlikely(cur == NULL)) {
+ vq->stats.mbuf_alloc_failed++;
VHOST_DATA_LOG(dev->ifname, ERR,
"failed to allocate memory for mbuf.");
goto error;
@@ -3123,8 +3124,10 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
count = RTE_MIN(count, avail_entries);
VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count);
- if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
+ if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
+ vq->stats.mbuf_alloc_failed += count;
return 0;
+ }
for (i = 0; i < count; i++) {
struct buf_vector buf_vec[BUF_VECTOR_MAX];
@@ -3494,8 +3497,10 @@ virtio_dev_tx_packed(struct virtio_net *dev,
{
uint32_t pkt_idx = 0;
- if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
+ if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
+ vq->stats.mbuf_alloc_failed += count;
return 0;
+ }
do {
rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
@@ -3745,8 +3750,10 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
count = RTE_MIN(count, avail_entries);
VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count);
- if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
+ if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) {
+ vq->stats.mbuf_alloc_failed += count;
goto out;
+ }
for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
uint16_t head_idx = 0;
@@ -4035,8 +4042,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
async_iter_reset(async);
- if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
+ if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) {
+ vq->stats.mbuf_alloc_failed += count;
goto out;
+ }
do {
struct rte_mbuf *pkt = pkts_prealloc[pkt_idx];