[v2,3/4] vhost: allocate and free packets in bulk
Commit Message
Now that all allocation and freeing have been moved together, use the
faster bulk versions instead of handling packets one by one.
Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
---
lib/librte_vhost/virtio_net.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
Comments
On 4/7/21 12:17 PM, Balazs Nemeth wrote:
> Now that all allocation and freeing has been moved together, use the
> faster bulk versions instead of handling packets one by one.
>
> Signed-off-by: Balazs Nemeth <bnemeth@redhat.com>
> ---
> lib/librte_vhost/virtio_net.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 496f750e3..2f0c97b91 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -2469,10 +2469,10 @@ virtio_dev_tx_packed(struct virtio_net *dev,
> {
> uint32_t pkt_idx = 0;
> uint32_t remained = count;
> - uint16_t i;
>
> - for (i = 0; i < count; ++i)
> - pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
> + if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
> + return 0;
> + }
OK, get it now :)
Maybe just add a comment in previous patches that it is going to be
handled when moving to bulk allocation.
>
> do {
> rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
> @@ -2497,8 +2497,9 @@ virtio_dev_tx_packed(struct virtio_net *dev,
>
> } while (remained);
>
> - for (i = pkt_idx; i < count; ++i)
> - rte_pktmbuf_free(pkts[i]);
> + if (pkt_idx != count) {
> + rte_pktmbuf_free_bulk(&pkts[pkt_idx], count - pkt_idx);
> + }
>
> if (vq->shadow_used_idx) {
> do_data_copy_dequeue(vq);
>
On Thu, Apr 15, 2021 at 5:33 PM Maxime Coquelin
<maxime.coquelin@redhat.com> wrote:
> On 4/7/21 12:17 PM, Balazs Nemeth wrote:
> > [commit message and diff snipped, quoted in full above]
>
> OK, get it now :)
> Maybe just add a comment in previous patches that it is going to be
> handled when moving to bulk allocation.
Looking at the whole series, it is easy to read and understand as a
single patch.
And it avoids this intermediate issue.
On 4/15/21 5:45 PM, David Marchand wrote:
> On Thu, Apr 15, 2021 at 5:33 PM Maxime Coquelin
> <maxime.coquelin@redhat.com> wrote:
>> On 4/7/21 12:17 PM, Balazs Nemeth wrote:
>>> [commit message and diff snipped, quoted in full above]
>>
>> OK, get it now :)
>> Maybe just add a comment in previous patches that it is going to be
>> handled when moving to bulk allocation.
>
> Looking at the whole series, it is easy to read and understand as a
> single patch.
> And it avoids this intermediate issue.
I agree with David, better to squash the first three patches.
Thanks,
Maxime
On 4/15/21 5:50 PM, Maxime Coquelin wrote:
>
>
> On 4/15/21 5:45 PM, David Marchand wrote:
>> On Thu, Apr 15, 2021 at 5:33 PM Maxime Coquelin
>> <maxime.coquelin@redhat.com> wrote:
>>> On 4/7/21 12:17 PM, Balazs Nemeth wrote:
>>>> [commit message and diff snipped, quoted in full above]
>>>
>>> OK, get it now :)
>>> Maybe just add a comment in previous patches that it is going to be
>>> handled when moving to bulk allocation.
>>
>> Looking at the whole series, it is easy to read and understand as a
>> single patch.
>> And it avoids this intermediate issue.
>
> I agree with David, better to squash the first three patches.
As David told me off-list, all the series can be squashed since the last
patch is a consequence of doing bulk allocation.
Cheers,
Maxime
> Thanks,
> Maxime
>