[v4,00/14] vhost packed ring performance optimization

Message ID 20191009133849.69002-1-yong.liu@intel.com (mailing list archive)


Marvin Liu Oct. 9, 2019, 1:38 p.m. UTC
Packed ring has a more compact ring format and thus can significantly
reduce the number of cache misses, which leads to better performance.
This has been proved in the virtio user driver: on a normal E5 Xeon CPU,
single core performance can rise by 12%.


However, vhost performance with packed ring decreased. Analysis showed
that most of the extra cost came from calculating each descriptor flag,
which depends on the ring wrap counter. Moreover, both frontend and
backend need to write the same descriptors, which causes cache
contention. In particular, while vhost is running its enqueue function,
the virtio packed ring refill function may write to the same cache line.
This extra cache cost erodes the benefit of the reduced cache misses.
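The per-descriptor flag cost mentioned above can be sketched as follows; the helper name and exact form are illustrative rather than the series' code. In a packed ring, a descriptor is available when its AVAIL bit matches the driver's wrap counter and its USED bit does not, so every availability check depends on tracking that counter:

```c
#include <stdbool.h>
#include <stdint.h>

/* Packed ring descriptor flag bits, as in the virtio 1.1 spec. */
#define VRING_DESC_F_AVAIL (1 << 7)
#define VRING_DESC_F_USED  (1 << 15)

/* Illustrative helper: a descriptor is available when its AVAIL bit
 * equals the wrap counter and its USED bit differs from it. */
static inline bool
desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & VRING_DESC_F_AVAIL);
	bool used  = !!(flags & VRING_DESC_F_USED);

	return avail == wrap_counter && used != wrap_counter;
}
```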

To optimize vhost packed ring performance, the vhost enqueue and dequeue
functions are split into fast and normal paths.

Several methods are taken in the fast path:
  Handle the descriptors in one cache line as a batch.
  Split the loop function into more pieces and unroll them.
  Check in advance whether I/O space can be copied directly into mbuf
    space, and vice versa.
  Check in advance whether descriptor mapping is successful.
  Distinguish the vhost used ring update function for enqueue and dequeue.
  Buffer as many dequeue used descriptors as possible.
  Update enqueue used descriptors by cache line.
  Disable software prefetch if hardware can do better.
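The batch-gating idea behind the first items can be sketched like this. The constants and helper are hypothetical (the series' actual checks also cover contiguous guest-memory mapping and direct-copy eligibility); the point is that the fast path is taken only when a whole cache line of descriptors can be handled together:

```c
#include <stdbool.h>
#include <stdint.h>

#define VRING_DESC_F_AVAIL (1 << 7)
#define VRING_DESC_F_USED  (1 << 15)

/* Hypothetical batch size: one 64-byte cache line holds four 16-byte
 * packed descriptors. */
#define PACKED_BATCH_SIZE 4

/* Illustrative gate for the fast path: take it only when a full batch
 * fits before the ring end and every descriptor in it is available;
 * otherwise fall back to the single-descriptor (normal) path. */
static bool
can_use_batch_path(const uint16_t *desc_flags, uint16_t avail_idx,
		   uint16_t ring_size, bool wrap_counter)
{
	if (avail_idx + PACKED_BATCH_SIZE > ring_size)
		return false;

	for (int i = 0; i < PACKED_BATCH_SIZE; i++) {
		uint16_t f = desc_flags[avail_idx + i];
		bool avail = !!(f & VRING_DESC_F_AVAIL);
		bool used  = !!(f & VRING_DESC_F_USED);

		if (avail != wrap_counter || used == wrap_counter)
			return false;
	}
	return true;
}
```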

With all these methods applied, single core vhost PvP performance with
64B packets on a Xeon 8180 can boost 40%.

v4:
- Support meson build
- Remove memory region cache (no clear performance gain, and it breaks ABI)
- Do not assume ring size is a power of two

v3:
- Check available index overflow
- Remove dequeue remained descs number check
- Remove changes in split ring datapath
- Call the memory write barrier only once when updating used flags
- Rename some functions and macros
- Code style optimization

v2:
- Utilize compiler pragmas to unroll loops, distinguishing clang/icc/gcc
- Change buffered dequeue used desc number to (RING_SZ - PKT_BURST)
- Optimize dequeue used ring update when in_order is negotiated
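The compiler-distinguishing unroll pragma can be sketched as below. The macro name and unroll factor are illustrative, and detection here uses predefined compiler macros, whereas the series detects pragma support at build time; icc would use `_Pragma("unroll (4)")`:

```c
#include <stdint.h>

/* Illustrative per-compiler loop-unroll macro. Compilers without a
 * supported unroll pragma fall back to a plain loop. */
#if defined(__clang__)
#define vhost_for_each_try_unroll(iter, val, num) \
	_Pragma("unroll 4") \
	for ((iter) = (val); (iter) < (num); (iter)++)
#elif defined(__GNUC__) && (__GNUC__ >= 8)
#define vhost_for_each_try_unroll(iter, val, num) \
	_Pragma("GCC unroll 4") \
	for ((iter) = (val); (iter) < (num); (iter)++)
#else
#define vhost_for_each_try_unroll(iter, val, num) \
	for ((iter) = (val); (iter) < (num); (iter)++)
#endif
```

Wrapping the pragma in a macro keeps the loop bodies identical across compilers while still letting each one unroll by the batch size.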

Marvin Liu (14):
  vhost: add single packet enqueue function
  vhost: unify unroll pragma parameter
  vhost: add batch enqueue function for packed ring
  vhost: add single packet dequeue function
  vhost: add batch dequeue function
  vhost: flush vhost enqueue shadow ring by batch
  vhost: add flush function for batch enqueue
  vhost: buffer vhost dequeue shadow ring
  vhost: split enqueue and dequeue flush functions
  vhost: optimize enqueue function of packed ring
  vhost: add batch and single zero dequeue functions
  vhost: optimize dequeue function of packed ring
  vhost: check whether disable software pre-fetch
  vhost: optimize packed ring dequeue when in-order

 lib/librte_vhost/Makefile     |  24 +
 lib/librte_vhost/meson.build  |  11 +
 lib/librte_vhost/vhost.h      |  33 ++
 lib/librte_vhost/virtio_net.c | 993 +++++++++++++++++++++++++++-------
 4 files changed, 878 insertions(+), 183 deletions(-)