[0/3] AF_XDP Preferred Busy Polling

Message ID: 20210224111852.11947-1-ciara.loftus@intel.com
Loftus, Ciara Feb. 24, 2021, 11:18 a.m. UTC
Single-core performance of AF_XDP at high loads can be poor because
a heavily loaded NAPI context will never enter or allow for busy-polling.

1C testpmd rxonly (both IRQs and PMD on core 0):
./dpdk-testpmd -l 0-1 --vdev=net_af_xdp0,iface=eth0 --main-lcore=1 -- \
--forward-mode=rxonly
0.088Mpps

In order to achieve decent performance at high loads, it is currently
recommended to ensure that the IRQs for the netdev queue are serviced on
a different core from the one running the PMD.

2C testpmd rxonly (IRQs on core 0, PMD on core 1):
./dpdk-testpmd -l 0-1 --vdev=net_af_xdp0,iface=eth0 --main-lcore=0 -- \
--forward-mode=rxonly
19.26Mpps

However, using an extra core is of course not ideal. The SO_PREFER_BUSY_POLL
socket option was introduced in kernel v5.11 to help improve 1C performance.
See [1].

This series sets this socket option on xsks created with DPDK (i.e.
instances of the AF_XDP PMD) unless it is explicitly disabled or not
supported by the kernel. It is enabled by default in order to bring the
AF_XDP PMD in line with most other PMDs, which execute on a single core.

The following system and netdev settings are recommended in conjunction with
busy polling:
echo 2 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs
echo 200000 | sudo tee /sys/class/net/eth0/gro_flush_timeout

Re-running the 1C test with busy polling support and the above settings:
./dpdk-testpmd -l 0-1 --vdev=net_af_xdp0,iface=eth0 --main-lcore=1 -- \
--forward-mode=rxonly
10.45Mpps

A new vdev arg called 'busy_budget' is introduced, with a default value
of 64. busy_budget is the value supplied to the kernel with the
SO_BUSY_POLL_BUDGET socket option and represents the busy-polling NAPI
budget, i.e. the number of packets the kernel will attempt to process in
the netdev's NAPI context.
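
Again as a rough sketch (hypothetical helper, v5.11+ kernel assumed),
supplying the budget amounts to one more setsockopt() on the same fd:

#include <sys/socket.h>

#ifndef SO_BUSY_POLL_BUDGET
#define SO_BUSY_POLL_BUDGET 70	/* v5.11+, <asm-generic/socket.h> */
#endif

/* Hypothetical helper: hand the kernel the busy-polling NAPI budget,
 * i.e. the busy_budget vdev arg (default 64). A busy_budget of 0 at
 * the vdev level disables busy polling entirely, so this call would
 * be skipped in that case. */
static int set_busy_poll_budget(int xsk_fd, int budget)
{
	return setsockopt(xsk_fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
			  &budget, sizeof(budget));
}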

To set the busy budget to 256:
./dpdk-testpmd --vdev=net_af_xdp0,iface=eth0,busy_budget=256
14.06Mpps

If you still wish to run using 2 cores (one for the PMD, one for IRQs),
it is recommended to disable busy polling to achieve optimal 2C
performance:
./dpdk-testpmd --vdev=net_af_xdp0,iface=eth0,busy_budget=0
19.09Mpps

RFC->v1:
* Fixed behaviour of busy_budget=0
* Ensure we bail out if any of the new setsockopts fail

[1] https://lwn.net/Articles/837010/

Ciara Loftus (3):
  net/af_xdp: Increase max batch size to 512
  net/af_xdp: Use recvfrom() instead of poll()
  net/af_xdp: preferred busy polling

 doc/guides/nics/af_xdp.rst          |  38 ++++++++++-
 drivers/net/af_xdp/compat.h         |  13 ++++
 drivers/net/af_xdp/rte_eth_af_xdp.c | 100 ++++++++++++++++++++++------
 3 files changed, 129 insertions(+), 22 deletions(-)