[v13,1/1] app/testpmd: support multiple mbuf pools per Rx queue
Commit Message
Some HW supports choosing memory pools based on the packet's
size. The pool sort capability allows the PMD/NIC to choose
a memory pool based on the packet's length.
When multiple mempool support is enabled, populate the mempool
array accordingly. Also, print the name of the pool on which
each packet is received.
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
v13:
- Make sure protocol-based header split feature is not broken
by updating changes with latest code base.
v12:
- Process multi-segment configuration on number segments
(rx_pkt_nb_segs) greater than 1 or buffer split offload
flag (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) set.
v11:
- Resolve compilation errors and warnings.
v10:
- Populate multi-mempool array based on mbuf_data_size_n instead
of rx_pkt_nb_segs.
---
app/test-pmd/testpmd.c | 65 ++++++++++++++++++++++++++++--------------
app/test-pmd/testpmd.h | 3 ++
app/test-pmd/util.c | 4 +--
3 files changed, 48 insertions(+), 24 deletions(-)
Comments
On 11/10/22 11:17, Hanumanth Pothula wrote:
> Some of the HW has support for choosing memory pools based on
> the packet's size. The pool sort capability allows PMD/NIC to
> choose a memory pool based on the packet's length.
>
> On multiple mempool support enabled, populate mempool array
> accordingly. Also, print pool name on which packet is received.
>
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>
> v13:
> - Make sure protocol-based header split feature is not broken
> by updating changes with latest code base.
> v12:
> - Process multi-segment configuration on number segments
> (rx_pkt_nb_segs) greater than 1 or buffer split offload
> flag (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) set.
> v11:
> - Resolve compilation and warning.
> v10:
> - Populate multi-mempool array based on mbuf_data_size_n instead
> of rx_pkt_nb_segs.
I'm sorry for the inconvenience; could you rebase the patch on
the current next-net/main, please? I've decided to apply the
protocol-based buffer split fix first. Of course, I can rebase
it myself, but I want the result to be checked very carefully
and tested properly. Thanks.
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, November 10, 2022 2:31 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Aman Singh
> <aman.deep.singh@intel.com>; Yuying Zhang <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; thomas@monjalon.net; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>
> Subject: [EXT] Re: [PATCH v13 1/1] app/testpmd: support multiple mbuf
> pools per Rx queue
>
> On 11/10/22 11:17, Hanumanth Pothula wrote:
> > Some of the HW has support for choosing memory pools based on the
> > packet's size. The pool sort capability allows PMD/NIC to choose a
> > memory pool based on the packet's length.
> >
> > On multiple mempool support enabled, populate mempool array
> > accordingly. Also, print pool name on which packet is received.
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> >
> > v13:
> > - Make sure protocol-based header split feature is not broken
> > by updating changes with latest code base.
> > v12:
> > - Process multi-segment configuration on number segments
> > (rx_pkt_nb_segs) greater than 1 or buffer split offload
> > flag (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) set.
> > v11:
> > - Resolve compilation and warning.
> > v10:
> > - Populate multi-mempool array based on mbuf_data_size_n instead
> > of rx_pkt_nb_segs.
>
> I'm sorry for inconvenience, could you rebase the patch on current next-
> net/main, please. I've decided to apply protocol based buffer split fix first.
> Of course, I can rebase myself, but I want result to be checked very carefully
> and tested properly. Thanks.
Sure will do that.
@@ -2647,11 +2647,19 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
{
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
+ struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
+ struct rte_mempool *mpx;
unsigned int i, mp_n;
int ret;
- if (rx_pkt_nb_segs <= 1 ||
- (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+ /* Verify Rx queue configuration is single pool and segment or
+ * multiple pool/segment.
+ * @see rte_eth_rxconf::rx_mempools
+ * @see rte_eth_rxconf::rx_seg
+ */
+ if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+ ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+ /* Single pool/segment configuration */
rx_conf->rx_seg = NULL;
rx_conf->rx_nseg = 0;
ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2659,33 +2667,46 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
rx_conf, mp);
goto exit;
}
- for (i = 0; i < rx_pkt_nb_segs; i++) {
- struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
- struct rte_mempool *mpx;
- /*
- * Use last valid pool for the segments with number
- * exceeding the pool index.
- */
- mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
- mpx = mbuf_pool_find(socket_id, mp_n);
- /* Handle zero as mbuf data buffer size. */
- rx_seg->offset = i < rx_pkt_nb_offs ?
- rx_pkt_seg_offsets[i] : 0;
- rx_seg->mp = mpx ? mpx : mp;
- if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
- rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
- } else {
- rx_seg->length = rx_pkt_seg_lengths[i] ?
- rx_pkt_seg_lengths[i] :
- mbuf_data_size[mp_n];
+
+ if (rx_pkt_nb_segs > 1 ||
+ rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+ for (i = 0; i < rx_pkt_nb_segs; i++) {
+ struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+ /*
+ * Use last valid pool for the segments with number
+ * exceeding the pool index.
+ */
+ mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
+ mpx = mbuf_pool_find(socket_id, mp_n);
+ /* Handle zero as mbuf data buffer size. */
+ rx_seg->offset = i < rx_pkt_nb_offs ?
+ rx_pkt_seg_offsets[i] : 0;
+ rx_seg->mp = mpx ? mpx : mp;
+ if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
+ rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
+ } else {
+ rx_seg->length = rx_pkt_seg_lengths[i] ?
+ rx_pkt_seg_lengths[i] :
+ mbuf_data_size[mp_n];
+ }
}
- }
rx_conf->rx_nseg = rx_pkt_nb_segs;
rx_conf->rx_seg = rx_useg;
+ } else {
+ /* multi-pool configuration */
+ for (i = 0; i < mbuf_data_size_n; i++) {
+ mpx = mbuf_pool_find(socket_id, i);
+ rx_mempool[i] = mpx ? mpx : mp;
+ }
+ rx_conf->rx_mempools = rx_mempool;
+ rx_conf->rx_nmempool = mbuf_data_size_n;
+ }
ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
socket_id, rx_conf, NULL);
rx_conf->rx_seg = NULL;
rx_conf->rx_nseg = 0;
+ rx_conf->rx_mempools = NULL;
+ rx_conf->rx_nmempool = 0;
exit:
ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
RTE_ETH_QUEUE_STATE_STOPPED :
@@ -80,6 +80,9 @@ extern uint8_t cl_quit;
#define MIN_TOTAL_NUM_MBUFS 1024
+/* Maximum number of pools supported per Rx queue */
+#define MAX_MEMPOOL 8
+
typedef uint8_t lcoreid_t;
typedef uint16_t portid_t;
typedef uint16_t queueid_t;
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
print_ether_addr(" - dst=", &eth_hdr->dst_addr,
print_buf, buf_size, &cur_len);
MKDUMPSTR(print_buf, buf_size, cur_len,
- " - type=0x%04x - length=%u - nb_segs=%d",
- eth_type, (unsigned int) mb->pkt_len,
+ " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+ mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
(int)mb->nb_segs);
ol_flags = mb->ol_flags;
if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {