[v14,1/1] app/testpmd: support multiple mbuf pools per Rx queue
Some NIC hardware can choose a memory pool based on
the packet's size. This pool sort capability allows the
PMD/NIC to select a memory pool matching the packet's length.
When multiple mempool support is enabled, populate the mempool
array accordingly. Also, print the name of the pool on which
each packet is received.
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
v14:
- Rebased on tip of next-net/main
v13:
- Make sure protocol-based header split feature is not broken
by updating changes with latest code base.
v12:
- Process multi-segment configuration when the number of
segments (rx_pkt_nb_segs) is greater than 1 or the buffer
split offload flag (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) is set.
v11:
- Resolve compilation errors and warnings.
v10:
- Populate multi-mempool array based on mbuf_data_size_n instead
of rx_pkt_nb_segs.
---
app/test-pmd/testpmd.c | 70 +++++++++++++++++++++++++++---------------
app/test-pmd/testpmd.h | 3 ++
app/test-pmd/util.c | 4 +--
3 files changed, 51 insertions(+), 26 deletions(-)
Comments
On 11/10/22 13:16, Hanumanth Pothula wrote:
> Some of the HW has support for choosing memory pools based on
> the packet's size. The pool sort capability allows PMD/NIC to
> choose a memory pool based on the packet's length.
>
> On multiple mempool support enabled, populate mempool array
> accordingly. Also, print pool name on which packet is received.
>
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>
> v14:
> - Rebased on tip of next-net/main
> v13:
> - Make sure protocol-based header split feature is not broken
> by updating changes with latest code base.
> v12:
> - Process multi-segment configuration on number segments
> (rx_pkt_nb_segs) greater than 1 or buffer split offload
> flag (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) set.
> v11:
> - Resolve compilation and warning.
> v10:
> - Populate multi-mempool array based on mbuf_data_size_n instead
> of rx_pkt_nb_segs.
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Applied to dpdk-next-net/main, thanks.
Hi Hanumanth,
We met an issue with this patch, could you please take a look quickly?
https://bugs.dpdk.org/show_bug.cgi?id=1128
Best regards,
Yu Jiang
> -----Original Message-----
> From: Hanumanth Pothula <hpothula@marvell.com>
> Sent: Thursday, November 10, 2022 6:17 PM
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; thomas@monjalon.net;
> jerinj@marvell.com; ndabilpuram@marvell.com; hpothula@marvell.com
> Subject: [PATCH v14 1/1] app/testpmd: support multiple mbuf pools per Rx
> queue
>
> Some of the HW has support for choosing memory pools based on the packet's
> size. The pool sort capability allows PMD/NIC to choose a memory pool based
> on the packet's length.
>
> On multiple mempool support enabled, populate mempool array accordingly.
> Also, print pool name on which packet is received.
>
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
>
Hi Yu Jiang,
Please find the fix for the below issue:
https://patches.dpdk.org/project/dpdk/patch/20221117113047.3088461-1-hpothula@marvell.com
Verified the changes locally, both with and without multi-mempool support.
Regards,
Hanumanth
> -----Original Message-----
> From: Jiang, YuX <yux.jiang@intel.com>
> Sent: Thursday, November 17, 2022 2:13 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Singh, Aman
> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> Nithin Kumar Dabilpuram <ndabilpuram@marvell.com>
> Subject: [EXT] RE: [PATCH v14 1/1] app/testpmd: support multiple mbuf
> pools per Rx queue
>
> External Email
>
> ----------------------------------------------------------------------
> Hi Hanumanth,
>
> We met an issue with this patch, could you please take a look quickly?
> https://bugs.dpdk.org/show_bug.cgi?id=1128
>
> Best regards,
> Yu Jiang
>
> > -----Original Message-----
> > From: Hanumanth Pothula <hpothula@marvell.com>
> > Sent: Thursday, November 10, 2022 6:17 PM
> > To: Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> > <yuying.zhang@intel.com>
> > Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru;
> thomas@monjalon.net;
> > jerinj@marvell.com; ndabilpuram@marvell.com; hpothula@marvell.com
> > Subject: [PATCH v14 1/1] app/testpmd: support multiple mbuf pools per
> > Rx queue
> >
> > Some of the HW has support for choosing memory pools based on the
> > packet's size. The pool sort capability allows PMD/NIC to choose a
> > memory pool based on the packet's length.
> >
> > On multiple mempool support enabled, populate mempool array
> accordingly.
> > Also, print pool name on which packet is received.
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> >
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2653,12 +2653,20 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
{
union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
+ struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
+ struct rte_mempool *mpx;
unsigned int i, mp_n;
uint32_t prev_hdrs = 0;
int ret;
- if (rx_pkt_nb_segs <= 1 ||
- (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
+ /* Verify Rx queue configuration is single pool and segment or
+ * multiple pool/segment.
+ * @see rte_eth_rxconf::rx_mempools
+ * @see rte_eth_rxconf::rx_seg
+ */
+ if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
+ ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
+ /* Single pool/segment configuration */
rx_conf->rx_seg = NULL;
rx_conf->rx_nseg = 0;
ret = rte_eth_rx_queue_setup(port_id, rx_queue_id,
@@ -2666,34 +2674,48 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
rx_conf, mp);
goto exit;
}
- for (i = 0; i < rx_pkt_nb_segs; i++) {
- struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
- struct rte_mempool *mpx;
- /*
- * Use last valid pool for the segments with number
- * exceeding the pool index.
- */
- mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
- mpx = mbuf_pool_find(socket_id, mp_n);
- /* Handle zero as mbuf data buffer size. */
- rx_seg->offset = i < rx_pkt_nb_offs ?
- rx_pkt_seg_offsets[i] : 0;
- rx_seg->mp = mpx ? mpx : mp;
- if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
- rx_seg->proto_hdr = rx_pkt_hdr_protos[i] & ~prev_hdrs;
- prev_hdrs |= rx_seg->proto_hdr;
- } else {
- rx_seg->length = rx_pkt_seg_lengths[i] ?
- rx_pkt_seg_lengths[i] :
- mbuf_data_size[mp_n];
+
+ if (rx_pkt_nb_segs > 1 ||
+ rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
+ /* multi-segment configuration */
+ for (i = 0; i < rx_pkt_nb_segs; i++) {
+ struct rte_eth_rxseg_split *rx_seg = &rx_useg[i].split;
+ /*
+ * Use last valid pool for the segments with number
+ * exceeding the pool index.
+ */
+ mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
+ mpx = mbuf_pool_find(socket_id, mp_n);
+ /* Handle zero as mbuf data buffer size. */
+ rx_seg->offset = i < rx_pkt_nb_offs ?
+ rx_pkt_seg_offsets[i] : 0;
+ rx_seg->mp = mpx ? mpx : mp;
+ if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
+ rx_seg->proto_hdr = rx_pkt_hdr_protos[i] & ~prev_hdrs;
+ prev_hdrs |= rx_seg->proto_hdr;
+ } else {
+ rx_seg->length = rx_pkt_seg_lengths[i] ?
+ rx_pkt_seg_lengths[i] :
+ mbuf_data_size[mp_n];
+ }
+ }
+ rx_conf->rx_nseg = rx_pkt_nb_segs;
+ rx_conf->rx_seg = rx_useg;
+ } else {
+ /* multi-pool configuration */
+ for (i = 0; i < mbuf_data_size_n; i++) {
+ mpx = mbuf_pool_find(socket_id, i);
+ rx_mempool[i] = mpx ? mpx : mp;
}
+ rx_conf->rx_mempools = rx_mempool;
+ rx_conf->rx_nmempool = mbuf_data_size_n;
}
- rx_conf->rx_nseg = rx_pkt_nb_segs;
- rx_conf->rx_seg = rx_useg;
ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
socket_id, rx_conf, NULL);
rx_conf->rx_seg = NULL;
rx_conf->rx_nseg = 0;
+ rx_conf->rx_mempools = NULL;
+ rx_conf->rx_nmempool = 0;
exit:
ports[port_id].rxq[rx_queue_id].state = rx_conf->rx_deferred_start ?
RTE_ETH_QUEUE_STATE_STOPPED :
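The hunk above routes the queue setup three ways: single pool/segment, buffer-split multi-segment, or multi-mempool. As a self-contained sketch of that decision logic (plain C with stand-in names such as `classify_rxq` and `BUFFER_SPLIT_OFFLOAD`; this models the checks in the patch, it is not the DPDK API):

```c
#include <stdint.h>

/* Stand-in for the RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT bit. */
#define BUFFER_SPLIT_OFFLOAD (1u << 0)

enum rxq_mode {
	RXQ_SINGLE,	/* one pool, one segment */
	RXQ_MULTI_SEG,	/* buffer/header split across segments */
	RXQ_MULTI_POOL	/* one pool per configured mbuf data size */
};

static enum rxq_mode
classify_rxq(unsigned int nb_pools, unsigned int nb_segs, uint32_t offloads)
{
	/* Single pool/segment: no extra pools and no split requested. */
	if (nb_pools <= 1 && nb_segs <= 1 &&
	    (offloads & BUFFER_SPLIT_OFFLOAD) == 0)
		return RXQ_SINGLE;
	/* Buffer split is checked first, as in the patch. */
	if (nb_segs > 1 || (offloads & BUFFER_SPLIT_OFFLOAD) != 0)
		return RXQ_MULTI_SEG;
	return RXQ_MULTI_POOL;
}
```

Note that buffer split takes precedence over multi-pool when both could apply, matching the ordering of the checks in the patch.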
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -80,6 +80,9 @@ extern uint8_t cl_quit;
#define MIN_TOTAL_NUM_MBUFS 1024
+/* Maximum number of pools supported per Rx queue */
+#define MAX_MEMPOOL 8
+
typedef uint8_t lcoreid_t;
typedef uint16_t portid_t;
typedef uint16_t queueid_t;
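The MAX_MEMPOOL define above caps the per-queue pool array. As a rough illustration of what the size-based pool selection does on the HW/PMD side (a hypothetical model assuming pools ordered by ascending buffer size; `pick_pool` is an invented name, not a DPDK function):

```c
#include <stdint.h>

#define MAX_MEMPOOL 8	/* maximum number of pools per Rx queue */

/* Pick the first pool whose buffer fits the packet; oversized
 * packets fall back to the largest (last) pool. */
static unsigned int
pick_pool(const uint16_t buf_size[], unsigned int nb_pools, uint32_t pkt_len)
{
	unsigned int i;

	for (i = 0; i < nb_pools; i++)
		if (pkt_len <= buf_size[i])
			return i;
	return nb_pools - 1;
}
```

With pools of 128, 1024, and 2048 bytes, a 64-byte packet lands in pool 0 and a 1500-byte packet in pool 2, which is the behavior the "pool sort" capability exposes to the application.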
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
print_ether_addr(" - dst=", &eth_hdr->dst_addr,
print_buf, buf_size, &cur_len);
MKDUMPSTR(print_buf, buf_size, cur_len,
- " - type=0x%04x - length=%u - nb_segs=%d",
- eth_type, (unsigned int) mb->pkt_len,
+ " - pool=%s - type=0x%04x - length=%u - nb_segs=%d",
+ mb->pool->name, eth_type, (unsigned int) mb->pkt_len,
(int)mb->nb_segs);
ol_flags = mb->ol_flags;
if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {