From patchwork Fri Sep 17 16:41:31 2021
X-Patchwork-Submitter: Kevin Laatz
X-Patchwork-Id: 99241
X-Patchwork-Delegate: thomas@monjalon.net
From: Kevin Laatz
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, conor.walsh@intel.com, Konstantin Ananyev
Date: Fri, 17 Sep 2021 16:41:31 +0000
Message-Id: <20210917164136.3499904-2-kevin.laatz@intel.com>
In-Reply-To: <20210917164136.3499904-1-kevin.laatz@intel.com>
References: <20210910172737.2561156-1-kevin.laatz@intel.com> <20210917164136.3499904-1-kevin.laatz@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/6] examples/ioat: always use same lcore for both DMA requests enqueue and dequeue
From: Konstantin Ananyev

Few changes in ioat sample behaviour:
- Always do SW copy for packet metadata (mbuf fields)
- Always use same lcore for both DMA requests enqueue and dequeue

Main reasons for that:
a) it is safer, as idxd PMD doesn't support MT safe enqueue/dequeue (yet).
b) sort of more apples to apples comparison with sw copy.
c) from my testing things are faster that way.

Signed-off-by: Konstantin Ananyev
Reviewed-by: Conor Walsh
---
 examples/ioat/ioatfwd.c | 185 ++++++++++++++++++++++------------
 1 file changed, 101 insertions(+), 84 deletions(-)

diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index b3977a8be5..1498343492 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -331,43 +331,36 @@ update_mac_addrs(struct rte_mbuf *m, uint32_t dest_portid)
 /* Perform packet copy there is a user-defined function. 8< */
 static inline void
-pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
+pktmbuf_metadata_copy(const struct rte_mbuf *src, struct rte_mbuf *dst)
 {
-	/* Copy packet metadata */
-	rte_memcpy(&dst->rearm_data,
-		&src->rearm_data,
-		offsetof(struct rte_mbuf, cacheline1) -
-		offsetof(struct rte_mbuf, rearm_data));
+	dst->data_off = src->data_off;
+	memcpy(&dst->rx_descriptor_fields1, &src->rx_descriptor_fields1,
+		offsetof(struct rte_mbuf, buf_len) -
+		offsetof(struct rte_mbuf, rx_descriptor_fields1));
+}
 
-	/* Copy packet data */
+/* Copy packet data */
+static inline void
+pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst)
+{
 	rte_memcpy(rte_pktmbuf_mtod(dst, char *),
 		rte_pktmbuf_mtod(src, char *), src->data_len);
 }
 
 /* >8 End of perform packet copy there is a user-defined function.
 */

static uint32_t
-ioat_enqueue_packets(struct rte_mbuf **pkts,
+ioat_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
 	uint32_t nb_rx, uint16_t dev_id)
 {
 	int ret;
 	uint32_t i;
-	struct rte_mbuf *pkts_copy[MAX_PKT_BURST];
-
-	const uint64_t addr_offset = RTE_PTR_DIFF(pkts[0]->buf_addr,
-		&pkts[0]->rearm_data);
-
-	ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
-		(void *)pkts_copy, nb_rx);
-
-	if (unlikely(ret < 0))
-		rte_exit(EXIT_FAILURE, "Unable to allocate memory.\n");
 
 	for (i = 0; i < nb_rx; i++) {
 		/* Perform data copy */
 		ret = rte_ioat_enqueue_copy(dev_id,
-			pkts[i]->buf_iova - addr_offset,
-			pkts_copy[i]->buf_iova - addr_offset,
-			rte_pktmbuf_data_len(pkts[i]) + addr_offset,
+			rte_pktmbuf_iova(pkts[i]),
+			rte_pktmbuf_iova(pkts_copy[i]),
+			rte_pktmbuf_data_len(pkts[i]),
 			(uintptr_t)pkts[i],
 			(uintptr_t)pkts_copy[i]);
@@ -376,20 +369,50 @@ ioat_enqueue_packets(struct rte_mbuf **pkts,
 	}
 	ret = i;
 
-	/* Free any not enqueued packets. */
-	rte_mempool_put_bulk(ioat_pktmbuf_pool, (void *)&pkts[i], nb_rx - i);
-	rte_mempool_put_bulk(ioat_pktmbuf_pool, (void *)&pkts_copy[i],
-		nb_rx - i);
-
 	return ret;
 }
 
+static inline uint32_t
+ioat_enqueue(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
+	uint32_t num, uint16_t dev_id)
+{
+	uint32_t n;
+
+	n = ioat_enqueue_packets(pkts, pkts_copy, num, dev_id);
+	if (n > 0)
+		rte_ioat_perform_ops(dev_id);
+
+	return n;
+}
+
+static inline uint32_t
+ioat_dequeue(struct rte_mbuf *src[], struct rte_mbuf *dst[], uint32_t num,
+	uint16_t dev_id)
+{
+	int32_t rc;
+	/* Dequeue the mbufs from IOAT device.
 Since all memory
+	 * is DPDK pinned memory and therefore all addresses should
+	 * be valid, we don't check for copy errors
+	 */
+	rc = rte_ioat_completed_ops(dev_id, num, NULL, NULL,
+		(void *)src, (void *)dst);
+	if (rc < 0) {
+		RTE_LOG(CRIT, IOAT,
+			"rte_ioat_completed_ops(%hu) failed, error: %d\n",
+			dev_id, rte_errno);
+		rc = 0;
+	}
+	return rc;
+}
+
 /* Receive packets on one port and enqueue to IOAT rawdev or rte_ring. 8< */
 static void
 ioat_rx_port(struct rxtx_port_config *rx_config)
 {
+	int32_t ret;
 	uint32_t nb_rx, nb_enq, i, j;
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+	struct rte_mbuf *pkts_burst_copy[MAX_PKT_BURST];
 
 	for (i = 0; i < rx_config->nb_queues; i++) {
@@ -401,40 +424,54 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 
 		port_statistics.rx[rx_config->rxtx_port] += nb_rx;
 
+		ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
+			(void *)pkts_burst_copy, nb_rx);
+
+		if (unlikely(ret < 0))
+			rte_exit(EXIT_FAILURE,
+				"Unable to allocate memory.\n");
+
+		for (j = 0; j < nb_rx; j++)
+			pktmbuf_metadata_copy(pkts_burst[j],
+				pkts_burst_copy[j]);
+
 		if (copy_mode == COPY_MODE_IOAT_NUM) {
-			/* Perform packet hardware copy */
-			nb_enq = ioat_enqueue_packets(pkts_burst,
+
+			/* enqueue packets for hardware copy */
+			nb_enq = ioat_enqueue(pkts_burst,
 				pkts_burst_copy, nb_rx, rx_config->ioat_ids[i]);
-			if (nb_enq > 0)
-				rte_ioat_perform_ops(rx_config->ioat_ids[i]);
-		} else {
-			/* Perform packet software copy, free source packets */
-			int ret;
-			struct rte_mbuf *pkts_burst_copy[MAX_PKT_BURST];
 
-			ret = rte_mempool_get_bulk(ioat_pktmbuf_pool,
-				(void *)pkts_burst_copy, nb_rx);
+			/* free any not enqueued packets.
 */
+			rte_mempool_put_bulk(ioat_pktmbuf_pool,
+				(void *)&pkts_burst[nb_enq],
+				nb_rx - nb_enq);
+			rte_mempool_put_bulk(ioat_pktmbuf_pool,
+				(void *)&pkts_burst_copy[nb_enq],
+				nb_rx - nb_enq);
 
-			if (unlikely(ret < 0))
-				rte_exit(EXIT_FAILURE,
-					"Unable to allocate memory.\n");
+			port_statistics.copy_dropped[rx_config->rxtx_port] +=
+				(nb_rx - nb_enq);
 
+			/* get completed copies */
+			nb_rx = ioat_dequeue(pkts_burst, pkts_burst_copy,
+				MAX_PKT_BURST, rx_config->ioat_ids[i]);
+		} else {
+			/* Perform packet software copy, free source packets */
 			for (j = 0; j < nb_rx; j++)
 				pktmbuf_sw_copy(pkts_burst[j],
 					pkts_burst_copy[j]);
+		}
 
-			rte_mempool_put_bulk(ioat_pktmbuf_pool,
-				(void *)pkts_burst, nb_rx);
+		rte_mempool_put_bulk(ioat_pktmbuf_pool,
+			(void *)pkts_burst, nb_rx);
 
-			nb_enq = rte_ring_enqueue_burst(
-				rx_config->rx_to_tx_ring,
-				(void *)pkts_burst_copy, nb_rx, NULL);
+		nb_enq = rte_ring_enqueue_burst(rx_config->rx_to_tx_ring,
+			(void *)pkts_burst_copy, nb_rx, NULL);
 
-			/* Free any not enqueued packets. */
-			rte_mempool_put_bulk(ioat_pktmbuf_pool,
-				(void *)&pkts_burst_copy[nb_enq],
-				nb_rx - nb_enq);
-		}
+		/* Free any not enqueued packets. */
+		rte_mempool_put_bulk(ioat_pktmbuf_pool,
+			(void *)&pkts_burst_copy[nb_enq],
+			nb_rx - nb_enq);
 
 		port_statistics.copy_dropped[rx_config->rxtx_port] +=
 			(nb_rx - nb_enq);
@@ -446,51 +483,33 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 static void
 ioat_tx_port(struct rxtx_port_config *tx_config)
 {
-	uint32_t i, j, nb_dq = 0;
-	struct rte_mbuf *mbufs_src[MAX_PKT_BURST];
-	struct rte_mbuf *mbufs_dst[MAX_PKT_BURST];
+	uint32_t i, j, nb_dq, nb_tx;
+	struct rte_mbuf *mbufs[MAX_PKT_BURST];
 
 	for (i = 0; i < tx_config->nb_queues; i++) {
-		if (copy_mode == COPY_MODE_IOAT_NUM) {
-			/* Dequeue the mbufs from IOAT device.
 Since all memory
-		 * is DPDK pinned memory and therefore all addresses should
-		 * be valid, we don't check for copy errors
-		 */
-			nb_dq = rte_ioat_completed_ops(
-				tx_config->ioat_ids[i], MAX_PKT_BURST, NULL, NULL,
-				(void *)mbufs_src, (void *)mbufs_dst);
-		} else {
-			/* Dequeue the mbufs from rx_to_tx_ring. */
-			nb_dq = rte_ring_dequeue_burst(
-				tx_config->rx_to_tx_ring, (void *)mbufs_dst,
-				MAX_PKT_BURST, NULL);
-		}
-
-		if ((int32_t) nb_dq <= 0)
-			return;
 
-		if (copy_mode == COPY_MODE_IOAT_NUM)
-			rte_mempool_put_bulk(ioat_pktmbuf_pool,
-				(void *)mbufs_src, nb_dq);
+		/* Dequeue the mbufs from rx_to_tx_ring. */
+		nb_dq = rte_ring_dequeue_burst(tx_config->rx_to_tx_ring,
+			(void *)mbufs, MAX_PKT_BURST, NULL);
+		if (nb_dq == 0)
+			continue;
 
 		/* Update macs if enabled */
 		if (mac_updating) {
 			for (j = 0; j < nb_dq; j++)
-				update_mac_addrs(mbufs_dst[j],
+				update_mac_addrs(mbufs[j],
 					tx_config->rxtx_port);
 		}
 
-		const uint16_t nb_tx = rte_eth_tx_burst(
-			tx_config->rxtx_port, 0,
-			(void *)mbufs_dst, nb_dq);
+		nb_tx = rte_eth_tx_burst(tx_config->rxtx_port, 0,
+			(void *)mbufs, nb_dq);
 
 		port_statistics.tx[tx_config->rxtx_port] += nb_tx;
 
 		/* Free any unsent packets. */
 		if (unlikely(nb_tx < nb_dq))
 			rte_mempool_put_bulk(ioat_pktmbuf_pool,
-				(void *)&mbufs_dst[nb_tx],
-				nb_dq - nb_tx);
+				(void *)&mbufs[nb_tx], nb_dq - nb_tx);
 	}
 }
 /* >8 End of transmitting packets from IOAT. */
@@ -853,9 +872,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Cannot configure device:"
@@ -974,7 +990,8 @@ main(int argc, char **argv)
 	/* Allocates mempool to hold the mbufs.
 8< */
 	nb_mbufs = RTE_MAX(nb_ports * (nb_queues * (nb_rxd + nb_txd +
-		4 * MAX_PKT_BURST) + rte_lcore_count() * MEMPOOL_CACHE_SIZE),
+		4 * MAX_PKT_BURST + ring_size) + ring_size +
+		rte_lcore_count() * MEMPOOL_CACHE_SIZE),
 		MIN_POOL_SIZE);
 
 	/* Create the mbuf pool */
@@ -1006,8 +1023,8 @@ main(int argc, char **argv)
 
 	if (copy_mode == COPY_MODE_IOAT_NUM)
 		assign_rawdevs();
-	else /* copy_mode == COPY_MODE_SW_NUM */
-		assign_rings();
+
+	assign_rings();
 	/* >8 End of assigning each port resources. */
 
 	start_forwarding_cores();

From patchwork Fri Sep 17 16:41:32 2021
X-Patchwork-Submitter: Kevin Laatz
X-Patchwork-Id: 99242
X-Patchwork-Delegate: thomas@monjalon.net
From: Kevin Laatz
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, conor.walsh@intel.com, Konstantin Ananyev
Date: Fri, 17 Sep 2021 16:41:32 +0000
Message-Id: <20210917164136.3499904-3-kevin.laatz@intel.com>
In-Reply-To: <20210917164136.3499904-1-kevin.laatz@intel.com>
References: <20210910172737.2561156-1-kevin.laatz@intel.com> <20210917164136.3499904-1-kevin.laatz@intel.com>
Subject: [dpdk-dev] [PATCH v2 2/6] examples/ioat: add cmd-line option to control DMA batch size

From: Konstantin Ananyev

Add a command-line option to control the HW copy batch size in the
application.

Signed-off-by: Konstantin Ananyev
Reviewed-by: Conor Walsh
---
 examples/ioat/ioatfwd.c | 40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 1498343492..4d132a87e5 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -24,6 +24,7 @@
 #define CMD_LINE_OPT_NB_QUEUE "nb-queue"
 #define CMD_LINE_OPT_COPY_TYPE "copy-type"
 #define CMD_LINE_OPT_RING_SIZE "ring-size"
+#define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
 
 /* configurable number of RX/TX ring descriptors */
 #define RX_DEFAULT_RINGSIZE 1024
@@ -102,6 +103,8 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE;
 
 static volatile bool force_quit;
 
+static uint32_t ioat_batch_sz = MAX_PKT_BURST;
+
 /* ethernet addresses of ports */
 static struct rte_ether_addr ioat_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -374,15 +377,25 @@ ioat_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
 
 static inline uint32_t
 ioat_enqueue(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
-	uint32_t num, uint16_t dev_id)
+	uint32_t num, uint32_t step, uint16_t dev_id)
 {
-	uint32_t n;
+	uint32_t i, k, m, n;
+
+	k = 0;
+	for (i = 0; i < num; i += m) {
+
+		m = RTE_MIN(step, num - i);
+		n = ioat_enqueue_packets(pkts + i, pkts_copy + i, m, dev_id);
+		k += n;
+		if (n > 0)
+			rte_ioat_perform_ops(dev_id);
 
-	n = ioat_enqueue_packets(pkts, pkts_copy, num, dev_id);
-	if (n > 0)
-		rte_ioat_perform_ops(dev_id);
+		/* don't try to enqueue more if HW queue is full */
+		if (n != m)
+			break;
+	}
 
-	return n;
+	return k;
 }
 
 static inline uint32_t
@@ -439,7 +452,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 
 			/* enqueue packets for hardware copy */
 			nb_enq = ioat_enqueue(pkts_burst, pkts_burst_copy,
-				nb_rx, rx_config->ioat_ids[i]);
+				nb_rx, ioat_batch_sz, rx_config->ioat_ids[i]);
 
 			/* free any not enqueued packets. */
 			rte_mempool_put_bulk(ioat_pktmbuf_pool,
@@ -590,6 +603,7 @@ static void
 ioat_usage(const char *prgname)
 {
 	printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
+		"  -b --dma-batch-size: number of requests per DMA batch\n"
 		"  -p --portmask: hexadecimal bitmask of ports to configure\n"
 		"  -q NQ: number of RX queues per port (default is 1)\n"
 		"  --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
@@ -631,9 +645,10 @@ static int
 ioat_parse_args(int argc, char **argv, unsigned int nb_ports)
 {
 	static const char short_options[] =
+		"b:"  /* dma batch size */
+		"c:"  /* copy type (sw|hw) */
 		"p:"  /* portmask */
 		"q:"  /* number of RX queues per port */
-		"c:"  /* copy type (sw|hw) */
 		"s:"  /* ring size */
 		;
@@ -644,6 +659,7 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports)
 		{CMD_LINE_OPT_NB_QUEUE, required_argument, NULL, 'q'},
 		{CMD_LINE_OPT_COPY_TYPE, required_argument, NULL, 'c'},
 		{CMD_LINE_OPT_RING_SIZE, required_argument, NULL, 's'},
+		{CMD_LINE_OPT_BATCH_SIZE, required_argument, NULL, 'b'},
 		{NULL, 0, 0, 0}
 	};
@@ -660,6 +676,14 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports)
 			lgopts, &option_index)) != EOF) {
 
 		switch (opt) {
+		case 'b':
+			ioat_batch_sz = atoi(optarg);
+			if (ioat_batch_sz > MAX_PKT_BURST) {
+				printf("Invalid dma batch size, %s.\n", optarg);
+				ioat_usage(prgname);
+				return -1;
+			}
+			break;
 		/* portmask */
 		case 'p':
 			ioat_enabled_port_mask = ioat_parse_portmask(optarg);

From patchwork Fri Sep 17 16:41:33 2021
X-Patchwork-Submitter: Kevin Laatz
X-Patchwork-Id: 99243
X-Patchwork-Delegate: thomas@monjalon.net
From: Kevin Laatz
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, conor.walsh@intel.com, Konstantin Ananyev
Date: Fri, 17 Sep 2021 16:41:33 +0000
Message-Id: <20210917164136.3499904-4-kevin.laatz@intel.com>
In-Reply-To: <20210917164136.3499904-1-kevin.laatz@intel.com>
References: <20210910172737.2561156-1-kevin.laatz@intel.com> <20210917164136.3499904-1-kevin.laatz@intel.com>
Subject: [dpdk-dev] [PATCH v2 3/6] examples/ioat: add cmd line option to control max frame size

From: Konstantin Ananyev

Add a command-line option for setting the max frame size.

Signed-off-by: Konstantin Ananyev
Reviewed-by: Conor Walsh
---
 examples/ioat/ioatfwd.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 4d132a87e5..1711827cea 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -25,6 +25,7 @@
 #define CMD_LINE_OPT_COPY_TYPE "copy-type"
 #define CMD_LINE_OPT_RING_SIZE "ring-size"
 #define CMD_LINE_OPT_BATCH_SIZE "dma-batch-size"
+#define CMD_LINE_OPT_FRAME_SIZE "max-frame-size"
 
 /* configurable number of RX/TX ring descriptors */
 #define RX_DEFAULT_RINGSIZE 1024
@@ -104,6 +105,7 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE;
 static volatile bool force_quit;
 
 static uint32_t ioat_batch_sz = MAX_PKT_BURST;
+static uint32_t max_frame_size = RTE_ETHER_MAX_LEN;
 
 /* ethernet addresses of ports */
 static struct rte_ether_addr ioat_ports_eth_addr[RTE_MAX_ETHPORTS];
@@ -604,6 +606,7 @@ ioat_usage(const char *prgname)
 {
 	printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
 		"  -b --dma-batch-size: number of requests per DMA batch\n"
+		"  -f --max-frame-size: max frame size\n"
 		"  -p --portmask: hexadecimal bitmask of ports to configure\n"
 		"  -q NQ: number of RX queues per port (default is 1)\n"
 		"  --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
@@ -647,6 +650,7 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports)
 	static const char short_options[] =
 		"b:"  /* dma batch size */
 		"c:"  /* copy type (sw|hw) */
+		"f:"  /* max frame size */
 		"p:"  /* portmask */
 		"q:"  /* number of RX queues per port */
 		"s:"  /* ring size */
@@ -660,6 +664,7 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports)
 		{CMD_LINE_OPT_COPY_TYPE, required_argument, NULL, 'c'},
 		{CMD_LINE_OPT_RING_SIZE, required_argument, NULL, 's'},
 		{CMD_LINE_OPT_BATCH_SIZE, required_argument, NULL, 'b'},
+		{CMD_LINE_OPT_FRAME_SIZE, required_argument, NULL, 'f'},
 		{NULL, 0, 0, 0}
 	};
@@ -684,6 +689,15 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports)
 				return -1;
 			}
 			break;
+		case 'f':
+			max_frame_size = atoi(optarg);
+			if (max_frame_size > RTE_ETHER_MAX_JUMBO_FRAME_LEN) {
+				printf("Invalid max frame size, %s.\n", optarg);
+				ioat_usage(prgname);
+				return -1;
+			}
+			break;
+
 		/* portmask */
 		case 'p':
 			ioat_enabled_port_mask = ioat_parse_portmask(optarg);
@@ -880,6 +894,11 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	struct rte_eth_dev_info dev_info;
 	int ret, i;
 
+	if (max_frame_size > local_port_conf.rxmode.max_rx_pkt_len) {
+		local_port_conf.rxmode.max_rx_pkt_len = max_frame_size;
+		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	}
+
 	/* Skip ports that are not enabled */
 	if ((ioat_enabled_port_mask & (1 << portid)) == 0) {
 		printf("Skipping disabled port %u\n", portid);
@@ -990,6 +1009,7 @@ main(int argc, char **argv)
 	uint16_t nb_ports, portid;
 	uint32_t i;
 	unsigned int nb_mbufs;
+	size_t sz;
 
 	/* Init EAL. 8< */
 	ret = rte_eal_init(argc, argv);
@@ -1019,9 +1039,10 @@ main(int argc, char **argv)
 		MIN_POOL_SIZE);
 
 	/* Create the mbuf pool */
+	sz = max_frame_size + RTE_PKTMBUF_HEADROOM;
+	sz = RTE_MAX(sz, (size_t)RTE_MBUF_DEFAULT_BUF_SIZE);
 	ioat_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs,
-		MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
-		rte_socket_id());
+		MEMPOOL_CACHE_SIZE, 0, sz, rte_socket_id());
 	if (ioat_pktmbuf_pool == NULL)
 		rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
 	/* >8 End of allocates mempool to hold the mbufs.
 */

From patchwork Fri Sep 17 16:41:34 2021
X-Patchwork-Submitter: Kevin Laatz
X-Patchwork-Id: 99244
X-Patchwork-Delegate: thomas@monjalon.net
From: Kevin Laatz
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, conor.walsh@intel.com, Kevin Laatz
Date: Fri, 17 Sep 2021 16:41:34 +0000
Message-Id: <20210917164136.3499904-5-kevin.laatz@intel.com>
In-Reply-To: <20210917164136.3499904-1-kevin.laatz@intel.com>
References: <20210910172737.2561156-1-kevin.laatz@intel.com> <20210917164136.3499904-1-kevin.laatz@intel.com>
Subject: [dpdk-dev] [PATCH v2 4/6] examples/ioat: port application to dmadev APIs
The dmadev library abstraction allows applications to use the same APIs
for all DMA device drivers in DPDK. This patch updates the ioatfwd
application to make use of the new dmadev APIs, in turn making it a
generic application which can be used with any of the DMA device
drivers.

Signed-off-by: Kevin Laatz
Reviewed-by: Conor Walsh

---
v2:
  - dmadev api name updates following rebase
  - use rte_config macro for max devs
  - use PRIu64 for printing stats
---
 examples/ioat/ioatfwd.c   | 239 ++++++++++++++++----------------
 examples/ioat/meson.build |   8 +-
 2 files changed, 105 insertions(+), 142 deletions(-)

diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 1711827cea..df6a28f9e5 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
+ * Copyright(c) 2019-2021 Intel Corporation
  */
 
 #include
@@ -10,11 +10,10 @@
 #include
 #include
-#include
-#include
+#include
 
 /* size of ring used for software copying between rx and tx. */
-#define RTE_LOGTYPE_IOAT RTE_LOGTYPE_USER1
+#define RTE_LOGTYPE_DMA RTE_LOGTYPE_USER1
 #define MAX_PKT_BURST 32
 #define MEMPOOL_CACHE_SIZE 512
 #define MIN_POOL_SIZE 65536U
@@ -40,8 +39,8 @@ struct rxtx_port_config {
 	uint16_t nb_queues;
 	/* for software copy mode */
 	struct rte_ring *rx_to_tx_ring;
-	/* for IOAT rawdev copy mode */
-	uint16_t ioat_ids[MAX_RX_QUEUES_COUNT];
+	/* for dmadev HW copy mode */
+	uint16_t dmadev_ids[MAX_RX_QUEUES_COUNT];
 };
 
 /* Configuring ports and number of assigned lcores in struct.
 8< */
@@ -60,13 +59,13 @@ struct ioat_port_statistics {
 	uint64_t copy_dropped[RTE_MAX_ETHPORTS];
 };
 struct ioat_port_statistics port_statistics;
-
 struct total_statistics {
 	uint64_t total_packets_dropped;
 	uint64_t total_packets_tx;
 	uint64_t total_packets_rx;
-	uint64_t total_successful_enqueues;
-	uint64_t total_failed_enqueues;
+	uint64_t total_submitted;
+	uint64_t total_completed;
+	uint64_t total_failed;
 };
 
 typedef enum copy_mode_t {
@@ -95,6 +94,16 @@ static copy_mode_t copy_mode = COPY_MODE_IOAT_NUM;
  */
 static unsigned short ring_size = 2048;
 
+/* global mbuf arrays for tracking DMA bufs */
+#define MBUF_RING_SIZE	1024
+#define MBUF_RING_MASK	(MBUF_RING_SIZE - 1)
+struct dma_bufs {
+	struct rte_mbuf *bufs[MBUF_RING_SIZE];
+	struct rte_mbuf *copies[MBUF_RING_SIZE];
+	uint16_t sent;
+};
+static struct dma_bufs dma_bufs[RTE_DMADEV_DEFAULT_MAX_DEVS];
+
 /* global transmission config */
 struct rxtx_transmission_config cfg;
@@ -131,36 +140,32 @@ print_port_stats(uint16_t port_id)
 
 /* Print out statistics for one IOAT rawdev device.
 */
 static void
-print_rawdev_stats(uint32_t dev_id, uint64_t *xstats,
-		unsigned int *ids_xstats, uint16_t nb_xstats,
-		struct rte_rawdev_xstats_name *names_xstats)
+print_dmadev_stats(uint32_t dev_id, struct rte_dma_stats stats)
 {
-	uint16_t i;
-
-	printf("\nIOAT channel %u", dev_id);
-	for (i = 0; i < nb_xstats; i++)
-		printf("\n\t %s: %*"PRIu64,
-			names_xstats[ids_xstats[i]].name,
-			(int)(37 - strlen(names_xstats[ids_xstats[i]].name)),
-			xstats[i]);
+	printf("\nDMA channel %u", dev_id);
+	printf("\n\t Total submitted ops: %"PRIu64"", stats.submitted);
+	printf("\n\t Total completed ops: %"PRIu64"", stats.completed);
+	printf("\n\t Total failed ops: %"PRIu64"", stats.errors);
 }
 
 static void
 print_total_stats(struct total_statistics *ts)
 {
 	printf("\nAggregate statistics ==============================="
-		"\nTotal packets Tx: %24"PRIu64" [pps]"
-		"\nTotal packets Rx: %24"PRIu64" [pps]"
-		"\nTotal packets dropped: %19"PRIu64" [pps]",
+		"\nTotal packets Tx: %22"PRIu64" [pkt/s]"
+		"\nTotal packets Rx: %22"PRIu64" [pkt/s]"
+		"\nTotal packets dropped: %17"PRIu64" [pkt/s]",
 		ts->total_packets_tx,
 		ts->total_packets_rx,
 		ts->total_packets_dropped);
 
 	if (copy_mode == COPY_MODE_IOAT_NUM) {
-		printf("\nTotal IOAT successful enqueues: %8"PRIu64" [enq/s]"
-			"\nTotal IOAT failed enqueues: %12"PRIu64" [enq/s]",
-			ts->total_successful_enqueues,
-			ts->total_failed_enqueues);
+		printf("\nTotal submitted ops: %19"PRIu64" [ops/s]"
+			"\nTotal completed ops: %19"PRIu64" [ops/s]"
+			"\nTotal failed ops: %22"PRIu64" [ops/s]",
+			ts->total_submitted,
+			ts->total_completed,
+			ts->total_failed);
 	}
 
 	printf("\n====================================================\n");
@@ -171,13 +176,10 @@ static void
 print_stats(char *prgname)
 {
 	struct total_statistics ts, delta_ts;
+	struct rte_dma_stats stats = {0};
 	uint32_t i, port_id, dev_id;
-	struct rte_rawdev_xstats_name *names_xstats;
-	uint64_t *xstats;
-	unsigned int *ids_xstats, nb_xstats;
 	char status_string[255]; /* to print at the top of the output */
 	int status_strlen;
-	int ret;
 
 	const char clr[] = { 27, '[', '2', 'J', '\0' };
 	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
@@ -203,48 +205,6 @@ print_stats(char *prgname)
 		sizeof(status_string) - status_strlen,
 		"Ring Size = %d", ring_size);
 
-	/* Allocate memory for xstats names and values */
-	ret = rte_rawdev_xstats_names_get(
-		cfg.ports[0].ioat_ids[0], NULL, 0);
-	if (ret < 0)
-		return;
-	nb_xstats = (unsigned int)ret;
-
-	names_xstats = malloc(sizeof(*names_xstats) * nb_xstats);
-	if (names_xstats == NULL) {
-		rte_exit(EXIT_FAILURE,
-			"Error allocating xstat names memory\n");
-	}
-	rte_rawdev_xstats_names_get(cfg.ports[0].ioat_ids[0],
-		names_xstats, nb_xstats);
-
-	ids_xstats = malloc(sizeof(*ids_xstats) * 2);
-	if (ids_xstats == NULL) {
-		rte_exit(EXIT_FAILURE,
-			"Error allocating xstat ids_xstats memory\n");
-	}
-
-	xstats = malloc(sizeof(*xstats) * 2);
-	if (xstats == NULL) {
-		rte_exit(EXIT_FAILURE,
-			"Error allocating xstat memory\n");
-	}
-
-	/* Get failed/successful enqueues stats index */
-	ids_xstats[0] = ids_xstats[1] = nb_xstats;
-	for (i = 0; i < nb_xstats; i++) {
-		if (!strcmp(names_xstats[i].name, "failed_enqueues"))
-			ids_xstats[0] = i;
-		else if (!strcmp(names_xstats[i].name, "successful_enqueues"))
-			ids_xstats[1] = i;
-		if (ids_xstats[0] < nb_xstats && ids_xstats[1] < nb_xstats)
-			break;
-	}
-	if (ids_xstats[0] == nb_xstats || ids_xstats[1] == nb_xstats) {
-		rte_exit(EXIT_FAILURE,
-			"Error getting failed/successful enqueues stats index\n");
-	}
-
 	memset(&ts, 0, sizeof(struct total_statistics));
 
 	while (!force_quit) {
@@ -276,17 +236,13 @@ print_stats(char *prgname)
 				uint32_t j;
 
 				for (j = 0; j < cfg.ports[i].nb_queues; j++) {
-					dev_id = cfg.ports[i].ioat_ids[j];
-					rte_rawdev_xstats_get(dev_id,
-						ids_xstats, xstats, 2);
-
-					print_rawdev_stats(dev_id, xstats,
-						ids_xstats, 2, names_xstats);
+					dev_id = cfg.ports[i].dmadev_ids[j];
+					rte_dma_stats_get(dev_id, 0, &stats);
+					print_dmadev_stats(dev_id, stats);
 
-					delta_ts.total_failed_enqueues +=
-						xstats[ids_xstats[0]];
-					delta_ts.total_successful_enqueues +=
-						xstats[ids_xstats[1]];
+					delta_ts.total_submitted += stats.submitted;
+					delta_ts.total_completed += stats.completed;
+					delta_ts.total_failed += stats.errors;
 				}
 			}
 		}
@@ -294,9 +250,9 @@ print_stats(char *prgname)
 		delta_ts.total_packets_tx -= ts.total_packets_tx;
 		delta_ts.total_packets_rx -= ts.total_packets_rx;
 		delta_ts.total_packets_dropped -= ts.total_packets_dropped;
-		delta_ts.total_failed_enqueues -= ts.total_failed_enqueues;
-		delta_ts.total_successful_enqueues -=
-			ts.total_successful_enqueues;
+		delta_ts.total_submitted -= ts.total_submitted;
+		delta_ts.total_completed -= ts.total_completed;
+		delta_ts.total_failed -= ts.total_failed;
 
 		printf("\n");
 		print_total_stats(&delta_ts);
@@ -306,14 +262,10 @@ print_stats(char *prgname)
 		ts.total_packets_tx += delta_ts.total_packets_tx;
 		ts.total_packets_rx += delta_ts.total_packets_rx;
 		ts.total_packets_dropped += delta_ts.total_packets_dropped;
-		ts.total_failed_enqueues += delta_ts.total_failed_enqueues;
-		ts.total_successful_enqueues +=
-			delta_ts.total_successful_enqueues;
+		ts.total_submitted += delta_ts.total_submitted;
+		ts.total_completed += delta_ts.total_completed;
+		ts.total_failed += delta_ts.total_failed;
 	}
-
-	free(names_xstats);
-	free(xstats);
-	free(ids_xstats);
 }
 
 static void
@@ -357,20 +309,22 @@ static uint32_t
 ioat_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
 	uint32_t nb_rx, uint16_t dev_id)
 {
+	struct dma_bufs *dma = &dma_bufs[dev_id];
 	int ret;
 	uint32_t i;
 
 	for (i = 0; i < nb_rx; i++) {
 		/* Perform data copy */
-		ret = rte_ioat_enqueue_copy(dev_id,
+		ret = rte_dma_copy(dev_id, 0,
 			rte_pktmbuf_iova(pkts[i]),
 			rte_pktmbuf_iova(pkts_copy[i]),
-			rte_pktmbuf_data_len(pkts[i]),
-			(uintptr_t)pkts[i],
-			(uintptr_t)pkts_copy[i]);
+			rte_pktmbuf_data_len(pkts[i]), 0);
 
-		if (ret != 1)
+		if (ret < 0)
 			break;
+
+		dma->bufs[ret & MBUF_RING_MASK] = pkts[i];
+		dma->copies[ret & MBUF_RING_MASK] = pkts_copy[i];
 	}
 
 	ret = i;
@@ -390,7 +344,7 @@ ioat_enqueue(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[],
 		n = ioat_enqueue_packets(pkts + i, pkts_copy + i, m, dev_id);
 		k += n;
 		if (n > 0)
-			rte_ioat_perform_ops(dev_id);
+			rte_dma_submit(dev_id, 0);
 
 		/* don't try to enqueue more if HW queue is full */
 		if (n != m)
@@ -404,20 +358,27 @@ static inline uint32_t
 ioat_dequeue(struct rte_mbuf *src[], struct rte_mbuf *dst[], uint32_t num,
 	uint16_t dev_id)
 {
-	int32_t rc;
+	struct dma_bufs *dma = &dma_bufs[dev_id];
+	uint16_t nb_dq, filled;
 	/* Dequeue the mbufs from IOAT device. Since all memory
 	 * is DPDK pinned memory and therefore all addresses should
 	 * be valid, we don't check for copy errors
 	 */
-	rc = rte_ioat_completed_ops(dev_id, num, NULL, NULL,
-		(void *)src, (void *)dst);
-	if (rc < 0) {
-		RTE_LOG(CRIT, IOAT,
-			"rte_ioat_completed_ops(%hu) failed, error: %d\n",
-			dev_id, rte_errno);
-		rc = 0;
+	nb_dq = rte_dma_completed(dev_id, 0, num, NULL, NULL);
+
+	/* Return early if no work to do */
+	if (unlikely(nb_dq == 0))
+		return nb_dq;
+
+	/* Populate pkts_copy with the copies bufs from dma->copies */
+	for (filled = 0; filled < nb_dq; filled++) {
+		src[filled] = dma->bufs[(dma->sent + filled) & MBUF_RING_MASK];
+		dst[filled] = dma->copies[(dma->sent + filled) & MBUF_RING_MASK];
 	}
-	return rc;
+	dma->sent += nb_dq;
+
+	return filled;
 }
 
 /* Receive packets on one port and enqueue to IOAT rawdev or rte_ring. 8< */
@@ -454,7 +415,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config)
 
 			/* enqueue packets for hardware copy */
 			nb_enq = ioat_enqueue(pkts_burst, pkts_burst_copy,
-				nb_rx, ioat_batch_sz, rx_config->ioat_ids[i]);
+				nb_rx, ioat_batch_sz, rx_config->dmadev_ids[i]);
 
 			/* free any not enqueued packets.
*/ rte_mempool_put_bulk(ioat_pktmbuf_pool, @@ -469,7 +430,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config) /* get completed copies */ nb_rx = ioat_dequeue(pkts_burst, pkts_burst_copy, - MAX_PKT_BURST, rx_config->ioat_ids[i]); + MAX_PKT_BURST, rx_config->dmadev_ids[i]); } else { /* Perform packet software copy, free source packets */ for (j = 0; j < nb_rx; j++) @@ -536,7 +497,7 @@ rx_main_loop(void) uint16_t i; uint16_t nb_ports = cfg.nb_ports; - RTE_LOG(INFO, IOAT, "Entering main rx loop for copy on lcore %u\n", + RTE_LOG(INFO, DMA, "Entering main rx loop for copy on lcore %u\n", rte_lcore_id()); while (!force_quit) @@ -551,7 +512,7 @@ tx_main_loop(void) uint16_t i; uint16_t nb_ports = cfg.nb_ports; - RTE_LOG(INFO, IOAT, "Entering main tx loop for copy on lcore %u\n", + RTE_LOG(INFO, DMA, "Entering main tx loop for copy on lcore %u\n", rte_lcore_id()); while (!force_quit) @@ -566,7 +527,7 @@ rxtx_main_loop(void) uint16_t i; uint16_t nb_ports = cfg.nb_ports; - RTE_LOG(INFO, IOAT, "Entering main rx and tx loop for copy on" + RTE_LOG(INFO, DMA, "Entering main rx and tx loop for copy on" " lcore %u\n", rte_lcore_id()); while (!force_quit) @@ -581,7 +542,7 @@ static void start_forwarding_cores(void) { uint32_t lcore_id = rte_lcore_id(); - RTE_LOG(INFO, IOAT, "Entering %s on lcore %u\n", + RTE_LOG(INFO, DMA, "Entering %s on lcore %u\n", __func__, rte_lcore_id()); if (cfg.nb_lcores == 1) { @@ -794,20 +755,28 @@ check_link_status(uint32_t port_mask) static void configure_rawdev_queue(uint32_t dev_id) { - struct rte_ioat_rawdev_config dev_config = { - .ring_size = ring_size, - .no_prefetch_completions = (cfg.nb_lcores > 1), + struct rte_dma_info info; + struct rte_dma_conf dev_config = { .nb_vchans = 1 }; + struct rte_dma_vchan_conf qconf = { + .direction = RTE_DMA_DIR_MEM_TO_MEM, + .nb_desc = ring_size }; - struct rte_rawdev_info info = { .dev_private = &dev_config }; + uint16_t vchan = 0; - if (rte_rawdev_configure(dev_id, &info, sizeof(dev_config)) != 0) { - 
rte_exit(EXIT_FAILURE, - "Error with rte_rawdev_configure()\n"); + if (rte_dma_configure(dev_id, &dev_config) != 0) + rte_exit(EXIT_FAILURE, "Error with rte_dma_configure()\n"); + + if (rte_dma_vchan_setup(dev_id, vchan, &qconf) != 0) { + printf("Error with queue configuration\n"); + rte_panic(); } - if (rte_rawdev_start(dev_id) != 0) { - rte_exit(EXIT_FAILURE, - "Error with rte_rawdev_start()\n"); + rte_dma_info_get(dev_id, &info); + if (info.nb_vchans != 1) { + printf("Error, no configured queues reported on device id %u\n", dev_id); + rte_panic(); } + if (rte_dma_start(dev_id) != 0) + rte_exit(EXIT_FAILURE, "Error with rte_dma_start()\n"); } /* >8 End of configuration of device. */ @@ -820,18 +789,16 @@ assign_rawdevs(void) for (i = 0; i < cfg.nb_ports; i++) { for (j = 0; j < cfg.ports[i].nb_queues; j++) { - struct rte_rawdev_info rdev_info = { 0 }; + struct rte_dma_info dmadev_info = { 0 }; do { - if (rdev_id == rte_rawdev_count()) + if (rdev_id == rte_dma_count_avail()) goto end; - rte_rawdev_info_get(rdev_id++, &rdev_info, 0); - } while (rdev_info.driver_name == NULL || - strcmp(rdev_info.driver_name, - IOAT_PMD_RAWDEV_NAME_STR) != 0); + rte_dma_info_get(rdev_id++, &dmadev_info); + } while (!rte_dma_is_valid(rdev_id)); - cfg.ports[i].ioat_ids[j] = rdev_id - 1; - configure_rawdev_queue(cfg.ports[i].ioat_ids[j]); + cfg.ports[i].dmadev_ids[j] = rdev_id - 1; + configure_rawdev_queue(cfg.ports[i].dmadev_ids[j]); ++nb_rawdev; } } @@ -840,7 +807,7 @@ assign_rawdevs(void) rte_exit(EXIT_FAILURE, "Not enough IOAT rawdevs (%u) for all queues (%u).\n", nb_rawdev, cfg.nb_ports * cfg.ports[0].nb_queues); - RTE_LOG(INFO, IOAT, "Number of used rawdevs: %u.\n", nb_rawdev); + RTE_LOG(INFO, DMA, "Number of used rawdevs: %u.\n", nb_rawdev); } /* >8 End of using IOAT rawdev API functions. 
*/ @@ -1084,15 +1051,15 @@ main(int argc, char **argv) printf("Closing port %d\n", cfg.ports[i].rxtx_port); ret = rte_eth_dev_stop(cfg.ports[i].rxtx_port); if (ret != 0) - RTE_LOG(ERR, IOAT, "rte_eth_dev_stop: err=%s, port=%u\n", + RTE_LOG(ERR, DMA, "rte_eth_dev_stop: err=%s, port=%u\n", rte_strerror(-ret), cfg.ports[i].rxtx_port); rte_eth_dev_close(cfg.ports[i].rxtx_port); if (copy_mode == COPY_MODE_IOAT_NUM) { for (j = 0; j < cfg.ports[i].nb_queues; j++) { printf("Stopping rawdev %d\n", - cfg.ports[i].ioat_ids[j]); - rte_rawdev_stop(cfg.ports[i].ioat_ids[j]); + cfg.ports[i].dmadev_ids[j]); + rte_dma_stop(cfg.ports[i].dmadev_ids[j]); } } else /* copy_mode == COPY_MODE_SW_NUM */ rte_ring_free(cfg.ports[i].rx_to_tx_ring); diff --git a/examples/ioat/meson.build b/examples/ioat/meson.build index 68bce1ab03..c1dd7c9b29 100644 --- a/examples/ioat/meson.build +++ b/examples/ioat/meson.build @@ -1,5 +1,5 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2019 Intel Corporation +# Copyright(c) 2019-2021 Intel Corporation # meson file, for building this example as part of a main DPDK build. 
# @@ -7,12 +7,8 @@ # DPDK instance, use 'make' allow_experimental_apis = true -build = dpdk_conf.has('RTE_RAW_IOAT') -if not build - subdir_done() -endif -deps += ['raw_ioat'] +deps += ['dmadev'] sources = files( 'ioatfwd.c', From patchwork Fri Sep 17 16:41:35 2021 X-Patchwork-Submitter: Kevin Laatz X-Patchwork-Id: 99245 From: Kevin Laatz To: dev@dpdk.org Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, conor.walsh@intel.com, Kevin Laatz Date: Fri, 17 Sep 2021 16:41:35 +0000 Message-Id: <20210917164136.3499904-6-kevin.laatz@intel.com> In-Reply-To: <20210917164136.3499904-1-kevin.laatz@intel.com> References: <20210910172737.2561156-1-kevin.laatz@intel.com> <20210917164136.3499904-1-kevin.laatz@intel.com> Subject: [dpdk-dev] [PATCH v2
5/6] examples/ioat: update naming to match change to dmadev Existing functions, structures, defines etc need to be updated to reflect the change to using the dmadev APIs. Signed-off-by: Kevin Laatz Reviewed-by: Conor Walsh --- examples/ioat/ioatfwd.c | 175 ++++++++++++++++++++-------------------- 1 file changed, 87 insertions(+), 88 deletions(-) diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c index df6a28f9e5..d4bff58633 100644 --- a/examples/ioat/ioatfwd.c +++ b/examples/ioat/ioatfwd.c @@ -52,13 +52,13 @@ struct rxtx_transmission_config { /* >8 End of configuration of ports and number of assigned lcores. */ /* per-port statistics struct */ -struct ioat_port_statistics { +struct dma_port_statistics { uint64_t rx[RTE_MAX_ETHPORTS]; uint64_t tx[RTE_MAX_ETHPORTS]; uint64_t tx_dropped[RTE_MAX_ETHPORTS]; uint64_t copy_dropped[RTE_MAX_ETHPORTS]; }; -struct ioat_port_statistics port_statistics; +struct dma_port_statistics port_statistics; struct total_statistics { uint64_t total_packets_dropped; uint64_t total_packets_tx; @@ -71,14 +71,14 @@ struct total_statistics { typedef enum copy_mode_t { #define COPY_MODE_SW "sw" COPY_MODE_SW_NUM, -#define COPY_MODE_IOAT "hw" - COPY_MODE_IOAT_NUM, +#define COPY_MODE_DMA "hw" + COPY_MODE_DMA_NUM, COPY_MODE_INVALID_NUM, COPY_MODE_SIZE_NUM = COPY_MODE_INVALID_NUM } copy_mode_t; /* mask of enabled ports */ -static uint32_t ioat_enabled_port_mask; +static uint32_t dma_enabled_port_mask; /* number of RX queues per port */ static uint16_t nb_queues = 1; @@ -87,9 +87,9 @@ static uint16_t nb_queues = 1; static int mac_updating = 1; /* hardare copy mode enabled by default.
*/ -static copy_mode_t copy_mode = COPY_MODE_IOAT_NUM; +static copy_mode_t copy_mode = COPY_MODE_DMA_NUM; -/* size of IOAT rawdev ring for hardware copy mode or +/* size of descriptor ring for hardware copy mode or * rte_ring for software copy mode */ static unsigned short ring_size = 2048; @@ -113,14 +113,14 @@ static uint16_t nb_txd = TX_DEFAULT_RINGSIZE; static volatile bool force_quit; -static uint32_t ioat_batch_sz = MAX_PKT_BURST; +static uint32_t dma_batch_sz = MAX_PKT_BURST; static uint32_t max_frame_size = RTE_ETHER_MAX_LEN; /* ethernet addresses of ports */ -static struct rte_ether_addr ioat_ports_eth_addr[RTE_MAX_ETHPORTS]; +static struct rte_ether_addr dma_ports_eth_addr[RTE_MAX_ETHPORTS]; static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS]; -struct rte_mempool *ioat_pktmbuf_pool; +struct rte_mempool *dma_pktmbuf_pool; /* Print out statistics for one port. */ static void @@ -138,7 +138,7 @@ print_port_stats(uint16_t port_id) port_statistics.copy_dropped[port_id]); } -/* Print out statistics for one IOAT rawdev device. */ +/* Print out statistics for one dmadev device. */ static void print_dmadev_stats(uint32_t dev_id, struct rte_dma_stats stats) { @@ -159,7 +159,7 @@ print_total_stats(struct total_statistics *ts) ts->total_packets_rx, ts->total_packets_dropped); - if (copy_mode == COPY_MODE_IOAT_NUM) { + if (copy_mode == COPY_MODE_DMA_NUM) { printf("\nTotal submitted ops: %19"PRIu64" [ops/s]" "\nTotal completed ops: %19"PRIu64" [ops/s]" "\nTotal failed ops: %22"PRIu64" [ops/s]", @@ -193,7 +193,7 @@ print_stats(char *prgname) status_strlen += snprintf(status_string + status_strlen, sizeof(status_string) - status_strlen, "Copy Mode = %s,\n", copy_mode == COPY_MODE_SW_NUM ? - COPY_MODE_SW : COPY_MODE_IOAT); + COPY_MODE_SW : COPY_MODE_DMA); status_strlen += snprintf(status_string + status_strlen, sizeof(status_string) - status_strlen, "Updating MAC = %s, ", mac_updating ? 
@@ -232,7 +232,7 @@ print_stats(char *prgname) delta_ts.total_packets_rx += port_statistics.rx[port_id]; - if (copy_mode == COPY_MODE_IOAT_NUM) { + if (copy_mode == COPY_MODE_DMA_NUM) { uint32_t j; for (j = 0; j < cfg.ports[i].nb_queues; j++) { @@ -283,7 +283,7 @@ update_mac_addrs(struct rte_mbuf *m, uint32_t dest_portid) *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40); /* src addr */ - rte_ether_addr_copy(&ioat_ports_eth_addr[dest_portid], ð->s_addr); + rte_ether_addr_copy(&dma_ports_eth_addr[dest_portid], ð->s_addr); } /* Perform packet copy there is a user-defined function. 8< */ @@ -306,7 +306,7 @@ pktmbuf_sw_copy(struct rte_mbuf *src, struct rte_mbuf *dst) /* >8 End of perform packet copy there is a user-defined function. */ static uint32_t -ioat_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[], +dma_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[], uint32_t nb_rx, uint16_t dev_id) { struct dma_bufs *dma = &dma_bufs[dev_id]; @@ -332,7 +332,7 @@ ioat_enqueue_packets(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[], } static inline uint32_t -ioat_enqueue(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[], +dma_enqueue(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[], uint32_t num, uint32_t step, uint16_t dev_id) { uint32_t i, k, m, n; @@ -341,7 +341,7 @@ ioat_enqueue(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[], for (i = 0; i < num; i += m) { m = RTE_MIN(step, num - i); - n = ioat_enqueue_packets(pkts + i, pkts_copy + i, m, dev_id); + n = dma_enqueue_packets(pkts + i, pkts_copy + i, m, dev_id); k += n; if (n > 0) rte_dma_submit(dev_id, 0); @@ -355,12 +355,12 @@ ioat_enqueue(struct rte_mbuf *pkts[], struct rte_mbuf *pkts_copy[], } static inline uint32_t -ioat_dequeue(struct rte_mbuf *src[], struct rte_mbuf *dst[], uint32_t num, +dma_dequeue(struct rte_mbuf *src[], struct rte_mbuf *dst[], uint32_t num, uint16_t dev_id) { struct dma_bufs *dma = &dma_bufs[dev_id]; uint16_t nb_dq, 
filled; - /* Dequeue the mbufs from IOAT device. Since all memory + /* Dequeue the mbufs from DMA device. Since all memory * is DPDK pinned memory and therefore all addresses should * be valid, we don't check for copy errors */ @@ -370,7 +370,7 @@ ioat_dequeue(struct rte_mbuf *src[], struct rte_mbuf *dst[], uint32_t num, if (unlikely(nb_dq == 0)) return nb_dq; - /* Populate pkts_copy with the copies bufs from dma->copies */ + /* Populate pkts_copy with the copies bufs from dma->copies for tx */ for (filled = 0; filled < nb_dq; filled++) { src[filled] = dma->bufs[(dma->sent + filled) & MBUF_RING_MASK]; dst[filled] = dma->copies[(dma->sent + filled) & MBUF_RING_MASK]; @@ -381,9 +381,9 @@ ioat_dequeue(struct rte_mbuf *src[], struct rte_mbuf *dst[], uint32_t num, } -/* Receive packets on one port and enqueue to IOAT rawdev or rte_ring. 8< */ +/* Receive packets on one port and enqueue to dmadev or rte_ring. 8< */ static void -ioat_rx_port(struct rxtx_port_config *rx_config) +dma_rx_port(struct rxtx_port_config *rx_config) { int32_t ret; uint32_t nb_rx, nb_enq, i, j; @@ -400,7 +400,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config) port_statistics.rx[rx_config->rxtx_port] += nb_rx; - ret = rte_mempool_get_bulk(ioat_pktmbuf_pool, + ret = rte_mempool_get_bulk(dma_pktmbuf_pool, (void *)pkts_burst_copy, nb_rx); if (unlikely(ret < 0)) @@ -411,17 +411,16 @@ ioat_rx_port(struct rxtx_port_config *rx_config) pktmbuf_metadata_copy(pkts_burst[j], pkts_burst_copy[j]); - if (copy_mode == COPY_MODE_IOAT_NUM) { - + if (copy_mode == COPY_MODE_DMA_NUM) { /* enqueue packets for hardware copy */ - nb_enq = ioat_enqueue(pkts_burst, pkts_burst_copy, - nb_rx, ioat_batch_sz, rx_config->dmadev_ids[i]); + nb_enq = dma_enqueue(pkts_burst, pkts_burst_copy, + nb_rx, dma_batch_sz, rx_config->dmadev_ids[i]); /* free any not enqueued packets. 
*/ - rte_mempool_put_bulk(ioat_pktmbuf_pool, + rte_mempool_put_bulk(dma_pktmbuf_pool, (void *)&pkts_burst[nb_enq], nb_rx - nb_enq); - rte_mempool_put_bulk(ioat_pktmbuf_pool, + rte_mempool_put_bulk(dma_pktmbuf_pool, (void *)&pkts_burst_copy[nb_enq], nb_rx - nb_enq); @@ -429,7 +428,7 @@ ioat_rx_port(struct rxtx_port_config *rx_config) (nb_rx - nb_enq); /* get completed copies */ - nb_rx = ioat_dequeue(pkts_burst, pkts_burst_copy, + nb_rx = dma_dequeue(pkts_burst, pkts_burst_copy, MAX_PKT_BURST, rx_config->dmadev_ids[i]); } else { /* Perform packet software copy, free source packets */ @@ -438,14 +437,14 @@ ioat_rx_port(struct rxtx_port_config *rx_config) pkts_burst_copy[j]); } - rte_mempool_put_bulk(ioat_pktmbuf_pool, + rte_mempool_put_bulk(dma_pktmbuf_pool, (void *)pkts_burst, nb_rx); nb_enq = rte_ring_enqueue_burst(rx_config->rx_to_tx_ring, (void *)pkts_burst_copy, nb_rx, NULL); /* Free any not enqueued packets. */ - rte_mempool_put_bulk(ioat_pktmbuf_pool, + rte_mempool_put_bulk(dma_pktmbuf_pool, (void *)&pkts_burst_copy[nb_enq], nb_rx - nb_enq); @@ -453,11 +452,11 @@ ioat_rx_port(struct rxtx_port_config *rx_config) (nb_rx - nb_enq); } } -/* >8 End of receive packets on one port and enqueue to IOAT rawdev or rte_ring. */ +/* >8 End of receive packets on one port and enqueue to dmadev or rte_ring. */ -/* Transmit packets from IOAT rawdev/rte_ring for one port. 8< */ +/* Transmit packets from dmadev/rte_ring for one port. 8< */ static void -ioat_tx_port(struct rxtx_port_config *tx_config) +dma_tx_port(struct rxtx_port_config *tx_config) { uint32_t i, j, nb_dq, nb_tx; struct rte_mbuf *mbufs[MAX_PKT_BURST]; @@ -484,13 +483,13 @@ ioat_tx_port(struct rxtx_port_config *tx_config) /* Free any unsent packets. */ if (unlikely(nb_tx < nb_dq)) - rte_mempool_put_bulk(ioat_pktmbuf_pool, + rte_mempool_put_bulk(dma_pktmbuf_pool, (void *)&mbufs[nb_tx], nb_dq - nb_tx); } } -/* >8 End of transmitting packets from IOAT. */ +/* >8 End of transmitting packets from dmadev. 
*/ -/* Main rx processing loop for IOAT rawdev. */ +/* Main rx processing loop for dmadev. */ static void rx_main_loop(void) { @@ -502,7 +501,7 @@ rx_main_loop(void) while (!force_quit) for (i = 0; i < nb_ports; i++) - ioat_rx_port(&cfg.ports[i]); + dma_rx_port(&cfg.ports[i]); } /* Main tx processing loop for hardware copy. */ @@ -517,7 +516,7 @@ tx_main_loop(void) while (!force_quit) for (i = 0; i < nb_ports; i++) - ioat_tx_port(&cfg.ports[i]); + dma_tx_port(&cfg.ports[i]); } /* Main rx and tx loop if only one worker lcore available */ @@ -532,8 +531,8 @@ rxtx_main_loop(void) while (!force_quit) for (i = 0; i < nb_ports; i++) { - ioat_rx_port(&cfg.ports[i]); - ioat_tx_port(&cfg.ports[i]); + dma_rx_port(&cfg.ports[i]); + dma_tx_port(&cfg.ports[i]); } } @@ -563,7 +562,7 @@ static void start_forwarding_cores(void) /* Display usage */ static void -ioat_usage(const char *prgname) +dma_usage(const char *prgname) { printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n" " -b --dma-batch-size: number of requests per DMA batch\n" @@ -575,12 +574,12 @@ ioat_usage(const char *prgname) " - The source MAC address is replaced by the TX port MAC address\n" " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n" " -c --copy-type CT: type of copy: sw|hw\n" - " -s --ring-size RS: size of IOAT rawdev ring for hardware copy mode or rte_ring for software copy mode\n", + " -s --ring-size RS: size of dmadev descriptor ring for hardware copy mode or rte_ring for software copy mode\n", prgname); } static int -ioat_parse_portmask(const char *portmask) +dma_parse_portmask(const char *portmask) { char *end = NULL; unsigned long pm; @@ -594,19 +593,19 @@ ioat_parse_portmask(const char *portmask) } static copy_mode_t -ioat_parse_copy_mode(const char *copy_mode) +dma_parse_copy_mode(const char *copy_mode) { if (strcmp(copy_mode, COPY_MODE_SW) == 0) return COPY_MODE_SW_NUM; - else if (strcmp(copy_mode, COPY_MODE_IOAT) == 0) - return COPY_MODE_IOAT_NUM; + else if (strcmp(copy_mode, 
COPY_MODE_DMA) == 0) + return COPY_MODE_DMA_NUM; return COPY_MODE_INVALID_NUM; } /* Parse the argument given in the command line of the application */ static int -ioat_parse_args(int argc, char **argv, unsigned int nb_ports) +dma_parse_args(int argc, char **argv, unsigned int nb_ports) { static const char short_options[] = "b:" /* dma batch size */ @@ -635,7 +634,7 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports) int option_index; char *prgname = argv[0]; - ioat_enabled_port_mask = default_port_mask; + dma_enabled_port_mask = default_port_mask; argvopt = argv; while ((opt = getopt_long(argc, argvopt, short_options, @@ -643,10 +642,10 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports) switch (opt) { case 'b': - ioat_batch_sz = atoi(optarg); - if (ioat_batch_sz > MAX_PKT_BURST) { + dma_batch_sz = atoi(optarg); + if (dma_batch_sz > MAX_PKT_BURST) { printf("Invalid dma batch size, %s.\n", optarg); - ioat_usage(prgname); + dma_usage(prgname); return -1; } break; @@ -654,19 +653,19 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports) max_frame_size = atoi(optarg); if (max_frame_size > RTE_ETHER_MAX_JUMBO_FRAME_LEN) { printf("Invalid max frame size, %s.\n", optarg); - ioat_usage(prgname); + dma_usage(prgname); return -1; } break; /* portmask */ case 'p': - ioat_enabled_port_mask = ioat_parse_portmask(optarg); - if (ioat_enabled_port_mask & ~default_port_mask || - ioat_enabled_port_mask <= 0) { + dma_enabled_port_mask = dma_parse_portmask(optarg); + if (dma_enabled_port_mask & ~default_port_mask || + dma_enabled_port_mask <= 0) { printf("Invalid portmask, %s, suggest 0x%x\n", optarg, default_port_mask); - ioat_usage(prgname); + dma_usage(prgname); return -1; } break; @@ -676,16 +675,16 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports) if (nb_queues == 0 || nb_queues > MAX_RX_QUEUES_COUNT) { printf("Invalid RX queues number %s. 
Max %u\n", optarg, MAX_RX_QUEUES_COUNT); - ioat_usage(prgname); + dma_usage(prgname); return -1; } break; case 'c': - copy_mode = ioat_parse_copy_mode(optarg); + copy_mode = dma_parse_copy_mode(optarg); if (copy_mode == COPY_MODE_INVALID_NUM) { printf("Invalid copy type. Use: sw, hw\n"); - ioat_usage(prgname); + dma_usage(prgname); return -1; } break; @@ -694,7 +693,7 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports) ring_size = atoi(optarg); if (ring_size == 0) { printf("Invalid ring size, %s.\n", optarg); - ioat_usage(prgname); + dma_usage(prgname); return -1; } break; @@ -704,7 +703,7 @@ ioat_parse_args(int argc, char **argv, unsigned int nb_ports) break; default: - ioat_usage(prgname); + dma_usage(prgname); return -1; } } @@ -753,7 +752,7 @@ check_link_status(uint32_t port_mask) /* Configuration of device. 8< */ static void -configure_rawdev_queue(uint32_t dev_id) +configure_dmadev_queue(uint32_t dev_id) { struct rte_dma_info info; struct rte_dma_conf dev_config = { .nb_vchans = 1 }; @@ -780,11 +779,11 @@ configure_rawdev_queue(uint32_t dev_id) } /* >8 End of configuration of device. */ -/* Using IOAT rawdev API functions. 8< */ +/* Using dmadev API functions. 
8< */ static void -assign_rawdevs(void) +assign_dmadevs(void) { - uint16_t nb_rawdev = 0, rdev_id = 0; + uint16_t nb_dmadev = 0, rdev_id = 0; uint32_t i, j; for (i = 0; i < cfg.nb_ports; i++) { @@ -798,18 +797,18 @@ assign_rawdevs(void) } while (!rte_dma_is_valid(rdev_id)); cfg.ports[i].dmadev_ids[j] = rdev_id - 1; - configure_rawdev_queue(cfg.ports[i].dmadev_ids[j]); - ++nb_rawdev; + configure_dmadev_queue(cfg.ports[i].dmadev_ids[j]); + ++nb_dmadev; } } end: - if (nb_rawdev < cfg.nb_ports * cfg.ports[0].nb_queues) + if (nb_dmadev < cfg.nb_ports * cfg.ports[0].nb_queues) rte_exit(EXIT_FAILURE, - "Not enough IOAT rawdevs (%u) for all queues (%u).\n", - nb_rawdev, cfg.nb_ports * cfg.ports[0].nb_queues); - RTE_LOG(INFO, DMA, "Number of used rawdevs: %u.\n", nb_rawdev); + "Not enough dmadevs (%u) for all queues (%u).\n", + nb_dmadev, cfg.nb_ports * cfg.ports[0].nb_queues); + RTE_LOG(INFO, DMA, "Number of used dmadevs: %u.\n", nb_dmadev); } -/* >8 End of using IOAT rawdev API functions. */ +/* >8 End of using dmadev API functions. */ /* Assign ring structures for packet exchanging. 
8< */ static void @@ -867,7 +866,7 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues) } /* Skip ports that are not enabled */ - if ((ioat_enabled_port_mask & (1 << portid)) == 0) { + if ((dma_enabled_port_mask & (1 << portid)) == 0) { printf("Skipping disabled port %u\n", portid); return; } @@ -894,7 +893,7 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues) "Cannot adjust number of descriptors: err=%d, port=%u\n", ret, portid); - rte_eth_macaddr_get(portid, &ioat_ports_eth_addr[portid]); + rte_eth_macaddr_get(portid, &dma_ports_eth_addr[portid]); /* Init RX queues */ rxq_conf = dev_info.default_rxconf; @@ -953,7 +952,7 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues) printf("Port %u, MAC address: " RTE_ETHER_ADDR_PRT_FMT "\n\n", portid, - RTE_ETHER_ADDR_BYTES(&ioat_ports_eth_addr[portid])); + RTE_ETHER_ADDR_BYTES(&dma_ports_eth_addr[portid])); cfg.ports[cfg.nb_ports].rxtx_port = portid; cfg.ports[cfg.nb_ports++].nb_queues = nb_queues; @@ -995,9 +994,9 @@ main(int argc, char **argv) rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n"); /* Parse application arguments (after the EAL ones) */ - ret = ioat_parse_args(argc, argv, nb_ports); + ret = dma_parse_args(argc, argv, nb_ports); if (ret < 0) - rte_exit(EXIT_FAILURE, "Invalid IOAT arguments\n"); + rte_exit(EXIT_FAILURE, "Invalid DMA arguments\n"); /* Allocates mempool to hold the mbufs. 
8< */ nb_mbufs = RTE_MAX(nb_ports * (nb_queues * (nb_rxd + nb_txd + @@ -1008,23 +1007,23 @@ main(int argc, char **argv) /* Create the mbuf pool */ sz = max_frame_size + RTE_PKTMBUF_HEADROOM; sz = RTE_MAX(sz, (size_t)RTE_MBUF_DEFAULT_BUF_SIZE); - ioat_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs, + dma_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs, MEMPOOL_CACHE_SIZE, 0, sz, rte_socket_id()); - if (ioat_pktmbuf_pool == NULL) + if (dma_pktmbuf_pool == NULL) rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n"); /* >8 End of allocates mempool to hold the mbufs. */ /* Initialize each port. 8< */ cfg.nb_ports = 0; RTE_ETH_FOREACH_DEV(portid) - port_init(portid, ioat_pktmbuf_pool, nb_queues); + port_init(portid, dma_pktmbuf_pool, nb_queues); /* >8 End of initializing each port. */ /* Initialize port xstats */ memset(&port_statistics, 0, sizeof(port_statistics)); /* Assigning each port resources. 8< */ - while (!check_link_status(ioat_enabled_port_mask) && !force_quit) + while (!check_link_status(dma_enabled_port_mask) && !force_quit) sleep(1); /* Check if there is enough lcores for all ports. */ @@ -1033,8 +1032,8 @@ main(int argc, char **argv) rte_exit(EXIT_FAILURE, "There should be at least one worker lcore.\n"); - if (copy_mode == COPY_MODE_IOAT_NUM) - assign_rawdevs(); + if (copy_mode == COPY_MODE_DMA_NUM) + assign_dmadevs(); assign_rings(); /* >8 End of assigning each port resources. 
*/ @@ -1055,9 +1054,9 @@ main(int argc, char **argv) rte_strerror(-ret), cfg.ports[i].rxtx_port); rte_eth_dev_close(cfg.ports[i].rxtx_port); - if (copy_mode == COPY_MODE_IOAT_NUM) { + if (copy_mode == COPY_MODE_DMA_NUM) { for (j = 0; j < cfg.ports[i].nb_queues; j++) { - printf("Stopping rawdev %d\n", + printf("Stopping dmadev %d\n", cfg.ports[i].dmadev_ids[j]); rte_dma_stop(cfg.ports[i].dmadev_ids[j]); } From patchwork Fri Sep 17 16:41:36 2021 X-Patchwork-Submitter: Kevin Laatz X-Patchwork-Id: 99246 From: Kevin Laatz To: dev@dpdk.org Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, conor.walsh@intel.com, Kevin Laatz Date: Fri, 17 Sep 2021 16:41:36 +0000 Message-Id: <20210917164136.3499904-7-kevin.laatz@intel.com> In-Reply-To:
<20210917164136.3499904-1-kevin.laatz@intel.com> References: <20210910172737.2561156-1-kevin.laatz@intel.com> <20210917164136.3499904-1-kevin.laatz@intel.com> Subject: [dpdk-dev] [PATCH v2 6/6] examples/ioat: rename application to dmafwd Since the APIs have been updated from rawdev to dmadev, the application should also be renamed to match. This patch also includes the documentation updates for the renaming. Signed-off-by: Kevin Laatz Reviewed-by: Conor Walsh --- MAINTAINERS | 7 +- .../sample_app_ug/{ioat.rst => dma.rst} | 114 +++++++++--------- doc/guides/sample_app_ug/index.rst | 2 +- doc/guides/sample_app_ug/intro.rst | 4 +- examples/{ioat => dma}/Makefile | 4 +- examples/{ioat/ioatfwd.c => dma/dmafwd.c} | 0 examples/{ioat => dma}/meson.build | 2 +- examples/meson.build | 2 +- 8 files changed, 71 insertions(+), 64 deletions(-) rename doc/guides/sample_app_ug/{ioat.rst => dma.rst} (73%) rename examples/{ioat => dma}/Makefile (97%) rename examples/{ioat/ioatfwd.c => dma/dmafwd.c} (100%) rename examples/{ioat => dma}/meson.build (94%) diff --git a/MAINTAINERS b/MAINTAINERS index 70993d23e8..500fa94c58 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1326,8 +1326,6 @@ IOAT Rawdev M: Bruce Richardson F: drivers/raw/ioat/ F: doc/guides/rawdevs/ioat.rst -F: examples/ioat/ -F: doc/guides/sample_app_ug/ioat.rst NXP DPAA2 QDMA M: Nipun Gupta @@ -1698,6 +1696,11 @@ F: doc/guides/tools/proc_info.rst Other Example Applications -------------------------- +DMAdev example +M: Kevin Laatz +F: examples/dma/ +F: doc/guides/sample_app_ug/dma.rst + Ethtool example F: examples/ethtool/ F: doc/guides/sample_app_ug/ethtool.rst diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/dma.rst similarity index 73% rename from
doc/guides/sample_app_ug/ioat.rst rename to doc/guides/sample_app_ug/dma.rst index ee0a627b06..3246c780ac 100644 --- a/doc/guides/sample_app_ug/ioat.rst +++ b/doc/guides/sample_app_ug/dma.rst @@ -1,17 +1,17 @@ .. SPDX-License-Identifier: BSD-3-Clause - Copyright(c) 2019 Intel Corporation. + Copyright(c) 2019-2021 Intel Corporation. .. include:: -Packet copying using Intel\ |reg| QuickData Technology -====================================================== +Packet copying using DMAdev library +=================================== Overview -------- This sample is intended as a demonstration of the basic components of a DPDK -forwarding application and example of how to use IOAT driver API to make -packets copies. +forwarding application and example of how to use the DMAdev API to make a packet +copy application. Also while forwarding, the MAC addresses are affected as follows: @@ -29,7 +29,7 @@ Compiling the Application To compile the sample application see :doc:`compiling`. -The application is located in the ``ioat`` sub-directory. +The application is located in the ``dma`` sub-directory. Running the Application @@ -38,32 +38,36 @@ Running the Application In order to run the hardware copy application, the copying device needs to be bound to user-space IO driver. -Refer to the "IOAT Rawdev Driver" chapter in the "Rawdev Drivers" document -for information on using the driver. +Refer to the "DMAdev library" chapter in the "Programmers guide" for information +on using the library. The application requires a number of command line options: .. 
code-block:: console - .//examples/dpdk-ioat [EAL options] -- [-p MASK] [-q NQ] [-s RS] [-c ] - [--[no-]mac-updating] + .//examples/dpdk-dma [EAL options] -- [-p MASK] [-q NQ] [-s RS] [-c ] + [--[no-]mac-updating] [-f FS] [-b BS] where, * p MASK: A hexadecimal bitmask of the ports to configure (default is all) -* q NQ: Number of Rx queues used per port equivalent to CBDMA channels +* q NQ: Number of Rx queues used per port equivalent to DMA channels per port (default is 1) * c CT: Performed packet copy type: software (sw) or hardware using DMA (hw) (default is hw) -* s RS: Size of IOAT rawdev ring for hardware copy mode or rte_ring for +* s RS: Size of dmadev descriptor ring for hardware copy mode or rte_ring for software copy mode (default is 2048) * --[no-]mac-updating: Whether MAC address of packets should be changed or not (default is mac-updating) +* f FS: set the max frame size + +* b BS: set the DMA batch size + The application can be launched in various configurations depending on provided parameters. The app can use up to 2 lcores: one of them receives incoming traffic and makes a copy of each packet. The second lcore then @@ -81,7 +85,7 @@ updating issue the command: .. code-block:: console - $ .//examples/dpdk-ioat -l 0-2 -n 2 -- -p 0x1 --mac-updating -c sw + $ .//examples/dpdk-dma -l 0-2 -n 2 -- -p 0x1 --mac-updating -c sw To run the application in a Linux environment with 2 lcores (the main lcore, plus one forwarding core), 2 ports (ports 0 and 1), hardware copying and no MAC @@ -89,7 +93,7 @@ updating issue the command: .. code-block:: console - $ .//examples/dpdk-ioat -l 0-1 -n 1 -- -p 0x3 --no-mac-updating -c hw + $ .//examples/dpdk-dma -l 0-1 -n 1 -- -p 0x3 --no-mac-updating -c hw Refer to the *DPDK Getting Started Guide* for general information on running applications and the Environment Abstraction Layer (EAL) options. @@ -114,7 +118,7 @@ The first task is to initialize the Environment Abstraction Layer (EAL). 
The ``argc`` and ``argv`` arguments are provided to the ``rte_eal_init()`` function. The value returned is the number of parsed arguments: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Init EAL. 8< :end-before: >8 End of init EAL. @@ -124,7 +128,7 @@ function. The value returned is the number of parsed arguments: The ``main()`` also allocates a mempool to hold the mbufs (Message Buffers) used by the application: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Allocates mempool to hold the mbufs. 8< :end-before: >8 End of allocates mempool to hold the mbufs. @@ -135,7 +139,7 @@ detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*. The ``main()`` function also initializes the ports: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Initialize each port. 8< :end-before: >8 End of initializing each port. @@ -145,9 +149,9 @@ Each port is configured using ``port_init()`` function. The Ethernet ports are configured with local settings using the ``rte_eth_dev_configure()`` function and the ``port_conf`` struct. The RSS is enabled so that multiple Rx queues could be used for packet receiving and copying by -multiple CBDMA channels per port: +multiple DMA channels per port: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Configuring port to use RSS for multiple RX queues. 8< :end-before: >8 End of configuring port to use RSS for multiple RX queues. @@ -159,7 +163,7 @@ and ``rte_eth_tx_queue_setup()`` functions. The Ethernet port is then started: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Start device. 
8< :end-before: >8 End of starting device. @@ -168,7 +172,7 @@ The Ethernet port is then started: Finally the Rx port is set in promiscuous mode: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: RX port is set in promiscuous mode. 8< :end-before: >8 End of RX port is set in promiscuous mode. @@ -177,7 +181,7 @@ Finally the Rx port is set in promiscuous mode: After that each port application assigns resources needed. -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Assigning each port resources. 8< :end-before: >8 End of assigning each port resources. @@ -188,30 +192,30 @@ special structures are assigned to each port. If software copy was chosen, application have to assign ring structures for packet exchanging between lcores assigned to ports. -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Assign ring structures for packet exchanging. 8< :end-before: >8 End of assigning ring structures for packet exchanging. :dedent: 0 -When using hardware copy each Rx queue of the port is assigned an -IOAT device (``assign_rawdevs()``) using IOAT Rawdev Driver API -functions: +When using hardware copy each Rx queue of the port is assigned a DMA device +(``assign_dmadevs()``) using DMAdev library API functions: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c - :start-after: Using IOAT rawdev API functions. 8< - :end-before: >8 End of using IOAT rawdev API functions. + :start-after: Using dmadev API functions. 8< + :end-before: >8 End of using dmadev API functions. :dedent: 0 -The initialization of hardware device is done by ``rte_rawdev_configure()`` -function using ``rte_rawdev_info`` struct. 
After configuration the device is -started using ``rte_rawdev_start()`` function. Each of the above operations -is done in ``configure_rawdev_queue()``. +The initialization of the hardware device is done by the ``rte_dmadev_configure()`` and +``rte_dmadev_vchan_setup()`` functions using the ``rte_dmadev_conf`` and +``rte_dmadev_vchan_conf`` structs. After configuration, the device is started +using the ``rte_dmadev_start()`` function. Each of the above operations is done in +``configure_dmadev_queue()``. -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Configuration of device. 8< :end-before: >8 End of configuration of device. @@ -233,7 +237,7 @@ The Lcores Launching Functions As described above, ``main()`` function invokes ``start_forwarding_cores()`` function in order to start processing for each lcore: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Start processing for each lcore. 8< :end-before: >8 End of starting to processfor each lcore. :dedent: 0 @@ -244,7 +248,7 @@ using ``rte_eal_remote_launch()``. The configured ports, their number and number of assigned lcores are stored in user-defined ``rxtx_transmission_config`` struct: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Configuring ports and number of assigned lcores in struct. 8< :end-before: >8 End of configuration of ports and number of assigned lcores. @@ -256,24 +260,24 @@ corresponding to ports and lcores configuration provided by the user. The Lcores Processing Functions ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -For receiving packets on each port, the ``ioat_rx_port()`` function is used. +For receiving packets on each port, the ``dma_rx_port()`` function is used. The function receives packets on each configured Rx queue.
Depending on the -mode the user chose, it will enqueue packets to IOAT rawdev channels and +mode the user chose, it will enqueue packets to DMA channels and then invoke copy process (hardware copy), or perform software copy of each packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c - :start-after: Receive packets on one port and enqueue to IOAT rawdev or rte_ring. 8< - :end-before: >8 End of receive packets on one port and enqueue to IOAT rawdev or rte_ring. + :start-after: Receive packets on one port and enqueue to dmadev or rte_ring. 8< + :end-before: >8 End of receive packets on one port and enqueue to dmadev or rte_ring. :dedent: 0 The packets are received in burst mode using ``rte_eth_rx_burst()`` function. When using hardware copy mode the packets are enqueued in -copying device's buffer using ``ioat_enqueue_packets()`` which calls -``rte_ioat_enqueue_copy()``. When all received packets are in the -buffer the copy operations are started by calling ``rte_ioat_perform_ops()``. -Function ``rte_ioat_enqueue_copy()`` operates on physical address of +copying device's buffer using ``dma_enqueue_packets()`` which calls +``rte_dmadev_copy()``. When all received packets are in the +buffer, the copy operations are started by calling ``rte_dmadev_submit()``. +Function ``rte_dmadev_copy()`` operates on the physical address of the packet. Structure ``rte_mbuf`` contains only physical address to start of the data buffer (``buf_iova``). Thus the address is adjusted by ``addr_offset`` value in order to get the address of ``rearm_data`` @@ -282,25 +286,25 @@ be copied in a single operation. This method can be used because the mbufs are direct mbufs allocated by the apps. If another app uses external buffers, or indirect mbufs, then multiple copy operations must be used. -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +..
literalinclude:: ../../../examples/dma/dmafwd.c :language: c - :start-after: Receive packets on one port and enqueue to IOAT rawdev or rte_ring. 8< - :end-before: >8 End of receive packets on one port and enqueue to IOAT rawdev or rte_ring. + :start-after: Receive packets on one port and enqueue to dmadev or rte_ring. 8< + :end-before: >8 End of receive packets on one port and enqueue to dmadev or rte_ring. :dedent: 0 -All completed copies are processed by ``ioat_tx_port()`` function. When using -hardware copy mode the function invokes ``rte_ioat_completed_ops()`` -on each assigned IOAT channel to gather copied packets. If software copy +All completed copies are processed by ``dma_tx_port()`` function. When using +hardware copy mode the function invokes ``rte_dma_completed()`` +on each assigned DMA channel to gather copied packets. If software copy mode is used the function dequeues copied packets from the rte_ring. Then each packet MAC address is changed if it was enabled. After that copies are sent in burst mode using `` rte_eth_tx_burst()``. -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c - :start-after: Transmit packets from IOAT rawdev/rte_ring for one port. 8< - :end-before: >8 End of transmitting packets from IOAT. + :start-after: Transmit packets from dmadev/rte_ring for one port. 8< + :end-before: >8 End of transmitting packets from dmadev. :dedent: 0 The Packet Copying Functions @@ -312,7 +316,7 @@ metadata from source packet to new mbuf, and then copying a data chunk of source packet. Both memory copies are done using ``rte_memcpy()``: -.. literalinclude:: ../../../examples/ioat/ioatfwd.c +.. literalinclude:: ../../../examples/dma/dmafwd.c :language: c :start-after: Perform packet copy there is a user-defined function. 8< :end-before: >8 End of perform packet copy there is a user-defined function. 
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst index e8db83d3a7..8835dd03ac 100644 --- a/doc/guides/sample_app_ug/index.rst +++ b/doc/guides/sample_app_ug/index.rst @@ -22,7 +22,7 @@ Sample Applications User Guides ip_reassembly kernel_nic_interface keep_alive - ioat + dma l2_forward_crypto l2_forward_job_stats l2_forward_real_virtual diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst index 8ff223b16c..e765f1fd6b 100644 --- a/doc/guides/sample_app_ug/intro.rst +++ b/doc/guides/sample_app_ug/intro.rst @@ -58,8 +58,8 @@ examples are highlighted below. forwarding Graph, or ``l3fwd_graph`` application does forwarding based on IPv4 like a simple router with DPDK Graph framework. -* :doc:`Hardware packet copying`: The Hardware packet copying, - or ``ioatfwd`` application demonstrates how to use IOAT rawdev driver for +* :doc:`Hardware packet copying`: The Hardware packet copying, + or ``dmafwd`` application demonstrates how to use DMAdev library for copying packets between two threads. 
* :doc:`Packet Distributor`: The Packet Distributor diff --git a/examples/ioat/Makefile b/examples/dma/Makefile similarity index 97% rename from examples/ioat/Makefile rename to examples/dma/Makefile index 178fc8778c..59af6478b7 100644 --- a/examples/ioat/Makefile +++ b/examples/dma/Makefile @@ -2,10 +2,10 @@ # Copyright(c) 2019 Intel Corporation # binary name -APP = ioatfwd +APP = dmafwd # all source are stored in SRCS-y -SRCS-y := ioatfwd.c +SRCS-y := dmafwd.c PKGCONF ?= pkg-config diff --git a/examples/ioat/ioatfwd.c b/examples/dma/dmafwd.c similarity index 100% rename from examples/ioat/ioatfwd.c rename to examples/dma/dmafwd.c diff --git a/examples/ioat/meson.build b/examples/dma/meson.build similarity index 94% rename from examples/ioat/meson.build rename to examples/dma/meson.build index c1dd7c9b29..9fdcad660e 100644 --- a/examples/ioat/meson.build +++ b/examples/dma/meson.build @@ -11,5 +11,5 @@ allow_experimental_apis = true deps += ['dmadev'] sources = files( - 'ioatfwd.c', + 'dmafwd.c', ) diff --git a/examples/meson.build b/examples/meson.build index 07e682401b..d50f09db12 100644 --- a/examples/meson.build +++ b/examples/meson.build @@ -12,13 +12,13 @@ all_examples = [ 'bond', 'cmdline', 'distributor', + 'dma', 'ethtool', 'eventdev_pipeline', 'fips_validation', 'flow_classify', 'flow_filtering', 'helloworld', - 'ioat', 'ip_fragmentation', 'ip_pipeline', 'ip_reassembly',