From patchwork Tue Oct 20 11:20:55 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81583
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang
Date: Tue, 20 Oct 2020 11:20:55 +0000
Message-Id: <20201020112058.77168-2-Cheng1.jiang@intel.com>
In-Reply-To: <20201020112058.77168-1-Cheng1.jiang@intel.com>
References: <20201016042909.27542-1-Cheng1.jiang@intel.com>
 <20201020112058.77168-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v7 1/4] example/vhost: add async vhost args parsing
 function

This patch adds the argument-parsing function for the async vhost driver's
CBDMA channels, the DMA initialization function, and the matching argument
descriptions. The meson build file is changed to fix a dependency problem.
With these arguments a vhost device can be set to use either CBDMA or the
CPU for enqueue operations, and can be bound to a specific CBDMA channel to
accelerate data copies.
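As a reading aid for reviewers: the --dmas value is a bracketed list of
txd<N>@<PCI address> pairs, where N selects the vhost device and the PCI
address names the CBDMA channel bound to its enqueue path. Below is a minimal
sketch of how one such token can be split and validated with the same DPDK
calls the new open_ioat() uses; the helper name parse_dma_token() is invented
for this illustration and is not part of the patch.

#include <stdlib.h>
#include <string.h>
#include <rte_pci.h>
#include <rte_string_fns.h>

/* Parse one "txd<N>@<PCI BDF>" token, e.g. "txd0@00:04.0". */
static int
parse_dma_token(char *token, long *vid, struct rte_pci_addr *addr)
{
	char *ptrs[2];
	char *start, *end;

	/* split the token at '@' into "txd<N>" and the PCI address */
	if (rte_strsplit(token, strlen(token), ptrs, 2, '@') != 2)
		return -1;

	start = strstr(ptrs[0], "txd");
	if (start == NULL)
		return -1;

	/* the digits after "txd" select the vhost device */
	*vid = strtol(start + 3, &end, 0);
	if (end == start + 3)
		return -1;

	/* the remainder is the CBDMA channel's PCI address */
	return rte_pci_addr_parse(ptrs[1], addr);
}

Per the documentation added later in this series, "txd0@00:04.0" binds the
enqueue path of vhost device 0 to the DMA channel at PCI address 00:04.0.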
Signed-off-by: Cheng Jiang
---
 examples/vhost/ioat.c      | 117 +++++++++++++++++++++++++++++++++++++
 examples/vhost/main.c      |  37 +++++++++++-
 examples/vhost/main.h      |  14 +++++
 examples/vhost/meson.build |   5 ++
 4 files changed, 172 insertions(+), 1 deletion(-)
 create mode 100644 examples/vhost/ioat.c

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
new file mode 100644
index 000000000..c3158d3c3
--- /dev/null
+++ b/examples/vhost/ioat.c
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include
+#include
+#include
+#include
+
+#include "main.h"
+
+#define MAX_VHOST_DEVICE 1024
+#define IOAT_RING_SIZE 4096
+
+struct dma_info {
+	struct rte_pci_addr addr;
+	uint16_t dev_id;
+	bool is_valid;
+};
+
+struct dma_for_vhost {
+	struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
+	uint16_t nr;
+};
+
+struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
+
+int
+open_ioat(const char *value)
+{
+	struct dma_for_vhost *dma_info = dma_bind;
+	char *input = strndup(value, strlen(value) + 1);
+	char *addrs = input;
+	char *ptrs[2];
+	char *start, *end, *substr;
+	int64_t vid, vring_id;
+	struct rte_ioat_rawdev_config config;
+	struct rte_rawdev_info info = { .dev_private = &config };
+	char name[32];
+	int dev_id;
+	int ret = 0;
+	uint16_t i = 0;
+	char *dma_arg[MAX_VHOST_DEVICE];
+	uint8_t args_nr;
+
+	while (isblank(*addrs))
+		addrs++;
+	if (*addrs == '\0') {
+		ret = -1;
+		goto out;
+	}
+
+	/* process DMA devices within bracket. */
+	addrs++;
+	substr = strtok(addrs, ";]");
+	if (!substr) {
+		ret = -1;
+		goto out;
+	}
+	args_nr = rte_strsplit(substr, strlen(substr),
+			dma_arg, MAX_VHOST_DEVICE, ',');
+	do {
+		char *arg_temp = dma_arg[i];
+		rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
+
+		start = strstr(ptrs[0], "txd");
+		if (start == NULL) {
+			ret = -1;
+			goto out;
+		}
+
+		start += 3;
+		vid = strtol(start, &end, 0);
+		if (end == start) {
+			ret = -1;
+			goto out;
+		}
+
+		vring_id = 0 + VIRTIO_RXQ;
+		if (rte_pci_addr_parse(ptrs[1],
+				&(dma_info + vid)->dmas[vring_id].addr) < 0) {
+			ret = -1;
+			goto out;
+		}
+
+		rte_pci_device_name(&(dma_info + vid)->dmas[vring_id].addr,
+				name, sizeof(name));
+		dev_id = rte_rawdev_get_dev_id(name);
+		if (dev_id == (uint16_t)(-ENODEV) ||
+		    dev_id == (uint16_t)(-EINVAL)) {
+			ret = -1;
+			goto out;
+		}
+
+		if (rte_rawdev_info_get(dev_id, &info, sizeof(config)) < 0 ||
+		    strstr(info.driver_name, "ioat") == NULL) {
+			ret = -1;
+			goto out;
+		}
+
+		(dma_info + vid)->dmas[vring_id].dev_id = dev_id;
+		(dma_info + vid)->dmas[vring_id].is_valid = true;
+		config.ring_size = IOAT_RING_SIZE;
+		config.hdls_disable = true;
+		if (rte_rawdev_configure(dev_id, &info, sizeof(config)) < 0) {
+			ret = -1;
+			goto out;
+		}
+		rte_rawdev_start(dev_id);
+
+		dma_info->nr++;
+		i++;
+	} while (i < args_nr);
+out:
+	free(input);
+	return ret;
+}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 959c0c283..76f5d76cb 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -95,6 +95,10 @@ static int client_mode;
 
 static int builtin_net_driver;
 
+static int async_vhost_driver;
+
+static char dma_type[MAX_LONG_OPT_SZ];
+
 /* Specify timeout (in useconds) between retries on RX. */
 static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
 /* Specify the number of retries on RX. */
@@ -181,6 +185,15 @@ struct mbuf_table lcore_tx_queue[RTE_MAX_LCORE];
 				 / US_PER_S * BURST_TX_DRAIN_US)
 #define VLAN_HLEN       4
 
+static inline int
+open_dma(const char *value)
+{
+	if (strncmp(dma_type, "ioat", 4) == 0)
+		return open_ioat(value);
+
+	return -1;
+}
+
 /*
  * Builds up the correct configuration for VMDQ VLAN pool map
  * according to the pool & queue limits.
@@ -446,7 +459,9 @@ us_vhost_usage(const char *prgname)
 	"		--socket-file: The path of the socket file.\n"
 	"		--tx-csum [0|1] disable/enable TX checksum offload.\n"
 	"		--tso [0|1] disable/enable TCP segment offload.\n"
-	"		--client register a vhost-user socket as client mode.\n",
+	"		--client register a vhost-user socket as client mode.\n"
+	"		--dma-type register dma type for your vhost async driver. For example \"ioat\" for now.\n"
+	"		--dmas register dma channel for specific vhost device.\n",
 	       prgname);
 }
 
@@ -472,6 +487,8 @@ us_vhost_parse_args(int argc, char **argv)
 		{"tso", required_argument, NULL, 0},
 		{"client", no_argument, &client_mode, 1},
 		{"builtin-net-driver", no_argument, &builtin_net_driver, 1},
+		{"dma-type", required_argument, NULL, 0},
+		{"dmas", required_argument, NULL, 0},
 		{NULL, 0, 0, 0},
 	};
 
@@ -614,6 +631,24 @@ us_vhost_parse_args(int argc, char **argv)
 				}
 			}
 
+			if (!strncmp(long_option[option_index].name,
+						"dma-type", MAX_LONG_OPT_SZ)) {
+				strcpy(dma_type, optarg);
+			}
+
+			if (!strncmp(long_option[option_index].name,
+						"dmas", MAX_LONG_OPT_SZ)) {
+				if (open_dma(optarg) == -1) {
+					if (*optarg == -1) {
+						RTE_LOG(INFO, VHOST_CONFIG,
+							"Wrong DMA args\n");
+						us_vhost_usage(prgname);
+					}
+					return -1;
+				}
+				async_vhost_driver = 1;
+			}
+
 			break;
 
 		/* Invalid option - print options. */
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 7cba0edbf..fe83d255b 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -89,4 +89,18 @@ uint16_t vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
 uint16_t vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
 			 struct rte_mempool *mbuf_pool,
 			 struct rte_mbuf **pkts, uint16_t count);
+
+#ifdef RTE_ARCH_X86
+
+int open_ioat(const char *value);
+
+#else
+
+static int open_ioat(const char *value __rte_unused)
+{
+	return -1;
+}
+
+#endif
+
 #endif /* _MAIN_H_ */
diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
index 872d51153..2a03f9779 100644
--- a/examples/vhost/meson.build
+++ b/examples/vhost/meson.build
@@ -14,3 +14,8 @@ allow_experimental_apis = true
 sources = files(
 	'main.c', 'virtio_net.c'
 )
+
+if dpdk_conf.has('RTE_ARCH_X86')
+	deps += 'rawdev_ioat'
+	sources += files('ioat.c')
+endif
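Between the argument parsing above and the data path added in the next patch,
each parsed PCI address has to be turned into a started rawdev. The sketch
below condenses the per-channel bring-up that open_ioat() performs into one
helper; it is illustrative only, it assumes the ioat rawdev PMD is present,
and setup_ioat_channel() is an invented name that does not appear in the
patch.

#include <errno.h>
#include <stdbool.h>
#include <string.h>
#include <rte_pci.h>
#include <rte_rawdev.h>
#include <rte_ioat_rawdev.h>

#define IOAT_RING_SIZE 4096	/* same ring size the sample uses */

static int
setup_ioat_channel(const struct rte_pci_addr *addr)
{
	struct rte_ioat_rawdev_config config;
	struct rte_rawdev_info info = { .dev_private = &config };
	char name[32];
	int dev_id;

	/* map the PCI address to the rawdev registered by the ioat PMD */
	rte_pci_device_name(addr, name, sizeof(name));
	dev_id = rte_rawdev_get_dev_id(name);
	if (dev_id == (uint16_t)(-ENODEV) || dev_id == (uint16_t)(-EINVAL))
		return -1;

	/* make sure this rawdev is really driven by an ioat driver */
	if (rte_rawdev_info_get(dev_id, &info, sizeof(config)) < 0 ||
	    strstr(info.driver_name, "ioat") == NULL)
		return -1;

	/* completion handles are unused; configure the copy ring and start */
	config.ring_size = IOAT_RING_SIZE;
	config.hdls_disable = true;
	if (rte_rawdev_configure(dev_id, &info, sizeof(config)) < 0)
		return -1;

	return rte_rawdev_start(dev_id) == 0 ? dev_id : -1;
}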
From patchwork Tue Oct 20 11:20:56 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81584
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang
Date: Tue, 20 Oct 2020 11:20:56 +0000
Message-Id: <20201020112058.77168-3-Cheng1.jiang@intel.com>
In-Reply-To: <20201020112058.77168-1-Cheng1.jiang@intel.com>
References: <20201016042909.27542-1-Cheng1.jiang@intel.com>
 <20201020112058.77168-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v7 2/4] example/vhost: add support for vhost async
 data path

This patch implements the vhost DMA operation callbacks for the CBDMA PMD
and adds the vhost async data path to the vhost sample. By providing a
callback implementation for CBDMA, the vswitch can leverage IOAT to
accelerate the vhost async data path.

Signed-off-by: Cheng Jiang
---
 examples/vhost/ioat.c | 100 ++++++++++++++++++++++++++++++++++++++
 examples/vhost/main.c |  57 +++++++++++++++++++++-
 examples/vhost/main.h |  12 +++++
 3 files changed, 168 insertions(+), 1 deletion(-)

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
index c3158d3c3..fa503c3db 100644
--- a/examples/vhost/ioat.c
+++ b/examples/vhost/ioat.c
@@ -6,11 +6,13 @@
 #include
 #include
 #include
+#include
 
 #include "main.h"
 
 #define MAX_VHOST_DEVICE 1024
 #define IOAT_RING_SIZE 4096
+#define MAX_ENQUEUED_SIZE 256
 
 struct dma_info {
 	struct rte_pci_addr addr;
@@ -25,6 +27,15 @@ struct dma_for_vhost {
 
 struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
 
+struct packet_tracker {
+	unsigned short size_track[MAX_ENQUEUED_SIZE];
+	unsigned short next_read;
+	unsigned short next_write;
+	unsigned short last_remain;
+};
+
+struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];
+
 int
 open_ioat(const char *value)
 {
@@ -115,3 +126,92 @@ open_ioat(const char *value)
 	free(input);
 	return ret;
 }
+
+uint32_t
+ioat_transfer_data_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_desc *descs,
+		struct rte_vhost_async_status *opaque_data, uint16_t count)
+{
+	uint32_t i_desc;
+	int dev_id = dma_bind[vid].dmas[queue_id * 2 + VIRTIO_RXQ].dev_id;
+	struct rte_vhost_iov_iter *src = NULL;
+	struct rte_vhost_iov_iter *dst = NULL;
+	unsigned long i_seg;
+	unsigned short mask = MAX_ENQUEUED_SIZE - 1;
+	unsigned short write = cb_tracker[dev_id].next_write;
+
+	if (!opaque_data) {
+		for (i_desc = 0; i_desc < count; i_desc++) {
+			src = descs[i_desc].src;
+			dst = descs[i_desc].dst;
+			i_seg = 0;
+			while (i_seg < src->nr_segs) {
+				/*
+				 * TODO: Assuming that the ring space of the
+				 * IOAT device is large enough, so there is no
+				 * error here, and the actual error handling
+				 * will be added later.
+				 */
+				rte_ioat_enqueue_copy(dev_id,
+					(uintptr_t)(src->iov[i_seg].iov_base)
+						+ src->offset,
+					(uintptr_t)(dst->iov[i_seg].iov_base)
+						+ dst->offset,
+					src->iov[i_seg].iov_len,
+					0,
+					0);
+				i_seg++;
+			}
+			write &= mask;
+			cb_tracker[dev_id].size_track[write] = i_seg;
+			write++;
+		}
+	} else {
+		/* Opaque data is not supported */
+		return -1;
+	}
+	/* ring the doorbell */
+	rte_ioat_perform_ops(dev_id);
+	cb_tracker[dev_id].next_write = write;
+	return i_desc;
+}
+
+uint32_t
+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_status *opaque_data,
+		uint16_t max_packets)
+{
+	if (!opaque_data) {
+		uintptr_t dump[255];
+		unsigned short n_seg;
+		unsigned short read, write;
+		unsigned short nb_packet = 0;
+		unsigned short mask = MAX_ENQUEUED_SIZE - 1;
+		unsigned short i;
+		int dev_id = dma_bind[vid].dmas[queue_id * 2
+				+ VIRTIO_RXQ].dev_id;
+		n_seg = rte_ioat_completed_ops(dev_id, 255, dump, dump);
+		n_seg += cb_tracker[dev_id].last_remain;
+		if (!n_seg)
+			return 0;
+		read = cb_tracker[dev_id].next_read;
+		write = cb_tracker[dev_id].next_write;
+		for (i = 0; i < max_packets; i++) {
+			read &= mask;
+			if (read == write)
+				break;
+			if (n_seg >= cb_tracker[dev_id].size_track[read]) {
+				n_seg -= cb_tracker[dev_id].size_track[read];
+				read++;
+				nb_packet++;
+			} else {
+				break;
+			}
+		}
+		cb_tracker[dev_id].next_read = read;
+		cb_tracker[dev_id].last_remain = n_seg;
+		return nb_packet;
+	}
+	/* Opaque data is not supported */
+	return -1;
+}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 76f5d76cb..2469fcf21 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -804,9 +804,22 @@ virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 	    struct rte_mbuf *m)
 {
 	uint16_t ret;
+	struct rte_mbuf *m_cpl[1];
 
 	if (builtin_net_driver) {
 		ret = vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, &m, 1);
+	} else if (async_vhost_driver) {
+		ret = rte_vhost_submit_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
+						&m, 1);
+
+		if (likely(ret))
+			dst_vdev->nr_async_pkts++;
+
+		while (likely(dst_vdev->nr_async_pkts)) {
+			if (rte_vhost_poll_enqueue_completed(dst_vdev->vid,
+					VIRTIO_RXQ, m_cpl, 1))
+				dst_vdev->nr_async_pkts--;
+		}
 	} else {
 		ret = rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ, &m, 1);
 	}
@@ -1055,6 +1068,19 @@ drain_mbuf_table(struct mbuf_table *tx_q)
 	}
 }
 
+static __rte_always_inline void
+complete_async_pkts(struct vhost_dev *vdev, uint16_t qid)
+{
+	struct rte_mbuf *p_cpl[MAX_PKT_BURST];
+	uint16_t complete_count;
+
+	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
+						qid, p_cpl, MAX_PKT_BURST);
+	vdev->nr_async_pkts -= complete_count;
+	if (complete_count)
+		free_pkts(p_cpl, complete_count);
+}
+
 static __rte_always_inline void
 drain_eth_rx(struct vhost_dev *vdev)
 {
@@ -1063,6 +1089,10 @@ drain_eth_rx(struct vhost_dev *vdev)
 
 	rx_count = rte_eth_rx_burst(ports[0], vdev->vmdq_rx_q,
 				    pkts, MAX_PKT_BURST);
+
+	while (likely(vdev->nr_async_pkts))
+		complete_async_pkts(vdev, VIRTIO_RXQ);
+
 	if (!rx_count)
 		return;
 
@@ -1087,16 +1117,22 @@ drain_eth_rx(struct vhost_dev *vdev)
 	if (builtin_net_driver) {
 		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
 						pkts, rx_count);
+	} else if (async_vhost_driver) {
+		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
+					VIRTIO_RXQ, pkts, rx_count);
+		vdev->nr_async_pkts += enqueue_count;
 	} else {
 		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
 						pkts, rx_count);
 	}
+
 	if (enable_stats) {
 		rte_atomic64_add(&vdev->stats.rx_total_atomic, rx_count);
 		rte_atomic64_add(&vdev->stats.rx_atomic,
 				enqueue_count);
 	}
 
-	free_pkts(pkts, rx_count);
+	if (!async_vhost_driver)
+		free_pkts(pkts, rx_count);
 }
 
 static __rte_always_inline void
@@ -1243,6 +1279,9 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);
 
+	if (async_vhost_driver)
+		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
+
 	rte_free(vdev);
 }
 
@@ -1257,6 +1296,12 @@ new_device(int vid)
 	uint32_t device_num_min = num_devices;
 	struct vhost_dev *vdev;
 
+	struct rte_vhost_async_channel_ops channel_ops = {
+		.transfer_data = ioat_transfer_data_cb,
+		.check_completed_copies = ioat_check_completed_copies_cb
+	};
+	struct rte_vhost_async_features f;
+
 	vdev = rte_zmalloc("vhost device", sizeof(*vdev), RTE_CACHE_LINE_SIZE);
 	if (vdev == NULL) {
 		RTE_LOG(INFO, VHOST_DATA,
@@ -1297,6 +1342,13 @@ new_device(int vid)
 		"(%d) device has been added to data core %d\n",
 		vid, vdev->coreid);
 
+	if (async_vhost_driver) {
+		f.async_inorder = 1;
+		f.async_threshold = 256;
+		return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
+			f.intval, &channel_ops);
+	}
+
 	return 0;
 }
 
@@ -1535,6 +1587,9 @@ main(int argc, char *argv[])
 	/* Register vhost user driver to handle vhost messages. */
 	for (i = 0; i < nb_sockets; i++) {
 		char *file = socket_files + i * PATH_MAX;
+		if (async_vhost_driver)
+			flags = flags | RTE_VHOST_USER_ASYNC_COPY;
+
 		ret = rte_vhost_driver_register(file, flags);
 		if (ret != 0) {
 			unregister_drivers(i);
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index fe83d255b..5a628473e 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -8,6 +8,7 @@
 
 #include
 #include
+#include
 
 /* Macros for printing using RTE_LOG */
 #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
@@ -51,6 +52,7 @@ struct vhost_dev {
 	uint64_t features;
 	size_t hdr_len;
 	uint16_t nr_vrings;
+	uint16_t nr_async_pkts;
 	struct rte_vhost_memory *mem;
 	struct device_statistics stats;
 	TAILQ_ENTRY(vhost_dev) global_vdev_entry;
@@ -103,4 +105,14 @@ static int open_ioat(const char *value __rte_unused)
 
 #endif
 
+uint32_t
+ioat_transfer_data_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_desc *descs,
+		struct rte_vhost_async_status *opaque_data, uint16_t count);
+
+uint32_t
+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_status *opaque_data,
+		uint16_t max_packets);
+
 #endif /* _MAIN_H_ */
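Taken together, this patch plugs the CBDMA callbacks into the generic vhost
async API. The sketch below condenses the registration and enqueue flow that
the new_device() and drain_eth_rx() changes implement; it assumes the DPDK
20.11 rte_vhost_async.h API and the two callbacks added in ioat.c above.
register_async_channel() and async_enqueue() are invented names, and the
completion burst size of 32 is arbitrary.

#include <rte_mbuf.h>
#include <rte_vhost_async.h>

#define VIRTIO_RXQ 0	/* as defined in the sample's main.h */

/* callbacks provided by examples/vhost/ioat.c in this patch */
uint32_t ioat_transfer_data_cb(int vid, uint16_t queue_id,
		struct rte_vhost_async_desc *descs,
		struct rte_vhost_async_status *opaque_data, uint16_t count);
uint32_t ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
		struct rte_vhost_async_status *opaque_data,
		uint16_t max_packets);

/* called once per new vhost device, as new_device() does */
static int
register_async_channel(int vid)
{
	struct rte_vhost_async_channel_ops ops = {
		.transfer_data = ioat_transfer_data_cb,
		.check_completed_copies = ioat_check_completed_copies_cb,
	};
	struct rte_vhost_async_features f;

	f.async_inorder = 1;	/* completions reported in submission order */
	f.async_threshold = 256;	/* shorter packets stay on the CPU copy path */
	return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
						f.intval, &ops);
}

/* submit a burst through DMA and reap finished copies, as drain_eth_rx() does */
static uint16_t
async_enqueue(int vid, struct rte_mbuf **pkts, uint16_t count)
{
	struct rte_mbuf *cpl[32];
	uint16_t sent, done;

	sent = rte_vhost_submit_enqueue_burst(vid, VIRTIO_RXQ, pkts, count);
	/* mbufs stay owned by the application until completion is polled */
	done = rte_vhost_poll_enqueue_completed(vid, VIRTIO_RXQ, cpl, 32);
	if (done)
		rte_pktmbuf_free_bulk(cpl, done);
	return sent;
}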
From patchwork Tue Oct 20 11:20:57 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81585
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang
Date: Tue, 20 Oct 2020 11:20:57 +0000
Message-Id: <20201020112058.77168-4-Cheng1.jiang@intel.com>
In-Reply-To: <20201020112058.77168-1-Cheng1.jiang@intel.com>
References: <20201016042909.27542-1-Cheng1.jiang@intel.com>
 <20201020112058.77168-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v7 3/4] doc: update vhost sample doc for vhost
 async data path

Add information about the vhost async driver arguments for the vhost async
data path to the vhost sample application documentation.

Signed-off-by: Cheng Jiang
---
 doc/guides/sample_app_ug/vhost.rst | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index b7ed4f8bd..0f4f70945 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -162,6 +162,17 @@ enabled and cannot be disabled.
 
 A very simple vhost-user net driver which demonstrates how to use the generic
 vhost APIs will be used when this option is given. It is disabled by default.
 
+**--dma-type**
+This parameter is used to specify DMA type for async vhost-user net driver which
+demonstrates how to use the async vhost APIs. It's used in combination with dmas.
+
+**--dmas**
+This parameter is used to specify the assigned DMA device of a vhost device.
+Async vhost-user net driver will be used if --dmas is set. For example
+--dmas [txd0@00:04.0,txd1@00:04.1] means use DMA channel 00:04.0 for vhost
+device 0 enqueue operation and use DMA channel 00:04.1 for vhost device 1
+enqueue operation.
+
 Common Issues
 -------------
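Following the option format described above, a hypothetical invocation of the
vhost sample could look like the line below. The cores, port mask, socket path
and PCI address are placeholders chosen for illustration; only --dma-type and
--dmas come from this series.

    ./dpdk-vhost -l 2-3 -n 4 -- -p 0x1 --socket-file /tmp/sock0 \
        --dma-type ioat --dmas [txd0@00:04.0]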
From patchwork Tue Oct 20 11:20:58 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81586
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang
Date: Tue, 20 Oct 2020 11:20:58 +0000
Message-Id: <20201020112058.77168-5-Cheng1.jiang@intel.com>
In-Reply-To: <20201020112058.77168-1-Cheng1.jiang@intel.com>
References: <20201016042909.27542-1-Cheng1.jiang@intel.com>
 <20201020112058.77168-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v7 4/4] doc: update release notes for vhost sample

Add release notes for the vhost async data path support in the vhost sample.

Signed-off-by: Cheng Jiang
---
 doc/guides/rel_notes/release_20_11.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f8686a50d..812ca8834 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -214,6 +214,12 @@ New Features
   * Added new ``RTE_ACL_CLASSIFY_AVX512X32`` vector implementation,
     which can process up to 32 flows in parallel. Requires AVX512 support.
 
+* **Updated vhost sample application.**
+
+  Added vhost asynchronous APIs support, which demonstrated how the application
+  leverage IOAT DMA channel with vhost asynchronous APIs.
+  See the :doc:`../sample_app_ug/vhost` for more details.
+
 Removed Items
 -------------