From patchwork Fri Oct 16 04:29:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jiang, Cheng1" X-Patchwork-Id: 81021 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 69933A04DB; Fri, 16 Oct 2020 06:41:54 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3DE671E9BD; Fri, 16 Oct 2020 06:41:47 +0200 (CEST) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id BAD0E1E9BC for ; Fri, 16 Oct 2020 06:41:45 +0200 (CEST) IronPort-SDR: 6v+MZnh2L4NtN7p+3f5bgzyCYnnVXtxSN3DUKvLLGpmtB5ZY9jArKtGQRoQpiLe+RvzMRSkOhP 8k4saN7vQxqg== X-IronPort-AV: E=McAfee;i="6000,8403,9775"; a="163908303" X-IronPort-AV: E=Sophos;i="5.77,381,1596524400"; d="scan'208";a="163908303" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Oct 2020 21:41:43 -0700 IronPort-SDR: /rdhcrTeja/cVznWnHiIpnhwFojuS+JlF/+wbxLpZOhI1sZAge1eXRkZ7RcK0bUBQzjgxRWPPx 2DE3s6+Mr5bw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,381,1596524400"; d="scan'208";a="522107429" Received: from dpdk_jiangcheng.sh.intel.com ([10.67.119.112]) by fmsmga005.fm.intel.com with ESMTP; 15 Oct 2020 21:41:40 -0700 From: Cheng Jiang To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, zhihong.wang@intel.com Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang Date: Fri, 16 Oct 2020 04:29:06 +0000 Message-Id: <20201016042909.27542-2-Cheng1.jiang@intel.com> X-Mailer: git-send-email 2.27.0 In-Reply-To: <20201016042909.27542-1-Cheng1.jiang@intel.com> References: <20201015045428.67373-1-Cheng1.jiang@intel.com> 
<20201016042909.27542-1-Cheng1.jiang@intel.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v6 1/4] example/vhost: add async vhost args parsing function

This patch adds the async vhost driver's argument-parsing function for the
CBDMA channel, the DMA initialization function, and the matching argument
descriptions. The meson build file is changed to fix a dependency problem.
With these arguments, a vhost device can be set to use either CBDMA or the
CPU for enqueue operations, and can be bound to a specific CBDMA channel to
accelerate data copies.

Signed-off-by: Cheng Jiang
---
 examples/vhost/ioat.c      | 117 +++++++++++++++++++++++++++++++++++++
 examples/vhost/main.c      |  37 +++++++++++-
 examples/vhost/main.h      |  14 +++++
 examples/vhost/meson.build |   5 ++
 4 files changed, 172 insertions(+), 1 deletion(-)
 create mode 100644 examples/vhost/ioat.c

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
new file mode 100644
index 000000000..c3158d3c3
--- /dev/null
+++ b/examples/vhost/ioat.c
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include
+#include
+#include
+#include
+
+#include "main.h"
+
+#define MAX_VHOST_DEVICE 1024
+#define IOAT_RING_SIZE 4096
+
+struct dma_info {
+	struct rte_pci_addr addr;
+	uint16_t dev_id;
+	bool is_valid;
+};
+
+struct dma_for_vhost {
+	struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
+	uint16_t nr;
+};
+
+struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
+
+int
+open_ioat(const char *value)
+{
+	struct dma_for_vhost *dma_info = dma_bind;
+	char *input = strndup(value, strlen(value) + 1);
+	char *addrs = input;
+	char *ptrs[2];
+	char *start, *end, *substr;
+	int64_t vid, vring_id;
+	struct rte_ioat_rawdev_config config;
+	struct rte_rawdev_info info = { .dev_private = &config };
+	char name[32];
+	int dev_id;
+	int ret = 0;
+	uint16_t i = 0;
+	char *dma_arg[MAX_VHOST_DEVICE];
+	uint8_t args_nr;
+
+	while (isblank(*addrs))
+		addrs++;
+	if (*addrs == '\0') {
+		ret = -1;
+		goto out;
+	}
+
+	/* process DMA devices within bracket. */
+	addrs++;
+	substr = strtok(addrs, ";]");
+	if (!substr) {
+		ret = -1;
+		goto out;
+	}
+	args_nr = rte_strsplit(substr, strlen(substr),
+			dma_arg, MAX_VHOST_DEVICE, ',');
+	do {
+		char *arg_temp = dma_arg[i];
+		rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
+
+		start = strstr(ptrs[0], "txd");
+		if (start == NULL) {
+			ret = -1;
+			goto out;
+		}
+
+		start += 3;
+		vid = strtol(start, &end, 0);
+		if (end == start) {
+			ret = -1;
+			goto out;
+		}
+
+		vring_id = 0 + VIRTIO_RXQ;
+		if (rte_pci_addr_parse(ptrs[1],
+				&(dma_info + vid)->dmas[vring_id].addr) < 0) {
+			ret = -1;
+			goto out;
+		}
+
+		rte_pci_device_name(&(dma_info + vid)->dmas[vring_id].addr,
+				name, sizeof(name));
+		dev_id = rte_rawdev_get_dev_id(name);
+		if (dev_id == (uint16_t)(-ENODEV) ||
+		    dev_id == (uint16_t)(-EINVAL)) {
+			ret = -1;
+			goto out;
+		}
+
+		if (rte_rawdev_info_get(dev_id, &info, sizeof(config)) < 0 ||
+		    strstr(info.driver_name, "ioat") == NULL) {
+			ret = -1;
+			goto out;
+		}
+
+		(dma_info + vid)->dmas[vring_id].dev_id = dev_id;
+		(dma_info + vid)->dmas[vring_id].is_valid = true;
+		config.ring_size = IOAT_RING_SIZE;
+		config.hdls_disable = true;
+		if (rte_rawdev_configure(dev_id, &info, sizeof(config)) < 0) {
+			ret = -1;
+			goto out;
+		}
+		rte_rawdev_start(dev_id);
+
+		dma_info->nr++;
+		i++;
+	} while (i < args_nr);
+out:
+	free(input);
+	return ret;
+}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 959c0c283..76f5d76cb 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -95,6 +95,10 @@ static int client_mode;

 static int builtin_net_driver;

+static int async_vhost_driver;
+
+static char dma_type[MAX_LONG_OPT_SZ];
+
 /* Specify timeout (in useconds) between retries on RX.
 */
 static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;

 /* Specify the number of retries on RX. */
@@ -181,6 +185,15 @@ struct mbuf_table lcore_tx_queue[RTE_MAX_LCORE];
 				 / US_PER_S * BURST_TX_DRAIN_US)
 #define VLAN_HLEN       4

+static inline int
+open_dma(const char *value)
+{
+	if (strncmp(dma_type, "ioat", 4) == 0)
+		return open_ioat(value);
+
+	return -1;
+}
+
 /*
  * Builds up the correct configuration for VMDQ VLAN pool map
  * according to the pool & queue limits.
@@ -446,7 +459,9 @@ us_vhost_usage(const char *prgname)
 	"		--socket-file: The path of the socket file.\n"
 	"		--tx-csum [0|1] disable/enable TX checksum offload.\n"
 	"		--tso [0|1] disable/enable TCP segment offload.\n"
-	"		--client register a vhost-user socket as client mode.\n",
+	"		--client register a vhost-user socket as client mode.\n"
+	"		--dma-type register dma type for your vhost async driver. For example \"ioat\" for now.\n"
+	"		--dmas register dma channel for specific vhost device.\n",
 	       prgname);
 }

@@ -472,6 +487,8 @@ us_vhost_parse_args(int argc, char **argv)
 		{"tso", required_argument, NULL, 0},
 		{"client", no_argument, &client_mode, 1},
 		{"builtin-net-driver", no_argument, &builtin_net_driver, 1},
+		{"dma-type", required_argument, NULL, 0},
+		{"dmas", required_argument, NULL, 0},
 		{NULL, 0, 0, 0},
 	};

@@ -614,6 +631,22 @@ us_vhost_parse_args(int argc, char **argv)
 				}
 			}

+			if (!strncmp(long_option[option_index].name,
+						"dma-type", MAX_LONG_OPT_SZ)) {
+				strcpy(dma_type, optarg);
+			}
+
+			if (!strncmp(long_option[option_index].name,
+						"dmas", MAX_LONG_OPT_SZ)) {
+				if (open_dma(optarg) == -1) {
+					RTE_LOG(INFO, VHOST_CONFIG,
+						"Wrong DMA args\n");
+					us_vhost_usage(prgname);
+					return -1;
+				}
+				async_vhost_driver = 1;
+			}
+
 			break;

 		/* Invalid option - print options.
 */
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 7cba0edbf..fe83d255b 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -89,4 +89,18 @@ uint16_t vs_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
 uint16_t vs_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
			 struct rte_mempool *mbuf_pool,
			 struct rte_mbuf **pkts, uint16_t count);
+
+#ifdef RTE_ARCH_X86
+
+int open_ioat(const char *value);
+
+#else
+
+static int open_ioat(const char *value __rte_unused)
+{
+	return -1;
+}
+
+#endif
+
 #endif /* _MAIN_H_ */
diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
index 872d51153..2a03f9779 100644
--- a/examples/vhost/meson.build
+++ b/examples/vhost/meson.build
@@ -14,3 +14,8 @@ allow_experimental_apis = true
 sources = files(
 	'main.c', 'virtio_net.c'
 )
+
+if dpdk_conf.has('RTE_ARCH_X86')
+	deps += 'rawdev_ioat'
+	sources += files('ioat.c')
+endif

From patchwork Fri Oct 16 04:29:07 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81022
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang
Date: Fri, 16 Oct 2020 04:29:07 +0000
Message-Id: <20201016042909.27542-3-Cheng1.jiang@intel.com>
In-Reply-To: <20201016042909.27542-1-Cheng1.jiang@intel.com>
References: <20201015045428.67373-1-Cheng1.jiang@intel.com>
 <20201016042909.27542-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v6 2/4] example/vhost: add support for vhost async data path

This patch implements the vhost DMA operation callbacks for the CBDMA PMD
and adds the vhost async data path to the vhost sample. By providing a
callback implementation for CBDMA, the vswitch can leverage IOAT to
accelerate the vhost async data path.
Signed-off-by: Cheng Jiang
---
 examples/vhost/ioat.c | 97 +++++++++++++++++++++++++++++++++++++++++++
 examples/vhost/main.c | 57 ++++++++++++++++++++++++-
 examples/vhost/main.h | 12 ++++++
 3 files changed, 165 insertions(+), 1 deletion(-)

diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
index c3158d3c3..e764b2634 100644
--- a/examples/vhost/ioat.c
+++ b/examples/vhost/ioat.c
@@ -6,11 +6,13 @@
 #include
 #include
 #include
+#include

 #include "main.h"

 #define MAX_VHOST_DEVICE 1024
 #define IOAT_RING_SIZE 4096
+#define MAX_ENQUEUED_SIZE 256

 struct dma_info {
 	struct rte_pci_addr addr;
@@ -25,6 +27,15 @@ struct dma_for_vhost {

 struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];

+struct packet_tracker {
+	unsigned short size_track[MAX_ENQUEUED_SIZE];
+	unsigned short next_read;
+	unsigned short next_write;
+	unsigned short last_remain;
+};
+
+struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];
+
 int
 open_ioat(const char *value)
 {
@@ -115,3 +126,89 @@ open_ioat(const char *value)
 	free(input);
 	return ret;
 }
+
+uint32_t
+ioat_transfer_data_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_desc *descs,
+		struct rte_vhost_async_status *opaque_data, uint16_t count)
+{
+	int ret;
+	uint32_t i_desc;
+	int dev_id = dma_bind[vid].dmas[queue_id * 2 + VIRTIO_RXQ].dev_id;
+	struct rte_vhost_iov_iter *src = NULL;
+	struct rte_vhost_iov_iter *dst = NULL;
+	unsigned long i_seg;
+	unsigned short mask = MAX_ENQUEUED_SIZE - 1;
+	unsigned short write = cb_tracker[dev_id].next_write;
+
+	if (likely(!opaque_data)) {
+		for (i_desc = 0; i_desc < count; i_desc++) {
+			src = descs[i_desc].src;
+			dst = descs[i_desc].dst;
+			i_seg = 0;
+			while (i_seg < src->nr_segs) {
+				ret = rte_ioat_enqueue_copy(dev_id,
+					(uintptr_t)(src->iov[i_seg].iov_base)
+						+ src->offset,
+					(uintptr_t)(dst->iov[i_seg].iov_base)
+						+ dst->offset,
+					src->iov[i_seg].iov_len,
+					0,
+					0);
+				if (ret != 1)
+					break;
+				i_seg++;
+			}
+			write &= mask;
+			cb_tracker[dev_id].size_track[write] = i_seg;
+			write++;
+		}
+	} else {
+		/*
Opaque data is not supported */
+		return -1;
+	}
+	/* ring the doorbell */
+	rte_ioat_perform_ops(dev_id);
+	cb_tracker[dev_id].next_write = write;
+	return i_desc;
+}
+
+uint32_t
+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_status *opaque_data,
+		uint16_t max_packets)
+{
+	if (!opaque_data) {
+		uintptr_t dump[255];
+		unsigned short n_seg;
+		unsigned short read, write;
+		unsigned short nb_packet = 0;
+		unsigned short mask = MAX_ENQUEUED_SIZE - 1;
+		unsigned short i;
+		int dev_id = dma_bind[vid].dmas[queue_id * 2
+				+ VIRTIO_RXQ].dev_id;
+		n_seg = rte_ioat_completed_ops(dev_id, 255, dump, dump);
+		n_seg += cb_tracker[dev_id].last_remain;
+		if (!n_seg)
+			return 0;
+		read = cb_tracker[dev_id].next_read;
+		write = cb_tracker[dev_id].next_write;
+		for (i = 0; i < max_packets; i++) {
+			read &= mask;
+			if (read == write)
+				break;
+			if (n_seg >= cb_tracker[dev_id].size_track[read]) {
+				n_seg -= cb_tracker[dev_id].size_track[read];
+				read++;
+				nb_packet++;
+			} else {
+				break;
+			}
+		}
+		cb_tracker[dev_id].next_read = read;
+		cb_tracker[dev_id].last_remain = n_seg;
+		return nb_packet;
+	}
+	/* Opaque data is not supported */
+	return -1;
+}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 76f5d76cb..2469fcf21 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -804,9 +804,22 @@ virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
	    struct rte_mbuf *m)
 {
 	uint16_t ret;
+	struct rte_mbuf *m_cpl[1];

 	if (builtin_net_driver) {
 		ret = vs_enqueue_pkts(dst_vdev, VIRTIO_RXQ, &m, 1);
+	} else if (async_vhost_driver) {
+		ret = rte_vhost_submit_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ,
+						&m, 1);
+
+		if (likely(ret))
+			dst_vdev->nr_async_pkts++;
+
+		while (likely(dst_vdev->nr_async_pkts)) {
+			if (rte_vhost_poll_enqueue_completed(dst_vdev->vid,
+					VIRTIO_RXQ, m_cpl, 1))
+				dst_vdev->nr_async_pkts--;
+		}
 	} else {
 		ret = rte_vhost_enqueue_burst(dst_vdev->vid, VIRTIO_RXQ, &m, 1);
 	}
@@ -1055,6 +1068,19 @@
drain_mbuf_table(struct mbuf_table *tx_q)
 	}
 }

+static __rte_always_inline void
+complete_async_pkts(struct vhost_dev *vdev, uint16_t qid)
+{
+	struct rte_mbuf *p_cpl[MAX_PKT_BURST];
+	uint16_t complete_count;
+
+	complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
+					qid, p_cpl, MAX_PKT_BURST);
+	vdev->nr_async_pkts -= complete_count;
+	if (complete_count)
+		free_pkts(p_cpl, complete_count);
+}
+
 static __rte_always_inline void
 drain_eth_rx(struct vhost_dev *vdev)
 {
@@ -1063,6 +1089,10 @@ drain_eth_rx(struct vhost_dev *vdev)

 	rx_count = rte_eth_rx_burst(ports[0], vdev->vmdq_rx_q,
				    pkts, MAX_PKT_BURST);
+
+	while (likely(vdev->nr_async_pkts))
+		complete_async_pkts(vdev, VIRTIO_RXQ);
+
 	if (!rx_count)
 		return;

@@ -1087,16 +1117,22 @@ drain_eth_rx(struct vhost_dev *vdev)
 	if (builtin_net_driver) {
 		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
						pkts, rx_count);
+	} else if (async_vhost_driver) {
+		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
+					VIRTIO_RXQ, pkts, rx_count);
+		vdev->nr_async_pkts += enqueue_count;
 	} else {
 		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
						pkts, rx_count);
 	}
+
 	if (enable_stats) {
 		rte_atomic64_add(&vdev->stats.rx_total_atomic, rx_count);
 		rte_atomic64_add(&vdev->stats.rx_atomic, enqueue_count);
 	}

-	free_pkts(pkts, rx_count);
+	if (!async_vhost_driver)
+		free_pkts(pkts, rx_count);
 }

 static __rte_always_inline void
@@ -1243,6 +1279,9 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);

+	if (async_vhost_driver)
+		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
+
 	rte_free(vdev);
 }

@@ -1257,6 +1296,12 @@ new_device(int vid)
 	uint32_t device_num_min = num_devices;
 	struct vhost_dev *vdev;

+	struct rte_vhost_async_channel_ops channel_ops = {
+		.transfer_data = ioat_transfer_data_cb,
+		.check_completed_copies = ioat_check_completed_copies_cb
+	};
+	struct rte_vhost_async_features f;
+
 	vdev = rte_zmalloc("vhost device", sizeof(*vdev), RTE_CACHE_LINE_SIZE);
 	if (vdev == NULL) {
 		RTE_LOG(INFO, VHOST_DATA,
@@ -1297,6 +1342,13 @@ new_device(int vid)
 		"(%d) device has been added to data core %d\n",
 		vid, vdev->coreid);

+	if (async_vhost_driver) {
+		f.async_inorder = 1;
+		f.async_threshold = 256;
+		return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
+				f.intval, &channel_ops);
+	}
+
 	return 0;
 }

@@ -1535,6 +1587,9 @@ main(int argc, char *argv[])
 	/* Register vhost user driver to handle vhost messages. */
 	for (i = 0; i < nb_sockets; i++) {
 		char *file = socket_files + i * PATH_MAX;
+		if (async_vhost_driver)
+			flags = flags | RTE_VHOST_USER_ASYNC_COPY;
+
 		ret = rte_vhost_driver_register(file, flags);
 		if (ret != 0) {
 			unregister_drivers(i);
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index fe83d255b..5a628473e 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -8,6 +8,7 @@

 #include
 #include
+#include

 /* Macros for printing using RTE_LOG */
 #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
@@ -51,6 +52,7 @@ struct vhost_dev {
 	uint64_t features;
 	size_t hdr_len;
 	uint16_t nr_vrings;
+	uint16_t nr_async_pkts;
 	struct rte_vhost_memory *mem;
 	struct device_statistics stats;
 	TAILQ_ENTRY(vhost_dev) global_vdev_entry;
@@ -103,4 +105,14 @@ static int open_ioat(const char *value __rte_unused)

 #endif

+uint32_t
+ioat_transfer_data_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_desc *descs,
+		struct rte_vhost_async_status *opaque_data, uint16_t count);
+
+uint32_t
+ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
+		struct rte_vhost_async_status *opaque_data,
+		uint16_t max_packets);
+
 #endif /* _MAIN_H_ */

From patchwork Fri Oct 16 04:29:08 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81023
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang
Date: Fri, 16 Oct 2020 04:29:08 +0000
Message-Id: <20201016042909.27542-4-Cheng1.jiang@intel.com>
In-Reply-To: <20201016042909.27542-1-Cheng1.jiang@intel.com>
References: <20201015045428.67373-1-Cheng1.jiang@intel.com>
 <20201016042909.27542-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v6 3/4] doc: update vhost sample doc for vhost async data path

Add vhost async driver arguments information for the vhost async data path in
vhost sample application.

Signed-off-by: Cheng Jiang
---
 doc/guides/sample_app_ug/vhost.rst | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index b7ed4f8bd..0f4f70945 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -162,6 +162,17 @@ enabled and cannot be disabled.
 A very simple vhost-user net driver which demonstrates how to use the generic
 vhost APIs will be used when this option is given. It is disabled by default.

+**--dma-type**
+This parameter is used to specify the DMA type for the async vhost-user net
+driver, which demonstrates how to use the async vhost APIs. It is used in
+combination with --dmas.
+
+**--dmas**
+This parameter is used to specify the assigned DMA device of a vhost device.
+The async vhost-user net driver will be used if --dmas is set. For example,
+--dmas [txd0@00:04.0,txd1@00:04.1] means that DMA channel 00:04.0 is used for
+vhost device 0 enqueue operations and DMA channel 00:04.1 is used for vhost
+device 1 enqueue operations.
+
 Common Issues
 -------------

From patchwork Fri Oct 16 04:29:09 2020
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 81024
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, zhihong.wang@intel.com
Cc: dev@dpdk.org, patrick.fu@intel.com, Cheng Jiang
Date: Fri, 16 Oct 2020 04:29:09 +0000
Message-Id: <20201016042909.27542-5-Cheng1.jiang@intel.com>
In-Reply-To: <20201016042909.27542-1-Cheng1.jiang@intel.com>
References: <20201015045428.67373-1-Cheng1.jiang@intel.com>
 <20201016042909.27542-1-Cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v6 4/4] doc: update release notes for vhost sample

Add release notes for the vhost async data path support in the vhost sample.

Signed-off-by: Cheng Jiang
---
 doc/guides/rel_notes/release_20_11.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f8686a50d..812ca8834 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -214,6 +214,12 @@ New Features
   * Added new ``RTE_ACL_CLASSIFY_AVX512X32`` vector implementation,
     which can process up to 32 flows in parallel. Requires AVX512 support.

+* **Updated vhost sample application.**
+
+  Added vhost asynchronous APIs support, which demonstrates how the
+  application can leverage an IOAT DMA channel with the vhost asynchronous
+  APIs.
+  See the :doc:`../sample_app_ug/vhost` for more details.
+
 Removed Items
 -------------