From patchwork Tue May 5 15:54:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 69768 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 072D6A04B8; Tue, 5 May 2020 17:55:09 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E51241D64F; Tue, 5 May 2020 17:55:08 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id CD8B31D628 for ; Tue, 5 May 2020 17:55:07 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@mellanox.com) with ESMTPS (AES256-SHA encrypted); 5 May 2020 18:55:01 +0300 Received: from pegasus25.mtr.labs.mlnx. (pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 045Fstgl022938; Tue, 5 May 2020 18:55:01 +0300 From: Matan Azrad To: dev@dpdk.org Cc: Viacheslav Ovsiienko , Shahaf Shuler , Maxime Coquelin Date: Tue, 5 May 2020 15:54:41 +0000 Message-Id: <1588694084-381748-2-git-send-email-matan@mellanox.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1588694084-381748-1-git-send-email-matan@mellanox.com> References: <1585826793-28709-1-git-send-email-matan@mellanox.com> <1588694084-381748-1-git-send-email-matan@mellanox.com> Subject: [dpdk-dev] [PATCH v2 1/4] vhost: introduce operation to get vDPA queue stats X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The vDPA device offloads the entire datapath of the vhost device to the HW device. 
To expose traffic information to the user, this patch introduces three new APIs: to get traffic statistics, to get the statistics names and to reset the statistics, per virtio queue. The statistics are taken directly from the vDPA driver managing the HW device and can be different for each vendor driver. Signed-off-by: Matan Azrad --- doc/guides/rel_notes/release_20_05.rst | 5 ++ doc/guides/vdpadevs/features/default.ini | 1 + doc/guides/vdpadevs/features_overview.rst | 3 + lib/librte_vhost/rte_vdpa.h | 115 +++++++++++++++++++++++++++++- lib/librte_vhost/rte_vhost_version.map | 3 + lib/librte_vhost/vdpa.c | 47 ++++++++++++ 6 files changed, 173 insertions(+), 1 deletion(-) diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst index a5ba8a4..b7d8e86 100644 --- a/doc/guides/rel_notes/release_20_05.rst +++ b/doc/guides/rel_notes/release_20_05.rst @@ -213,6 +213,11 @@ New Features * Added IPsec inbound load-distribution support for ipsec-secgw application using NIC load distribution feature(Flow Director). +* **Added vDPA device APIs to query virtio queue statistics.** + + Three new APIs have been added to query virtio queue statistics, to get + their names and to reset them for a vDPA device. + Removed Items ------------- diff --git a/doc/guides/vdpadevs/features/default.ini b/doc/guides/vdpadevs/features/default.ini index 518e4f1..2c122a3 100644 --- a/doc/guides/vdpadevs/features/default.ini +++ b/doc/guides/vdpadevs/features/default.ini @@ -37,6 +37,7 @@ proto rarp = proto reply ack = proto host notifier = proto pagefault = +queue statistics = BSD nic_uio = Linux VFIO = Other kdrv = diff --git a/doc/guides/vdpadevs/features_overview.rst b/doc/guides/vdpadevs/features_overview.rst index eb7eb3b..930bc87 100644 --- a/doc/guides/vdpadevs/features_overview.rst +++ b/doc/guides/vdpadevs/features_overview.rst @@ -96,6 +96,9 @@ proto host notifier proto pagefault Slave expose page-fault FD for migration process. 
+queue statistics + Support virtio queue statistics query. + BSD nic_uio BSD ``nic_uio`` module supported. diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h index 3c400ee..ecb3d91 100644 --- a/lib/librte_vhost/rte_vdpa.h +++ b/lib/librte_vhost/rte_vdpa.h @@ -37,6 +37,34 @@ struct rte_vdpa_dev_addr { }; }; +/** Maximum name length for statistics counters */ +#define RTE_VDPA_STATS_NAME_SIZE 64 + +/** + * A vDPA device statistic structure + * + * This structure is used by rte_vdpa_stats_get() to provide + * statistics from the HW vDPA device. + * + * It maps a name id, corresponding to an index in the array returned + * by rte_vdpa_get_stats_names, to a statistic value. + */ +struct rte_vdpa_stat { + uint64_t id; /**< The index in stats name array */ + uint64_t value; /**< The statistic counter value */ +}; + +/** + * A name element for statistics + * + * An array of this structure is returned by rte_vdpa_get_stats_names + * It lists the names of extended statistics for a PMD. 
The rte_vdpa_stat + * structure references these names by their array index + */ +struct rte_vdpa_stat_name { + char name[RTE_VDPA_STATS_NAME_SIZE]; /**< The statistic name */ +}; + /** * vdpa device operations */ @@ -73,8 +101,19 @@ struct rte_vdpa_dev_ops { int (*get_notify_area)(int vid, int qid, uint64_t *offset, uint64_t *size); + /** Get statistics name */ + int (*get_stats_names)(int did, struct rte_vdpa_stat_name *stats_names, + unsigned int size); + + /** Get statistics of the queue */ + int (*get_stats)(int did, int qid, struct rte_vdpa_stat *stats, + unsigned int n); + + /** Reset statistics of the queue */ + int (*reset_stats)(int did, int qid); + /** Reserved for future extension */ - void *reserved[5]; + void *reserved[2]; }; /** @@ -200,4 +239,78 @@ struct rte_vdpa_device * __rte_experimental int rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Retrieve names of statistics of a vDPA device. + * + * There is an assumption that 'stat_names' and 'stats' arrays are matched + * by array index: stats_names[i].name => stats[i].value + * + * And the array index is same with id field of 'struct rte_vdpa_stat': + * stats[i].id == i + * + * @param did + * device id + * @param stats_names + * array of at least size elements to be filled. + * If set to NULL, the function returns the required number of elements. + * @param size + * The number of elements in stats_names array. + * @return + * A negative value on error, otherwise the number of entries filled in the + * stats name array. + */ +__rte_experimental +int +rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names, + unsigned int size); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Retrieve statistics of a vDPA device. 
+ * + * There is an assumption that 'stat_names' and 'stats' arrays are matched + * by array index: stats_names[i].name => stats[i].value + * + * And the array index is same with id field of 'struct rte_vdpa_stat': + * stats[i].id == i + * + * @param did + * device id + * @param qid + * queue id + * @param stats + * A pointer to a table of structure of type rte_vdpa_stat to be filled with + * device statistics ids and values. + * @param n + * The number of elements in stats array. + * @return + * A negative value on error, otherwise the number of entries filled in the + * stats table. + */ +__rte_experimental +int +rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats, + unsigned int n); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Reset statistics of a vDPA device. + * + * @param did + * device id + * @param qid + * queue id + * @return + * 0 on success, a negative value on error. + */ +__rte_experimental +int +rte_vdpa_reset_stats(int did, uint16_t qid); #endif /* _RTE_VDPA_H_ */ diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map index 051d08c..3a2f7df 100644 --- a/lib/librte_vhost/rte_vhost_version.map +++ b/lib/librte_vhost/rte_vhost_version.map @@ -38,6 +38,9 @@ EXPERIMENTAL { rte_vdpa_find_device_id; rte_vdpa_get_device; rte_vdpa_get_device_num; + rte_vdpa_get_stats_names; + rte_vdpa_get_stats; + rte_vdpa_reset_stats; rte_vhost_driver_attach_vdpa_device; rte_vhost_driver_detach_vdpa_device; rte_vhost_driver_get_vdpa_device_id; diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c index b2b2a10..1f5cdcd 100644 --- a/lib/librte_vhost/vdpa.c +++ b/lib/librte_vhost/vdpa.c @@ -227,3 +227,50 @@ struct rte_vdpa_device * free_ind_table(idesc); return -1; } + +int +rte_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names, + unsigned int size) +{ + struct rte_vdpa_device *vdpa_dev; + + vdpa_dev = rte_vdpa_get_device(did); + if (!vdpa_dev) + 
return -ENODEV; + + RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats_names, -ENOTSUP); + + return vdpa_dev->ops->get_stats_names(did, stats_names, size); +} + +int +rte_vdpa_get_stats(int did, uint16_t qid, struct rte_vdpa_stat *stats, + unsigned int n) +{ + struct rte_vdpa_device *vdpa_dev; + + vdpa_dev = rte_vdpa_get_device(did); + if (!vdpa_dev) + return -ENODEV; + + if (!stats || !n) + return -EINVAL; + + RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->get_stats, -ENOTSUP); + + return vdpa_dev->ops->get_stats(did, qid, stats, n); +} + +int +rte_vdpa_reset_stats(int did, uint16_t qid) +{ + struct rte_vdpa_device *vdpa_dev; + + vdpa_dev = rte_vdpa_get_device(did); + if (!vdpa_dev) + return -ENODEV; + + RTE_FUNC_PTR_OR_ERR_RET(vdpa_dev->ops->reset_stats, -ENOTSUP); + + return vdpa_dev->ops->reset_stats(did, qid); +} From patchwork Tue May 5 15:54:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 69769 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 949FEA04B8; Tue, 5 May 2020 17:55:20 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 92BA31D657; Tue, 5 May 2020 17:55:14 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id E691B1D656 for ; Tue, 5 May 2020 17:55:12 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from matan@mellanox.com) with ESMTPS (AES256-SHA encrypted); 5 May 2020 18:55:07 +0300 Received: from pegasus25.mtr.labs.mlnx. 
(pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 045Fstgm022938; Tue, 5 May 2020 18:55:07 +0300 From: Matan Azrad To: dev@dpdk.org Cc: Viacheslav Ovsiienko , Shahaf Shuler , Maxime Coquelin Date: Tue, 5 May 2020 15:54:42 +0000 Message-Id: <1588694084-381748-3-git-send-email-matan@mellanox.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1588694084-381748-1-git-send-email-matan@mellanox.com> References: <1585826793-28709-1-git-send-email-matan@mellanox.com> <1588694084-381748-1-git-send-email-matan@mellanox.com> Subject: [dpdk-dev] [PATCH v2 2/4] common/mlx5: support DevX virtq stats operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add DevX APIs to create and query virtio queue statistics from the HW. The following counters are supported by the HW per virtio queue: received_desc, completed_desc, error_cqes, bad_desc_errors, exceed_max_chain and invalid_buffer. 
Signed-off-by: Matan Azrad Acked-by: Viacheslav Ovsiienko --- drivers/common/mlx5/mlx5_devx_cmds.c | 73 +++++++++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 43 +++++++++++++++ drivers/common/mlx5/mlx5_prm.h | 26 ++++++++- drivers/common/mlx5/rte_common_mlx5_version.map | 7 +++ 4 files changed, 148 insertions(+), 1 deletion(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 67c8a8c..c8fc4bb 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -460,6 +460,9 @@ struct mlx5_devx_obj * attr->vdpa.valid = !!(MLX5_GET64(cmd_hca_cap, hcattr, general_obj_types) & MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q); + attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr, + general_obj_types) & + MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS); if (attr->qos.sup) { MLX5_SET(query_hca_cap_in, in, op_mod, MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP | @@ -1255,6 +1258,7 @@ struct mlx5_devx_obj * MLX5_SET(virtio_q, virtctx, umem_3_id, attr->umems[2].id); MLX5_SET(virtio_q, virtctx, umem_3_size, attr->umems[2].size); MLX5_SET64(virtio_q, virtctx, umem_3_offset, attr->umems[2].offset); + MLX5_SET(virtio_q, virtctx, counter_set_id, attr->counters_obj_id); MLX5_SET(virtio_net_q, virtq, tisn_or_qpn, attr->tis_id); virtq_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); @@ -1529,3 +1533,72 @@ struct mlx5_devx_obj * } return ret; } + +struct mlx5_devx_obj * +mlx5_devx_cmd_create_virtio_q_counters(void *ctx) +{ + uint32_t in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + struct mlx5_devx_obj *couners_obj = rte_zmalloc(__func__, + sizeof(*couners_obj), 0); + void *hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr); + + if (!couners_obj) { + DRV_LOG(ERR, "Failed to allocate virtio queue counters data."); + rte_errno = ENOMEM; + return NULL; + } + MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode, + 
MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type, + MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS); + couners_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, + sizeof(out)); + if (!couners_obj->obj) { + rte_errno = errno; + DRV_LOG(ERR, "Failed to create virtio queue counters Obj using" + " DevX."); + rte_free(couners_obj); + return NULL; + } + couners_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + return couners_obj; +} + +int +mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj, + struct mlx5_devx_virtio_q_couners_attr *attr) +{ + uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {0}; + void *hdr = MLX5_ADDR_OF(query_virtio_q_counters_out, in, hdr); + void *virtio_q_counters = MLX5_ADDR_OF(query_virtio_q_counters_out, out, + virtio_q_counters); + int ret; + + MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode, + MLX5_CMD_OP_QUERY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type, + MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS); + MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, couners_obj->id); + ret = mlx5_glue->devx_obj_query(couners_obj->obj, in, sizeof(in), out, + sizeof(out)); + if (ret) { + DRV_LOG(ERR, "Failed to query virtio q counters using DevX."); + rte_errno = errno; + return -errno; + } + attr->received_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters, + received_desc); + attr->completed_desc = MLX5_GET64(virtio_q_counters, virtio_q_counters, + completed_desc); + attr->error_cqes = MLX5_GET(virtio_q_counters, virtio_q_counters, + error_cqes); + attr->bad_desc_errors = MLX5_GET(virtio_q_counters, virtio_q_counters, + bad_desc_errors); + attr->exceed_max_chain = MLX5_GET(virtio_q_counters, virtio_q_counters, + exceed_max_chain); + attr->invalid_buffer = MLX5_GET(virtio_q_counters, virtio_q_counters, + invalid_buffer); + return ret; +} diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h 
b/drivers/common/mlx5/mlx5_devx_cmds.h index f7802e6..60db6da 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -64,6 +64,7 @@ struct mlx5_hca_vdpa_attr { uint32_t event_mode:3; uint32_t log_doorbell_stride:5; uint32_t log_doorbell_bar_size:5; + uint32_t queue_counters_valid:1; uint32_t max_num_virtio_queues; struct { uint32_t a; @@ -270,6 +271,7 @@ struct mlx5_devx_virtq_attr { uint32_t qp_id; uint32_t queue_index; uint32_t tis_id; + uint32_t counters_obj_id; uint64_t dirty_bitmap_addr; uint64_t type; uint64_t desc_addr; @@ -298,6 +300,15 @@ struct mlx5_devx_qp_attr { uint64_t wq_umem_offset; }; +struct mlx5_devx_virtio_q_couners_attr { + uint64_t received_desc; + uint64_t completed_desc; + uint32_t error_cqes; + uint32_t bad_desc_errors; + uint32_t exceed_max_chain; + uint32_t invalid_buffer; +}; + /* mlx5_devx_cmds.c */ struct mlx5_devx_obj *mlx5_devx_cmd_flow_counter_alloc(struct ibv_context *ctx, @@ -348,5 +359,37 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op, uint32_t remote_qp_id); int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt, struct mlx5_devx_rqt_attr *rqt_attr); +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Create virtio queue counters object DevX API. + * + * @param[in] ctx + * Device context. + + * @return + * The DevX object created, NULL otherwise and rte_errno is set. + */ +__rte_experimental +struct mlx5_devx_obj *mlx5_devx_cmd_create_virtio_q_counters(void *ctx); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Query virtio queue counters object using DevX API. + * + * @param[in] couners_obj + * Pointer to virtq object structure. + * @param [in/out] attr + * Pointer to virtio queue counters attributes structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +__rte_experimental +int mlx5_devx_cmd_query_virtio_q_counters(struct mlx5_devx_obj *couners_obj, + struct mlx5_devx_virtio_q_couners_attr *attr); #endif /* RTE_PMD_MLX5_DEVX_CMDS_H_ */ diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 4ab1c75..eefaf26 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -947,6 +947,7 @@ enum { enum { MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd), + MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c), }; enum { @@ -2003,6 +2004,7 @@ struct mlx5_ifc_create_cq_in_bits { enum { MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d, + MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c, }; struct mlx5_ifc_general_obj_in_cmd_hdr_bits { @@ -2021,6 +2023,27 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits { u8 reserved_at_60[0x20]; }; +struct mlx5_ifc_virtio_q_counters_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x40]; + u8 received_desc[0x40]; + u8 completed_desc[0x40]; + u8 error_cqes[0x20]; + u8 bad_desc_errors[0x20]; + u8 exceed_max_chain[0x20]; + u8 invalid_buffer[0x20]; + u8 reserved_at_180[0x50]; +}; + +struct mlx5_ifc_create_virtio_q_counters_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; +}; + +struct mlx5_ifc_query_virtio_q_counters_out_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; +}; enum { MLX5_VIRTQ_STATE_INIT = 0, MLX5_VIRTQ_STATE_RDY = 1, @@ -2061,7 +2084,8 @@ struct mlx5_ifc_virtio_q_bits { u8 umem_3_id[0x20]; u8 umem_3_size[0x20]; u8 umem_3_offset[0x40]; - u8 reserved_at_300[0x100]; + u8 counter_set_id[0x20]; + u8 reserved_at_320[0xe0]; }; struct mlx5_ifc_virtio_net_q_bits { diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map index b58a378..8c2d224 100644 --- a/drivers/common/mlx5/rte_common_mlx5_version.map +++ 
b/drivers/common/mlx5/rte_common_mlx5_version.map @@ -76,3 +76,10 @@ EXPERIMENTAL { mlx5_mr_create_primary; mlx5_mr_flush_local_cache; }; + +EXPERIMENTAL { + global: + + mlx5_devx_cmd_create_virtio_q_counters; + mlx5_devx_cmd_query_virtio_q_counters; +}; From patchwork Tue May 5 15:54:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 69770 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3DE15A04B8; Tue, 5 May 2020 17:55:32 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BAB081D656; Tue, 5 May 2020 17:55:18 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id C2A241D628 for ; Tue, 5 May 2020 17:55:16 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE2 (envelope-from matan@mellanox.com) with ESMTPS (AES256-SHA encrypted); 5 May 2020 18:55:15 +0300 Received: from pegasus25.mtr.labs.mlnx. 
(pegasus25.mtr.labs.mlnx [10.210.16.10]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 045Fstgn022938; Tue, 5 May 2020 18:55:15 +0300 From: Matan Azrad To: dev@dpdk.org Cc: Viacheslav Ovsiienko , Shahaf Shuler , Maxime Coquelin Date: Tue, 5 May 2020 15:54:43 +0000 Message-Id: <1588694084-381748-4-git-send-email-matan@mellanox.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1588694084-381748-1-git-send-email-matan@mellanox.com> References: <1585826793-28709-1-git-send-email-matan@mellanox.com> <1588694084-381748-1-git-send-email-matan@mellanox.com> Subject: [dpdk-dev] [PATCH v2 3/4] vdpa/mlx5: support virtio queue statistics get X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add support for statistics operations. A DevX counter object is allocated per virtq to manage the virtq statistics. The counter object is allocated before the virtq is created and destroyed after it, so the statistics are valid only during the lifetime of the virtq. 
Signed-off-by: Matan Azrad --- doc/guides/vdpadevs/features/mlx5.ini | 1 + drivers/vdpa/mlx5/mlx5_vdpa.c | 85 +++++++++++++++++++++++++++++++++ drivers/vdpa/mlx5/mlx5_vdpa.h | 45 ++++++++++++++++++ drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 90 +++++++++++++++++++++++++++++++++++ 4 files changed, 221 insertions(+) diff --git a/doc/guides/vdpadevs/features/mlx5.ini b/doc/guides/vdpadevs/features/mlx5.ini index 1da9c1b..788d4e0 100644 --- a/doc/guides/vdpadevs/features/mlx5.ini +++ b/doc/guides/vdpadevs/features/mlx5.ini @@ -17,6 +17,7 @@ packed = Y proto mq = Y proto log shmfd = Y proto host notifier = Y +queue statistics = Y Other kdrv = Y ARMv8 = Y Power8 = Y diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index 1113d6c..a80e3f4 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -274,6 +275,85 @@ return 0; } +static int +mlx5_vdpa_get_stats_names(int did, struct rte_vdpa_stat_name *stats_names, + unsigned int size) +{ + static const char *mlx5_vdpa_stats_names[MLX5_VDPA_STATS_MAX] = { + "received_descriptors", + "completed_descriptors", + "bad descriptor errors", + "exceed max chain", + "invalid buffer", + "completion errors", + }; + struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did); + unsigned int i; + + if (priv == NULL) { + DRV_LOG(ERR, "Invalid device id: %d.", did); + return -ENODEV; + } + if (!stats_names) + return MLX5_VDPA_STATS_MAX; + size = RTE_MIN(size, (unsigned int)MLX5_VDPA_STATS_MAX); + for (i = 0; i < size; ++i) + strlcpy(stats_names[i].name, mlx5_vdpa_stats_names[i], + RTE_VDPA_STATS_NAME_SIZE); + return size; +} + +static int +mlx5_vdpa_get_stats(int did, int qid, struct rte_vdpa_stat *stats, + unsigned int n) +{ + struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did); + + if (priv == NULL) { + DRV_LOG(ERR, "Invalid device id: %d.", did); + return -ENODEV; + } + if (!priv->configured) { + 
DRV_LOG(ERR, "Device %d was not configured.", did); + return -ENODATA; + } + if (qid >= (int)priv->nr_virtqs) { + DRV_LOG(ERR, "Too big vring id: %d.", qid); + return -E2BIG; + } + if (!priv->caps.queue_counters_valid) { + DRV_LOG(ERR, "Virtq statistics is not supported for device %d.", + did); + return -ENOTSUP; + } + return mlx5_vdpa_virtq_stats_get(priv, qid, stats, n); +} + +static int +mlx5_vdpa_reset_stats(int did, int qid) +{ + struct mlx5_vdpa_priv *priv = mlx5_vdpa_find_priv_resource_by_did(did); + + if (priv == NULL) { + DRV_LOG(ERR, "Invalid device id: %d.", did); + return -ENODEV; + } + if (!priv->configured) { + DRV_LOG(ERR, "Device %d was not configured.", did); + return -ENODATA; + } + if (qid >= (int)priv->nr_virtqs) { + DRV_LOG(ERR, "Too big vring id: %d.", qid); + return -E2BIG; + } + if (!priv->caps.queue_counters_valid) { + DRV_LOG(ERR, "Virtq statistics is not supported for device %d.", + did); + return -ENOTSUP; + } + return mlx5_vdpa_virtq_stats_reset(priv, qid); +} + static struct rte_vdpa_dev_ops mlx5_vdpa_ops = { .get_queue_num = mlx5_vdpa_get_queue_num, .get_features = mlx5_vdpa_get_vdpa_features, @@ -286,6 +366,9 @@ .get_vfio_group_fd = NULL, .get_vfio_device_fd = mlx5_vdpa_get_device_fd, .get_notify_area = mlx5_vdpa_get_notify_area, + .get_stats_names = mlx5_vdpa_get_stats_names, + .get_stats = mlx5_vdpa_get_stats, + .reset_stats = mlx5_vdpa_reset_stats, }; static struct ibv_device * @@ -489,6 +572,8 @@ rte_errno = ENOTSUP; goto error; } + if (!attr.vdpa.queue_counters_valid) + DRV_LOG(DEBUG, "No capability to support virtq statistics."); priv = rte_zmalloc("mlx5 vDPA device private", sizeof(*priv) + sizeof(struct mlx5_vdpa_virtq) * attr.vdpa.max_num_virtio_queues * 2, diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h index fcc216a..80b4c4b 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/mlx5_vdpa.h @@ -76,6 +76,7 @@ struct mlx5_vdpa_virtq { uint16_t vq_size; struct mlx5_vdpa_priv *priv; 
struct mlx5_devx_obj *virtq; + struct mlx5_devx_obj *counters; struct mlx5_vdpa_event_qp eqp; struct { struct mlx5dv_devx_umem *obj; @@ -83,6 +84,7 @@ struct mlx5_vdpa_virtq { uint32_t size; } umems[3]; struct rte_intr_handle intr_handle; + struct mlx5_devx_virtio_q_couners_attr reset; }; struct mlx5_vdpa_steer { @@ -127,6 +129,16 @@ struct mlx5_vdpa_priv { struct mlx5_vdpa_virtq virtqs[]; }; +enum { + MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS, + MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS, + MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS, + MLX5_VDPA_STATS_EXCEED_MAX_CHAIN, + MLX5_VDPA_STATS_INVALID_BUFFER, + MLX5_VDPA_STATS_COMPLETION_ERRORS, + MLX5_VDPA_STATS_MAX +}; + /* * Check whether virtq is for traffic receive. * According to VIRTIO_NET Spec the virtqueues index identity its type by: @@ -352,4 +364,37 @@ int mlx5_vdpa_dirty_bitmap_set(struct mlx5_vdpa_priv *priv, uint64_t log_base, */ int mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index); +/** + * Get virtq statistics. + * + * @param[in] priv + * The vdpa driver private structure. + * @param[in] qid + * The virtq index. + * @param stats + * The virtq statistics array to fill. + * @param n + * The number of elements in @p stats array. + * + * @return + * A negative value on error, otherwise the number of entries filled in the + * @p stats array. + */ +int +mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid, + struct rte_vdpa_stat *stats, unsigned int n); + +/** + * Reset virtq statistics. + * + * @param[in] priv + * The vdpa driver private structure. + * @param[in] qid + * The virtq index. + * + * @return + * A negative value on error, otherwise 0. 
+ */ +int +mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid); #endif /* RTE_PMD_MLX5_VDPA_H_ */ diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c index bd48460..d57ed59 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c @@ -72,6 +72,11 @@ rte_free(virtq->umems[i].buf); } memset(&virtq->umems, 0, sizeof(virtq->umems)); + if (virtq->counters) { + claim_zero(mlx5_devx_cmd_destroy(virtq->counters)); + virtq->counters = NULL; + } + memset(&virtq->reset, 0, sizeof(virtq->reset)); if (virtq->eqp.fw_qp) mlx5_vdpa_event_qp_destroy(&virtq->eqp); return 0; @@ -205,6 +210,16 @@ DRV_LOG(INFO, "Virtq %d is, for sure, working by poll mode, no" " need event QPs and event mechanism.", index); } + if (priv->caps.queue_counters_valid) { + virtq->counters = mlx5_devx_cmd_create_virtio_q_counters + (priv->ctx); + if (!virtq->counters) { + DRV_LOG(ERR, "Failed to create virtq couners for virtq" + " %d.", index); + goto error; + } + attr.counters_obj_id = virtq->counters->id; + } /* Setup 3 UMEMs for each virtq. 
*/ for (i = 0; i < RTE_DIM(virtq->umems); ++i) { virtq->umems[i].size = priv->caps.umems[i].a * vq.size + @@ -455,3 +470,78 @@ } return 0; } + +int +mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid, + struct rte_vdpa_stat *stats, unsigned int n) +{ + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid]; + struct mlx5_devx_virtio_q_couners_attr attr = {0}; + int ret; + + if (!virtq->virtq) + DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq " + "synchronization failed.", qid); + MLX5_ASSERT(virtq->counters); + ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr); + if (ret) { + DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid); + return ret; + } + ret = (int)RTE_MIN(n, (unsigned int)MLX5_VDPA_STATS_MAX); + if (ret == MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS) + return ret; + stats[MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS] = (struct rte_vdpa_stat) { + .id = MLX5_VDPA_STATS_RECEIVED_DESCRIPTORS, + .value = attr.received_desc - virtq->reset.received_desc, + }; + if (ret == MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS) + return ret; + stats[MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS] = (struct rte_vdpa_stat) { + .id = MLX5_VDPA_STATS_COMPLETED_DESCRIPTORS, + .value = attr.completed_desc - virtq->reset.completed_desc, + }; + if (ret == MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS) + return ret; + stats[MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS] = (struct rte_vdpa_stat) { + .id = MLX5_VDPA_STATS_BAD_DESCRIPTOR_ERRORS, + .value = attr.bad_desc_errors - virtq->reset.bad_desc_errors, + }; + if (ret == MLX5_VDPA_STATS_EXCEED_MAX_CHAIN) + return ret; + stats[MLX5_VDPA_STATS_EXCEED_MAX_CHAIN] = (struct rte_vdpa_stat) { + .id = MLX5_VDPA_STATS_EXCEED_MAX_CHAIN, + .value = attr.exceed_max_chain - virtq->reset.exceed_max_chain, + }; + if (ret == MLX5_VDPA_STATS_INVALID_BUFFER) + return ret; + stats[MLX5_VDPA_STATS_INVALID_BUFFER] = (struct rte_vdpa_stat) { + .id = MLX5_VDPA_STATS_INVALID_BUFFER, + .value = attr.invalid_buffer - virtq->reset.invalid_buffer, + }; + if 
(ret == MLX5_VDPA_STATS_COMPLETION_ERRORS) + return ret; + stats[MLX5_VDPA_STATS_COMPLETION_ERRORS] = (struct rte_vdpa_stat) { + .id = MLX5_VDPA_STATS_COMPLETION_ERRORS, + .value = attr.error_cqes - virtq->reset.error_cqes, + }; + return ret; +} + +int +mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid) +{ + struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid]; + int ret; + + if (!virtq->virtq) + DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq " + "synchronization failed.", qid); + MLX5_ASSERT(virtq->counters); + ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, + &virtq->reset); + if (ret) + DRV_LOG(ERR, "Failed to read virtq %d reset stats from HW.", + qid); + return ret; +} From patchwork Tue May 5 15:54:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matan Azrad X-Patchwork-Id: 69771 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 06240A04B8; Tue, 5 May 2020 17:55:43 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id EF3A01D64D; Tue, 5 May 2020 17:55:33 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 1B5C81D649 for ; Tue, 5 May 2020 17:55:32 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE2 (envelope-from matan@mellanox.com) with ESMTPS (AES256-SHA encrypted); 5 May 2020 18:55:26 +0300 Received: from pegasus25.mtr.labs.mlnx. 
From: Matan Azrad
To: dev@dpdk.org
Cc: Viacheslav Ovsiienko, Shahaf Shuler, Maxime Coquelin
Date: Tue, 5 May 2020 15:54:44 +0000
Message-Id: <1588694084-381748-5-git-send-email-matan@mellanox.com>
In-Reply-To: <1588694084-381748-1-git-send-email-matan@mellanox.com>
References: <1585826793-28709-1-git-send-email-matan@mellanox.com> <1588694084-381748-1-git-send-email-matan@mellanox.com>
Subject: [dpdk-dev] [PATCH v2 4/4] examples/vdpa: add statistics show command

A new vDPA driver feature was added to query virtq statistics from the HW.
Use this feature to show the HW queue statistics for the virtqs.

Command description: stats X Y
X is the device ID.
Y is the queue ID; use Y=0xffff to show the statistics of all the virtio
queues of device X.

Signed-off-by: Matan Azrad
---
 doc/guides/sample_app_ug/vdpa.rst |   3 +-
 examples/vdpa/main.c              | 119 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index 745f196..d66a724 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -44,7 +44,8 @@ where
   1. help: show help message
   2. list: list all available vdpa devices
   3. create: create a new vdpa port with socket file and vdpa device address
-  4. quit: unregister vhost driver and exit the application
+  4. stats: show statistics of virtio queues
+  5. quit: unregister vhost driver and exit the application
 
 Take IFCVF driver for example:

diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index d9a9112..cdef715 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #define MAX_PATH_LEN 128
@@ -29,6 +30,9 @@ struct vdpa_port {
 	int did;
 	int vid;
 	uint64_t flags;
+	int stats_n;
+	struct rte_vdpa_stat_name *stats_names;
+	struct rte_vdpa_stat *stats;
 };
 
 static struct vdpa_port vports[MAX_VDPA_SAMPLE_PORTS];
@@ -199,6 +203,10 @@ struct vdpa_port {
 		RTE_LOG(ERR, VDPA,
 			"Fail to unregister vhost driver for %s.\n",
 			socket_path);
+	if (vport->stats_names) {
+		rte_free(vport->stats_names);
+		vport->stats_names = NULL;
+	}
 }
 
 static void
@@ -240,6 +248,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
 		" help : Show interactive instructions.\n"
 		" list : list all available vdpa devices.\n"
 		" create : create a new vdpa port.\n"
+		" stats : show statistics of virtio queue, 0xffff for all.\n"
 		" quit : exit vdpa sample app.\n"
 	);
 }
@@ -363,6 +372,115 @@ static void cmd_create_vdpa_port_parsed(void *parsed_result,
 	},
 };
 
+/* *** STATS *** */
+struct cmd_stats_result {
+	cmdline_fixed_string_t stats;
+	uint16_t did;
+	uint16_t qid;
+};
+
+static void cmd_device_stats_parsed(void *parsed_result, struct cmdline *cl,
+				    __attribute__((unused)) void *data)
+{
+	struct cmd_stats_result *res = parsed_result;
+	struct rte_vdpa_device *vdev = rte_vdpa_get_device(res->did);
+	struct vdpa_port *vport = NULL;
+	uint32_t first, last;
+	int i;
+
+	if (!vdev) {
+		RTE_LOG(ERR, VDPA, "Invalid device id %" PRIu16 ".\n",
+			res->did);
+		return;
+	}
+	for (i = 0; i < RTE_MIN(MAX_VDPA_SAMPLE_PORTS, dev_total); i++) {
+		if (vports[i].did == res->did) {
+			vport = &vports[i];
+			break;
+		}
+	}
+	if (!vport) {
+		RTE_LOG(ERR, VDPA, "Device id %" PRIu16 " was not created.\n",
+			res->did);
+		return;
+	}
+	if (res->qid == 0xFFFF) {
+		first = 0;
+		last =
+			rte_vhost_get_vring_num(vport->vid);
+		if (last == 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get num of actual virtqs"
+				" for device id %d.\n", (int)res->did);
+			return;
+		}
+		last--;
+	} else {
+		first = res->qid;
+		last = res->qid;
+	}
+	if (!vport->stats_names) {
+		vport->stats_n = rte_vdpa_get_stats_names(vport->did, NULL, 0);
+		if (vport->stats_n <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get names number of "
+				"device %d stats.\n", (int)res->did);
+			return;
+		}
+		vport->stats_names = rte_zmalloc(NULL,
+			(sizeof(*vport->stats_names) + sizeof(*vport->stats)) *
+			vport->stats_n, 0);
+		if (!vport->stats_names) {
+			RTE_LOG(ERR, VDPA, "Failed to allocate memory for stat"
+				" names of device %d.\n", (int)res->did);
+			return;
+		}
+		i = rte_vdpa_get_stats_names(vport->did, vport->stats_names,
+					     vport->stats_n);
+		if (vport->stats_n != i) {
+			RTE_LOG(ERR, VDPA, "Failed to get names of device %d "
+				"stats.\n", (int)res->did);
+			return;
+		}
+		vport->stats = (struct rte_vdpa_stat *)
+			(vport->stats_names + vport->stats_n);
+	}
+	cmdline_printf(cl, "\nDevice %d:\n", (int)res->did);
+	for (; first <= last; first++) {
+		memset(vport->stats, 0, sizeof(*vport->stats) * vport->stats_n);
+		if (rte_vdpa_get_stats(vport->did, (int)first, vport->stats,
+				       vport->stats_n) <= 0) {
+			RTE_LOG(ERR, VDPA, "Failed to get vdpa queue statistics"
+				" for device id %d qid %d.\n", (int)res->did,
+				(int)first);
+			return;
+		}
+		cmdline_printf(cl, "\tVirtq %" PRIu32 ":\n", first);
+		for (i = 0; i < vport->stats_n; ++i) {
+			cmdline_printf(cl, "\t\t%-*s %-16" PRIu64 "\n",
+				RTE_VDPA_STATS_NAME_SIZE,
+				vport->stats_names[vport->stats[i].id].name,
+				vport->stats[i].value);
+		}
+	}
+}
+
+cmdline_parse_token_string_t cmd_device_stats_ =
+	TOKEN_STRING_INITIALIZER(struct cmd_stats_result, stats, "stats");
+cmdline_parse_token_num_t cmd_device_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, did, UINT16);
+cmdline_parse_token_num_t cmd_queue_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_stats_result, qid, UINT16);
+
+cmdline_parse_inst_t cmd_device_stats = {
+	.f = cmd_device_stats_parsed,
+	.data = NULL,
+	.help_str = "stats: show device statistics",
+	.tokens = {
+		(void *)&cmd_device_stats_,
+		(void *)&cmd_device_id,
+		(void *)&cmd_queue_id,
+		NULL,
+	},
+};
+
 /* *** QUIT *** */
 struct cmd_quit_result {
 	cmdline_fixed_string_t quit;
@@ -392,6 +510,7 @@ static void cmd_quit_parsed(__rte_unused void *parsed_result,
 	(cmdline_parse_inst_t *)&cmd_help,
 	(cmdline_parse_inst_t *)&cmd_list_vdpa_devices,
 	(cmdline_parse_inst_t *)&cmd_create_vdpa_port,
+	(cmdline_parse_inst_t *)&cmd_device_stats,
 	(cmdline_parse_inst_t *)&cmd_quit,
 	NULL,
 };