From patchwork Mon Oct 26 07:16:09 2020
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 82166
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To: Matan Azrad, Viacheslav Ovsiienko
Cc: dev@dpdk.org, xuemingl@nvidia.com, Asaf Penso
Date: Mon, 26 Oct 2020 07:16:09 +0000
Message-Id: <1603696570-8606-1-git-send-email-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 1/2] common/mlx5: add virtq attributes error fields

Add the needed fields for the virtq DevX object to read the error state.
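The new `state` and `error_type` attributes are read through the PRM bit-layout accessors (`MLX5_GET16`). As a rough illustration of how such accessors locate a field — `prm_get16` below is a hypothetical simplification for this note, not the mlx5 macro — a 16-bit field is addressed by its bit offset inside big-endian 32-bit words:

```c
#include <assert.h>
#include <stdint.h>
#include <arpa/inet.h>

/* Hypothetical simplification of a PRM-style accessor: device layouts
 * declare fields as bit spans inside big-endian 32-bit words, and a
 * 16-bit field is extracted by dword index and in-word shift. */
static inline uint16_t
prm_get16(const void *base, unsigned int bit_off)
{
	const uint32_t *words = base;
	/* PRM words are big-endian on the wire; convert to host order. */
	uint32_t dw = ntohl(words[bit_off / 32]);
	unsigned int shift = 32 - (bit_off % 32) - 16;

	return (uint16_t)(dw >> shift);
}
```

With layouts declared as bit spans (`u8 state[0x4]`, `u8 error_type[0x8]`), the struct definitions in `mlx5_prm.h` turn into exactly such offsets for the real accessor macros.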
Acked-by: Matan Azrad
Signed-off-by: Xueming Li
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 3 +++
 drivers/common/mlx5/mlx5_devx_cmds.h | 1 +
 drivers/common/mlx5/mlx5_prm.h       | 9 +++++++--
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 8aee12d527..dc426e9b09 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1754,6 +1754,9 @@ mlx5_devx_cmd_query_virtq(struct mlx5_devx_obj *virtq_obj,
 	attr->hw_available_index = MLX5_GET16(virtio_net_q, virtq,
 					      hw_available_index);
 	attr->hw_used_index = MLX5_GET16(virtio_net_q, virtq, hw_used_index);
+	attr->state = MLX5_GET16(virtio_net_q, virtq, state);
+	attr->error_type = MLX5_GET16(virtio_net_q, virtq,
+				      virtio_q_context.error_type);
 	return ret;
 }
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index abbea67784..0ea2427b75 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -298,6 +298,7 @@ struct mlx5_devx_virtq_attr {
 		uint32_t size;
 		uint64_t offset;
 	} umems[3];
+	uint8_t error_type;
 };
 
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index d342263c85..7d671a3996 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2280,7 +2280,8 @@ struct mlx5_ifc_virtio_q_bits {
 	u8 used_addr[0x40];
 	u8 available_addr[0x40];
 	u8 virtio_q_mkey[0x20];
-	u8 reserved_at_160[0x20];
+	u8 reserved_at_160[0x18];
+	u8 error_type[0x8];
 	u8 umem_1_id[0x20];
 	u8 umem_1_size[0x20];
 	u8 umem_1_offset[0x40];
@@ -2308,7 +2309,7 @@ struct mlx5_ifc_virtio_net_q_bits {
 	u8 vhost_log_page[0x5];
 	u8 reserved_at_90[0xc];
 	u8 state[0x4];
-	u8 error_type[0x8];
+	u8 reserved_at_a0[0x8];
 	u8 tisn_or_qpn[0x18];
 	u8 dirty_bitmap_mkey[0x20];
 	u8 dirty_bitmap_size[0x20];
@@ -2329,6 +2330,10 @@ struct mlx5_ifc_query_virtq_out_bits {
 	struct mlx5_ifc_virtio_net_q_bits virtq;
 };
 
+enum {
+	MLX5_EVENT_TYPE_OBJECT_CHANGE = 0x27,
+};
+
 enum {
 	MLX5_QP_ST_RC = 0x0,
 };

From patchwork Mon Oct 26 07:16:10 2020
X-Patchwork-Submitter: Xueming Li
X-Patchwork-Id: 82167
X-Patchwork-Delegate: rasland@nvidia.com
From: Xueming Li
To: Matan Azrad, Viacheslav Ovsiienko
Cc: dev@dpdk.org, xuemingl@nvidia.com, Asaf Penso
Date: Mon, 26 Oct 2020 07:16:10 +0000
Message-Id: <1603696570-8606-2-git-send-email-xuemingl@nvidia.com>
In-Reply-To: <1603696570-8606-1-git-send-email-xuemingl@nvidia.com>
References: <1603696570-8606-1-git-send-email-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH 2/2] vdpa/mlx5: hardware error handling

When a hardware error happened, the vdpa driver got no notification and was left silent: in a working state, but with no response.
This patch subscribes to the firmware virtq error event and tries to recover up to 3 times in 10 seconds, stopping the virtq if the maximum retry number is reached. When an error happens, the PMD logs it at warning level; if recovery fails, it outputs an error log. Query the virtq statistics to get the error counters report.

Acked-by: Matan Azrad
Signed-off-by: Xueming Li
---
 drivers/vdpa/mlx5/mlx5_vdpa.c       |   2 +
 drivers/vdpa/mlx5/mlx5_vdpa.h       |  37 ++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 140 ++++++++++++++++++++++++++++
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c |  61 +++++++++---
 4 files changed, 225 insertions(+), 15 deletions(-)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index a8f3e4b1de..ba779c10ee 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -283,6 +283,7 @@ mlx5_vdpa_dev_close(int vid)
 	}
 	if (priv->configured)
 		ret |= mlx5_vdpa_lm_log(priv);
+	mlx5_vdpa_err_event_unset(priv);
 	mlx5_vdpa_cqe_event_unset(priv);
 	mlx5_vdpa_steer_unset(priv);
 	mlx5_vdpa_virtqs_release(priv);
@@ -318,6 +319,7 @@ mlx5_vdpa_dev_config(int vid)
 		DRV_LOG(WARNING, "MTU cannot be set on device %s.",
 				vdev->device->name);
 	if (mlx5_vdpa_pd_create(priv) || mlx5_vdpa_mem_register(priv) ||
+	    mlx5_vdpa_err_event_setup(priv) ||
 	    mlx5_vdpa_virtqs_prepare(priv) || mlx5_vdpa_steer_setup(priv) ||
 	    mlx5_vdpa_cqe_event_setup(priv)) {
 		mlx5_vdpa_dev_close(vid);
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index fcbc12ab0c..0d6886c52c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -87,6 +87,7 @@ struct mlx5_vdpa_virtq {
 	uint16_t vq_size;
 	uint8_t notifier_state;
 	bool stopped;
+	uint32_t version;
 	struct mlx5_vdpa_priv *priv;
 	struct mlx5_devx_obj *virtq;
 	struct mlx5_devx_obj *counters;
@@ -97,6 +98,8 @@ struct mlx5_vdpa_virtq {
 		uint32_t size;
 	} umems[3];
 	struct rte_intr_handle intr_handle;
+	uint64_t err_time[3]; /* RDTSC time of recent errors. */
+	uint32_t n_retry;
 	struct mlx5_devx_virtio_q_couners_attr reset;
 };
 
@@ -143,8 +146,10 @@ struct mlx5_vdpa_priv {
 	struct rte_vhost_memory *vmem;
 	uint32_t eqn;
 	struct mlx5dv_devx_event_channel *eventc;
+	struct mlx5dv_devx_event_channel *err_chnl;
 	struct mlx5dv_devx_uar *uar;
 	struct rte_intr_handle intr_handle;
+	struct rte_intr_handle err_intr_handle;
 	struct mlx5_devx_obj *td;
 	struct mlx5_devx_obj *tis;
 	uint16_t nr_virtqs;
@@ -259,6 +264,25 @@ int mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv);
  */
 void mlx5_vdpa_cqe_event_unset(struct mlx5_vdpa_priv *priv);
 
+/**
+ * Setup error interrupt handler.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv);
+
+/**
+ * Unset error event handler.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ */
+void mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv);
+
 /**
  * Release a virtq and all its related resources.
  *
@@ -392,6 +416,19 @@ int mlx5_vdpa_virtq_modify(struct mlx5_vdpa_virtq *virtq, int state);
  */
 int mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index);
 
+/**
+ * Query virtq information.
+ *
+ * @param[in] priv
+ *   The vdpa driver private structure.
+ * @param[in] index
+ *   The virtq index.
+ *
+ * @return
+ *   0 on success, a negative value otherwise.
+ */
+int mlx5_vdpa_virtq_query(struct mlx5_vdpa_priv *priv, int index);
+
 /**
  * Get virtq statistics.
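The new `err_time[3]` array in `struct mlx5_vdpa_virtq` acts as a sliding window over the three most recent error timestamps: recovery is retried only while the oldest recorded error is far enough in the past. A standalone sketch of that policy (the type and helper names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define ERR_WINDOW 3	/* Mirrors err_time[3] in mlx5_vdpa_virtq. */
#define ERR_TIME_SEC 3	/* Mirrors MLX5_VDPA_ERROR_TIME_SEC. */

struct err_log {
	uint64_t err_time[ERR_WINDOW]; /* Timestamps, oldest first. */
};

/* Decide whether recovery may be retried for a new error at `now`
 * (TSC ticks, `hz` ticks per second), then shift the window left so
 * the newest timestamp lands in the last slot. */
static int
err_log_should_retry(struct err_log *log, uint64_t now, uint64_t hz)
{
	int retry = (now - log->err_time[0]) / hz > ERR_TIME_SEC;
	unsigned int i;

	for (i = 1; i < ERR_WINDOW; i++)
		log->err_time[i - 1] = log->err_time[i];
	log->err_time[ERR_WINDOW - 1] = now;
	return retry;
}
```

Three errors landing inside the window make the oldest timestamp recent, so the next quick error is reported as unrecoverable instead of retried.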
  *
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 8a01e42794..89df699dad 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -15,11 +15,14 @@
 #include 
 #include 
+#include 
 
 #include "mlx5_vdpa_utils.h"
 #include "mlx5_vdpa.h"
 
+#define MLX5_VDPA_ERROR_TIME_SEC 3u
+
 void
 mlx5_vdpa_event_qp_global_release(struct mlx5_vdpa_priv *priv)
 {
@@ -378,6 +381,143 @@ mlx5_vdpa_interrupt_handler(void *cb_arg)
 	pthread_mutex_unlock(&priv->vq_config_lock);
 }
 
+static void
+mlx5_vdpa_err_interrupt_handler(void *cb_arg __rte_unused)
+{
+#ifdef HAVE_IBV_DEVX_EVENT
+	struct mlx5_vdpa_priv *priv = cb_arg;
+	union {
+		struct mlx5dv_devx_async_event_hdr event_resp;
+		uint8_t buf[sizeof(struct mlx5dv_devx_async_event_hdr) + 128];
+	} out;
+	uint32_t vq_index, i, version;
+	struct mlx5_vdpa_virtq *virtq;
+	uint64_t sec;
+
+	pthread_mutex_lock(&priv->vq_config_lock);
+	while (mlx5_glue->devx_get_event(priv->err_chnl, &out.event_resp,
+					 sizeof(out.buf)) >=
+				       (ssize_t)sizeof(out.event_resp.cookie)) {
+		vq_index = out.event_resp.cookie & UINT32_MAX;
+		version = out.event_resp.cookie >> 32;
+		if (vq_index >= priv->nr_virtqs) {
+			DRV_LOG(ERR, "Invalid device %s error event virtq %d.",
+				priv->vdev->device->name, vq_index);
+			continue;
+		}
+		virtq = &priv->virtqs[vq_index];
+		if (!virtq->enable || virtq->version != version)
+			continue;
+		if (rte_rdtsc() / rte_get_tsc_hz() < MLX5_VDPA_ERROR_TIME_SEC)
+			continue;
+		virtq->stopped = true;
+		/* Query error info. */
+		if (mlx5_vdpa_virtq_query(priv, vq_index))
+			goto log;
+		/* Disable vq. */
+		if (mlx5_vdpa_virtq_enable(priv, vq_index, 0)) {
+			DRV_LOG(ERR, "Failed to disable virtq %d.", vq_index);
+			goto log;
+		}
+		/* Retry if error happens less than N times in 3 seconds. */
+		sec = (rte_rdtsc() - virtq->err_time[0]) / rte_get_tsc_hz();
+		if (sec > MLX5_VDPA_ERROR_TIME_SEC) {
+			/* Retry. */
+			if (mlx5_vdpa_virtq_enable(priv, vq_index, 1))
+				DRV_LOG(ERR, "Failed to enable virtq %d.",
+					vq_index);
+			else
+				DRV_LOG(WARNING, "Recover virtq %d: %u.",
+					vq_index, ++virtq->n_retry);
+		} else {
+			/* Retry timeout, give up. */
+			DRV_LOG(ERR, "Device %s virtq %d failed to recover.",
+				priv->vdev->device->name, vq_index);
+		}
+log:
+		/* Shift in current time to error time log end. */
+		for (i = 1; i < RTE_DIM(virtq->err_time); i++)
+			virtq->err_time[i - 1] = virtq->err_time[i];
+		virtq->err_time[RTE_DIM(virtq->err_time) - 1] = rte_rdtsc();
+	}
+	pthread_mutex_unlock(&priv->vq_config_lock);
+#endif
+}
+
+int
+mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
+{
+	int ret;
+	int flags;
+
+	/* Setup device event channel. */
+	priv->err_chnl = mlx5_glue->devx_create_event_channel(priv->ctx, 0);
+	if (!priv->err_chnl) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to create device event channel %d.",
+			rte_errno);
+		goto error;
+	}
+	flags = fcntl(priv->err_chnl->fd, F_GETFL);
+	ret = fcntl(priv->err_chnl->fd, F_SETFL, flags | O_NONBLOCK);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to change device event channel FD.");
+		goto error;
+	}
+	priv->err_intr_handle.fd = priv->err_chnl->fd;
+	priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
+	if (rte_intr_callback_register(&priv->err_intr_handle,
+				       mlx5_vdpa_err_interrupt_handler,
+				       priv)) {
+		priv->err_intr_handle.fd = 0;
+		DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
+			priv->vid);
+		goto error;
+	} else {
+		DRV_LOG(DEBUG, "Registered error interrupt for device%d.",
+			priv->vid);
+	}
+	return 0;
+error:
+	mlx5_vdpa_err_event_unset(priv);
+	return -1;
+}
+
+void
+mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
+{
+	int retries = MLX5_VDPA_INTR_RETRIES;
+	int ret = -EAGAIN;
+	union {
+		struct mlx5dv_devx_async_event_hdr event_resp;
+		uint8_t buf[sizeof(struct mlx5dv_devx_async_event_hdr) + 128];
+	} out;
+
+	if (!priv->err_intr_handle.fd)
+		return;
+	while (retries-- && ret == -EAGAIN) {
+		ret = rte_intr_callback_unregister(&priv->err_intr_handle,
					    mlx5_vdpa_err_interrupt_handler,
					    priv);
+		if (ret == -EAGAIN) {
+			DRV_LOG(DEBUG, "Try again to unregister fd %d "
+				"of error interrupt, retries = %d.",
+				priv->err_intr_handle.fd, retries);
+			rte_pause();
+		}
+	}
+	memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
+	if (priv->err_chnl) {
+		/* Clean all pending events. */
+		while (mlx5_glue->devx_get_event(priv->err_chnl,
+			&out.event_resp, sizeof(out.buf)) >=
+				       (ssize_t)sizeof(out.event_resp.cookie))
+			;
+		mlx5_glue->devx_destroy_event_channel(priv->err_chnl);
+		priv->err_chnl = NULL;
+	}
+}
+
 int
 mlx5_vdpa_cqe_event_setup(struct mlx5_vdpa_priv *priv)
 {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index 17e71cf4f4..d5ac040544 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -88,11 +88,6 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
 			rte_free(virtq->umems[i].buf);
 	}
 	memset(&virtq->umems, 0, sizeof(virtq->umems));
-	if (virtq->counters) {
-		claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
-		virtq->counters = NULL;
-	}
-	memset(&virtq->reset, 0, sizeof(virtq->reset));
 	if (virtq->eqp.fw_qp)
 		mlx5_vdpa_event_qp_destroy(&virtq->eqp);
 	virtq->notifier_state = MLX5_VDPA_NOTIFIER_STATE_DISABLED;
@@ -103,9 +98,19 @@ void
 mlx5_vdpa_virtqs_release(struct mlx5_vdpa_priv *priv)
 {
 	int i;
+	struct mlx5_vdpa_virtq *virtq;
 
-	for (i = 0; i < priv->nr_virtqs; i++)
-		mlx5_vdpa_virtq_unset(&priv->virtqs[i]);
+	for (i = 0; i < priv->nr_virtqs; i++) {
+		virtq = &priv->virtqs[i];
+		mlx5_vdpa_virtq_unset(virtq);
+		if (virtq->counters) {
+			claim_zero(mlx5_devx_cmd_destroy(virtq->counters));
+			virtq->counters = NULL;
+			memset(&virtq->reset, 0, sizeof(virtq->reset));
+		}
+		memset(virtq->err_time, 0, sizeof(virtq->err_time));
+		virtq->n_retry = 0;
+	}
 	if (priv->tis) {
 		claim_zero(mlx5_devx_cmd_destroy(priv->tis));
 		priv->tis = NULL;
@@ -138,7 +143,6 @@ mlx5_vdpa_virtq_modify(struct mlx5_vdpa_virtq *virtq, int state)
 int
 mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index)
 {
-	struct mlx5_devx_virtq_attr attr = {0};
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
 	int ret;
 
@@ -148,6 +152,17 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index)
 	if (ret)
 		return -1;
 	virtq->stopped = true;
+	DRV_LOG(DEBUG, "vid %u virtq %u was stopped.", priv->vid, index);
+	return mlx5_vdpa_virtq_query(priv, index);
+}
+
+int
+mlx5_vdpa_virtq_query(struct mlx5_vdpa_priv *priv, int index)
+{
+	struct mlx5_devx_virtq_attr attr = {0};
+	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[index];
+	int ret;
+
 	if (mlx5_devx_cmd_query_virtq(virtq->virtq, &attr)) {
 		DRV_LOG(ERR, "Failed to query virtq %d.", index);
 		return -1;
@@ -162,7 +177,9 @@ mlx5_vdpa_virtq_stop(struct mlx5_vdpa_priv *priv, int index)
 		DRV_LOG(ERR, "Failed to set virtq %d base.", index);
 		return -1;
 	}
-	DRV_LOG(DEBUG, "vid %u virtq %u was stopped.", priv->vid, index);
+	if (attr.state == MLX5_VIRTQ_STATE_ERROR)
+		DRV_LOG(WARNING, "vid %d vring %d hw error=%hhu",
+			priv->vid, index, attr.error_type);
 	return 0;
 }
 
@@ -195,6 +212,8 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 	unsigned int i;
 	uint16_t last_avail_idx;
 	uint16_t last_used_idx;
+	uint16_t event_num = MLX5_EVENT_TYPE_OBJECT_CHANGE;
+	uint64_t cookie;
 
 	ret = rte_vhost_get_vhost_vring(priv->vid, index, &vq);
 	if (ret)
@@ -231,8 +250,9 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 			" need event QPs and event mechanism.", index);
 	}
 	if (priv->caps.queue_counters_valid) {
-		virtq->counters = mlx5_devx_cmd_create_virtio_q_counters
-								(priv->ctx);
+		if (!virtq->counters)
+			virtq->counters = mlx5_devx_cmd_create_virtio_q_counters
+								(priv->ctx);
 		if (!virtq->counters) {
 			DRV_LOG(ERR, "Failed to create virtq couners for virtq"
				" %d.", index);
@@ -332,6 +352,19 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
 				virtq->intr_handle.fd, index);
 		}
 	}
+	/* Subscribe virtq error event. */
+	virtq->version++;
+	cookie = ((uint64_t)virtq->version << 32) + index;
+	ret = mlx5_glue->devx_subscribe_devx_event(priv->err_chnl,
+						   virtq->virtq->obj,
+						   sizeof(event_num),
+						   &event_num, cookie);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to subscribe device %d virtq %d error event.",
+			priv->vid, index);
+		rte_errno = errno;
+		goto error;
+	}
 	virtq->stopped = false;
 	DRV_LOG(DEBUG, "vid %u virtq %u was created successfully.", priv->vid,
 		index);
@@ -526,12 +559,11 @@ mlx5_vdpa_virtq_stats_get(struct mlx5_vdpa_priv *priv, int qid,
 	struct mlx5_devx_virtio_q_couners_attr attr = {0};
 	int ret;
 
-	if (!virtq->virtq || !virtq->enable) {
+	if (!virtq->counters) {
 		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
			"is invalid.", qid);
 		return -EINVAL;
 	}
-	MLX5_ASSERT(virtq->counters);
 	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters, &attr);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to read virtq %d stats from HW.", qid);
@@ -583,12 +615,11 @@ mlx5_vdpa_virtq_stats_reset(struct mlx5_vdpa_priv *priv, int qid)
 	struct mlx5_vdpa_virtq *virtq = &priv->virtqs[qid];
 	int ret;
 
-	if (!virtq->virtq || !virtq->enable) {
+	if (!virtq->counters) {
 		DRV_LOG(ERR, "Failed to read virtq %d statistics - virtq "
			"is invalid.", qid);
 		return -EINVAL;
 	}
-	MLX5_ASSERT(virtq->counters);
 	ret = mlx5_devx_cmd_query_virtio_q_counters(virtq->counters,
 						    &virtq->reset);
 	if (ret)
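The subscribe path in the virtq setup packs the virtq object version and queue index into the 64-bit DevX event cookie, and the error interrupt handler splits it back so events queued against a stale (since re-created) virtq object can be dropped. A minimal sketch of that encoding (helper names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Event cookie layout used by the series: virtq object version in the
 * high 32 bits, virtq index in the low 32 bits. */
static inline uint64_t
cookie_pack(uint32_t version, uint32_t vq_index)
{
	return ((uint64_t)version << 32) | vq_index;
}

static inline uint32_t
cookie_index(uint64_t cookie)
{
	/* Bitwise AND: a logical `cookie && UINT32_MAX` would collapse
	 * every index to 0 or 1. */
	return (uint32_t)(cookie & UINT32_MAX);
}

static inline uint32_t
cookie_version(uint64_t cookie)
{
	return (uint32_t)(cookie >> 32);
}
```

Bumping `version` on every virtq re-setup means an event delivered for a destroyed object carries an old version, so the handler's `virtq->version != version` check silently ignores it.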