From patchwork Wed Sep 8 10:30:12 2021
X-Patchwork-Submitter: Kevin Laatz <kevin.laatz@intel.com>
X-Patchwork-Id: 98328
X-Patchwork-Delegate: thomas@monjalon.net
From: Kevin Laatz <kevin.laatz@intel.com>
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, fengchengwen@huawei.com, jerinj@marvell.com,
 conor.walsh@intel.com, Kevin Laatz <kevin.laatz@intel.com>
Date: Wed, 8 Sep 2021 10:30:12 +0000
Message-Id: <20210908103016.1661914-14-kevin.laatz@intel.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210908103016.1661914-1-kevin.laatz@intel.com>
References: <20210903105001.1179328-1-kevin.laatz@intel.com>
 <20210908103016.1661914-1-kevin.laatz@intel.com>
Subject: [dpdk-dev] [PATCH v3 13/17] dma/idxd: add vchan status function

When testing dmadev drivers, it is useful to have the HW device in a
known state. This patch adds the implementation of the vchan status
function, which reports whether the device is idle (all jobs completed)
so that callers can wait for that state before proceeding.

Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>

---
v3: update API name to vchan_status
---
 drivers/dma/idxd/idxd_bus.c      |  1 +
 drivers/dma/idxd/idxd_common.c   | 14 ++++++++++++++
 drivers/dma/idxd/idxd_internal.h |  2 ++
 drivers/dma/idxd/idxd_pci.c      |  1 +
 4 files changed, 18 insertions(+)

diff --git a/drivers/dma/idxd/idxd_bus.c b/drivers/dma/idxd/idxd_bus.c
index 8781195d59..8f0fcad87a 100644
--- a/drivers/dma/idxd/idxd_bus.c
+++ b/drivers/dma/idxd/idxd_bus.c
@@ -101,6 +101,7 @@ static const struct rte_dmadev_ops idxd_vdev_ops = {
 		.dev_info_get = idxd_info_get,
 		.stats_get = idxd_stats_get,
 		.stats_reset = idxd_stats_reset,
+		.vchan_status = idxd_vchan_status,
 };
 
 static void *
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 66d1b3432e..e20b41ae54 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -165,6 +165,20 @@ get_comp_status(struct idxd_completion *c)
 	}
 }
 
+int
+idxd_vchan_status(const struct rte_dmadev *dev, uint16_t vchan __rte_unused,
+		enum rte_dmadev_vchan_status *status)
+{
+	struct idxd_dmadev *idxd = dev->dev_private;
+	uint16_t last_batch_write = idxd->batch_idx_write == 0 ? idxd->max_batches :
+			idxd->batch_idx_write - 1;
+	uint8_t bstatus = (idxd->batch_comp_ring[last_batch_write].status != 0);
+
+	*status = bstatus ? RTE_DMA_VCHAN_IDLE : RTE_DMA_VCHAN_ACTIVE;
+
+	return 0;
+}
+
 static __rte_always_inline int
 batch_ok(struct idxd_dmadev *idxd, uint8_t max_ops, enum rte_dma_status_code *status)
 {
diff --git a/drivers/dma/idxd/idxd_internal.h b/drivers/dma/idxd/idxd_internal.h
index c04ee002d8..fcc0235a1d 100644
--- a/drivers/dma/idxd/idxd_internal.h
+++ b/drivers/dma/idxd/idxd_internal.h
@@ -101,5 +101,7 @@ uint16_t idxd_completed_status(struct rte_dmadev *dev, uint16_t qid __rte_unused
 int idxd_stats_get(const struct rte_dmadev *dev, uint16_t vchan,
 		struct rte_dmadev_stats *stats, uint32_t stats_sz);
 int idxd_stats_reset(struct rte_dmadev *dev, uint16_t vchan);
+int idxd_vchan_status(const struct rte_dmadev *dev, uint16_t vchan,
+		enum rte_dmadev_vchan_status *status);
 
 #endif /* _IDXD_INTERNAL_H_ */
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index a84232b6e9..f3a5d2a970 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -118,6 +118,7 @@ static const struct rte_dmadev_ops idxd_pci_ops = {
 		.stats_reset = idxd_stats_reset,
 		.dev_start = idxd_pci_dev_start,
 		.dev_stop = idxd_pci_dev_stop,
+		.vchan_status = idxd_vchan_status,
 };
 
 /* each portal uses 4 x 4k pages */
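
For reviewers, a minimal usage sketch (not part of the patch) of how a test
could use the new op to wait for the hardware to drain before proceeding. It
assumes the library-level wrapper for this driver op is named
rte_dmadev_vchan_status() in this revision of the dmadev API; that name and
the wait_for_idle() helper below are illustrative only.

#include <rte_dmadev.h>

/* Poll the vchan status until the driver reports the device idle, i.e. the
 * last submitted batch has completed. Returns 0 once idle, -1 if the status
 * query itself fails (e.g. op not supported by the driver).
 */
static int
wait_for_idle(uint16_t dev_id, uint16_t vchan)
{
	enum rte_dmadev_vchan_status st;

	do {
		if (rte_dmadev_vchan_status(dev_id, vchan, &st) < 0)
			return -1;
	} while (st != RTE_DMA_VCHAN_IDLE);

	return 0;
}

As implemented in this patch, the driver derives the status from the
completion record of the most recently submitted batch, so IDLE is only
reported once that batch's completion has been written back by the hardware.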