From patchwork Wed Feb 1 09:22:58 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 122839
X-Patchwork-Delegate: thomas@monjalon.net
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v4 27/39] ml/cnxk: dequeue a burst of inference requests
Date: Wed, 1 Feb 2023 01:22:58 -0800
Message-ID: <20230201092310.23252-28-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230201092310.23252-1-syalavarthi@marvell.com>
References: <20221208200220.20267-1-syalavarthi@marvell.com>
 <20230201092310.23252-1-syalavarthi@marvell.com>

Enabled driver support to dequeue inference requests from the internal
queue. Dequeue checks for request completion by polling the status
field of the job request.
Signed-off-by: Srikanth Yalavarthi
---
 drivers/ml/cnxk/cn10k_ml_ops.c | 61 ++++++++++++++++++++++++++++++++++
 drivers/ml/cnxk/cn10k_ml_ops.h |  2 ++
 2 files changed, 63 insertions(+)

diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 1abdf6fad1..ef3cbadca7 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -473,6 +473,7 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c
 	rte_spinlock_init(&ocm->lock);
 
 	dev->enqueue_burst = cn10k_ml_enqueue_burst;
+	dev->dequeue_burst = cn10k_ml_dequeue_burst;
 
 	mldev->nb_models_loaded = 0;
 	mldev->state = ML_CN10K_DEV_STATE_CONFIGURED;
@@ -1418,6 +1419,23 @@ queue_free_count(uint64_t head, uint64_t tail, uint64_t nb_desc)
 	return nb_desc - queue_pending_count(head, tail, nb_desc) - 1;
 }
 
+static __rte_always_inline void
+cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result *result,
+		       struct rte_ml_op *op)
+{
+	PLT_SET_USED(dev);
+	PLT_SET_USED(qp_id);
+
+	op->impl_opaque = result->error_code;
+
+	if (likely(result->error_code == 0))
+		op->status = RTE_ML_OP_STATUS_SUCCESS;
+	else
+		op->status = RTE_ML_OP_STATUS_ERROR;
+
+	op->user_ptr = result->user_ptr;
+}
+
 __rte_hot uint16_t
 cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops,
 		       uint16_t nb_ops)
@@ -1472,6 +1490,49 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op
 	return count;
 }
 
+__rte_hot uint16_t
+cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops,
+		       uint16_t nb_ops)
+{
+	struct cn10k_ml_queue *queue;
+	struct cn10k_ml_req *req;
+	struct cn10k_ml_qp *qp;
+
+	uint64_t status;
+	uint16_t count;
+	uint64_t tail;
+
+	qp = dev->data->queue_pairs[qp_id];
+	queue = &qp->queue;
+
+	tail = queue->tail;
+	nb_ops = PLT_MIN(nb_ops, queue_pending_count(queue->head, tail, qp->nb_desc));
+	count = 0;
+
+	if (unlikely(nb_ops == 0))
+		goto empty_or_active;
+
+dequeue_req:
+	req = &queue->reqs[tail];
+	status = plt_read64(&req->status);
+	if (unlikely(status != ML_CN10K_POLL_JOB_FINISH))
+		goto empty_or_active;
+
+	cn10k_ml_result_update(dev, qp_id, &req->result, req->op);
+	ops[count] = req->op;
+
+	queue_index_advance(&tail, qp->nb_desc);
+	count++;
+
+	if (count < nb_ops)
+		goto dequeue_req;
+
+empty_or_active:
+	queue->tail = tail;
+
+	return count;
+}
+
 struct rte_ml_dev_ops cn10k_ml_ops = {
 	/* Device control ops */
 	.dev_info_get = cn10k_ml_dev_info_get,
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h
index d35f91a302..3178295bba 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.h
+++ b/drivers/ml/cnxk/cn10k_ml_ops.h
@@ -73,5 +73,7 @@ int cn10k_ml_model_stop(struct rte_ml_dev *dev, int16_t model_id);
 /* Fast-path ops */
 __rte_hot uint16_t cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id,
 					  struct rte_ml_op **ops, uint16_t nb_ops);
+__rte_hot uint16_t cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id,
+					  struct rte_ml_op **ops, uint16_t nb_ops);
 
 #endif /* _CN10K_ML_OPS_H_ */
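
Editor's note: the dequeue path above relies on small ring-index helpers
(queue_pending_count(), queue_free_count() and queue_index_advance()) whose
full definitions are not visible in this diff. The sketch below is a
hypothetical reconstruction of how such helpers commonly work, assuming a
ring of nb_desc slots where one slot is kept unused so head == tail means
empty; only the body of queue_free_count() is confirmed by the hunk context
above, the others may differ from the actual driver code.

static inline uint64_t
queue_pending_count(uint64_t head, uint64_t tail, uint64_t nb_desc)
{
	/* Entries between tail (next to dequeue) and head (next free slot),
	 * with wrap-around handled by the modulo. */
	return (nb_desc + head - tail) % nb_desc;
}

static inline uint64_t
queue_free_count(uint64_t head, uint64_t tail, uint64_t nb_desc)
{
	/* One descriptor is kept unused so that head == tail means empty
	 * (matches the "- 1" visible in the hunk context above). */
	return nb_desc - queue_pending_count(head, tail, nb_desc) - 1;
}

static inline void
queue_index_advance(uint64_t *index, uint64_t nb_desc)
{
	/* Advance a ring index by one slot, wrapping at nb_desc. */
	*index = (*index + 1) % nb_desc;
}

With helpers of this shape, cn10k_ml_dequeue_burst() bounds nb_ops by the
number of pending descriptors, polls each request's status word, and writes
the new tail back only once on exit; a request whose firmware job is still
running stops the burst early and is retried on the next dequeue call.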
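
For illustration, a minimal application-side sketch of how the enqueue and
dequeue fast-path pair is meant to be driven through the public mldev burst
API. It assumes rte_ml_enqueue_burst()/rte_ml_dequeue_burst() from
<rte_mldev.h> and a caller-prepared ops[] array of rte_ml_op pointers; the
helper run_inference_burst() is hypothetical and not part of this patch.

#include <rte_mldev.h>

/* Push nb_ops prepared inference ops to a queue pair and poll until all of
 * them have completed; returns the number of ops that reported an error. */
static uint16_t
run_inference_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops)
{
	struct rte_ml_op *done[nb_ops];
	uint16_t enq = 0;
	uint16_t deq = 0;
	uint16_t errors = 0;
	uint16_t i;

	/* Enqueue may accept fewer ops than requested when the queue is full. */
	while (enq < nb_ops)
		enq += rte_ml_enqueue_burst(dev_id, qp_id, &ops[enq], nb_ops - enq);

	/* Dequeue returns only ops whose job has completed; keep polling
	 * until every enqueued op has been retrieved. */
	while (deq < enq)
		deq += rte_ml_dequeue_burst(dev_id, qp_id, &done[deq], enq - deq);

	/* The driver sets op->status (and op->impl_opaque) per result. */
	for (i = 0; i < deq; i++) {
		if (done[i]->status != RTE_ML_OP_STATUS_SUCCESS)
			errors++;
	}

	return errors;
}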