From patchwork Wed Sep 20 07:25:23 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131706
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 32/34] ml/cnxk: support quantize and dequantize callback
Date: Wed, 20 Sep 2023 00:25:23 -0700
Message-ID: <20230920072528.14185-33-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com>
 <20230920072528.14185-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

From: Prince Takkar

Added support for quantize and dequantize callback functions for TVM models.
Signed-off-by: Prince Takkar
---
 drivers/ml/cnxk/mvtvm_ml_model.h |   2 +
 drivers/ml/cnxk/mvtvm_ml_ops.c   | 127 +++++++++++++++++++++++++++++++
 drivers/ml/cnxk/mvtvm_ml_ops.h   |   4 +
 3 files changed, 133 insertions(+)

diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h
index d71df36f5a..57a6ce0bb1 100644
--- a/drivers/ml/cnxk/mvtvm_ml_model.h
+++ b/drivers/ml/cnxk/mvtvm_ml_model.h
@@ -5,6 +5,8 @@
 #ifndef _MVTVM_ML_MODEL_H_
 #define _MVTVM_ML_MODEL_H_
 
+#include
+
 #include
 
 #include
diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c
index 95238d43d8..5292ac97fe 100644
--- a/drivers/ml/cnxk/mvtvm_ml_ops.c
+++ b/drivers/ml/cnxk/mvtvm_ml_ops.c
@@ -7,6 +7,8 @@
 #include
 #include
 
+#include
+
 #include "cn10k_ml_ops.h"
 
 #include "mvtvm_ml_model.h"
@@ -168,6 +170,8 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *
 		callback->tvmrt_io_free = cn10k_ml_io_free;
 		callback->tvmrt_malloc = cn10k_ml_malloc;
 		callback->tvmrt_free = cn10k_ml_free;
+		callback->tvmrt_quantize = mvtvm_ml_io_quantize;
+		callback->tvmrt_dequantize = mvtvm_ml_io_dequantize;
 	} else {
 		callback = NULL;
 	}
@@ -298,3 +302,126 @@ mvtvm_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model)
 
 	return 0;
 }
+
+int
+mvtvm_ml_io_quantize(void *device, uint16_t model_id, const char *layer_name,
+		     const DLTensor **deq_tensor, void *qbuffer)
+{
+	struct cnxk_ml_io_info *info = NULL;
+	struct cnxk_ml_dev *cnxk_mldev;
+	struct cnxk_ml_model *model;
+	uint16_t layer_id = 0;
+	uint8_t *lcl_dbuffer;
+	uint8_t *lcl_qbuffer;
+	uint32_t i;
+	int ret;
+
+#ifdef CNXK_ML_DEV_DEBUG
+	if ((device == NULL) || (deq_tensor == NULL) || (qbuffer == NULL))
+		return -EINVAL;
+#endif
+
+	cnxk_mldev = (struct cnxk_ml_dev *)device;
+
+	model = cnxk_mldev->mldev->data->models[model_id];
+#ifdef CNXK_ML_DEV_DEBUG
+	if (model == NULL) {
+		plt_err("Invalid model_id = %u", model_id);
+		return -EINVAL;
+	}
+#endif
+
+	/* Get layer id */
+	for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) {
+		if (strcmp(model->layer[layer_id].name, layer_name) == 0)
+			break;
+	}
+
+#ifdef CNXK_ML_DEV_DEBUG
+	if (layer_id == model->mvtvm.metadata.model.nb_layers) {
+		plt_err("Invalid layer name: %s", layer_name);
+		return -EINVAL;
+	}
+
+	if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) {
+		plt_err("Invalid layer name / type: %s", layer_name);
+		return -EINVAL;
+	}
+#endif
+
+	info = &model->layer[layer_id].info;
+	lcl_qbuffer = (uint8_t *)qbuffer;
+
+	for (i = 0; i < info->nb_inputs; i++) {
+		lcl_dbuffer = PLT_PTR_ADD(deq_tensor[i]->data, deq_tensor[i]->byte_offset);
+
+		ret = cnxk_ml_io_quantize_single(&info->input[i], lcl_dbuffer, lcl_qbuffer);
+		if (ret < 0)
+			return ret;
+
+		lcl_qbuffer += info->input[i].sz_q;
+	}
+
+	return 0;
+}
+
+int
+mvtvm_ml_io_dequantize(void *device, uint16_t model_id, const char *layer_name, void *qbuffer,
+		       const DLTensor **deq_tensor)
+{
+	struct cnxk_ml_io_info *info = NULL;
+	struct cnxk_ml_dev *cnxk_mldev;
+	struct cnxk_ml_model *model;
+	uint16_t layer_id = 0;
+	uint8_t *lcl_dbuffer;
+	uint8_t *lcl_qbuffer;
+	uint32_t i;
+	int ret;
+
+#ifdef CNXK_ML_DEV_DEBUG
+	if ((device == NULL) || (deq_tensor == NULL) || (qbuffer == NULL))
+		return -EINVAL;
+#endif
+
+	cnxk_mldev = (struct cnxk_ml_dev *)device;
+
+	model = cnxk_mldev->mldev->data->models[model_id];
+#ifdef CNXK_ML_DEV_DEBUG
+	if (model == NULL) {
+		plt_err("Invalid model_id = %u", model_id);
+		return -EINVAL;
+	}
+#endif
+
+	for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) {
+		if (strcmp(model->layer[layer_id].name, layer_name) == 0)
+			break;
+	}
+
+#ifdef CNXK_ML_DEV_DEBUG
+	if (layer_id == model->mvtvm.metadata.model.nb_layers) {
+		plt_err("Invalid layer name: %s", layer_name);
+		return -EINVAL;
+	}
+
+	if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) {
+		plt_err("Invalid layer name / type: %s", layer_name);
+		return -EINVAL;
+	}
+#endif
+
+	info = &model->layer[layer_id].info;
+	lcl_qbuffer = (uint8_t *)qbuffer;
+
+	for (i = 0; i < info->nb_outputs; i++) {
+		lcl_dbuffer = PLT_PTR_ADD(deq_tensor[i]->data, deq_tensor[i]->byte_offset);
+
+		ret = cnxk_ml_io_dequantize_single(&info->output[i], lcl_qbuffer, lcl_dbuffer);
+		if (ret < 0)
+			return ret;
+
+		lcl_qbuffer += info->output[i].sz_q;
+	}
+
+	return 0;
+}
diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h
index 55459f9f7f..a1a868ef4b 100644
--- a/drivers/ml/cnxk/mvtvm_ml_ops.h
+++ b/drivers/ml/cnxk/mvtvm_ml_ops.h
@@ -21,5 +21,9 @@ int mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_para
 int mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model);
 int mvtvm_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model);
 int mvtvm_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model);
+int mvtvm_ml_io_quantize(void *device, uint16_t model_id, const char *layer_name,
+			 const DLTensor **deq_tensor, void *qbuffer);
+int mvtvm_ml_io_dequantize(void *device, uint16_t model_id, const char *layer_name, void *qbuffer,
+			   const DLTensor **deq_tensor);
 
 #endif /* _MVTVM_ML_OPS_H_ */
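
The patch registers mvtvm_ml_io_quantize() and mvtvm_ml_io_dequantize() as the
tvmrt_quantize/tvmrt_dequantize callbacks handed to the TVM runtime glue. For
reference, below is a minimal sketch (not part of this patch) of how a caller
could hand a dequantized fp32 input tensor to the quantize callback; the helper
name, layer name, tensor shape and buffer handling are illustrative
assumptions only.

#include <stdint.h>

#include <dlpack/dlpack.h>

#include "mvtvm_ml_ops.h"

/*
 * Illustrative placeholders only: the helper name, the 1x16 fp32 shape and the
 * layer name "tvmgen_default_marvell_main_0" are assumptions, not from the patch.
 */
static int
example_quantize_input(void *device, uint16_t model_id, void *qbuffer)
{
	static float fp32_input[16];	/* dequantized input produced by the TVM runtime */
	int64_t shape[2] = {1, 16};

	DLTensor tensor = {
		.data = fp32_input,
		.device = {.device_type = kDLCPU, .device_id = 0},
		.ndim = 2,
		.dtype = {.code = kDLFloat, .bits = 32, .lanes = 1},
		.shape = shape,
		.strides = NULL,	/* compact row-major layout */
		.byte_offset = 0,
	};
	const DLTensor *deq_tensor[1] = {&tensor};

	/* The callback looks up the Marvell layer by name, then quantizes each
	 * input tensor into qbuffer using that layer's I/O info.
	 */
	return mvtvm_ml_io_quantize(device, model_id, "tvmgen_default_marvell_main_0",
				    deq_tensor, qbuffer);
}

The dequantize path is symmetric: the driver walks the layer's outputs, reading
from the quantized buffer and writing dequantized data into the caller-provided
DLTensor array.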