From patchwork Wed Sep 20 07:24:52 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131675
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 01/34] ml/cnxk: drop support for register polling
Date: Wed, 20 Sep 2023 00:24:52 -0700
Message-ID: <20230920072528.14185-2-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Drop support for the "poll_mem" device argument in the cnxk ML driver.
Polling through ML registers is no longer supported; DDR addresses are
now always used to poll for inference request completion.
Signed-off-by: Srikanth Yalavarthi --- Depends-on: series-29565 ("Spec changes to support multi I/O models") doc/guides/mldevs/cnxk.rst | 16 ----- drivers/ml/cnxk/cn10k_ml_dev.c | 36 +---------- drivers/ml/cnxk/cn10k_ml_dev.h | 13 +--- drivers/ml/cnxk/cn10k_ml_ops.c | 111 ++++----------------------------- drivers/ml/cnxk/cn10k_ml_ops.h | 6 -- 5 files changed, 18 insertions(+), 164 deletions(-) diff --git a/doc/guides/mldevs/cnxk.rst b/doc/guides/mldevs/cnxk.rst index b79bc540d9..1834b1f905 100644 --- a/doc/guides/mldevs/cnxk.rst +++ b/doc/guides/mldevs/cnxk.rst @@ -180,22 +180,6 @@ Runtime Config Options in the fast path enqueue burst operation. -**Polling memory location** (default ``ddr``) - - ML cnxk driver provides the option to select the memory location to be used - for polling to check the inference request completion. - Driver supports using either the DDR address space (``ddr``) - or ML registers (``register``) as polling locations. - The parameter ``poll_mem`` is used to specify the poll location. - - For example:: - - -a 0000:00:10.0,poll_mem="register" - - With the above configuration, ML cnxk driver is configured to use ML registers - for polling in fastpath requests. 
- - Debugging Options ----------------- diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c index 983138a7f2..e3c2badcef 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.c +++ b/drivers/ml/cnxk/cn10k_ml_dev.c @@ -23,7 +23,6 @@ #define CN10K_ML_DEV_CACHE_MODEL_DATA "cache_model_data" #define CN10K_ML_OCM_ALLOC_MODE "ocm_alloc_mode" #define CN10K_ML_DEV_HW_QUEUE_LOCK "hw_queue_lock" -#define CN10K_ML_FW_POLL_MEM "poll_mem" #define CN10K_ML_OCM_PAGE_SIZE "ocm_page_size" #define CN10K_ML_FW_PATH_DEFAULT "/lib/firmware/mlip-fw.bin" @@ -32,7 +31,6 @@ #define CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT 1 #define CN10K_ML_OCM_ALLOC_MODE_DEFAULT "lowest" #define CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT 1 -#define CN10K_ML_FW_POLL_MEM_DEFAULT "ddr" #define CN10K_ML_OCM_PAGE_SIZE_DEFAULT 16384 /* ML firmware macros */ @@ -54,7 +52,6 @@ static const char *const valid_args[] = {CN10K_ML_FW_PATH, CN10K_ML_DEV_CACHE_MODEL_DATA, CN10K_ML_OCM_ALLOC_MODE, CN10K_ML_DEV_HW_QUEUE_LOCK, - CN10K_ML_FW_POLL_MEM, CN10K_ML_OCM_PAGE_SIZE, NULL}; @@ -103,9 +100,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde bool hw_queue_lock_set = false; bool ocm_page_size_set = false; char *ocm_alloc_mode = NULL; - bool poll_mem_set = false; bool fw_path_set = false; - char *poll_mem = NULL; char *fw_path = NULL; int ret = 0; bool found; @@ -189,17 +184,6 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde hw_queue_lock_set = true; } - if (rte_kvargs_count(kvlist, CN10K_ML_FW_POLL_MEM) == 1) { - ret = rte_kvargs_process(kvlist, CN10K_ML_FW_POLL_MEM, &parse_string_arg, - &poll_mem); - if (ret < 0) { - plt_err("Error processing arguments, key = %s\n", CN10K_ML_FW_POLL_MEM); - ret = -EINVAL; - goto exit; - } - poll_mem_set = true; - } - if (rte_kvargs_count(kvlist, CN10K_ML_OCM_PAGE_SIZE) == 1) { ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_PAGE_SIZE, &parse_integer_arg, &mldev->ocm_page_size); @@ -280,18 +264,6 @@ 
cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde } plt_info("ML: %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK, mldev->hw_queue_lock); - if (!poll_mem_set) { - mldev->fw.poll_mem = CN10K_ML_FW_POLL_MEM_DEFAULT; - } else { - if (!((strcmp(poll_mem, "ddr") == 0) || (strcmp(poll_mem, "register") == 0))) { - plt_err("Invalid argument, %s = %s\n", CN10K_ML_FW_POLL_MEM, poll_mem); - ret = -EINVAL; - goto exit; - } - mldev->fw.poll_mem = poll_mem; - } - plt_info("ML: %s = %s", CN10K_ML_FW_POLL_MEM, mldev->fw.poll_mem); - if (!ocm_page_size_set) { mldev->ocm_page_size = CN10K_ML_OCM_PAGE_SIZE_DEFAULT; } else { @@ -450,10 +422,7 @@ cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw) if (fw->report_dpe_warnings) flags = flags | FW_REPORT_DPE_WARNING_BITMASK; - if (strcmp(fw->poll_mem, "ddr") == 0) - flags = flags | FW_USE_DDR_POLL_ADDR_FP; - else if (strcmp(fw->poll_mem, "register") == 0) - flags = flags & ~FW_USE_DDR_POLL_ADDR_FP; + flags = flags | FW_USE_DDR_POLL_ADDR_FP; return flags; } @@ -863,5 +832,4 @@ RTE_PMD_REGISTER_PARAM_STRING(MLDEV_NAME_CN10K_PMD, CN10K_ML_FW_PATH "=<0|1>" CN10K_ML_DEV_CACHE_MODEL_DATA "=<0|1>" CN10K_ML_OCM_ALLOC_MODE "=" CN10K_ML_DEV_HW_QUEUE_LOCK - "=<0|1>" CN10K_ML_FW_POLL_MEM "=" CN10K_ML_OCM_PAGE_SIZE - "=<1024|2048|4096|8192|16384>"); + "=<0|1>" CN10K_ML_OCM_PAGE_SIZE "=<1024|2048|4096|8192|16384>"); diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index c73bf7d001..4aaeecff03 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -390,9 +390,6 @@ struct cn10k_ml_fw { /* Report DPE warnings */ int report_dpe_warnings; - /* Memory to be used for polling in fast-path requests */ - const char *poll_mem; - /* Data buffer */ uint8_t *data; @@ -525,13 +522,9 @@ struct cn10k_ml_dev { bool (*ml_jcmdq_enqueue)(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd); /* Poll handling function pointers */ - void (*set_poll_addr)(struct cn10k_ml_qp *qp, struct cn10k_ml_req *req, 
uint64_t idx); - void (*set_poll_ptr)(struct roc_ml *roc_ml, struct cn10k_ml_req *req); - uint64_t (*get_poll_ptr)(struct roc_ml *roc_ml, struct cn10k_ml_req *req); - - /* Memory barrier function pointers to handle synchronization */ - void (*set_enq_barrier)(void); - void (*set_deq_barrier)(void); + void (*set_poll_addr)(struct cn10k_ml_req *req); + void (*set_poll_ptr)(struct cn10k_ml_req *req); + uint64_t (*get_poll_ptr)(struct cn10k_ml_req *req); }; uint64_t cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw); diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 4abf4ae0d3..11531afd8c 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -23,11 +23,6 @@ #define ML_FLAGS_POLL_COMPL BIT(0) #define ML_FLAGS_SSO_COMPL BIT(1) -/* Scratch register range for poll mode requests */ -#define ML_POLL_REGISTER_SYNC 1023 -#define ML_POLL_REGISTER_START 1024 -#define ML_POLL_REGISTER_END 2047 - /* Error message length */ #define ERRMSG_LEN 32 @@ -82,79 +77,23 @@ print_line(FILE *fp, int len) } static inline void -cn10k_ml_set_poll_addr_ddr(struct cn10k_ml_qp *qp, struct cn10k_ml_req *req, uint64_t idx) +cn10k_ml_set_poll_addr(struct cn10k_ml_req *req) { - PLT_SET_USED(qp); - PLT_SET_USED(idx); - req->compl_W1 = PLT_U64_CAST(&req->status); } static inline void -cn10k_ml_set_poll_addr_reg(struct cn10k_ml_qp *qp, struct cn10k_ml_req *req, uint64_t idx) -{ - req->compl_W1 = ML_SCRATCH(qp->block_start + idx % qp->block_size); -} - -static inline void -cn10k_ml_set_poll_ptr_ddr(struct roc_ml *roc_ml, struct cn10k_ml_req *req) +cn10k_ml_set_poll_ptr(struct cn10k_ml_req *req) { - PLT_SET_USED(roc_ml); - plt_write64(ML_CN10K_POLL_JOB_START, req->compl_W1); } -static inline void -cn10k_ml_set_poll_ptr_reg(struct roc_ml *roc_ml, struct cn10k_ml_req *req) -{ - roc_ml_reg_write64(roc_ml, ML_CN10K_POLL_JOB_START, req->compl_W1); -} - static inline uint64_t -cn10k_ml_get_poll_ptr_ddr(struct roc_ml *roc_ml, struct cn10k_ml_req *req) 
+cn10k_ml_get_poll_ptr(struct cn10k_ml_req *req) { - PLT_SET_USED(roc_ml); - return plt_read64(req->compl_W1); } -static inline uint64_t -cn10k_ml_get_poll_ptr_reg(struct roc_ml *roc_ml, struct cn10k_ml_req *req) -{ - return roc_ml_reg_read64(roc_ml, req->compl_W1); -} - -static inline void -cn10k_ml_set_sync_addr(struct cn10k_ml_dev *mldev, struct cn10k_ml_req *req) -{ - if (strcmp(mldev->fw.poll_mem, "ddr") == 0) - req->compl_W1 = PLT_U64_CAST(&req->status); - else if (strcmp(mldev->fw.poll_mem, "register") == 0) - req->compl_W1 = ML_SCRATCH(ML_POLL_REGISTER_SYNC); -} - -static inline void -cn10k_ml_enq_barrier_ddr(void) -{ -} - -static inline void -cn10k_ml_deq_barrier_ddr(void) -{ -} - -static inline void -cn10k_ml_enq_barrier_register(void) -{ - dmb_st; -} - -static inline void -cn10k_ml_deq_barrier_register(void) -{ - dsb_st; -} - static void qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) { @@ -242,9 +181,6 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des qp->stats.dequeued_count = 0; qp->stats.enqueue_err_count = 0; qp->stats.dequeue_err_count = 0; - qp->block_size = - (ML_POLL_REGISTER_END - ML_POLL_REGISTER_START + 1) / dev->data->nb_queue_pairs; - qp->block_start = ML_POLL_REGISTER_START + qp_id * qp->block_size; /* Initialize job command */ for (i = 0; i < qp->nb_desc; i++) { @@ -933,11 +869,7 @@ cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) else dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_LF; - if (strcmp(mldev->fw.poll_mem, "register") == 0) - dev_info->max_desc = ML_CN10K_MAX_DESC_PER_QP / dev_info->max_queue_pairs; - else if (strcmp(mldev->fw.poll_mem, "ddr") == 0) - dev_info->max_desc = ML_CN10K_MAX_DESC_PER_QP; - + dev_info->max_desc = ML_CN10K_MAX_DESC_PER_QP; dev_info->max_io = ML_CN10K_MAX_INPUT_OUTPUT; dev_info->max_segments = ML_CN10K_MAX_SEGMENTS; dev_info->align_size = ML_CN10K_ALIGN_SIZE; @@ -1118,24 +1050,9 @@ cn10k_ml_dev_configure(struct 
rte_ml_dev *dev, const struct rte_ml_dev_config *c mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_lf; /* Set polling function pointers */ - if (strcmp(mldev->fw.poll_mem, "ddr") == 0) { - mldev->set_poll_addr = cn10k_ml_set_poll_addr_ddr; - mldev->set_poll_ptr = cn10k_ml_set_poll_ptr_ddr; - mldev->get_poll_ptr = cn10k_ml_get_poll_ptr_ddr; - } else if (strcmp(mldev->fw.poll_mem, "register") == 0) { - mldev->set_poll_addr = cn10k_ml_set_poll_addr_reg; - mldev->set_poll_ptr = cn10k_ml_set_poll_ptr_reg; - mldev->get_poll_ptr = cn10k_ml_get_poll_ptr_reg; - } - - /* Set barrier function pointers */ - if (strcmp(mldev->fw.poll_mem, "ddr") == 0) { - mldev->set_enq_barrier = cn10k_ml_enq_barrier_ddr; - mldev->set_deq_barrier = cn10k_ml_deq_barrier_ddr; - } else if (strcmp(mldev->fw.poll_mem, "register") == 0) { - mldev->set_enq_barrier = cn10k_ml_enq_barrier_register; - mldev->set_deq_barrier = cn10k_ml_deq_barrier_register; - } + mldev->set_poll_addr = cn10k_ml_set_poll_addr; + mldev->set_poll_ptr = cn10k_ml_set_poll_ptr; + mldev->get_poll_ptr = cn10k_ml_get_poll_ptr; dev->enqueue_burst = cn10k_ml_enqueue_burst; dev->dequeue_burst = cn10k_ml_dequeue_burst; @@ -2390,15 +2307,14 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op op = ops[count]; req = &queue->reqs[head]; - mldev->set_poll_addr(qp, req, head); + mldev->set_poll_addr(req); cn10k_ml_prep_fp_job_descriptor(dev, req, op); memset(&req->result, 0, sizeof(struct cn10k_ml_result)); req->result.error_code.s.etype = ML_ETYPE_UNKNOWN; req->result.user_ptr = op->user_ptr; - mldev->set_enq_barrier(); - mldev->set_poll_ptr(&mldev->roc, req); + mldev->set_poll_ptr(req); enqueued = mldev->ml_jcmdq_enqueue(&mldev->roc, &req->jcmd); if (unlikely(!enqueued)) goto jcmdq_full; @@ -2445,7 +2361,7 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op dequeue_req: req = &queue->reqs[tail]; - status = mldev->get_poll_ptr(&mldev->roc, req); + status = 
mldev->get_poll_ptr(req); if (unlikely(status != ML_CN10K_POLL_JOB_FINISH)) { if (plt_tsc_cycles() < req->timeout) goto empty_or_active; @@ -2453,7 +2369,6 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op req->result.error_code.s.etype = ML_ETYPE_DRIVER; } - mldev->set_deq_barrier(); cn10k_ml_result_update(dev, qp_id, &req->result, req->op); ops[count] = req->op; @@ -2515,14 +2430,14 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) model = dev->data->models[op->model_id]; req = model->req; - cn10k_ml_set_sync_addr(mldev, req); + cn10k_ml_set_poll_addr(req); cn10k_ml_prep_fp_job_descriptor(dev, req, op); memset(&req->result, 0, sizeof(struct cn10k_ml_result)); req->result.error_code.s.etype = ML_ETYPE_UNKNOWN; req->result.user_ptr = op->user_ptr; - mldev->set_poll_ptr(&mldev->roc, req); + mldev->set_poll_ptr(req); req->jcmd.w1.s.jobptr = PLT_U64_CAST(&req->jd); timeout = true; @@ -2542,7 +2457,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) timeout = true; do { - if (mldev->get_poll_ptr(&mldev->roc, req) == ML_CN10K_POLL_JOB_FINISH) { + if (mldev->get_poll_ptr(req) == ML_CN10K_POLL_JOB_FINISH) { timeout = false; break; } diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index d64a9f27e6..005b093e45 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -67,12 +67,6 @@ struct cn10k_ml_qp { /* Statistics per queue-pair */ struct rte_ml_dev_stats stats; - - /* Register block start for polling */ - uint32_t block_start; - - /* Register block end for polling */ - uint32_t block_size; }; /* Device ops */
From patchwork Wed Sep 20 07:24:53 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131676
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi, Prince Takkar
Subject: [PATCH v2 02/34] ml/cnxk: drop use of RTE API for firmware read
Date: Wed, 20 Sep 2023 00:24:53 -0700
Message-ID: <20230920072528.14185-3-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Drop use of the rte_firmware_read API to read the ML firmware binary.
When DPDK is built with libarchive support, the RTE API assumes the
binary file is a compressed archive. This causes the ML firmware binary
to be parsed incorrectly.

Fixes: c29da752ffa8 ("ml/cnxk: support firmware load and device reset")
Cc: syalavarthi@marvell.com

Signed-off-by: Srikanth Yalavarthi
---
 drivers/ml/cnxk/cn10k_ml_dev.c | 64 +++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c
index e3c2badcef..b7e6ed9a00 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.c
+++ b/drivers/ml/cnxk/cn10k_ml_dev.c
@@ -2,6 +2,11 @@ * Copyright (c) 2022 Marvell.
*/ +#include +#include +#include +#include + #include #include #include @@ -61,6 +66,57 @@ static const int valid_ocm_page_size[] = {1024, 2048, 4096, 8192, 16384}; /* Dummy operations for ML device */ struct rte_ml_dev_ops ml_dev_dummy_ops = {0}; +static int +ml_read_file(const char *file, size_t *size, char **buffer) +{ + char *file_buffer = NULL; + struct stat file_stat; + char *file_map; + int ret; + int fd; + + fd = open(file, O_RDONLY); + if (fd == -1) { + plt_err("Failed to open file: %s\n", file); + return -errno; + } + + if (fstat(fd, &file_stat) != 0) { + plt_err("fstat failed for file: %s\n", file); + close(fd); + return -errno; + } + + file_buffer = rte_malloc("ml_firmware", file_stat.st_size, PLT_CACHE_LINE_SIZE); + if (file_buffer == NULL) { + plt_err("Failed to allocate memory: %s\n", file); + ret = -ENOMEM; + goto error; + } + + file_map = mmap(0, file_stat.st_size, PROT_READ, MAP_PRIVATE, fd, 0); + if (file_map == MAP_FAILED) { + plt_err("Failed to map file: %s\n", file); + ret = -errno; + goto error; + } + + rte_memcpy(file_buffer, file_map, file_stat.st_size); + munmap(file_map, file_stat.st_size); + close(fd); + + *size = file_stat.st_size; + *buffer = file_buffer; + + return 0; + +error: + free(file_buffer); + close(fd); + + return ret; +} + static int parse_string_arg(const char *key __rte_unused, const char *value, void *extra_args) { @@ -736,7 +792,7 @@ cn10k_ml_fw_load(struct cn10k_ml_dev *mldev) { const struct plt_memzone *mz; struct cn10k_ml_fw *fw; - void *fw_buffer = NULL; + char *fw_buffer = NULL; uint64_t mz_size = 0; uint64_t fw_size = 0; int ret = 0; @@ -746,7 +802,7 @@ cn10k_ml_fw_load(struct cn10k_ml_dev *mldev) if (roc_env_is_emulator() || roc_env_is_hw()) { /* Read firmware image to a buffer */ - ret = rte_firmware_read(fw->path, &fw_buffer, &fw_size); + ret = ml_read_file(fw->path, &fw_size, &fw_buffer); if ((ret < 0) || (fw_buffer == NULL)) { plt_err("Unable to read firmware data: %s\n", fw->path); return ret; @@ -763,7 +819,7 
@@ cn10k_ml_fw_load(struct cn10k_ml_dev *mldev) mz = plt_memzone_reserve_aligned(FW_MEMZONE_NAME, mz_size, 0, ML_CN10K_ALIGN_SIZE); if (mz == NULL) { plt_err("plt_memzone_reserve failed : %s", FW_MEMZONE_NAME); - free(fw_buffer); + rte_free(fw_buffer); return -ENOMEM; } fw->req = mz->addr; @@ -780,7 +836,7 @@ cn10k_ml_fw_load(struct cn10k_ml_dev *mldev) if (roc_env_is_emulator() || roc_env_is_hw()) { fw->data = PLT_PTR_ADD(mz->addr, sizeof(struct cn10k_ml_req)); ret = cn10k_ml_fw_load_cn10ka(fw, fw_buffer, fw_size); - free(fw_buffer); + rte_free(fw_buffer); } else if (roc_env_is_asim()) { fw->data = NULL; ret = cn10k_ml_fw_load_asim(fw);
From patchwork Wed Sep 20 07:24:54 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131678
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 03/34] ml/cnxk: add generic cnxk device structure
Date: Wed, 20 Sep 2023 00:24:54 -0700
Message-ID: <20230920072528.14185-4-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Introduce a generic
cnxk device structure. This structure is a top level device structure for the driver, which would encapsulate the target / platform specific device structure. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.c | 315 ++++++++++---------- drivers/ml/cnxk/cn10k_ml_dev.h | 47 +-- drivers/ml/cnxk/cn10k_ml_model.c | 14 +- drivers/ml/cnxk/cn10k_ml_model.h | 8 +- drivers/ml/cnxk/cn10k_ml_ocm.c | 56 ++-- drivers/ml/cnxk/cn10k_ml_ops.c | 494 +++++++++++++++++-------------- drivers/ml/cnxk/cnxk_ml_dev.c | 11 + drivers/ml/cnxk/cnxk_ml_dev.h | 58 ++++ drivers/ml/cnxk/meson.build | 2 + 9 files changed, 562 insertions(+), 443 deletions(-) create mode 100644 drivers/ml/cnxk/cnxk_ml_dev.c create mode 100644 drivers/ml/cnxk/cnxk_ml_dev.h diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c index b7e6ed9a00..367fb7014c 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.c +++ b/drivers/ml/cnxk/cn10k_ml_dev.c @@ -15,13 +15,15 @@ #include #include -#include - #include +#include + #include "cn10k_ml_dev.h" #include "cn10k_ml_ops.h" +#include "cnxk_ml_dev.h" + #define CN10K_ML_FW_PATH "fw_path" #define CN10K_ML_FW_ENABLE_DPE_WARNINGS "enable_dpe_warnings" #define CN10K_ML_FW_REPORT_DPE_WARNINGS "report_dpe_warnings" @@ -63,9 +65,6 @@ static const char *const valid_args[] = {CN10K_ML_FW_PATH, /* Supported OCM page sizes: 1KB, 2KB, 4KB, 8KB and 16KB */ static const int valid_ocm_page_size[] = {1024, 2048, 4096, 8192, 16384}; -/* Dummy operations for ML device */ -struct rte_ml_dev_ops ml_dev_dummy_ops = {0}; - static int ml_read_file(const char *file, size_t *size, char **buffer) { @@ -146,7 +145,7 @@ parse_integer_arg(const char *key __rte_unused, const char *value, void *extra_a } static int -cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mldev) +cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10k_mldev) { bool enable_dpe_warnings_set = false; bool report_dpe_warnings_set = false; @@ -183,7 +182,7 
@@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 	if (rte_kvargs_count(kvlist, CN10K_ML_FW_ENABLE_DPE_WARNINGS) == 1) {
 		ret = rte_kvargs_process(kvlist, CN10K_ML_FW_ENABLE_DPE_WARNINGS,
-					 &parse_integer_arg, &mldev->fw.enable_dpe_warnings);
+					 &parse_integer_arg, &cn10k_mldev->fw.enable_dpe_warnings);
 		if (ret < 0) {
 			plt_err("Error processing arguments, key = %s\n",
 				CN10K_ML_FW_ENABLE_DPE_WARNINGS);
@@ -195,7 +194,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 	if (rte_kvargs_count(kvlist, CN10K_ML_FW_REPORT_DPE_WARNINGS) == 1) {
 		ret = rte_kvargs_process(kvlist, CN10K_ML_FW_REPORT_DPE_WARNINGS,
-					 &parse_integer_arg, &mldev->fw.report_dpe_warnings);
+					 &parse_integer_arg, &cn10k_mldev->fw.report_dpe_warnings);
 		if (ret < 0) {
 			plt_err("Error processing arguments, key = %s\n",
 				CN10K_ML_FW_REPORT_DPE_WARNINGS);
@@ -207,7 +206,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 	if (rte_kvargs_count(kvlist, CN10K_ML_DEV_CACHE_MODEL_DATA) == 1) {
 		ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_CACHE_MODEL_DATA, &parse_integer_arg,
-					 &mldev->cache_model_data);
+					 &cn10k_mldev->cache_model_data);
 		if (ret < 0) {
 			plt_err("Error processing arguments, key = %s\n",
 				CN10K_ML_DEV_CACHE_MODEL_DATA);
@@ -230,7 +229,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 	if (rte_kvargs_count(kvlist, CN10K_ML_DEV_HW_QUEUE_LOCK) == 1) {
 		ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_HW_QUEUE_LOCK, &parse_integer_arg,
-					 &mldev->hw_queue_lock);
+					 &cn10k_mldev->hw_queue_lock);
 		if (ret < 0) {
 			plt_err("Error processing arguments, key = %s\n",
 				CN10K_ML_DEV_HW_QUEUE_LOCK);
@@ -242,7 +241,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 	if (rte_kvargs_count(kvlist, CN10K_ML_OCM_PAGE_SIZE) == 1) {
 		ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_PAGE_SIZE, &parse_integer_arg,
-					 &mldev->ocm_page_size);
+					 &cn10k_mldev->ocm_page_size);
 		if (ret < 0) {
 			plt_err("Error processing arguments, key = %s\n", CN10K_ML_OCM_PAGE_SIZE);
 			ret = -EINVAL;
@@ -253,49 +252,53 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 check_args:
 	if (!fw_path_set)
-		mldev->fw.path = CN10K_ML_FW_PATH_DEFAULT;
+		cn10k_mldev->fw.path = CN10K_ML_FW_PATH_DEFAULT;
 	else
-		mldev->fw.path = fw_path;
-	plt_info("ML: %s = %s", CN10K_ML_FW_PATH, mldev->fw.path);
+		cn10k_mldev->fw.path = fw_path;
+	plt_info("ML: %s = %s", CN10K_ML_FW_PATH, cn10k_mldev->fw.path);
 
 	if (!enable_dpe_warnings_set) {
-		mldev->fw.enable_dpe_warnings = CN10K_ML_FW_ENABLE_DPE_WARNINGS_DEFAULT;
+		cn10k_mldev->fw.enable_dpe_warnings = CN10K_ML_FW_ENABLE_DPE_WARNINGS_DEFAULT;
 	} else {
-		if ((mldev->fw.enable_dpe_warnings < 0) || (mldev->fw.enable_dpe_warnings > 1)) {
+		if ((cn10k_mldev->fw.enable_dpe_warnings < 0) ||
+		    (cn10k_mldev->fw.enable_dpe_warnings > 1)) {
 			plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
-				mldev->fw.enable_dpe_warnings);
+				cn10k_mldev->fw.enable_dpe_warnings);
 			ret = -EINVAL;
 			goto exit;
 		}
 	}
-	plt_info("ML: %s = %d", CN10K_ML_FW_ENABLE_DPE_WARNINGS, mldev->fw.enable_dpe_warnings);
+	plt_info("ML: %s = %d", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
+		 cn10k_mldev->fw.enable_dpe_warnings);
 
 	if (!report_dpe_warnings_set) {
-		mldev->fw.report_dpe_warnings = CN10K_ML_FW_REPORT_DPE_WARNINGS_DEFAULT;
+		cn10k_mldev->fw.report_dpe_warnings = CN10K_ML_FW_REPORT_DPE_WARNINGS_DEFAULT;
 	} else {
-		if ((mldev->fw.report_dpe_warnings < 0) || (mldev->fw.report_dpe_warnings > 1)) {
+		if ((cn10k_mldev->fw.report_dpe_warnings < 0) ||
+		    (cn10k_mldev->fw.report_dpe_warnings > 1)) {
 			plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_REPORT_DPE_WARNINGS,
-				mldev->fw.report_dpe_warnings);
+				cn10k_mldev->fw.report_dpe_warnings);
 			ret = -EINVAL;
 			goto exit;
 		}
 	}
-	plt_info("ML: %s = %d", CN10K_ML_FW_REPORT_DPE_WARNINGS, mldev->fw.report_dpe_warnings);
+	plt_info("ML: %s = %d", CN10K_ML_FW_REPORT_DPE_WARNINGS,
+		 cn10k_mldev->fw.report_dpe_warnings);
 
 	if (!cache_model_data_set) {
-		mldev->cache_model_data = CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT;
+		cn10k_mldev->cache_model_data = CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT;
 	} else {
-		if ((mldev->cache_model_data < 0) || (mldev->cache_model_data > 1)) {
+		if ((cn10k_mldev->cache_model_data < 0) || (cn10k_mldev->cache_model_data > 1)) {
 			plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_CACHE_MODEL_DATA,
-				mldev->cache_model_data);
+				cn10k_mldev->cache_model_data);
 			ret = -EINVAL;
 			goto exit;
 		}
 	}
-	plt_info("ML: %s = %d", CN10K_ML_DEV_CACHE_MODEL_DATA, mldev->cache_model_data);
+	plt_info("ML: %s = %d", CN10K_ML_DEV_CACHE_MODEL_DATA, cn10k_mldev->cache_model_data);
 
 	if (!ocm_alloc_mode_set) {
-		mldev->ocm.alloc_mode = CN10K_ML_OCM_ALLOC_MODE_DEFAULT;
+		cn10k_mldev->ocm.alloc_mode = CN10K_ML_OCM_ALLOC_MODE_DEFAULT;
 	} else {
 		if (!((strcmp(ocm_alloc_mode, "lowest") == 0) ||
 		      (strcmp(ocm_alloc_mode, "largest") == 0))) {
@@ -304,47 +307,47 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
 			ret = -EINVAL;
 			goto exit;
 		}
-		mldev->ocm.alloc_mode = ocm_alloc_mode;
+		cn10k_mldev->ocm.alloc_mode = ocm_alloc_mode;
 	}
-	plt_info("ML: %s = %s", CN10K_ML_OCM_ALLOC_MODE, mldev->ocm.alloc_mode);
+	plt_info("ML: %s = %s", CN10K_ML_OCM_ALLOC_MODE, cn10k_mldev->ocm.alloc_mode);
 
 	if (!hw_queue_lock_set) {
-		mldev->hw_queue_lock = CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT;
+		cn10k_mldev->hw_queue_lock = CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT;
 	} else {
-		if ((mldev->hw_queue_lock < 0) || (mldev->hw_queue_lock > 1)) {
+		if ((cn10k_mldev->hw_queue_lock < 0) || (cn10k_mldev->hw_queue_lock > 1)) {
 			plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_HW_QUEUE_LOCK,
-				mldev->hw_queue_lock);
+				cn10k_mldev->hw_queue_lock);
 			ret = -EINVAL;
 			goto exit;
 		}
 	}
-	plt_info("ML: %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK, mldev->hw_queue_lock);
+	plt_info("ML: %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK, cn10k_mldev->hw_queue_lock);
 
 	if (!ocm_page_size_set) {
-		mldev->ocm_page_size = CN10K_ML_OCM_PAGE_SIZE_DEFAULT;
+		cn10k_mldev->ocm_page_size = CN10K_ML_OCM_PAGE_SIZE_DEFAULT;
 	} else {
-		if (mldev->ocm_page_size < 0) {
+		if (cn10k_mldev->ocm_page_size < 0) {
 			plt_err("Invalid argument, %s = %d\n", CN10K_ML_OCM_PAGE_SIZE,
-				mldev->ocm_page_size);
+				cn10k_mldev->ocm_page_size);
 			ret = -EINVAL;
 			goto exit;
 		}
 
 		found = false;
 		for (i = 0; i < PLT_DIM(valid_ocm_page_size); i++) {
-			if (mldev->ocm_page_size == valid_ocm_page_size[i]) {
+			if (cn10k_mldev->ocm_page_size == valid_ocm_page_size[i]) {
 				found = true;
 				break;
 			}
 		}
 
 		if (!found) {
-			plt_err("Unsupported ocm_page_size = %d\n", mldev->ocm_page_size);
+			plt_err("Unsupported ocm_page_size = %d\n", cn10k_mldev->ocm_page_size);
 			ret = -EINVAL;
 			goto exit;
 		}
 	}
-	plt_info("ML: %s = %d", CN10K_ML_OCM_PAGE_SIZE, mldev->ocm_page_size);
+	plt_info("ML: %s = %d", CN10K_ML_OCM_PAGE_SIZE, cn10k_mldev->ocm_page_size);
 
 exit:
 	rte_kvargs_free(kvlist);
@@ -356,7 +359,8 @@ static int
 cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 {
 	struct rte_ml_dev_pmd_init_params init_params;
-	struct cn10k_ml_dev *mldev;
+	struct cn10k_ml_dev *cn10k_mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	char name[RTE_ML_STR_MAX];
 	struct rte_ml_dev *dev;
 	int ret;
@@ -364,7 +368,7 @@ cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_de
 	PLT_SET_USED(pci_drv);
 
 	init_params = (struct rte_ml_dev_pmd_init_params){
-		.socket_id = rte_socket_id(), .private_data_size = sizeof(struct cn10k_ml_dev)};
+		.socket_id = rte_socket_id(), .private_data_size = sizeof(struct cnxk_ml_dev)};
 
 	ret = roc_plt_init();
 	if (ret < 0) {
@@ -380,18 +384,20 @@ cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_de
 	}
 
 	/* Get private data space allocated */
-	mldev = dev->data->dev_private;
+	cnxk_mldev = dev->data->dev_private;
+	cnxk_mldev->mldev = dev;
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		mldev->roc.pci_dev = pci_dev;
+		cn10k_mldev->roc.pci_dev = pci_dev;
 
-		ret = cn10k_mldev_parse_devargs(dev->device->devargs, mldev);
+		ret = cn10k_mldev_parse_devargs(dev->device->devargs, cn10k_mldev);
 		if (ret) {
 			plt_err("Failed to parse devargs ret = %d", ret);
 			goto pmd_destroy;
 		}
 
-		ret = roc_ml_dev_init(&mldev->roc);
+		ret = roc_ml_dev_init(&cn10k_mldev->roc);
 		if (ret) {
 			plt_err("Failed to initialize ML ROC, ret = %d", ret);
 			goto pmd_destroy;
@@ -407,7 +413,7 @@ cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_de
 	dev->dequeue_burst = NULL;
 	dev->op_error_get = NULL;
 
-	mldev->state = ML_CN10K_DEV_STATE_PROBED;
+	cnxk_mldev->state = ML_CNXK_DEV_STATE_PROBED;
 
 	return 0;
 
@@ -424,7 +430,7 @@ cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_de
 static int
 cn10k_ml_pci_remove(struct rte_pci_device *pci_dev)
 {
-	struct cn10k_ml_dev *mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	char name[RTE_ML_STR_MAX];
 	struct rte_ml_dev *dev;
 	int ret;
@@ -439,8 +445,8 @@ cn10k_ml_pci_remove(struct rte_pci_device *pci_dev)
 		return -ENODEV;
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		mldev = dev->data->dev_private;
-		ret = roc_ml_dev_fini(&mldev->roc);
+		cnxk_mldev = dev->data->dev_private;
+		ret = roc_ml_dev_fini(&cnxk_mldev->cn10k_mldev.roc);
 		if (ret)
 			return ret;
 	}
@@ -486,45 +492,45 @@ cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw)
 static int
 cn10k_ml_fw_load_asim(struct cn10k_ml_fw *fw)
 {
-	struct cn10k_ml_dev *mldev;
+	struct cn10k_ml_dev *cn10k_mldev;
 	uint64_t timeout_cycle;
 	uint64_t reg_val64;
 	bool timeout;
 	int ret = 0;
 
-	mldev = fw->mldev;
+	cn10k_mldev = fw->cn10k_mldev;
 
 	/* Reset HEAD and TAIL debug pointer registers */
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C0);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C0);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C1);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C1);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C0);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C1);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C0);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C0);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C1);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C1);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C0);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C1);
 
 	/* Set ML_MLR_BASE to base IOVA of the ML region in LLC/DRAM. */
 	reg_val64 = rte_eal_get_baseaddr();
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_MLR_BASE);
-	plt_ml_dbg("ML_MLR_BASE = 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_MLR_BASE));
-	roc_ml_reg_save(&mldev->roc, ML_MLR_BASE);
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_MLR_BASE);
+	plt_ml_dbg("ML_MLR_BASE = 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_MLR_BASE));
+	roc_ml_reg_save(&cn10k_mldev->roc, ML_MLR_BASE);
 
 	/* Update FW load completion structure */
 	fw->req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&fw->req->status);
 	fw->req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_LOAD;
-	fw->req->jd.hdr.result = roc_ml_addr_ap2mlip(&mldev->roc, &fw->req->result);
+	fw->req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &fw->req->result);
 	fw->req->jd.fw_load.flags = cn10k_ml_fw_flags_get(fw);
-	plt_write64(ML_CN10K_POLL_JOB_START, &fw->req->status);
+	plt_write64(ML_CNXK_POLL_JOB_START, &fw->req->status);
 	plt_wmb();
 
 	/* Enqueue FW load through scratch registers */
 	timeout = true;
-	timeout_cycle = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz();
-	roc_ml_scratch_enqueue(&mldev->roc, &fw->req->jd);
+	timeout_cycle = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz();
+	roc_ml_scratch_enqueue(&cn10k_mldev->roc, &fw->req->jd);
 
 	plt_rmb();
 	do {
-		if (roc_ml_scratch_is_done_bit_set(&mldev->roc) &&
-		    (plt_read64(&fw->req->status) == ML_CN10K_POLL_JOB_FINISH)) {
+		if (roc_ml_scratch_is_done_bit_set(&cn10k_mldev->roc) &&
+		    (plt_read64(&fw->req->status) == ML_CNXK_POLL_JOB_FINISH)) {
 			timeout = false;
 			break;
 		}
@@ -536,11 +542,11 @@ cn10k_ml_fw_load_asim(struct cn10k_ml_fw *fw)
 	} else {
 		/* Set ML to disable new jobs */
 		reg_val64 = (ROC_ML_CFG_JD_SIZE | ROC_ML_CFG_MLIP_ENA);
-		roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
+		roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
 
 		/* Clear scratch registers */
-		roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_WORK_PTR);
-		roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_FW_CTRL);
+		roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_WORK_PTR);
+		roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_FW_CTRL);
 
 		if (timeout) {
 			plt_err("Firmware load timeout");
@@ -554,14 +560,14 @@ cn10k_ml_fw_load_asim(struct cn10k_ml_fw *fw)
 	}
 
 	/* Reset scratch registers */
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_FW_CTRL);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_WORK_PTR);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_FW_CTRL);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_WORK_PTR);
 
 	/* Disable job execution, to be enabled in start */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG);
 	reg_val64 &= ~ROC_ML_CFG_ENA;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
-	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
+	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG));
 
 	return ret;
 }
@@ -571,7 +577,7 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 {
 	union ml_a35_0_rst_vector_base_s a35_0_rst_vector_base;
 	union ml_a35_0_rst_vector_base_s a35_1_rst_vector_base;
-	struct cn10k_ml_dev *mldev;
+	struct cn10k_ml_dev *cn10k_mldev;
 	uint64_t timeout_cycle;
 	uint64_t reg_val64;
 	uint32_t reg_val32;
@@ -580,24 +586,24 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 	int ret = 0;
 	uint8_t i;
 
-	mldev = fw->mldev;
+	cn10k_mldev = fw->cn10k_mldev;
 
 	/* Reset HEAD and TAIL debug pointer registers */
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C0);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C0);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C1);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C1);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C0);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C1);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C0);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C0);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C1);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C1);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C0);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_EXCEPTION_SP_C1);
 
 	/* (1) Write firmware images for ACC's two A35 cores to the ML region in LLC / DRAM. */
 	rte_memcpy(PLT_PTR_ADD(fw->data, FW_LINKER_OFFSET), buffer, size);
 
 	/* (2) Set ML(0)_MLR_BASE = Base IOVA of the ML region in LLC/DRAM. */
 	reg_val64 = PLT_PTR_SUB_U64_CAST(fw->data, rte_eal_get_baseaddr());
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_MLR_BASE);
-	plt_ml_dbg("ML_MLR_BASE => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_MLR_BASE));
-	roc_ml_reg_save(&mldev->roc, ML_MLR_BASE);
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_MLR_BASE);
+	plt_ml_dbg("ML_MLR_BASE => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_MLR_BASE));
+	roc_ml_reg_save(&cn10k_mldev->roc, ML_MLR_BASE);
 
 	/* (3) Set ML(0)_AXI_BRIDGE_CTRL(1) = 0x184003 to remove back-pressure check on DMA AXI
 	 * bridge.
@@ -605,9 +611,9 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 	reg_val64 = (ROC_ML_AXI_BRIDGE_CTRL_AXI_RESP_CTRL |
		     ROC_ML_AXI_BRIDGE_CTRL_BRIDGE_CTRL_MODE | ROC_ML_AXI_BRIDGE_CTRL_NCB_WR_BLK |
		     ROC_ML_AXI_BRIDGE_CTRL_FORCE_WRESP_OK | ROC_ML_AXI_BRIDGE_CTRL_FORCE_RRESP_OK);
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_AXI_BRIDGE_CTRL(1));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_AXI_BRIDGE_CTRL(1));
 	plt_ml_dbg("ML_AXI_BRIDGE_CTRL(1) => 0x%016lx",
-		   roc_ml_reg_read64(&mldev->roc, ML_AXI_BRIDGE_CTRL(1)));
+		   roc_ml_reg_read64(&cn10k_mldev->roc, ML_AXI_BRIDGE_CTRL(1)));
 
 	/* (4) Set ML(0)_ANB(0..2)_BACKP_DISABLE = 0x3 to remove back-pressure on the AXI to NCB
 	 * bridges.
@@ -615,9 +621,9 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 	for (i = 0; i < ML_ANBX_NR; i++) {
 		reg_val64 = (ROC_ML_ANBX_BACKP_DISABLE_EXTMSTR_B_BACKP_DISABLE |
			     ROC_ML_ANBX_BACKP_DISABLE_EXTMSTR_R_BACKP_DISABLE);
-		roc_ml_reg_write64(&mldev->roc, reg_val64, ML_ANBX_BACKP_DISABLE(i));
+		roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_ANBX_BACKP_DISABLE(i));
 		plt_ml_dbg("ML_ANBX_BACKP_DISABLE(%u) => 0x%016lx", i,
-			   roc_ml_reg_read64(&mldev->roc, ML_ANBX_BACKP_DISABLE(i)));
+			   roc_ml_reg_read64(&cn10k_mldev->roc, ML_ANBX_BACKP_DISABLE(i)));
 	}
 
 	/* (5) Set ML(0)_ANB(0..2)_NCBI_P_OVR = 0x3000 and ML(0)_ANB(0..2)_NCBI_NP_OVR = 0x3000 to
@@ -626,39 +632,40 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 	for (i = 0; i < ML_ANBX_NR; i++) {
 		reg_val64 = (ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_NS_OVR |
			     ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_NS_OVR_VLD);
-		roc_ml_reg_write64(&mldev->roc, reg_val64, ML_ANBX_NCBI_P_OVR(i));
+		roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_ANBX_NCBI_P_OVR(i));
 		plt_ml_dbg("ML_ANBX_NCBI_P_OVR(%u) => 0x%016lx", i,
-			   roc_ml_reg_read64(&mldev->roc, ML_ANBX_NCBI_P_OVR(i)));
+			   roc_ml_reg_read64(&cn10k_mldev->roc, ML_ANBX_NCBI_P_OVR(i)));
 
 		reg_val64 |= (ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_NS_OVR |
			      ML_ANBX_NCBI_NP_OVR_ANB_NCBI_NP_NS_OVR_VLD);
-		roc_ml_reg_write64(&mldev->roc, reg_val64, ML_ANBX_NCBI_NP_OVR(i));
+		roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_ANBX_NCBI_NP_OVR(i));
 		plt_ml_dbg("ML_ANBX_NCBI_NP_OVR(%u) => 0x%016lx", i,
-			   roc_ml_reg_read64(&mldev->roc, ML_ANBX_NCBI_NP_OVR(i)));
+			   roc_ml_reg_read64(&cn10k_mldev->roc, ML_ANBX_NCBI_NP_OVR(i)));
 	}
 
 	/* (6) Set ML(0)_CFG[MLIP_CLK_FORCE] = 1, to force turning on the MLIP clock. */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG);
 	reg_val64 |= ROC_ML_CFG_MLIP_CLK_FORCE;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
-	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
+	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG));
 
 	/* (7) Set ML(0)_JOB_MGR_CTRL[STALL_ON_IDLE] = 0, to make sure the boot request is accepted
 	 * when there is no job in the command queue.
 	 */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_JOB_MGR_CTRL);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_JOB_MGR_CTRL);
 	reg_val64 &= ~ROC_ML_JOB_MGR_CTRL_STALL_ON_IDLE;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_JOB_MGR_CTRL);
-	plt_ml_dbg("ML_JOB_MGR_CTRL => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_JOB_MGR_CTRL));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_JOB_MGR_CTRL);
+	plt_ml_dbg("ML_JOB_MGR_CTRL => 0x%016lx",
+		   roc_ml_reg_read64(&cn10k_mldev->roc, ML_JOB_MGR_CTRL));
 
 	/* (8) Set ML(0)_CFG[ENA] = 0 and ML(0)_CFG[MLIP_ENA] = 1 to bring MLIP out of reset while
 	 * keeping the job manager disabled.
 	 */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG);
 	reg_val64 |= ROC_ML_CFG_MLIP_ENA;
 	reg_val64 &= ~ROC_ML_CFG_ENA;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
-	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
+	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG));
 
 	/* (9) Wait at least 70 coprocessor clock cycles. */
 	plt_delay_us(FW_WAIT_CYCLES);
@@ -669,53 +676,57 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 	 * AXI outbound address divided by 4. Read after write.
 	 */
 	offset = PLT_PTR_ADD_U64_CAST(
-		fw->data, FW_LINKER_OFFSET - roc_ml_reg_read64(&mldev->roc, ML_MLR_BASE));
+		fw->data, FW_LINKER_OFFSET - roc_ml_reg_read64(&cn10k_mldev->roc, ML_MLR_BASE));
 	a35_0_rst_vector_base.s.addr = (offset + ML_AXI_START_ADDR) / 4;
 	a35_1_rst_vector_base.s.addr = (offset + ML_AXI_START_ADDR) / 4;
 
-	roc_ml_reg_write32(&mldev->roc, a35_0_rst_vector_base.w.w0, ML_A35_0_RST_VECTOR_BASE_W(0));
-	reg_val32 = roc_ml_reg_read32(&mldev->roc, ML_A35_0_RST_VECTOR_BASE_W(0));
+	roc_ml_reg_write32(&cn10k_mldev->roc, a35_0_rst_vector_base.w.w0,
+			   ML_A35_0_RST_VECTOR_BASE_W(0));
+	reg_val32 = roc_ml_reg_read32(&cn10k_mldev->roc, ML_A35_0_RST_VECTOR_BASE_W(0));
 	plt_ml_dbg("ML_A35_0_RST_VECTOR_BASE_W(0) => 0x%08x", reg_val32);
 
-	roc_ml_reg_write32(&mldev->roc, a35_0_rst_vector_base.w.w1, ML_A35_0_RST_VECTOR_BASE_W(1));
-	reg_val32 = roc_ml_reg_read32(&mldev->roc, ML_A35_0_RST_VECTOR_BASE_W(1));
+	roc_ml_reg_write32(&cn10k_mldev->roc, a35_0_rst_vector_base.w.w1,
+			   ML_A35_0_RST_VECTOR_BASE_W(1));
+	reg_val32 = roc_ml_reg_read32(&cn10k_mldev->roc, ML_A35_0_RST_VECTOR_BASE_W(1));
 	plt_ml_dbg("ML_A35_0_RST_VECTOR_BASE_W(1) => 0x%08x", reg_val32);
 
-	roc_ml_reg_write32(&mldev->roc, a35_1_rst_vector_base.w.w0, ML_A35_1_RST_VECTOR_BASE_W(0));
-	reg_val32 = roc_ml_reg_read32(&mldev->roc, ML_A35_1_RST_VECTOR_BASE_W(0));
+	roc_ml_reg_write32(&cn10k_mldev->roc, a35_1_rst_vector_base.w.w0,
+			   ML_A35_1_RST_VECTOR_BASE_W(0));
+	reg_val32 = roc_ml_reg_read32(&cn10k_mldev->roc, ML_A35_1_RST_VECTOR_BASE_W(0));
 	plt_ml_dbg("ML_A35_1_RST_VECTOR_BASE_W(0) => 0x%08x", reg_val32);
 
-	roc_ml_reg_write32(&mldev->roc, a35_1_rst_vector_base.w.w1, ML_A35_1_RST_VECTOR_BASE_W(1));
-	reg_val32 = roc_ml_reg_read32(&mldev->roc, ML_A35_1_RST_VECTOR_BASE_W(1));
+	roc_ml_reg_write32(&cn10k_mldev->roc, a35_1_rst_vector_base.w.w1,
+			   ML_A35_1_RST_VECTOR_BASE_W(1));
+	reg_val32 = roc_ml_reg_read32(&cn10k_mldev->roc, ML_A35_1_RST_VECTOR_BASE_W(1));
 	plt_ml_dbg("ML_A35_1_RST_VECTOR_BASE_W(1) => 0x%08x", reg_val32);
 
 	/* (11) Clear MLIP's ML(0)_SW_RST_CTRL[ACC_RST]. This will bring the ACC cores and other
 	 * MLIP components out of reset. The cores will execute firmware from the ML region as
 	 * written in step 1.
 	 */
-	reg_val32 = roc_ml_reg_read32(&mldev->roc, ML_SW_RST_CTRL);
+	reg_val32 = roc_ml_reg_read32(&cn10k_mldev->roc, ML_SW_RST_CTRL);
 	reg_val32 &= ~ROC_ML_SW_RST_CTRL_ACC_RST;
-	roc_ml_reg_write32(&mldev->roc, reg_val32, ML_SW_RST_CTRL);
-	reg_val32 = roc_ml_reg_read32(&mldev->roc, ML_SW_RST_CTRL);
+	roc_ml_reg_write32(&cn10k_mldev->roc, reg_val32, ML_SW_RST_CTRL);
+	reg_val32 = roc_ml_reg_read32(&cn10k_mldev->roc, ML_SW_RST_CTRL);
 	plt_ml_dbg("ML_SW_RST_CTRL => 0x%08x", reg_val32);
 
 	/* (12) Wait for notification from firmware that ML is ready for job execution. */
 	fw->req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&fw->req->status);
 	fw->req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_LOAD;
-	fw->req->jd.hdr.result = roc_ml_addr_ap2mlip(&mldev->roc, &fw->req->result);
+	fw->req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &fw->req->result);
 	fw->req->jd.fw_load.flags = cn10k_ml_fw_flags_get(fw);
-	plt_write64(ML_CN10K_POLL_JOB_START, &fw->req->status);
+	plt_write64(ML_CNXK_POLL_JOB_START, &fw->req->status);
 	plt_wmb();
 
 	/* Enqueue FW load through scratch registers */
 	timeout = true;
-	timeout_cycle = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz();
-	roc_ml_scratch_enqueue(&mldev->roc, &fw->req->jd);
+	timeout_cycle = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz();
+	roc_ml_scratch_enqueue(&cn10k_mldev->roc, &fw->req->jd);
 
 	plt_rmb();
 	do {
-		if (roc_ml_scratch_is_done_bit_set(&mldev->roc) &&
-		    (plt_read64(&fw->req->status) == ML_CN10K_POLL_JOB_FINISH)) {
+		if (roc_ml_scratch_is_done_bit_set(&cn10k_mldev->roc) &&
+		    (plt_read64(&fw->req->status) == ML_CNXK_POLL_JOB_FINISH)) {
 			timeout = false;
 			break;
 		}
@@ -727,11 +738,11 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 	} else {
 		/* Set ML to disable new jobs */
 		reg_val64 = (ROC_ML_CFG_JD_SIZE | ROC_ML_CFG_MLIP_ENA);
-		roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
+		roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
 
 		/* Clear scratch registers */
-		roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_WORK_PTR);
-		roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_FW_CTRL);
+		roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_WORK_PTR);
+		roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_FW_CTRL);
 
 		if (timeout) {
 			plt_err("Firmware load timeout");
@@ -747,49 +758,51 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size)
 	/* (13) Set ML(0)_JOB_MGR_CTRL[STALL_ON_IDLE] = 0x1; this is needed to shut down the MLIP
 	 * clock when there are no more jobs to process.
 	 */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_JOB_MGR_CTRL);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_JOB_MGR_CTRL);
 	reg_val64 |= ROC_ML_JOB_MGR_CTRL_STALL_ON_IDLE;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_JOB_MGR_CTRL);
-	plt_ml_dbg("ML_JOB_MGR_CTRL => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_JOB_MGR_CTRL));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_JOB_MGR_CTRL);
+	plt_ml_dbg("ML_JOB_MGR_CTRL => 0x%016lx",
+		   roc_ml_reg_read64(&cn10k_mldev->roc, ML_JOB_MGR_CTRL));
 
 	/* (14) Set ML(0)_CFG[MLIP_CLK_FORCE] = 0; the MLIP clock will be turned on/off based on job
 	 * activities.
 	 */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG);
 	reg_val64 &= ~ROC_ML_CFG_MLIP_CLK_FORCE;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
-	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
+	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG));
 
 	/* (15) Set ML(0)_CFG[ENA] to enable ML job execution. */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG);
 	reg_val64 |= ROC_ML_CFG_ENA;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
-	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
+	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG));
 
 	/* Reset scratch registers */
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_FW_CTRL);
-	roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_WORK_PTR);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_FW_CTRL);
+	roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_WORK_PTR);
 
 	/* Disable job execution, to be enabled in start */
-	reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG);
+	reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG);
 	reg_val64 &= ~ROC_ML_CFG_ENA;
-	roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG);
-	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG));
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG);
+	plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG));
 
 	/* Additional fixes: Set RO bit to fix O2D DMA bandwidth issue on cn10ka */
 	for (i = 0; i < ML_ANBX_NR; i++) {
-		reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_ANBX_NCBI_P_OVR(i));
+		reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_ANBX_NCBI_P_OVR(i));
 		reg_val64 |= (ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_RO_OVR |
			      ML_ANBX_NCBI_P_OVR_ANB_NCBI_P_RO_OVR_VLD);
-		roc_ml_reg_write64(&mldev->roc, reg_val64, ML_ANBX_NCBI_P_OVR(i));
+		roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_ANBX_NCBI_P_OVR(i));
 	}
 
 	return ret;
 }
 
 int
-cn10k_ml_fw_load(struct cn10k_ml_dev *mldev)
+cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev)
 {
+	struct cn10k_ml_dev *cn10k_mldev;
 	const struct plt_memzone *mz;
 	struct cn10k_ml_fw *fw;
 	char *fw_buffer = NULL;
@@ -797,8 +810,9 @@ cn10k_ml_fw_load(struct cn10k_ml_dev *mldev)
 	uint64_t fw_size = 0;
 	int ret = 0;
 
-	fw = &mldev->fw;
-	fw->mldev = mldev;
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+	fw = &cn10k_mldev->fw;
+	fw->cn10k_mldev = cn10k_mldev;
 
 	if (roc_env_is_emulator() || roc_env_is_hw()) {
 		/* Read firmware image to a buffer */
@@ -829,8 +843,8 @@ cn10k_ml_fw_load(struct cn10k_ml_dev *mldev)
 	memset(&fw->req->jd.fw_load.version[0], '\0', MLDEV_FIRMWARE_VERSION_LENGTH);
 
 	/* Reset device, if in active state */
-	if (roc_ml_mlip_is_enabled(&mldev->roc))
-		roc_ml_mlip_reset(&mldev->roc, true);
+	if (roc_ml_mlip_is_enabled(&cn10k_mldev->roc))
+		roc_ml_mlip_reset(&cn10k_mldev->roc, true);
 
 	/* Load firmware */
 	if (roc_env_is_emulator() || roc_env_is_hw()) {
@@ -843,22 +857,25 @@ cn10k_ml_fw_load(struct cn10k_ml_dev *mldev)
 	}
 
 	if (ret < 0)
-		cn10k_ml_fw_unload(mldev);
+		cn10k_ml_fw_unload(cnxk_mldev);
 
 	return ret;
 }
 
 void
-cn10k_ml_fw_unload(struct cn10k_ml_dev *mldev)
+cn10k_ml_fw_unload(struct cnxk_ml_dev *cnxk_mldev)
 {
+	struct cn10k_ml_dev *cn10k_mldev;
 	const struct plt_memzone *mz;
 	uint64_t reg_val;
 
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+
 	/* Disable and reset device */
-	reg_val = roc_ml_reg_read64(&mldev->roc, ML_CFG);
+	reg_val = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG);
 	reg_val &= ~ROC_ML_CFG_MLIP_ENA;
-	roc_ml_reg_write64(&mldev->roc, reg_val, ML_CFG);
-	roc_ml_mlip_reset(&mldev->roc, true);
+	roc_ml_reg_write64(&cn10k_mldev->roc, reg_val, ML_CFG);
+	roc_ml_mlip_reset(&cn10k_mldev->roc, true);
 
 	mz = plt_memzone_lookup(FW_MEMZONE_NAME);
 	if (mz != NULL)
diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h
index 4aaeecff03..f9da1548c4 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.h
+++ b/drivers/ml/cnxk/cn10k_ml_dev.h
@@ -9,6 +9,9 @@
 
 #include "cn10k_ml_ocm.h"
 
+/* Dummy Device ops */
+extern struct rte_ml_dev_ops ml_dev_dummy_ops;
+
 /* Marvell OCTEON CN10K ML PMD device name */
 #define MLDEV_NAME_CN10K_PMD ml_cn10k
 
@@ -36,17 +39,10 @@
 /* Maximum number of segments for IO data */
 #define ML_CN10K_MAX_SEGMENTS 1
 
-/* ML command timeout in seconds */
-#define ML_CN10K_CMD_TIMEOUT 5
-
 /* ML slow-path job flags */
 #define ML_CN10K_SP_FLAGS_OCM_NONRELOCATABLE BIT(0)
 #define ML_CN10K_SP_FLAGS_EXTENDED_LOAD_JD   BIT(1)
 
-/* Poll mode job state */
-#define ML_CN10K_POLL_JOB_START	 0
-#define ML_CN10K_POLL_JOB_FINISH 1
-
 /* Memory barrier macros */
 #if defined(RTE_ARCH_ARM)
 #define dmb_st ({ asm volatile("dmb st" : : : "memory"); })
@@ -56,6 +52,7 @@
 #define dsb_st
 #endif
 
+struct cnxk_ml_dev;
 struct cn10k_ml_req;
 struct cn10k_ml_qp;
 
@@ -68,21 +65,6 @@ enum cn10k_ml_job_type {
 	ML_CN10K_JOB_TYPE_FIRMWARE_SELFTEST,
 };
 
-/* Device configuration state enum */
-enum cn10k_ml_dev_state {
-	/* Probed and not configured */
-	ML_CN10K_DEV_STATE_PROBED = 0,
-
-	/* Configured */
-	ML_CN10K_DEV_STATE_CONFIGURED,
-
-	/* Started */
-	ML_CN10K_DEV_STATE_STARTED,
-
-	/* Closed */
-	ML_CN10K_DEV_STATE_CLOSED
-};
-
 /* Error types enumeration */
 enum cn10k_ml_error_etype {
 	/* 0x0 */ ML_ETYPE_NO_ERROR = 0, /* No error */
@@ -379,7 +361,7 @@ struct cn10k_ml_jd {
 /* ML firmware structure */
 struct cn10k_ml_fw {
 	/* Device reference */
-	struct cn10k_ml_dev *mldev;
+	struct cn10k_ml_dev *cn10k_mldev;
 
 	/* Firmware file path */
 	const char *path;
@@ -485,27 +467,12 @@ struct cn10k_ml_dev {
 	/* Device ROC */
 	struct roc_ml roc;
 
-	/* Configuration state */
-	enum cn10k_ml_dev_state state;
-
 	/* Firmware */
 	struct cn10k_ml_fw fw;
 
 	/* OCM info */
 	struct cn10k_ml_ocm ocm;
 
-	/* Number of models loaded */
-	uint16_t nb_models_loaded;
-
-	/* Number of models unloaded */
-	uint16_t nb_models_unloaded;
-
-	/* Number of models started */
-	uint16_t nb_models_started;
-
-	/* Number of models stopped */
-	uint16_t nb_models_stopped;
-
 	/* Extended stats data */
 	struct cn10k_ml_xstats xstats;
 
@@ -528,7 +495,7 @@ struct cn10k_ml_dev {
 };
 
 uint64_t cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw);
-int cn10k_ml_fw_load(struct cn10k_ml_dev *mldev);
-void cn10k_ml_fw_unload(struct cn10k_ml_dev *mldev);
+int cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev);
+void cn10k_ml_fw_unload(struct cnxk_ml_dev *cnxk_mldev);
 
 #endif /* _CN10K_ML_DEV_H_ */
diff --git a/drivers/ml/cnxk/cn10k_ml_model.c b/drivers/ml/cnxk/cn10k_ml_model.c
index e0b750cd8e..d146535866 100644
--- a/drivers/ml/cnxk/cn10k_ml_model.c
+++ b/drivers/ml/cnxk/cn10k_ml_model.c
@@ -10,6 +10,8 @@
 #include "cn10k_ml_model.h"
 #include "cn10k_ml_ocm.h"
 
+#include "cnxk_ml_dev.h"
+
 static enum rte_ml_io_type
 cn10k_ml_io_type_map(uint8_t type)
 {
@@ -461,7 +463,7 @@ cn10k_ml_model_addr_update(struct cn10k_ml_model *model, uint8_t *buffer, uint8_
 }
 
 int
-cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *mldev, uint16_t model_id, uint8_t *buffer,
+cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_id, uint8_t *buffer,
			       uint16_t *wb_pages, uint16_t *scratch_pages)
 {
 	struct cn10k_ml_model_metadata *metadata;
@@ -470,7 +472,7 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *mldev, uint16_t model_id, ui
 	uint64_t wb_size;
 
 	metadata = (struct cn10k_ml_model_metadata *)buffer;
-	ocm = &mldev->ocm;
+	ocm = &cn10k_mldev->ocm;
 
 	/* Assume wb_size is zero for non-relocatable models */
 	if (metadata->model.ocm_relocatable)
@@ -494,11 +496,11 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *mldev, uint16_t model_id, ui
		   scratch_size, *scratch_pages);
 
 	/* Check if the model can be loaded on OCM */
-	if ((*wb_pages + *scratch_pages) > mldev->ocm.num_pages) {
+	if ((*wb_pages + *scratch_pages) > cn10k_mldev->ocm.num_pages) {
		plt_err("Cannot create the model, OCM relocatable = %u",
			metadata->model.ocm_relocatable);
		plt_err("wb_pages (%u) + scratch_pages (%u) > %u", *wb_pages, *scratch_pages,
-			mldev->ocm.num_pages);
+			cn10k_mldev->ocm.num_pages);
		return -ENOMEM;
 	}
 
@@ -506,8 +508,8 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *mldev, uint16_t model_id, ui
	 * prevent the library from allocating the remaining space on the tile to other models.
	 */
 	if (!metadata->model.ocm_relocatable)
-		*scratch_pages =
-			PLT_MAX(PLT_U64_CAST(*scratch_pages), PLT_U64_CAST(mldev->ocm.num_pages));
+		*scratch_pages = PLT_MAX(PLT_U64_CAST(*scratch_pages),
+					 PLT_U64_CAST(cn10k_mldev->ocm.num_pages));
 
 	return 0;
 }
diff --git a/drivers/ml/cnxk/cn10k_ml_model.h b/drivers/ml/cnxk/cn10k_ml_model.h
index 4cc0744891..3128b28db7 100644
--- a/drivers/ml/cnxk/cn10k_ml_model.h
+++ b/drivers/ml/cnxk/cn10k_ml_model.h
@@ -13,6 +13,8 @@
 #include "cn10k_ml_ocm.h"
 #include "cn10k_ml_ops.h"
 
+struct cnxk_ml_dev;
+
 /* Model state */
 enum cn10k_ml_model_state {
 	ML_CN10K_MODEL_STATE_LOADED,
@@ -489,7 +491,7 @@ struct cn10k_ml_model_stats {
 /* Model Object */
 struct cn10k_ml_model {
 	/* Device reference */
-	struct cn10k_ml_dev *mldev;
+	struct cnxk_ml_dev *mldev;
 
 	/* Name */
 	char name[RTE_ML_STR_MAX];
@@ -537,8 +539,8 @@ int cn10k_ml_model_metadata_check(uint8_t *buffer, uint64_t size);
 void cn10k_ml_model_metadata_update(struct cn10k_ml_model_metadata *metadata);
 void cn10k_ml_model_addr_update(struct cn10k_ml_model *model, uint8_t *buffer,
				uint8_t *base_dma_addr);
-int cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *mldev, uint16_t model_id, uint8_t *buffer,
-				   uint16_t *wb_pages, uint16_t *scratch_pages);
+int cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_id,
+				   uint8_t *buffer, uint16_t *wb_pages, uint16_t *scratch_pages);
 void cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cn10k_ml_model *model);
 
 #endif /* _CN10K_ML_MODEL_H_ */
diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.c b/drivers/ml/cnxk/cn10k_ml_ocm.c
index 6fb0bb620e..aa376284d5 100644
--- a/drivers/ml/cnxk/cn10k_ml_ocm.c
+++ b/drivers/ml/cnxk/cn10k_ml_ocm.c
@@ -4,11 +4,13 @@
 
 #include
 
+#include
+
 #include "cn10k_ml_dev.h"
 #include "cn10k_ml_model.h"
 #include "cn10k_ml_ocm.h"
 
-#include "roc_api.h"
+#include "cnxk_ml_dev.h"
 
 /* OCM macros */
 #define BYTE_LEN 8
@@ -217,7 +219,8 @@ int
 cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t wb_pages,
			   uint16_t scratch_pages, uint64_t *tilemask)
 {
-	struct cn10k_ml_dev *mldev;
+	struct cn10k_ml_dev *cn10k_mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	struct cn10k_ml_ocm *ocm;
 
 	uint16_t used_scratch_pages_max;
@@ -236,8 +239,9 @@ cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t w
 	int max_slot_sz;
 	int page_id;
 
-	mldev = dev->data->dev_private;
-	ocm = &mldev->ocm;
+	cnxk_mldev = dev->data->dev_private;
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+	ocm = &cn10k_mldev->ocm;
 
 	if (num_tiles > ML_CN10K_OCM_NUMTILES) {
		plt_err("Invalid num_tiles = %u (> %u)", num_tiles, ML_CN10K_OCM_NUMTILES);
@@ -254,8 +258,8 @@ cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t w
 	tile_start = 0;
 	search_end_tile = ocm->num_tiles - num_tiles;
 
-	/* allocate for local ocm mask */
-	local_ocm_mask = rte_zmalloc("local_ocm_mask", mldev->ocm.mask_words, RTE_CACHE_LINE_SIZE);
+	/* Allocate for local ocm mask */
+	local_ocm_mask = rte_zmalloc("local_ocm_mask", ocm->mask_words, RTE_CACHE_LINE_SIZE);
 	if (local_ocm_mask == NULL) {
		plt_err("Unable to allocate memory for local_ocm_mask");
		return -1;
@@ -271,7 +275,7 @@ cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t w
			PLT_MAX(ocm->tile_ocm_info[tile_id].last_wb_page, used_last_wb_page_max);
 	}
 
-	memset(local_ocm_mask, 0, mldev->ocm.mask_words);
+	memset(local_ocm_mask, 0, ocm->mask_words);
 	for (tile_id = tile_start; tile_id < tile_start + num_tiles; tile_id++) {
		for (word_id = 0; word_id < ocm->mask_words; word_id++)
			local_ocm_mask[word_id] |= ocm->tile_ocm_info[tile_id].ocm_mask[word_id];
@@ -333,8 +337,9 @@ void
 cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint64_t tilemask,
			   int wb_page_start, uint16_t wb_pages, uint16_t scratch_pages)
 {
+	struct cn10k_ml_dev *cn10k_mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	struct cn10k_ml_model *model;
-	struct cn10k_ml_dev *mldev;
 	struct cn10k_ml_ocm *ocm;
 
 	int scratch_page_start;
@@ -345,8 +350,9 @@ cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint64_t t
 	int tile_id;
 	int page_id;
 
-	mldev = dev->data->dev_private;
-	ocm = &mldev->ocm;
+	cnxk_mldev = dev->data->dev_private;
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+	ocm = &cn10k_mldev->ocm;
 	model = dev->data->models[model_id];
 
 	/* Get first set bit, tile_start */
@@ -391,8 +397,9 @@ void
 cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id)
 {
 	struct cn10k_ml_model *local_model;
+	struct cn10k_ml_dev *cn10k_mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	struct cn10k_ml_model *model;
-	struct cn10k_ml_dev *mldev;
 	struct cn10k_ml_ocm *ocm;
 
 	int scratch_resize_pages;
@@ -404,8 +411,9 @@ cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id)
 	int page_id;
 	uint16_t i;
 
-	mldev = dev->data->dev_private;
-	ocm = &mldev->ocm;
+	cnxk_mldev = dev->data->dev_private;
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+	ocm = &cn10k_mldev->ocm;
 	model = dev->data->models[model_id];
 
 	/* Update OCM info for WB memory */
@@ -453,35 +461,37 @@ cn10k_ml_ocm_pagemask_to_str(struct cn10k_ml_ocm_tile_info *tile_info, uint16_t
 	char *p = str;
 	int word;
 
-	/* add prefix 0x */
+	/* Add prefix 0x */
 	*p++ = '0';
 	*p++ = 'x';
 
-	/* build one word at a time */
+	/* Build hex string */
 	for (word = nwords - 1; word >= 0; word--) {
		sprintf(p, "%02X", tile_info->ocm_mask[word]);
		p += 2;
 	}
 
-	/* terminate */
+	/* Terminate */
 	*p++ = 0;
 }
 
 void
 cn10k_ml_ocm_print(struct rte_ml_dev *dev, FILE *fp)
 {
-	char *str;
-	struct cn10k_ml_dev *mldev;
+	struct cn10k_ml_dev *cn10k_mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	struct cn10k_ml_ocm *ocm;
 
 	uint8_t tile_id;
 	uint8_t word_id;
 	int wb_pages;
+	char *str;
 
-	mldev = dev->data->dev_private;
-	ocm = &mldev->ocm;
+	cnxk_mldev = dev->data->dev_private;
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+	ocm = &cn10k_mldev->ocm;
 
-	/* nibbles + prefix '0x' */
-	str = rte_zmalloc("ocm_mask_str", mldev->ocm.num_pages / 4 + 2, RTE_CACHE_LINE_SIZE);
+	/* Nibbles + prefix '0x' */
+	str = rte_zmalloc("ocm_mask_str", ocm->num_pages / 4 + 2, RTE_CACHE_LINE_SIZE);
 	if (str == NULL) {
		plt_err("Unable to allocate memory for ocm_mask_str");
		return;
@@ -492,7 +502,7 @@ cn10k_ml_ocm_print(struct rte_ml_dev *dev, FILE *fp)
		cn10k_ml_ocm_pagemask_to_str(&ocm->tile_ocm_info[tile_id], ocm->mask_words, str);
 
		wb_pages = 0 - ocm->tile_ocm_info[tile_id].scratch_pages;
-		for (word_id = 0; word_id < mldev->ocm.mask_words; word_id++)
+		for (word_id = 0; word_id < ocm->mask_words; word_id++)
			wb_pages +=
				rte_popcount32(ocm->tile_ocm_info[tile_id].ocm_mask[word_id]);
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 11531afd8c..3385bf50c0 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -11,6 +11,8 @@
 #include "cn10k_ml_model.h"
 #include "cn10k_ml_ops.h"
 
+#include "cnxk_ml_dev.h"
+
 /* ML model macros */
 #define CN10K_ML_MODEL_MEMZONE_NAME "ml_cn10k_model_mz"
 
@@ -85,7 +87,7 @@ cn10k_ml_set_poll_addr(struct cn10k_ml_req *req)
 static inline void
 cn10k_ml_set_poll_ptr(struct cn10k_ml_req *req)
 {
-	plt_write64(ML_CN10K_POLL_JOB_START, req->compl_W1);
+	plt_write64(ML_CNXK_POLL_JOB_START, req->compl_W1);
 }
 
 static inline uint64_t
@@ -175,7 +177,7 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des
 	qp->queue.reqs = (struct cn10k_ml_req *)va;
 	qp->queue.head = 0;
 	qp->queue.tail = 0;
-	qp->queue.wait_cycles = ML_CN10K_CMD_TIMEOUT * plt_tsc_hz();
+	qp->queue.wait_cycles = ML_CNXK_CMD_TIMEOUT * plt_tsc_hz();
 	qp->nb_desc = nb_desc;
 	qp->stats.enqueued_count = 0;
 	qp->stats.dequeued_count = 0;
@@ -199,16 +201,17 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des
 static void
 cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp)
 {
-
+	struct cn10k_ml_dev *cn10k_mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	struct cn10k_ml_model *model;
-	struct cn10k_ml_dev *mldev;
 	struct cn10k_ml_ocm *ocm;
 	char str[STR_LEN];
 	uint8_t i;
 	uint8_t j;
 
-	mldev = 
dev->data->dev_private; - ocm = &mldev->ocm; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + ocm = &cn10k_mldev->ocm; model = dev->data->models[model_id]; /* Print debug info */ @@ -249,7 +252,7 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp) fprintf(fp, "%*s : 0x%0*" PRIx64 "\n", FIELD_LEN, "tilemask", ML_CN10K_OCM_NUMTILES / 4, model->model_mem_map.tilemask); fprintf(fp, "%*s : 0x%" PRIx64 "\n", FIELD_LEN, "ocm_wb_start", - model->model_mem_map.wb_page_start * mldev->ocm.page_size); + model->model_mem_map.wb_page_start * cn10k_mldev->ocm.page_size); } fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_inputs", model->metadata.model.num_input); @@ -325,7 +328,7 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp) } static void -cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *mldev, struct cn10k_ml_model *model, +cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cn10k_ml_model *model, struct cn10k_ml_req *req, enum cn10k_ml_job_type job_type) { struct cn10k_ml_model_metadata *metadata; @@ -340,7 +343,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *mldev, struct cn10k_ml_mode req->jd.hdr.model_id = model->model_id; req->jd.hdr.job_type = job_type; req->jd.hdr.fp_flags = 0x0; - req->jd.hdr.result = roc_ml_addr_ap2mlip(&mldev->roc, &req->result); + req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->result); if (job_type == ML_CN10K_JOB_TYPE_MODEL_START) { if (!model->metadata.model.ocm_relocatable) @@ -350,9 +353,9 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *mldev, struct cn10k_ml_mode req->jd.hdr.sp_flags |= ML_CN10K_SP_FLAGS_EXTENDED_LOAD_JD; req->jd.model_start.extended_args = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&mldev->roc, &req->extended_args)); + PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->extended_args)); req->jd.model_start.model_dst_ddr_addr = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&mldev->roc, 
addr->init_run_addr)); + PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, addr->init_run_addr)); req->jd.model_start.model_init_offset = 0x0; req->jd.model_start.model_main_offset = metadata->init_model.file_size; req->jd.model_start.model_finish_offset = @@ -372,7 +375,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *mldev, struct cn10k_ml_mode req->jd.model_start.ocm_wb_range_start = metadata->model.ocm_wb_range_start; req->jd.model_start.ocm_wb_range_end = metadata->model.ocm_wb_range_end; req->jd.model_start.ddr_wb_base_address = PLT_U64_CAST(roc_ml_addr_ap2mlip( - &mldev->roc, + &cn10k_mldev->roc, PLT_PTR_ADD(addr->finish_load_addr, metadata->finish_model.file_size))); req->jd.model_start.ddr_wb_range_start = metadata->model.ddr_wb_range_start; req->jd.model_start.ddr_wb_range_end = metadata->model.ddr_wb_range_end; @@ -383,7 +386,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *mldev, struct cn10k_ml_mode req->jd.model_start.output.s.ddr_range_end = metadata->model.ddr_output_range_end; req->extended_args.start.ddr_scratch_base_address = PLT_U64_CAST( - roc_ml_addr_ap2mlip(&mldev->roc, model->addr.scratch_base_addr)); + roc_ml_addr_ap2mlip(&cn10k_mldev->roc, model->addr.scratch_base_addr)); req->extended_args.start.ddr_scratch_range_start = metadata->model.ddr_scratch_range_start; req->extended_args.start.ddr_scratch_range_end = @@ -392,24 +395,20 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *mldev, struct cn10k_ml_mode } static __rte_always_inline void -cn10k_ml_prep_fp_job_descriptor(struct rte_ml_dev *dev, struct cn10k_ml_req *req, +cn10k_ml_prep_fp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cn10k_ml_req *req, struct rte_ml_op *op) { - struct cn10k_ml_dev *mldev; - - mldev = dev->data->dev_private; - req->jd.hdr.jce.w0.u64 = 0; req->jd.hdr.jce.w1.u64 = req->compl_W1; req->jd.hdr.model_id = op->model_id; req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_MODEL_RUN; req->jd.hdr.fp_flags = ML_FLAGS_POLL_COMPL; 
req->jd.hdr.sp_flags = 0x0; - req->jd.hdr.result = roc_ml_addr_ap2mlip(&mldev->roc, &req->result); + req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->result); req->jd.model_run.input_ddr_addr = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&mldev->roc, op->input[0]->addr)); + PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, op->input[0]->addr)); req->jd.model_run.output_ddr_addr = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&mldev->roc, op->output[0]->addr)); + PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, op->output[0]->addr)); req->jd.model_run.num_batches = op->nb_batches; } @@ -436,66 +435,69 @@ static const struct xstat_info model_stats[] = { static int cn10k_ml_xstats_init(struct rte_ml_dev *dev) { - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; uint16_t nb_stats; uint16_t stat_id; uint16_t model; uint16_t i; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; /* Allocate memory for xstats entries. 
Don't allocate during reconfigure */ nb_stats = RTE_DIM(device_stats) + ML_CN10K_MAX_MODELS * RTE_DIM(model_stats); - if (mldev->xstats.entries == NULL) - mldev->xstats.entries = rte_zmalloc("cn10k_ml_xstats", - sizeof(struct cn10k_ml_xstats_entry) * nb_stats, - PLT_CACHE_LINE_SIZE); + if (cn10k_mldev->xstats.entries == NULL) + cn10k_mldev->xstats.entries = rte_zmalloc( + "cn10k_ml_xstats", sizeof(struct cn10k_ml_xstats_entry) * nb_stats, + PLT_CACHE_LINE_SIZE); - if (mldev->xstats.entries == NULL) + if (cn10k_mldev->xstats.entries == NULL) return -ENOMEM; /* Initialize device xstats */ stat_id = 0; for (i = 0; i < RTE_DIM(device_stats); i++) { - mldev->xstats.entries[stat_id].map.id = stat_id; - snprintf(mldev->xstats.entries[stat_id].map.name, - sizeof(mldev->xstats.entries[stat_id].map.name), "%s", + cn10k_mldev->xstats.entries[stat_id].map.id = stat_id; + snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, + sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), "%s", device_stats[i].name); - mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_DEVICE; - mldev->xstats.entries[stat_id].type = device_stats[i].type; - mldev->xstats.entries[stat_id].fn_id = CN10K_ML_XSTATS_FN_DEVICE; - mldev->xstats.entries[stat_id].obj_idx = 0; - mldev->xstats.entries[stat_id].reset_allowed = device_stats[i].reset_allowed; + cn10k_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_DEVICE; + cn10k_mldev->xstats.entries[stat_id].type = device_stats[i].type; + cn10k_mldev->xstats.entries[stat_id].fn_id = CN10K_ML_XSTATS_FN_DEVICE; + cn10k_mldev->xstats.entries[stat_id].obj_idx = 0; + cn10k_mldev->xstats.entries[stat_id].reset_allowed = device_stats[i].reset_allowed; stat_id++; } - mldev->xstats.count_mode_device = stat_id; + cn10k_mldev->xstats.count_mode_device = stat_id; /* Initialize model xstats */ for (model = 0; model < ML_CN10K_MAX_MODELS; model++) { - mldev->xstats.offset_for_model[model] = stat_id; + cn10k_mldev->xstats.offset_for_model[model] = stat_id; for (i = 
0; i < RTE_DIM(model_stats); i++) { - mldev->xstats.entries[stat_id].map.id = stat_id; - mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_MODEL; - mldev->xstats.entries[stat_id].type = model_stats[i].type; - mldev->xstats.entries[stat_id].fn_id = CN10K_ML_XSTATS_FN_MODEL; - mldev->xstats.entries[stat_id].obj_idx = model; - mldev->xstats.entries[stat_id].reset_allowed = model_stats[i].reset_allowed; + cn10k_mldev->xstats.entries[stat_id].map.id = stat_id; + cn10k_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_MODEL; + cn10k_mldev->xstats.entries[stat_id].type = model_stats[i].type; + cn10k_mldev->xstats.entries[stat_id].fn_id = CN10K_ML_XSTATS_FN_MODEL; + cn10k_mldev->xstats.entries[stat_id].obj_idx = model; + cn10k_mldev->xstats.entries[stat_id].reset_allowed = + model_stats[i].reset_allowed; /* Name of xstat is updated during model load */ - snprintf(mldev->xstats.entries[stat_id].map.name, - sizeof(mldev->xstats.entries[stat_id].map.name), "Model-%u-%s", - model, model_stats[i].name); + snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, + sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), + "Model-%u-%s", model, model_stats[i].name); stat_id++; } - mldev->xstats.count_per_model[model] = RTE_DIM(model_stats); + cn10k_mldev->xstats.count_per_model[model] = RTE_DIM(model_stats); } - mldev->xstats.count_mode_model = stat_id - mldev->xstats.count_mode_device; - mldev->xstats.count = stat_id; + cn10k_mldev->xstats.count_mode_model = stat_id - cn10k_mldev->xstats.count_mode_device; + cn10k_mldev->xstats.count = stat_id; return 0; } @@ -503,28 +505,32 @@ cn10k_ml_xstats_init(struct rte_ml_dev *dev) static void cn10k_ml_xstats_uninit(struct rte_ml_dev *dev) { - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; - rte_free(mldev->xstats.entries); - mldev->xstats.entries = NULL; + 
rte_free(cn10k_mldev->xstats.entries); + cn10k_mldev->xstats.entries = NULL; - mldev->xstats.count = 0; + cn10k_mldev->xstats.count = 0; } static void cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; uint16_t rclk_freq; uint16_t sclk_freq; uint16_t stat_id; char suffix[8]; uint16_t i; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; model = dev->data->models[model_id]; stat_id = RTE_DIM(device_stats) + model_id * RTE_DIM(model_stats); @@ -536,8 +542,8 @@ cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id) /* Update xstat name based on model name and sclk availability */ for (i = 0; i < RTE_DIM(model_stats); i++) { - snprintf(mldev->xstats.entries[stat_id].map.name, - sizeof(mldev->xstats.entries[stat_id].map.name), "%s-%s-%s", + snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, + sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s", model->metadata.model.name, model_stats[i].name, suffix); stat_id++; } @@ -547,19 +553,19 @@ static uint64_t cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused, enum cn10k_ml_xstats_type type) { - struct cn10k_ml_dev *mldev; + struct cnxk_ml_dev *cnxk_mldev; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; switch (type) { case nb_models_loaded: - return mldev->nb_models_loaded; + return cnxk_mldev->nb_models_loaded; case nb_models_unloaded: - return mldev->nb_models_unloaded; + return cnxk_mldev->nb_models_unloaded; case nb_models_started: - return mldev->nb_models_started; + return cnxk_mldev->nb_models_started; case nb_models_stopped: - return mldev->nb_models_stopped; + return cnxk_mldev->nb_models_stopped; default: return -1; } @@ -651,15 +657,17 @@ static int cn10k_ml_device_xstats_reset(struct rte_ml_dev *dev, const 
uint16_t stat_ids[], uint16_t nb_ids) { struct cn10k_ml_xstats_entry *xs; - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; uint16_t nb_stats; uint16_t stat_id; uint32_t i; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; if (stat_ids == NULL) - nb_stats = mldev->xstats.count_mode_device; + nb_stats = cn10k_mldev->xstats.count_mode_device; else nb_stats = nb_ids; @@ -669,10 +677,10 @@ cn10k_ml_device_xstats_reset(struct rte_ml_dev *dev, const uint16_t stat_ids[], else stat_id = stat_ids[i]; - if (stat_id >= mldev->xstats.count_mode_device) + if (stat_id >= cn10k_mldev->xstats.count_mode_device) return -EINVAL; - xs = &mldev->xstats.entries[stat_id]; + xs = &cn10k_mldev->xstats.entries[stat_id]; if (!xs->reset_allowed) continue; @@ -740,15 +748,17 @@ cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint uint16_t nb_ids) { struct cn10k_ml_xstats_entry *xs; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; int32_t lcl_model_id = 0; uint16_t start_id; uint16_t end_id; int32_t i; int32_t j; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; for (i = 0; i < ML_CN10K_MAX_MODELS; i++) { if (model_id == -1) { model = dev->data->models[i]; @@ -765,12 +775,13 @@ cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint } } - start_id = mldev->xstats.offset_for_model[i]; - end_id = mldev->xstats.offset_for_model[i] + mldev->xstats.count_per_model[i] - 1; + start_id = cn10k_mldev->xstats.offset_for_model[i]; + end_id = cn10k_mldev->xstats.offset_for_model[i] + + cn10k_mldev->xstats.count_per_model[i] - 1; if (stat_ids == NULL) { for (j = start_id; j <= end_id; j++) { - xs = &mldev->xstats.entries[j]; + xs = &cn10k_mldev->xstats.entries[j]; cn10k_ml_reset_model_stat(dev, i, 
xs->type); } } else { @@ -780,7 +791,7 @@ cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint stat_ids[j], lcl_model_id); return -EINVAL; } - xs = &mldev->xstats.entries[stat_ids[j]]; + xs = &cn10k_mldev->xstats.entries[stat_ids[j]]; cn10k_ml_reset_model_stat(dev, i, xs->type); } } @@ -854,17 +865,19 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id) static int cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) { - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; if (dev_info == NULL) return -EINVAL; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); dev_info->driver_name = dev->device->driver->name; dev_info->max_models = ML_CN10K_MAX_MODELS; - if (mldev->hw_queue_lock) + if (cn10k_mldev->hw_queue_lock) dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_SL; else dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_LF; @@ -881,8 +894,9 @@ static int cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *conf) { struct rte_ml_dev_info dev_info; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; struct cn10k_ml_ocm *ocm; struct cn10k_ml_qp *qp; uint16_t model_id; @@ -895,7 +909,8 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c return -EINVAL; /* Get CN10K device handle */ - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; cn10k_ml_dev_info_get(dev, &dev_info); if (conf->nb_models > dev_info.max_models) { @@ -908,21 +923,21 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c return -EINVAL; } - if (mldev->state == ML_CN10K_DEV_STATE_PROBED) { + if (cnxk_mldev->state == 
ML_CNXK_DEV_STATE_PROBED) { plt_ml_dbg("Configuring ML device, nb_queue_pairs = %u, nb_models = %u", conf->nb_queue_pairs, conf->nb_models); /* Load firmware */ - ret = cn10k_ml_fw_load(mldev); + ret = cn10k_ml_fw_load(cnxk_mldev); if (ret != 0) return ret; - } else if (mldev->state == ML_CN10K_DEV_STATE_CONFIGURED) { + } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CONFIGURED) { plt_ml_dbg("Re-configuring ML device, nb_queue_pairs = %u, nb_models = %u", conf->nb_queue_pairs, conf->nb_models); - } else if (mldev->state == ML_CN10K_DEV_STATE_STARTED) { + } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_STARTED) { plt_err("Device can't be reconfigured in started state\n"); return -ENOTSUP; - } else if (mldev->state == ML_CN10K_DEV_STATE_CLOSED) { + } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CLOSED) { plt_err("Device can't be reconfigured after close\n"); return -ENOTSUP; } @@ -1013,10 +1028,10 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c } dev->data->nb_models = conf->nb_models; - ocm = &mldev->ocm; + ocm = &cn10k_mldev->ocm; ocm->num_tiles = ML_CN10K_OCM_NUMTILES; ocm->size_per_tile = ML_CN10K_OCM_TILESIZE; - ocm->page_size = mldev->ocm_page_size; + ocm->page_size = cn10k_mldev->ocm_page_size; ocm->num_pages = ocm->size_per_tile / ocm->page_size; ocm->mask_words = ocm->num_pages / (8 * sizeof(uint8_t)); @@ -1044,25 +1059,25 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c } /* Set JCMDQ enqueue function */ - if (mldev->hw_queue_lock == 1) - mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_sl; + if (cn10k_mldev->hw_queue_lock == 1) + cn10k_mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_sl; else - mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_lf; + cn10k_mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_lf; /* Set polling function pointers */ - mldev->set_poll_addr = cn10k_ml_set_poll_addr; - mldev->set_poll_ptr = cn10k_ml_set_poll_ptr; - mldev->get_poll_ptr = cn10k_ml_get_poll_ptr; + 
cn10k_mldev->set_poll_addr = cn10k_ml_set_poll_addr; + cn10k_mldev->set_poll_ptr = cn10k_ml_set_poll_ptr; + cn10k_mldev->get_poll_ptr = cn10k_ml_get_poll_ptr; dev->enqueue_burst = cn10k_ml_enqueue_burst; dev->dequeue_burst = cn10k_ml_dequeue_burst; dev->op_error_get = cn10k_ml_op_error_get; - mldev->nb_models_loaded = 0; - mldev->nb_models_started = 0; - mldev->nb_models_stopped = 0; - mldev->nb_models_unloaded = 0; - mldev->state = ML_CN10K_DEV_STATE_CONFIGURED; + cnxk_mldev->nb_models_loaded = 0; + cnxk_mldev->nb_models_started = 0; + cnxk_mldev->nb_models_stopped = 0; + cnxk_mldev->nb_models_unloaded = 0; + cnxk_mldev->state = ML_CNXK_DEV_STATE_CONFIGURED; return 0; @@ -1077,8 +1092,9 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c static int cn10k_ml_dev_close(struct rte_ml_dev *dev) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; struct cn10k_ml_qp *qp; uint16_t model_id; uint16_t qp_id; @@ -1086,10 +1102,11 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev) if (dev == NULL) return -EINVAL; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; /* Release ocm_mask memory */ - rte_free(mldev->ocm.ocm_mask); + rte_free(cn10k_mldev->ocm.ocm_mask); /* Stop and unload all models */ for (model_id = 0; model_id < dev->data->nb_models; model_id++) { @@ -1125,21 +1142,21 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev) cn10k_ml_xstats_uninit(dev); /* Unload firmware */ - cn10k_ml_fw_unload(mldev); + cn10k_ml_fw_unload(cnxk_mldev); /* Clear scratch registers */ - roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_WORK_PTR); - roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_FW_CTRL); - roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C0); - roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C0); - roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C1); - 
roc_ml_reg_write64(&mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C1); + roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_WORK_PTR); + roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_FW_CTRL); + roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C0); + roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C0); + roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_HEAD_C1); + roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_SCRATCH_DBG_BUFFER_TAIL_C1); /* Reset ML_MLR_BASE */ - roc_ml_reg_write64(&mldev->roc, 0, ML_MLR_BASE); - plt_ml_dbg("ML_MLR_BASE = 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_MLR_BASE)); + roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_MLR_BASE); + plt_ml_dbg("ML_MLR_BASE = 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_MLR_BASE)); - mldev->state = ML_CN10K_DEV_STATE_CLOSED; + cnxk_mldev->state = ML_CNXK_DEV_STATE_CLOSED; /* Remove PCI device */ return rte_dev_remove(dev->device); @@ -1148,17 +1165,19 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev) static int cn10k_ml_dev_start(struct rte_ml_dev *dev) { - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; uint64_t reg_val64; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; - reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG); + reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG); reg_val64 |= ROC_ML_CFG_ENA; - roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG); - plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG)); + roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG); + plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG)); - mldev->state = ML_CN10K_DEV_STATE_STARTED; + cnxk_mldev->state = ML_CNXK_DEV_STATE_STARTED; return 0; } @@ -1166,17 +1185,19 @@ cn10k_ml_dev_start(struct rte_ml_dev *dev) static int cn10k_ml_dev_stop(struct rte_ml_dev *dev) { - struct cn10k_ml_dev *mldev; + struct 
cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; uint64_t reg_val64; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; - reg_val64 = roc_ml_reg_read64(&mldev->roc, ML_CFG); + reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG); reg_val64 &= ~ROC_ML_CFG_ENA; - roc_ml_reg_write64(&mldev->roc, reg_val64, ML_CFG); - plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&mldev->roc, ML_CFG)); + roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG); + plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG)); - mldev->state = ML_CN10K_DEV_STATE_CONFIGURED; + cnxk_mldev->state = ML_CNXK_DEV_STATE_CONFIGURED; return 0; } @@ -1259,22 +1280,24 @@ cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mod int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, uint32_t size) { - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; uint32_t xstats_mode_count; uint32_t idx = 0; uint32_t i; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; xstats_mode_count = 0; switch (mode) { case RTE_ML_DEV_XSTATS_DEVICE: - xstats_mode_count = mldev->xstats.count_mode_device; + xstats_mode_count = cn10k_mldev->xstats.count_mode_device; break; case RTE_ML_DEV_XSTATS_MODEL: if (model_id >= ML_CN10K_MAX_MODELS) break; - xstats_mode_count = mldev->xstats.count_per_model[model_id]; + xstats_mode_count = cn10k_mldev->xstats.count_per_model[model_id]; break; default: return -EINVAL; @@ -1283,16 +1306,17 @@ cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mod if (xstats_mode_count > size || xstats_map == NULL) return xstats_mode_count; - for (i = 0; i < mldev->xstats.count && idx < size; i++) { - if (mldev->xstats.entries[i].mode != mode) + for (i = 0; i < cn10k_mldev->xstats.count && idx < size; i++) { + if 
(cn10k_mldev->xstats.entries[i].mode != mode) continue; if (mode != RTE_ML_DEV_XSTATS_DEVICE && - model_id != mldev->xstats.entries[i].obj_idx) + model_id != cn10k_mldev->xstats.entries[i].obj_idx) continue; - strncpy(xstats_map[idx].name, mldev->xstats.entries[i].map.name, RTE_ML_STR_MAX); - xstats_map[idx].id = mldev->xstats.entries[i].map.id; + strncpy(xstats_map[idx].name, cn10k_mldev->xstats.entries[i].map.name, + RTE_ML_STR_MAX); + xstats_map[idx].id = cn10k_mldev->xstats.entries[i].map.id; idx++; } @@ -1304,13 +1328,15 @@ cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16 uint64_t *value) { struct cn10k_ml_xstats_entry *xs; - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; cn10k_ml_xstats_fn fn; uint32_t i; - mldev = dev->data->dev_private; - for (i = 0; i < mldev->xstats.count; i++) { - xs = &mldev->xstats.entries[i]; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + for (i = 0; i < cn10k_mldev->xstats.count; i++) { + xs = &cn10k_mldev->xstats.entries[i]; if (strncmp(xs->map.name, name, RTE_ML_STR_MAX) == 0) { if (stat_id != NULL) *stat_id = xs->map.id; @@ -1344,24 +1370,26 @@ cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode const uint16_t stat_ids[], uint64_t values[], uint16_t nb_ids) { struct cn10k_ml_xstats_entry *xs; - struct cn10k_ml_dev *mldev; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; uint32_t xstats_mode_count; cn10k_ml_xstats_fn fn; uint64_t val; uint32_t idx; uint32_t i; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; xstats_mode_count = 0; switch (mode) { case RTE_ML_DEV_XSTATS_DEVICE: - xstats_mode_count = mldev->xstats.count_mode_device; + xstats_mode_count = cn10k_mldev->xstats.count_mode_device; break; case RTE_ML_DEV_XSTATS_MODEL: if (model_id >= ML_CN10K_MAX_MODELS) return -EINVAL; - xstats_mode_count = 
mldev->xstats.count_per_model[model_id]; + xstats_mode_count = cn10k_mldev->xstats.count_per_model[model_id]; break; default: return -EINVAL; @@ -1369,8 +1397,8 @@ cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode idx = 0; for (i = 0; i < nb_ids && idx < xstats_mode_count; i++) { - xs = &mldev->xstats.entries[stat_ids[i]]; - if (stat_ids[i] > mldev->xstats.count || xs->mode != mode) + xs = &cn10k_mldev->xstats.entries[stat_ids[i]]; + if (stat_ids[i] > cn10k_mldev->xstats.count || xs->mode != mode) continue; if (mode == RTE_ML_DEV_XSTATS_MODEL && model_id != xs->obj_idx) { @@ -1418,8 +1446,9 @@ cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mo static int cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; struct cn10k_ml_fw *fw; uint32_t head_loc; @@ -1432,8 +1461,9 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) if (roc_env_is_asim()) return 0; - mldev = dev->data->dev_private; - fw = &mldev->fw; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + fw = &cn10k_mldev->fw; /* Dump model info */ for (model_id = 0; model_id < dev->data->nb_models; model_id++) { @@ -1451,15 +1481,19 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) for (core_id = 0; core_id <= 1; core_id++) { bufsize = fw->req->jd.fw_load.debug.debug_buffer_size; if (core_id == 0) { - head_loc = roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_DBG_BUFFER_HEAD_C0); - tail_loc = roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_DBG_BUFFER_TAIL_C0); + head_loc = + roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_HEAD_C0); + tail_loc = + roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_TAIL_C0); head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core0_debug_ptr); - head_ptr = roc_ml_addr_mlip2ap(&mldev->roc, head_ptr); + head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, 
head_ptr); } else { - head_loc = roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_DBG_BUFFER_HEAD_C1); - tail_loc = roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_DBG_BUFFER_TAIL_C1); + head_loc = + roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_HEAD_C1); + tail_loc = + roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_TAIL_C1); head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core1_debug_ptr); - head_ptr = roc_ml_addr_mlip2ap(&mldev->roc, head_ptr); + head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, head_ptr); } if (head_loc < tail_loc) { fprintf(fp, "%.*s\n", tail_loc - head_loc, &head_ptr[head_loc]); @@ -1473,18 +1507,18 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) for (core_id = 0; core_id <= 1; core_id++) { bufsize = fw->req->jd.fw_load.debug.exception_state_size; if ((core_id == 0) && - (roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0) != 0)) { + (roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0) != 0)) { head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core0_exception_buffer); fprintf(fp, "ML_SCRATCH_EXCEPTION_SP_C0 = 0x%016lx", - roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0)); - head_ptr = roc_ml_addr_mlip2ap(&mldev->roc, head_ptr); + roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0)); + head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, head_ptr); fprintf(fp, "%.*s", bufsize, head_ptr); - } else if ((core_id == 1) && - (roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1) != 0)) { + } else if ((core_id == 1) && (roc_ml_reg_read64(&cn10k_mldev->roc, + ML_SCRATCH_EXCEPTION_SP_C1) != 0)) { head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core1_exception_buffer); fprintf(fp, "ML_SCRATCH_EXCEPTION_SP_C1 = 0x%016lx", - roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1)); - head_ptr = roc_ml_addr_mlip2ap(&mldev->roc, head_ptr); + roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1)); + head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, head_ptr); fprintf(fp, 
"%.*s", bufsize, head_ptr); } } @@ -1495,14 +1529,16 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) static int cn10k_ml_dev_selftest(struct rte_ml_dev *dev) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; const struct plt_memzone *mz; - struct cn10k_ml_dev *mldev; struct cn10k_ml_req *req; uint64_t timeout_cycle; bool timeout; int ret; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; mz = plt_memzone_reserve_aligned("dev_selftest", sizeof(struct cn10k_ml_req), 0, ML_CN10K_ALIGN_SIZE); if (mz == NULL) { @@ -1515,20 +1551,20 @@ cn10k_ml_dev_selftest(struct rte_ml_dev *dev) memset(&req->jd, 0, sizeof(struct cn10k_ml_jd)); req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&req->status); req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_SELFTEST; - req->jd.hdr.result = roc_ml_addr_ap2mlip(&mldev->roc, &req->result); - req->jd.fw_load.flags = cn10k_ml_fw_flags_get(&mldev->fw); - plt_write64(ML_CN10K_POLL_JOB_START, &req->status); + req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->result); + req->jd.fw_load.flags = cn10k_ml_fw_flags_get(&cn10k_mldev->fw); + plt_write64(ML_CNXK_POLL_JOB_START, &req->status); plt_wmb(); /* Enqueue firmware selftest request through scratch registers */ timeout = true; - timeout_cycle = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz(); - roc_ml_scratch_enqueue(&mldev->roc, &req->jd); + timeout_cycle = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); + roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->jd); plt_rmb(); do { - if (roc_ml_scratch_is_done_bit_set(&mldev->roc) && - (plt_read64(&req->status) == ML_CN10K_POLL_JOB_FINISH)) { + if (roc_ml_scratch_is_done_bit_set(&cn10k_mldev->roc) && + (plt_read64(&req->status) == ML_CNXK_POLL_JOB_FINISH)) { timeout = false; break; } @@ -1552,8 +1588,8 @@ int cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, uint16_t *model_id) { struct cn10k_ml_model_metadata 
*metadata; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; char str[RTE_MEMZONE_NAMESIZE]; const struct plt_memzone *mz; @@ -1574,7 +1610,7 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, if (ret != 0) return ret; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; /* Find model ID */ found = false; @@ -1591,7 +1627,8 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, } /* Get WB and scratch pages, check if model can be loaded. */ - ret = cn10k_ml_model_ocm_pages_count(mldev, idx, params->addr, &wb_pages, &scratch_pages); + ret = cn10k_ml_model_ocm_pages_count(&cnxk_mldev->cn10k_mldev, idx, params->addr, &wb_pages, + &scratch_pages); if (ret < 0) return ret; @@ -1623,7 +1660,7 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, } model = mz->addr; - model->mldev = mldev; + model->mldev = cnxk_mldev; model->model_id = idx; rte_memcpy(&model->metadata, params->addr, sizeof(struct cn10k_ml_model_metadata)); @@ -1680,7 +1717,7 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, plt_spinlock_init(&model->lock); model->state = ML_CN10K_MODEL_STATE_LOADED; dev->data->models[idx] = model; - mldev->nb_models_loaded++; + cnxk_mldev->nb_models_loaded++; /* Update xstats names */ cn10k_ml_xstats_model_name_update(dev, idx); @@ -1695,9 +1732,9 @@ cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) { char str[RTE_MEMZONE_NAMESIZE]; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; + struct cnxk_ml_dev *cnxk_mldev; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; model = dev->data->models[model_id]; if (model == NULL) { @@ -1711,7 +1748,7 @@ cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) } dev->data->models[model_id] = NULL; - mldev->nb_models_unloaded++; + cnxk_mldev->nb_models_unloaded++; snprintf(str, 
RTE_MEMZONE_NAMESIZE, "%s_%u", CN10K_ML_MODEL_MEMZONE_NAME, model_id); return plt_memzone_free(plt_memzone_lookup(str)); @@ -1720,8 +1757,9 @@ cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) int cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; struct cn10k_ml_ocm *ocm; struct cn10k_ml_req *req; @@ -1735,8 +1773,9 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) bool locked; int ret = 0; - mldev = dev->data->dev_private; - ocm = &mldev->ocm; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + ocm = &cn10k_mldev->ocm; model = dev->data->models[model_id]; if (model == NULL) { @@ -1746,11 +1785,11 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) /* Prepare JD */ req = model->req; - cn10k_ml_prep_sp_job_descriptor(mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_START); + cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_START); req->result.error_code.u64 = 0x0; req->result.user_ptr = NULL; - plt_write64(ML_CN10K_POLL_JOB_START, &req->status); + plt_write64(ML_CNXK_POLL_JOB_START, &req->status); plt_wmb(); num_tiles = model->metadata.model.tile_end - model->metadata.model.tile_start + 1; @@ -1815,26 +1854,26 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) job_dequeued = false; do { if (!job_enqueued) { - req->timeout = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz(); - job_enqueued = roc_ml_scratch_enqueue(&mldev->roc, &req->jd); + req->timeout = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); + job_enqueued = roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->jd); } if (job_enqueued && !job_dequeued) - job_dequeued = roc_ml_scratch_dequeue(&mldev->roc, &req->jd); + job_dequeued = roc_ml_scratch_dequeue(&cn10k_mldev->roc, &req->jd); if (job_dequeued) break; } while (plt_tsc_cycles() < 
req->timeout); if (job_dequeued) { - if (plt_read64(&req->status) == ML_CN10K_POLL_JOB_FINISH) { + if (plt_read64(&req->status) == ML_CNXK_POLL_JOB_FINISH) { if (req->result.error_code.u64 == 0) ret = 0; else ret = -1; } } else { /* Reset scratch registers */ - roc_ml_scratch_queue_reset(&mldev->roc); + roc_ml_scratch_queue_reset(&cn10k_mldev->roc); ret = -ETIME; } @@ -1843,7 +1882,7 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) if (plt_spinlock_trylock(&model->lock) != 0) { if (ret == 0) { model->state = ML_CN10K_MODEL_STATE_STARTED; - mldev->nb_models_started++; + cnxk_mldev->nb_models_started++; } else { model->state = ML_CN10K_MODEL_STATE_UNKNOWN; } @@ -1867,7 +1906,7 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) if (ret < 0) { /* Call unload to update model and FW state, ignore error */ rte_ml_model_stop(dev->data->dev_id, model_id); } else { - if (mldev->cache_model_data && roc_model_is_cn10ka()) + if (cn10k_mldev->cache_model_data && roc_model_is_cn10ka()) ret = cn10k_ml_cache_model_data(dev, model_id); } @@ -1877,8 +1916,9 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) int cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; struct cn10k_ml_ocm *ocm; struct cn10k_ml_req *req; @@ -1887,8 +1927,9 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) bool locked; int ret = 0; - mldev = dev->data->dev_private; - ocm = &mldev->ocm; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + ocm = &cn10k_mldev->ocm; model = dev->data->models[model_id]; if (model == NULL) { @@ -1898,11 +1939,11 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) /* Prepare JD */ req = model->req; - cn10k_ml_prep_sp_job_descriptor(mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_STOP); + cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, 
req, ML_CN10K_JOB_TYPE_MODEL_STOP); req->result.error_code.u64 = 0x0; req->result.user_ptr = NULL; - plt_write64(ML_CN10K_POLL_JOB_START, &req->status); + plt_write64(ML_CNXK_POLL_JOB_START, &req->status); plt_wmb(); locked = false; @@ -1941,33 +1982,33 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) job_dequeued = false; do { if (!job_enqueued) { - req->timeout = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz(); - job_enqueued = roc_ml_scratch_enqueue(&mldev->roc, &req->jd); + req->timeout = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); + job_enqueued = roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->jd); } if (job_enqueued && !job_dequeued) - job_dequeued = roc_ml_scratch_dequeue(&mldev->roc, &req->jd); + job_dequeued = roc_ml_scratch_dequeue(&cn10k_mldev->roc, &req->jd); if (job_dequeued) break; } while (plt_tsc_cycles() < req->timeout); if (job_dequeued) { - if (plt_read64(&req->status) == ML_CN10K_POLL_JOB_FINISH) { + if (plt_read64(&req->status) == ML_CNXK_POLL_JOB_FINISH) { if (req->result.error_code.u64 == 0x0) ret = 0; else ret = -1; } } else { - roc_ml_scratch_queue_reset(&mldev->roc); + roc_ml_scratch_queue_reset(&cn10k_mldev->roc); ret = -ETIME; } locked = false; while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { - mldev->nb_models_stopped++; + cnxk_mldev->nb_models_stopped++; model->state = ML_CN10K_MODEL_STATE_LOADED; plt_spinlock_unlock(&model->lock); locked = true; @@ -2211,8 +2252,9 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result struct rte_ml_op *op) { struct cn10k_ml_model_stats *stats; + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; struct cn10k_ml_qp *qp; uint64_t hw_latency; uint64_t fw_latency; @@ -2258,14 +2300,16 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result /* Handle driver error */ if (result->error_code.s.etype == ML_ETYPE_DRIVER) { - 
mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; /* Check for exception */ - if ((roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0) != 0) || - (roc_ml_reg_read64(&mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1) != 0)) + if ((roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0) != + 0) || + (roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1) != 0)) result->error_code.s.stype = ML_DRIVER_ERR_EXCEPTION; - else if ((roc_ml_reg_read64(&mldev->roc, ML_CORE_INT_LO) != 0) || - (roc_ml_reg_read64(&mldev->roc, ML_CORE_INT_HI) != 0)) + else if ((roc_ml_reg_read64(&cn10k_mldev->roc, ML_CORE_INT_LO) != 0) || + (roc_ml_reg_read64(&cn10k_mldev->roc, ML_CORE_INT_HI) != 0)) result->error_code.s.stype = ML_DRIVER_ERR_FW_ERROR; else result->error_code.s.stype = ML_DRIVER_ERR_UNKNOWN; @@ -2282,8 +2326,9 @@ __rte_hot uint16_t cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_queue *queue; - struct cn10k_ml_dev *mldev; struct cn10k_ml_req *req; struct cn10k_ml_qp *qp; struct rte_ml_op *op; @@ -2292,7 +2337,8 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op uint64_t head; bool enqueued; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; qp = dev->data->queue_pairs[qp_id]; queue = &qp->queue; @@ -2307,15 +2353,15 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op op = ops[count]; req = &queue->reqs[head]; - mldev->set_poll_addr(req); - cn10k_ml_prep_fp_job_descriptor(dev, req, op); + cn10k_mldev->set_poll_addr(req); + cn10k_ml_prep_fp_job_descriptor(cn10k_mldev, req, op); memset(&req->result, 0, sizeof(struct cn10k_ml_result)); req->result.error_code.s.etype = ML_ETYPE_UNKNOWN; req->result.user_ptr = op->user_ptr; - 
mldev->set_poll_ptr(req); - enqueued = mldev->ml_jcmdq_enqueue(&mldev->roc, &req->jcmd); + cn10k_mldev->set_poll_ptr(req); + enqueued = cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->jcmd); if (unlikely(!enqueued)) goto jcmdq_full; @@ -2339,8 +2385,9 @@ __rte_hot uint16_t cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_queue *queue; - struct cn10k_ml_dev *mldev; struct cn10k_ml_req *req; struct cn10k_ml_qp *qp; @@ -2348,7 +2395,8 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op uint16_t count; uint64_t tail; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; qp = dev->data->queue_pairs[qp_id]; queue = &qp->queue; @@ -2361,8 +2409,8 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op dequeue_req: req = &queue->reqs[tail]; - status = mldev->get_poll_ptr(req); - if (unlikely(status != ML_CN10K_POLL_JOB_FINISH)) { + status = cn10k_mldev->get_poll_ptr(req); + if (unlikely(status != ML_CNXK_POLL_JOB_FINISH)) { if (plt_tsc_cycles() < req->timeout) goto empty_or_active; else /* Timeout, set indication of driver error */ @@ -2420,30 +2468,32 @@ cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op, struct rte_m __rte_hot int cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) { + struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_model *model; - struct cn10k_ml_dev *mldev; struct cn10k_ml_req *req; bool timeout; int ret = 0; - mldev = dev->data->dev_private; + cnxk_mldev = dev->data->dev_private; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; model = dev->data->models[op->model_id]; req = model->req; cn10k_ml_set_poll_addr(req); - cn10k_ml_prep_fp_job_descriptor(dev, req, op); + cn10k_ml_prep_fp_job_descriptor(cn10k_mldev, req, op); 
memset(&req->result, 0, sizeof(struct cn10k_ml_result)); req->result.error_code.s.etype = ML_ETYPE_UNKNOWN; req->result.user_ptr = op->user_ptr; - mldev->set_poll_ptr(req); + cn10k_mldev->set_poll_ptr(req); req->jcmd.w1.s.jobptr = PLT_U64_CAST(&req->jd); timeout = true; - req->timeout = plt_tsc_cycles() + ML_CN10K_CMD_TIMEOUT * plt_tsc_hz(); + req->timeout = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); do { - if (mldev->ml_jcmdq_enqueue(&mldev->roc, &req->jcmd)) { + if (cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->jcmd)) { req->op = op; timeout = false; break; @@ -2457,7 +2507,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) timeout = true; do { - if (mldev->get_poll_ptr(req) == ML_CN10K_POLL_JOB_FINISH) { + if (cn10k_mldev->get_poll_ptr(req) == ML_CNXK_POLL_JOB_FINISH) { timeout = false; break; } diff --git a/drivers/ml/cnxk/cnxk_ml_dev.c b/drivers/ml/cnxk/cnxk_ml_dev.c new file mode 100644 index 0000000000..2a5c17c973 --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_dev.c @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#include +#include + +#include "cnxk_ml_dev.h" + +/* Dummy operations for ML device */ +struct rte_ml_dev_ops ml_dev_dummy_ops = {0}; diff --git a/drivers/ml/cnxk/cnxk_ml_dev.h b/drivers/ml/cnxk/cnxk_ml_dev.h new file mode 100644 index 0000000000..51315de622 --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_dev.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#ifndef _CNXK_ML_DEV_H_ +#define _CNXK_ML_DEV_H_ + +#include + +#include "cn10k_ml_dev.h" + +/* ML command timeout in seconds */ +#define ML_CNXK_CMD_TIMEOUT 5 + +/* Poll mode job state */ +#define ML_CNXK_POLL_JOB_START 0 +#define ML_CNXK_POLL_JOB_FINISH 1 + +/* Device configuration state enum */ +enum cnxk_ml_dev_state { + /* Probed and not configured */ + ML_CNXK_DEV_STATE_PROBED = 0, + + /* Configured */ + ML_CNXK_DEV_STATE_CONFIGURED, + + /* Started */ + ML_CNXK_DEV_STATE_STARTED, + + /* Closed */ + ML_CNXK_DEV_STATE_CLOSED +}; + +/* Device private data */ +struct cnxk_ml_dev { + /* RTE device */ + struct rte_ml_dev *mldev; + + /* Configuration state */ + enum cnxk_ml_dev_state state; + + /* Number of models loaded */ + uint16_t nb_models_loaded; + + /* Number of models unloaded */ + uint16_t nb_models_unloaded; + + /* Number of models started */ + uint16_t nb_models_started; + + /* Number of models stopped */ + uint16_t nb_models_stopped; + + /* CN10K device structure */ + struct cn10k_ml_dev cn10k_mldev; +}; + +#endif /* _CNXK_ML_DEV_H_ */ diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 94fa4283b1..03a2d4ecf2 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -12,6 +12,7 @@ driver_sdk_headers = files( 'cn10k_ml_ops.h', 'cn10k_ml_model.h', 'cn10k_ml_ocm.h', + 'cnxk_ml_dev.h', ) sources = files( @@ -19,6 +20,7 @@ sources = files( 'cn10k_ml_ops.c', 'cn10k_ml_model.c', 'cn10k_ml_ocm.c', + 'cnxk_ml_dev.c', ) deps += ['mldev', 'common_cnxk', 'kvargs', 'hash'] From patchwork Wed Sep 20 07:24:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131677 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3EE56425E4; 
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
CC: , , ,
Subject:
[PATCH v2 04/34] ml/cnxk: add generic model and layer structures
Date: Wed, 20 Sep 2023 00:24:55 -0700
Message-ID: <20230920072528.14185-5-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>

Introduce generic cnxk model and layer structures. These structures enable support for models with multiple layers. A model is a collection of multiple independent layers, with flow dependencies between the layers.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.h | 9 +- drivers/ml/cnxk/cn10k_ml_model.c | 244 ++++++++-------- drivers/ml/cnxk/cn10k_ml_model.h | 122 ++------ drivers/ml/cnxk/cn10k_ml_ocm.c | 49 +++- drivers/ml/cnxk/cn10k_ml_ocm.h | 9 +- drivers/ml/cnxk/cn10k_ml_ops.c | 487 +++++++++++++++++-------------- drivers/ml/cnxk/cnxk_ml_io.h | 79 +++++ drivers/ml/cnxk/cnxk_ml_model.c | 7 + drivers/ml/cnxk/cnxk_ml_model.h | 111 +++++++ drivers/ml/cnxk/meson.build | 3 + 10 files changed, 653 insertions(+), 467 deletions(-) create mode 100644 drivers/ml/cnxk/cnxk_ml_io.h create mode 100644 drivers/ml/cnxk/cnxk_ml_model.c create mode 100644 drivers/ml/cnxk/cnxk_ml_model.h diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index f9da1548c4..99ff0a344a 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -9,6 +9,8 @@ #include "cn10k_ml_ocm.h" +#include "cnxk_ml_io.h" + /* Dummy Device ops */ extern struct rte_ml_dev_ops ml_dev_dummy_ops; @@ -21,9 +23,6 @@ extern struct rte_ml_dev_ops ml_dev_dummy_ops; /* Device alignment size */ #define ML_CN10K_ALIGN_SIZE 128 -/* Maximum number of models per device */ -#define ML_CN10K_MAX_MODELS 16 - /* Maximum number of queue-pairs per device, spinlock version */ #define ML_CN10K_MAX_QP_PER_DEVICE_SL 16 @@ -455,8 +454,8 @@ struct cn10k_ml_xstats { struct cn10k_ml_xstats_entry *entries; /* Store num stats and offset of the stats for each model */ - uint16_t count_per_model[ML_CN10K_MAX_MODELS]; - uint16_t offset_for_model[ML_CN10K_MAX_MODELS]; + uint16_t count_per_model[ML_CNXK_MAX_MODELS]; + uint16_t offset_for_model[ML_CNXK_MAX_MODELS]; uint16_t count_mode_device; uint16_t count_mode_model; uint16_t count; diff --git a/drivers/ml/cnxk/cn10k_ml_model.c b/drivers/ml/cnxk/cn10k_ml_model.c index d146535866..0ea6520bf7 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.c +++ b/drivers/ml/cnxk/cn10k_ml_model.c @@ -11,6 +11,7 @@ #include "cn10k_ml_ocm.h" #include 
"cnxk_ml_dev.h" +#include "cnxk_ml_model.h" static enum rte_ml_io_type cn10k_ml_io_type_map(uint8_t type) @@ -312,19 +313,17 @@ cn10k_ml_model_metadata_update(struct cn10k_ml_model_metadata *metadata) } void -cn10k_ml_model_addr_update(struct cn10k_ml_model *model, uint8_t *buffer, uint8_t *base_dma_addr) +cn10k_ml_layer_addr_update(struct cnxk_ml_layer *layer, uint8_t *buffer, uint8_t *base_dma_addr) { struct cn10k_ml_model_metadata *metadata; - struct cn10k_ml_model_addr *addr; + struct cn10k_ml_layer_addr *addr; size_t model_data_size; uint8_t *dma_addr_load; uint8_t *dma_addr_run; - uint8_t i; - uint8_t j; int fpos; - metadata = &model->metadata; - addr = &model->addr; + metadata = &layer->glow.metadata; + addr = &layer->glow.addr; model_data_size = metadata->init_model.file_size + metadata->main_model.file_size + metadata->finish_model.file_size + metadata->weights_bias.file_size; @@ -362,102 +361,136 @@ cn10k_ml_model_addr_update(struct cn10k_ml_model *model, uint8_t *buffer, uint8_ addr->wb_base_addr = PLT_PTR_SUB(dma_addr_load, metadata->weights_bias.mem_offset); addr->wb_load_addr = PLT_PTR_ADD(addr->wb_base_addr, metadata->weights_bias.mem_offset); rte_memcpy(addr->wb_load_addr, PLT_PTR_ADD(buffer, fpos), metadata->weights_bias.file_size); +} + +void +cn10k_ml_layer_info_update(struct cnxk_ml_layer *layer) +{ + struct cn10k_ml_model_metadata *metadata; + uint8_t i; + uint8_t j; + + metadata = &layer->glow.metadata; /* Inputs */ - addr->total_input_sz_d = 0; - addr->total_input_sz_q = 0; + layer->info.nb_inputs = metadata->model.num_input; + layer->info.total_input_sz_d = 0; + layer->info.total_input_sz_q = 0; for (i = 0; i < metadata->model.num_input; i++) { if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - addr->input[i].nb_dims = 4; - addr->input[i].shape[0] = metadata->input1[i].shape.w; - addr->input[i].shape[1] = metadata->input1[i].shape.x; - addr->input[i].shape[2] = metadata->input1[i].shape.y; - addr->input[i].shape[3] = metadata->input1[i].shape.z; - - 
addr->input[i].nb_elements = + strncpy(layer->info.input[i].name, (char *)metadata->input1[i].input_name, + MRVL_ML_INPUT_NAME_LEN); + layer->info.input[i].dtype = metadata->input1[i].input_type; + layer->info.input[i].qtype = metadata->input1[i].model_input_type; + layer->info.input[i].nb_dims = 4; + layer->info.input[i].shape[0] = metadata->input1[i].shape.w; + layer->info.input[i].shape[1] = metadata->input1[i].shape.x; + layer->info.input[i].shape[2] = metadata->input1[i].shape.y; + layer->info.input[i].shape[3] = metadata->input1[i].shape.z; + layer->info.input[i].nb_elements = metadata->input1[i].shape.w * metadata->input1[i].shape.x * metadata->input1[i].shape.y * metadata->input1[i].shape.z; - addr->input[i].sz_d = - addr->input[i].nb_elements * + layer->info.input[i].sz_d = + layer->info.input[i].nb_elements * rte_ml_io_type_size_get(metadata->input1[i].input_type); - addr->input[i].sz_q = - addr->input[i].nb_elements * + layer->info.input[i].sz_q = + layer->info.input[i].nb_elements * rte_ml_io_type_size_get(metadata->input1[i].model_input_type); - addr->total_input_sz_d += addr->input[i].sz_d; - addr->total_input_sz_q += addr->input[i].sz_q; + layer->info.input[i].scale = metadata->input1[i].qscale; + + layer->info.total_input_sz_d += layer->info.input[i].sz_d; + layer->info.total_input_sz_q += layer->info.input[i].sz_q; plt_ml_dbg( - "model_id = %u, input[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", - model->model_id, i, metadata->input1[i].shape.w, + "index = %u, input1[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", + layer->index, i, metadata->input1[i].shape.w, metadata->input1[i].shape.x, metadata->input1[i].shape.y, - metadata->input1[i].shape.z, addr->input[i].sz_d, - addr->input[i].sz_q); + metadata->input1[i].shape.z, layer->info.input[i].sz_d, + layer->info.input[i].sz_q); } else { j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - addr->input[i].nb_dims = 4; - addr->input[i].shape[0] = metadata->input2[j].shape.w; - addr->input[i].shape[1] = 
metadata->input2[j].shape.x; - addr->input[i].shape[2] = metadata->input2[j].shape.y; - addr->input[i].shape[3] = metadata->input2[j].shape.z; - - addr->input[i].nb_elements = + strncpy(layer->info.input[i].name, (char *)metadata->input2[j].input_name, + MRVL_ML_INPUT_NAME_LEN); + layer->info.input[i].dtype = metadata->input2[j].input_type; + layer->info.input[i].qtype = metadata->input2[j].model_input_type; + layer->info.input[i].nb_dims = 4; + layer->info.input[i].shape[0] = metadata->input2[j].shape.w; + layer->info.input[i].shape[1] = metadata->input2[j].shape.x; + layer->info.input[i].shape[2] = metadata->input2[j].shape.y; + layer->info.input[i].shape[3] = metadata->input2[j].shape.z; + layer->info.input[i].nb_elements = metadata->input2[j].shape.w * metadata->input2[j].shape.x * metadata->input2[j].shape.y * metadata->input2[j].shape.z; - addr->input[i].sz_d = - addr->input[i].nb_elements * + layer->info.input[i].sz_d = + layer->info.input[i].nb_elements * rte_ml_io_type_size_get(metadata->input2[j].input_type); - addr->input[i].sz_q = - addr->input[i].nb_elements * + layer->info.input[i].sz_q = + layer->info.input[i].nb_elements * rte_ml_io_type_size_get(metadata->input2[j].model_input_type); - addr->total_input_sz_d += addr->input[i].sz_d; - addr->total_input_sz_q += addr->input[i].sz_q; + layer->info.input[i].scale = metadata->input2[j].qscale; + + layer->info.total_input_sz_d += layer->info.input[i].sz_d; + layer->info.total_input_sz_q += layer->info.input[i].sz_q; plt_ml_dbg( - "model_id = %u, input2[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", - model->model_id, j, metadata->input2[j].shape.w, + "index = %u, input2[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", + layer->index, j, metadata->input2[j].shape.w, metadata->input2[j].shape.x, metadata->input2[j].shape.y, - metadata->input2[j].shape.z, addr->input[i].sz_d, - addr->input[i].sz_q); + metadata->input2[j].shape.z, layer->info.input[i].sz_d, + layer->info.input[i].sz_q); } } /* Outputs */ - 
-	addr->total_output_sz_q = 0;
-	addr->total_output_sz_d = 0;
+	layer->info.nb_outputs = metadata->model.num_output;
+	layer->info.total_output_sz_q = 0;
+	layer->info.total_output_sz_d = 0;
 	for (i = 0; i < metadata->model.num_output; i++) {
 		if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) {
-			addr->output[i].nb_dims = 1;
-			addr->output[i].shape[0] = metadata->output1[i].size;
-			addr->output[i].nb_elements = metadata->output1[i].size;
-			addr->output[i].sz_d =
-				addr->output[i].nb_elements *
+			strncpy(layer->info.output[i].name,
+				(char *)metadata->output1[i].output_name, MRVL_ML_OUTPUT_NAME_LEN);
+			layer->info.output[i].dtype = metadata->output1[i].output_type;
+			layer->info.output[i].qtype = metadata->output1[i].model_output_type;
+			layer->info.output[i].nb_dims = 1;
+			layer->info.output[i].shape[0] = metadata->output1[i].size;
+			layer->info.output[i].nb_elements = metadata->output1[i].size;
+			layer->info.output[i].sz_d =
+				layer->info.output[i].nb_elements *
 				rte_ml_io_type_size_get(metadata->output1[i].output_type);
-			addr->output[i].sz_q =
-				addr->output[i].nb_elements *
+			layer->info.output[i].sz_q =
+				layer->info.output[i].nb_elements *
 				rte_ml_io_type_size_get(metadata->output1[i].model_output_type);
-			addr->total_output_sz_q += addr->output[i].sz_q;
-			addr->total_output_sz_d += addr->output[i].sz_d;
+			layer->info.output[i].scale = metadata->output1[i].dscale;
 
-			plt_ml_dbg("model_id = %u, output[%u] - sz_d = %u, sz_q = %u",
-				   model->model_id, i, addr->output[i].sz_d, addr->output[i].sz_q);
+			layer->info.total_output_sz_q += layer->info.output[i].sz_q;
+			layer->info.total_output_sz_d += layer->info.output[i].sz_d;
+
+			plt_ml_dbg("index = %u, output1[%u] - sz_d = %u, sz_q = %u", layer->index,
+				   i, layer->info.output[i].sz_d, layer->info.output[i].sz_q);
 		} else {
 			j = i - MRVL_ML_NUM_INPUT_OUTPUT_1;
 
-			addr->output[i].nb_dims = 1;
-			addr->output[i].shape[0] = metadata->output2[j].size;
-			addr->output[i].nb_elements = metadata->output2[j].size;
-			addr->output[i].sz_d =
-				addr->output[i].nb_elements *
+			strncpy(layer->info.output[i].name,
+				(char *)metadata->output2[j].output_name, MRVL_ML_OUTPUT_NAME_LEN);
+			layer->info.output[i].dtype = metadata->output2[j].output_type;
+			layer->info.output[i].qtype = metadata->output2[j].model_output_type;
+			layer->info.output[i].nb_dims = 1;
+			layer->info.output[i].shape[0] = metadata->output2[j].size;
+			layer->info.output[i].nb_elements = metadata->output2[j].size;
+			layer->info.output[i].sz_d =
+				layer->info.output[i].nb_elements *
 				rte_ml_io_type_size_get(metadata->output2[j].output_type);
-			addr->output[i].sz_q =
-				addr->output[i].nb_elements *
+			layer->info.output[i].sz_q =
+				layer->info.output[i].nb_elements *
 				rte_ml_io_type_size_get(metadata->output2[j].model_output_type);
-			addr->total_output_sz_q += addr->output[i].sz_q;
-			addr->total_output_sz_d += addr->output[i].sz_d;
+			layer->info.output[i].scale = metadata->output2[j].dscale;
+
+			layer->info.total_output_sz_q += layer->info.output[i].sz_q;
+			layer->info.total_output_sz_d += layer->info.output[i].sz_d;
 
-			plt_ml_dbg("model_id = %u, output2[%u] - sz_d = %u, sz_q = %u",
-				   model->model_id, j, addr->output[i].sz_d, addr->output[i].sz_q);
+			plt_ml_dbg("index = %u, output2[%u] - sz_d = %u, sz_q = %u", layer->index,
+				   j, layer->info.output[i].sz_d, layer->info.output[i].sz_q);
 		}
 	}
 }
@@ -515,23 +548,23 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_
 }
 
 void
-cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cn10k_ml_model *model)
+cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model)
 {
 	struct cn10k_ml_model_metadata *metadata;
-	struct cn10k_ml_model_addr *addr;
+	struct cn10k_ml_dev *cn10k_mldev;
+	struct cnxk_ml_dev *cnxk_mldev;
 	struct rte_ml_model_info *info;
 	struct rte_ml_io_info *output;
 	struct rte_ml_io_info *input;
-	struct cn10k_ml_dev *mldev;
+	struct cnxk_ml_layer *layer;
 	uint8_t i;
-	uint8_t j;
 
-	mldev = dev->data->dev_private;
-	metadata = &model->metadata;
+	cnxk_mldev = dev->data->dev_private;
+	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
+	metadata = &model->glow.metadata;
 	info = PLT_PTR_CAST(model->info);
 	input = PLT_PTR_ADD(info, sizeof(struct rte_ml_model_info));
 	output = PLT_PTR_ADD(input, metadata->model.num_input * sizeof(struct rte_ml_io_info));
-	addr = &model->addr;
 
 	/* Set model info */
 	memset(info, 0, sizeof(struct rte_ml_model_info));
@@ -543,7 +576,8 @@ cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cn10k_ml_model *model)
 	info->device_id = dev->data->dev_id;
 	info->io_layout = RTE_ML_IO_LAYOUT_PACKED;
 	info->min_batches = model->batch_size;
-	info->max_batches = mldev->fw.req->jd.fw_load.cap.s.max_num_batches / model->batch_size;
+	info->max_batches =
+		cn10k_mldev->fw.req->jd.fw_load.cap.s.max_num_batches / model->batch_size;
 	info->nb_inputs = metadata->model.num_input;
 	info->input_info = input;
 	info->nb_outputs = metadata->model.num_output;
@@ -551,56 +585,26 @@ cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cn10k_ml_model *model)
 	info->wb_size = metadata->weights_bias.file_size;
 
 	/* Set input info */
+	layer = &model->layer[0];
 	for (i = 0; i < info->nb_inputs; i++) {
-		if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) {
-			rte_memcpy(input[i].name, metadata->input1[i].input_name,
-				   MRVL_ML_INPUT_NAME_LEN);
-			input[i].nb_dims = addr->input[i].nb_dims;
-			input[i].shape = addr->input[i].shape;
-			input[i].type = metadata->input1[i].model_input_type;
-			input[i].nb_elements = addr->input[i].nb_elements;
-			input[i].size =
-				addr->input[i].nb_elements *
-				rte_ml_io_type_size_get(metadata->input1[i].model_input_type);
-		} else {
-			j = i - MRVL_ML_NUM_INPUT_OUTPUT_1;
-
-			rte_memcpy(input[i].name, metadata->input2[j].input_name,
-				   MRVL_ML_INPUT_NAME_LEN);
-			input[i].nb_dims = addr->input[i].nb_dims;
-			input[i].shape = addr->input[i].shape;
-			input[i].type = metadata->input2[j].model_input_type;
-			input[i].nb_elements = addr->input[i].nb_elements;
-			input[i].size =
-				addr->input[i].nb_elements *
-				rte_ml_io_type_size_get(metadata->input2[j].model_input_type);
-		}
+		rte_memcpy(input[i].name, layer->info.input[i].name, MRVL_ML_INPUT_NAME_LEN);
+		input[i].nb_dims = layer->info.input[i].nb_dims;
+		input[i].shape = &layer->info.input[i].shape[0];
+		input[i].type = layer->info.input[i].qtype;
+		input[i].nb_elements = layer->info.input[i].nb_elements;
+		input[i].size = layer->info.input[i].nb_elements *
+				rte_ml_io_type_size_get(layer->info.input[i].qtype);
 	}
 
 	/* Set output info */
+	layer = &model->layer[0];
 	for (i = 0; i < info->nb_outputs; i++) {
-		if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) {
-			rte_memcpy(output[i].name, metadata->output1[i].output_name,
-				   MRVL_ML_OUTPUT_NAME_LEN);
-			output[i].nb_dims = addr->output[i].nb_dims;
-			output[i].shape = addr->output[i].shape;
-			output[i].type = metadata->output1[i].model_output_type;
-			output[i].nb_elements = addr->output[i].nb_elements;
-			output[i].size =
-				addr->output[i].nb_elements *
-				rte_ml_io_type_size_get(metadata->output1[i].model_output_type);
-		} else {
-			j = i - MRVL_ML_NUM_INPUT_OUTPUT_1;
-
-			rte_memcpy(output[i].name, metadata->output2[j].output_name,
-				   MRVL_ML_OUTPUT_NAME_LEN);
-			output[i].nb_dims = addr->output[i].nb_dims;
-			output[i].shape = addr->output[i].shape;
-			output[i].type = metadata->output2[j].model_output_type;
-			output[i].nb_elements = addr->output[i].nb_elements;
-			output[i].size =
-				addr->output[i].nb_elements *
-				rte_ml_io_type_size_get(metadata->output2[j].model_output_type);
-		}
+		rte_memcpy(output[i].name, layer->info.output[i].name, MRVL_ML_INPUT_NAME_LEN);
+		output[i].nb_dims = layer->info.output[i].nb_dims;
+		output[i].shape = &layer->info.output[i].shape[0];
+		output[i].type = layer->info.output[i].qtype;
+		output[i].nb_elements = layer->info.output[i].nb_elements;
+		output[i].size = layer->info.output[i].nb_elements *
+				 rte_ml_io_type_size_get(layer->info.output[i].qtype);
 	}
 }
 
diff --git a/drivers/ml/cnxk/cn10k_ml_model.h b/drivers/ml/cnxk/cn10k_ml_model.h
index 3128b28db7..206a369ca7 100644
--- a/drivers/ml/cnxk/cn10k_ml_model.h
+++ b/drivers/ml/cnxk/cn10k_ml_model.h
@@ -13,15 +13,8 @@
 #include "cn10k_ml_ocm.h"
 #include "cn10k_ml_ops.h"
 
-struct cnxk_ml_dev;
-
-/* Model state */
-enum cn10k_ml_model_state {
-	ML_CN10K_MODEL_STATE_LOADED,
-	ML_CN10K_MODEL_STATE_JOB_ACTIVE,
-	ML_CN10K_MODEL_STATE_STARTED,
-	ML_CN10K_MODEL_STATE_UNKNOWN,
-};
+struct cnxk_ml_model;
+struct cnxk_ml_layer;
 
 /* Model Metadata : v 2.3.0.1 */
 #define MRVL_ML_MODEL_MAGIC_STRING "MRVL"
@@ -369,7 +362,7 @@ struct cn10k_ml_model_metadata {
 };
 
 /* Model address structure */
-struct cn10k_ml_model_addr {
+struct cn10k_ml_layer_addr {
 	/* Base DMA address for load */
 	void *base_dma_addr_load;
 
@@ -408,58 +401,10 @@ struct cn10k_ml_model_addr {
 
 	/* End tile */
 	uint8_t tile_end;
-
-	/* Input address and size */
-	struct {
-		/* Number of dimensions in shape */
-		uint32_t nb_dims;
-
-		/* Shape of input */
-		uint32_t shape[4];
-
-		/* Number of elements */
-		uint32_t nb_elements;
-
-		/* Dequantized input size */
-		uint32_t sz_d;
-
-		/* Quantized input size */
-		uint32_t sz_q;
-	} input[MRVL_ML_NUM_INPUT_OUTPUT];
-
-	/* Output address and size */
-	struct {
-		/* Number of dimensions in shape */
-		uint32_t nb_dims;
-
-		/* Shape of input */
-		uint32_t shape[4];
-
-		/* Number of elements */
-		uint32_t nb_elements;
-
-		/* Dequantize output size */
-		uint32_t sz_d;
-
-		/* Quantized output size */
-		uint32_t sz_q;
-	} output[MRVL_ML_NUM_INPUT_OUTPUT];
-
-	/* Total size of quantized input */
-	uint32_t total_input_sz_q;
-
-	/* Total size of dequantized input */
-	uint32_t total_input_sz_d;
-
-	/* Total size of quantized output */
-	uint32_t total_output_sz_q;
-
-	/* Total size of dequantized output */
-	uint32_t total_output_sz_d;
 };
 
 /* Model fast-path stats */
-struct cn10k_ml_model_stats {
+struct cn10k_ml_layer_stats {
 	/* Total hardware latency, sum of all inferences */
 	uint64_t hw_latency_tot;
 
@@ -488,59 +433,38 @@ struct cn10k_ml_model_stats {
 	uint64_t fw_reset_count;
 };
 
-/* Model Object */
-struct cn10k_ml_model {
-	/* Device reference */
-	struct cnxk_ml_dev *mldev;
-
-	/* Name */
-	char name[RTE_ML_STR_MAX];
-
-	/* ID */
-	uint16_t model_id;
-
-	/* Batch size */
-	uint32_t batch_size;
-
-	/* Metadata */
+struct cn10k_ml_layer_data {
+	/* Model / Layer: metadata */
 	struct cn10k_ml_model_metadata metadata;
 
-	/* Address structure */
-	struct cn10k_ml_model_addr addr;
+	/* Layer: address structure */
+	struct cn10k_ml_layer_addr addr;
 
-	/* Tile and memory information object */
-	struct cn10k_ml_ocm_model_map model_mem_map;
+	/* Layer: Tile and memory information object */
+	struct cn10k_ml_ocm_layer_map ocm_map;
 
-	/* Internal model information structure
-	 * Size of the buffer = sizeof(struct rte_ml_model_info)
-	 *                    + num_inputs * sizeof(struct rte_ml_io_info)
-	 *                    + num_outputs * sizeof(struct rte_ml_io_info).
-	 * Structures would be arranged in the same order in the buffer.
-	 */
-	uint8_t *info;
-
-	/* Spinlock, used to update model state */
-	plt_spinlock_t lock;
-
-	/* State */
-	enum cn10k_ml_model_state state;
-
-	/* Slow-path operations request pointer */
+	/* Layer: Slow-path operations request pointer */
 	struct cn10k_ml_req *req;
 
-	/* Stats for burst ops */
-	struct cn10k_ml_model_stats *burst_stats;
+	/* Layer: Stats for burst ops */
+	struct cn10k_ml_layer_stats *burst_stats;
 
-	/* Stats for sync ops */
-	struct cn10k_ml_model_stats *sync_stats;
+	/* Layer: Stats for sync ops */
+	struct cn10k_ml_layer_stats *sync_stats;
+};
+
+struct cn10k_ml_model_data {
+	/* Model / Layer: metadata */
+	struct cn10k_ml_model_metadata metadata;
 };
 
 int cn10k_ml_model_metadata_check(uint8_t *buffer, uint64_t size);
 void cn10k_ml_model_metadata_update(struct cn10k_ml_model_metadata *metadata);
-void cn10k_ml_model_addr_update(struct cn10k_ml_model *model, uint8_t *buffer,
+void cn10k_ml_layer_addr_update(struct cnxk_ml_layer *layer, uint8_t *buffer,
				uint8_t *base_dma_addr);
+void cn10k_ml_layer_info_update(struct cnxk_ml_layer *layer);
 int cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_id,
				   uint8_t *buffer, uint16_t *wb_pages, uint16_t *scratch_pages);
-void cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cn10k_ml_model *model);
+void cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model);
 
 #endif /* _CN10K_ML_MODEL_H_ */
diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.c b/drivers/ml/cnxk/cn10k_ml_ocm.c
index aa376284d5..5682778e87 100644
--- a/drivers/ml/cnxk/cn10k_ml_ocm.c
+++ b/drivers/ml/cnxk/cn10k_ml_ocm.c
@@ -11,6 +11,7 @@
 #include "cn10k_ml_ocm.h"
 
 #include "cnxk_ml_dev.h"
+#include "cnxk_ml_model.h"
 
 /* OCM macros */
 #define BYTE_LEN 8
@@ -334,12 +335,14 @@ cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t w
 }
 
 void
-cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint64_t tilemask,
-			   int wb_page_start, uint16_t wb_pages, uint16_t scratch_pages)
+cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id,
+			   uint64_t tilemask, int wb_page_start, uint16_t wb_pages,
+			   uint16_t scratch_pages)
 {
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
+	struct cnxk_ml_layer *layer;
 	struct cn10k_ml_ocm *ocm;
 
 	int scratch_page_start;
@@ -354,6 +357,7 @@ cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint64_t t
 	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
 	ocm = &cn10k_mldev->ocm;
 	model = dev->data->models[model_id];
+	layer = &model->layer[layer_id];
 
 	/* Get first set bit, tile_start */
 	tile_start = 0;
@@ -383,8 +387,8 @@ cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint64_t t
 			PLT_MAX(ocm->tile_ocm_info[tile_id].last_wb_page, wb_page_end);
 	}
 
-	model->addr.tile_start = tile_start;
-	model->addr.tile_end = tile_end;
+	layer->glow.addr.tile_start = tile_start;
+	layer->glow.addr.tile_end = tile_end;
 
 	plt_ml_dbg("model_id = %u, tilemask = 0x%016lx", model_id, tilemask);
 	plt_ml_dbg("model_id = %u, wb_page_start = %d, wb_page_end = %d", model_id, wb_page_start,
@@ -394,12 +398,14 @@ cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint64_t t
 }
 
 void
-cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id)
+cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id)
 {
-	struct cn10k_ml_model *local_model;
+	struct cnxk_ml_model *local_model;
+	struct cnxk_ml_layer *local_layer;
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
+	struct cnxk_ml_layer *layer;
 	struct cn10k_ml_ocm *ocm;
 
 	int scratch_resize_pages;
@@ -410,16 +416,19 @@ cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id)
 	int tile_id;
 	int page_id;
 	uint16_t i;
+	uint16_t j;
 
 	cnxk_mldev = dev->data->dev_private;
 	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
 	ocm = &cn10k_mldev->ocm;
 	model = dev->data->models[model_id];
+	layer = &model->layer[layer_id];
 
 	/* Update OCM info for WB memory */
-	wb_page_start = model->model_mem_map.wb_page_start;
-	wb_page_end = wb_page_start + model->model_mem_map.wb_pages - 1;
-	for (tile_id = model->addr.tile_start; tile_id <= model->addr.tile_end; tile_id++) {
+	wb_page_start = layer->glow.ocm_map.wb_page_start;
+	wb_page_end = wb_page_start + layer->glow.ocm_map.wb_pages - 1;
+	for (tile_id = layer->glow.addr.tile_start; tile_id <= layer->glow.addr.tile_end;
+	     tile_id++) {
 		for (page_id = wb_page_start; page_id <= wb_page_end; page_id++) {
 			CLEAR_BIT(ocm->tile_ocm_info[tile_id].ocm_mask[page_id / OCM_MAP_WORD_SIZE],
				  page_id % OCM_MAP_WORD_SIZE);
@@ -433,11 +442,19 @@ cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id)
 		scratch_resize_pages = 0;
 		for (i = 0; i < dev->data->nb_models; i++) {
 			local_model = dev->data->models[i];
-			if ((i != model_id) && (local_model != NULL)) {
-				if (IS_BIT_SET(local_model->model_mem_map.tilemask, tile_id))
-					scratch_resize_pages = PLT_MAX(
-						(int)local_model->model_mem_map.scratch_pages,
-						scratch_resize_pages);
+			if (local_model == NULL)
+				continue;
+
+			for (j = 0; j < local_model->nb_layers; j++) {
+				local_layer = &local_model->layer[j];
+				if (local_layer != layer &&
+				    local_layer->glow.ocm_map.ocm_reserved) {
+					if (IS_BIT_SET(local_layer->glow.ocm_map.tilemask, tile_id))
+						scratch_resize_pages =
+							PLT_MAX((int)local_layer->glow.ocm_map
									.scratch_pages,
+								scratch_resize_pages);
+				}
 			}
 		}
 
diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.h b/drivers/ml/cnxk/cn10k_ml_ocm.h
index 3404e7fd65..720f8caf76 100644
--- a/drivers/ml/cnxk/cn10k_ml_ocm.h
+++ b/drivers/ml/cnxk/cn10k_ml_ocm.h
@@ -27,7 +27,7 @@ struct cn10k_ml_ocm_tile_info {
 };
 
 /* Model OCM map structure */
-struct cn10k_ml_ocm_model_map {
+struct cn10k_ml_ocm_layer_map {
 	/* Status of OCM reservation */
 	bool ocm_reserved;
 
@@ -77,9 +77,10 @@ struct cn10k_ml_ocm {
 int cn10k_ml_ocm_tilecount(uint64_t tilemask, int *start, int *end);
 int cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t wb_pages,
			       uint16_t scratch_pages, uint64_t *tilemask);
-void cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint64_t tilemask,
-				int wb_page_start, uint16_t wb_pages, uint16_t scratch_pages);
-void cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id);
+void cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id,
+				uint64_t tilemask, int wb_page_start, uint16_t wb_pages,
+				uint16_t scratch_pages);
+void cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id);
 void cn10k_ml_ocm_print(struct rte_ml_dev *dev, FILE *fp);
 
 #endif /* _CN10K_ML_OCM_H_ */
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 3385bf50c0..a52509630f 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -12,6 +12,7 @@
 #include "cn10k_ml_ops.h"
 
 #include "cnxk_ml_dev.h"
+#include "cnxk_ml_model.h"
 
 /* ML model macros */
 #define CN10K_ML_MODEL_MEMZONE_NAME "ml_cn10k_model_mz"
@@ -203,7 +204,7 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp)
 {
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	struct cn10k_ml_ocm *ocm;
 	char str[STR_LEN];
 	uint8_t i;
@@ -216,77 +217,80 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp)
 
 	/* Print debug info */
 	print_line(fp, LINE_LEN);
-	fprintf(fp, " Model Information (%s)\n", model->metadata.model.name);
+	fprintf(fp, " Model Information (%s)\n", model->glow.metadata.model.name);
 	print_line(fp, LINE_LEN);
-	fprintf(fp, "%*s : %s\n", FIELD_LEN, "name", model->metadata.model.name);
-	fprintf(fp, "%*s : %u.%u.%u.%u\n", FIELD_LEN, "version", model->metadata.model.version[0],
-		model->metadata.model.version[1], model->metadata.model.version[2],
-		model->metadata.model.version[3]);
+	fprintf(fp, "%*s : %s\n", FIELD_LEN, "name", model->glow.metadata.model.name);
+	fprintf(fp, "%*s : %u.%u.%u.%u\n", FIELD_LEN, "version",
+		model->glow.metadata.model.version[0], model->glow.metadata.model.version[1],
+		model->glow.metadata.model.version[2], model->glow.metadata.model.version[3]);
 	if (strlen(model->name) != 0)
 		fprintf(fp, "%*s : %s\n", FIELD_LEN, "debug_name", model->name);
 	fprintf(fp, "%*s : 0x%016lx\n", FIELD_LEN, "model", PLT_U64_CAST(model));
 	fprintf(fp, "%*s : %u\n", FIELD_LEN, "model_id", model->model_id);
-	fprintf(fp, "%*s : %u\n", FIELD_LEN, "batch_size", model->metadata.model.batch_size);
-	fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_layers", model->metadata.model.num_layers);
+	fprintf(fp, "%*s : %u\n", FIELD_LEN, "batch_size", model->glow.metadata.model.batch_size);
+	fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_layers", model->glow.metadata.model.num_layers);
 
 	/* Print model state */
-	if (model->state == ML_CN10K_MODEL_STATE_LOADED)
+	if (model->state == ML_CNXK_MODEL_STATE_LOADED)
 		fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "loaded");
-	if (model->state == ML_CN10K_MODEL_STATE_JOB_ACTIVE)
+	if (model->state == ML_CNXK_MODEL_STATE_JOB_ACTIVE)
 		fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "job_active");
-	if (model->state == ML_CN10K_MODEL_STATE_STARTED)
+	if (model->state == ML_CNXK_MODEL_STATE_STARTED)
 		fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "started");
 
 	/* Print OCM status */
 	fprintf(fp, "%*s : %" PRIu64 " bytes\n", FIELD_LEN, "wb_size",
-		model->metadata.model.ocm_wb_range_end - model->metadata.model.ocm_wb_range_start +
-			1);
-	fprintf(fp, "%*s : %u\n", FIELD_LEN, "wb_pages", model->model_mem_map.wb_pages);
+		model->glow.metadata.model.ocm_wb_range_end -
+			model->glow.metadata.model.ocm_wb_range_start + 1);
+	fprintf(fp, "%*s : %u\n", FIELD_LEN, "wb_pages", model->layer[0].glow.ocm_map.wb_pages);
 	fprintf(fp, "%*s : %" PRIu64 " bytes\n", FIELD_LEN, "scratch_size",
-		ocm->size_per_tile - model->metadata.model.ocm_tmp_range_floor);
-	fprintf(fp, "%*s : %u\n", FIELD_LEN, "scratch_pages", model->model_mem_map.scratch_pages);
+		ocm->size_per_tile - model->glow.metadata.model.ocm_tmp_range_floor);
+	fprintf(fp, "%*s : %u\n", FIELD_LEN, "scratch_pages",
+		model->layer[0].glow.ocm_map.scratch_pages);
 	fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_tiles",
-		model->metadata.model.tile_end - model->metadata.model.tile_start + 1);
+		model->glow.metadata.model.tile_end - model->glow.metadata.model.tile_start + 1);
 
-	if (model->state == ML_CN10K_MODEL_STATE_STARTED) {
+	if (model->state == ML_CNXK_MODEL_STATE_STARTED) {
 		fprintf(fp, "%*s : 0x%0*" PRIx64 "\n", FIELD_LEN, "tilemask",
-			ML_CN10K_OCM_NUMTILES / 4, model->model_mem_map.tilemask);
+			ML_CN10K_OCM_NUMTILES / 4, model->layer[0].glow.ocm_map.tilemask);
 		fprintf(fp, "%*s : 0x%" PRIx64 "\n", FIELD_LEN, "ocm_wb_start",
-			model->model_mem_map.wb_page_start * cn10k_mldev->ocm.page_size);
+			model->layer[0].glow.ocm_map.wb_page_start * cn10k_mldev->ocm.page_size);
 	}
 
-	fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_inputs", model->metadata.model.num_input);
-	fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_outputs", model->metadata.model.num_output);
+	fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_inputs", model->glow.metadata.model.num_input);
+	fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_outputs", model->glow.metadata.model.num_output);
 	fprintf(fp, "\n");
 
 	print_line(fp, LINE_LEN);
 	fprintf(fp, "%8s %16s %12s %18s %12s\n", "input", "input_name", "input_type",
		"model_input_type", "quantize");
 	print_line(fp, LINE_LEN);
-	for (i = 0; i < model->metadata.model.num_input; i++) {
+	for (i = 0; i < model->glow.metadata.model.num_input; i++) {
 		if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) {
 			fprintf(fp, "%8u ", i);
-			fprintf(fp, "%*s ", 16, model->metadata.input1[i].input_name);
-			rte_ml_io_type_to_str(model->metadata.input1[i].input_type, str, STR_LEN);
+			fprintf(fp, "%*s ", 16, model->glow.metadata.input1[i].input_name);
+			rte_ml_io_type_to_str(model->glow.metadata.input1[i].input_type, str,
					      STR_LEN);
 			fprintf(fp, "%*s ", 12, str);
-			rte_ml_io_type_to_str(model->metadata.input1[i].model_input_type, str,
+			rte_ml_io_type_to_str(model->glow.metadata.input1[i].model_input_type, str,
					      STR_LEN);
 			fprintf(fp, "%*s ", 18, str);
 			fprintf(fp, "%*s", 12,
-				(model->metadata.input1[i].quantize == 1 ? "Yes" : "No"));
+				(model->glow.metadata.input1[i].quantize == 1 ? "Yes" : "No"));
 			fprintf(fp, "\n");
 		} else {
 			j = i - MRVL_ML_NUM_INPUT_OUTPUT_1;
 
 			fprintf(fp, "%8u ", i);
-			fprintf(fp, "%*s ", 16, model->metadata.input2[j].input_name);
-			rte_ml_io_type_to_str(model->metadata.input2[j].input_type, str, STR_LEN);
+			fprintf(fp, "%*s ", 16, model->glow.metadata.input2[j].input_name);
+			rte_ml_io_type_to_str(model->glow.metadata.input2[j].input_type, str,
					      STR_LEN);
 			fprintf(fp, "%*s ", 12, str);
-			rte_ml_io_type_to_str(model->metadata.input2[j].model_input_type, str,
+			rte_ml_io_type_to_str(model->glow.metadata.input2[j].model_input_type, str,
					      STR_LEN);
 			fprintf(fp, "%*s ", 18, str);
 			fprintf(fp, "%*s", 12,
-				(model->metadata.input2[j].quantize == 1 ? "Yes" : "No"));
+				(model->glow.metadata.input2[j].quantize == 1 ? "Yes" : "No"));
 			fprintf(fp, "\n");
 		}
 	}
@@ -296,29 +300,31 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp)
 	fprintf(fp, "%8s %16s %12s %18s %12s\n", "output", "output_name", "output_type",
		"model_output_type", "dequantize");
 	print_line(fp, LINE_LEN);
-	for (i = 0; i < model->metadata.model.num_output; i++) {
+	for (i = 0; i < model->glow.metadata.model.num_output; i++) {
 		if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) {
 			fprintf(fp, "%8u ", i);
-			fprintf(fp, "%*s ", 16, model->metadata.output1[i].output_name);
-			rte_ml_io_type_to_str(model->metadata.output1[i].output_type, str, STR_LEN);
-			fprintf(fp, "%*s ", 12, str);
-			rte_ml_io_type_to_str(model->metadata.output1[i].model_output_type, str,
+			fprintf(fp, "%*s ", 16, model->glow.metadata.output1[i].output_name);
+			rte_ml_io_type_to_str(model->glow.metadata.output1[i].output_type, str,
					      STR_LEN);
+			fprintf(fp, "%*s ", 12, str);
+			rte_ml_io_type_to_str(model->glow.metadata.output1[i].model_output_type,
+					      str, STR_LEN);
 			fprintf(fp, "%*s ", 18, str);
 			fprintf(fp, "%*s", 12,
-				(model->metadata.output1[i].dequantize == 1 ? "Yes" : "No"));
+				(model->glow.metadata.output1[i].dequantize == 1 ? "Yes" : "No"));
 			fprintf(fp, "\n");
 		} else {
 			j = i - MRVL_ML_NUM_INPUT_OUTPUT_1;
 
 			fprintf(fp, "%8u ", i);
-			fprintf(fp, "%*s ", 16, model->metadata.output2[j].output_name);
-			rte_ml_io_type_to_str(model->metadata.output2[j].output_type, str, STR_LEN);
-			fprintf(fp, "%*s ", 12, str);
-			rte_ml_io_type_to_str(model->metadata.output2[j].model_output_type, str,
+			fprintf(fp, "%*s ", 16, model->glow.metadata.output2[j].output_name);
+			rte_ml_io_type_to_str(model->glow.metadata.output2[j].output_type, str,
					      STR_LEN);
+			fprintf(fp, "%*s ", 12, str);
+			rte_ml_io_type_to_str(model->glow.metadata.output2[j].model_output_type,
					      str, STR_LEN);
 			fprintf(fp, "%*s ", 18, str);
 			fprintf(fp, "%*s", 12,
-				(model->metadata.output2[j].dequantize == 1 ? "Yes" : "No"));
+				(model->glow.metadata.output2[j].dequantize == 1 ? "Yes" : "No"));
 			fprintf(fp, "\n");
 		}
 	}
@@ -328,14 +334,14 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp)
 }
 
 static void
-cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cn10k_ml_model *model,
+cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml_model *model,
				struct cn10k_ml_req *req, enum cn10k_ml_job_type job_type)
 {
 	struct cn10k_ml_model_metadata *metadata;
-	struct cn10k_ml_model_addr *addr;
+	struct cn10k_ml_layer_addr *addr;
 
-	metadata = &model->metadata;
-	addr = &model->addr;
+	metadata = &model->glow.metadata;
+	addr = &model->layer[0].glow.addr;
 
 	memset(&req->jd, 0, sizeof(struct cn10k_ml_jd));
 	req->jd.hdr.jce.w0.u64 = 0;
@@ -346,7 +352,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cn10k_m
 	req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->result);
 
 	if (job_type == ML_CN10K_JOB_TYPE_MODEL_START) {
-		if (!model->metadata.model.ocm_relocatable)
+		if (!model->glow.metadata.model.ocm_relocatable)
			req->jd.hdr.sp_flags = ML_CN10K_SP_FLAGS_OCM_NONRELOCATABLE;
 		else
			req->jd.hdr.sp_flags = 0x0;
@@ -386,7 +392,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cn10k_m
 		req->jd.model_start.output.s.ddr_range_end = metadata->model.ddr_output_range_end;
 
 		req->extended_args.start.ddr_scratch_base_address = PLT_U64_CAST(
-			roc_ml_addr_ap2mlip(&cn10k_mldev->roc, model->addr.scratch_base_addr));
+			roc_ml_addr_ap2mlip(&cn10k_mldev->roc, addr->scratch_base_addr));
 		req->extended_args.start.ddr_scratch_range_start =
			metadata->model.ddr_scratch_range_start;
 		req->extended_args.start.ddr_scratch_range_end =
@@ -446,7 +452,7 @@ cn10k_ml_xstats_init(struct rte_ml_dev *dev)
 	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
 
 	/* Allocate memory for xstats entries. Don't allocate during reconfigure */
-	nb_stats = RTE_DIM(device_stats) + ML_CN10K_MAX_MODELS * RTE_DIM(model_stats);
+	nb_stats = RTE_DIM(device_stats) + ML_CNXK_MAX_MODELS * RTE_DIM(model_stats);
 	if (cn10k_mldev->xstats.entries == NULL)
		cn10k_mldev->xstats.entries = rte_zmalloc(
			"cn10k_ml_xstats", sizeof(struct cn10k_ml_xstats_entry) * nb_stats,
@@ -473,7 +479,7 @@ cn10k_ml_xstats_init(struct rte_ml_dev *dev)
 	cn10k_mldev->xstats.count_mode_device = stat_id;
 
 	/* Initialize model xstats */
-	for (model = 0; model < ML_CN10K_MAX_MODELS; model++) {
+	for (model = 0; model < ML_CNXK_MAX_MODELS; model++) {
 		cn10k_mldev->xstats.offset_for_model[model] = stat_id;
 
 		for (i = 0; i < RTE_DIM(model_stats); i++) {
@@ -522,7 +528,7 @@ cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id)
 {
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	uint16_t rclk_freq;
 	uint16_t sclk_freq;
 	uint16_t stat_id;
@@ -544,7 +550,7 @@ cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id)
 	for (i = 0; i < RTE_DIM(model_stats); i++) {
 		snprintf(cn10k_mldev->xstats.entries[stat_id].map.name,
			 sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s",
-			 model->metadata.model.name, model_stats[i].name, suffix);
+			 model->layer[0].glow.metadata.model.name, model_stats[i].name, suffix);
 		stat_id++;
 	}
 }
@@ -577,9 +583,9 @@ cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused,
	do { \
		value = 0; \
		for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \
-			value += model->burst_stats[qp_id].str##_latency_tot; \
-			count += model->burst_stats[qp_id].dequeued_count - \
-				 model->burst_stats[qp_id].str##_reset_count; \
+			value += model->layer[0].glow.burst_stats[qp_id].str##_latency_tot; \
+			count += model->layer[0].glow.burst_stats[qp_id].dequeued_count - \
+				 model->layer[0].glow.burst_stats[qp_id].str##_reset_count; \
		} \
		if (count != 0) \
			value = value / count; \
@@ -589,9 +595,10 @@ cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused,
	do { \
		value = UINT64_MAX; \
		for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \
-			value = PLT_MIN(value, model->burst_stats[qp_id].str##_latency_min); \
-			count += model->burst_stats[qp_id].dequeued_count - \
-				 model->burst_stats[qp_id].str##_reset_count; \
+			value = PLT_MIN( \
+				value, model->layer[0].glow.burst_stats[qp_id].str##_latency_min); \
+			count += model->layer[0].glow.burst_stats[qp_id].dequeued_count - \
+				 model->layer[0].glow.burst_stats[qp_id].str##_reset_count; \
		} \
		if (count == 0) \
			value = 0; \
@@ -601,9 +608,10 @@ cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused,
	do { \
		value = 0; \
		for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \
-			value = PLT_MAX(value, model->burst_stats[qp_id].str##_latency_max); \
-			count += model->burst_stats[qp_id].dequeued_count - \
-				 model->burst_stats[qp_id].str##_reset_count; \
+			value = PLT_MAX( \
+				value, model->layer[0].glow.burst_stats[qp_id].str##_latency_max); \
+			count += model->layer[0].glow.burst_stats[qp_id].dequeued_count - \
+				 model->layer[0].glow.burst_stats[qp_id].str##_reset_count; \
		} \
		if (count == 0) \
			value = 0; \
@@ -612,7 +620,7 @@ cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused,
 static uint64_t
 cn10k_ml_model_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx, enum cn10k_ml_xstats_type type)
 {
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	uint16_t rclk_freq; /* MHz */
 	uint16_t sclk_freq; /* MHz */
 	uint64_t count = 0;
@@ -693,28 +701,28 @@ cn10k_ml_device_xstats_reset(struct rte_ml_dev *dev, const uint16_t stat_ids[],
 #define ML_AVG_RESET_FOREACH_QP(dev, model, qp_id, str) \
	do { \
		for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \
-			model->burst_stats[qp_id].str##_latency_tot = 0; \
-			model->burst_stats[qp_id].str##_reset_count = \
-				model->burst_stats[qp_id].dequeued_count; \
+			model->layer[0].glow.burst_stats[qp_id].str##_latency_tot = 0; \
+			model->layer[0].glow.burst_stats[qp_id].str##_reset_count = \
+				model->layer[0].glow.burst_stats[qp_id].dequeued_count; \
		} \
	} while (0)
 
 #define ML_MIN_RESET_FOREACH_QP(dev, model, qp_id, str) \
	do { \
		for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) \
-			model->burst_stats[qp_id].str##_latency_min = UINT64_MAX; \
+			model->layer[0].glow.burst_stats[qp_id].str##_latency_min = UINT64_MAX; \
	} while (0)
 
 #define ML_MAX_RESET_FOREACH_QP(dev, model, qp_id, str) \
	do { \
		for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) \
-			model->burst_stats[qp_id].str##_latency_max = 0; \
+			model->layer[0].glow.burst_stats[qp_id].str##_latency_max = 0; \
	} while (0)
 
 static void
 cn10k_ml_reset_model_stat(struct rte_ml_dev *dev, uint16_t model_id, enum cn10k_ml_xstats_type type)
 {
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	uint32_t qp_id;
 
 	model = dev->data->models[model_id];
@@ -750,7 +758,7 @@ cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint
 	struct cn10k_ml_xstats_entry *xs;
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	int32_t lcl_model_id = 0;
 	uint16_t start_id;
 	uint16_t end_id;
@@ -759,7 +767,7 @@ cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint
 	cnxk_mldev = dev->data->dev_private;
 	cn10k_mldev = &cnxk_mldev->cn10k_mldev;
 
-	for (i = 0; i < ML_CN10K_MAX_MODELS; i++) {
+	for (i = 0; i < ML_CNXK_MAX_MODELS; i++) {
 		if (model_id == -1) {
			model = dev->data->models[i];
			if (model == NULL) /* Skip inactive models */
@@ -804,7 +812,7 @@ static int
 cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id)
 {
 	struct rte_ml_model_info *info;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	struct rte_ml_buff_seg seg[2];
 	struct rte_ml_buff_seg *inp;
 	struct rte_ml_buff_seg *out;
@@ -855,7 +863,7 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id)
 	op.input = &inp;
 	op.output = &out;
 
-	memset(model->req, 0, sizeof(struct cn10k_ml_req));
+	memset(model->layer[0].glow.req, 0, sizeof(struct cn10k_ml_req));
 	ret = cn10k_ml_inference_sync(dev, &op);
 	plt_memzone_free(mz);
 
@@ -876,7 +884,7 @@ cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info)
 	memset(dev_info, 0, sizeof(struct rte_ml_dev_info));
 	dev_info->driver_name = dev->device->driver->name;
-	dev_info->max_models = ML_CN10K_MAX_MODELS;
+	dev_info->max_models = ML_CNXK_MAX_MODELS;
 	if (cn10k_mldev->hw_queue_lock)
 		dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_SL;
 	else
@@ -896,7 +904,7 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c
 	struct rte_ml_dev_info dev_info;
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	struct cn10k_ml_ocm *ocm;
 	struct cn10k_ml_qp *qp;
 	uint16_t model_id;
@@ -1002,11 +1010,11 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c
 	for (model_id = 0; model_id < dev->data->nb_models; model_id++) {
 		model = dev->data->models[model_id];
 		if (model != NULL) {
-			if (model->state == ML_CN10K_MODEL_STATE_STARTED) {
+			if (model->state == ML_CNXK_MODEL_STATE_STARTED) {
 				if (cn10k_ml_model_stop(dev, model_id) != 0)
					plt_err("Could not stop model %u", model_id);
			}
-			if (model->state == ML_CN10K_MODEL_STATE_LOADED) {
+			if (model->state == ML_CNXK_MODEL_STATE_LOADED) {
 				if (cn10k_ml_model_unload(dev, model_id) != 0)
					plt_err("Could not unload model %u", model_id);
			}
@@ -1094,7 +1102,7 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev)
 {
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	struct cn10k_ml_qp *qp;
 	uint16_t model_id;
 	uint16_t qp_id;
@@ -1112,11 +1120,11 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev)
 	for (model_id = 0; model_id < dev->data->nb_models; model_id++) {
 		model = dev->data->models[model_id];
 		if (model != NULL) {
-			if (model->state == ML_CN10K_MODEL_STATE_STARTED) {
+			if (model->state == ML_CNXK_MODEL_STATE_STARTED) {
 				if (cn10k_ml_model_stop(dev, model_id) != 0)
					plt_err("Could not stop model %u", model_id);
			}
-			if (model->state == ML_CN10K_MODEL_STATE_LOADED) {
+			if (model->state == ML_CNXK_MODEL_STATE_LOADED) {
 				if (cn10k_ml_model_unload(dev, model_id) != 0)
					plt_err("Could not unload model %u", model_id);
			}
@@ -1295,7 +1303,7 @@ cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mod
 		xstats_mode_count = cn10k_mldev->xstats.count_mode_device;
		break;
	case RTE_ML_DEV_XSTATS_MODEL:
-		if (model_id >= ML_CN10K_MAX_MODELS)
+		if (model_id >= ML_CNXK_MAX_MODELS)
			break;
 		xstats_mode_count = cn10k_mldev->xstats.count_per_model[model_id];
		break;
@@ -1387,7 +1395,7 @@ cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode
 		xstats_mode_count = cn10k_mldev->xstats.count_mode_device;
		break;
	case RTE_ML_DEV_XSTATS_MODEL:
-		if (model_id >= ML_CN10K_MAX_MODELS)
+		if (model_id >= ML_CNXK_MAX_MODELS)
			return -EINVAL;
 		xstats_mode_count = cn10k_mldev->xstats.count_per_model[model_id];
		break;
@@ -1448,7 +1456,7 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp)
 {
 	struct cn10k_ml_dev *cn10k_mldev;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 	struct cn10k_ml_fw *fw;
 
 	uint32_t head_loc;
@@ -1589,7 +1597,7 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params,
 {
 	struct cn10k_ml_model_metadata *metadata;
 	struct cnxk_ml_dev *cnxk_mldev;
-	struct cn10k_ml_model *model;
+	struct cnxk_ml_model *model;
 
 	char str[RTE_MEMZONE_NAMESIZE];
 	const struct plt_memzone *mz;
@@ -1644,9 +1652,9 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params,
		  metadata->model.num_input * sizeof(struct rte_ml_io_info) +
		  metadata->model.num_output * sizeof(struct rte_ml_io_info);
 	model_info_size = PLT_ALIGN_CEIL(model_info_size, ML_CN10K_ALIGN_SIZE);
-	model_stats_size = (dev->data->nb_queue_pairs + 1) * sizeof(struct cn10k_ml_model_stats);
+	model_stats_size = (dev->data->nb_queue_pairs + 1) * sizeof(struct cn10k_ml_layer_stats);
 
-	mz_size = PLT_ALIGN_CEIL(sizeof(struct cn10k_ml_model), ML_CN10K_ALIGN_SIZE) +
+	mz_size = PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), ML_CN10K_ALIGN_SIZE) +
		  2 * model_data_size + model_scratch_size + model_info_size +
		  PLT_ALIGN_CEIL(sizeof(struct cn10k_ml_req), ML_CN10K_ALIGN_SIZE) +
		  model_stats_size;
@@ -1660,62 +1668,85 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params,
 	}
 
 	model = mz->addr;
-	model->mldev = cnxk_mldev;
+	model->cnxk_mldev = cnxk_mldev;
 	model->model_id = idx;
+	dev->data->models[idx] = model;
 
-	rte_memcpy(&model->metadata, params->addr, sizeof(struct cn10k_ml_model_metadata));
-	cn10k_ml_model_metadata_update(&model->metadata);
+	rte_memcpy(&model->glow.metadata, params->addr, sizeof(struct cn10k_ml_model_metadata));
+	cn10k_ml_model_metadata_update(&model->glow.metadata);
+
+	/* Set model name */
+	rte_memcpy(model->name, (char *)model->glow.metadata.model.name, 64);
 
 	/* Enable support for batch_size of 256 */
-	if (model->metadata.model.batch_size == 0)
 		model->batch_size = 256;
 	else
-		model->batch_size = model->metadata.model.batch_size;
+	if (model->glow.metadata.model.batch_size == 0)
+		model->batch_size = 256;
+	else
+		model->batch_size = model->glow.metadata.model.batch_size;
+
+	/* Since the number of layers that the driver would be handling for glow models is
+	 * always 1. consider the entire model as a model with single layer. This would
+	 * ignore the num_layers from metadata.
+	 */
+	model->nb_layers = 1;
+
+	/* Copy metadata to internal buffer */
+	rte_memcpy(&model->layer[0].glow.metadata, params->addr,
		   sizeof(struct cn10k_ml_model_metadata));
+	cn10k_ml_model_metadata_update(&model->layer[0].glow.metadata);
+	model->layer[0].model = model;
 
 	/* Set DMA base address */
 	base_dma_addr = PLT_PTR_ADD(
-		mz->addr, PLT_ALIGN_CEIL(sizeof(struct cn10k_ml_model), ML_CN10K_ALIGN_SIZE));
-	cn10k_ml_model_addr_update(model, params->addr, base_dma_addr);
-	model->addr.scratch_base_addr = PLT_PTR_ADD(base_dma_addr, 2 * model_data_size);
+		mz->addr, PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), ML_CN10K_ALIGN_SIZE));
+	cn10k_ml_layer_addr_update(&model->layer[0], params->addr, base_dma_addr);
+	model->layer[0].glow.addr.scratch_base_addr =
+		PLT_PTR_ADD(base_dma_addr, 2 * model_data_size);
 
 	/* Copy data from load to run.
run address to be used by MLIP */ - rte_memcpy(model->addr.base_dma_addr_run, model->addr.base_dma_addr_load, model_data_size); + rte_memcpy(model->layer[0].glow.addr.base_dma_addr_run, + model->layer[0].glow.addr.base_dma_addr_load, model_data_size); + + /* Update internal I/O data structure */ + cn10k_ml_layer_info_update(&model->layer[0]); /* Initialize model_mem_map */ - memset(&model->model_mem_map, 0, sizeof(struct cn10k_ml_ocm_model_map)); - model->model_mem_map.ocm_reserved = false; - model->model_mem_map.tilemask = 0; - model->model_mem_map.wb_page_start = -1; - model->model_mem_map.wb_pages = wb_pages; - model->model_mem_map.scratch_pages = scratch_pages; + memset(&model->layer[0].glow.ocm_map, 0, sizeof(struct cn10k_ml_ocm_layer_map)); + model->layer[0].glow.ocm_map.ocm_reserved = false; + model->layer[0].glow.ocm_map.tilemask = 0; + model->layer[0].glow.ocm_map.wb_page_start = -1; + model->layer[0].glow.ocm_map.wb_pages = wb_pages; + model->layer[0].glow.ocm_map.scratch_pages = scratch_pages; /* Set model info */ - model->info = PLT_PTR_ADD(model->addr.scratch_base_addr, model_scratch_size); + model->info = PLT_PTR_ADD(model->layer[0].glow.addr.scratch_base_addr, model_scratch_size); cn10k_ml_model_info_set(dev, model); /* Set slow-path request address and state */ - model->req = PLT_PTR_ADD(model->info, model_info_size); + model->layer[0].glow.req = PLT_PTR_ADD(model->info, model_info_size); /* Reset burst and sync stats */ - model->burst_stats = PLT_PTR_ADD( - model->req, PLT_ALIGN_CEIL(sizeof(struct cn10k_ml_req), ML_CN10K_ALIGN_SIZE)); + model->layer[0].glow.burst_stats = + PLT_PTR_ADD(model->layer[0].glow.req, + PLT_ALIGN_CEIL(sizeof(struct cn10k_ml_req), ML_CN10K_ALIGN_SIZE)); for (qp_id = 0; qp_id < dev->data->nb_queue_pairs + 1; qp_id++) { - model->burst_stats[qp_id].hw_latency_tot = 0; - model->burst_stats[qp_id].hw_latency_min = UINT64_MAX; - model->burst_stats[qp_id].hw_latency_max = 0; - model->burst_stats[qp_id].fw_latency_tot = 0; - 
model->burst_stats[qp_id].fw_latency_min = UINT64_MAX; - model->burst_stats[qp_id].fw_latency_max = 0; - model->burst_stats[qp_id].hw_reset_count = 0; - model->burst_stats[qp_id].fw_reset_count = 0; - model->burst_stats[qp_id].dequeued_count = 0; - } - model->sync_stats = - PLT_PTR_ADD(model->burst_stats, - dev->data->nb_queue_pairs * sizeof(struct cn10k_ml_model_stats)); + model->layer[0].glow.burst_stats[qp_id].hw_latency_tot = 0; + model->layer[0].glow.burst_stats[qp_id].hw_latency_min = UINT64_MAX; + model->layer[0].glow.burst_stats[qp_id].hw_latency_max = 0; + model->layer[0].glow.burst_stats[qp_id].fw_latency_tot = 0; + model->layer[0].glow.burst_stats[qp_id].fw_latency_min = UINT64_MAX; + model->layer[0].glow.burst_stats[qp_id].fw_latency_max = 0; + model->layer[0].glow.burst_stats[qp_id].hw_reset_count = 0; + model->layer[0].glow.burst_stats[qp_id].fw_reset_count = 0; + model->layer[0].glow.burst_stats[qp_id].dequeued_count = 0; + } + + model->layer[0].glow.sync_stats = + PLT_PTR_ADD(model->layer[0].glow.burst_stats, + dev->data->nb_queue_pairs * sizeof(struct cn10k_ml_layer_stats)); plt_spinlock_init(&model->lock); - model->state = ML_CN10K_MODEL_STATE_LOADED; + model->state = ML_CNXK_MODEL_STATE_LOADED; dev->data->models[idx] = model; cnxk_mldev->nb_models_loaded++; @@ -1731,7 +1762,7 @@ int cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) { char str[RTE_MEMZONE_NAMESIZE]; - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; struct cnxk_ml_dev *cnxk_mldev; cnxk_mldev = dev->data->dev_private; @@ -1742,7 +1773,7 @@ cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) return -EINVAL; } - if (model->state != ML_CN10K_MODEL_STATE_LOADED) { + if (model->state != ML_CNXK_MODEL_STATE_LOADED) { plt_err("Cannot unload. 
Model in use."); return -EBUSY; } @@ -1759,7 +1790,7 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) { struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; struct cn10k_ml_ocm *ocm; struct cn10k_ml_req *req; @@ -1784,7 +1815,7 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) } /* Prepare JD */ - req = model->req; + req = model->layer[0].glow.req; cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_START); req->result.error_code.u64 = 0x0; req->result.user_ptr = NULL; @@ -1792,63 +1823,66 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) plt_write64(ML_CNXK_POLL_JOB_START, &req->status); plt_wmb(); - num_tiles = model->metadata.model.tile_end - model->metadata.model.tile_start + 1; + num_tiles = model->layer[0].glow.metadata.model.tile_end - + model->layer[0].glow.metadata.model.tile_start + 1; locked = false; while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { - if (model->state == ML_CN10K_MODEL_STATE_STARTED) { + if (model->state == ML_CNXK_MODEL_STATE_STARTED) { plt_ml_dbg("Model already started, model = 0x%016lx", PLT_U64_CAST(model)); plt_spinlock_unlock(&model->lock); return 1; } - if (model->state == ML_CN10K_MODEL_STATE_JOB_ACTIVE) { + if (model->state == ML_CNXK_MODEL_STATE_JOB_ACTIVE) { plt_err("A slow-path job is active for the model = 0x%016lx", PLT_U64_CAST(model)); plt_spinlock_unlock(&model->lock); return -EBUSY; } - model->state = ML_CN10K_MODEL_STATE_JOB_ACTIVE; + model->state = ML_CNXK_MODEL_STATE_JOB_ACTIVE; plt_spinlock_unlock(&model->lock); locked = true; } } - while (!model->model_mem_map.ocm_reserved) { + while (!model->layer[0].glow.ocm_map.ocm_reserved) { if (plt_spinlock_trylock(&ocm->lock) != 0) { wb_page_start = cn10k_ml_ocm_tilemask_find( - dev, num_tiles, model->model_mem_map.wb_pages, - model->model_mem_map.scratch_pages, &tilemask); + dev, num_tiles, 
model->layer[0].glow.ocm_map.wb_pages, + model->layer[0].glow.ocm_map.scratch_pages, &tilemask); if (wb_page_start == -1) { plt_err("Free pages not available on OCM tiles"); plt_err("Failed to start model = 0x%016lx, name = %s", - PLT_U64_CAST(model), model->metadata.model.name); + PLT_U64_CAST(model), + model->layer[0].glow.metadata.model.name); plt_spinlock_unlock(&ocm->lock); return -ENOMEM; } - model->model_mem_map.tilemask = tilemask; - model->model_mem_map.wb_page_start = wb_page_start; + model->layer[0].glow.ocm_map.tilemask = tilemask; + model->layer[0].glow.ocm_map.wb_page_start = wb_page_start; - cn10k_ml_ocm_reserve_pages( - dev, model->model_id, model->model_mem_map.tilemask, - model->model_mem_map.wb_page_start, model->model_mem_map.wb_pages, - model->model_mem_map.scratch_pages); - model->model_mem_map.ocm_reserved = true; + cn10k_ml_ocm_reserve_pages(dev, model->model_id, 0, + model->layer[0].glow.ocm_map.tilemask, + model->layer[0].glow.ocm_map.wb_page_start, + model->layer[0].glow.ocm_map.wb_pages, + model->layer[0].glow.ocm_map.scratch_pages); + model->layer[0].glow.ocm_map.ocm_reserved = true; plt_spinlock_unlock(&ocm->lock); } } /* Update JD */ - cn10k_ml_ocm_tilecount(model->model_mem_map.tilemask, &tile_start, &tile_end); + cn10k_ml_ocm_tilecount(model->layer[0].glow.ocm_map.tilemask, &tile_start, &tile_end); req->jd.model_start.tilemask = GENMASK_ULL(tile_end, tile_start); req->jd.model_start.ocm_wb_base_address = - model->model_mem_map.wb_page_start * ocm->page_size; + model->layer[0].glow.ocm_map.wb_page_start * ocm->page_size; job_enqueued = false; job_dequeued = false; @@ -1881,10 +1915,10 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { if (ret == 0) { - model->state = ML_CN10K_MODEL_STATE_STARTED; + model->state = ML_CNXK_MODEL_STATE_STARTED; cnxk_mldev->nb_models_started++; } else { - model->state = ML_CN10K_MODEL_STATE_UNKNOWN; + model->state = 
ML_CNXK_MODEL_STATE_UNKNOWN; } plt_spinlock_unlock(&model->lock); @@ -1892,12 +1926,12 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) } } - if (model->state == ML_CN10K_MODEL_STATE_UNKNOWN) { - while (model->model_mem_map.ocm_reserved) { + if (model->state == ML_CNXK_MODEL_STATE_UNKNOWN) { + while (model->layer[0].glow.ocm_map.ocm_reserved) { if (plt_spinlock_trylock(&ocm->lock) != 0) { - cn10k_ml_ocm_free_pages(dev, model->model_id); - model->model_mem_map.ocm_reserved = false; - model->model_mem_map.tilemask = 0x0; + cn10k_ml_ocm_free_pages(dev, model->model_id, 0); + model->layer[0].glow.ocm_map.ocm_reserved = false; + model->layer[0].glow.ocm_map.tilemask = 0x0; plt_spinlock_unlock(&ocm->lock); } } @@ -1918,7 +1952,7 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) { struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; struct cn10k_ml_ocm *ocm; struct cn10k_ml_req *req; @@ -1938,7 +1972,7 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) } /* Prepare JD */ - req = model->req; + req = model->layer[0].glow.req; cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_STOP); req->result.error_code.u64 = 0x0; req->result.user_ptr = NULL; @@ -1949,31 +1983,31 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) locked = false; while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { - if (model->state == ML_CN10K_MODEL_STATE_LOADED) { + if (model->state == ML_CNXK_MODEL_STATE_LOADED) { plt_ml_dbg("Model not started, model = 0x%016lx", PLT_U64_CAST(model)); plt_spinlock_unlock(&model->lock); return 1; } - if (model->state == ML_CN10K_MODEL_STATE_JOB_ACTIVE) { + if (model->state == ML_CNXK_MODEL_STATE_JOB_ACTIVE) { plt_err("A slow-path job is active for the model = 0x%016lx", PLT_U64_CAST(model)); plt_spinlock_unlock(&model->lock); return -EBUSY; } - model->state = ML_CN10K_MODEL_STATE_JOB_ACTIVE; 
+ model->state = ML_CNXK_MODEL_STATE_JOB_ACTIVE; plt_spinlock_unlock(&model->lock); locked = true; } } - while (model->model_mem_map.ocm_reserved) { + while (model->layer[0].glow.ocm_map.ocm_reserved) { if (plt_spinlock_trylock(&ocm->lock) != 0) { - cn10k_ml_ocm_free_pages(dev, model->model_id); - model->model_mem_map.ocm_reserved = false; - model->model_mem_map.tilemask = 0x0; + cn10k_ml_ocm_free_pages(dev, model->model_id, 0); + model->layer[0].glow.ocm_map.ocm_reserved = false; + model->layer[0].glow.ocm_map.tilemask = 0x0; plt_spinlock_unlock(&ocm->lock); } } @@ -2009,7 +2043,7 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { cnxk_mldev->nb_models_stopped++; - model->state = ML_CN10K_MODEL_STATE_LOADED; + model->state = ML_CNXK_MODEL_STATE_LOADED; plt_spinlock_unlock(&model->lock); locked = true; } @@ -2022,7 +2056,7 @@ static int cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_model_info *model_info) { - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; model = dev->data->models[model_id]; @@ -2041,7 +2075,7 @@ cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, static int cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buffer) { - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; size_t size; model = dev->data->models[model_id]; @@ -2051,19 +2085,23 @@ cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *bu return -EINVAL; } - if (model->state == ML_CN10K_MODEL_STATE_UNKNOWN) + if (model->state == ML_CNXK_MODEL_STATE_UNKNOWN) return -1; - else if (model->state != ML_CN10K_MODEL_STATE_LOADED) + else if (model->state != ML_CNXK_MODEL_STATE_LOADED) return -EBUSY; - size = model->metadata.init_model.file_size + model->metadata.main_model.file_size + - model->metadata.finish_model.file_size + model->metadata.weights_bias.file_size; + size = 
model->layer[0].glow.metadata.init_model.file_size + + model->layer[0].glow.metadata.main_model.file_size + + model->layer[0].glow.metadata.finish_model.file_size + + model->layer[0].glow.metadata.weights_bias.file_size; /* Update model weights & bias */ - rte_memcpy(model->addr.wb_load_addr, buffer, model->metadata.weights_bias.file_size); + rte_memcpy(model->layer[0].glow.addr.wb_load_addr, buffer, + model->layer[0].glow.metadata.weights_bias.file_size); /* Copy data from load to run. run address to be used by MLIP */ - rte_memcpy(model->addr.base_dma_addr_run, model->addr.base_dma_addr_load, size); + rte_memcpy(model->layer[0].glow.addr.base_dma_addr_run, + model->layer[0].glow.addr.base_dma_addr_load, size); return 0; } @@ -2072,7 +2110,7 @@ static int cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **dbuffer, struct rte_ml_buff_seg **qbuffer) { - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; uint8_t model_input_type; uint8_t *lcl_dbuffer; uint8_t *lcl_qbuffer; @@ -2092,57 +2130,58 @@ cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_bu lcl_dbuffer = dbuffer[0]->addr; lcl_qbuffer = qbuffer[0]->addr; - for (i = 0; i < model->metadata.model.num_input; i++) { + for (i = 0; i < model->layer[0].glow.metadata.model.num_input; i++) { if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - input_type = model->metadata.input1[i].input_type; - model_input_type = model->metadata.input1[i].model_input_type; - qscale = model->metadata.input1[i].qscale; + input_type = model->layer[0].glow.metadata.input1[i].input_type; + model_input_type = model->layer[0].glow.metadata.input1[i].model_input_type; + qscale = model->layer[0].glow.metadata.input1[i].qscale; } else { j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - input_type = model->metadata.input2[j].input_type; - model_input_type = model->metadata.input2[j].model_input_type; - qscale = model->metadata.input2[j].qscale; + input_type = 
model->layer[0].glow.metadata.input2[j].input_type; + model_input_type = model->layer[0].glow.metadata.input2[j].model_input_type; + qscale = model->layer[0].glow.metadata.input2[j].qscale; } if (input_type == model_input_type) { - rte_memcpy(lcl_qbuffer, lcl_dbuffer, model->addr.input[i].sz_d); + rte_memcpy(lcl_qbuffer, lcl_dbuffer, model->layer[0].info.input[i].sz_d); } else { - switch (model->metadata.input1[i].model_input_type) { + switch (model->layer[0].glow.metadata.input1[i].model_input_type) { case RTE_ML_IO_TYPE_INT8: - ret = rte_ml_io_float32_to_int8(qscale, - model->addr.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); + ret = rte_ml_io_float32_to_int8( + qscale, model->layer[0].info.input[i].nb_elements, + lcl_dbuffer, lcl_qbuffer); break; case RTE_ML_IO_TYPE_UINT8: - ret = rte_ml_io_float32_to_uint8(qscale, - model->addr.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); + ret = rte_ml_io_float32_to_uint8( + qscale, model->layer[0].info.input[i].nb_elements, + lcl_dbuffer, lcl_qbuffer); break; case RTE_ML_IO_TYPE_INT16: - ret = rte_ml_io_float32_to_int16(qscale, - model->addr.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); + ret = rte_ml_io_float32_to_int16( + qscale, model->layer[0].info.input[i].nb_elements, + lcl_dbuffer, lcl_qbuffer); break; case RTE_ML_IO_TYPE_UINT16: - ret = rte_ml_io_float32_to_uint16(qscale, - model->addr.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); + ret = rte_ml_io_float32_to_uint16( + qscale, model->layer[0].info.input[i].nb_elements, + lcl_dbuffer, lcl_qbuffer); break; case RTE_ML_IO_TYPE_FP16: - ret = rte_ml_io_float32_to_float16(model->addr.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); + ret = rte_ml_io_float32_to_float16( + model->layer[0].info.input[i].nb_elements, lcl_dbuffer, + lcl_qbuffer); break; default: plt_err("Unsupported model_input_type[%u] : %u", i, - model->metadata.input1[i].model_input_type); + model->layer[0].glow.metadata.input1[i].model_input_type); ret = -ENOTSUP; } if (ret < 0) return 
ret; } - lcl_dbuffer += model->addr.input[i].sz_d; - lcl_qbuffer += model->addr.input[i].sz_q; + lcl_dbuffer += model->layer[0].info.input[i].sz_d; + lcl_qbuffer += model->layer[0].info.input[i].sz_q; } return 0; @@ -2152,7 +2191,7 @@ static int cn10k_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **qbuffer, struct rte_ml_buff_seg **dbuffer) { - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; uint8_t model_output_type; uint8_t *lcl_qbuffer; uint8_t *lcl_dbuffer; @@ -2172,58 +2211,60 @@ cn10k_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_ lcl_dbuffer = dbuffer[0]->addr; lcl_qbuffer = qbuffer[0]->addr; - for (i = 0; i < model->metadata.model.num_output; i++) { + for (i = 0; i < model->layer[0].glow.metadata.model.num_output; i++) { if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - output_type = model->metadata.output1[i].output_type; - model_output_type = model->metadata.output1[i].model_output_type; - dscale = model->metadata.output1[i].dscale; + output_type = model->layer[0].glow.metadata.output1[i].output_type; + model_output_type = + model->layer[0].glow.metadata.output1[i].model_output_type; + dscale = model->layer[0].glow.metadata.output1[i].dscale; } else { j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - output_type = model->metadata.output2[j].output_type; - model_output_type = model->metadata.output2[j].model_output_type; - dscale = model->metadata.output2[j].dscale; + output_type = model->layer[0].glow.metadata.output2[j].output_type; + model_output_type = + model->layer[0].glow.metadata.output2[j].model_output_type; + dscale = model->layer[0].glow.metadata.output2[j].dscale; } if (output_type == model_output_type) { - rte_memcpy(lcl_dbuffer, lcl_qbuffer, model->addr.output[i].sz_q); + rte_memcpy(lcl_dbuffer, lcl_qbuffer, model->layer[0].info.output[i].sz_q); } else { - switch (model->metadata.output1[i].model_output_type) { + switch (model->layer[0].glow.metadata.output1[i].model_output_type) { case 
RTE_ML_IO_TYPE_INT8: - ret = rte_ml_io_int8_to_float32(dscale, - model->addr.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); + ret = rte_ml_io_int8_to_float32( + dscale, model->layer[0].info.output[i].nb_elements, + lcl_qbuffer, lcl_dbuffer); break; case RTE_ML_IO_TYPE_UINT8: - ret = rte_ml_io_uint8_to_float32(dscale, - model->addr.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); + ret = rte_ml_io_uint8_to_float32( + dscale, model->layer[0].info.output[i].nb_elements, + lcl_qbuffer, lcl_dbuffer); break; case RTE_ML_IO_TYPE_INT16: - ret = rte_ml_io_int16_to_float32(dscale, - model->addr.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); + ret = rte_ml_io_int16_to_float32( + dscale, model->layer[0].info.output[i].nb_elements, + lcl_qbuffer, lcl_dbuffer); break; case RTE_ML_IO_TYPE_UINT16: - ret = rte_ml_io_uint16_to_float32(dscale, - model->addr.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); + ret = rte_ml_io_uint16_to_float32( + dscale, model->layer[0].info.output[i].nb_elements, + lcl_qbuffer, lcl_dbuffer); break; case RTE_ML_IO_TYPE_FP16: ret = rte_ml_io_float16_to_float32( - model->addr.output[i].nb_elements, lcl_qbuffer, + model->layer[0].info.output[i].nb_elements, lcl_qbuffer, lcl_dbuffer); break; default: plt_err("Unsupported model_output_type[%u] : %u", i, - model->metadata.output1[i].model_output_type); + model->layer[0].glow.metadata.output1[i].model_output_type); ret = -ENOTSUP; } if (ret < 0) return ret; } - lcl_qbuffer += model->addr.output[i].sz_q; - lcl_dbuffer += model->addr.output[i].sz_d; + lcl_qbuffer += model->layer[0].info.output[i].sz_q; + lcl_dbuffer += model->layer[0].info.output[i].sz_d; } return 0; @@ -2251,10 +2292,10 @@ static __rte_always_inline void cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result *result, struct rte_ml_op *op) { - struct cn10k_ml_model_stats *stats; + struct cn10k_ml_layer_stats *stats; struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; - struct cn10k_ml_model 
*model; + struct cnxk_ml_model *model; struct cn10k_ml_qp *qp; uint64_t hw_latency; uint64_t fw_latency; @@ -2264,9 +2305,9 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result if (likely(qp_id >= 0)) { qp = dev->data->queue_pairs[qp_id]; qp->stats.dequeued_count++; - stats = &model->burst_stats[qp_id]; + stats = &model->layer[0].glow.burst_stats[qp_id]; } else { - stats = model->sync_stats; + stats = model->layer[0].glow.sync_stats; } if (unlikely(stats->dequeued_count == stats->hw_reset_count)) { @@ -2470,7 +2511,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) { struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; - struct cn10k_ml_model *model; + struct cnxk_ml_model *model; struct cn10k_ml_req *req; bool timeout; int ret = 0; @@ -2478,7 +2519,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; model = dev->data->models[op->model_id]; - req = model->req; + req = model->layer[0].glow.req; cn10k_ml_set_poll_addr(req); cn10k_ml_prep_fp_job_descriptor(cn10k_mldev, req, op); diff --git a/drivers/ml/cnxk/cnxk_ml_io.h b/drivers/ml/cnxk/cnxk_ml_io.h new file mode 100644 index 0000000000..29ec7ec511 --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_io.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#ifndef _CNXK_ML_IO_H_ +#define _CNXK_ML_IO_H_ + +#include + +/* Maximum number of models per device */ +#define ML_CNXK_MAX_MODELS 16 + +/* Maximum number of layers per model */ +#define ML_CNXK_MODEL_MAX_LAYERS 32 + +/* Maximum number of inputs or outputs per layer or model */ +#define ML_CNXK_MODEL_MAX_INPUT_OUTPUT 32 + +/* Maximum number of dimensions per I/O shape */ +#define ML_CNXK_MODEL_MAX_DIMS 8 + +/* Input / Output structure */ +struct cnxk_ml_io { + /* name */ + char name[RTE_ML_STR_MAX]; + + /* dequantized data type */ + enum rte_ml_io_type dtype; + + /* quantized data type */ + enum rte_ml_io_type qtype; + + /* Number of dimensions in shape */ + uint32_t nb_dims; + + /* Shape of input */ + uint32_t shape[ML_CNXK_MODEL_MAX_DIMS]; + + /* Number of elements */ + uint32_t nb_elements; + + /* Dequantized input size */ + uint32_t sz_d; + + /* Quantized input size */ + uint32_t sz_q; + + /* Scale */ + float scale; +}; + +/* Model / Layer IO structure */ +struct cnxk_ml_io_info { + /* Number of inputs */ + uint16_t nb_inputs; + + /* Model / Layer inputs */ + struct cnxk_ml_io input[ML_CNXK_MODEL_MAX_INPUT_OUTPUT]; + + /* Total size of quantized input */ + uint32_t total_input_sz_q; + + /* Total size of dequantized input */ + uint32_t total_input_sz_d; + + /* Number of outputs */ + uint16_t nb_outputs; + + /* Model / Layer outputs */ + struct cnxk_ml_io output[ML_CNXK_MODEL_MAX_INPUT_OUTPUT]; + + /* Total size of quantized output */ + uint32_t total_output_sz_q; + + /* Total size of dequantized output */ + uint32_t total_output_sz_d; +}; + +#endif /* _CNXK_ML_IO_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_model.c b/drivers/ml/cnxk/cnxk_ml_model.c new file mode 100644 index 0000000000..3d735ced3e --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_model.c @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#include + +#include "cnxk_ml_model.h" diff --git a/drivers/ml/cnxk/cnxk_ml_model.h b/drivers/ml/cnxk/cnxk_ml_model.h new file mode 100644 index 0000000000..a2994dbb71 --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_model.h @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef _CNXK_ML_MODEL_H_ +#define _CNXK_ML_MODEL_H_ + +#include + +#include + +#include "cn10k_ml_model.h" + +#include "cnxk_ml_io.h" + +struct cnxk_ml_dev; +struct cnxk_ml_model; + +/* Model state */ +enum cnxk_ml_model_state { + /* Unknown state */ + ML_CNXK_MODEL_STATE_UNKNOWN, + + /* Model loaded */ + ML_CNXK_MODEL_STATE_LOADED, + + /* A slow-path job is active, start or stop */ + ML_CNXK_MODEL_STATE_JOB_ACTIVE, + + /* Model started */ + ML_CNXK_MODEL_STATE_STARTED, +}; + +/* Layer state */ +enum cnxk_ml_layer_state { + /* Unknown state */ + ML_CNXK_LAYER_STATE_UNKNOWN, + + /* Layer loaded */ + ML_CNXK_LAYER_STATE_LOADED, + + /* A slow-path job is active, start or stop */ + ML_CNXK_LAYER_STATE_JOB_ACTIVE, + + /* Layer started */ + ML_CNXK_LAYER_STATE_STARTED, +}; + +/* Layer object */ +struct cnxk_ml_layer { + /* Name*/ + char name[RTE_ML_STR_MAX]; + + /* Model handle */ + struct cnxk_ml_model *model; + + /* Index mapped with firmware's model_id */ + uint16_t index; + + /* Input / Output */ + struct cnxk_ml_io_info info; + + /* Batch size */ + uint32_t batch_size; + + /* State */ + enum cnxk_ml_layer_state state; + + /* Glow layer specific data */ + struct cn10k_ml_layer_data glow; +}; + +/* Model Object */ +struct cnxk_ml_model { + /* Device reference */ + struct cnxk_ml_dev *cnxk_mldev; + + /* ID */ + uint16_t model_id; + + /* Name */ + char name[RTE_ML_STR_MAX]; + + /* Model specific data - glow */ + struct cn10k_ml_model_data glow; + + /* Batch size */ + uint32_t batch_size; + + /* Number of layers */ + uint16_t nb_layers; + + /* Layer info */ + struct cnxk_ml_layer layer[ML_CNXK_MODEL_MAX_LAYERS]; + + /* State */ + enum 
cnxk_ml_model_state state; + + /* Internal model information structure + * Size of the buffer = sizeof(struct rte_ml_model_info) + * + num_inputs * sizeof(struct rte_ml_io_info) + * + num_outputs * sizeof(struct rte_ml_io_info). + * Structures would be arranged in the same order in the buffer. + */ + uint8_t *info; + + /* Spinlock, used to update model state */ + plt_spinlock_t lock; +}; + +#endif /* _CNXK_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 03a2d4ecf2..72e03b15b5 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -13,6 +13,8 @@ driver_sdk_headers = files( 'cn10k_ml_model.h', 'cn10k_ml_ocm.h', 'cnxk_ml_dev.h', + 'cnxk_ml_io.h', + 'cnxk_ml_model.h', ) sources = files( @@ -21,6 +23,7 @@ sources = files( 'cn10k_ml_model.c', 'cn10k_ml_ocm.c', 'cnxk_ml_dev.c', + 'cnxk_ml_model.c', ) deps += ['mldev', 'common_cnxk', 'kvargs', 'hash'] From patchwork Wed Sep 20 07:24:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131681 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3C4D1425E4; Wed, 20 Sep 2023 09:26:27 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4DB7A410FA; Wed, 20 Sep 2023 09:25:45 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id A866640ED1 for ; Wed, 20 Sep 2023 09:25:37 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38K7JW0s008355 for ; Wed, 20 Sep 2023 00:25:37 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from 
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 05/34] ml/cnxk: add generic cnxk request structure
Date: Wed, 20 Sep 2023 00:24:56 -0700
Message-ID: <20230920072528.14185-6-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>
definitions=2023-09-20_02,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added generic cnxk request structure. Moved common fields from cn10k structures to cnxk structure. Moved job related structures and enumerations to ops headers. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.c | 70 ++++--- drivers/ml/cnxk/cn10k_ml_dev.h | 269 +------------------------ drivers/ml/cnxk/cn10k_ml_model.c | 6 +- drivers/ml/cnxk/cn10k_ml_model.h | 4 +- drivers/ml/cnxk/cn10k_ml_ops.c | 329 +++++++++++++++++-------------- drivers/ml/cnxk/cn10k_ml_ops.h | 296 +++++++++++++++++++++++---- drivers/ml/cnxk/cnxk_ml_ops.c | 7 + drivers/ml/cnxk/cnxk_ml_ops.h | 63 ++++++ drivers/ml/cnxk/meson.build | 2 + 9 files changed, 558 insertions(+), 488 deletions(-) create mode 100644 drivers/ml/cnxk/cnxk_ml_ops.c create mode 100644 drivers/ml/cnxk/cnxk_ml_ops.h diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c index 367fb7014c..f6e05cfc47 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.c +++ b/drivers/ml/cnxk/cn10k_ml_dev.c @@ -23,6 +23,7 @@ #include "cn10k_ml_ops.h" #include "cnxk_ml_dev.h" +#include "cnxk_ml_ops.h" #define CN10K_ML_FW_PATH "fw_path" #define CN10K_ML_FW_ENABLE_DPE_WARNINGS "enable_dpe_warnings" @@ -457,20 +458,23 @@ cn10k_ml_pci_remove(struct rte_pci_device *pci_dev) static void cn10k_ml_fw_print_info(struct cn10k_ml_fw *fw) { - plt_info("ML Firmware Version = %s", fw->req->jd.fw_load.version); - - plt_ml_dbg("Firmware capabilities = 0x%016lx", fw->req->jd.fw_load.cap.u64); - plt_ml_dbg("Version = %s", fw->req->jd.fw_load.version); - plt_ml_dbg("core0_debug_ptr = 0x%016lx", fw->req->jd.fw_load.debug.core0_debug_ptr); - plt_ml_dbg("core1_debug_ptr = 0x%016lx", fw->req->jd.fw_load.debug.core1_debug_ptr); - plt_ml_dbg("debug_buffer_size = %u bytes", 
fw->req->jd.fw_load.debug.debug_buffer_size); + plt_info("ML Firmware Version = %s", fw->req->cn10k_req.jd.fw_load.version); + + plt_ml_dbg("Firmware capabilities = 0x%016lx", fw->req->cn10k_req.jd.fw_load.cap.u64); + plt_ml_dbg("Version = %s", fw->req->cn10k_req.jd.fw_load.version); + plt_ml_dbg("core0_debug_ptr = 0x%016lx", + fw->req->cn10k_req.jd.fw_load.debug.core0_debug_ptr); + plt_ml_dbg("core1_debug_ptr = 0x%016lx", + fw->req->cn10k_req.jd.fw_load.debug.core1_debug_ptr); + plt_ml_dbg("debug_buffer_size = %u bytes", + fw->req->cn10k_req.jd.fw_load.debug.debug_buffer_size); plt_ml_dbg("core0_exception_buffer = 0x%016lx", - fw->req->jd.fw_load.debug.core0_exception_buffer); + fw->req->cn10k_req.jd.fw_load.debug.core0_exception_buffer); plt_ml_dbg("core1_exception_buffer = 0x%016lx", - fw->req->jd.fw_load.debug.core1_exception_buffer); + fw->req->cn10k_req.jd.fw_load.debug.core1_exception_buffer); plt_ml_dbg("exception_state_size = %u bytes", - fw->req->jd.fw_load.debug.exception_state_size); - plt_ml_dbg("flags = 0x%016lx", fw->req->jd.fw_load.flags); + fw->req->cn10k_req.jd.fw_load.debug.exception_state_size); + plt_ml_dbg("flags = 0x%016lx", fw->req->cn10k_req.jd.fw_load.flags); } uint64_t @@ -515,29 +519,30 @@ cn10k_ml_fw_load_asim(struct cn10k_ml_fw *fw) roc_ml_reg_save(&cn10k_mldev->roc, ML_MLR_BASE); /* Update FW load completion structure */ - fw->req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&fw->req->status); - fw->req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_LOAD; - fw->req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &fw->req->result); - fw->req->jd.fw_load.flags = cn10k_ml_fw_flags_get(fw); - plt_write64(ML_CNXK_POLL_JOB_START, &fw->req->status); + fw->req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(&fw->req->cn10k_req.status); + fw->req->cn10k_req.jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_LOAD; + fw->req->cn10k_req.jd.hdr.result = + roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &fw->req->cn10k_req.result); + fw->req->cn10k_req.jd.fw_load.flags = 
cn10k_ml_fw_flags_get(fw); + plt_write64(ML_CNXK_POLL_JOB_START, &fw->req->cn10k_req.status); plt_wmb(); /* Enqueue FW load through scratch registers */ timeout = true; timeout_cycle = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); - roc_ml_scratch_enqueue(&cn10k_mldev->roc, &fw->req->jd); + roc_ml_scratch_enqueue(&cn10k_mldev->roc, &fw->req->cn10k_req.jd); plt_rmb(); do { if (roc_ml_scratch_is_done_bit_set(&cn10k_mldev->roc) && - (plt_read64(&fw->req->status) == ML_CNXK_POLL_JOB_FINISH)) { + (plt_read64(&fw->req->cn10k_req.status) == ML_CNXK_POLL_JOB_FINISH)) { timeout = false; break; } } while (plt_tsc_cycles() < timeout_cycle); /* Check firmware load status, clean-up and exit on failure. */ - if ((!timeout) && (fw->req->result.error_code.u64 == 0)) { + if ((!timeout) && (fw->req->cn10k_req.result.error_code == 0)) { cn10k_ml_fw_print_info(fw); } else { /* Set ML to disable new jobs */ @@ -711,29 +716,30 @@ cn10k_ml_fw_load_cn10ka(struct cn10k_ml_fw *fw, void *buffer, uint64_t size) plt_ml_dbg("ML_SW_RST_CTRL => 0x%08x", reg_val32); /* (12) Wait for notification from firmware that ML is ready for job execution. 
*/ - fw->req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&fw->req->status); - fw->req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_LOAD; - fw->req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &fw->req->result); - fw->req->jd.fw_load.flags = cn10k_ml_fw_flags_get(fw); - plt_write64(ML_CNXK_POLL_JOB_START, &fw->req->status); + fw->req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(&fw->req->cn10k_req.status); + fw->req->cn10k_req.jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_LOAD; + fw->req->cn10k_req.jd.hdr.result = + roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &fw->req->cn10k_req.result); + fw->req->cn10k_req.jd.fw_load.flags = cn10k_ml_fw_flags_get(fw); + plt_write64(ML_CNXK_POLL_JOB_START, &fw->req->cn10k_req.status); plt_wmb(); /* Enqueue FW load through scratch registers */ timeout = true; timeout_cycle = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); - roc_ml_scratch_enqueue(&cn10k_mldev->roc, &fw->req->jd); + roc_ml_scratch_enqueue(&cn10k_mldev->roc, &fw->req->cn10k_req.jd); plt_rmb(); do { if (roc_ml_scratch_is_done_bit_set(&cn10k_mldev->roc) && - (plt_read64(&fw->req->status) == ML_CNXK_POLL_JOB_FINISH)) { + (plt_read64(&fw->req->cn10k_req.status) == ML_CNXK_POLL_JOB_FINISH)) { timeout = false; break; } } while (plt_tsc_cycles() < timeout_cycle); /* Check firmware load status, clean-up and exit on failure. 
*/ - if ((!timeout) && (fw->req->result.error_code.u64 == 0)) { + if ((!timeout) && (fw->req->cn10k_req.result.error_code == 0)) { cn10k_ml_fw_print_info(fw); } else { /* Set ML to disable new jobs */ @@ -823,11 +829,11 @@ cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev) } /* Reserve memzone for firmware load completion and data */ - mz_size = sizeof(struct cn10k_ml_req) + fw_size + FW_STACK_BUFFER_SIZE + + mz_size = sizeof(struct cnxk_ml_req) + fw_size + FW_STACK_BUFFER_SIZE + FW_DEBUG_BUFFER_SIZE + FW_EXCEPTION_BUFFER_SIZE; } else if (roc_env_is_asim()) { /* Reserve memzone for firmware load completion */ - mz_size = sizeof(struct cn10k_ml_req); + mz_size = sizeof(struct cnxk_ml_req); } mz = plt_memzone_reserve_aligned(FW_MEMZONE_NAME, mz_size, 0, ML_CN10K_ALIGN_SIZE); @@ -839,8 +845,8 @@ cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev) fw->req = mz->addr; /* Reset firmware load completion structure */ - memset(&fw->req->jd, 0, sizeof(struct cn10k_ml_jd)); - memset(&fw->req->jd.fw_load.version[0], '\0', MLDEV_FIRMWARE_VERSION_LENGTH); + memset(&fw->req->cn10k_req.jd, 0, sizeof(struct cn10k_ml_jd)); + memset(&fw->req->cn10k_req.jd.fw_load.version[0], '\0', MLDEV_FIRMWARE_VERSION_LENGTH); /* Reset device, if in active state */ if (roc_ml_mlip_is_enabled(&cn10k_mldev->roc)) @@ -848,7 +854,7 @@ cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev) /* Load firmware */ if (roc_env_is_emulator() || roc_env_is_hw()) { - fw->data = PLT_PTR_ADD(mz->addr, sizeof(struct cn10k_ml_req)); + fw->data = PLT_PTR_ADD(mz->addr, sizeof(struct cnxk_ml_req)); ret = cn10k_ml_fw_load_cn10ka(fw, fw_buffer, fw_size); rte_free(fw_buffer); } else if (roc_env_is_asim()) { diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index 99ff0a344a..1852d4f6c9 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -17,9 +17,6 @@ extern struct rte_ml_dev_ops ml_dev_dummy_ops; /* Marvell OCTEON CN10K ML PMD device name */ #define MLDEV_NAME_CN10K_PMD 
ml_cn10k -/* Firmware version string length */ -#define MLDEV_FIRMWARE_VERSION_LENGTH 32 - /* Device alignment size */ #define ML_CN10K_ALIGN_SIZE 128 @@ -52,17 +49,8 @@ extern struct rte_ml_dev_ops ml_dev_dummy_ops; #endif struct cnxk_ml_dev; -struct cn10k_ml_req; -struct cn10k_ml_qp; - -/* Job types */ -enum cn10k_ml_job_type { - ML_CN10K_JOB_TYPE_MODEL_RUN = 0, - ML_CN10K_JOB_TYPE_MODEL_STOP, - ML_CN10K_JOB_TYPE_MODEL_START, - ML_CN10K_JOB_TYPE_FIRMWARE_LOAD, - ML_CN10K_JOB_TYPE_FIRMWARE_SELFTEST, -}; +struct cnxk_ml_req; +struct cnxk_ml_qp; /* Error types enumeration */ enum cn10k_ml_error_etype { @@ -112,251 +100,6 @@ union cn10k_ml_error_code { uint64_t u64; }; -/* Firmware stats */ -struct cn10k_ml_fw_stats { - /* Firmware start cycle */ - uint64_t fw_start; - - /* Firmware end cycle */ - uint64_t fw_end; - - /* Hardware start cycle */ - uint64_t hw_start; - - /* Hardware end cycle */ - uint64_t hw_end; -}; - -/* Result structure */ -struct cn10k_ml_result { - /* Job error code */ - union cn10k_ml_error_code error_code; - - /* Firmware stats */ - struct cn10k_ml_fw_stats stats; - - /* User context pointer */ - void *user_ptr; -}; - -/* Firmware capability structure */ -union cn10k_ml_fw_cap { - uint64_t u64; - - struct { - /* CMPC completion support */ - uint64_t cmpc_completions : 1; - - /* Poll mode completion support */ - uint64_t poll_completions : 1; - - /* SSO completion support */ - uint64_t sso_completions : 1; - - /* Support for model side loading */ - uint64_t side_load_model : 1; - - /* Batch execution */ - uint64_t batch_run : 1; - - /* Max number of models to be loaded in parallel */ - uint64_t max_models : 8; - - /* Firmware statistics */ - uint64_t fw_stats : 1; - - /* Hardware statistics */ - uint64_t hw_stats : 1; - - /* Max number of batches */ - uint64_t max_num_batches : 16; - - uint64_t rsvd : 33; - } s; -}; - -/* Firmware debug info structure */ -struct cn10k_ml_fw_debug { - /* ACC core 0 debug buffer */ - uint64_t core0_debug_ptr; - - 
/* ACC core 1 debug buffer */ - uint64_t core1_debug_ptr; - - /* ACC core 0 exception state buffer */ - uint64_t core0_exception_buffer; - - /* ACC core 1 exception state buffer */ - uint64_t core1_exception_buffer; - - /* Debug buffer size per core */ - uint32_t debug_buffer_size; - - /* Exception state dump size */ - uint32_t exception_state_size; -}; - -/* Job descriptor header (32 bytes) */ -struct cn10k_ml_jd_header { - /* Job completion structure */ - struct ml_jce_s jce; - - /* Model ID */ - uint64_t model_id : 8; - - /* Job type */ - uint64_t job_type : 8; - - /* Flags for fast-path jobs */ - uint64_t fp_flags : 16; - - /* Flags for slow-path jobs */ - uint64_t sp_flags : 16; - uint64_t rsvd : 16; - - /* Job result pointer */ - uint64_t *result; -}; - -/* Extra arguments for job descriptor */ -union cn10k_ml_jd_extended_args { - struct cn10k_ml_jd_extended_args_section_start { - /** DDR Scratch base address */ - uint64_t ddr_scratch_base_address; - - /** DDR Scratch range start */ - uint64_t ddr_scratch_range_start; - - /** DDR Scratch range end */ - uint64_t ddr_scratch_range_end; - - uint8_t rsvd[104]; - } start; -}; - -/* Job descriptor structure */ -struct cn10k_ml_jd { - /* Job descriptor header (32 bytes) */ - struct cn10k_ml_jd_header hdr; - - union { - struct cn10k_ml_jd_section_fw_load { - /* Firmware capability structure (8 bytes) */ - union cn10k_ml_fw_cap cap; - - /* Firmware version (32 bytes) */ - uint8_t version[MLDEV_FIRMWARE_VERSION_LENGTH]; - - /* Debug capability structure (40 bytes) */ - struct cn10k_ml_fw_debug debug; - - /* Flags to control error handling */ - uint64_t flags; - - uint8_t rsvd[8]; - } fw_load; - - struct cn10k_ml_jd_section_model_start { - /* Extended arguments */ - uint64_t extended_args; - - /* Destination model start address in DDR relative to ML_MLR_BASE */ - uint64_t model_dst_ddr_addr; - - /* Offset to model init section in the model */ - uint64_t model_init_offset : 32; - - /* Size of init section in the model */ 
- uint64_t model_init_size : 32; - - /* Offset to model main section in the model */ - uint64_t model_main_offset : 32; - - /* Size of main section in the model */ - uint64_t model_main_size : 32; - - /* Offset to model finish section in the model */ - uint64_t model_finish_offset : 32; - - /* Size of finish section in the model */ - uint64_t model_finish_size : 32; - - /* Offset to WB in model bin */ - uint64_t model_wb_offset : 32; - - /* Number of model layers */ - uint64_t num_layers : 8; - - /* Number of gather entries, 0 means linear input mode (= no gather) */ - uint64_t num_gather_entries : 8; - - /* Number of scatter entries 0 means linear input mode (= no scatter) */ - uint64_t num_scatter_entries : 8; - - /* Tile mask to load model */ - uint64_t tilemask : 8; - - /* Batch size of model */ - uint64_t batch_size : 32; - - /* OCM WB base address */ - uint64_t ocm_wb_base_address : 32; - - /* OCM WB range start */ - uint64_t ocm_wb_range_start : 32; - - /* OCM WB range End */ - uint64_t ocm_wb_range_end : 32; - - /* DDR WB address */ - uint64_t ddr_wb_base_address; - - /* DDR WB range start */ - uint64_t ddr_wb_range_start : 32; - - /* DDR WB range end */ - uint64_t ddr_wb_range_end : 32; - - union { - /* Points to gather list if num_gather_entries > 0 */ - void *gather_list; - struct { - /* Linear input mode */ - uint64_t ddr_range_start : 32; - uint64_t ddr_range_end : 32; - } s; - } input; - - union { - /* Points to scatter list if num_scatter_entries > 0 */ - void *scatter_list; - struct { - /* Linear output mode */ - uint64_t ddr_range_start : 32; - uint64_t ddr_range_end : 32; - } s; - } output; - } model_start; - - struct cn10k_ml_jd_section_model_stop { - uint8_t rsvd[96]; - } model_stop; - - struct cn10k_ml_jd_section_model_run { - /* Address of the input for the run relative to ML_MLR_BASE */ - uint64_t input_ddr_addr; - - /* Address of the output for the run relative to ML_MLR_BASE */ - uint64_t output_ddr_addr; - - /* Number of batches to run in 
variable batch processing */ - uint16_t num_batches; - - uint8_t rsvd[78]; - } model_run; - }; -}; - /* ML firmware structure */ struct cn10k_ml_fw { /* Device reference */ @@ -375,7 +118,7 @@ struct cn10k_ml_fw { uint8_t *data; /* Firmware load / handshake request structure */ - struct cn10k_ml_req *req; + struct cnxk_ml_req *req; }; /* Extended stats types enum */ @@ -488,9 +231,9 @@ struct cn10k_ml_dev { bool (*ml_jcmdq_enqueue)(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd); /* Poll handling function pointers */ - void (*set_poll_addr)(struct cn10k_ml_req *req); - void (*set_poll_ptr)(struct cn10k_ml_req *req); - uint64_t (*get_poll_ptr)(struct cn10k_ml_req *req); + void (*set_poll_addr)(struct cnxk_ml_req *req); + void (*set_poll_ptr)(struct cnxk_ml_req *req); + uint64_t (*get_poll_ptr)(struct cnxk_ml_req *req); }; uint64_t cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw); diff --git a/drivers/ml/cnxk/cn10k_ml_model.c b/drivers/ml/cnxk/cn10k_ml_model.c index 0ea6520bf7..2a0ae44cfd 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.c +++ b/drivers/ml/cnxk/cn10k_ml_model.c @@ -12,6 +12,7 @@ #include "cnxk_ml_dev.h" #include "cnxk_ml_model.h" +#include "cnxk_ml_ops.h" static enum rte_ml_io_type cn10k_ml_io_type_map(uint8_t type) @@ -551,7 +552,6 @@ void cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model) { struct cn10k_ml_model_metadata *metadata; - struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; struct rte_ml_model_info *info; struct rte_ml_io_info *output; @@ -560,7 +560,6 @@ cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model) uint8_t i; cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; metadata = &model->glow.metadata; info = PLT_PTR_CAST(model->info); input = PLT_PTR_ADD(info, sizeof(struct rte_ml_model_info)); @@ -577,7 +576,8 @@ cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model) info->io_layout = RTE_ML_IO_LAYOUT_PACKED; info->min_batches = 
model->batch_size; info->max_batches = - cn10k_mldev->fw.req->jd.fw_load.cap.s.max_num_batches / model->batch_size; + cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_num_batches / + model->batch_size; info->nb_inputs = metadata->model.num_input; info->input_info = input; info->nb_outputs = metadata->model.num_output; diff --git a/drivers/ml/cnxk/cn10k_ml_model.h b/drivers/ml/cnxk/cn10k_ml_model.h index 206a369ca7..74ada1531a 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.h +++ b/drivers/ml/cnxk/cn10k_ml_model.h @@ -11,10 +11,10 @@ #include "cn10k_ml_dev.h" #include "cn10k_ml_ocm.h" -#include "cn10k_ml_ops.h" struct cnxk_ml_model; struct cnxk_ml_layer; +struct cnxk_ml_req; /* Model Metadata : v 2.3.0.1 */ #define MRVL_ML_MODEL_MAGIC_STRING "MRVL" @@ -444,7 +444,7 @@ struct cn10k_ml_layer_data { struct cn10k_ml_ocm_layer_map ocm_map; /* Layer: Slow-path operations request pointer */ - struct cn10k_ml_req *req; + struct cnxk_ml_req *req; /* Layer: Stats for burst ops */ struct cn10k_ml_layer_stats *burst_stats; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index a52509630f..2b1fa08154 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -13,6 +13,7 @@ #include "cnxk_ml_dev.h" #include "cnxk_ml_model.h" +#include "cnxk_ml_ops.h" /* ML model macros */ #define CN10K_ML_MODEL_MEMZONE_NAME "ml_cn10k_model_mz" @@ -80,31 +81,31 @@ print_line(FILE *fp, int len) } static inline void -cn10k_ml_set_poll_addr(struct cn10k_ml_req *req) +cn10k_ml_set_poll_addr(struct cnxk_ml_req *req) { - req->compl_W1 = PLT_U64_CAST(&req->status); + req->status = &req->cn10k_req.status; } static inline void -cn10k_ml_set_poll_ptr(struct cn10k_ml_req *req) +cn10k_ml_set_poll_ptr(struct cnxk_ml_req *req) { - plt_write64(ML_CNXK_POLL_JOB_START, req->compl_W1); + plt_write64(ML_CNXK_POLL_JOB_START, req->status); } static inline uint64_t -cn10k_ml_get_poll_ptr(struct cn10k_ml_req *req) +cn10k_ml_get_poll_ptr(struct cnxk_ml_req 
*req) { - return plt_read64(req->compl_W1); + return plt_read64(req->status); } static void qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) { - snprintf(name, size, "cn10k_ml_qp_mem_%u:%u", dev_id, qp_id); + snprintf(name, size, "cnxk_ml_qp_mem_%u:%u", dev_id, qp_id); } static int -cn10k_ml_qp_destroy(const struct rte_ml_dev *dev, struct cn10k_ml_qp *qp) +cnxk_ml_qp_destroy(const struct rte_ml_dev *dev, struct cnxk_ml_qp *qp) { const struct rte_memzone *qp_mem; char name[RTE_MEMZONE_NAMESIZE]; @@ -124,14 +125,14 @@ cn10k_ml_qp_destroy(const struct rte_ml_dev *dev, struct cn10k_ml_qp *qp) static int cn10k_ml_dev_queue_pair_release(struct rte_ml_dev *dev, uint16_t queue_pair_id) { - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; int ret; qp = dev->data->queue_pairs[queue_pair_id]; if (qp == NULL) return -EINVAL; - ret = cn10k_ml_qp_destroy(dev, qp); + ret = cnxk_ml_qp_destroy(dev, qp); if (ret) { plt_err("Could not destroy queue pair %u", queue_pair_id); return ret; @@ -142,18 +143,18 @@ cn10k_ml_dev_queue_pair_release(struct rte_ml_dev *dev, uint16_t queue_pair_id) return 0; } -static struct cn10k_ml_qp * -cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_desc, int socket_id) +static struct cnxk_ml_qp * +cnxk_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_desc, int socket_id) { const struct rte_memzone *qp_mem; char name[RTE_MEMZONE_NAMESIZE]; - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; uint32_t len; uint8_t *va; uint64_t i; /* Allocate queue pair */ - qp = rte_zmalloc_socket("cn10k_ml_pmd_queue_pair", sizeof(struct cn10k_ml_qp), ROC_ALIGN, + qp = rte_zmalloc_socket("cn10k_ml_pmd_queue_pair", sizeof(struct cnxk_ml_qp), ROC_ALIGN, socket_id); if (qp == NULL) { plt_err("Could not allocate queue pair"); @@ -161,7 +162,7 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des } /* For request queue */ - len = nb_desc * sizeof(struct cn10k_ml_req); + len = nb_desc * 
sizeof(struct cnxk_ml_req); qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id, qp_id); qp_mem = rte_memzone_reserve_aligned( name, len, socket_id, RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB, ROC_ALIGN); @@ -175,7 +176,7 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des /* Initialize Request queue */ qp->id = qp_id; - qp->queue.reqs = (struct cn10k_ml_req *)va; + qp->queue.reqs = (struct cnxk_ml_req *)va; qp->queue.head = 0; qp->queue.tail = 0; qp->queue.wait_cycles = ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); @@ -187,8 +188,9 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des /* Initialize job command */ for (i = 0; i < qp->nb_desc; i++) { - memset(&qp->queue.reqs[i].jd, 0, sizeof(struct cn10k_ml_jd)); - qp->queue.reqs[i].jcmd.w1.s.jobptr = PLT_U64_CAST(&qp->queue.reqs[i].jd); + memset(&qp->queue.reqs[i].cn10k_req.jd, 0, sizeof(struct cn10k_ml_jd)); + qp->queue.reqs[i].cn10k_req.jcmd.w1.s.jobptr = + PLT_U64_CAST(&qp->queue.reqs[i].cn10k_req.jd); } return qp; @@ -335,7 +337,7 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp) static void cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml_model *model, - struct cn10k_ml_req *req, enum cn10k_ml_job_type job_type) + struct cnxk_ml_req *req, enum cn10k_ml_job_type job_type) { struct cn10k_ml_model_metadata *metadata; struct cn10k_ml_layer_addr *addr; @@ -343,79 +345,88 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml metadata = &model->glow.metadata; addr = &model->layer[0].glow.addr; - memset(&req->jd, 0, sizeof(struct cn10k_ml_jd)); - req->jd.hdr.jce.w0.u64 = 0; - req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&req->status); - req->jd.hdr.model_id = model->model_id; - req->jd.hdr.job_type = job_type; - req->jd.hdr.fp_flags = 0x0; - req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->result); + memset(&req->cn10k_req.jd, 0, sizeof(struct 
cn10k_ml_jd)); + req->cn10k_req.jd.hdr.jce.w0.u64 = 0; + req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(&req->cn10k_req.status); + req->cn10k_req.jd.hdr.model_id = model->model_id; + req->cn10k_req.jd.hdr.job_type = job_type; + req->cn10k_req.jd.hdr.fp_flags = 0x0; + req->cn10k_req.jd.hdr.result = + roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.result); if (job_type == ML_CN10K_JOB_TYPE_MODEL_START) { if (!model->glow.metadata.model.ocm_relocatable) - req->jd.hdr.sp_flags = ML_CN10K_SP_FLAGS_OCM_NONRELOCATABLE; + req->cn10k_req.jd.hdr.sp_flags = ML_CN10K_SP_FLAGS_OCM_NONRELOCATABLE; else - req->jd.hdr.sp_flags = 0x0; + req->cn10k_req.jd.hdr.sp_flags = 0x0; - req->jd.hdr.sp_flags |= ML_CN10K_SP_FLAGS_EXTENDED_LOAD_JD; - req->jd.model_start.extended_args = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->extended_args)); - req->jd.model_start.model_dst_ddr_addr = + req->cn10k_req.jd.hdr.sp_flags |= ML_CN10K_SP_FLAGS_EXTENDED_LOAD_JD; + req->cn10k_req.jd.model_start.extended_args = PLT_U64_CAST( + roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.extended_args)); + req->cn10k_req.jd.model_start.model_dst_ddr_addr = PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, addr->init_run_addr)); - req->jd.model_start.model_init_offset = 0x0; - req->jd.model_start.model_main_offset = metadata->init_model.file_size; - req->jd.model_start.model_finish_offset = + req->cn10k_req.jd.model_start.model_init_offset = 0x0; + req->cn10k_req.jd.model_start.model_main_offset = metadata->init_model.file_size; + req->cn10k_req.jd.model_start.model_finish_offset = metadata->init_model.file_size + metadata->main_model.file_size; - req->jd.model_start.model_init_size = metadata->init_model.file_size; - req->jd.model_start.model_main_size = metadata->main_model.file_size; - req->jd.model_start.model_finish_size = metadata->finish_model.file_size; - req->jd.model_start.model_wb_offset = metadata->init_model.file_size + - metadata->main_model.file_size + - 
metadata->finish_model.file_size; - req->jd.model_start.num_layers = metadata->model.num_layers; - req->jd.model_start.num_gather_entries = 0; - req->jd.model_start.num_scatter_entries = 0; - req->jd.model_start.tilemask = 0; /* Updated after reserving pages */ - req->jd.model_start.batch_size = model->batch_size; - req->jd.model_start.ocm_wb_base_address = 0; /* Updated after reserving pages */ - req->jd.model_start.ocm_wb_range_start = metadata->model.ocm_wb_range_start; - req->jd.model_start.ocm_wb_range_end = metadata->model.ocm_wb_range_end; - req->jd.model_start.ddr_wb_base_address = PLT_U64_CAST(roc_ml_addr_ap2mlip( - &cn10k_mldev->roc, - PLT_PTR_ADD(addr->finish_load_addr, metadata->finish_model.file_size))); - req->jd.model_start.ddr_wb_range_start = metadata->model.ddr_wb_range_start; - req->jd.model_start.ddr_wb_range_end = metadata->model.ddr_wb_range_end; - req->jd.model_start.input.s.ddr_range_start = metadata->model.ddr_input_range_start; - req->jd.model_start.input.s.ddr_range_end = metadata->model.ddr_input_range_end; - req->jd.model_start.output.s.ddr_range_start = + req->cn10k_req.jd.model_start.model_init_size = metadata->init_model.file_size; + req->cn10k_req.jd.model_start.model_main_size = metadata->main_model.file_size; + req->cn10k_req.jd.model_start.model_finish_size = metadata->finish_model.file_size; + req->cn10k_req.jd.model_start.model_wb_offset = metadata->init_model.file_size + + metadata->main_model.file_size + + metadata->finish_model.file_size; + req->cn10k_req.jd.model_start.num_layers = metadata->model.num_layers; + req->cn10k_req.jd.model_start.num_gather_entries = 0; + req->cn10k_req.jd.model_start.num_scatter_entries = 0; + req->cn10k_req.jd.model_start.tilemask = 0; /* Updated after reserving pages */ + req->cn10k_req.jd.model_start.batch_size = model->batch_size; + req->cn10k_req.jd.model_start.ocm_wb_base_address = + 0; /* Updated after reserving pages */ + req->cn10k_req.jd.model_start.ocm_wb_range_start = + 
metadata->model.ocm_wb_range_start; + req->cn10k_req.jd.model_start.ocm_wb_range_end = metadata->model.ocm_wb_range_end; + req->cn10k_req.jd.model_start.ddr_wb_base_address = + PLT_U64_CAST(roc_ml_addr_ap2mlip( + &cn10k_mldev->roc, PLT_PTR_ADD(addr->finish_load_addr, + metadata->finish_model.file_size))); + req->cn10k_req.jd.model_start.ddr_wb_range_start = + metadata->model.ddr_wb_range_start; + req->cn10k_req.jd.model_start.ddr_wb_range_end = metadata->model.ddr_wb_range_end; + req->cn10k_req.jd.model_start.input.s.ddr_range_start = + metadata->model.ddr_input_range_start; + req->cn10k_req.jd.model_start.input.s.ddr_range_end = + metadata->model.ddr_input_range_end; + req->cn10k_req.jd.model_start.output.s.ddr_range_start = metadata->model.ddr_output_range_start; - req->jd.model_start.output.s.ddr_range_end = metadata->model.ddr_output_range_end; + req->cn10k_req.jd.model_start.output.s.ddr_range_end = + metadata->model.ddr_output_range_end; - req->extended_args.start.ddr_scratch_base_address = PLT_U64_CAST( + req->cn10k_req.extended_args.start.ddr_scratch_base_address = PLT_U64_CAST( roc_ml_addr_ap2mlip(&cn10k_mldev->roc, addr->scratch_base_addr)); - req->extended_args.start.ddr_scratch_range_start = + req->cn10k_req.extended_args.start.ddr_scratch_range_start = metadata->model.ddr_scratch_range_start; - req->extended_args.start.ddr_scratch_range_end = + req->cn10k_req.extended_args.start.ddr_scratch_range_end = metadata->model.ddr_scratch_range_end; } } static __rte_always_inline void -cn10k_ml_prep_fp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cn10k_ml_req *req, +cn10k_ml_prep_fp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml_req *req, struct rte_ml_op *op) { - req->jd.hdr.jce.w0.u64 = 0; - req->jd.hdr.jce.w1.u64 = req->compl_W1; - req->jd.hdr.model_id = op->model_id; - req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_MODEL_RUN; - req->jd.hdr.fp_flags = ML_FLAGS_POLL_COMPL; - req->jd.hdr.sp_flags = 0x0; - req->jd.hdr.result = 
roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->result); - req->jd.model_run.input_ddr_addr = + req->cn10k_req.jd.hdr.jce.w0.u64 = 0; + req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(req->status); + req->cn10k_req.jd.hdr.model_id = op->model_id; + req->cn10k_req.jd.hdr.job_type = ML_CN10K_JOB_TYPE_MODEL_RUN; + req->cn10k_req.jd.hdr.fp_flags = ML_FLAGS_POLL_COMPL; + req->cn10k_req.jd.hdr.sp_flags = 0x0; + req->cn10k_req.jd.hdr.result = + roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.result); + req->cn10k_req.jd.model_run.input_ddr_addr = PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, op->input[0]->addr)); - req->jd.model_run.output_ddr_addr = + req->cn10k_req.jd.model_run.output_ddr_addr = PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, op->output[0]->addr)); - req->jd.model_run.num_batches = op->nb_batches; + req->cn10k_req.jd.model_run.num_batches = op->nb_batches; } struct xstat_info { @@ -863,7 +874,7 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id) op.input = &inp; op.output = &out; - memset(model->layer[0].glow.req, 0, sizeof(struct cn10k_ml_req)); + memset(model->layer[0].glow.req, 0, sizeof(struct cnxk_ml_req)); ret = cn10k_ml_inference_sync(dev, &op); plt_memzone_free(mz); @@ -906,7 +917,7 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; struct cn10k_ml_ocm *ocm; - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; uint16_t model_id; uint32_t mz_size; uint16_t tile_id; @@ -1103,7 +1114,7 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev) struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; uint16_t model_id; uint16_t qp_id; @@ -1138,7 +1149,7 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev) for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { qp = dev->data->queue_pairs[qp_id]; if (qp != NULL) { - if (cn10k_ml_qp_destroy(dev, qp) != 
0) + if (cnxk_ml_qp_destroy(dev, qp) != 0) plt_err("Could not destroy queue pair %u", qp_id); dev->data->queue_pairs[qp_id] = NULL; } @@ -1215,7 +1226,7 @@ cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, const struct rte_ml_dev_qp_conf *qp_conf, int socket_id) { struct rte_ml_dev_info dev_info; - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; uint32_t nb_desc; if (queue_pair_id >= dev->data->nb_queue_pairs) { @@ -1241,7 +1252,7 @@ cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, */ nb_desc = (qp_conf->nb_desc == dev_info.max_desc) ? dev_info.max_desc : qp_conf->nb_desc + 1; - qp = cn10k_ml_qp_create(dev, queue_pair_id, nb_desc, socket_id); + qp = cnxk_ml_qp_create(dev, queue_pair_id, nb_desc, socket_id); if (qp == NULL) { plt_err("Could not create queue pair %u", queue_pair_id); return -ENOMEM; @@ -1254,7 +1265,7 @@ cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, static int cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats) { - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; int qp_id; for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { @@ -1271,7 +1282,7 @@ cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats) static void cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev) { - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; int qp_id; for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { @@ -1487,20 +1498,22 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) /* Dump debug buffer */ for (core_id = 0; core_id <= 1; core_id++) { - bufsize = fw->req->jd.fw_load.debug.debug_buffer_size; + bufsize = fw->req->cn10k_req.jd.fw_load.debug.debug_buffer_size; if (core_id == 0) { head_loc = roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_HEAD_C0); tail_loc = roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_TAIL_C0); - head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core0_debug_ptr); + 
head_ptr = + PLT_PTR_CAST(fw->req->cn10k_req.jd.fw_load.debug.core0_debug_ptr); head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, head_ptr); } else { head_loc = roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_HEAD_C1); tail_loc = roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_DBG_BUFFER_TAIL_C1); - head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core1_debug_ptr); + head_ptr = + PLT_PTR_CAST(fw->req->cn10k_req.jd.fw_load.debug.core1_debug_ptr); head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, head_ptr); } if (head_loc < tail_loc) { @@ -1513,17 +1526,19 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) /* Dump exception info */ for (core_id = 0; core_id <= 1; core_id++) { - bufsize = fw->req->jd.fw_load.debug.exception_state_size; + bufsize = fw->req->cn10k_req.jd.fw_load.debug.exception_state_size; if ((core_id == 0) && (roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0) != 0)) { - head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core0_exception_buffer); + head_ptr = PLT_PTR_CAST( + fw->req->cn10k_req.jd.fw_load.debug.core0_exception_buffer); fprintf(fp, "ML_SCRATCH_EXCEPTION_SP_C0 = 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0)); head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, head_ptr); fprintf(fp, "%.*s", bufsize, head_ptr); } else if ((core_id == 1) && (roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1) != 0)) { - head_ptr = PLT_PTR_CAST(fw->req->jd.fw_load.debug.core1_exception_buffer); + head_ptr = PLT_PTR_CAST( + fw->req->cn10k_req.jd.fw_load.debug.core1_exception_buffer); fprintf(fp, "ML_SCRATCH_EXCEPTION_SP_C1 = 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1)); head_ptr = roc_ml_addr_mlip2ap(&cn10k_mldev->roc, head_ptr); @@ -1540,14 +1555,14 @@ cn10k_ml_dev_selftest(struct rte_ml_dev *dev) struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; const struct plt_memzone *mz; - struct cn10k_ml_req *req; + struct cnxk_ml_req *req; uint64_t 
timeout_cycle; bool timeout; int ret; cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; - mz = plt_memzone_reserve_aligned("dev_selftest", sizeof(struct cn10k_ml_req), 0, + mz = plt_memzone_reserve_aligned("dev_selftest", sizeof(struct cnxk_ml_req), 0, ML_CN10K_ALIGN_SIZE); if (mz == NULL) { plt_err("Could not allocate reserved memzone"); @@ -1556,23 +1571,24 @@ cn10k_ml_dev_selftest(struct rte_ml_dev *dev) req = mz->addr; /* Prepare load completion structure */ - memset(&req->jd, 0, sizeof(struct cn10k_ml_jd)); - req->jd.hdr.jce.w1.u64 = PLT_U64_CAST(&req->status); - req->jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_SELFTEST; - req->jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->result); - req->jd.fw_load.flags = cn10k_ml_fw_flags_get(&cn10k_mldev->fw); - plt_write64(ML_CNXK_POLL_JOB_START, &req->status); + memset(&req->cn10k_req.jd, 0, sizeof(struct cn10k_ml_jd)); + req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(&req->cn10k_req.status); + req->cn10k_req.jd.hdr.job_type = ML_CN10K_JOB_TYPE_FIRMWARE_SELFTEST; + req->cn10k_req.jd.hdr.result = + roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.result); + req->cn10k_req.jd.fw_load.flags = cn10k_ml_fw_flags_get(&cn10k_mldev->fw); + plt_write64(ML_CNXK_POLL_JOB_START, &req->cn10k_req.status); plt_wmb(); /* Enqueue firmware selftest request through scratch registers */ timeout = true; timeout_cycle = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); - roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->jd); + roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jd); plt_rmb(); do { if (roc_ml_scratch_is_done_bit_set(&cn10k_mldev->roc) && - (plt_read64(&req->status) == ML_CNXK_POLL_JOB_FINISH)) { + (plt_read64(&req->cn10k_req.status) == ML_CNXK_POLL_JOB_FINISH)) { timeout = false; break; } @@ -1583,7 +1599,7 @@ cn10k_ml_dev_selftest(struct rte_ml_dev *dev) if (timeout) { ret = -ETIME; } else { - if (req->result.error_code.u64 != 0) + if 
(req->cn10k_req.result.error_code != 0) ret = -1; } @@ -1656,7 +1672,7 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, mz_size = PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), ML_CN10K_ALIGN_SIZE) + 2 * model_data_size + model_scratch_size + model_info_size + - PLT_ALIGN_CEIL(sizeof(struct cn10k_ml_req), ML_CN10K_ALIGN_SIZE) + + PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_req), ML_CN10K_ALIGN_SIZE) + model_stats_size; /* Allocate memzone for model object and model data */ @@ -1728,7 +1744,7 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, /* Reset burst and sync stats */ model->layer[0].glow.burst_stats = PLT_PTR_ADD(model->layer[0].glow.req, - PLT_ALIGN_CEIL(sizeof(struct cn10k_ml_req), ML_CN10K_ALIGN_SIZE)); + PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_req), ML_CN10K_ALIGN_SIZE)); for (qp_id = 0; qp_id < dev->data->nb_queue_pairs + 1; qp_id++) { model->layer[0].glow.burst_stats[qp_id].hw_latency_tot = 0; model->layer[0].glow.burst_stats[qp_id].hw_latency_min = UINT64_MAX; @@ -1792,7 +1808,7 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; struct cn10k_ml_ocm *ocm; - struct cn10k_ml_req *req; + struct cnxk_ml_req *req; bool job_enqueued; bool job_dequeued; @@ -1817,10 +1833,10 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) /* Prepare JD */ req = model->layer[0].glow.req; cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_START); - req->result.error_code.u64 = 0x0; - req->result.user_ptr = NULL; + req->cn10k_req.result.error_code = 0x0; + req->cn10k_req.result.user_ptr = NULL; - plt_write64(ML_CNXK_POLL_JOB_START, &req->status); + plt_write64(ML_CNXK_POLL_JOB_START, &req->cn10k_req.status); plt_wmb(); num_tiles = model->layer[0].glow.metadata.model.tile_end - @@ -1880,8 +1896,8 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) /* Update JD */ 
cn10k_ml_ocm_tilecount(model->layer[0].glow.ocm_map.tilemask, &tile_start, &tile_end); - req->jd.model_start.tilemask = GENMASK_ULL(tile_end, tile_start); - req->jd.model_start.ocm_wb_base_address = + req->cn10k_req.jd.model_start.tilemask = GENMASK_ULL(tile_end, tile_start); + req->cn10k_req.jd.model_start.ocm_wb_base_address = model->layer[0].glow.ocm_map.wb_page_start * ocm->page_size; job_enqueued = false; @@ -1889,19 +1905,21 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) do { if (!job_enqueued) { req->timeout = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); - job_enqueued = roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->jd); + job_enqueued = + roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jd); } if (job_enqueued && !job_dequeued) - job_dequeued = roc_ml_scratch_dequeue(&cn10k_mldev->roc, &req->jd); + job_dequeued = + roc_ml_scratch_dequeue(&cn10k_mldev->roc, &req->cn10k_req.jd); if (job_dequeued) break; } while (plt_tsc_cycles() < req->timeout); if (job_dequeued) { - if (plt_read64(&req->status) == ML_CNXK_POLL_JOB_FINISH) { - if (req->result.error_code.u64 == 0) + if (plt_read64(&req->cn10k_req.status) == ML_CNXK_POLL_JOB_FINISH) { + if (req->cn10k_req.result.error_code == 0) ret = 0; else ret = -1; @@ -1954,7 +1972,7 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; struct cn10k_ml_ocm *ocm; - struct cn10k_ml_req *req; + struct cnxk_ml_req *req; bool job_enqueued; bool job_dequeued; @@ -1974,10 +1992,10 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) /* Prepare JD */ req = model->layer[0].glow.req; cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_STOP); - req->result.error_code.u64 = 0x0; - req->result.user_ptr = NULL; + req->cn10k_req.result.error_code = 0x0; + req->cn10k_req.result.user_ptr = NULL; - plt_write64(ML_CNXK_POLL_JOB_START, &req->status); + plt_write64(ML_CNXK_POLL_JOB_START, 
&req->cn10k_req.status); plt_wmb(); locked = false; @@ -2017,19 +2035,21 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) do { if (!job_enqueued) { req->timeout = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); - job_enqueued = roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->jd); + job_enqueued = + roc_ml_scratch_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jd); } if (job_enqueued && !job_dequeued) - job_dequeued = roc_ml_scratch_dequeue(&cn10k_mldev->roc, &req->jd); + job_dequeued = + roc_ml_scratch_dequeue(&cn10k_mldev->roc, &req->cn10k_req.jd); if (job_dequeued) break; } while (plt_tsc_cycles() < req->timeout); if (job_dequeued) { - if (plt_read64(&req->status) == ML_CNXK_POLL_JOB_FINISH) { - if (req->result.error_code.u64 == 0x0) + if (plt_read64(&req->cn10k_req.status) == ML_CNXK_POLL_JOB_FINISH) { + if (req->cn10k_req.result.error_code == 0x0) ret = 0; else ret = -1; @@ -2289,18 +2309,23 @@ queue_free_count(uint64_t head, uint64_t tail, uint64_t nb_desc) } static __rte_always_inline void -cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result *result, - struct rte_ml_op *op) +cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cnxk_ml_req *req) { + union cn10k_ml_error_code *error_code; struct cn10k_ml_layer_stats *stats; struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; + struct cn10k_ml_result *result; struct cnxk_ml_model *model; - struct cn10k_ml_qp *qp; + struct cnxk_ml_qp *qp; + struct rte_ml_op *op; uint64_t hw_latency; uint64_t fw_latency; - if (likely(result->error_code.u64 == 0)) { + result = &req->cn10k_req.result; + op = req->op; + + if (likely(result->error_code == 0)) { model = dev->data->models[op->model_id]; if (likely(qp_id >= 0)) { qp = dev->data->queue_pairs[qp_id]; @@ -2331,7 +2356,7 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result stats->fw_latency_max = PLT_MAX(stats->fw_latency_max, fw_latency); stats->dequeued_count++; - 
op->impl_opaque = result->error_code.u64; + op->impl_opaque = result->error_code; op->status = RTE_ML_OP_STATUS_SUCCESS; } else { if (likely(qp_id >= 0)) { @@ -2340,7 +2365,8 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result } /* Handle driver error */ - if (result->error_code.s.etype == ML_ETYPE_DRIVER) { + error_code = (union cn10k_ml_error_code *)&result->error_code; + if (error_code->s.etype == ML_ETYPE_DRIVER) { cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; @@ -2348,15 +2374,15 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cn10k_ml_result if ((roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0) != 0) || (roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1) != 0)) - result->error_code.s.stype = ML_DRIVER_ERR_EXCEPTION; + error_code->s.stype = ML_DRIVER_ERR_EXCEPTION; else if ((roc_ml_reg_read64(&cn10k_mldev->roc, ML_CORE_INT_LO) != 0) || (roc_ml_reg_read64(&cn10k_mldev->roc, ML_CORE_INT_HI) != 0)) - result->error_code.s.stype = ML_DRIVER_ERR_FW_ERROR; + error_code->s.stype = ML_DRIVER_ERR_FW_ERROR; else - result->error_code.s.stype = ML_DRIVER_ERR_UNKNOWN; + error_code->s.stype = ML_DRIVER_ERR_UNKNOWN; } - op->impl_opaque = result->error_code.u64; + op->impl_opaque = result->error_code; op->status = RTE_ML_OP_STATUS_ERROR; } @@ -2367,11 +2393,12 @@ __rte_hot uint16_t cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops) { + union cn10k_ml_error_code *error_code; struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; - struct cn10k_ml_queue *queue; - struct cn10k_ml_req *req; - struct cn10k_ml_qp *qp; + struct cnxk_ml_queue *queue; + struct cnxk_ml_req *req; + struct cnxk_ml_qp *qp; struct rte_ml_op *op; uint16_t count; @@ -2397,12 +2424,13 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op cn10k_mldev->set_poll_addr(req); 
cn10k_ml_prep_fp_job_descriptor(cn10k_mldev, req, op); - memset(&req->result, 0, sizeof(struct cn10k_ml_result)); - req->result.error_code.s.etype = ML_ETYPE_UNKNOWN; - req->result.user_ptr = op->user_ptr; + memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); + error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; + error_code->s.etype = ML_ETYPE_UNKNOWN; + req->cn10k_req.result.user_ptr = op->user_ptr; cn10k_mldev->set_poll_ptr(req); - enqueued = cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->jcmd); + enqueued = cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jcmd); if (unlikely(!enqueued)) goto jcmdq_full; @@ -2426,11 +2454,12 @@ __rte_hot uint16_t cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops) { + union cn10k_ml_error_code *error_code; struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; - struct cn10k_ml_queue *queue; - struct cn10k_ml_req *req; - struct cn10k_ml_qp *qp; + struct cnxk_ml_queue *queue; + struct cnxk_ml_req *req; + struct cnxk_ml_qp *qp; uint64_t status; uint16_t count; @@ -2452,13 +2481,15 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op req = &queue->reqs[tail]; status = cn10k_mldev->get_poll_ptr(req); if (unlikely(status != ML_CNXK_POLL_JOB_FINISH)) { - if (plt_tsc_cycles() < req->timeout) + if (plt_tsc_cycles() < req->timeout) { goto empty_or_active; - else /* Timeout, set indication of driver error */ - req->result.error_code.s.etype = ML_ETYPE_DRIVER; + } else { /* Timeout, set indication of driver error */ + error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; + error_code->s.etype = ML_ETYPE_DRIVER; + } } - cn10k_ml_result_update(dev, qp_id, &req->result, req->op); + cn10k_ml_result_update(dev, qp_id, req); ops[count] = req->op; queue_index_advance(&tail, qp->nb_desc); @@ -2509,10 +2540,11 @@ cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct 
rte_ml_op *op, struct rte_m __rte_hot int cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) { + union cn10k_ml_error_code *error_code; struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; - struct cn10k_ml_req *req; + struct cnxk_ml_req *req; bool timeout; int ret = 0; @@ -2524,17 +2556,18 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) cn10k_ml_set_poll_addr(req); cn10k_ml_prep_fp_job_descriptor(cn10k_mldev, req, op); - memset(&req->result, 0, sizeof(struct cn10k_ml_result)); - req->result.error_code.s.etype = ML_ETYPE_UNKNOWN; - req->result.user_ptr = op->user_ptr; + memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); + error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; + error_code->s.etype = ML_ETYPE_UNKNOWN; + req->cn10k_req.result.user_ptr = op->user_ptr; cn10k_mldev->set_poll_ptr(req); - req->jcmd.w1.s.jobptr = PLT_U64_CAST(&req->jd); + req->cn10k_req.jcmd.w1.s.jobptr = PLT_U64_CAST(&req->cn10k_req.jd); timeout = true; req->timeout = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); do { - if (cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->jcmd)) { + if (cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jcmd)) { req->op = op; timeout = false; break; @@ -2557,7 +2590,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) if (timeout) ret = -ETIME; else - cn10k_ml_result_update(dev, -1, &req->result, req->op); + cn10k_ml_result_update(dev, -1, req); error_enqueue: return ret; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 005b093e45..fd5992e192 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -10,63 +10,279 @@ #include -#include "cn10k_ml_dev.h" +/* Firmware version string length */ +#define MLDEV_FIRMWARE_VERSION_LENGTH 32 -/* Request structure */ -struct cn10k_ml_req { - /* Job descriptor */ - struct cn10k_ml_jd jd; 
+/* Job types */ +enum cn10k_ml_job_type { + ML_CN10K_JOB_TYPE_MODEL_RUN = 0, + ML_CN10K_JOB_TYPE_MODEL_STOP, + ML_CN10K_JOB_TYPE_MODEL_START, + ML_CN10K_JOB_TYPE_FIRMWARE_LOAD, + ML_CN10K_JOB_TYPE_FIRMWARE_SELFTEST, +}; - /* Job descriptor extra arguments */ - union cn10k_ml_jd_extended_args extended_args; +/* Firmware stats */ +struct cn10k_ml_stats { + /* Firmware start cycle */ + uint64_t fw_start; - /* Job result */ - struct cn10k_ml_result result; + /* Firmware end cycle */ + uint64_t fw_end; - /* Status field for poll mode requests */ - volatile uint64_t status; + /* Hardware start cycle */ + uint64_t hw_start; - /* Job command */ - struct ml_job_cmd_s jcmd; + /* Hardware end cycle */ + uint64_t hw_end; +}; + +/* Result structure */ +struct cn10k_ml_result { + /* Job error code */ + uint64_t error_code; + + /* Stats */ + struct cn10k_ml_stats stats; + + /* User context pointer */ + void *user_ptr; +}; + +/* Firmware capability structure */ +union cn10k_ml_fw_cap { + uint64_t u64; + + struct { + /* CMPC completion support */ + uint64_t cmpc_completions : 1; + + /* Poll mode completion support */ + uint64_t poll_completions : 1; + + /* SSO completion support */ + uint64_t sso_completions : 1; + + /* Support for model side loading */ + uint64_t side_load_model : 1; - /* Job completion W1 */ - uint64_t compl_W1; + /* Batch execution */ + uint64_t batch_run : 1; - /* Timeout cycle */ - uint64_t timeout; + /* Max number of models to be loaded in parallel */ + uint64_t max_models : 8; - /* Op */ - struct rte_ml_op *op; -} __rte_aligned(ROC_ALIGN); + /* Firmware statistics */ + uint64_t fw_stats : 1; -/* Request queue */ -struct cn10k_ml_queue { - /* Array of requests */ - struct cn10k_ml_req *reqs; + /* Hardware statistics */ + uint64_t hw_stats : 1; - /* Head of the queue, used for enqueue */ - uint64_t head; + /* Max number of batches */ + uint64_t max_num_batches : 16; - /* Tail of the queue, used for dequeue */ - uint64_t tail; + uint64_t rsvd : 33; + } s; +}; 
+ +/* Firmware debug info structure */ +struct cn10k_ml_fw_debug { + /* ACC core 0 debug buffer */ + uint64_t core0_debug_ptr; + + /* ACC core 1 debug buffer */ + uint64_t core1_debug_ptr; + + /* ACC core 0 exception state buffer */ + uint64_t core0_exception_buffer; + + /* ACC core 1 exception state buffer */ + uint64_t core1_exception_buffer; + + /* Debug buffer size per core */ + uint32_t debug_buffer_size; - /* Wait cycles before timeout */ - uint64_t wait_cycles; + /* Exception state dump size */ + uint32_t exception_state_size; }; -/* Queue-pair structure */ -struct cn10k_ml_qp { - /* ID */ - uint32_t id; +/* Job descriptor header (32 bytes) */ +struct cn10k_ml_jd_header { + /* Job completion structure */ + struct ml_jce_s jce; + + /* Model ID */ + uint64_t model_id : 8; + + /* Job type */ + uint64_t job_type : 8; + + /* Flags for fast-path jobs */ + uint64_t fp_flags : 16; + + /* Flags for slow-path jobs */ + uint64_t sp_flags : 16; + uint64_t rsvd : 16; + + /* Job result pointer */ + uint64_t *result; +}; + +/* Extra arguments for job descriptor */ +union cn10k_ml_jd_extended_args { + struct cn10k_ml_jd_extended_args_section_start { + /* DDR Scratch base address */ + uint64_t ddr_scratch_base_address; + + /* DDR Scratch range start */ + uint64_t ddr_scratch_range_start; + + /* DDR Scratch range end */ + uint64_t ddr_scratch_range_end; + + uint8_t rsvd[104]; + } start; +}; + +/* Job descriptor structure */ +struct cn10k_ml_jd { + /* Job descriptor header (32 bytes) */ + struct cn10k_ml_jd_header hdr; + + union { + struct cn10k_ml_jd_section_fw_load { + /* Firmware capability structure (8 bytes) */ + union cn10k_ml_fw_cap cap; + + /* Firmware version (32 bytes) */ + uint8_t version[MLDEV_FIRMWARE_VERSION_LENGTH]; + + /* Debug capability structure (40 bytes) */ + struct cn10k_ml_fw_debug debug; - /* Number of descriptors */ - uint64_t nb_desc; + /* Flags to control error handling */ + uint64_t flags; - /* Request queue */ - struct cn10k_ml_queue queue; + 
uint8_t rsvd[8]; + } fw_load; - /* Statistics per queue-pair */ - struct rte_ml_dev_stats stats; + struct cn10k_ml_jd_section_model_start { + /* Extended arguments */ + uint64_t extended_args; + + /* Destination model start address in DDR relative to ML_MLR_BASE */ + uint64_t model_dst_ddr_addr; + + /* Offset to model init section in the model */ + uint64_t model_init_offset : 32; + + /* Size of init section in the model */ + uint64_t model_init_size : 32; + + /* Offset to model main section in the model */ + uint64_t model_main_offset : 32; + + /* Size of main section in the model */ + uint64_t model_main_size : 32; + + /* Offset to model finish section in the model */ + uint64_t model_finish_offset : 32; + + /* Size of finish section in the model */ + uint64_t model_finish_size : 32; + + /* Offset to WB in model bin */ + uint64_t model_wb_offset : 32; + + /* Number of model layers */ + uint64_t num_layers : 8; + + /* Number of gather entries, 0 means linear input mode (= no gather) */ + uint64_t num_gather_entries : 8; + + /* Number of scatter entries 0 means linear input mode (= no scatter) */ + uint64_t num_scatter_entries : 8; + + /* Tile mask to load model */ + uint64_t tilemask : 8; + + /* Batch size of model */ + uint64_t batch_size : 32; + + /* OCM WB base address */ + uint64_t ocm_wb_base_address : 32; + + /* OCM WB range start */ + uint64_t ocm_wb_range_start : 32; + + /* OCM WB range End */ + uint64_t ocm_wb_range_end : 32; + + /* DDR WB address */ + uint64_t ddr_wb_base_address; + + /* DDR WB range start */ + uint64_t ddr_wb_range_start : 32; + + /* DDR WB range end */ + uint64_t ddr_wb_range_end : 32; + + union { + /* Points to gather list if num_gather_entries > 0 */ + void *gather_list; + struct { + /* Linear input mode */ + uint64_t ddr_range_start : 32; + uint64_t ddr_range_end : 32; + } s; + } input; + + union { + /* Points to scatter list if num_scatter_entries > 0 */ + void *scatter_list; + struct { + /* Linear output mode */ + uint64_t 
ddr_range_start : 32; + uint64_t ddr_range_end : 32; + } s; + } output; + } model_start; + + struct cn10k_ml_jd_section_model_stop { + uint8_t rsvd[96]; + } model_stop; + + struct cn10k_ml_jd_section_model_run { + /* Address of the input for the run relative to ML_MLR_BASE */ + uint64_t input_ddr_addr; + + /* Address of the output for the run relative to ML_MLR_BASE */ + uint64_t output_ddr_addr; + + /* Number of batches to run in variable batch processing */ + uint16_t num_batches; + + uint8_t rsvd[78]; + } model_run; + }; +} __plt_aligned(ROC_ALIGN); + +/* CN10K specific request */ +struct cn10k_ml_req { + /* Job descriptor */ + struct cn10k_ml_jd jd; + + /* Job descriptor extra arguments */ + union cn10k_ml_jd_extended_args extended_args; + + /* Status field for poll mode requests */ + volatile uint64_t status; + + /* Job command */ + struct ml_job_cmd_s jcmd; + + /* Result */ + struct cn10k_ml_result result; }; /* Device ops */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c new file mode 100644 index 0000000000..f1872dcf7c --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#include + +#include "cnxk_ml_ops.h" diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h new file mode 100644 index 0000000000..b953fb0f5f --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -0,0 +1,63 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#ifndef _CNXK_ML_OPS_H_ +#define _CNXK_ML_OPS_H_ + +#include +#include + +#include + +#include "cn10k_ml_ops.h" + +/* Request structure */ +struct cnxk_ml_req { + /* Device specific request */ + union { + /* CN10K */ + struct cn10k_ml_req cn10k_req; + }; + + /* Address of status field */ + volatile uint64_t *status; + + /* Timeout cycle */ + uint64_t timeout; + + /* Op */ + struct rte_ml_op *op; +} __rte_aligned(ROC_ALIGN); + +/* Request queue */ +struct cnxk_ml_queue { + /* Array of requests */ + struct cnxk_ml_req *reqs; + + /* Head of the queue, used for enqueue */ + uint64_t head; + + /* Tail of the queue, used for dequeue */ + uint64_t tail; + + /* Wait cycles before timeout */ + uint64_t wait_cycles; +}; + +/* Queue-pair structure */ +struct cnxk_ml_qp { + /* ID */ + uint32_t id; + + /* Number of descriptors */ + uint64_t nb_desc; + + /* Request queue */ + struct cnxk_ml_queue queue; + + /* Statistics per queue-pair */ + struct rte_ml_dev_stats stats; +}; + +#endif /* _CNXK_ML_OPS_H_ */ diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 72e03b15b5..73db458fcd 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -15,6 +15,7 @@ driver_sdk_headers = files( 'cnxk_ml_dev.h', 'cnxk_ml_io.h', 'cnxk_ml_model.h', + 'cnxk_ml_ops.h', ) sources = files( @@ -24,6 +25,7 @@ sources = files( 'cn10k_ml_ocm.c', 'cnxk_ml_dev.c', 'cnxk_ml_model.c', + 'cnxk_ml_ops.c', ) deps += ['mldev', 'common_cnxk', 'kvargs', 'hash'] From patchwork Wed Sep 20 07:24:57 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131682 X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi To: Srikanth Yalavarthi CC: , , , Subject: [PATCH v2 06/34] ml/cnxk: add generic cnxk xstats
structures Date: Wed, 20 Sep 2023 00:24:57 -0700 Message-ID: <20230920072528.14185-7-syalavarthi@marvell.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> Introduced generic xstats structures and renamed cn10k xstats enumerations with cnxk prefix. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.h | 86 +--------------- drivers/ml/cnxk/cn10k_ml_model.h | 6 +- drivers/ml/cnxk/cn10k_ml_ops.c | 169 ++++++++++++++----------------- drivers/ml/cnxk/cnxk_ml_xstats.h | 128 +++++++++++++++++++++++ drivers/ml/cnxk/meson.build | 1 + 5 files changed, 210 insertions(+), 180 deletions(-) create mode 100644 drivers/ml/cnxk/cnxk_ml_xstats.h diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index 1852d4f6c9..be989e0a20 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -10,6 +10,7 @@ #include "cn10k_ml_ocm.h" #include "cnxk_ml_io.h" +#include "cnxk_ml_xstats.h" /* Dummy Device ops */ extern struct rte_ml_dev_ops ml_dev_dummy_ops; @@ -121,89 +122,6 @@ struct cn10k_ml_fw { struct cnxk_ml_req *req; }; -/* Extended stats types enum */ -enum cn10k_ml_xstats_type { - /* Number of models loaded */ - nb_models_loaded, - - /* Number of models unloaded */ - nb_models_unloaded, - - /* Number of models started */ - nb_models_started, - - /* Number of models
stopped */ - nb_models_stopped, - - /* Average inference hardware latency */ - avg_hw_latency, - - /* Minimum hardware latency */ - min_hw_latency, - - /* Maximum hardware latency */ - max_hw_latency, - - /* Average firmware latency */ - avg_fw_latency, - - /* Minimum firmware latency */ - min_fw_latency, - - /* Maximum firmware latency */ - max_fw_latency, -}; - -/* Extended stats function type enum. */ -enum cn10k_ml_xstats_fn_type { - /* Device function */ - CN10K_ML_XSTATS_FN_DEVICE, - - /* Model function */ - CN10K_ML_XSTATS_FN_MODEL, -}; - -/* Function pointer to get xstats for a type */ -typedef uint64_t (*cn10k_ml_xstats_fn)(struct rte_ml_dev *dev, uint16_t obj_idx, - enum cn10k_ml_xstats_type stat); - -/* Extended stats entry structure */ -struct cn10k_ml_xstats_entry { - /* Name-ID map */ - struct rte_ml_dev_xstats_map map; - - /* xstats mode, device or model */ - enum rte_ml_dev_xstats_mode mode; - - /* Type of xstats */ - enum cn10k_ml_xstats_type type; - - /* xstats function */ - enum cn10k_ml_xstats_fn_type fn_id; - - /* Object ID, model ID for model stat type */ - uint16_t obj_idx; - - /* Allowed to reset the stat */ - uint8_t reset_allowed; - - /* An offset to be taken away to emulate resets */ - uint64_t reset_value; -}; - -/* Extended stats data */ -struct cn10k_ml_xstats { - /* Pointer to xstats entries */ - struct cn10k_ml_xstats_entry *entries; - - /* Store num stats and offset of the stats for each model */ - uint16_t count_per_model[ML_CNXK_MAX_MODELS]; - uint16_t offset_for_model[ML_CNXK_MAX_MODELS]; - uint16_t count_mode_device; - uint16_t count_mode_model; - uint16_t count; -}; - /* Device private data */ struct cn10k_ml_dev { /* Device ROC */ @@ -216,7 +134,7 @@ struct cn10k_ml_dev { struct cn10k_ml_ocm ocm; /* Extended stats data */ - struct cn10k_ml_xstats xstats; + struct cnxk_ml_xstats xstats; /* Enable / disable model data caching */ int cache_model_data; diff --git a/drivers/ml/cnxk/cn10k_ml_model.h 
b/drivers/ml/cnxk/cn10k_ml_model.h index 74ada1531a..5c32f48c68 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.h +++ b/drivers/ml/cnxk/cn10k_ml_model.h @@ -404,7 +404,7 @@ struct cn10k_ml_layer_addr { }; /* Model fast-path stats */ -struct cn10k_ml_layer_stats { +struct cn10k_ml_layer_xstats { /* Total hardware latency, sum of all inferences */ uint64_t hw_latency_tot; @@ -447,10 +447,10 @@ struct cn10k_ml_layer_data { struct cnxk_ml_req *req; /* Layer: Stats for burst ops */ - struct cn10k_ml_layer_stats *burst_stats; + struct cn10k_ml_layer_xstats *burst_xstats; /* Layer: Stats for sync ops */ - struct cn10k_ml_layer_stats *sync_stats; + struct cn10k_ml_layer_xstats *sync_xstats; }; struct cn10k_ml_model_data { diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 2b1fa08154..03a7447dc8 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -14,6 +14,7 @@ #include "cnxk_ml_dev.h" #include "cnxk_ml_model.h" #include "cnxk_ml_ops.h" +#include "cnxk_ml_xstats.h" /* ML model macros */ #define CN10K_ML_MODEL_MEMZONE_NAME "ml_cn10k_model_mz" @@ -429,26 +430,6 @@ cn10k_ml_prep_fp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml req->cn10k_req.jd.model_run.num_batches = op->nb_batches; } -struct xstat_info { - char name[32]; - enum cn10k_ml_xstats_type type; - uint8_t reset_allowed; -}; - -/* Note: Device stats are not allowed to be reset. 
*/ -static const struct xstat_info device_stats[] = { - {"nb_models_loaded", nb_models_loaded, 0}, - {"nb_models_unloaded", nb_models_unloaded, 0}, - {"nb_models_started", nb_models_started, 0}, - {"nb_models_stopped", nb_models_stopped, 0}, -}; - -static const struct xstat_info model_stats[] = { - {"Avg-HW-Latency", avg_hw_latency, 1}, {"Min-HW-Latency", min_hw_latency, 1}, - {"Max-HW-Latency", max_hw_latency, 1}, {"Avg-FW-Latency", avg_fw_latency, 1}, - {"Min-FW-Latency", min_fw_latency, 1}, {"Max-FW-Latency", max_fw_latency, 1}, -}; - static int cn10k_ml_xstats_init(struct rte_ml_dev *dev) { @@ -463,10 +444,10 @@ cn10k_ml_xstats_init(struct rte_ml_dev *dev) cn10k_mldev = &cnxk_mldev->cn10k_mldev; /* Allocate memory for xstats entries. Don't allocate during reconfigure */ - nb_stats = RTE_DIM(device_stats) + ML_CNXK_MAX_MODELS * RTE_DIM(model_stats); + nb_stats = RTE_DIM(device_xstats) + ML_CNXK_MAX_MODELS * RTE_DIM(layer_xstats); if (cn10k_mldev->xstats.entries == NULL) cn10k_mldev->xstats.entries = rte_zmalloc( - "cn10k_ml_xstats", sizeof(struct cn10k_ml_xstats_entry) * nb_stats, + "cn10k_ml_xstats", sizeof(struct cnxk_ml_xstats_entry) * nb_stats, PLT_CACHE_LINE_SIZE); if (cn10k_mldev->xstats.entries == NULL) @@ -474,17 +455,17 @@ cn10k_ml_xstats_init(struct rte_ml_dev *dev) /* Initialize device xstats */ stat_id = 0; - for (i = 0; i < RTE_DIM(device_stats); i++) { + for (i = 0; i < RTE_DIM(device_xstats); i++) { cn10k_mldev->xstats.entries[stat_id].map.id = stat_id; snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), "%s", - device_stats[i].name); + device_xstats[i].name); cn10k_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_DEVICE; - cn10k_mldev->xstats.entries[stat_id].type = device_stats[i].type; - cn10k_mldev->xstats.entries[stat_id].fn_id = CN10K_ML_XSTATS_FN_DEVICE; + cn10k_mldev->xstats.entries[stat_id].type = device_xstats[i].type; + cn10k_mldev->xstats.entries[stat_id].fn_id = 
CNXK_ML_XSTATS_FN_DEVICE; cn10k_mldev->xstats.entries[stat_id].obj_idx = 0; - cn10k_mldev->xstats.entries[stat_id].reset_allowed = device_stats[i].reset_allowed; + cn10k_mldev->xstats.entries[stat_id].reset_allowed = device_xstats[i].reset_allowed; stat_id++; } cn10k_mldev->xstats.count_mode_device = stat_id; @@ -493,24 +474,24 @@ cn10k_ml_xstats_init(struct rte_ml_dev *dev) for (model = 0; model < ML_CNXK_MAX_MODELS; model++) { cn10k_mldev->xstats.offset_for_model[model] = stat_id; - for (i = 0; i < RTE_DIM(model_stats); i++) { + for (i = 0; i < RTE_DIM(layer_xstats); i++) { cn10k_mldev->xstats.entries[stat_id].map.id = stat_id; cn10k_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_MODEL; - cn10k_mldev->xstats.entries[stat_id].type = model_stats[i].type; - cn10k_mldev->xstats.entries[stat_id].fn_id = CN10K_ML_XSTATS_FN_MODEL; + cn10k_mldev->xstats.entries[stat_id].type = layer_xstats[i].type; + cn10k_mldev->xstats.entries[stat_id].fn_id = CNXK_ML_XSTATS_FN_MODEL; cn10k_mldev->xstats.entries[stat_id].obj_idx = model; cn10k_mldev->xstats.entries[stat_id].reset_allowed = - model_stats[i].reset_allowed; + layer_xstats[i].reset_allowed; /* Name of xstat is updated during model load */ snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), - "Model-%u-%s", model, model_stats[i].name); + "Model-%u-%s", model, layer_xstats[i].name); stat_id++; } - cn10k_mldev->xstats.count_per_model[model] = RTE_DIM(model_stats); + cn10k_mldev->xstats.count_per_model[model] = RTE_DIM(layer_xstats); } cn10k_mldev->xstats.count_mode_model = stat_id - cn10k_mldev->xstats.count_mode_device; @@ -549,7 +530,7 @@ cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id) cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; model = dev->data->models[model_id]; - stat_id = RTE_DIM(device_stats) + model_id * RTE_DIM(model_stats); + stat_id = RTE_DIM(device_xstats) + model_id * 
RTE_DIM(layer_xstats); roc_clk_freq_get(&rclk_freq, &sclk_freq); if (sclk_freq == 0) @@ -558,17 +539,17 @@ cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id) strcpy(suffix, "ns"); /* Update xstat name based on model name and sclk availability */ - for (i = 0; i < RTE_DIM(model_stats); i++) { + for (i = 0; i < RTE_DIM(layer_xstats); i++) { snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s", - model->layer[0].glow.metadata.model.name, model_stats[i].name, suffix); + model->layer[0].glow.metadata.model.name, layer_xstats[i].name, suffix); stat_id++; } } static uint64_t cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused, - enum cn10k_ml_xstats_type type) + enum cnxk_ml_xstats_type type) { struct cnxk_ml_dev *cnxk_mldev; @@ -594,9 +575,9 @@ cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused, do { \ value = 0; \ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ - value += model->layer[0].glow.burst_stats[qp_id].str##_latency_tot; \ - count += model->layer[0].glow.burst_stats[qp_id].dequeued_count - \ - model->layer[0].glow.burst_stats[qp_id].str##_reset_count; \ + value += model->layer[0].glow.burst_xstats[qp_id].str##_latency_tot; \ + count += model->layer[0].glow.burst_xstats[qp_id].dequeued_count - \ + model->layer[0].glow.burst_xstats[qp_id].str##_reset_count; \ } \ if (count != 0) \ value = value / count; \ @@ -607,9 +588,10 @@ cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused, value = UINT64_MAX; \ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ value = PLT_MIN( \ - value, model->layer[0].glow.burst_stats[qp_id].str##_latency_min); \ - count += model->layer[0].glow.burst_stats[qp_id].dequeued_count - \ - model->layer[0].glow.burst_stats[qp_id].str##_reset_count; \ + value, \ + model->layer[0].glow.burst_xstats[qp_id].str##_latency_min); \ + count += 
model->layer[0].glow.burst_xstats[qp_id].dequeued_count - \ + model->layer[0].glow.burst_xstats[qp_id].str##_reset_count; \ } \ if (count == 0) \ value = 0; \ @@ -620,16 +602,17 @@ cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused, value = 0; \ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ value = PLT_MAX( \ - value, model->layer[0].glow.burst_stats[qp_id].str##_latency_max); \ - count += model->layer[0].glow.burst_stats[qp_id].dequeued_count - \ - model->layer[0].glow.burst_stats[qp_id].str##_reset_count; \ + value, \ + model->layer[0].glow.burst_xstats[qp_id].str##_latency_max); \ + count += model->layer[0].glow.burst_xstats[qp_id].dequeued_count - \ + model->layer[0].glow.burst_xstats[qp_id].str##_reset_count; \ } \ if (count == 0) \ value = 0; \ } while (0) static uint64_t -cn10k_ml_model_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx, enum cn10k_ml_xstats_type type) +cn10k_ml_model_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx, enum cnxk_ml_xstats_type type) { struct cnxk_ml_model *model; uint16_t rclk_freq; /* MHz */ @@ -675,8 +658,8 @@ cn10k_ml_model_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx, enum cn10k_ml static int cn10k_ml_device_xstats_reset(struct rte_ml_dev *dev, const uint16_t stat_ids[], uint16_t nb_ids) { - struct cn10k_ml_xstats_entry *xs; struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_xstats_entry *xs; struct cnxk_ml_dev *cnxk_mldev; uint16_t nb_stats; uint16_t stat_id; @@ -712,26 +695,26 @@ cn10k_ml_device_xstats_reset(struct rte_ml_dev *dev, const uint16_t stat_ids[], #define ML_AVG_RESET_FOREACH_QP(dev, model, qp_id, str) \ do { \ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ - model->layer[0].glow.burst_stats[qp_id].str##_latency_tot = 0; \ - model->layer[0].glow.burst_stats[qp_id].str##_reset_count = \ - model->layer[0].glow.burst_stats[qp_id].dequeued_count; \ + model->layer[0].glow.burst_xstats[qp_id].str##_latency_tot = 0; \ + 
model->layer[0].glow.burst_xstats[qp_id].str##_reset_count = \ + model->layer[0].glow.burst_xstats[qp_id].dequeued_count; \ } \ } while (0) #define ML_MIN_RESET_FOREACH_QP(dev, model, qp_id, str) \ do { \ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) \ - model->layer[0].glow.burst_stats[qp_id].str##_latency_min = UINT64_MAX; \ + model->layer[0].glow.burst_xstats[qp_id].str##_latency_min = UINT64_MAX; \ } while (0) #define ML_MAX_RESET_FOREACH_QP(dev, model, qp_id, str) \ do { \ for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) \ - model->layer[0].glow.burst_stats[qp_id].str##_latency_max = 0; \ + model->layer[0].glow.burst_xstats[qp_id].str##_latency_max = 0; \ } while (0) static void -cn10k_ml_reset_model_stat(struct rte_ml_dev *dev, uint16_t model_id, enum cn10k_ml_xstats_type type) +cn10k_ml_reset_model_stat(struct rte_ml_dev *dev, uint16_t model_id, enum cnxk_ml_xstats_type type) { struct cnxk_ml_model *model; uint32_t qp_id; @@ -766,8 +749,8 @@ static int cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint16_t stat_ids[], uint16_t nb_ids) { - struct cn10k_ml_xstats_entry *xs; struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_xstats_entry *xs; struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; int32_t lcl_model_id = 0; @@ -1346,10 +1329,10 @@ static int cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16_t *stat_id, uint64_t *value) { - struct cn10k_ml_xstats_entry *xs; + struct cnxk_ml_xstats_entry *xs; struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; - cn10k_ml_xstats_fn fn; + cnxk_ml_xstats_fn fn; uint32_t i; cnxk_mldev = dev->data->dev_private; @@ -1361,10 +1344,10 @@ cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16 *stat_id = xs->map.id; switch (xs->fn_id) { - case CN10K_ML_XSTATS_FN_DEVICE: + case CNXK_ML_XSTATS_FN_DEVICE: fn = cn10k_ml_dev_xstat_get; break; - case CN10K_ML_XSTATS_FN_MODEL: + case 
CNXK_ML_XSTATS_FN_MODEL: fn = cn10k_ml_model_xstat_get; break; default: @@ -1388,11 +1371,11 @@ static int cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, const uint16_t stat_ids[], uint64_t values[], uint16_t nb_ids) { - struct cn10k_ml_xstats_entry *xs; struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_xstats_entry *xs; struct cnxk_ml_dev *cnxk_mldev; uint32_t xstats_mode_count; - cn10k_ml_xstats_fn fn; + cnxk_ml_xstats_fn fn; uint64_t val; uint32_t idx; uint32_t i; @@ -1427,10 +1410,10 @@ cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode } switch (xs->fn_id) { - case CN10K_ML_XSTATS_FN_DEVICE: + case CNXK_ML_XSTATS_FN_DEVICE: fn = cn10k_ml_dev_xstat_get; break; - case CN10K_ML_XSTATS_FN_MODEL: + case CNXK_ML_XSTATS_FN_MODEL: fn = cn10k_ml_model_xstat_get; break; default: @@ -1668,7 +1651,7 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, metadata->model.num_input * sizeof(struct rte_ml_io_info) + metadata->model.num_output * sizeof(struct rte_ml_io_info); model_info_size = PLT_ALIGN_CEIL(model_info_size, ML_CN10K_ALIGN_SIZE); - model_stats_size = (dev->data->nb_queue_pairs + 1) * sizeof(struct cn10k_ml_layer_stats); + model_stats_size = (dev->data->nb_queue_pairs + 1) * sizeof(struct cn10k_ml_layer_xstats); mz_size = PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), ML_CN10K_ALIGN_SIZE) + 2 * model_data_size + model_scratch_size + model_info_size + @@ -1742,24 +1725,24 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, model->layer[0].glow.req = PLT_PTR_ADD(model->info, model_info_size); /* Reset burst and sync stats */ - model->layer[0].glow.burst_stats = + model->layer[0].glow.burst_xstats = PLT_PTR_ADD(model->layer[0].glow.req, PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_req), ML_CN10K_ALIGN_SIZE)); for (qp_id = 0; qp_id < dev->data->nb_queue_pairs + 1; qp_id++) { - model->layer[0].glow.burst_stats[qp_id].hw_latency_tot = 
0; - model->layer[0].glow.burst_stats[qp_id].hw_latency_min = UINT64_MAX; - model->layer[0].glow.burst_stats[qp_id].hw_latency_max = 0; - model->layer[0].glow.burst_stats[qp_id].fw_latency_tot = 0; - model->layer[0].glow.burst_stats[qp_id].fw_latency_min = UINT64_MAX; - model->layer[0].glow.burst_stats[qp_id].fw_latency_max = 0; - model->layer[0].glow.burst_stats[qp_id].hw_reset_count = 0; - model->layer[0].glow.burst_stats[qp_id].fw_reset_count = 0; - model->layer[0].glow.burst_stats[qp_id].dequeued_count = 0; + model->layer[0].glow.burst_xstats[qp_id].hw_latency_tot = 0; + model->layer[0].glow.burst_xstats[qp_id].hw_latency_min = UINT64_MAX; + model->layer[0].glow.burst_xstats[qp_id].hw_latency_max = 0; + model->layer[0].glow.burst_xstats[qp_id].fw_latency_tot = 0; + model->layer[0].glow.burst_xstats[qp_id].fw_latency_min = UINT64_MAX; + model->layer[0].glow.burst_xstats[qp_id].fw_latency_max = 0; + model->layer[0].glow.burst_xstats[qp_id].hw_reset_count = 0; + model->layer[0].glow.burst_xstats[qp_id].fw_reset_count = 0; + model->layer[0].glow.burst_xstats[qp_id].dequeued_count = 0; } - model->layer[0].glow.sync_stats = - PLT_PTR_ADD(model->layer[0].glow.burst_stats, - dev->data->nb_queue_pairs * sizeof(struct cn10k_ml_layer_stats)); + model->layer[0].glow.sync_xstats = + PLT_PTR_ADD(model->layer[0].glow.burst_xstats, + dev->data->nb_queue_pairs * sizeof(struct cn10k_ml_layer_xstats)); plt_spinlock_init(&model->lock); model->state = ML_CNXK_MODEL_STATE_LOADED; @@ -2312,7 +2295,7 @@ static __rte_always_inline void cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cnxk_ml_req *req) { union cn10k_ml_error_code *error_code; - struct cn10k_ml_layer_stats *stats; + struct cn10k_ml_layer_xstats *xstats; struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_result *result; @@ -2330,31 +2313,31 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cnxk_ml_req *re if (likely(qp_id >= 0)) { qp = 
dev->data->queue_pairs[qp_id]; qp->stats.dequeued_count++; - stats = &model->layer[0].glow.burst_stats[qp_id]; + xstats = &model->layer[0].glow.burst_xstats[qp_id]; } else { - stats = model->layer[0].glow.sync_stats; + xstats = model->layer[0].glow.sync_xstats; } - if (unlikely(stats->dequeued_count == stats->hw_reset_count)) { - stats->hw_latency_min = UINT64_MAX; - stats->hw_latency_max = 0; + if (unlikely(xstats->dequeued_count == xstats->hw_reset_count)) { + xstats->hw_latency_min = UINT64_MAX; + xstats->hw_latency_max = 0; } - if (unlikely(stats->dequeued_count == stats->fw_reset_count)) { - stats->fw_latency_min = UINT64_MAX; - stats->fw_latency_max = 0; + if (unlikely(xstats->dequeued_count == xstats->fw_reset_count)) { + xstats->fw_latency_min = UINT64_MAX; + xstats->fw_latency_max = 0; } hw_latency = result->stats.hw_end - result->stats.hw_start; fw_latency = result->stats.fw_end - result->stats.fw_start - hw_latency; - stats->hw_latency_tot += hw_latency; - stats->hw_latency_min = PLT_MIN(stats->hw_latency_min, hw_latency); - stats->hw_latency_max = PLT_MAX(stats->hw_latency_max, hw_latency); - stats->fw_latency_tot += fw_latency; - stats->fw_latency_min = PLT_MIN(stats->fw_latency_min, fw_latency); - stats->fw_latency_max = PLT_MAX(stats->fw_latency_max, fw_latency); - stats->dequeued_count++; + xstats->hw_latency_tot += hw_latency; + xstats->hw_latency_min = PLT_MIN(xstats->hw_latency_min, hw_latency); + xstats->hw_latency_max = PLT_MAX(xstats->hw_latency_max, hw_latency); + xstats->fw_latency_tot += fw_latency; + xstats->fw_latency_min = PLT_MIN(xstats->fw_latency_min, fw_latency); + xstats->fw_latency_max = PLT_MAX(xstats->fw_latency_max, fw_latency); + xstats->dequeued_count++; op->impl_opaque = result->error_code; op->status = RTE_ML_OP_STATUS_SUCCESS; diff --git a/drivers/ml/cnxk/cnxk_ml_xstats.h b/drivers/ml/cnxk/cnxk_ml_xstats.h new file mode 100644 index 0000000000..0d405679ca --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_xstats.h @@ -0,0 +1,128 
@@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef _CNXK_ML_XSTATS_H_ +#define _CNXK_ML_XSTATS_H_ + +#include "cnxk_ml_io.h" + +/* Extended stats types enum */ +enum cnxk_ml_xstats_type { + /* Number of models loaded */ + nb_models_loaded, + + /* Number of models unloaded */ + nb_models_unloaded, + + /* Number of models started */ + nb_models_started, + + /* Number of models stopped */ + nb_models_stopped, + + /* Average inference hardware latency */ + avg_hw_latency, + + /* Minimum hardware latency */ + min_hw_latency, + + /* Maximum hardware latency */ + max_hw_latency, + + /* Average firmware latency */ + avg_fw_latency, + + /* Minimum firmware latency */ + min_fw_latency, + + /* Maximum firmware latency */ + max_fw_latency, + + /* Average runtime latency */ + avg_rt_latency, + + /* Minimum runtime latency */ + min_rt_latency, + + /* Maximum runtime latency */ + max_rt_latency, +}; + +/* Extended stats function type enum. */ +enum cnxk_ml_xstats_fn_type { + /* Device function */ + CNXK_ML_XSTATS_FN_DEVICE, + + /* Model function */ + CNXK_ML_XSTATS_FN_MODEL, +}; + +/* Function pointer to get xstats for a type */ +typedef uint64_t (*cnxk_ml_xstats_fn)(struct rte_ml_dev *cnxk_mldev, uint16_t obj_idx, + enum cnxk_ml_xstats_type stat); + +/* Extended stats entry structure */ +struct cnxk_ml_xstats_entry { + /* Name-ID map */ + struct rte_ml_dev_xstats_map map; + + /* xstats mode, device or model */ + enum rte_ml_dev_xstats_mode mode; + + /* Type of xstats */ + enum cnxk_ml_xstats_type type; + + /* xstats function */ + enum cnxk_ml_xstats_fn_type fn_id; + + /* Object ID, model ID for model stat type */ + uint16_t obj_idx; + + /* Layer ID, valid for model stat type */ + int32_t layer_id; + + /* Allowed to reset the stat */ + uint8_t reset_allowed; + + /* An offset to be taken away to emulate resets */ + uint64_t reset_value; +}; + +/* Extended stats data */ +struct cnxk_ml_xstats { + /* Pointer to xstats entries */ + struct 
cnxk_ml_xstats_entry *entries; + + /* Store num stats and offset of the stats for each model */ + uint16_t count_per_model[ML_CNXK_MAX_MODELS]; + uint16_t offset_for_model[ML_CNXK_MAX_MODELS]; + uint16_t count_per_layer[ML_CNXK_MAX_MODELS][ML_CNXK_MODEL_MAX_LAYERS]; + uint16_t offset_for_layer[ML_CNXK_MAX_MODELS][ML_CNXK_MODEL_MAX_LAYERS]; + uint16_t count_mode_device; + uint16_t count_mode_model; + uint16_t count; +}; + +struct cnxk_ml_xstat_info { + char name[32]; + enum cnxk_ml_xstats_type type; + uint8_t reset_allowed; +}; + +/* Device xstats. Note: Device stats are not allowed to be reset. */ +static const struct cnxk_ml_xstat_info device_xstats[] = { + {"nb_models_loaded", nb_models_loaded, 0}, + {"nb_models_unloaded", nb_models_unloaded, 0}, + {"nb_models_started", nb_models_started, 0}, + {"nb_models_stopped", nb_models_stopped, 0}, +}; + +/* Layer xstats */ +static const struct cnxk_ml_xstat_info layer_xstats[] = { + {"Avg-HW-Latency", avg_hw_latency, 1}, {"Min-HW-Latency", min_hw_latency, 1}, + {"Max-HW-Latency", max_hw_latency, 1}, {"Avg-FW-Latency", avg_fw_latency, 1}, + {"Min-FW-Latency", min_fw_latency, 1}, {"Max-FW-Latency", max_fw_latency, 1}, +}; + +#endif /* _CNXK_ML_XSTATS_H_ */ diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 73db458fcd..6385ac4548 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -16,6 +16,7 @@ driver_sdk_headers = files( 'cnxk_ml_io.h', 'cnxk_ml_model.h', 'cnxk_ml_ops.h', + 'cnxk_ml_xstats.h', ) sources = files( From patchwork Wed Sep 20 07:24:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131679 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DF963425E4; Wed, 20 Sep 2023 
09:26:10 +0200 (CEST) From: Srikanth Yalavarthi Subject: [PATCH v2 07/34]
ml/cnxk: rename cnxk ops function pointers struct Date: Wed, 20 Sep 2023 00:24:58 -0700 Message-ID: <20230920072528.14185-8-syalavarthi@marvell.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> MIME-Version: 1.0 Renamed cn10k ML ops structure with cnxk prefix. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.c | 2 +- drivers/ml/cnxk/cn10k_ml_ops.c | 73 +++++++++------------------------- drivers/ml/cnxk/cn10k_ml_ops.h | 34 +++++++++++++++- drivers/ml/cnxk/cnxk_ml_ops.c | 38 ++++++++++++++++++ drivers/ml/cnxk/cnxk_ml_ops.h | 2 + 5 files changed, 93 insertions(+), 56 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c index f6e05cfc47..20c114b8bf 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.c +++ b/drivers/ml/cnxk/cn10k_ml_dev.c @@ -404,7 +404,7 @@ cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_de goto pmd_destroy; } - dev->dev_ops = &cn10k_ml_ops; + dev->dev_ops = &cnxk_ml_ops; } else { plt_err("CN10K ML Ops are not supported on secondary process"); dev->dev_ops = &ml_dev_dummy_ops; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 03a7447dc8..e6383283d3 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -123,7 +123,7 @@ cnxk_ml_qp_destroy(const struct rte_ml_dev *dev, struct
cnxk_ml_qp *qp) return 0; } -static int +int cn10k_ml_dev_queue_pair_release(struct rte_ml_dev *dev, uint16_t queue_pair_id) { struct cnxk_ml_qp *qp; @@ -864,7 +864,7 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id) return ret; } -static int +int cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) { struct cn10k_ml_dev *cn10k_mldev; @@ -892,7 +892,7 @@ cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) return 0; } -static int +int cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *conf) { struct rte_ml_dev_info dev_info; @@ -1091,7 +1091,7 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c return ret; } -static int +int cn10k_ml_dev_close(struct rte_ml_dev *dev) { struct cn10k_ml_dev *cn10k_mldev; @@ -1164,7 +1164,7 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev) return rte_dev_remove(dev->device); } -static int +int cn10k_ml_dev_start(struct rte_ml_dev *dev) { struct cn10k_ml_dev *cn10k_mldev; @@ -1184,7 +1184,7 @@ cn10k_ml_dev_start(struct rte_ml_dev *dev) return 0; } -static int +int cn10k_ml_dev_stop(struct rte_ml_dev *dev) { struct cn10k_ml_dev *cn10k_mldev; @@ -1204,7 +1204,7 @@ cn10k_ml_dev_stop(struct rte_ml_dev *dev) return 0; } -static int +int cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, const struct rte_ml_dev_qp_conf *qp_conf, int socket_id) { @@ -1245,7 +1245,7 @@ cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, return 0; } -static int +int cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats) { struct cnxk_ml_qp *qp; @@ -1262,7 +1262,7 @@ cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats) return 0; } -static void +void cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev) { struct cnxk_ml_qp *qp; @@ -1277,7 +1277,7 @@ cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev) } } -static int +int 
cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, uint32_t size) @@ -1325,7 +1325,7 @@ cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mod return idx; } -static int +int cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16_t *stat_id, uint64_t *value) { @@ -1367,7 +1367,7 @@ cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16 return -EINVAL; } -static int +int cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, const uint16_t stat_ids[], uint64_t values[], uint16_t nb_ids) { @@ -1431,7 +1431,7 @@ cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode return idx; } -static int +int cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, const uint16_t stat_ids[], uint16_t nb_ids) { @@ -1445,7 +1445,7 @@ cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mo return 0; } -static int +int cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) { struct cn10k_ml_dev *cn10k_mldev; @@ -1532,7 +1532,7 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) return 0; } -static int +int cn10k_ml_dev_selftest(struct rte_ml_dev *dev) { struct cn10k_ml_dev *cn10k_mldev; @@ -2055,7 +2055,7 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) return ret; } -static int +int cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_model_info *model_info) { @@ -2075,7 +2075,7 @@ cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, return 0; } -static int +int cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buffer) { struct cnxk_ml_model *model; @@ -2109,7 +2109,7 @@ cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *bu return 0; } -static int +int 
cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **dbuffer, struct rte_ml_buff_seg **qbuffer) { @@ -2190,7 +2190,7 @@ cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_bu return 0; } -static int +int cn10k_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **qbuffer, struct rte_ml_buff_seg **dbuffer) { @@ -2578,38 +2578,3 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) error_enqueue: return ret; } - -struct rte_ml_dev_ops cn10k_ml_ops = { - /* Device control ops */ - .dev_info_get = cn10k_ml_dev_info_get, - .dev_configure = cn10k_ml_dev_configure, - .dev_close = cn10k_ml_dev_close, - .dev_start = cn10k_ml_dev_start, - .dev_stop = cn10k_ml_dev_stop, - .dev_dump = cn10k_ml_dev_dump, - .dev_selftest = cn10k_ml_dev_selftest, - - /* Queue-pair handling ops */ - .dev_queue_pair_setup = cn10k_ml_dev_queue_pair_setup, - .dev_queue_pair_release = cn10k_ml_dev_queue_pair_release, - - /* Stats ops */ - .dev_stats_get = cn10k_ml_dev_stats_get, - .dev_stats_reset = cn10k_ml_dev_stats_reset, - .dev_xstats_names_get = cn10k_ml_dev_xstats_names_get, - .dev_xstats_by_name_get = cn10k_ml_dev_xstats_by_name_get, - .dev_xstats_get = cn10k_ml_dev_xstats_get, - .dev_xstats_reset = cn10k_ml_dev_xstats_reset, - - /* Model ops */ - .model_load = cn10k_ml_model_load, - .model_unload = cn10k_ml_model_unload, - .model_start = cn10k_ml_model_start, - .model_stop = cn10k_ml_model_stop, - .model_info_get = cn10k_ml_model_info_get, - .model_params_update = cn10k_ml_model_params_update, - - /* I/O ops */ - .io_quantize = cn10k_ml_io_quantize, - .io_dequantize = cn10k_ml_io_dequantize, -}; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index fd5992e192..16480b9ad8 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -286,7 +286,29 @@ struct cn10k_ml_req { }; /* Device ops */ -extern struct rte_ml_dev_ops cn10k_ml_ops; 
+int cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info); +int cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *conf); +int cn10k_ml_dev_close(struct rte_ml_dev *dev); +int cn10k_ml_dev_start(struct rte_ml_dev *dev); +int cn10k_ml_dev_stop(struct rte_ml_dev *dev); +int cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp); +int cn10k_ml_dev_selftest(struct rte_ml_dev *dev); +int cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, + const struct rte_ml_dev_qp_conf *qp_conf, int socket_id); +int cn10k_ml_dev_queue_pair_release(struct rte_ml_dev *dev, uint16_t queue_pair_id); + +int cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats); +void cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev); +int cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, + int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, + uint32_t size); +int cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16_t *stat_id, + uint64_t *value); +int cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, + int32_t model_id, const uint16_t stat_ids[], uint64_t values[], + uint16_t nb_ids); +int cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, + int32_t model_id, const uint16_t stat_ids[], uint16_t nb_ids); /* Slow-path ops */ int cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, @@ -294,6 +316,16 @@ int cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *para int cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id); int cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id); int cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id); +int cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, + struct rte_ml_model_info *model_info); +int cn10k_ml_model_params_update(struct 
rte_ml_dev *dev, uint16_t model_id, void *buffer); + +/* I/O ops */ +int cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, + struct rte_ml_buff_seg **dbuffer, struct rte_ml_buff_seg **qbuffer); + +int cn10k_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, + struct rte_ml_buff_seg **qbuffer, struct rte_ml_buff_seg **dbuffer); /* Fast-path ops */ __rte_hot uint16_t cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index f1872dcf7c..89e0d9d32c 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -3,5 +3,43 @@ */ #include +#include + +#include "cn10k_ml_ops.h" #include "cnxk_ml_ops.h" + +struct rte_ml_dev_ops cnxk_ml_ops = { + /* Device control ops */ + .dev_info_get = cn10k_ml_dev_info_get, + .dev_configure = cn10k_ml_dev_configure, + .dev_close = cn10k_ml_dev_close, + .dev_start = cn10k_ml_dev_start, + .dev_stop = cn10k_ml_dev_stop, + .dev_dump = cn10k_ml_dev_dump, + .dev_selftest = cn10k_ml_dev_selftest, + + /* Queue-pair handling ops */ + .dev_queue_pair_setup = cn10k_ml_dev_queue_pair_setup, + .dev_queue_pair_release = cn10k_ml_dev_queue_pair_release, + + /* Stats ops */ + .dev_stats_get = cn10k_ml_dev_stats_get, + .dev_stats_reset = cn10k_ml_dev_stats_reset, + .dev_xstats_names_get = cn10k_ml_dev_xstats_names_get, + .dev_xstats_by_name_get = cn10k_ml_dev_xstats_by_name_get, + .dev_xstats_get = cn10k_ml_dev_xstats_get, + .dev_xstats_reset = cn10k_ml_dev_xstats_reset, + + /* Model ops */ + .model_load = cn10k_ml_model_load, + .model_unload = cn10k_ml_model_unload, + .model_start = cn10k_ml_model_start, + .model_stop = cn10k_ml_model_stop, + .model_info_get = cn10k_ml_model_info_get, + .model_params_update = cn10k_ml_model_params_update, + + /* I/O ops */ + .io_quantize = cn10k_ml_io_quantize, + .io_dequantize = cn10k_ml_io_dequantize, +}; diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index 
b953fb0f5f..a925c07580 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -60,4 +60,6 @@ struct cnxk_ml_qp { struct rte_ml_dev_stats stats; }; +extern struct rte_ml_dev_ops cnxk_ml_ops; + #endif /* _CNXK_ML_OPS_H_ */

From patchwork Wed Sep 20 07:24:59 2023
Content-Type: text/plain; charset="utf-8"
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131680
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 08/34] ml/cnxk: update device handling functions
Date: Wed, 20 Sep 2023 00:24:59 -0700
Message-ID: <20230920072528.14185-9-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>

Implement CNXK wrapper functions for dev_info_get, dev_configure, dev_close, dev_start and dev_stop. The wrapper functions allocate / release common resources for the ML driver and invoke device specific functions.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 230 ++------------------------ drivers/ml/cnxk/cn10k_ml_ops.h | 16 +- drivers/ml/cnxk/cnxk_ml_dev.h | 3 + drivers/ml/cnxk/cnxk_ml_ops.c | 286 ++++++++++++++++++++++++++++++++- drivers/ml/cnxk/cnxk_ml_ops.h | 3 + 5 files changed, 314 insertions(+), 224 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index e6383283d3..0f32f3b2bb 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -105,7 +105,7 @@ qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) snprintf(name, size, "cnxk_ml_qp_mem_%u:%u", dev_id, qp_id); } -static int +int cnxk_ml_qp_destroy(const struct rte_ml_dev *dev, struct cnxk_ml_qp *qp) { const struct rte_memzone *qp_mem; @@ -865,20 +865,12 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id) } int -cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) +cn10k_ml_dev_info_get(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_dev_info *dev_info) { struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - if (dev_info == NULL) - return -EINVAL; - - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; - memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); - dev_info->driver_name = dev->device->driver->name; - dev_info->max_models = ML_CNXK_MAX_MODELS; if (cn10k_mldev->hw_queue_lock) dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_SL; else @@ -893,143 +885,17 @@ cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) } int -cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *conf) +cn10k_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf) { - struct rte_ml_dev_info dev_info; struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - struct cnxk_ml_model *model; struct cn10k_ml_ocm *ocm; - struct cnxk_ml_qp *qp; - uint16_t 
model_id; - uint32_t mz_size; uint16_t tile_id; - uint16_t qp_id; int ret; - if (dev == NULL || conf == NULL) - return -EINVAL; + RTE_SET_USED(conf); - /* Get CN10K device handle */ - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; - cn10k_ml_dev_info_get(dev, &dev_info); - if (conf->nb_models > dev_info.max_models) { - plt_err("Invalid device config, nb_models > %u\n", dev_info.max_models); - return -EINVAL; - } - - if (conf->nb_queue_pairs > dev_info.max_queue_pairs) { - plt_err("Invalid device config, nb_queue_pairs > %u\n", dev_info.max_queue_pairs); - return -EINVAL; - } - - if (cnxk_mldev->state == ML_CNXK_DEV_STATE_PROBED) { - plt_ml_dbg("Configuring ML device, nb_queue_pairs = %u, nb_models = %u", - conf->nb_queue_pairs, conf->nb_models); - - /* Load firmware */ - ret = cn10k_ml_fw_load(cnxk_mldev); - if (ret != 0) - return ret; - } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CONFIGURED) { - plt_ml_dbg("Re-configuring ML device, nb_queue_pairs = %u, nb_models = %u", - conf->nb_queue_pairs, conf->nb_models); - } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_STARTED) { - plt_err("Device can't be reconfigured in started state\n"); - return -ENOTSUP; - } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CLOSED) { - plt_err("Device can't be reconfigured after close\n"); - return -ENOTSUP; - } - - /* Configure queue-pairs */ - if (dev->data->queue_pairs == NULL) { - mz_size = sizeof(dev->data->queue_pairs[0]) * conf->nb_queue_pairs; - dev->data->queue_pairs = - rte_zmalloc("cn10k_mldev_queue_pairs", mz_size, RTE_CACHE_LINE_SIZE); - if (dev->data->queue_pairs == NULL) { - dev->data->nb_queue_pairs = 0; - plt_err("Failed to get memory for queue_pairs, nb_queue_pairs %u", - conf->nb_queue_pairs); - return -ENOMEM; - } - } else { /* Re-configure */ - void **queue_pairs; - - /* Release all queue pairs as ML spec doesn't support queue_pair_destroy. 
*/ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { - qp = dev->data->queue_pairs[qp_id]; - if (qp != NULL) { - ret = cn10k_ml_dev_queue_pair_release(dev, qp_id); - if (ret < 0) - return ret; - } - } - - queue_pairs = dev->data->queue_pairs; - queue_pairs = - rte_realloc(queue_pairs, sizeof(queue_pairs[0]) * conf->nb_queue_pairs, - RTE_CACHE_LINE_SIZE); - if (queue_pairs == NULL) { - dev->data->nb_queue_pairs = 0; - plt_err("Failed to realloc queue_pairs, nb_queue_pairs = %u", - conf->nb_queue_pairs); - ret = -ENOMEM; - goto error; - } - - memset(queue_pairs, 0, sizeof(queue_pairs[0]) * conf->nb_queue_pairs); - dev->data->queue_pairs = queue_pairs; - } - dev->data->nb_queue_pairs = conf->nb_queue_pairs; - - /* Allocate ML models */ - if (dev->data->models == NULL) { - mz_size = sizeof(dev->data->models[0]) * conf->nb_models; - dev->data->models = rte_zmalloc("cn10k_mldev_models", mz_size, RTE_CACHE_LINE_SIZE); - if (dev->data->models == NULL) { - dev->data->nb_models = 0; - plt_err("Failed to get memory for ml_models, nb_models %u", - conf->nb_models); - ret = -ENOMEM; - goto error; - } - } else { - /* Re-configure */ - void **models; - - /* Stop and unload all models */ - for (model_id = 0; model_id < dev->data->nb_models; model_id++) { - model = dev->data->models[model_id]; - if (model != NULL) { - if (model->state == ML_CNXK_MODEL_STATE_STARTED) { - if (cn10k_ml_model_stop(dev, model_id) != 0) - plt_err("Could not stop model %u", model_id); - } - if (model->state == ML_CNXK_MODEL_STATE_LOADED) { - if (cn10k_ml_model_unload(dev, model_id) != 0) - plt_err("Could not unload model %u", model_id); - } - dev->data->models[model_id] = NULL; - } - } - - models = dev->data->models; - models = rte_realloc(models, sizeof(models[0]) * conf->nb_models, - RTE_CACHE_LINE_SIZE); - if (models == NULL) { - dev->data->nb_models = 0; - plt_err("Failed to realloc ml_models, nb_models = %u", conf->nb_models); - ret = -ENOMEM; - goto error; - } - memset(models, 0, 
sizeof(models[0]) * conf->nb_models); - dev->data->models = models; - } - dev->data->nb_models = conf->nb_models; - ocm = &cn10k_mldev->ocm; ocm->num_tiles = ML_CN10K_OCM_NUMTILES; ocm->size_per_tile = ML_CN10K_OCM_TILESIZE; @@ -1042,8 +908,7 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c rte_zmalloc("ocm_mask", ocm->mask_words * ocm->num_tiles, RTE_CACHE_LINE_SIZE); if (ocm->ocm_mask == NULL) { plt_err("Unable to allocate memory for OCM mask"); - ret = -ENOMEM; - goto error; + return -ENOMEM; } for (tile_id = 0; tile_id < ocm->num_tiles; tile_id++) { @@ -1054,10 +919,10 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c rte_spinlock_init(&ocm->lock); /* Initialize xstats */ - ret = cn10k_ml_xstats_init(dev); + ret = cn10k_ml_xstats_init(cnxk_mldev->mldev); if (ret != 0) { plt_err("Failed to initialize xstats"); - goto error; + return ret; } /* Set JCMDQ enqueue function */ @@ -1071,77 +936,25 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c cn10k_mldev->set_poll_ptr = cn10k_ml_set_poll_ptr; cn10k_mldev->get_poll_ptr = cn10k_ml_get_poll_ptr; - dev->enqueue_burst = cn10k_ml_enqueue_burst; - dev->dequeue_burst = cn10k_ml_dequeue_burst; - dev->op_error_get = cn10k_ml_op_error_get; - - cnxk_mldev->nb_models_loaded = 0; - cnxk_mldev->nb_models_started = 0; - cnxk_mldev->nb_models_stopped = 0; - cnxk_mldev->nb_models_unloaded = 0; - cnxk_mldev->state = ML_CNXK_DEV_STATE_CONFIGURED; + cnxk_mldev->mldev->enqueue_burst = cn10k_ml_enqueue_burst; + cnxk_mldev->mldev->dequeue_burst = cn10k_ml_dequeue_burst; + cnxk_mldev->mldev->op_error_get = cn10k_ml_op_error_get; return 0; - -error: - rte_free(dev->data->queue_pairs); - - rte_free(dev->data->models); - - return ret; } int -cn10k_ml_dev_close(struct rte_ml_dev *dev) +cn10k_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev) { struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - struct cnxk_ml_model *model; - 
struct cnxk_ml_qp *qp; - uint16_t model_id; - uint16_t qp_id; - if (dev == NULL) - return -EINVAL; - - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; /* Release ocm_mask memory */ rte_free(cn10k_mldev->ocm.ocm_mask); - /* Stop and unload all models */ - for (model_id = 0; model_id < dev->data->nb_models; model_id++) { - model = dev->data->models[model_id]; - if (model != NULL) { - if (model->state == ML_CNXK_MODEL_STATE_STARTED) { - if (cn10k_ml_model_stop(dev, model_id) != 0) - plt_err("Could not stop model %u", model_id); - } - if (model->state == ML_CNXK_MODEL_STATE_LOADED) { - if (cn10k_ml_model_unload(dev, model_id) != 0) - plt_err("Could not unload model %u", model_id); - } - dev->data->models[model_id] = NULL; - } - } - - rte_free(dev->data->models); - - /* Destroy all queue pairs */ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { - qp = dev->data->queue_pairs[qp_id]; - if (qp != NULL) { - if (cnxk_ml_qp_destroy(dev, qp) != 0) - plt_err("Could not destroy queue pair %u", qp_id); - dev->data->queue_pairs[qp_id] = NULL; - } - } - - rte_free(dev->data->queue_pairs); - /* Un-initialize xstats */ - cn10k_ml_xstats_uninit(dev); + cn10k_ml_xstats_uninit(cnxk_mldev->mldev); /* Unload firmware */ cn10k_ml_fw_unload(cnxk_mldev); @@ -1158,20 +971,15 @@ cn10k_ml_dev_close(struct rte_ml_dev *dev) roc_ml_reg_write64(&cn10k_mldev->roc, 0, ML_MLR_BASE); plt_ml_dbg("ML_MLR_BASE = 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_MLR_BASE)); - cnxk_mldev->state = ML_CNXK_DEV_STATE_CLOSED; - - /* Remove PCI device */ - return rte_dev_remove(dev->device); + return 0; } int -cn10k_ml_dev_start(struct rte_ml_dev *dev) +cn10k_ml_dev_start(struct cnxk_ml_dev *cnxk_mldev) { struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; uint64_t reg_val64; - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG); @@ -1179,19 +987,15 @@ 
cn10k_ml_dev_start(struct rte_ml_dev *dev) roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG); plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG)); - cnxk_mldev->state = ML_CNXK_DEV_STATE_STARTED; - return 0; } int -cn10k_ml_dev_stop(struct rte_ml_dev *dev) +cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev) { struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; uint64_t reg_val64; - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; reg_val64 = roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG); @@ -1199,8 +1003,6 @@ cn10k_ml_dev_stop(struct rte_ml_dev *dev) roc_ml_reg_write64(&cn10k_mldev->roc, reg_val64, ML_CFG); plt_ml_dbg("ML_CFG => 0x%016lx", roc_ml_reg_read64(&cn10k_mldev->roc, ML_CFG)); - cnxk_mldev->state = ML_CNXK_DEV_STATE_CONFIGURED; - return 0; } @@ -1221,7 +1023,7 @@ cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, if (dev->data->queue_pairs[queue_pair_id] != NULL) cn10k_ml_dev_queue_pair_release(dev, queue_pair_id); - cn10k_ml_dev_info_get(dev, &dev_info); + cnxk_ml_dev_info_get(dev, &dev_info); if ((qp_conf->nb_desc > dev_info.max_desc) || (qp_conf->nb_desc == 0)) { plt_err("Could not setup queue pair for %u descriptors", qp_conf->nb_desc); return -EINVAL; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 16480b9ad8..d50b5bede7 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -10,6 +10,9 @@ #include +struct cnxk_ml_dev; +struct cnxk_ml_qp; + /* Firmware version string length */ #define MLDEV_FIRMWARE_VERSION_LENGTH 32 @@ -286,11 +289,11 @@ struct cn10k_ml_req { }; /* Device ops */ -int cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info); -int cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *conf); -int cn10k_ml_dev_close(struct rte_ml_dev *dev); -int cn10k_ml_dev_start(struct rte_ml_dev *dev); -int cn10k_ml_dev_stop(struct 
rte_ml_dev *dev); +int cn10k_ml_dev_info_get(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_dev_info *dev_info); +int cn10k_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf); +int cn10k_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); +int cn10k_ml_dev_start(struct cnxk_ml_dev *cnxk_mldev); +int cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp); int cn10k_ml_dev_selftest(struct rte_ml_dev *dev); int cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, @@ -336,4 +339,7 @@ __rte_hot int cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op struct rte_ml_op_error *error); __rte_hot int cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op); +/* Temporarily set below functions as non-static */ +int cnxk_ml_qp_destroy(const struct rte_ml_dev *dev, struct cnxk_ml_qp *qp); + #endif /* _CN10K_ML_OPS_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_dev.h b/drivers/ml/cnxk/cnxk_ml_dev.h index 51315de622..02605fa28f 100644 --- a/drivers/ml/cnxk/cnxk_ml_dev.h +++ b/drivers/ml/cnxk/cnxk_ml_dev.h @@ -53,6 +53,9 @@ struct cnxk_ml_dev { /* CN10K device structure */ struct cn10k_ml_dev cn10k_mldev; + + /* Maximum number of layers */ + uint64_t max_nb_layers; }; #endif /* _CNXK_ML_DEV_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 89e0d9d32c..83d5cbae58 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -7,15 +7,291 @@ #include "cn10k_ml_ops.h" +#include "cnxk_ml_dev.h" +#include "cnxk_ml_io.h" +#include "cnxk_ml_model.h" #include "cnxk_ml_ops.h" +int +cnxk_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) +{ + struct cnxk_ml_dev *cnxk_mldev; + + if (dev == NULL || dev_info == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); + dev_info->driver_name = dev->device->driver->name; + 
dev_info->max_models = ML_CNXK_MAX_MODELS; + + return cn10k_ml_dev_info_get(cnxk_mldev, dev_info); +} + +static int +cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *conf) +{ + struct rte_ml_dev_info dev_info; + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + struct cnxk_ml_qp *qp; + uint16_t model_id; + uint32_t mz_size; + uint16_t qp_id; + int ret; + + if (dev == NULL) + return -EINVAL; + + /* Get CNXK device handle */ + cnxk_mldev = dev->data->dev_private; + + cnxk_ml_dev_info_get(dev, &dev_info); + if (conf->nb_models > dev_info.max_models) { + plt_err("Invalid device config, nb_models > %u\n", dev_info.max_models); + return -EINVAL; + } + + if (conf->nb_queue_pairs > dev_info.max_queue_pairs) { + plt_err("Invalid device config, nb_queue_pairs > %u\n", dev_info.max_queue_pairs); + return -EINVAL; + } + + if (cnxk_mldev->state == ML_CNXK_DEV_STATE_PROBED) { + plt_ml_dbg("Configuring ML device, nb_queue_pairs = %u, nb_models = %u", + conf->nb_queue_pairs, conf->nb_models); + + /* Load firmware */ + ret = cn10k_ml_fw_load(cnxk_mldev); + if (ret != 0) + return ret; + } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CONFIGURED) { + plt_ml_dbg("Re-configuring ML device, nb_queue_pairs = %u, nb_models = %u", + conf->nb_queue_pairs, conf->nb_models); + } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_STARTED) { + plt_err("Device can't be reconfigured in started state\n"); + return -ENOTSUP; + } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CLOSED) { + plt_err("Device can't be reconfigured after close\n"); + return -ENOTSUP; + } + + /* Configure queue-pairs */ + if (dev->data->queue_pairs == NULL) { + mz_size = sizeof(dev->data->queue_pairs[0]) * conf->nb_queue_pairs; + dev->data->queue_pairs = + rte_zmalloc("cnxk_mldev_queue_pairs", mz_size, RTE_CACHE_LINE_SIZE); + if (dev->data->queue_pairs == NULL) { + dev->data->nb_queue_pairs = 0; + plt_err("Failed to get memory for queue_pairs, nb_queue_pairs %u", + 
conf->nb_queue_pairs); + return -ENOMEM; + } + } else { /* Re-configure */ + void **queue_pairs; + + /* Release all queue pairs as ML spec doesn't support queue_pair_destroy. */ + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + qp = dev->data->queue_pairs[qp_id]; + if (qp != NULL) { + ret = cn10k_ml_dev_queue_pair_release(dev, qp_id); + if (ret < 0) + return ret; + } + } + + queue_pairs = dev->data->queue_pairs; + queue_pairs = + rte_realloc(queue_pairs, sizeof(queue_pairs[0]) * conf->nb_queue_pairs, + RTE_CACHE_LINE_SIZE); + if (queue_pairs == NULL) { + dev->data->nb_queue_pairs = 0; + plt_err("Failed to realloc queue_pairs, nb_queue_pairs = %u", + conf->nb_queue_pairs); + ret = -ENOMEM; + goto error; + } + + memset(queue_pairs, 0, sizeof(queue_pairs[0]) * conf->nb_queue_pairs); + dev->data->queue_pairs = queue_pairs; + } + dev->data->nb_queue_pairs = conf->nb_queue_pairs; + + /* Allocate ML models */ + if (dev->data->models == NULL) { + mz_size = sizeof(dev->data->models[0]) * conf->nb_models; + dev->data->models = rte_zmalloc("cnxk_mldev_models", mz_size, RTE_CACHE_LINE_SIZE); + if (dev->data->models == NULL) { + dev->data->nb_models = 0; + plt_err("Failed to get memory for ml_models, nb_models %u", + conf->nb_models); + ret = -ENOMEM; + goto error; + } + } else { + /* Re-configure */ + void **models; + + /* Stop and unload all models */ + for (model_id = 0; model_id < dev->data->nb_models; model_id++) { + model = dev->data->models[model_id]; + if (model != NULL) { + if (model->state == ML_CNXK_MODEL_STATE_STARTED) { + if (cn10k_ml_model_stop(dev, model_id) != 0) + plt_err("Could not stop model %u", model_id); + } + if (model->state == ML_CNXK_MODEL_STATE_LOADED) { + if (cn10k_ml_model_unload(dev, model_id) != 0) + plt_err("Could not unload model %u", model_id); + } + dev->data->models[model_id] = NULL; + } + } + + models = dev->data->models; + models = rte_realloc(models, sizeof(models[0]) * conf->nb_models, + RTE_CACHE_LINE_SIZE); + if (models 
== NULL) { + dev->data->nb_models = 0; + plt_err("Failed to realloc ml_models, nb_models = %u", conf->nb_models); + ret = -ENOMEM; + goto error; + } + memset(models, 0, sizeof(models[0]) * conf->nb_models); + dev->data->models = models; + } + dev->data->nb_models = conf->nb_models; + + ret = cn10k_ml_dev_configure(cnxk_mldev, conf); + if (ret != 0) { + plt_err("Failed to configure CN10K ML Device"); + goto error; + } + + /* Set device capabilities */ + cnxk_mldev->max_nb_layers = + cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_models; + + cnxk_mldev->nb_models_loaded = 0; + cnxk_mldev->nb_models_started = 0; + cnxk_mldev->nb_models_stopped = 0; + cnxk_mldev->nb_models_unloaded = 0; + cnxk_mldev->state = ML_CNXK_DEV_STATE_CONFIGURED; + + return 0; + +error: + rte_free(dev->data->queue_pairs); + rte_free(dev->data->models); + + return ret; +} + +static int +cnxk_ml_dev_close(struct rte_ml_dev *dev) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + struct cnxk_ml_qp *qp; + uint16_t model_id; + uint16_t qp_id; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + if (cn10k_ml_dev_close(cnxk_mldev) != 0) + plt_err("Failed to close CN10K ML Device"); + + /* Stop and unload all models */ + for (model_id = 0; model_id < dev->data->nb_models; model_id++) { + model = dev->data->models[model_id]; + if (model != NULL) { + if (model->state == ML_CNXK_MODEL_STATE_STARTED) { + if (cn10k_ml_model_stop(dev, model_id) != 0) + plt_err("Could not stop model %u", model_id); + } + if (model->state == ML_CNXK_MODEL_STATE_LOADED) { + if (cn10k_ml_model_unload(dev, model_id) != 0) + plt_err("Could not unload model %u", model_id); + } + dev->data->models[model_id] = NULL; + } + } + + rte_free(dev->data->models); + + /* Destroy all queue pairs */ + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + qp = dev->data->queue_pairs[qp_id]; + if (qp != NULL) { + if (cnxk_ml_qp_destroy(dev, qp) != 0) + plt_err("Could not 
destroy queue pair %u", qp_id); + dev->data->queue_pairs[qp_id] = NULL; + } + } + + rte_free(dev->data->queue_pairs); + + cnxk_mldev->state = ML_CNXK_DEV_STATE_CLOSED; + + /* Remove PCI device */ + return rte_dev_remove(dev->device); +} + +static int +cnxk_ml_dev_start(struct rte_ml_dev *dev) +{ + struct cnxk_ml_dev *cnxk_mldev; + int ret; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + ret = cn10k_ml_dev_start(cnxk_mldev); + if (ret != 0) { + plt_err("Failed to start CN10K ML Device"); + return ret; + } + + cnxk_mldev->state = ML_CNXK_DEV_STATE_STARTED; + + return 0; +} + +static int +cnxk_ml_dev_stop(struct rte_ml_dev *dev) +{ + struct cnxk_ml_dev *cnxk_mldev; + int ret; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + ret = cn10k_ml_dev_stop(cnxk_mldev); + if (ret != 0) { + plt_err("Failed to stop CN10K ML Device"); + return ret; + } + + cnxk_mldev->state = ML_CNXK_DEV_STATE_CONFIGURED; + + return 0; +} + struct rte_ml_dev_ops cnxk_ml_ops = { /* Device control ops */ - .dev_info_get = cn10k_ml_dev_info_get, - .dev_configure = cn10k_ml_dev_configure, - .dev_close = cn10k_ml_dev_close, - .dev_start = cn10k_ml_dev_start, - .dev_stop = cn10k_ml_dev_stop, + .dev_info_get = cnxk_ml_dev_info_get, + .dev_configure = cnxk_ml_dev_configure, + .dev_close = cnxk_ml_dev_close, + .dev_start = cnxk_ml_dev_start, + .dev_stop = cnxk_ml_dev_stop, .dev_dump = cn10k_ml_dev_dump, .dev_selftest = cn10k_ml_dev_selftest, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index a925c07580..2996928d7d 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -62,4 +62,7 @@ struct cnxk_ml_qp { extern struct rte_ml_dev_ops cnxk_ml_ops; +/* Temporarily set cnxk driver functions as non-static */ +int cnxk_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info); + #endif /* _CNXK_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:00 2023 Content-Type: text/plain; 
charset="utf-8"
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131685
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 09/34] ml/cnxk: update queue-pair handling functions
Date: Wed, 20 Sep 2023 00:25:00 -0700
Message-ID: <20230920072528.14185-10-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>

Added cnxk wrapper function to handle ML device queue-pairs.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 135 +---------------------------- drivers/ml/cnxk/cn10k_ml_ops.h | 7 +- drivers/ml/cnxk/cnxk_ml_ops.c | 153 ++++++++++++++++++++++++++++++++- drivers/ml/cnxk/cnxk_ml_ops.h | 3 - 4 files changed, 154 insertions(+), 144 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 0f32f3b2bb..330cb050cb 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -99,93 +99,12 @@ cn10k_ml_get_poll_ptr(struct cnxk_ml_req *req) return plt_read64(req->status); } -static void -qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) -{ - snprintf(name, size, "cnxk_ml_qp_mem_%u:%u", dev_id, qp_id); -} - -int -cnxk_ml_qp_destroy(const struct rte_ml_dev *dev, struct cnxk_ml_qp *qp) -{ - const struct rte_memzone *qp_mem; - char name[RTE_MEMZONE_NAMESIZE]; - int ret; - - qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id, qp->id); - qp_mem = rte_memzone_lookup(name); - ret = rte_memzone_free(qp_mem); - if (ret) - return ret; - - rte_free(qp); - - return 0; -} - -int -cn10k_ml_dev_queue_pair_release(struct rte_ml_dev *dev, uint16_t queue_pair_id) -{ - struct cnxk_ml_qp *qp; - int ret; - - qp = dev->data->queue_pairs[queue_pair_id]; - if (qp == NULL) - return -EINVAL; - - ret = cnxk_ml_qp_destroy(dev, qp); - if (ret) { - plt_err("Could not destroy queue pair %u", queue_pair_id); - return ret; - } - - dev->data->queue_pairs[queue_pair_id] = NULL; - - return 0; -} - -static struct cnxk_ml_qp * -cnxk_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_desc, int socket_id) +void +cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *qp) { - const struct rte_memzone *qp_mem; - char name[RTE_MEMZONE_NAMESIZE]; - struct cnxk_ml_qp *qp; - uint32_t len; - uint8_t *va; uint64_t i; - /* Allocate queue pair */ - qp = rte_zmalloc_socket("cn10k_ml_pmd_queue_pair", sizeof(struct cnxk_ml_qp), ROC_ALIGN, - 
socket_id); - if (qp == NULL) { - plt_err("Could not allocate queue pair"); - return NULL; - } - - /* For request queue */ - len = nb_desc * sizeof(struct cnxk_ml_req); - qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id, qp_id); - qp_mem = rte_memzone_reserve_aligned( - name, len, socket_id, RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB, ROC_ALIGN); - if (qp_mem == NULL) { - plt_err("Could not reserve memzone: %s", name); - goto qp_free; - } - - va = qp_mem->addr; - memset(va, 0, len); - - /* Initialize Request queue */ - qp->id = qp_id; - qp->queue.reqs = (struct cnxk_ml_req *)va; - qp->queue.head = 0; - qp->queue.tail = 0; - qp->queue.wait_cycles = ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); - qp->nb_desc = nb_desc; - qp->stats.enqueued_count = 0; - qp->stats.dequeued_count = 0; - qp->stats.enqueue_err_count = 0; - qp->stats.dequeue_err_count = 0; + RTE_SET_USED(cnxk_mldev); /* Initialize job command */ for (i = 0; i < qp->nb_desc; i++) { @@ -193,13 +112,6 @@ cnxk_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_desc qp->queue.reqs[i].cn10k_req.jcmd.w1.s.jobptr = PLT_U64_CAST(&qp->queue.reqs[i].cn10k_req.jd); } - - return qp; - -qp_free: - rte_free(qp); - - return NULL; } static void @@ -1006,47 +918,6 @@ cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev) return 0; } -int -cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, - const struct rte_ml_dev_qp_conf *qp_conf, int socket_id) -{ - struct rte_ml_dev_info dev_info; - struct cnxk_ml_qp *qp; - uint32_t nb_desc; - - if (queue_pair_id >= dev->data->nb_queue_pairs) { - plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)\n", queue_pair_id, - dev->data->nb_queue_pairs); - return -EINVAL; - } - - if (dev->data->queue_pairs[queue_pair_id] != NULL) - cn10k_ml_dev_queue_pair_release(dev, queue_pair_id); - - cnxk_ml_dev_info_get(dev, &dev_info); - if ((qp_conf->nb_desc > dev_info.max_desc) || (qp_conf->nb_desc == 0)) { - plt_err("Could not setup queue 
pair for %u descriptors", qp_conf->nb_desc); - return -EINVAL; - } - plt_ml_dbg("Creating queue-pair, queue_pair_id = %u, nb_desc = %u", queue_pair_id, - qp_conf->nb_desc); - - /* As the number of usable descriptors is 1 less than the queue size being created, we - * increment the size of queue by 1 than the requested size, except when the requested size - * is equal to the maximum possible size. - */ - nb_desc = - (qp_conf->nb_desc == dev_info.max_desc) ? dev_info.max_desc : qp_conf->nb_desc + 1; - qp = cnxk_ml_qp_create(dev, queue_pair_id, nb_desc, socket_id); - if (qp == NULL) { - plt_err("Could not create queue pair %u", queue_pair_id); - return -ENOMEM; - } - dev->data->queue_pairs[queue_pair_id] = qp; - - return 0; -} - int cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats) { diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index d50b5bede7..2d0a49d5cd 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -296,9 +296,6 @@ int cn10k_ml_dev_start(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp); int cn10k_ml_dev_selftest(struct rte_ml_dev *dev); -int cn10k_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, - const struct rte_ml_dev_qp_conf *qp_conf, int socket_id); -int cn10k_ml_dev_queue_pair_release(struct rte_ml_dev *dev, uint16_t queue_pair_id); int cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats); void cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev); @@ -339,7 +336,7 @@ __rte_hot int cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op struct rte_ml_op_error *error); __rte_hot int cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op); -/* Temporarily set below functions as non-static */ -int cnxk_ml_qp_destroy(const struct rte_ml_dev *dev, struct cnxk_ml_qp *qp); +/* Misc ops */ +void 
cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *qp); #endif /* _CN10K_ML_OPS_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 83d5cbae58..1767a8a3db 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -12,7 +12,107 @@ #include "cnxk_ml_model.h" #include "cnxk_ml_ops.h" -int +static void +qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) +{ + snprintf(name, size, "cnxk_ml_qp_mem_%u:%u", dev_id, qp_id); +} + +static int +cnxk_ml_qp_destroy(const struct rte_ml_dev *dev, struct cnxk_ml_qp *qp) +{ + const struct rte_memzone *qp_mem; + char name[RTE_MEMZONE_NAMESIZE]; + int ret; + + qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id, qp->id); + qp_mem = rte_memzone_lookup(name); + ret = rte_memzone_free(qp_mem); + if (ret) + return ret; + + rte_free(qp); + + return 0; +} + +static int +cnxk_ml_dev_queue_pair_release(struct rte_ml_dev *dev, uint16_t queue_pair_id) +{ + struct cnxk_ml_qp *qp; + int ret; + + qp = dev->data->queue_pairs[queue_pair_id]; + if (qp == NULL) + return -EINVAL; + + ret = cnxk_ml_qp_destroy(dev, qp); + if (ret) { + plt_err("Could not destroy queue pair %u", queue_pair_id); + return ret; + } + + dev->data->queue_pairs[queue_pair_id] = NULL; + + return 0; +} + +static struct cnxk_ml_qp * +cnxk_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_desc, int socket_id) +{ + const struct rte_memzone *qp_mem; + char name[RTE_MEMZONE_NAMESIZE]; + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_qp *qp; + uint32_t len; + uint8_t *va; + + cnxk_mldev = dev->data->dev_private; + + /* Allocate queue pair */ + qp = rte_zmalloc_socket("cnxk_ml_pmd_queue_pair", sizeof(struct cnxk_ml_qp), ROC_ALIGN, + socket_id); + if (qp == NULL) { + plt_err("Could not allocate queue pair"); + return NULL; + } + + /* For request queue */ + len = nb_desc * sizeof(struct cnxk_ml_req); + qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, 
dev->data->dev_id, qp_id); + qp_mem = rte_memzone_reserve_aligned( + name, len, socket_id, RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB, ROC_ALIGN); + if (qp_mem == NULL) { + plt_err("Could not reserve memzone: %s", name); + goto qp_free; + } + + va = qp_mem->addr; + memset(va, 0, len); + + /* Initialize Request queue */ + qp->id = qp_id; + qp->queue.reqs = (struct cnxk_ml_req *)va; + qp->queue.head = 0; + qp->queue.tail = 0; + qp->queue.wait_cycles = ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); + qp->nb_desc = nb_desc; + qp->stats.enqueued_count = 0; + qp->stats.dequeued_count = 0; + qp->stats.enqueue_err_count = 0; + qp->stats.dequeue_err_count = 0; + + cn10k_ml_qp_initialize(cnxk_mldev, qp); + + return qp; + +qp_free: + rte_free(qp); + + return NULL; +} + +static int cnxk_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) { struct cnxk_ml_dev *cnxk_mldev; @@ -95,7 +195,7 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { qp = dev->data->queue_pairs[qp_id]; if (qp != NULL) { - ret = cn10k_ml_dev_queue_pair_release(dev, qp_id); + ret = cnxk_ml_dev_queue_pair_release(dev, qp_id); if (ret < 0) return ret; } @@ -285,6 +385,51 @@ cnxk_ml_dev_stop(struct rte_ml_dev *dev) return 0; } +static int +cnxk_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, + const struct rte_ml_dev_qp_conf *qp_conf, int socket_id) +{ + struct rte_ml_dev_info dev_info; + struct cnxk_ml_qp *qp; + uint32_t nb_desc; + + if (queue_pair_id >= dev->data->nb_queue_pairs) { + plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)\n", queue_pair_id, + dev->data->nb_queue_pairs); + return -EINVAL; + } + + if (dev->data->queue_pairs[queue_pair_id] != NULL) + cnxk_ml_dev_queue_pair_release(dev, queue_pair_id); + + cnxk_ml_dev_info_get(dev, &dev_info); + if (qp_conf->nb_desc == 0) { + plt_err("Could not setup queue pair for %u descriptors", qp_conf->nb_desc); + return 
-EINVAL; + } else if (qp_conf->nb_desc > dev_info.max_desc) { + plt_err("Could not setup queue pair for %u descriptors (> %u)", qp_conf->nb_desc, + dev_info.max_desc); + return -EINVAL; + } + plt_ml_dbg("Creating queue-pair, queue_pair_id = %u, nb_desc = %u", queue_pair_id, + qp_conf->nb_desc); + + /* As the number of usable descriptors is 1 less than the queue size being created, we + * increment the size of queue by 1 than the requested size, except when the requested size + * is equal to the maximum possible size. + */ + nb_desc = + (qp_conf->nb_desc == dev_info.max_desc) ? dev_info.max_desc : qp_conf->nb_desc + 1; + qp = cnxk_ml_qp_create(dev, queue_pair_id, nb_desc, socket_id); + if (qp == NULL) { + plt_err("Could not create queue pair %u", queue_pair_id); + return -ENOMEM; + } + dev->data->queue_pairs[queue_pair_id] = qp; + + return 0; +} + struct rte_ml_dev_ops cnxk_ml_ops = { /* Device control ops */ .dev_info_get = cnxk_ml_dev_info_get, @@ -296,8 +441,8 @@ struct rte_ml_dev_ops cnxk_ml_ops = { .dev_selftest = cn10k_ml_dev_selftest, /* Queue-pair handling ops */ - .dev_queue_pair_setup = cn10k_ml_dev_queue_pair_setup, - .dev_queue_pair_release = cn10k_ml_dev_queue_pair_release, + .dev_queue_pair_setup = cnxk_ml_dev_queue_pair_setup, + .dev_queue_pair_release = cnxk_ml_dev_queue_pair_release, /* Stats ops */ .dev_stats_get = cn10k_ml_dev_stats_get, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index 2996928d7d..a925c07580 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -62,7 +62,4 @@ struct cnxk_ml_qp { extern struct rte_ml_dev_ops cnxk_ml_ops; -/* Temporarily set cnxk driver functions as non-static */ -int cnxk_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info); - #endif /* _CNXK_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi 
X-Patchwork-Id: 131683
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 10/34] ml/cnxk: update model load and unload functions
Date: Wed, 20 Sep 2023 00:25:01 -0700
Message-ID: <20230920072528.14185-11-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Implemented cnxk wrapper functions to load and unload ML models. Wrapper functions would invoke the cn10k model load and unload functions.
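The wrapper-and-dispatch pattern this patch describes — a generic cnxk-layer entry point that validates arguments and then delegates to the SoC-specific cn10k implementation — can be sketched as below. Note these names and signatures are simplified stand-ins for illustration only, not the actual driver API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the hardware-specific loader: the real driver's
 * cn10k_ml_layer_load() programs OCM pages and DMA addresses; here we only
 * model the "assign an ID on success" contract. */
static int
cn10k_model_load(const void *buf, size_t size, int *id)
{
	if (buf == NULL || size == 0)
		return -1;
	*id = 42; /* pretend the hardware layer assigned a model ID */
	return 0;
}

/* Generic cnxk wrapper: performs the SoC-agnostic checks, then dispatches
 * to the cn10k implementation for the real work. */
static int
cnxk_model_load(const void *buf, size_t size, int *id)
{
	if (id == NULL)
		return -1;
	return cn10k_model_load(buf, size, id);
}
```

Splitting the code this way lets the validation and bookkeeping live once in the cnxk layer while each SoC family supplies only its hardware-specific load/unload body.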
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_model.c | 239 ++++++++++++------------- drivers/ml/cnxk/cn10k_ml_model.h | 25 +-- drivers/ml/cnxk/cn10k_ml_ops.c | 296 ++++++++++++++++++------------- drivers/ml/cnxk/cn10k_ml_ops.h | 12 +- drivers/ml/cnxk/cnxk_ml_dev.h | 15 ++ drivers/ml/cnxk/cnxk_ml_ops.c | 144 ++++++++++++++- drivers/ml/cnxk/cnxk_ml_ops.h | 2 + 7 files changed, 455 insertions(+), 278 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_model.c b/drivers/ml/cnxk/cn10k_ml_model.c index 2a0ae44cfd..9a336cd18f 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.c +++ b/drivers/ml/cnxk/cn10k_ml_model.c @@ -6,7 +6,6 @@ #include -#include "cn10k_ml_dev.h" #include "cn10k_ml_model.h" #include "cn10k_ml_ocm.h" @@ -318,42 +317,31 @@ cn10k_ml_layer_addr_update(struct cnxk_ml_layer *layer, uint8_t *buffer, uint8_t { struct cn10k_ml_model_metadata *metadata; struct cn10k_ml_layer_addr *addr; - size_t model_data_size; uint8_t *dma_addr_load; - uint8_t *dma_addr_run; int fpos; metadata = &layer->glow.metadata; addr = &layer->glow.addr; - model_data_size = metadata->init_model.file_size + metadata->main_model.file_size + - metadata->finish_model.file_size + metadata->weights_bias.file_size; /* Base address */ addr->base_dma_addr_load = base_dma_addr; - addr->base_dma_addr_run = PLT_PTR_ADD(addr->base_dma_addr_load, model_data_size); /* Init section */ dma_addr_load = addr->base_dma_addr_load; - dma_addr_run = addr->base_dma_addr_run; fpos = sizeof(struct cn10k_ml_model_metadata); addr->init_load_addr = dma_addr_load; - addr->init_run_addr = dma_addr_run; rte_memcpy(dma_addr_load, PLT_PTR_ADD(buffer, fpos), metadata->init_model.file_size); /* Main section */ dma_addr_load += metadata->init_model.file_size; - dma_addr_run += metadata->init_model.file_size; fpos += metadata->init_model.file_size; addr->main_load_addr = dma_addr_load; - addr->main_run_addr = dma_addr_run; rte_memcpy(dma_addr_load, PLT_PTR_ADD(buffer, fpos), metadata->main_model.file_size); /* 
Finish section */ dma_addr_load += metadata->main_model.file_size; - dma_addr_run += metadata->main_model.file_size; fpos += metadata->main_model.file_size; addr->finish_load_addr = dma_addr_load; - addr->finish_run_addr = dma_addr_run; rte_memcpy(dma_addr_load, PLT_PTR_ADD(buffer, fpos), metadata->finish_model.file_size); /* Weights and Bias section */ @@ -365,140 +353,140 @@ cn10k_ml_layer_addr_update(struct cnxk_ml_layer *layer, uint8_t *buffer, uint8_t } void -cn10k_ml_layer_info_update(struct cnxk_ml_layer *layer) +cn10k_ml_layer_io_info_update(struct cnxk_ml_io_info *io_info, + struct cn10k_ml_model_metadata *metadata) { - struct cn10k_ml_model_metadata *metadata; uint8_t i; uint8_t j; - metadata = &layer->glow.metadata; - /* Inputs */ - layer->info.nb_inputs = metadata->model.num_input; - layer->info.total_input_sz_d = 0; - layer->info.total_input_sz_q = 0; + io_info->nb_inputs = metadata->model.num_input; + io_info->total_input_sz_d = 0; + io_info->total_input_sz_q = 0; for (i = 0; i < metadata->model.num_input; i++) { if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - strncpy(layer->info.input[i].name, (char *)metadata->input1[i].input_name, + strncpy(io_info->input[i].name, (char *)metadata->input1[i].input_name, MRVL_ML_INPUT_NAME_LEN); - layer->info.input[i].dtype = metadata->input1[i].input_type; - layer->info.input[i].qtype = metadata->input1[i].model_input_type; - layer->info.input[i].nb_dims = 4; - layer->info.input[i].shape[0] = metadata->input1[i].shape.w; - layer->info.input[i].shape[1] = metadata->input1[i].shape.x; - layer->info.input[i].shape[2] = metadata->input1[i].shape.y; - layer->info.input[i].shape[3] = metadata->input1[i].shape.z; - layer->info.input[i].nb_elements = + io_info->input[i].dtype = metadata->input1[i].input_type; + io_info->input[i].qtype = metadata->input1[i].model_input_type; + io_info->input[i].nb_dims = 4; + io_info->input[i].shape[0] = metadata->input1[i].shape.w; + io_info->input[i].shape[1] = metadata->input1[i].shape.x; + 
io_info->input[i].shape[2] = metadata->input1[i].shape.y; + io_info->input[i].shape[3] = metadata->input1[i].shape.z; + io_info->input[i].nb_elements = metadata->input1[i].shape.w * metadata->input1[i].shape.x * metadata->input1[i].shape.y * metadata->input1[i].shape.z; - layer->info.input[i].sz_d = - layer->info.input[i].nb_elements * + io_info->input[i].sz_d = + io_info->input[i].nb_elements * rte_ml_io_type_size_get(metadata->input1[i].input_type); - layer->info.input[i].sz_q = - layer->info.input[i].nb_elements * + io_info->input[i].sz_q = + io_info->input[i].nb_elements * rte_ml_io_type_size_get(metadata->input1[i].model_input_type); - layer->info.input[i].scale = metadata->input1[i].qscale; + io_info->input[i].scale = metadata->input1[i].qscale; - layer->info.total_input_sz_d += layer->info.input[i].sz_d; - layer->info.total_input_sz_q += layer->info.input[i].sz_q; + io_info->total_input_sz_d += io_info->input[i].sz_d; + io_info->total_input_sz_q += io_info->input[i].sz_q; plt_ml_dbg( - "index = %u, input1[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", - layer->index, i, metadata->input1[i].shape.w, + "layer_name = %s, input1[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", + metadata->model.name, i, metadata->input1[i].shape.w, metadata->input1[i].shape.x, metadata->input1[i].shape.y, - metadata->input1[i].shape.z, layer->info.input[i].sz_d, - layer->info.input[i].sz_q); + metadata->input1[i].shape.z, io_info->input[i].sz_d, + io_info->input[i].sz_q); } else { j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - strncpy(layer->info.input[i].name, (char *)metadata->input2[j].input_name, + strncpy(io_info->input[i].name, (char *)metadata->input2[j].input_name, MRVL_ML_INPUT_NAME_LEN); - layer->info.input[i].dtype = metadata->input2[j].input_type; - layer->info.input[i].qtype = metadata->input2[j].model_input_type; - layer->info.input[i].nb_dims = 4; - layer->info.input[i].shape[0] = metadata->input2[j].shape.w; - layer->info.input[i].shape[1] = metadata->input2[j].shape.x; 
- layer->info.input[i].shape[2] = metadata->input2[j].shape.y; - layer->info.input[i].shape[3] = metadata->input2[j].shape.z; - layer->info.input[i].nb_elements = + io_info->input[i].dtype = metadata->input2[j].input_type; + io_info->input[i].qtype = metadata->input2[j].model_input_type; + io_info->input[i].nb_dims = 4; + io_info->input[i].shape[0] = metadata->input2[j].shape.w; + io_info->input[i].shape[1] = metadata->input2[j].shape.x; + io_info->input[i].shape[2] = metadata->input2[j].shape.y; + io_info->input[i].shape[3] = metadata->input2[j].shape.z; + io_info->input[i].nb_elements = metadata->input2[j].shape.w * metadata->input2[j].shape.x * metadata->input2[j].shape.y * metadata->input2[j].shape.z; - layer->info.input[i].sz_d = - layer->info.input[i].nb_elements * + io_info->input[i].sz_d = + io_info->input[i].nb_elements * rte_ml_io_type_size_get(metadata->input2[j].input_type); - layer->info.input[i].sz_q = - layer->info.input[i].nb_elements * + io_info->input[i].sz_q = + io_info->input[i].nb_elements * rte_ml_io_type_size_get(metadata->input2[j].model_input_type); - layer->info.input[i].scale = metadata->input2[j].qscale; + io_info->input[i].scale = metadata->input2[j].qscale; - layer->info.total_input_sz_d += layer->info.input[i].sz_d; - layer->info.total_input_sz_q += layer->info.input[i].sz_q; + io_info->total_input_sz_d += io_info->input[i].sz_d; + io_info->total_input_sz_q += io_info->input[i].sz_q; plt_ml_dbg( - "index = %u, input2[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", - layer->index, j, metadata->input2[j].shape.w, + "layer_name = %s, input2[%u] - w:%u x:%u y:%u z:%u, sz_d = %u sz_q = %u", + metadata->model.name, j, metadata->input2[j].shape.w, metadata->input2[j].shape.x, metadata->input2[j].shape.y, - metadata->input2[j].shape.z, layer->info.input[i].sz_d, - layer->info.input[i].sz_q); + metadata->input2[j].shape.z, io_info->input[i].sz_d, + io_info->input[i].sz_q); } } /* Outputs */ - layer->info.nb_outputs = 
metadata->model.num_output; - layer->info.total_output_sz_q = 0; - layer->info.total_output_sz_d = 0; + io_info->nb_outputs = metadata->model.num_output; + io_info->total_output_sz_q = 0; + io_info->total_output_sz_d = 0; for (i = 0; i < metadata->model.num_output; i++) { if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - strncpy(layer->info.output[i].name, - (char *)metadata->output1[i].output_name, MRVL_ML_OUTPUT_NAME_LEN); - layer->info.output[i].dtype = metadata->output1[i].output_type; - layer->info.output[i].qtype = metadata->output1[i].model_output_type; - layer->info.output[i].nb_dims = 1; - layer->info.output[i].shape[0] = metadata->output1[i].size; - layer->info.output[i].nb_elements = metadata->output1[i].size; - layer->info.output[i].sz_d = - layer->info.output[i].nb_elements * + strncpy(io_info->output[i].name, (char *)metadata->output1[i].output_name, + MRVL_ML_OUTPUT_NAME_LEN); + io_info->output[i].dtype = metadata->output1[i].output_type; + io_info->output[i].qtype = metadata->output1[i].model_output_type; + io_info->output[i].nb_dims = 1; + io_info->output[i].shape[0] = metadata->output1[i].size; + io_info->output[i].nb_elements = metadata->output1[i].size; + io_info->output[i].sz_d = + io_info->output[i].nb_elements * rte_ml_io_type_size_get(metadata->output1[i].output_type); - layer->info.output[i].sz_q = - layer->info.output[i].nb_elements * + io_info->output[i].sz_q = + io_info->output[i].nb_elements * rte_ml_io_type_size_get(metadata->output1[i].model_output_type); - layer->info.output[i].scale = metadata->output1[i].dscale; + io_info->output[i].scale = metadata->output1[i].dscale; - layer->info.total_output_sz_q += layer->info.output[i].sz_q; - layer->info.total_output_sz_d += layer->info.output[i].sz_d; + io_info->total_output_sz_q += io_info->output[i].sz_q; + io_info->total_output_sz_d += io_info->output[i].sz_d; - plt_ml_dbg("index = %u, output1[%u] - sz_d = %u, sz_q = %u", layer->index, - i, layer->info.output[i].sz_d, layer->info.output[i].sz_q); 
+ plt_ml_dbg("layer_name = %s, output1[%u] - sz_d = %u, sz_q = %u", + metadata->model.name, i, io_info->output[i].sz_d, + io_info->output[i].sz_q); } else { j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - strncpy(layer->info.output[i].name, - (char *)metadata->output2[j].output_name, MRVL_ML_OUTPUT_NAME_LEN); - layer->info.output[i].dtype = metadata->output2[j].output_type; - layer->info.output[i].qtype = metadata->output2[j].model_output_type; - layer->info.output[i].nb_dims = 1; - layer->info.output[i].shape[0] = metadata->output2[j].size; - layer->info.output[i].nb_elements = metadata->output2[j].size; - layer->info.output[i].sz_d = - layer->info.output[i].nb_elements * + strncpy(io_info->output[i].name, (char *)metadata->output2[j].output_name, + MRVL_ML_OUTPUT_NAME_LEN); + io_info->output[i].dtype = metadata->output2[j].output_type; + io_info->output[i].qtype = metadata->output2[j].model_output_type; + io_info->output[i].nb_dims = 1; + io_info->output[i].shape[0] = metadata->output2[j].size; + io_info->output[i].nb_elements = metadata->output2[j].size; + io_info->output[i].sz_d = + io_info->output[i].nb_elements * rte_ml_io_type_size_get(metadata->output2[j].output_type); - layer->info.output[i].sz_q = - layer->info.output[i].nb_elements * + io_info->output[i].sz_q = + io_info->output[i].nb_elements * rte_ml_io_type_size_get(metadata->output2[j].model_output_type); - layer->info.output[i].scale = metadata->output2[j].dscale; + io_info->output[i].scale = metadata->output2[j].dscale; - layer->info.total_output_sz_q += layer->info.output[i].sz_q; - layer->info.total_output_sz_d += layer->info.output[i].sz_d; + io_info->total_output_sz_q += io_info->output[i].sz_q; + io_info->total_output_sz_d += io_info->output[i].sz_d; - plt_ml_dbg("index = %u, output2[%u] - sz_d = %u, sz_q = %u", layer->index, - j, layer->info.output[i].sz_d, layer->info.output[i].sz_q); + plt_ml_dbg("layer_name = %s, output2[%u] - sz_d = %u, sz_q = %u", + metadata->model.name, j, 
io_info->output[i].sz_d, + io_info->output[i].sz_q); } } } int -cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_id, uint8_t *buffer, - uint16_t *wb_pages, uint16_t *scratch_pages) +cn10k_ml_model_ocm_pages_count(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, + uint8_t *buffer, uint16_t *wb_pages, uint16_t *scratch_pages) { struct cn10k_ml_model_metadata *metadata; struct cn10k_ml_ocm *ocm; @@ -506,7 +494,7 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_ uint64_t wb_size; metadata = (struct cn10k_ml_model_metadata *)buffer; - ocm = &cn10k_mldev->ocm; + ocm = &cnxk_mldev->cn10k_mldev.ocm; /* Assume wb_size is zero for non-relocatable models */ if (metadata->model.ocm_relocatable) @@ -518,7 +506,7 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_ *wb_pages = wb_size / ocm->page_size + 1; else *wb_pages = wb_size / ocm->page_size; - plt_ml_dbg("model_id = %u, wb_size = %" PRIu64 ", wb_pages = %u", model_id, wb_size, + plt_ml_dbg("index = %u, wb_size = %" PRIu64 ", wb_pages = %u", layer->index, wb_size, *wb_pages); scratch_size = ocm->size_per_tile - metadata->model.ocm_tmp_range_floor; @@ -526,15 +514,15 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_ *scratch_pages = scratch_size / ocm->page_size + 1; else *scratch_pages = scratch_size / ocm->page_size; - plt_ml_dbg("model_id = %u, scratch_size = %" PRIu64 ", scratch_pages = %u", model_id, + plt_ml_dbg("index = %u, scratch_size = %" PRIu64 ", scratch_pages = %u", layer->index, scratch_size, *scratch_pages); /* Check if the model can be loaded on OCM */ - if ((*wb_pages + *scratch_pages) > cn10k_mldev->ocm.num_pages) { + if ((*wb_pages + *scratch_pages) > ocm->num_pages) { plt_err("Cannot create the model, OCM relocatable = %u", metadata->model.ocm_relocatable); plt_err("wb_pages (%u) + scratch_pages (%u) > %u", *wb_pages, *scratch_pages, - cn10k_mldev->ocm.num_pages); 
+ ocm->num_pages); return -ENOMEM; } @@ -542,28 +530,25 @@ cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_ * prevent the library from allocating the remaining space on the tile to other models. */ if (!metadata->model.ocm_relocatable) - *scratch_pages = PLT_MAX(PLT_U64_CAST(*scratch_pages), - PLT_U64_CAST(cn10k_mldev->ocm.num_pages)); + *scratch_pages = + PLT_MAX(PLT_U64_CAST(*scratch_pages), PLT_U64_CAST(ocm->num_pages)); return 0; } void -cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model) +cn10k_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, + struct cnxk_ml_io_info *io_info, struct cn10k_ml_model_metadata *metadata) { - struct cn10k_ml_model_metadata *metadata; - struct cnxk_ml_dev *cnxk_mldev; struct rte_ml_model_info *info; struct rte_ml_io_info *output; struct rte_ml_io_info *input; - struct cnxk_ml_layer *layer; uint8_t i; - cnxk_mldev = dev->data->dev_private; metadata = &model->glow.metadata; info = PLT_PTR_CAST(model->info); input = PLT_PTR_ADD(info, sizeof(struct rte_ml_model_info)); - output = PLT_PTR_ADD(input, metadata->model.num_input * sizeof(struct rte_ml_io_info)); + output = PLT_PTR_ADD(input, ML_CNXK_MODEL_MAX_INPUT_OUTPUT * sizeof(struct rte_ml_io_info)); /* Set model info */ memset(info, 0, sizeof(struct rte_ml_model_info)); @@ -572,39 +557,37 @@ cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model) metadata->model.version[1], metadata->model.version[2], metadata->model.version[3]); info->model_id = model->model_id; - info->device_id = dev->data->dev_id; + info->device_id = cnxk_mldev->mldev->data->dev_id; info->io_layout = RTE_ML_IO_LAYOUT_PACKED; info->min_batches = model->batch_size; info->max_batches = cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_num_batches / model->batch_size; - info->nb_inputs = metadata->model.num_input; + info->nb_inputs = io_info->nb_inputs; info->input_info = input; - info->nb_outputs = 
metadata->model.num_output; + info->nb_outputs = io_info->nb_outputs; info->output_info = output; info->wb_size = metadata->weights_bias.file_size; /* Set input info */ - layer = &model->layer[0]; for (i = 0; i < info->nb_inputs; i++) { - rte_memcpy(input[i].name, layer->info.input[i].name, MRVL_ML_INPUT_NAME_LEN); - input[i].nb_dims = layer->info.input[i].nb_dims; - input[i].shape = &layer->info.input[i].shape[0]; - input[i].type = layer->info.input[i].qtype; - input[i].nb_elements = layer->info.input[i].nb_elements; - input[i].size = layer->info.input[i].nb_elements * - rte_ml_io_type_size_get(layer->info.input[i].qtype); + rte_memcpy(input[i].name, io_info->input[i].name, MRVL_ML_INPUT_NAME_LEN); + input[i].nb_dims = io_info->input[i].nb_dims; + input[i].shape = &io_info->input[i].shape[0]; + input[i].type = io_info->input[i].qtype; + input[i].nb_elements = io_info->input[i].nb_elements; + input[i].size = io_info->input[i].nb_elements * + rte_ml_io_type_size_get(io_info->input[i].qtype); } /* Set output info */ - layer = &model->layer[0]; for (i = 0; i < info->nb_outputs; i++) { - rte_memcpy(output[i].name, layer->info.output[i].name, MRVL_ML_INPUT_NAME_LEN); - output[i].nb_dims = layer->info.output[i].nb_dims; - output[i].shape = &layer->info.output[i].shape[0]; - output[i].type = layer->info.output[i].qtype; - output[i].nb_elements = layer->info.output[i].nb_elements; - output[i].size = layer->info.output[i].nb_elements * - rte_ml_io_type_size_get(layer->info.output[i].qtype); + rte_memcpy(output[i].name, io_info->output[i].name, MRVL_ML_INPUT_NAME_LEN); + output[i].nb_dims = io_info->output[i].nb_dims; + output[i].shape = &io_info->output[i].shape[0]; + output[i].type = io_info->output[i].qtype; + output[i].nb_elements = io_info->output[i].nb_elements; + output[i].size = io_info->output[i].nb_elements * + rte_ml_io_type_size_get(io_info->output[i].qtype); } } diff --git a/drivers/ml/cnxk/cn10k_ml_model.h b/drivers/ml/cnxk/cn10k_ml_model.h index 
5c32f48c68..45290b84ce 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.h +++ b/drivers/ml/cnxk/cn10k_ml_model.h @@ -9,9 +9,11 @@ #include -#include "cn10k_ml_dev.h" #include "cn10k_ml_ocm.h" +#include "cnxk_ml_io.h" + +struct cnxk_ml_dev; struct cnxk_ml_model; struct cnxk_ml_layer; struct cnxk_ml_req; @@ -366,27 +368,15 @@ struct cn10k_ml_layer_addr { /* Base DMA address for load */ void *base_dma_addr_load; - /* Base DMA address for run */ - void *base_dma_addr_run; - /* Init section load address */ void *init_load_addr; - /* Init section run address */ - void *init_run_addr; - /* Main section load address */ void *main_load_addr; - /* Main section run address */ - void *main_run_addr; - /* Finish section load address */ void *finish_load_addr; - /* Finish section run address */ - void *finish_run_addr; - /* Weights and Bias base address */ void *wb_base_addr; @@ -462,9 +452,12 @@ int cn10k_ml_model_metadata_check(uint8_t *buffer, uint64_t size); void cn10k_ml_model_metadata_update(struct cn10k_ml_model_metadata *metadata); void cn10k_ml_layer_addr_update(struct cnxk_ml_layer *layer, uint8_t *buffer, uint8_t *base_dma_addr); -void cn10k_ml_layer_info_update(struct cnxk_ml_layer *layer); -int cn10k_ml_model_ocm_pages_count(struct cn10k_ml_dev *cn10k_mldev, uint16_t model_id, +void cn10k_ml_layer_io_info_update(struct cnxk_ml_io_info *io_info, + struct cn10k_ml_model_metadata *metadata); +int cn10k_ml_model_ocm_pages_count(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, uint8_t *buffer, uint16_t *wb_pages, uint16_t *scratch_pages); -void cn10k_ml_model_info_set(struct rte_ml_dev *dev, struct cnxk_ml_model *model); +void cn10k_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, + struct cnxk_ml_io_info *io_info, + struct cn10k_ml_model_metadata *metadata); #endif /* _CN10K_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 330cb050cb..3bfc63d9d4 100644 --- 
a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -19,6 +19,9 @@ /* ML model macros */ #define CN10K_ML_MODEL_MEMZONE_NAME "ml_cn10k_model_mz" +/* ML layer macros */ +#define CN10K_ML_LAYER_MEMZONE_NAME "ml_cn10k_layer_mz" + /* Debug print width */ #define STR_LEN 12 #define FIELD_LEN 16 @@ -277,7 +280,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml req->cn10k_req.jd.model_start.extended_args = PLT_U64_CAST( roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.extended_args)); req->cn10k_req.jd.model_start.model_dst_ddr_addr = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, addr->init_run_addr)); + PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, addr->init_load_addr)); req->cn10k_req.jd.model_start.model_init_offset = 0x0; req->cn10k_req.jd.model_start.model_main_offset = metadata->init_model.file_size; req->cn10k_req.jd.model_start.model_finish_offset = @@ -1265,85 +1268,171 @@ cn10k_ml_dev_selftest(struct rte_ml_dev *dev) } int -cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, uint16_t *model_id) +cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name, uint8_t *buffer, + size_t size, uint16_t *index) { struct cn10k_ml_model_metadata *metadata; struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; char str[RTE_MEMZONE_NAMESIZE]; const struct plt_memzone *mz; - size_t model_scratch_size; - size_t model_stats_size; - size_t model_data_size; - size_t model_info_size; + size_t layer_object_size = 0; + size_t layer_scratch_size; + size_t layer_xstats_size; uint8_t *base_dma_addr; uint16_t scratch_pages; + uint16_t layer_id = 0; uint16_t wb_pages; uint64_t mz_size; uint16_t idx; - bool found; int qp_id; int ret; - ret = cn10k_ml_model_metadata_check(params->addr, params->size); + PLT_SET_USED(size); + PLT_SET_USED(layer_name); + + cnxk_mldev = (struct cnxk_ml_dev *)device; + if (cnxk_mldev == NULL) { + 
plt_err("Invalid device = %p", device); + return -EINVAL; + } + + model = cnxk_mldev->mldev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + layer = &model->layer[layer_id]; + + ret = cn10k_ml_model_metadata_check(buffer, size); if (ret != 0) return ret; - cnxk_mldev = dev->data->dev_private; - - /* Find model ID */ - found = false; - for (idx = 0; idx < dev->data->nb_models; idx++) { - if (dev->data->models[idx] == NULL) { - found = true; + /* Get index */ + for (idx = 0; idx < cnxk_mldev->max_nb_layers; idx++) { + if (!cnxk_mldev->index_map[idx].active) { + layer->index = idx; break; } } - if (!found) { - plt_err("No slots available to load new model"); - return -ENOMEM; + if (idx >= cnxk_mldev->max_nb_layers) { + plt_err("No slots available for model layers, model_id = %u, layer_id = %u", + model->model_id, layer_id); + return -1; } + layer->model = model; + /* Get WB and scratch pages, check if model can be loaded. 
*/ - ret = cn10k_ml_model_ocm_pages_count(&cnxk_mldev->cn10k_mldev, idx, params->addr, &wb_pages, - &scratch_pages); + ret = cn10k_ml_model_ocm_pages_count(cnxk_mldev, layer, buffer, &wb_pages, &scratch_pages); if (ret < 0) return ret; - /* Compute memzone size */ - metadata = (struct cn10k_ml_model_metadata *)params->addr; - model_data_size = metadata->init_model.file_size + metadata->main_model.file_size + - metadata->finish_model.file_size + metadata->weights_bias.file_size; - model_scratch_size = PLT_ALIGN_CEIL(metadata->model.ddr_scratch_range_end - + /* Compute layer memzone size */ + metadata = (struct cn10k_ml_model_metadata *)buffer; + layer_object_size = metadata->init_model.file_size + metadata->main_model.file_size + + metadata->finish_model.file_size + metadata->weights_bias.file_size; + layer_object_size = PLT_ALIGN_CEIL(layer_object_size, ML_CN10K_ALIGN_SIZE); + layer_scratch_size = PLT_ALIGN_CEIL(metadata->model.ddr_scratch_range_end - metadata->model.ddr_scratch_range_start + 1, ML_CN10K_ALIGN_SIZE); - model_data_size = PLT_ALIGN_CEIL(model_data_size, ML_CN10K_ALIGN_SIZE); - model_info_size = sizeof(struct rte_ml_model_info) + - metadata->model.num_input * sizeof(struct rte_ml_io_info) + - metadata->model.num_output * sizeof(struct rte_ml_io_info); - model_info_size = PLT_ALIGN_CEIL(model_info_size, ML_CN10K_ALIGN_SIZE); - model_stats_size = (dev->data->nb_queue_pairs + 1) * sizeof(struct cn10k_ml_layer_xstats); - - mz_size = PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), ML_CN10K_ALIGN_SIZE) + - 2 * model_data_size + model_scratch_size + model_info_size + - PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_req), ML_CN10K_ALIGN_SIZE) + - model_stats_size; + layer_xstats_size = (cnxk_mldev->mldev->data->nb_queue_pairs + 1) * + sizeof(struct cn10k_ml_layer_xstats); - /* Allocate memzone for model object and model data */ - snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", CN10K_ML_MODEL_MEMZONE_NAME, idx); + /* Allocate memzone for model data */ + mz_size = 
layer_object_size + layer_scratch_size + + PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_req), ML_CN10K_ALIGN_SIZE) + + layer_xstats_size; + snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u_%u", CN10K_ML_LAYER_MEMZONE_NAME, + model->model_id, layer_id); mz = plt_memzone_reserve_aligned(str, mz_size, 0, ML_CN10K_ALIGN_SIZE); if (!mz) { plt_err("plt_memzone_reserve failed : %s", str); return -ENOMEM; } - model = mz->addr; - model->cnxk_mldev = cnxk_mldev; - model->model_id = idx; - dev->data->models[idx] = model; + /* Copy metadata to internal buffer */ + rte_memcpy(&layer->glow.metadata, buffer, sizeof(struct cn10k_ml_model_metadata)); + cn10k_ml_model_metadata_update(&layer->glow.metadata); + + /* Set layer name */ + rte_memcpy(layer->name, layer->glow.metadata.model.name, MRVL_ML_MODEL_NAME_LEN); + + /* Enable support for batch_size of 256 */ + if (layer->glow.metadata.model.batch_size == 0) + layer->batch_size = 256; + else + layer->batch_size = layer->glow.metadata.model.batch_size; + + /* Set DMA base address */ + base_dma_addr = mz->addr; + cn10k_ml_layer_addr_update(layer, buffer, base_dma_addr); + + /* Set scratch base address */ + layer->glow.addr.scratch_base_addr = PLT_PTR_ADD(base_dma_addr, layer_object_size); + + /* Update internal I/O data structure */ + cn10k_ml_layer_io_info_update(&layer->info, &layer->glow.metadata); + + /* Initialize model_mem_map */ + memset(&layer->glow.ocm_map, 0, sizeof(struct cn10k_ml_ocm_layer_map)); + layer->glow.ocm_map.ocm_reserved = false; + layer->glow.ocm_map.tilemask = 0; + layer->glow.ocm_map.wb_page_start = -1; + layer->glow.ocm_map.wb_pages = wb_pages; + layer->glow.ocm_map.scratch_pages = scratch_pages; + + /* Set slow-path request address and state */ + layer->glow.req = PLT_PTR_ADD(mz->addr, layer_object_size + layer_scratch_size); + + /* Reset burst and sync stats */ + layer->glow.burst_xstats = PLT_PTR_ADD( + layer->glow.req, PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_req), ML_CN10K_ALIGN_SIZE)); + for (qp_id = 0; qp_id < 
cnxk_mldev->mldev->data->nb_queue_pairs + 1; qp_id++) { + layer->glow.burst_xstats[qp_id].hw_latency_tot = 0; + layer->glow.burst_xstats[qp_id].hw_latency_min = UINT64_MAX; + layer->glow.burst_xstats[qp_id].hw_latency_max = 0; + layer->glow.burst_xstats[qp_id].fw_latency_tot = 0; + layer->glow.burst_xstats[qp_id].fw_latency_min = UINT64_MAX; + layer->glow.burst_xstats[qp_id].fw_latency_max = 0; + layer->glow.burst_xstats[qp_id].hw_reset_count = 0; + layer->glow.burst_xstats[qp_id].fw_reset_count = 0; + layer->glow.burst_xstats[qp_id].dequeued_count = 0; + } + + layer->glow.sync_xstats = + PLT_PTR_ADD(layer->glow.burst_xstats, cnxk_mldev->mldev->data->nb_queue_pairs * + sizeof(struct cn10k_ml_layer_xstats)); + + /* Update xstats names */ + cn10k_ml_xstats_model_name_update(cnxk_mldev->mldev, idx); + + layer->state = ML_CNXK_LAYER_STATE_LOADED; + cnxk_mldev->index_map[idx].model_id = model->model_id; + cnxk_mldev->index_map[idx].layer_id = layer_id; + cnxk_mldev->index_map[idx].active = true; + *index = idx; + + return 0; +} + +int +cn10k_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, + struct cnxk_ml_model *model) +{ + struct cnxk_ml_layer *layer; + int ret; + + /* Metadata check */ + ret = cn10k_ml_model_metadata_check(params->addr, params->size); + if (ret != 0) + return ret; + /* Copy metadata to internal buffer */ rte_memcpy(&model->glow.metadata, params->addr, sizeof(struct cn10k_ml_model_metadata)); cn10k_ml_model_metadata_update(&model->glow.metadata); @@ -1362,99 +1451,62 @@ cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, */ model->nb_layers = 1; - /* Copy metadata to internal buffer */ - rte_memcpy(&model->layer[0].glow.metadata, params->addr, - sizeof(struct cn10k_ml_model_metadata)); - cn10k_ml_model_metadata_update(&model->layer[0].glow.metadata); - model->layer[0].model = model; - - /* Set DMA base address */ - base_dma_addr = PLT_PTR_ADD( - mz->addr, PLT_ALIGN_CEIL(sizeof(struct 
cnxk_ml_model), ML_CN10K_ALIGN_SIZE)); - cn10k_ml_layer_addr_update(&model->layer[0], params->addr, base_dma_addr); - model->layer[0].glow.addr.scratch_base_addr = - PLT_PTR_ADD(base_dma_addr, 2 * model_data_size); - - /* Copy data from load to run. run address to be used by MLIP */ - rte_memcpy(model->layer[0].glow.addr.base_dma_addr_run, - model->layer[0].glow.addr.base_dma_addr_load, model_data_size); - - /* Update internal I/O data structure */ - cn10k_ml_layer_info_update(&model->layer[0]); - - /* Initialize model_mem_map */ - memset(&model->layer[0].glow.ocm_map, 0, sizeof(struct cn10k_ml_ocm_layer_map)); - model->layer[0].glow.ocm_map.ocm_reserved = false; - model->layer[0].glow.ocm_map.tilemask = 0; - model->layer[0].glow.ocm_map.wb_page_start = -1; - model->layer[0].glow.ocm_map.wb_pages = wb_pages; - model->layer[0].glow.ocm_map.scratch_pages = scratch_pages; - - /* Set model info */ - model->info = PLT_PTR_ADD(model->layer[0].glow.addr.scratch_base_addr, model_scratch_size); - cn10k_ml_model_info_set(dev, model); - - /* Set slow-path request address and state */ - model->layer[0].glow.req = PLT_PTR_ADD(model->info, model_info_size); - - /* Reset burst and sync stats */ - model->layer[0].glow.burst_xstats = - PLT_PTR_ADD(model->layer[0].glow.req, - PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_req), ML_CN10K_ALIGN_SIZE)); - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs + 1; qp_id++) { - model->layer[0].glow.burst_xstats[qp_id].hw_latency_tot = 0; - model->layer[0].glow.burst_xstats[qp_id].hw_latency_min = UINT64_MAX; - model->layer[0].glow.burst_xstats[qp_id].hw_latency_max = 0; - model->layer[0].glow.burst_xstats[qp_id].fw_latency_tot = 0; - model->layer[0].glow.burst_xstats[qp_id].fw_latency_min = UINT64_MAX; - model->layer[0].glow.burst_xstats[qp_id].fw_latency_max = 0; - model->layer[0].glow.burst_xstats[qp_id].hw_reset_count = 0; - model->layer[0].glow.burst_xstats[qp_id].fw_reset_count = 0; - model->layer[0].glow.burst_xstats[qp_id].dequeued_count = 0; + 
/* Load layer and get the index */ + layer = &model->layer[0]; + ret = cn10k_ml_layer_load(cnxk_mldev, model->model_id, NULL, params->addr, params->size, + &layer->index); + if (ret != 0) { + plt_err("Model layer load failed: model_id = %u, layer_id = %u", model->model_id, + 0); + return ret; } - model->layer[0].glow.sync_xstats = - PLT_PTR_ADD(model->layer[0].glow.burst_xstats, - dev->data->nb_queue_pairs * sizeof(struct cn10k_ml_layer_xstats)); - - plt_spinlock_init(&model->lock); - model->state = ML_CNXK_MODEL_STATE_LOADED; - dev->data->models[idx] = model; - cnxk_mldev->nb_models_loaded++; - - /* Update xstats names */ - cn10k_ml_xstats_model_name_update(dev, idx); - - *model_id = idx; + cn10k_ml_model_info_set(cnxk_mldev, model, &model->layer[0].info, &model->glow.metadata); return 0; } int -cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) +cn10k_ml_layer_unload(void *device, uint16_t model_id, const char *layer_name) { - char str[RTE_MEMZONE_NAMESIZE]; - struct cnxk_ml_model *model; struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; - cnxk_mldev = dev->data->dev_private; - model = dev->data->models[model_id]; + char str[RTE_MEMZONE_NAMESIZE]; + uint16_t layer_id = 0; + int ret; + PLT_SET_USED(layer_name); + + cnxk_mldev = (struct cnxk_ml_dev *)device; + if (cnxk_mldev == NULL) { + plt_err("Invalid device = %p", device); + return -EINVAL; + } + + model = cnxk_mldev->mldev->data->models[model_id]; if (model == NULL) { plt_err("Invalid model_id = %u", model_id); return -EINVAL; } - if (model->state != ML_CNXK_MODEL_STATE_LOADED) { - plt_err("Cannot unload. 
Model in use."); - return -EBUSY; - } + layer = &model->layer[layer_id]; - dev->data->models[model_id] = NULL; - cnxk_mldev->nb_models_unloaded++; + snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u_%u", CN10K_ML_LAYER_MEMZONE_NAME, + model->model_id, layer_id); + ret = plt_memzone_free(plt_memzone_lookup(str)); - snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", CN10K_ML_MODEL_MEMZONE_NAME, model_id); - return plt_memzone_free(plt_memzone_lookup(str)); + layer->state = ML_CNXK_LAYER_STATE_UNKNOWN; + cnxk_mldev->index_map[layer->index].active = false; + + return ret; +} + +int +cn10k_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) +{ + return cn10k_ml_layer_unload(cnxk_mldev, model->model_id, NULL); } int @@ -1752,7 +1804,6 @@ int cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buffer) { struct cnxk_ml_model *model; - size_t size; model = dev->data->models[model_id]; @@ -1766,19 +1817,10 @@ cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *bu else if (model->state != ML_CNXK_MODEL_STATE_LOADED) return -EBUSY; - size = model->layer[0].glow.metadata.init_model.file_size + - model->layer[0].glow.metadata.main_model.file_size + - model->layer[0].glow.metadata.finish_model.file_size + - model->layer[0].glow.metadata.weights_bias.file_size; - /* Update model weights & bias */ rte_memcpy(model->layer[0].glow.addr.wb_load_addr, buffer, model->layer[0].glow.metadata.weights_bias.file_size); - /* Copy data from load to run. 
run address to be used by MLIP */ - rte_memcpy(model->layer[0].glow.addr.base_dma_addr_run, - model->layer[0].glow.addr.base_dma_addr_load, size); - return 0; } diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 2d0a49d5cd..677219dfdf 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -12,6 +12,7 @@ struct cnxk_ml_dev; struct cnxk_ml_qp; +struct cnxk_ml_model; /* Firmware version string length */ #define MLDEV_FIRMWARE_VERSION_LENGTH 32 @@ -311,9 +312,9 @@ int cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mod int32_t model_id, const uint16_t stat_ids[], uint16_t nb_ids); /* Slow-path ops */ -int cn10k_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, - uint16_t *model_id); -int cn10k_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id); +int cn10k_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, + struct cnxk_ml_model *model); +int cn10k_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); int cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id); int cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id); int cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, @@ -339,4 +340,9 @@ __rte_hot int cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op * /* Misc ops */ void cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *qp); +/* Layer ops */ +int cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name, uint8_t *buffer, + size_t size, uint16_t *index); +int cn10k_ml_layer_unload(void *device, uint16_t model_id, const char *layer_name); + #endif /* _CN10K_ML_OPS_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_dev.h b/drivers/ml/cnxk/cnxk_ml_dev.h index 02605fa28f..1590249abd 100644 --- a/drivers/ml/cnxk/cnxk_ml_dev.h +++ b/drivers/ml/cnxk/cnxk_ml_dev.h @@ -31,6 +31,18 @@ enum cnxk_ml_dev_state { 
ML_CNXK_DEV_STATE_CLOSED }; +/* Index to model and layer ID map */ +struct cnxk_ml_index_map { + /* Model ID */ + uint16_t model_id; + + /* Layer ID */ + uint16_t layer_id; + + /* Layer status */ + bool active; +}; + /* Device private data */ struct cnxk_ml_dev { /* RTE device */ @@ -56,6 +68,9 @@ struct cnxk_ml_dev { /* Maximum number of layers */ uint64_t max_nb_layers; + + /* Index map */ + struct cnxk_ml_index_map *index_map; }; #endif /* _CNXK_ML_DEV_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 1767a8a3db..3d9d5f9d78 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -12,6 +12,9 @@ #include "cnxk_ml_model.h" #include "cnxk_ml_ops.h" +/* ML model macros */ +#define CNXK_ML_MODEL_MEMZONE_NAME "ml_cnxk_model_mz" + static void qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) { @@ -139,6 +142,7 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co uint16_t model_id; uint32_t mz_size; uint16_t qp_id; + uint64_t i; int ret; if (dev == NULL) @@ -242,7 +246,7 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co plt_err("Could not stop model %u", model_id); } if (model->state == ML_CNXK_MODEL_STATE_LOADED) { - if (cn10k_ml_model_unload(dev, model_id) != 0) + if (cnxk_ml_model_unload(dev, model_id) != 0) plt_err("Could not unload model %u", model_id); } dev->data->models[model_id] = NULL; @@ -273,6 +277,23 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co cnxk_mldev->max_nb_layers = cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_models; + /* Allocate and initialize index_map */ + if (cnxk_mldev->index_map == NULL) { + cnxk_mldev->index_map = + rte_zmalloc("cnxk_ml_index_map", + sizeof(struct cnxk_ml_index_map) * cnxk_mldev->max_nb_layers, + RTE_CACHE_LINE_SIZE); + if (cnxk_mldev->index_map == NULL) { + plt_err("Failed to get memory for index_map, nb_layers %" PRIu64, + 
cnxk_mldev->max_nb_layers); + ret = -ENOMEM; + goto error; + } + } + + for (i = 0; i < cnxk_mldev->max_nb_layers; i++) + cnxk_mldev->index_map[i].active = false; + cnxk_mldev->nb_models_loaded = 0; cnxk_mldev->nb_models_started = 0; cnxk_mldev->nb_models_stopped = 0; @@ -305,6 +326,9 @@ cnxk_ml_dev_close(struct rte_ml_dev *dev) if (cn10k_ml_dev_close(cnxk_mldev) != 0) plt_err("Failed to close CN10K ML Device"); + if (cnxk_mldev->index_map) + rte_free(cnxk_mldev->index_map); + /* Stop and unload all models */ for (model_id = 0; model_id < dev->data->nb_models; model_id++) { model = dev->data->models[model_id]; @@ -314,7 +338,7 @@ cnxk_ml_dev_close(struct rte_ml_dev *dev) plt_err("Could not stop model %u", model_id); } if (model->state == ML_CNXK_MODEL_STATE_LOADED) { - if (cn10k_ml_model_unload(dev, model_id) != 0) + if (cnxk_ml_model_unload(dev, model_id) != 0) plt_err("Could not unload model %u", model_id); } dev->data->models[model_id] = NULL; @@ -430,6 +454,118 @@ cnxk_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, return 0; } +static int +cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, uint16_t *model_id) +{ + struct rte_ml_dev_info dev_info; + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + + char str[RTE_MEMZONE_NAMESIZE]; + const struct plt_memzone *mz; + uint64_t model_info_size; + uint16_t lcl_model_id; + uint64_t mz_size; + bool found; + int ret; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + /* Find model ID */ + found = false; + for (lcl_model_id = 0; lcl_model_id < dev->data->nb_models; lcl_model_id++) { + if (dev->data->models[lcl_model_id] == NULL) { + found = true; + break; + } + } + + if (!found) { + plt_err("No slots available to load new model"); + return -ENOMEM; + } + + /* Compute memzone size */ + cnxk_ml_dev_info_get(dev, &dev_info); + mz_size = PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), dev_info.align_size); + model_info_size = 
sizeof(struct rte_ml_model_info) + + ML_CNXK_MODEL_MAX_INPUT_OUTPUT * sizeof(struct rte_ml_io_info) + + ML_CNXK_MODEL_MAX_INPUT_OUTPUT * sizeof(struct rte_ml_io_info); + model_info_size = PLT_ALIGN_CEIL(model_info_size, dev_info.align_size); + mz_size += model_info_size; + + /* Allocate memzone for model object */ + snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", CNXK_ML_MODEL_MEMZONE_NAME, lcl_model_id); + mz = plt_memzone_reserve_aligned(str, mz_size, 0, dev_info.align_size); + if (!mz) { + plt_err("Failed to allocate memory for cnxk_ml_model: %s", str); + return -ENOMEM; + } + + model = mz->addr; + model->cnxk_mldev = cnxk_mldev; + model->model_id = lcl_model_id; + model->info = PLT_PTR_ADD( + model, PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), dev_info.align_size)); + dev->data->models[lcl_model_id] = model; + + ret = cn10k_ml_model_load(cnxk_mldev, params, model); + if (ret != 0) + goto error; + + plt_spinlock_init(&model->lock); + model->state = ML_CNXK_MODEL_STATE_LOADED; + cnxk_mldev->nb_models_loaded++; + + *model_id = lcl_model_id; + + return 0; + +error: + rte_memzone_free(mz); + + return ret; +} + +int +cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + + char str[RTE_MEMZONE_NAMESIZE]; + int ret; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + model = dev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + if (model->state != ML_CNXK_MODEL_STATE_LOADED) { + plt_err("Cannot unload. 
Model in use."); + return -EBUSY; + } + + ret = cn10k_ml_model_unload(cnxk_mldev, model); + if (ret != 0) + return ret; + + dev->data->models[model_id] = NULL; + cnxk_mldev->nb_models_unloaded++; + + snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", CNXK_ML_MODEL_MEMZONE_NAME, model_id); + return plt_memzone_free(plt_memzone_lookup(str)); +} + struct rte_ml_dev_ops cnxk_ml_ops = { /* Device control ops */ .dev_info_get = cnxk_ml_dev_info_get, @@ -453,8 +589,8 @@ struct rte_ml_dev_ops cnxk_ml_ops = { .dev_xstats_reset = cn10k_ml_dev_xstats_reset, /* Model ops */ - .model_load = cn10k_ml_model_load, - .model_unload = cn10k_ml_model_unload, + .model_load = cnxk_ml_model_load, + .model_unload = cnxk_ml_model_unload, .model_start = cn10k_ml_model_start, .model_stop = cn10k_ml_model_stop, .model_info_get = cn10k_ml_model_info_get, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index a925c07580..bc14f6e5b9 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -62,4 +62,6 @@ struct cnxk_ml_qp { extern struct rte_ml_dev_ops cnxk_ml_ops; +int cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id); + #endif /* _CNXK_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:02 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131684 X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi To: Srikanth Yalavarthi CC: , , , Subject: [PATCH v2 11/34] ml/cnxk: update model start and stop functions Date: Wed, 20 Sep 2023 00:25:02 -0700 Message-ID: <20230920072528.14185-12-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com>
<20230920072528.14185-1-syalavarthi@marvell.com> Implemented cnxk wrapper functions to start and stop ML models. Wrapper functions would invoke the cn10k model start and stop functions. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ocm.c | 28 ++-- drivers/ml/cnxk/cn10k_ml_ocm.h | 12 +- drivers/ml/cnxk/cn10k_ml_ops.c | 282 ++++++++++++++++++++------------- drivers/ml/cnxk/cn10k_ml_ops.h | 8 +- drivers/ml/cnxk/cnxk_ml_ops.c | 48 +++++- drivers/ml/cnxk/cnxk_ml_ops.h | 1 + 6 files changed, 240 insertions(+), 139 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.c b/drivers/ml/cnxk/cn10k_ml_ocm.c index 5682778e87..2d900dbc78 100644 --- a/drivers/ml/cnxk/cn10k_ml_ocm.c +++ b/drivers/ml/cnxk/cn10k_ml_ocm.c @@ -217,11 +217,10 @@ cn10k_ml_ocm_tilecount(uint64_t tilemask, int *start, int *end) * scratch & WB pages and OCM allocation mode.
*/ int -cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t wb_pages, +cn10k_ml_ocm_tilemask_find(struct cnxk_ml_dev *cnxk_mldev, uint8_t num_tiles, uint16_t wb_pages, uint16_t scratch_pages, uint64_t *tilemask) { struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_ocm *ocm; uint16_t used_scratch_pages_max; @@ -240,7 +239,6 @@ cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t w int max_slot_sz; int page_id; - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; ocm = &cn10k_mldev->ocm; @@ -335,12 +333,10 @@ cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t w } void -cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id, +cn10k_ml_ocm_reserve_pages(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, uint16_t layer_id, uint64_t tilemask, int wb_page_start, uint16_t wb_pages, uint16_t scratch_pages) { - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; struct cnxk_ml_layer *layer; struct cn10k_ml_ocm *ocm; @@ -353,10 +349,8 @@ cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t l int tile_id; int page_id; - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - ocm = &cn10k_mldev->ocm; - model = dev->data->models[model_id]; + ocm = &cnxk_mldev->cn10k_mldev.ocm; + model = cnxk_mldev->mldev->data->models[model_id]; layer = &model->layer[layer_id]; /* Get first set bit, tile_start */ @@ -398,12 +392,10 @@ cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t l } void -cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id) +cn10k_ml_ocm_free_pages(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, uint16_t layer_id) { struct cnxk_ml_model *local_model; struct cnxk_ml_layer *local_layer; - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev 
*cnxk_mldev; struct cnxk_ml_model *model; struct cnxk_ml_layer *layer; struct cn10k_ml_ocm *ocm; @@ -418,10 +410,8 @@ cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t laye uint16_t i; uint16_t j; - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - ocm = &cn10k_mldev->ocm; - model = dev->data->models[model_id]; + ocm = &cnxk_mldev->cn10k_mldev.ocm; + model = cnxk_mldev->mldev->data->models[model_id]; layer = &model->layer[layer_id]; /* Update OCM info for WB memory */ @@ -440,8 +430,8 @@ cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t laye /* Get max scratch pages required, excluding the current model */ scratch_resize_pages = 0; - for (i = 0; i < dev->data->nb_models; i++) { - local_model = dev->data->models[i]; + for (i = 0; i < cnxk_mldev->mldev->data->nb_models; i++) { + local_model = cnxk_mldev->mldev->data->models[i]; if (local_model == NULL) continue; diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.h b/drivers/ml/cnxk/cn10k_ml_ocm.h index 720f8caf76..97b723a56a 100644 --- a/drivers/ml/cnxk/cn10k_ml_ocm.h +++ b/drivers/ml/cnxk/cn10k_ml_ocm.h @@ -8,6 +8,8 @@ #include #include +struct cnxk_ml_dev; + /* Number of OCM tiles. 
*/ #define ML_CN10K_OCM_NUMTILES 0x8 @@ -75,12 +77,12 @@ struct cn10k_ml_ocm { }; int cn10k_ml_ocm_tilecount(uint64_t tilemask, int *start, int *end); -int cn10k_ml_ocm_tilemask_find(struct rte_ml_dev *dev, uint8_t num_tiles, uint16_t wb_pages, +int cn10k_ml_ocm_tilemask_find(struct cnxk_ml_dev *cnxk_mldev, uint8_t num_tiles, uint16_t wb_pages, uint16_t scratch_pages, uint64_t *tilemask); -void cn10k_ml_ocm_reserve_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id, - uint64_t tilemask, int wb_page_start, uint16_t wb_pages, - uint16_t scratch_pages); -void cn10k_ml_ocm_free_pages(struct rte_ml_dev *dev, uint16_t model_id, uint16_t layer_id); +void cn10k_ml_ocm_reserve_pages(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, + uint16_t layer_id, uint64_t tilemask, int wb_page_start, + uint16_t wb_pages, uint16_t scratch_pages); +void cn10k_ml_ocm_free_pages(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, uint16_t layer_id); void cn10k_ml_ocm_print(struct rte_ml_dev *dev, FILE *fp); #endif /* _CN10K_ML_OCM_H_ */ diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 3bfc63d9d4..e5b9837ed7 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -252,26 +252,28 @@ cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp) } static void -cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml_model *model, +cn10k_ml_prep_sp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, struct cnxk_ml_req *req, enum cn10k_ml_job_type job_type) { struct cn10k_ml_model_metadata *metadata; struct cn10k_ml_layer_addr *addr; + struct cn10k_ml_dev *cn10k_mldev; - metadata = &model->glow.metadata; - addr = &model->layer[0].glow.addr; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + metadata = &layer->glow.metadata; + addr = &layer->glow.addr; memset(&req->cn10k_req.jd, 0, sizeof(struct cn10k_ml_jd)); req->cn10k_req.jd.hdr.jce.w0.u64 = 0; 
req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(&req->cn10k_req.status); - req->cn10k_req.jd.hdr.model_id = model->model_id; + req->cn10k_req.jd.hdr.model_id = layer->index; req->cn10k_req.jd.hdr.job_type = job_type; req->cn10k_req.jd.hdr.fp_flags = 0x0; req->cn10k_req.jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.result); if (job_type == ML_CN10K_JOB_TYPE_MODEL_START) { - if (!model->glow.metadata.model.ocm_relocatable) + if (!layer->glow.metadata.model.ocm_relocatable) req->cn10k_req.jd.hdr.sp_flags = ML_CN10K_SP_FLAGS_OCM_NONRELOCATABLE; else req->cn10k_req.jd.hdr.sp_flags = 0x0; @@ -295,7 +297,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml req->cn10k_req.jd.model_start.num_gather_entries = 0; req->cn10k_req.jd.model_start.num_scatter_entries = 0; req->cn10k_req.jd.model_start.tilemask = 0; /* Updated after reserving pages */ - req->cn10k_req.jd.model_start.batch_size = model->batch_size; + req->cn10k_req.jd.model_start.batch_size = layer->batch_size; req->cn10k_req.jd.model_start.ocm_wb_base_address = 0; /* Updated after reserving pages */ req->cn10k_req.jd.model_start.ocm_wb_range_start = @@ -327,9 +329,13 @@ cn10k_ml_prep_sp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml } static __rte_always_inline void -cn10k_ml_prep_fp_job_descriptor(struct cn10k_ml_dev *cn10k_mldev, struct cnxk_ml_req *req, +cn10k_ml_prep_fp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_req *req, struct rte_ml_op *op) { + struct cn10k_ml_dev *cn10k_mldev; + + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + req->cn10k_req.jd.hdr.jce.w0.u64 = 0; req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(req->status); req->cn10k_req.jd.hdr.model_id = op->model_id; @@ -718,10 +724,8 @@ cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint } static int -cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id) +cn10k_ml_cache_model_data(struct cnxk_ml_dev *cnxk_mldev, struct 
cnxk_ml_layer *layer) { - struct rte_ml_model_info *info; - struct cnxk_ml_model *model; struct rte_ml_buff_seg seg[2]; struct rte_ml_buff_seg *inp; struct rte_ml_buff_seg *out; @@ -734,22 +738,20 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id) int ret = 0; uint32_t i; - model = dev->data->models[model_id]; - info = (struct rte_ml_model_info *)model->info; inp = &seg[0]; out = &seg[1]; /* Create input and output buffers. */ - for (i = 0; i < info->nb_inputs; i++) - isize += info->input_info[i].size; + for (i = 0; i < layer->info.nb_inputs; i++) + isize += layer->info.input[i].sz_q; - for (i = 0; i < info->nb_outputs; i++) - osize += info->output_info[i].size; + for (i = 0; i < layer->info.nb_outputs; i++) + osize += layer->info.output[i].sz_q; - isize = model->batch_size * isize; - osize = model->batch_size * osize; + isize = layer->batch_size * isize; + osize = layer->batch_size * osize; - snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", "ml_dummy_io", model_id); + snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", "ml_dummy_io", layer->index); mz = plt_memzone_reserve_aligned(str, isize + osize, 0, ML_CN10K_ALIGN_SIZE); if (mz == NULL) return -ENOMEM; @@ -765,15 +767,15 @@ cn10k_ml_cache_model_data(struct rte_ml_dev *dev, uint16_t model_id) seg[1].length = osize; seg[1].next = NULL; - op.model_id = model_id; - op.nb_batches = model->batch_size; + op.model_id = layer->index; + op.nb_batches = layer->batch_size; op.mempool = NULL; op.input = &inp; op.output = &out; - memset(model->layer[0].glow.req, 0, sizeof(struct cnxk_ml_req)); - ret = cn10k_ml_inference_sync(dev, &op); + memset(layer->glow.req, 0, sizeof(struct cnxk_ml_req)); + ret = cn10k_ml_inference_sync(cnxk_mldev, &op); plt_memzone_free(mz); return ret; @@ -1510,14 +1512,16 @@ cn10k_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *mode } int -cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) +cn10k_ml_layer_start(void *device, uint16_t model_id, const char 
*layer_name) { struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; struct cn10k_ml_ocm *ocm; struct cnxk_ml_req *req; + uint16_t layer_id = 0; bool job_enqueued; bool job_dequeued; uint8_t num_tiles; @@ -1528,85 +1532,89 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) bool locked; int ret = 0; - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - ocm = &cn10k_mldev->ocm; - model = dev->data->models[model_id]; + PLT_SET_USED(layer_name); + cnxk_mldev = (struct cnxk_ml_dev *)device; + if (cnxk_mldev == NULL) { + plt_err("Invalid device = %p", device); + return -EINVAL; + } + + model = cnxk_mldev->mldev->data->models[model_id]; if (model == NULL) { plt_err("Invalid model_id = %u", model_id); return -EINVAL; } + layer = &model->layer[layer_id]; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + ocm = &cn10k_mldev->ocm; + /* Prepare JD */ - req = model->layer[0].glow.req; - cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_START); + req = layer->glow.req; + cn10k_ml_prep_sp_job_descriptor(cnxk_mldev, layer, req, ML_CN10K_JOB_TYPE_MODEL_START); req->cn10k_req.result.error_code = 0x0; req->cn10k_req.result.user_ptr = NULL; plt_write64(ML_CNXK_POLL_JOB_START, &req->cn10k_req.status); plt_wmb(); - num_tiles = model->layer[0].glow.metadata.model.tile_end - - model->layer[0].glow.metadata.model.tile_start + 1; + num_tiles = layer->glow.metadata.model.tile_end - layer->glow.metadata.model.tile_start + 1; locked = false; while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { - if (model->state == ML_CNXK_MODEL_STATE_STARTED) { - plt_ml_dbg("Model already started, model = 0x%016lx", - PLT_U64_CAST(model)); + if (layer->state == ML_CNXK_LAYER_STATE_STARTED) { + plt_ml_dbg("Layer already started, model_id = %u, layer_id = %u", + model->model_id, layer_id); plt_spinlock_unlock(&model->lock); return 1; } - if (model->state == 
ML_CNXK_MODEL_STATE_JOB_ACTIVE) { - plt_err("A slow-path job is active for the model = 0x%016lx", - PLT_U64_CAST(model)); + if (layer->state == ML_CNXK_LAYER_STATE_JOB_ACTIVE) { + plt_err("A slow-path job is active for the model_id = %u", + model->model_id); plt_spinlock_unlock(&model->lock); return -EBUSY; } - model->state = ML_CNXK_MODEL_STATE_JOB_ACTIVE; + layer->state = ML_CNXK_LAYER_STATE_JOB_ACTIVE; plt_spinlock_unlock(&model->lock); locked = true; } } - while (!model->layer[0].glow.ocm_map.ocm_reserved) { + while (!layer->glow.ocm_map.ocm_reserved) { if (plt_spinlock_trylock(&ocm->lock) != 0) { wb_page_start = cn10k_ml_ocm_tilemask_find( - dev, num_tiles, model->layer[0].glow.ocm_map.wb_pages, - model->layer[0].glow.ocm_map.scratch_pages, &tilemask); + cnxk_mldev, num_tiles, layer->glow.ocm_map.wb_pages, + layer->glow.ocm_map.scratch_pages, &tilemask); if (wb_page_start == -1) { plt_err("Free pages not available on OCM tiles"); - plt_err("Failed to start model = 0x%016lx, name = %s", - PLT_U64_CAST(model), - model->layer[0].glow.metadata.model.name); - + plt_err("Failed to start layer, model_id = %u, layer_id = %u", + model->model_id, layer_id); plt_spinlock_unlock(&ocm->lock); return -ENOMEM; } - model->layer[0].glow.ocm_map.tilemask = tilemask; - model->layer[0].glow.ocm_map.wb_page_start = wb_page_start; + layer->glow.ocm_map.tilemask = tilemask; + layer->glow.ocm_map.wb_page_start = wb_page_start; - cn10k_ml_ocm_reserve_pages(dev, model->model_id, 0, - model->layer[0].glow.ocm_map.tilemask, - model->layer[0].glow.ocm_map.wb_page_start, - model->layer[0].glow.ocm_map.wb_pages, - model->layer[0].glow.ocm_map.scratch_pages); - model->layer[0].glow.ocm_map.ocm_reserved = true; + cn10k_ml_ocm_reserve_pages( + cnxk_mldev, model->model_id, layer_id, layer->glow.ocm_map.tilemask, + layer->glow.ocm_map.wb_page_start, layer->glow.ocm_map.wb_pages, + layer->glow.ocm_map.scratch_pages); + layer->glow.ocm_map.ocm_reserved = true; plt_spinlock_unlock(&ocm->lock); } } 
/* Update JD */ - cn10k_ml_ocm_tilecount(model->layer[0].glow.ocm_map.tilemask, &tile_start, &tile_end); + cn10k_ml_ocm_tilecount(layer->glow.ocm_map.tilemask, &tile_start, &tile_end); req->cn10k_req.jd.model_start.tilemask = GENMASK_ULL(tile_end, tile_start); req->cn10k_req.jd.model_start.ocm_wb_base_address = - model->layer[0].glow.ocm_map.wb_page_start * ocm->page_size; + layer->glow.ocm_map.wb_page_start * ocm->page_size; job_enqueued = false; job_dequeued = false; @@ -1640,66 +1648,94 @@ cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) locked = false; while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { - if (ret == 0) { - model->state = ML_CNXK_MODEL_STATE_STARTED; - cnxk_mldev->nb_models_started++; - } else { - model->state = ML_CNXK_MODEL_STATE_UNKNOWN; - } + if (ret == 0) + layer->state = ML_CNXK_LAYER_STATE_STARTED; + else + layer->state = ML_CNXK_LAYER_STATE_UNKNOWN; plt_spinlock_unlock(&model->lock); locked = true; } } - if (model->state == ML_CNXK_MODEL_STATE_UNKNOWN) { - while (model->layer[0].glow.ocm_map.ocm_reserved) { + if (layer->state == ML_CNXK_LAYER_STATE_UNKNOWN) { + while (layer->glow.ocm_map.ocm_reserved) { if (plt_spinlock_trylock(&ocm->lock) != 0) { - cn10k_ml_ocm_free_pages(dev, model->model_id, 0); - model->layer[0].glow.ocm_map.ocm_reserved = false; - model->layer[0].glow.ocm_map.tilemask = 0x0; + cn10k_ml_ocm_free_pages(cnxk_mldev, model->model_id, layer_id); + layer->glow.ocm_map.ocm_reserved = false; + layer->glow.ocm_map.tilemask = 0x0; plt_spinlock_unlock(&ocm->lock); } } } - if (ret < 0) { /* Call unload to update model and FW state, ignore error */ - rte_ml_model_stop(dev->data->dev_id, model_id); + if (ret < 0) { + cn10k_ml_layer_stop(device, model_id, layer_name); } else { - if (cn10k_mldev->cache_model_data && roc_model_is_cn10ka()) - ret = cn10k_ml_cache_model_data(dev, model_id); + if (cn10k_mldev->cache_model_data) + ret = cn10k_ml_cache_model_data(cnxk_mldev, layer); } return ret; } int 
-cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) +cn10k_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) +{ + struct cnxk_ml_layer *layer; + int ret; + + layer = &model->layer[0]; + ret = cn10k_ml_layer_start(cnxk_mldev, model->model_id, layer->name); + if (ret != 0) { + plt_err("CN10K Model start failed, model_id = %u, error = %d", model->model_id, + ret); + return ret; + } + + cnxk_mldev->nb_models_started++; + model->state = ML_CNXK_MODEL_STATE_STARTED; + + return 0; +} + +int +cn10k_ml_layer_stop(void *device, uint16_t model_id, const char *layer_name) { struct cn10k_ml_dev *cn10k_mldev; struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; struct cn10k_ml_ocm *ocm; struct cnxk_ml_req *req; + uint16_t layer_id = 0; bool job_enqueued; bool job_dequeued; bool locked; int ret = 0; - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - ocm = &cn10k_mldev->ocm; - model = dev->data->models[model_id]; + PLT_SET_USED(layer_name); + + cnxk_mldev = (struct cnxk_ml_dev *)device; + if (cnxk_mldev == NULL) { + plt_err("Invalid device = %p", device); + return -EINVAL; + } + model = cnxk_mldev->mldev->data->models[model_id]; if (model == NULL) { plt_err("Invalid model_id = %u", model_id); return -EINVAL; } + layer = &model->layer[layer_id]; + cn10k_mldev = &cnxk_mldev->cn10k_mldev; + ocm = &cn10k_mldev->ocm; + /* Prepare JD */ - req = model->layer[0].glow.req; - cn10k_ml_prep_sp_job_descriptor(cn10k_mldev, model, req, ML_CN10K_JOB_TYPE_MODEL_STOP); + req = layer->glow.req; + cn10k_ml_prep_sp_job_descriptor(cnxk_mldev, layer, req, ML_CN10K_JOB_TYPE_MODEL_STOP); req->cn10k_req.result.error_code = 0x0; req->cn10k_req.result.user_ptr = NULL; @@ -1709,31 +1745,31 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) locked = false; while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { - if (model->state == ML_CNXK_MODEL_STATE_LOADED) { - 
plt_ml_dbg("Model not started, model = 0x%016lx", - PLT_U64_CAST(model)); + if (layer->state == ML_CNXK_LAYER_STATE_LOADED) { + plt_ml_dbg("Layer not started, model_id = %u, layer_id = %u", + model->model_id, layer_id); plt_spinlock_unlock(&model->lock); return 1; } - if (model->state == ML_CNXK_MODEL_STATE_JOB_ACTIVE) { - plt_err("A slow-path job is active for the model = 0x%016lx", - PLT_U64_CAST(model)); + if (layer->state == ML_CNXK_LAYER_STATE_JOB_ACTIVE) { + plt_err("A slow-path job is active for the layer, model_id = %u, layer_id = %u", + model->model_id, layer_id); plt_spinlock_unlock(&model->lock); return -EBUSY; } - model->state = ML_CNXK_MODEL_STATE_JOB_ACTIVE; + layer->state = ML_CNXK_LAYER_STATE_JOB_ACTIVE; plt_spinlock_unlock(&model->lock); locked = true; } } - while (model->layer[0].glow.ocm_map.ocm_reserved) { + while (layer->glow.ocm_map.ocm_reserved) { if (plt_spinlock_trylock(&ocm->lock) != 0) { - cn10k_ml_ocm_free_pages(dev, model->model_id, 0); - model->layer[0].glow.ocm_map.ocm_reserved = false; - model->layer[0].glow.ocm_map.tilemask = 0x0; + cn10k_ml_ocm_free_pages(cnxk_mldev, model->model_id, layer_id); + layer->glow.ocm_map.ocm_reserved = false; + layer->glow.ocm_map.tilemask = 0x0; plt_spinlock_unlock(&ocm->lock); } } @@ -1770,8 +1806,11 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) locked = false; while (!locked) { if (plt_spinlock_trylock(&model->lock) != 0) { - cnxk_mldev->nb_models_stopped++; - model->state = ML_CNXK_MODEL_STATE_LOADED; + if (ret == 0) + layer->state = ML_CNXK_LAYER_STATE_LOADED; + else + layer->state = ML_CNXK_LAYER_STATE_UNKNOWN; + plt_spinlock_unlock(&model->lock); locked = true; } @@ -1780,6 +1819,25 @@ cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) return ret; } +int +cn10k_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) +{ + struct cnxk_ml_layer *layer; + int ret; + + layer = &model->layer[0]; + ret = cn10k_ml_layer_stop(cnxk_mldev, model->model_id, 
layer->name); + if (ret != 0) { + plt_err("CN10K Model stop failed, model_id = %u, error = %d", model->model_id, ret); + return ret; + } + + cnxk_mldev->nb_models_stopped++; + model->state = ML_CNXK_MODEL_STATE_LOADED; + + return 0; +} + int cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_model_info *model_info) @@ -2007,30 +2065,35 @@ queue_free_count(uint64_t head, uint64_t tail, uint64_t nb_desc) } static __rte_always_inline void -cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cnxk_ml_req *req) +cn10k_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, struct cnxk_ml_req *req) { union cn10k_ml_error_code *error_code; struct cn10k_ml_layer_xstats *xstats; struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_result *result; struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; struct cnxk_ml_qp *qp; struct rte_ml_op *op; uint64_t hw_latency; uint64_t fw_latency; + uint16_t model_id; + uint16_t layer_id; result = &req->cn10k_req.result; op = req->op; if (likely(result->error_code == 0)) { - model = dev->data->models[op->model_id]; + model_id = cnxk_mldev->index_map[op->model_id].model_id; + layer_id = cnxk_mldev->index_map[op->model_id].layer_id; + model = cnxk_mldev->mldev->data->models[model_id]; + layer = &model->layer[layer_id]; if (likely(qp_id >= 0)) { - qp = dev->data->queue_pairs[qp_id]; + qp = cnxk_mldev->mldev->data->queue_pairs[qp_id]; qp->stats.dequeued_count++; - xstats = &model->layer[0].glow.burst_xstats[qp_id]; + xstats = &layer->glow.burst_xstats[qp_id]; } else { - xstats = model->layer[0].glow.sync_xstats; + xstats = layer->glow.sync_xstats; } if (unlikely(xstats->dequeued_count == xstats->hw_reset_count)) { @@ -2058,14 +2121,13 @@ cn10k_ml_result_update(struct rte_ml_dev *dev, int qp_id, struct cnxk_ml_req *re op->status = RTE_ML_OP_STATUS_SUCCESS; } else { if (likely(qp_id >= 0)) { - qp = dev->data->queue_pairs[qp_id]; + qp = 
cnxk_mldev->mldev->data->queue_pairs[qp_id]; qp->stats.dequeue_err_count++; } /* Handle driver error */ error_code = (union cn10k_ml_error_code *)&result->error_code; if (error_code->s.etype == ML_ETYPE_DRIVER) { - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; /* Check for exception */ @@ -2120,7 +2182,7 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op req = &queue->reqs[head]; cn10k_mldev->set_poll_addr(req); - cn10k_ml_prep_fp_job_descriptor(cn10k_mldev, req, op); + cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, op); memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; @@ -2187,7 +2249,7 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op } } - cn10k_ml_result_update(dev, qp_id, req); + cn10k_ml_result_update(cnxk_mldev, qp_id, req); ops[count] = req->op; queue_index_advance(&tail, qp->nb_desc); @@ -2236,23 +2298,27 @@ cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op, struct rte_m } __rte_hot int -cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) +cn10k_ml_inference_sync(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op) { union cn10k_ml_error_code *error_code; struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; struct cnxk_ml_req *req; + uint16_t model_id; + uint16_t layer_id; bool timeout; int ret = 0; - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; - model = dev->data->models[op->model_id]; - req = model->layer[0].glow.req; + model_id = cnxk_mldev->index_map[op->model_id].model_id; + layer_id = cnxk_mldev->index_map[op->model_id].layer_id; + model = cnxk_mldev->mldev->data->models[model_id]; + layer = &model->layer[layer_id]; + req = layer->glow.req; cn10k_ml_set_poll_addr(req); - cn10k_ml_prep_fp_job_descriptor(cn10k_mldev, req, 
op); + cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, op); memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; @@ -2288,7 +2354,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op) if (timeout) ret = -ETIME; else - cn10k_ml_result_update(dev, -1, req); + cn10k_ml_result_update(cnxk_mldev, -1, req); error_enqueue: return ret; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 677219dfdf..a222a43d55 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -315,8 +315,8 @@ int cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mod int cn10k_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, struct cnxk_ml_model *model); int cn10k_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); -int cn10k_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id); -int cn10k_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id); +int cn10k_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); +int cn10k_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); int cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_model_info *model_info); int cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buffer); @@ -335,7 +335,7 @@ __rte_hot uint16_t cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id struct rte_ml_op **ops, uint16_t nb_ops); __rte_hot int cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op, struct rte_ml_op_error *error); -__rte_hot int cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op); +__rte_hot int cn10k_ml_inference_sync(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op); /* Misc ops */ void cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *qp); @@ 
-344,5 +344,7 @@ void cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *q int cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name, uint8_t *buffer, size_t size, uint16_t *index); int cn10k_ml_layer_unload(void *device, uint16_t model_id, const char *layer_name); +int cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name); +int cn10k_ml_layer_stop(void *device, uint16_t model_id, const char *layer_name); #endif /* _CN10K_ML_OPS_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 3d9d5f9d78..915309168d 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -242,7 +242,7 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co model = dev->data->models[model_id]; if (model != NULL) { if (model->state == ML_CNXK_MODEL_STATE_STARTED) { - if (cn10k_ml_model_stop(dev, model_id) != 0) + if (cnxk_ml_model_stop(dev, model_id) != 0) plt_err("Could not stop model %u", model_id); } if (model->state == ML_CNXK_MODEL_STATE_LOADED) { @@ -334,7 +334,7 @@ cnxk_ml_dev_close(struct rte_ml_dev *dev) model = dev->data->models[model_id]; if (model != NULL) { if (model->state == ML_CNXK_MODEL_STATE_STARTED) { - if (cn10k_ml_model_stop(dev, model_id) != 0) + if (cnxk_ml_model_stop(dev, model_id) != 0) plt_err("Could not stop model %u", model_id); } if (model->state == ML_CNXK_MODEL_STATE_LOADED) { @@ -566,6 +566,46 @@ cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) return plt_memzone_free(plt_memzone_lookup(str)); } +static int +cnxk_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + model = dev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + return cn10k_ml_model_start(cnxk_mldev, model); 
+} + +int +cnxk_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + model = dev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + return cn10k_ml_model_stop(cnxk_mldev, model); +} + struct rte_ml_dev_ops cnxk_ml_ops = { /* Device control ops */ .dev_info_get = cnxk_ml_dev_info_get, @@ -591,8 +631,8 @@ struct rte_ml_dev_ops cnxk_ml_ops = { /* Model ops */ .model_load = cnxk_ml_model_load, .model_unload = cnxk_ml_model_unload, - .model_start = cn10k_ml_model_start, - .model_stop = cn10k_ml_model_stop, + .model_start = cnxk_ml_model_start, + .model_stop = cnxk_ml_model_stop, .model_info_get = cn10k_ml_model_info_get, .model_params_update = cn10k_ml_model_params_update, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index bc14f6e5b9..d27ca0d0cb 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -63,5 +63,6 @@ struct cnxk_ml_qp { extern struct rte_ml_dev_ops cnxk_ml_ops; int cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id); +int cnxk_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id); #endif /* _CNXK_ML_OPS_H_ */
From patchwork Wed Sep 20 07:25:03 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131687 X-Patchwork-Delegate: jerinj@marvell.com From: Srikanth Yalavarthi Subject: [PATCH v2 12/34] ml/cnxk: update model utility functions Date: Wed, 20 Sep 2023 00:25:03 -0700 Message-ID: <20230920072528.14185-13-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
Added cnxk wrapper function to update model params and fetch model info. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 38 ++++++--------------------- drivers/ml/cnxk/cn10k_ml_ops.h | 5 ++-- drivers/ml/cnxk/cnxk_ml_ops.c | 48 ++++++++++++++++++++++++++++++++-- 3 files changed, 56 insertions(+), 35 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index e5b9837ed7..0eebefee5f 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -1839,45 +1839,23 @@ cn10k_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) } int -cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, - struct rte_ml_model_info *model_info) +cn10k_ml_model_params_update(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, + void *buffer) { - struct cnxk_ml_model *model; - - model = dev->data->models[model_id]; - - if (model == NULL) { - plt_err("Invalid model_id = %u", model_id); - return -EINVAL; - } - - rte_memcpy(model_info, model->info, sizeof(struct rte_ml_model_info)); - model_info->input_info = ((struct rte_ml_model_info *)model->info)->input_info; - model_info->output_info = ((struct rte_ml_model_info *)model->info)->output_info; - - return 0; -} - -int -cn10k_ml_model_params_update(struct rte_ml_dev
*dev, uint16_t model_id, void *buffer) -{ - struct cnxk_ml_model *model; - - model = dev->data->models[model_id]; + struct cnxk_ml_layer *layer; - if (model == NULL) { - plt_err("Invalid model_id = %u", model_id); - return -EINVAL; - } + RTE_SET_USED(cnxk_mldev); if (model->state == ML_CNXK_MODEL_STATE_UNKNOWN) return -1; else if (model->state != ML_CNXK_MODEL_STATE_LOADED) return -EBUSY; + layer = &model->layer[0]; + /* Update model weights & bias */ - rte_memcpy(model->layer[0].glow.addr.wb_load_addr, buffer, - model->layer[0].glow.metadata.weights_bias.file_size); + rte_memcpy(layer->glow.addr.wb_load_addr, buffer, + layer->glow.metadata.weights_bias.file_size); return 0; } diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index a222a43d55..ef12069f0d 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -317,9 +317,8 @@ int cn10k_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_para int cn10k_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); int cn10k_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); int cn10k_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); -int cn10k_ml_model_info_get(struct rte_ml_dev *dev, uint16_t model_id, - struct rte_ml_model_info *model_info); -int cn10k_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buffer); +int cn10k_ml_model_params_update(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, + void *buffer); /* I/O ops */ int cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 915309168d..5ad0ea8c3c 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -606,6 +606,50 @@ cnxk_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) return cn10k_ml_model_stop(cnxk_mldev, model); } +static int +cnxk_ml_model_info_get(struct 
rte_ml_dev *dev, uint16_t model_id, + struct rte_ml_model_info *model_info) +{ + struct rte_ml_model_info *info; + struct cnxk_ml_model *model; + + if ((dev == NULL) || (model_info == NULL)) + return -EINVAL; + + model = dev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + info = (struct rte_ml_model_info *)model->info; + rte_memcpy(model_info, info, sizeof(struct rte_ml_model_info)); + model_info->input_info = info->input_info; + model_info->output_info = info->output_info; + + return 0; +} + +static int +cnxk_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buffer) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + + if ((dev == NULL) || (buffer == NULL)) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + model = dev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + return cn10k_ml_model_params_update(cnxk_mldev, model, buffer); +} + struct rte_ml_dev_ops cnxk_ml_ops = { /* Device control ops */ .dev_info_get = cnxk_ml_dev_info_get, @@ -633,8 +677,8 @@ struct rte_ml_dev_ops cnxk_ml_ops = { .model_unload = cnxk_ml_model_unload, .model_start = cnxk_ml_model_start, .model_stop = cnxk_ml_model_stop, - .model_info_get = cn10k_ml_model_info_get, - .model_params_update = cn10k_ml_model_params_update, + .model_info_get = cnxk_ml_model_info_get, + .model_params_update = cnxk_ml_model_params_update, /* I/O ops */ .io_quantize = cn10k_ml_io_quantize,
From patchwork Wed Sep 20 07:25:04 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131689 X-Patchwork-Delegate: jerinj@marvell.com From: Srikanth Yalavarthi Subject: [PATCH v2 13/34] ml/cnxk: update data quantization functions Date: Wed, 20 Sep 2023 00:25:04 -0700 Message-ID: <20230920072528.14185-14-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
Added cnxk wrapper functions to quantize input data and dequantize output data. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 164 --------------------------------- drivers/ml/cnxk/cn10k_ml_ops.h | 7 -- drivers/ml/cnxk/cnxk_ml_io.c | 95 +++++++++++++++++++ drivers/ml/cnxk/cnxk_ml_io.h | 3 + drivers/ml/cnxk/cnxk_ml_ops.c | 78 +++++++++++++++- drivers/ml/cnxk/meson.build | 1 + 6 files changed, 175 insertions(+), 173 deletions(-) create mode 100644 drivers/ml/cnxk/cnxk_ml_io.c diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 0eebefee5f..1e6aee818c 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -1860,170 +1860,6 @@ cn10k_ml_model_params_update(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_mode return 0; } -int -cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **dbuffer, - struct rte_ml_buff_seg **qbuffer) -{ - struct cnxk_ml_model *model; - uint8_t model_input_type; - uint8_t *lcl_dbuffer; - uint8_t *lcl_qbuffer; - uint8_t input_type; - float qscale; - uint32_t i; -
uint32_t j; - int ret; - - model = dev->data->models[model_id]; - - if (model == NULL) { - plt_err("Invalid model_id = %u", model_id); - return -EINVAL; - } - - lcl_dbuffer = dbuffer[0]->addr; - lcl_qbuffer = qbuffer[0]->addr; - - for (i = 0; i < model->layer[0].glow.metadata.model.num_input; i++) { - if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - input_type = model->layer[0].glow.metadata.input1[i].input_type; - model_input_type = model->layer[0].glow.metadata.input1[i].model_input_type; - qscale = model->layer[0].glow.metadata.input1[i].qscale; - } else { - j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - input_type = model->layer[0].glow.metadata.input2[j].input_type; - model_input_type = model->layer[0].glow.metadata.input2[j].model_input_type; - qscale = model->layer[0].glow.metadata.input2[j].qscale; - } - - if (input_type == model_input_type) { - rte_memcpy(lcl_qbuffer, lcl_dbuffer, model->layer[0].info.input[i].sz_d); - } else { - switch (model->layer[0].glow.metadata.input1[i].model_input_type) { - case RTE_ML_IO_TYPE_INT8: - ret = rte_ml_io_float32_to_int8( - qscale, model->layer[0].info.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); - break; - case RTE_ML_IO_TYPE_UINT8: - ret = rte_ml_io_float32_to_uint8( - qscale, model->layer[0].info.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); - break; - case RTE_ML_IO_TYPE_INT16: - ret = rte_ml_io_float32_to_int16( - qscale, model->layer[0].info.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); - break; - case RTE_ML_IO_TYPE_UINT16: - ret = rte_ml_io_float32_to_uint16( - qscale, model->layer[0].info.input[i].nb_elements, - lcl_dbuffer, lcl_qbuffer); - break; - case RTE_ML_IO_TYPE_FP16: - ret = rte_ml_io_float32_to_float16( - model->layer[0].info.input[i].nb_elements, lcl_dbuffer, - lcl_qbuffer); - break; - default: - plt_err("Unsupported model_input_type[%u] : %u", i, - model->layer[0].glow.metadata.input1[i].model_input_type); - ret = -ENOTSUP; - } - if (ret < 0) - return ret; - } - - lcl_dbuffer += 
model->layer[0].info.input[i].sz_d; - lcl_qbuffer += model->layer[0].info.input[i].sz_q; - } - - return 0; -} - -int -cn10k_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **qbuffer, - struct rte_ml_buff_seg **dbuffer) -{ - struct cnxk_ml_model *model; - uint8_t model_output_type; - uint8_t *lcl_qbuffer; - uint8_t *lcl_dbuffer; - uint8_t output_type; - float dscale; - uint32_t i; - uint32_t j; - int ret; - - model = dev->data->models[model_id]; - - if (model == NULL) { - plt_err("Invalid model_id = %u", model_id); - return -EINVAL; - } - - lcl_dbuffer = dbuffer[0]->addr; - lcl_qbuffer = qbuffer[0]->addr; - - for (i = 0; i < model->layer[0].glow.metadata.model.num_output; i++) { - if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - output_type = model->layer[0].glow.metadata.output1[i].output_type; - model_output_type = - model->layer[0].glow.metadata.output1[i].model_output_type; - dscale = model->layer[0].glow.metadata.output1[i].dscale; - } else { - j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - output_type = model->layer[0].glow.metadata.output2[j].output_type; - model_output_type = - model->layer[0].glow.metadata.output2[j].model_output_type; - dscale = model->layer[0].glow.metadata.output2[j].dscale; - } - - if (output_type == model_output_type) { - rte_memcpy(lcl_dbuffer, lcl_qbuffer, model->layer[0].info.output[i].sz_q); - } else { - switch (model->layer[0].glow.metadata.output1[i].model_output_type) { - case RTE_ML_IO_TYPE_INT8: - ret = rte_ml_io_int8_to_float32( - dscale, model->layer[0].info.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); - break; - case RTE_ML_IO_TYPE_UINT8: - ret = rte_ml_io_uint8_to_float32( - dscale, model->layer[0].info.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); - break; - case RTE_ML_IO_TYPE_INT16: - ret = rte_ml_io_int16_to_float32( - dscale, model->layer[0].info.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); - break; - case RTE_ML_IO_TYPE_UINT16: - ret = rte_ml_io_uint16_to_float32( - dscale, 
model->layer[0].info.output[i].nb_elements, - lcl_qbuffer, lcl_dbuffer); - break; - case RTE_ML_IO_TYPE_FP16: - ret = rte_ml_io_float16_to_float32( - model->layer[0].info.output[i].nb_elements, lcl_qbuffer, - lcl_dbuffer); - break; - default: - plt_err("Unsupported model_output_type[%u] : %u", i, - model->layer[0].glow.metadata.output1[i].model_output_type); - ret = -ENOTSUP; - } - if (ret < 0) - return ret; - } - - lcl_qbuffer += model->layer[0].info.output[i].sz_q; - lcl_dbuffer += model->layer[0].info.output[i].sz_d; - } - - return 0; -} - static __rte_always_inline void queue_index_advance(uint64_t *index, uint64_t nb_desc) { diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index ef12069f0d..780e2a9f9c 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -320,13 +320,6 @@ int cn10k_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *mo int cn10k_ml_model_params_update(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, void *buffer); -/* I/O ops */ -int cn10k_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, - struct rte_ml_buff_seg **dbuffer, struct rte_ml_buff_seg **qbuffer); - -int cn10k_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, - struct rte_ml_buff_seg **qbuffer, struct rte_ml_buff_seg **dbuffer); - /* Fast-path ops */ __rte_hot uint16_t cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops); diff --git a/drivers/ml/cnxk/cnxk_ml_io.c b/drivers/ml/cnxk/cnxk_ml_io.c new file mode 100644 index 0000000000..c78009ab0c --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_io.c @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#include + +#include + +#include + +#include "cnxk_ml_io.h" + +inline int +cnxk_ml_io_quantize_single(struct cnxk_ml_io *input, uint8_t *dbuffer, uint8_t *qbuffer) +{ + enum rte_ml_io_type qtype; + enum rte_ml_io_type dtype; + uint32_t nb_elements; + float qscale; + int ret = 0; + + dtype = input->dtype; + qtype = input->qtype; + qscale = input->scale; + nb_elements = input->nb_elements; + + if (dtype == qtype) { + rte_memcpy(qbuffer, dbuffer, input->sz_d); + } else { + switch (qtype) { + case RTE_ML_IO_TYPE_INT8: + ret = rte_ml_io_float32_to_int8(qscale, nb_elements, dbuffer, qbuffer); + break; + case RTE_ML_IO_TYPE_UINT8: + ret = rte_ml_io_float32_to_uint8(qscale, nb_elements, dbuffer, qbuffer); + break; + case RTE_ML_IO_TYPE_INT16: + ret = rte_ml_io_float32_to_int16(qscale, nb_elements, dbuffer, qbuffer); + break; + case RTE_ML_IO_TYPE_UINT16: + ret = rte_ml_io_float32_to_uint16(qscale, nb_elements, dbuffer, qbuffer); + break; + case RTE_ML_IO_TYPE_FP16: + ret = rte_ml_io_float32_to_float16(nb_elements, dbuffer, qbuffer); + break; + default: + plt_err("Unsupported qtype : %u", qtype); + ret = -ENOTSUP; + } + } + + return ret; +} + +inline int +cnxk_ml_io_dequantize_single(struct cnxk_ml_io *output, uint8_t *qbuffer, uint8_t *dbuffer) +{ + enum rte_ml_io_type qtype; + enum rte_ml_io_type dtype; + uint32_t nb_elements; + float dscale; + int ret = 0; + + dtype = output->dtype; + qtype = output->qtype; + dscale = output->scale; + nb_elements = output->nb_elements; + + if (dtype == qtype) { + rte_memcpy(dbuffer, qbuffer, output->sz_q); + } else { + switch (qtype) { + case RTE_ML_IO_TYPE_INT8: + ret = rte_ml_io_int8_to_float32(dscale, nb_elements, qbuffer, dbuffer); + break; + case RTE_ML_IO_TYPE_UINT8: + ret = rte_ml_io_uint8_to_float32(dscale, nb_elements, qbuffer, dbuffer); + break; + case RTE_ML_IO_TYPE_INT16: + ret = rte_ml_io_int16_to_float32(dscale, nb_elements, qbuffer, dbuffer); + break; + case RTE_ML_IO_TYPE_UINT16: + ret = 
rte_ml_io_uint16_to_float32(dscale, nb_elements, qbuffer, dbuffer); + break; + case RTE_ML_IO_TYPE_FP16: + ret = rte_ml_io_float16_to_float32(nb_elements, qbuffer, dbuffer); + break; + default: + plt_err("Unsupported qtype: %u", qtype); + ret = -ENOTSUP; + } + } + + return ret; +} diff --git a/drivers/ml/cnxk/cnxk_ml_io.h b/drivers/ml/cnxk/cnxk_ml_io.h index 29ec7ec511..5de166c252 100644 --- a/drivers/ml/cnxk/cnxk_ml_io.h +++ b/drivers/ml/cnxk/cnxk_ml_io.h @@ -76,4 +76,7 @@ struct cnxk_ml_io_info { uint32_t total_output_sz_d; }; +int cnxk_ml_io_quantize_single(struct cnxk_ml_io *input, uint8_t *dbuffer, uint8_t *qbuffer); +int cnxk_ml_io_dequantize_single(struct cnxk_ml_io *output, uint8_t *qbuffer, uint8_t *dbuffer); + #endif /* _CNXK_ML_IO_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 5ad0ea8c3c..ed71a55132 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -5,6 +5,8 @@ #include #include +#include + #include "cn10k_ml_ops.h" #include "cnxk_ml_dev.h" @@ -650,6 +652,78 @@ cnxk_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buf return cn10k_ml_model_params_update(cnxk_mldev, model, buffer); } +static int +cnxk_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **dbuffer, + struct rte_ml_buff_seg **qbuffer) +{ + struct cnxk_ml_io_info *info = NULL; + struct cnxk_ml_model *model; + uint8_t *lcl_dbuffer; + uint8_t *lcl_qbuffer; + uint32_t i; + int ret; + + if ((dev == NULL) || (dbuffer == NULL) || (qbuffer == NULL)) + return -EINVAL; + + model = dev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + info = &model->layer[0].info; + + lcl_dbuffer = dbuffer[0]->addr; + lcl_qbuffer = qbuffer[0]->addr; + for (i = 0; i < info->nb_inputs; i++) { + ret = cnxk_ml_io_quantize_single(&info->input[i], lcl_dbuffer, lcl_qbuffer); + if (ret < 0) + return ret; + + lcl_dbuffer += 
info->input[i].sz_d; + lcl_qbuffer += info->input[i].sz_q; + } + + return 0; +} + +static int +cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buff_seg **qbuffer, + struct rte_ml_buff_seg **dbuffer) +{ + struct cnxk_ml_io_info *info = NULL; + struct cnxk_ml_model *model; + uint8_t *lcl_qbuffer; + uint8_t *lcl_dbuffer; + uint32_t i; + int ret; + + if ((dev == NULL) || (qbuffer == NULL) || (dbuffer == NULL)) + return -EINVAL; + + model = dev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + + info = &model->layer[model->nb_layers - 1].info; + + lcl_qbuffer = qbuffer[0]->addr; + lcl_dbuffer = dbuffer[0]->addr; + for (i = 0; i < info->nb_outputs; i++) { + ret = cnxk_ml_io_dequantize_single(&info->output[i], lcl_qbuffer, lcl_dbuffer); + if (ret < 0) + return ret; + + lcl_qbuffer += info->output[i].sz_q; + lcl_dbuffer += info->output[i].sz_d; + } + + return 0; +} + struct rte_ml_dev_ops cnxk_ml_ops = { /* Device control ops */ .dev_info_get = cnxk_ml_dev_info_get, @@ -681,6 +755,6 @@ struct rte_ml_dev_ops cnxk_ml_ops = { .model_params_update = cnxk_ml_model_params_update, /* I/O ops */ - .io_quantize = cn10k_ml_io_quantize, - .io_dequantize = cn10k_ml_io_dequantize, + .io_quantize = cnxk_ml_io_quantize, + .io_dequantize = cnxk_ml_io_dequantize, }; diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 6385ac4548..9cc4ddec70 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -25,6 +25,7 @@ sources = files( 'cn10k_ml_model.c', 'cn10k_ml_ocm.c', 'cnxk_ml_dev.c', + 'cnxk_ml_io.c', 'cnxk_ml_model.c', 'cnxk_ml_ops.c', ) From patchwork Wed Sep 20 07:25:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131686 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: 
From: Srikanth Yalavarthi
Subject: [PATCH v2 14/34] ml/cnxk: update device debug functions
Date: Wed, 20 Sep 2023 00:25:05 -0700
Message-ID: <20230920072528.14185-15-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.41.0
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>

Add cnxk wrappers for the device dump and selftest debug functions.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_model.c | 118 +++++++++++++++++++++ drivers/ml/cnxk/cn10k_ml_model.h | 1 + drivers/ml/cnxk/cn10k_ml_ocm.c | 11 +- drivers/ml/cnxk/cn10k_ml_ocm.h | 2 +- drivers/ml/cnxk/cn10k_ml_ops.c | 176 ++----------------------------- drivers/ml/cnxk/cn10k_ml_ops.h | 4 +- drivers/ml/cnxk/cnxk_ml_model.c | 33 ++++++ drivers/ml/cnxk/cnxk_ml_model.h | 2 + drivers/ml/cnxk/cnxk_ml_ops.c | 39 ++++++- drivers/ml/cnxk/cnxk_ml_utils.c | 15 +++ drivers/ml/cnxk/cnxk_ml_utils.h | 17 +++ drivers/ml/cnxk/meson.build | 2 + 12 files changed, 237 insertions(+), 183 deletions(-) create mode 100644 drivers/ml/cnxk/cnxk_ml_utils.c create mode 100644 drivers/ml/cnxk/cnxk_ml_utils.h diff --git a/drivers/ml/cnxk/cn10k_ml_model.c b/drivers/ml/cnxk/cn10k_ml_model.c index 9a336cd18f..9e92d4acf3 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.c +++ b/drivers/ml/cnxk/cn10k_ml_model.c @@ -12,6 +12,7 @@ #include "cnxk_ml_dev.h" #include "cnxk_ml_model.h" #include "cnxk_ml_ops.h" +#include "cnxk_ml_utils.h" static enum rte_ml_io_type cn10k_ml_io_type_map(uint8_t type) @@ -591,3 +592,120 @@ cn10k_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *mo rte_ml_io_type_size_get(io_info->output[i].qtype); } } + +void +cn10k_ml_layer_print(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, FILE *fp) +{ + struct cn10k_ml_ocm *ocm; + char str[STR_LEN]; + uint8_t i; + uint8_t j; + + ocm = &cnxk_mldev->cn10k_mldev.ocm; + + /* Print debug info */ + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, " Layer Information (Layer ID: %u, Name: %s)\n", + cnxk_mldev->index_map[layer->index].layer_id, layer->name); + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "index", layer->index); + fprintf(fp, "%*s : %s\n", FIELD_LEN, "name", layer->name); + fprintf(fp, "%*s : %u.%u.%u.%u\n", FIELD_LEN, "version", + layer->glow.metadata.model.version[0], layer->glow.metadata.model.version[1], + 
layer->glow.metadata.model.version[2], layer->glow.metadata.model.version[3]); + fprintf(fp, "%*s : 0x%016lx\n", FIELD_LEN, "layer", PLT_U64_CAST(layer)); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "batch_size", layer->batch_size); + + /* Print model state */ + if (layer->state == ML_CNXK_LAYER_STATE_LOADED) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "loaded"); + if (layer->state == ML_CNXK_LAYER_STATE_JOB_ACTIVE) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "job_active"); + if (layer->state == ML_CNXK_LAYER_STATE_STARTED) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "started"); + + /* Print OCM status */ + fprintf(fp, "%*s : %" PRIu64 " bytes\n", FIELD_LEN, "wb_size", + layer->glow.metadata.model.ocm_wb_range_end - + layer->glow.metadata.model.ocm_wb_range_start + 1); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "wb_pages", layer->glow.ocm_map.wb_pages); + fprintf(fp, "%*s : %" PRIu64 " bytes\n", FIELD_LEN, "scratch_size", + ocm->size_per_tile - layer->glow.metadata.model.ocm_tmp_range_floor); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "scratch_pages", layer->glow.ocm_map.scratch_pages); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_tiles", + layer->glow.metadata.model.tile_end - layer->glow.metadata.model.tile_start + 1); + + if (layer->state == ML_CNXK_LAYER_STATE_STARTED) { + fprintf(fp, "%*s : 0x%0*" PRIx64 "\n", FIELD_LEN, "tilemask", + ML_CN10K_OCM_NUMTILES / 4, layer->glow.ocm_map.tilemask); + fprintf(fp, "%*s : 0x%" PRIx64 "\n", FIELD_LEN, "ocm_wb_start", + layer->glow.ocm_map.wb_page_start * ocm->page_size); + } + + fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_inputs", layer->glow.metadata.model.num_input); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_outputs", layer->glow.metadata.model.num_output); + fprintf(fp, "\n"); + + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "%8s %16s %12s %18s\n", "input", "input_name", "input_type", + "model_input_type"); + cnxk_ml_print_line(fp, LINE_LEN); + for (i = 0; i < layer->glow.metadata.model.num_input; i++) { + if (i < 
MRVL_ML_NUM_INPUT_OUTPUT_1) { + fprintf(fp, "%8u ", i); + fprintf(fp, "%*s ", 16, layer->glow.metadata.input1[i].input_name); + rte_ml_io_type_to_str(layer->glow.metadata.input1[i].input_type, str, + STR_LEN); + fprintf(fp, "%*s ", 12, str); + rte_ml_io_type_to_str(layer->glow.metadata.input1[i].model_input_type, str, + STR_LEN); + fprintf(fp, "%*s ", 18, str); + fprintf(fp, "\n"); + } else { + j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; + + fprintf(fp, "%8u ", i); + fprintf(fp, "%*s ", 16, layer->glow.metadata.input2[j].input_name); + rte_ml_io_type_to_str(layer->glow.metadata.input2[j].input_type, str, + STR_LEN); + fprintf(fp, "%*s ", 12, str); + rte_ml_io_type_to_str(layer->glow.metadata.input2[j].model_input_type, str, + STR_LEN); + fprintf(fp, "%*s ", 18, str); + fprintf(fp, "\n"); + } + } + fprintf(fp, "\n"); + + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "%8s %16s %12s %18s\n", "output", "output_name", "output_type", + "model_output_type"); + cnxk_ml_print_line(fp, LINE_LEN); + for (i = 0; i < layer->glow.metadata.model.num_output; i++) { + if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { + fprintf(fp, "%8u ", i); + fprintf(fp, "%*s ", 16, layer->glow.metadata.output1[i].output_name); + rte_ml_io_type_to_str(layer->glow.metadata.output1[i].output_type, str, + STR_LEN); + fprintf(fp, "%*s ", 12, str); + rte_ml_io_type_to_str(layer->glow.metadata.output1[i].model_output_type, + str, STR_LEN); + fprintf(fp, "%*s ", 18, str); + fprintf(fp, "\n"); + } else { + j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; + fprintf(fp, "%8u ", i); + fprintf(fp, "%*s ", 16, layer->glow.metadata.output2[j].output_name); + rte_ml_io_type_to_str(layer->glow.metadata.output2[j].output_type, str, + STR_LEN); + fprintf(fp, "%*s ", 12, str); + rte_ml_io_type_to_str(layer->glow.metadata.output2[j].model_output_type, + str, STR_LEN); + fprintf(fp, "%*s ", 18, str); + fprintf(fp, "\n"); + } + } + fprintf(fp, "\n"); + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "\n"); +} diff --git 
a/drivers/ml/cnxk/cn10k_ml_model.h b/drivers/ml/cnxk/cn10k_ml_model.h index 45290b84ce..8717e7de3e 100644 --- a/drivers/ml/cnxk/cn10k_ml_model.h +++ b/drivers/ml/cnxk/cn10k_ml_model.h @@ -459,5 +459,6 @@ int cn10k_ml_model_ocm_pages_count(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_m void cn10k_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, struct cnxk_ml_io_info *io_info, struct cn10k_ml_model_metadata *metadata); +void cn10k_ml_layer_print(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, FILE *fp); #endif /* _CN10K_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.c b/drivers/ml/cnxk/cn10k_ml_ocm.c index 2d900dbc78..70d207e646 100644 --- a/drivers/ml/cnxk/cn10k_ml_ocm.c +++ b/drivers/ml/cnxk/cn10k_ml_ocm.c @@ -483,19 +483,15 @@ cn10k_ml_ocm_pagemask_to_str(struct cn10k_ml_ocm_tile_info *tile_info, uint16_t } void -cn10k_ml_ocm_print(struct rte_ml_dev *dev, FILE *fp) +cn10k_ml_ocm_print(struct cnxk_ml_dev *cnxk_mldev, FILE *fp) { - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; struct cn10k_ml_ocm *ocm; uint8_t tile_id; uint8_t word_id; int wb_pages; char *str; - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - ocm = &cn10k_mldev->ocm; + ocm = &cnxk_mldev->cn10k_mldev.ocm; /* Nibbles + prefix '0x' */ str = rte_zmalloc("ocm_mask_str", ocm->num_pages / 4 + 2, RTE_CACHE_LINE_SIZE); @@ -510,8 +506,7 @@ cn10k_ml_ocm_print(struct rte_ml_dev *dev, FILE *fp) wb_pages = 0 - ocm->tile_ocm_info[tile_id].scratch_pages; for (word_id = 0; word_id < ocm->mask_words; word_id++) - wb_pages += - rte_popcount32(ocm->tile_ocm_info[tile_id].ocm_mask[word_id]); + wb_pages += rte_popcount32(ocm->tile_ocm_info[tile_id].ocm_mask[word_id]); fprintf(fp, "tile = %2u, scratch_pages = %4u," diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.h b/drivers/ml/cnxk/cn10k_ml_ocm.h index 97b723a56a..bf8944f8ee 100644 --- a/drivers/ml/cnxk/cn10k_ml_ocm.h +++ b/drivers/ml/cnxk/cn10k_ml_ocm.h @@ -83,6 +83,6 @@ 
void cn10k_ml_ocm_reserve_pages(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_i uint16_t layer_id, uint64_t tilemask, int wb_page_start, uint16_t wb_pages, uint16_t scratch_pages); void cn10k_ml_ocm_free_pages(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, uint16_t layer_id); -void cn10k_ml_ocm_print(struct rte_ml_dev *dev, FILE *fp); +void cn10k_ml_ocm_print(struct cnxk_ml_dev *cnxk_mldev, FILE *fp); #endif /* _CN10K_ML_OCM_H_ */ diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 1e6aee818c..c3608eec99 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -22,11 +22,6 @@ /* ML layer macros */ #define CN10K_ML_LAYER_MEMZONE_NAME "ml_cn10k_layer_mz" -/* Debug print width */ -#define STR_LEN 12 -#define FIELD_LEN 16 -#define LINE_LEN 90 - /* ML Job descriptor flags */ #define ML_FLAGS_POLL_COMPL BIT(0) #define ML_FLAGS_SSO_COMPL BIT(1) @@ -74,16 +69,6 @@ static const struct cn10k_ml_stype_db_driver { {ML_DRIVER_ERR_FW_ERROR, "UNKNOWN FIRMWARE ERROR"}, }; -static void -print_line(FILE *fp, int len) -{ - int i; - - for (i = 0; i < len; i++) - fprintf(fp, "-"); - fprintf(fp, "\n"); -} - static inline void cn10k_ml_set_poll_addr(struct cnxk_ml_req *req) { @@ -117,140 +102,6 @@ cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *qp) } } -static void -cn10k_ml_model_print(struct rte_ml_dev *dev, uint16_t model_id, FILE *fp) -{ - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - struct cnxk_ml_model *model; - struct cn10k_ml_ocm *ocm; - char str[STR_LEN]; - uint8_t i; - uint8_t j; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - ocm = &cn10k_mldev->ocm; - model = dev->data->models[model_id]; - - /* Print debug info */ - print_line(fp, LINE_LEN); - fprintf(fp, " Model Information (%s)\n", model->glow.metadata.model.name); - print_line(fp, LINE_LEN); - fprintf(fp, "%*s : %s\n", FIELD_LEN, "name", model->glow.metadata.model.name); - 
fprintf(fp, "%*s : %u.%u.%u.%u\n", FIELD_LEN, "version", - model->glow.metadata.model.version[0], model->glow.metadata.model.version[1], - model->glow.metadata.model.version[2], model->glow.metadata.model.version[3]); - if (strlen(model->name) != 0) - fprintf(fp, "%*s : %s\n", FIELD_LEN, "debug_name", model->name); - fprintf(fp, "%*s : 0x%016lx\n", FIELD_LEN, "model", PLT_U64_CAST(model)); - fprintf(fp, "%*s : %u\n", FIELD_LEN, "model_id", model->model_id); - fprintf(fp, "%*s : %u\n", FIELD_LEN, "batch_size", model->glow.metadata.model.batch_size); - fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_layers", model->glow.metadata.model.num_layers); - - /* Print model state */ - if (model->state == ML_CNXK_MODEL_STATE_LOADED) - fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "loaded"); - if (model->state == ML_CNXK_MODEL_STATE_JOB_ACTIVE) - fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "job_active"); - if (model->state == ML_CNXK_MODEL_STATE_STARTED) - fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "started"); - - /* Print OCM status */ - fprintf(fp, "%*s : %" PRIu64 " bytes\n", FIELD_LEN, "wb_size", - model->glow.metadata.model.ocm_wb_range_end - - model->glow.metadata.model.ocm_wb_range_start + 1); - fprintf(fp, "%*s : %u\n", FIELD_LEN, "wb_pages", model->layer[0].glow.ocm_map.wb_pages); - fprintf(fp, "%*s : %" PRIu64 " bytes\n", FIELD_LEN, "scratch_size", - ocm->size_per_tile - model->glow.metadata.model.ocm_tmp_range_floor); - fprintf(fp, "%*s : %u\n", FIELD_LEN, "scratch_pages", - model->layer[0].glow.ocm_map.scratch_pages); - fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_tiles", - model->glow.metadata.model.tile_end - model->glow.metadata.model.tile_start + 1); - - if (model->state == ML_CNXK_MODEL_STATE_STARTED) { - fprintf(fp, "%*s : 0x%0*" PRIx64 "\n", FIELD_LEN, "tilemask", - ML_CN10K_OCM_NUMTILES / 4, model->layer[0].glow.ocm_map.tilemask); - fprintf(fp, "%*s : 0x%" PRIx64 "\n", FIELD_LEN, "ocm_wb_start", - model->layer[0].glow.ocm_map.wb_page_start * 
cn10k_mldev->ocm.page_size); - } - - fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_inputs", model->glow.metadata.model.num_input); - fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_outputs", model->glow.metadata.model.num_output); - fprintf(fp, "\n"); - - print_line(fp, LINE_LEN); - fprintf(fp, "%8s %16s %12s %18s %12s\n", "input", "input_name", "input_type", - "model_input_type", "quantize"); - print_line(fp, LINE_LEN); - for (i = 0; i < model->glow.metadata.model.num_input; i++) { - if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - fprintf(fp, "%8u ", i); - fprintf(fp, "%*s ", 16, model->glow.metadata.input1[i].input_name); - rte_ml_io_type_to_str(model->glow.metadata.input1[i].input_type, str, - STR_LEN); - fprintf(fp, "%*s ", 12, str); - rte_ml_io_type_to_str(model->glow.metadata.input1[i].model_input_type, str, - STR_LEN); - fprintf(fp, "%*s ", 18, str); - fprintf(fp, "%*s", 12, - (model->glow.metadata.input1[i].quantize == 1 ? "Yes" : "No")); - fprintf(fp, "\n"); - } else { - j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - - fprintf(fp, "%8u ", i); - fprintf(fp, "%*s ", 16, model->glow.metadata.input2[j].input_name); - rte_ml_io_type_to_str(model->glow.metadata.input2[j].input_type, str, - STR_LEN); - fprintf(fp, "%*s ", 12, str); - rte_ml_io_type_to_str(model->glow.metadata.input2[j].model_input_type, str, - STR_LEN); - fprintf(fp, "%*s ", 18, str); - fprintf(fp, "%*s", 12, - (model->glow.metadata.input2[j].quantize == 1 ? 
"Yes" : "No")); - fprintf(fp, "\n"); - } - } - fprintf(fp, "\n"); - - print_line(fp, LINE_LEN); - fprintf(fp, "%8s %16s %12s %18s %12s\n", "output", "output_name", "output_type", - "model_output_type", "dequantize"); - print_line(fp, LINE_LEN); - for (i = 0; i < model->glow.metadata.model.num_output; i++) { - if (i < MRVL_ML_NUM_INPUT_OUTPUT_1) { - fprintf(fp, "%8u ", i); - fprintf(fp, "%*s ", 16, model->glow.metadata.output1[i].output_name); - rte_ml_io_type_to_str(model->glow.metadata.output1[i].output_type, str, - STR_LEN); - fprintf(fp, "%*s ", 12, str); - rte_ml_io_type_to_str(model->glow.metadata.output1[i].model_output_type, - str, STR_LEN); - fprintf(fp, "%*s ", 18, str); - fprintf(fp, "%*s", 12, - (model->glow.metadata.output1[i].dequantize == 1 ? "Yes" : "No")); - fprintf(fp, "\n"); - } else { - j = i - MRVL_ML_NUM_INPUT_OUTPUT_1; - fprintf(fp, "%8u ", i); - fprintf(fp, "%*s ", 16, model->glow.metadata.output2[j].output_name); - rte_ml_io_type_to_str(model->glow.metadata.output2[j].output_type, str, - STR_LEN); - fprintf(fp, "%*s ", 12, str); - rte_ml_io_type_to_str(model->glow.metadata.output2[j].model_output_type, - str, STR_LEN); - fprintf(fp, "%*s ", 18, str); - fprintf(fp, "%*s", 12, - (model->glow.metadata.output2[j].dequantize == 1 ? 
"Yes" : "No")); - fprintf(fp, "\n"); - } - } - fprintf(fp, "\n"); - print_line(fp, LINE_LEN); - fprintf(fp, "\n"); -} - static void cn10k_ml_prep_sp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, struct cnxk_ml_req *req, enum cn10k_ml_job_type job_type) @@ -1124,38 +975,25 @@ cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mo } int -cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) +cn10k_ml_dev_dump(struct cnxk_ml_dev *cnxk_mldev, FILE *fp) { struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - struct cnxk_ml_model *model; struct cn10k_ml_fw *fw; uint32_t head_loc; uint32_t tail_loc; - uint16_t model_id; uint32_t bufsize; char *head_ptr; int core_id; - if (roc_env_is_asim()) - return 0; - - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; fw = &cn10k_mldev->fw; - /* Dump model info */ - for (model_id = 0; model_id < dev->data->nb_models; model_id++) { - model = dev->data->models[model_id]; - if (model != NULL) { - cn10k_ml_model_print(dev, model_id, fp); - fprintf(fp, "\n"); - } - } - /* Dump OCM state */ - cn10k_ml_ocm_print(dev, fp); + cn10k_ml_ocm_print(cnxk_mldev, fp); + + if (roc_env_is_asim()) + return 0; /* Dump debug buffer */ for (core_id = 0; core_id <= 1; core_id++) { @@ -1211,17 +1049,15 @@ cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) } int -cn10k_ml_dev_selftest(struct rte_ml_dev *dev) +cn10k_ml_dev_selftest(struct cnxk_ml_dev *cnxk_mldev) { struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; const struct plt_memzone *mz; struct cnxk_ml_req *req; uint64_t timeout_cycle; bool timeout; int ret; - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; mz = plt_memzone_reserve_aligned("dev_selftest", sizeof(struct cnxk_ml_req), 0, ML_CN10K_ALIGN_SIZE); diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 780e2a9f9c..5fda98ae88 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ 
b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -295,8 +295,8 @@ int cn10k_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_d int cn10k_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_start(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev); -int cn10k_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp); -int cn10k_ml_dev_selftest(struct rte_ml_dev *dev); +int cn10k_ml_dev_dump(struct cnxk_ml_dev *cnxk_mldev, FILE *fp); +int cn10k_ml_dev_selftest(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats); void cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev); diff --git a/drivers/ml/cnxk/cnxk_ml_model.c b/drivers/ml/cnxk/cnxk_ml_model.c index 3d735ced3e..b069d4e3a5 100644 --- a/drivers/ml/cnxk/cnxk_ml_model.c +++ b/drivers/ml/cnxk/cnxk_ml_model.c @@ -5,3 +5,36 @@ #include #include "cnxk_ml_model.h" +#include "cnxk_ml_utils.h" + +void +cnxk_ml_model_dump(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, FILE *fp) +{ + struct cnxk_ml_layer *layer; + uint16_t layer_id; + + /* Print debug info */ + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, " Model Information (Model ID: %u, Name: %s)\n", model->model_id, model->name); + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "model_id", model->model_id); + fprintf(fp, "%*s : %s\n", FIELD_LEN, "name", model->name); + fprintf(fp, "%*s : 0x%016lx\n", FIELD_LEN, "model", PLT_U64_CAST(model)); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "batch_size", model->batch_size); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "nb_layers", model->nb_layers); + + /* Print model state */ + if (model->state == ML_CNXK_MODEL_STATE_LOADED) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "loaded"); + if (model->state == ML_CNXK_MODEL_STATE_JOB_ACTIVE) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "job_active"); + if (model->state == ML_CNXK_MODEL_STATE_STARTED) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", 
"started"); + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "\n"); + + for (layer_id = 0; layer_id < model->nb_layers; layer_id++) { + layer = &model->layer[layer_id]; + cn10k_ml_layer_print(cnxk_mldev, layer, fp); + } +} diff --git a/drivers/ml/cnxk/cnxk_ml_model.h b/drivers/ml/cnxk/cnxk_ml_model.h index a2994dbb71..66d979dd3c 100644 --- a/drivers/ml/cnxk/cnxk_ml_model.h +++ b/drivers/ml/cnxk/cnxk_ml_model.h @@ -108,4 +108,6 @@ struct cnxk_ml_model { plt_spinlock_t lock; }; +void cnxk_ml_model_dump(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, FILE *fp); + #endif /* _CNXK_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index ed71a55132..b49ab59798 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -411,6 +411,41 @@ cnxk_ml_dev_stop(struct rte_ml_dev *dev) return 0; } +static int +cnxk_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + uint16_t model_id; + + if ((dev == NULL) || (fp == NULL)) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + /* Dump model info */ + for (model_id = 0; model_id < cnxk_mldev->mldev->data->nb_models; model_id++) { + model = cnxk_mldev->mldev->data->models[model_id]; + if (model != NULL) + cnxk_ml_model_dump(cnxk_mldev, model, fp); + } + + return cn10k_ml_dev_dump(cnxk_mldev, fp); +} + +static int +cnxk_ml_dev_selftest(struct rte_ml_dev *dev) +{ + struct cnxk_ml_dev *cnxk_mldev; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + return cn10k_ml_dev_selftest(cnxk_mldev); +} + static int cnxk_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, const struct rte_ml_dev_qp_conf *qp_conf, int socket_id) @@ -731,8 +766,8 @@ struct rte_ml_dev_ops cnxk_ml_ops = { .dev_close = cnxk_ml_dev_close, .dev_start = cnxk_ml_dev_start, .dev_stop = cnxk_ml_dev_stop, - .dev_dump = cn10k_ml_dev_dump, - .dev_selftest = cn10k_ml_dev_selftest, 
+ .dev_dump = cnxk_ml_dev_dump, + .dev_selftest = cnxk_ml_dev_selftest, /* Queue-pair handling ops */ .dev_queue_pair_setup = cnxk_ml_dev_queue_pair_setup, diff --git a/drivers/ml/cnxk/cnxk_ml_utils.c b/drivers/ml/cnxk/cnxk_ml_utils.c new file mode 100644 index 0000000000..ca3670a9e8 --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_utils.c @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#include "cnxk_ml_utils.h" + +void +cnxk_ml_print_line(FILE *fp, int len) +{ + int i; + + for (i = 0; i < len; i++) + fprintf(fp, "-"); + fprintf(fp, "\n"); +} diff --git a/drivers/ml/cnxk/cnxk_ml_utils.h b/drivers/ml/cnxk/cnxk_ml_utils.h new file mode 100644 index 0000000000..ed2ab21346 --- /dev/null +++ b/drivers/ml/cnxk/cnxk_ml_utils.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef _CNXK_ML_UTILS_H_ +#define _CNXK_ML_UTILS_H_ + +#include + +/* Debug print width */ +#define STR_LEN 12 +#define FIELD_LEN 16 +#define LINE_LEN 72 + +void cnxk_ml_print_line(FILE *fp, int len); + +#endif /* _CNXK_ML_UTILS_H_ */ diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 9cc4ddec70..575f08f9c0 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -17,6 +17,7 @@ driver_sdk_headers = files( 'cnxk_ml_model.h', 'cnxk_ml_ops.h', 'cnxk_ml_xstats.h', + 'cnxk_ml_utils.h', ) sources = files( @@ -28,6 +29,7 @@ sources = files( 'cnxk_ml_io.c', 'cnxk_ml_model.c', 'cnxk_ml_ops.c', + 'cnxk_ml_utils.c', ) deps += ['mldev', 'common_cnxk', 'kvargs', 'hash'] From patchwork Wed Sep 20 07:25:06 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131688 X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth
Yalavarthi To: Srikanth Yalavarthi Subject: [PATCH v2 15/34] ml/cnxk: update device stats functions Date: Wed, 20 Sep 2023 00:25:06 -0700 Message-ID: <20230920072528.14185-16-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> Added cnxk wrapper function to handle ML device stats Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 32 ------------------------------ drivers/ml/cnxk/cn10k_ml_ops.h | 2 -- drivers/ml/cnxk/cnxk_ml_ops.c | 36 ++++++++++++++++++++++++++++++++-- 3 files changed, 34 insertions(+), 36 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index c3608eec99..59cd3bb9b3 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -774,38 +774,6 @@ cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev) return 0; } -int -cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats) -{ - struct cnxk_ml_qp *qp; - int qp_id; - - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { - qp = dev->data->queue_pairs[qp_id]; - stats->enqueued_count += qp->stats.enqueued_count; - stats->dequeued_count += qp->stats.dequeued_count; - stats->enqueue_err_count += qp->stats.enqueue_err_count; - stats->dequeue_err_count += qp->stats.dequeue_err_count; - } - - return 0; -} - -void
-cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev) -{ - struct cnxk_ml_qp *qp; - int qp_id; - - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { - qp = dev->data->queue_pairs[qp_id]; - qp->stats.enqueued_count = 0; - qp->stats.dequeued_count = 0; - qp->stats.enqueue_err_count = 0; - qp->stats.dequeue_err_count = 0; - } -} - int cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 5fda98ae88..47e7cb12af 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -298,8 +298,6 @@ int cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_dump(struct cnxk_ml_dev *cnxk_mldev, FILE *fp); int cn10k_ml_dev_selftest(struct cnxk_ml_dev *cnxk_mldev); -int cn10k_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats); -void cn10k_ml_dev_stats_reset(struct rte_ml_dev *dev); int cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, uint32_t size); diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index b49ab59798..ffeb3f4452 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -491,6 +491,38 @@ cnxk_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id, return 0; } +static int +cnxk_ml_dev_stats_get(struct rte_ml_dev *dev, struct rte_ml_dev_stats *stats) +{ + struct cnxk_ml_qp *qp; + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + qp = dev->data->queue_pairs[qp_id]; + stats->enqueued_count += qp->stats.enqueued_count; + stats->dequeued_count += qp->stats.dequeued_count; + stats->enqueue_err_count += qp->stats.enqueue_err_count; + stats->dequeue_err_count += qp->stats.dequeue_err_count; + } + + return 0; +} + +static void 
+cnxk_ml_dev_stats_reset(struct rte_ml_dev *dev) +{ + struct cnxk_ml_qp *qp; + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + qp = dev->data->queue_pairs[qp_id]; + qp->stats.enqueued_count = 0; + qp->stats.dequeued_count = 0; + qp->stats.enqueue_err_count = 0; + qp->stats.dequeue_err_count = 0; + } +} + static int cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, uint16_t *model_id) { @@ -774,8 +806,8 @@ struct rte_ml_dev_ops cnxk_ml_ops = { .dev_queue_pair_release = cnxk_ml_dev_queue_pair_release, /* Stats ops */ - .dev_stats_get = cn10k_ml_dev_stats_get, - .dev_stats_reset = cn10k_ml_dev_stats_reset, + .dev_stats_get = cnxk_ml_dev_stats_get, + .dev_stats_reset = cnxk_ml_dev_stats_reset, .dev_xstats_names_get = cn10k_ml_dev_xstats_names_get, .dev_xstats_by_name_get = cn10k_ml_dev_xstats_by_name_get, .dev_xstats_get = cn10k_ml_dev_xstats_get, From patchwork Wed Sep 20 07:25:07 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131690 X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi To: Srikanth Yalavarthi Subject: [PATCH v2 16/34] ml/cnxk: update device and model xstats functions Date: Wed, 20 Sep 2023 00:25:07 -0700 Message-ID: <20230920072528.14185-17-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>
Added cnxk wrapper function to handle ML device and model extended stats. Handling resources for the xstats is done in the cnxk layer. Introduced internal xstats group. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.h | 4 - drivers/ml/cnxk/cn10k_ml_ops.c | 542 +------------------------------ drivers/ml/cnxk/cn10k_ml_ops.h | 11 - drivers/ml/cnxk/cnxk_ml_dev.h | 5 + drivers/ml/cnxk/cnxk_ml_ops.c | 540 +++++++++++++++++++++++++++++- drivers/ml/cnxk/cnxk_ml_xstats.h | 21 +- 6 files changed, 571 insertions(+), 552 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index be989e0a20..bde9d08901 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -10,7 +10,6 @@ #include "cn10k_ml_ocm.h" #include "cnxk_ml_io.h" -#include "cnxk_ml_xstats.h" /* Dummy Device ops */ extern struct rte_ml_dev_ops ml_dev_dummy_ops; @@ -133,9 +132,6 @@ struct cn10k_ml_dev { /* OCM info */ struct cn10k_ml_ocm ocm; - /* Extended stats data */ - struct cnxk_ml_xstats xstats; - /* Enable / disable model data caching */ int cache_model_data; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 59cd3bb9b3..f1431b89a2 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -202,107 +202,21 @@ cn10k_ml_prep_fp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_r req->cn10k_req.jd.model_run.num_batches = op->nb_batches; } -static int -cn10k_ml_xstats_init(struct rte_ml_dev *dev) -{ - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - uint16_t nb_stats; - uint16_t stat_id; - uint16_t model; - uint16_t i; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = 
&cnxk_mldev->cn10k_mldev; - - /* Allocate memory for xstats entries. Don't allocate during reconfigure */ - nb_stats = RTE_DIM(device_xstats) + ML_CNXK_MAX_MODELS * RTE_DIM(layer_xstats); - if (cn10k_mldev->xstats.entries == NULL) - cn10k_mldev->xstats.entries = rte_zmalloc( - "cn10k_ml_xstats", sizeof(struct cnxk_ml_xstats_entry) * nb_stats, - PLT_CACHE_LINE_SIZE); - - if (cn10k_mldev->xstats.entries == NULL) - return -ENOMEM; - - /* Initialize device xstats */ - stat_id = 0; - for (i = 0; i < RTE_DIM(device_xstats); i++) { - cn10k_mldev->xstats.entries[stat_id].map.id = stat_id; - snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, - sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), "%s", - device_xstats[i].name); - - cn10k_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_DEVICE; - cn10k_mldev->xstats.entries[stat_id].type = device_xstats[i].type; - cn10k_mldev->xstats.entries[stat_id].fn_id = CNXK_ML_XSTATS_FN_DEVICE; - cn10k_mldev->xstats.entries[stat_id].obj_idx = 0; - cn10k_mldev->xstats.entries[stat_id].reset_allowed = device_xstats[i].reset_allowed; - stat_id++; - } - cn10k_mldev->xstats.count_mode_device = stat_id; - - /* Initialize model xstats */ - for (model = 0; model < ML_CNXK_MAX_MODELS; model++) { - cn10k_mldev->xstats.offset_for_model[model] = stat_id; - - for (i = 0; i < RTE_DIM(layer_xstats); i++) { - cn10k_mldev->xstats.entries[stat_id].map.id = stat_id; - cn10k_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_MODEL; - cn10k_mldev->xstats.entries[stat_id].type = layer_xstats[i].type; - cn10k_mldev->xstats.entries[stat_id].fn_id = CNXK_ML_XSTATS_FN_MODEL; - cn10k_mldev->xstats.entries[stat_id].obj_idx = model; - cn10k_mldev->xstats.entries[stat_id].reset_allowed = - layer_xstats[i].reset_allowed; - - /* Name of xstat is updated during model load */ - snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, - sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), - "Model-%u-%s", model, layer_xstats[i].name); - - stat_id++; - 
} - - cn10k_mldev->xstats.count_per_model[model] = RTE_DIM(layer_xstats); - } - - cn10k_mldev->xstats.count_mode_model = stat_id - cn10k_mldev->xstats.count_mode_device; - cn10k_mldev->xstats.count = stat_id; - - return 0; -} - static void -cn10k_ml_xstats_uninit(struct rte_ml_dev *dev) +cn10k_ml_xstats_layer_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, + uint16_t layer_id) { - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - - rte_free(cn10k_mldev->xstats.entries); - cn10k_mldev->xstats.entries = NULL; - - cn10k_mldev->xstats.count = 0; -} - -static void -cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id) -{ - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; uint16_t rclk_freq; uint16_t sclk_freq; uint16_t stat_id; char suffix[8]; uint16_t i; - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - model = dev->data->models[model_id]; - stat_id = RTE_DIM(device_xstats) + model_id * RTE_DIM(layer_xstats); + model = cnxk_mldev->mldev->data->models[model_id]; + layer = &model->layer[layer_id]; + stat_id = cnxk_mldev->xstats.offset_for_layer[model_id][layer_id]; roc_clk_freq_get(&rclk_freq, &sclk_freq); if (sclk_freq == 0) @@ -310,270 +224,15 @@ cn10k_ml_xstats_model_name_update(struct rte_ml_dev *dev, uint16_t model_id) else strcpy(suffix, "ns"); - /* Update xstat name based on model name and sclk availability */ + /* Update xstat name based on layer name and sclk availability */ for (i = 0; i < RTE_DIM(layer_xstats); i++) { - snprintf(cn10k_mldev->xstats.entries[stat_id].map.name, - sizeof(cn10k_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s", - model->layer[0].glow.metadata.model.name, layer_xstats[i].name, suffix); + snprintf(cnxk_mldev->xstats.entries[stat_id].map.name, + 
sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s", + layer->glow.metadata.model.name, layer_xstats[i].name, suffix); stat_id++; } } -static uint64_t -cn10k_ml_dev_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx __rte_unused, - enum cnxk_ml_xstats_type type) -{ - struct cnxk_ml_dev *cnxk_mldev; - - cnxk_mldev = dev->data->dev_private; - - switch (type) { - case nb_models_loaded: - return cnxk_mldev->nb_models_loaded; - case nb_models_unloaded: - return cnxk_mldev->nb_models_unloaded; - case nb_models_started: - return cnxk_mldev->nb_models_started; - case nb_models_stopped: - return cnxk_mldev->nb_models_stopped; - default: - return -1; - } - - return 0; -} - -#define ML_AVG_FOREACH_QP(dev, model, qp_id, str, value, count) \ - do { \ - value = 0; \ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ - value += model->layer[0].glow.burst_xstats[qp_id].str##_latency_tot; \ - count += model->layer[0].glow.burst_xstats[qp_id].dequeued_count - \ - model->layer[0].glow.burst_xstats[qp_id].str##_reset_count; \ - } \ - if (count != 0) \ - value = value / count; \ - } while (0) - -#define ML_MIN_FOREACH_QP(dev, model, qp_id, str, value, count) \ - do { \ - value = UINT64_MAX; \ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ - value = PLT_MIN( \ - value, \ - model->layer[0].glow.burst_xstats[qp_id].str##_latency_min); \ - count += model->layer[0].glow.burst_xstats[qp_id].dequeued_count - \ - model->layer[0].glow.burst_xstats[qp_id].str##_reset_count; \ - } \ - if (count == 0) \ - value = 0; \ - } while (0) - -#define ML_MAX_FOREACH_QP(dev, model, qp_id, str, value, count) \ - do { \ - value = 0; \ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ - value = PLT_MAX( \ - value, \ - model->layer[0].glow.burst_xstats[qp_id].str##_latency_max); \ - count += model->layer[0].glow.burst_xstats[qp_id].dequeued_count - \ - model->layer[0].glow.burst_xstats[qp_id].str##_reset_count; \ - } \ - if (count == 0) \ - value = 
0; \ - } while (0) - -static uint64_t -cn10k_ml_model_xstat_get(struct rte_ml_dev *dev, uint16_t obj_idx, enum cnxk_ml_xstats_type type) -{ - struct cnxk_ml_model *model; - uint16_t rclk_freq; /* MHz */ - uint16_t sclk_freq; /* MHz */ - uint64_t count = 0; - uint64_t value; - uint32_t qp_id; - - model = dev->data->models[obj_idx]; - if (model == NULL) - return 0; - - switch (type) { - case avg_hw_latency: - ML_AVG_FOREACH_QP(dev, model, qp_id, hw, value, count); - break; - case min_hw_latency: - ML_MIN_FOREACH_QP(dev, model, qp_id, hw, value, count); - break; - case max_hw_latency: - ML_MAX_FOREACH_QP(dev, model, qp_id, hw, value, count); - break; - case avg_fw_latency: - ML_AVG_FOREACH_QP(dev, model, qp_id, fw, value, count); - break; - case min_fw_latency: - ML_MIN_FOREACH_QP(dev, model, qp_id, fw, value, count); - break; - case max_fw_latency: - ML_MAX_FOREACH_QP(dev, model, qp_id, fw, value, count); - break; - default: - value = 0; - } - - roc_clk_freq_get(&rclk_freq, &sclk_freq); - if (sclk_freq != 0) /* return in ns */ - value = (value * 1000ULL) / sclk_freq; - - return value; -} - -static int -cn10k_ml_device_xstats_reset(struct rte_ml_dev *dev, const uint16_t stat_ids[], uint16_t nb_ids) -{ - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_xstats_entry *xs; - struct cnxk_ml_dev *cnxk_mldev; - uint16_t nb_stats; - uint16_t stat_id; - uint32_t i; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - - if (stat_ids == NULL) - nb_stats = cn10k_mldev->xstats.count_mode_device; - else - nb_stats = nb_ids; - - for (i = 0; i < nb_stats; i++) { - if (stat_ids == NULL) - stat_id = i; - else - stat_id = stat_ids[i]; - - if (stat_id >= cn10k_mldev->xstats.count_mode_device) - return -EINVAL; - - xs = &cn10k_mldev->xstats.entries[stat_id]; - if (!xs->reset_allowed) - continue; - - xs->reset_value = cn10k_ml_dev_xstat_get(dev, xs->obj_idx, xs->type); - } - - return 0; -} - -#define ML_AVG_RESET_FOREACH_QP(dev, model, qp_id, str) \ - do { 
\ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { \ - model->layer[0].glow.burst_xstats[qp_id].str##_latency_tot = 0; \ - model->layer[0].glow.burst_xstats[qp_id].str##_reset_count = \ - model->layer[0].glow.burst_xstats[qp_id].dequeued_count; \ - } \ - } while (0) - -#define ML_MIN_RESET_FOREACH_QP(dev, model, qp_id, str) \ - do { \ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) \ - model->layer[0].glow.burst_xstats[qp_id].str##_latency_min = UINT64_MAX; \ - } while (0) - -#define ML_MAX_RESET_FOREACH_QP(dev, model, qp_id, str) \ - do { \ - for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) \ - model->layer[0].glow.burst_xstats[qp_id].str##_latency_max = 0; \ - } while (0) - -static void -cn10k_ml_reset_model_stat(struct rte_ml_dev *dev, uint16_t model_id, enum cnxk_ml_xstats_type type) -{ - struct cnxk_ml_model *model; - uint32_t qp_id; - - model = dev->data->models[model_id]; - - switch (type) { - case avg_hw_latency: - ML_AVG_RESET_FOREACH_QP(dev, model, qp_id, hw); - break; - case min_hw_latency: - ML_MIN_RESET_FOREACH_QP(dev, model, qp_id, hw); - break; - case max_hw_latency: - ML_MAX_RESET_FOREACH_QP(dev, model, qp_id, hw); - break; - case avg_fw_latency: - ML_AVG_RESET_FOREACH_QP(dev, model, qp_id, fw); - break; - case min_fw_latency: - ML_MIN_RESET_FOREACH_QP(dev, model, qp_id, fw); - break; - case max_fw_latency: - ML_MAX_RESET_FOREACH_QP(dev, model, qp_id, fw); - break; - default: - return; - } -} - -static int -cn10k_ml_model_xstats_reset(struct rte_ml_dev *dev, int32_t model_id, const uint16_t stat_ids[], - uint16_t nb_ids) -{ - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_xstats_entry *xs; - struct cnxk_ml_dev *cnxk_mldev; - struct cnxk_ml_model *model; - int32_t lcl_model_id = 0; - uint16_t start_id; - uint16_t end_id; - int32_t i; - int32_t j; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - for (i = 0; i < ML_CNXK_MAX_MODELS; i++) { - if (model_id == -1) { - model = 
dev->data->models[i]; - if (model == NULL) /* Skip inactive models */ - continue; - } else { - if (model_id != i) - continue; - - model = dev->data->models[model_id]; - if (model == NULL) { - plt_err("Invalid model_id = %d\n", model_id); - return -EINVAL; - } - } - - start_id = cn10k_mldev->xstats.offset_for_model[i]; - end_id = cn10k_mldev->xstats.offset_for_model[i] + - cn10k_mldev->xstats.count_per_model[i] - 1; - - if (stat_ids == NULL) { - for (j = start_id; j <= end_id; j++) { - xs = &cn10k_mldev->xstats.entries[j]; - cn10k_ml_reset_model_stat(dev, i, xs->type); - } - } else { - for (j = 0; j < nb_ids; j++) { - if (stat_ids[j] < start_id || stat_ids[j] > end_id) { - plt_err("Invalid stat_ids[%d] = %d for model_id = %d\n", j, - stat_ids[j], lcl_model_id); - return -EINVAL; - } - xs = &cn10k_mldev->xstats.entries[stat_ids[j]]; - cn10k_ml_reset_model_stat(dev, i, xs->type); - } - } - } - - return 0; -} - static int cn10k_ml_cache_model_data(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer) { @@ -658,7 +317,6 @@ cn10k_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_c struct cn10k_ml_dev *cn10k_mldev; struct cn10k_ml_ocm *ocm; uint16_t tile_id; - int ret; RTE_SET_USED(conf); @@ -686,13 +344,6 @@ cn10k_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_c rte_spinlock_init(&ocm->lock); - /* Initialize xstats */ - ret = cn10k_ml_xstats_init(cnxk_mldev->mldev); - if (ret != 0) { - plt_err("Failed to initialize xstats"); - return ret; - } - /* Set JCMDQ enqueue function */ if (cn10k_mldev->hw_queue_lock == 1) cn10k_mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_sl; @@ -721,9 +372,6 @@ cn10k_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev) /* Release ocm_mask memory */ rte_free(cn10k_mldev->ocm.ocm_mask); - /* Un-initialize xstats */ - cn10k_ml_xstats_uninit(cnxk_mldev->mldev); - /* Unload firmware */ cn10k_ml_fw_unload(cnxk_mldev); @@ -774,174 +422,6 @@ cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev) return 0; } 
-int -cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, - int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, - uint32_t size) -{ - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - uint32_t xstats_mode_count; - uint32_t idx = 0; - uint32_t i; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - - xstats_mode_count = 0; - switch (mode) { - case RTE_ML_DEV_XSTATS_DEVICE: - xstats_mode_count = cn10k_mldev->xstats.count_mode_device; - break; - case RTE_ML_DEV_XSTATS_MODEL: - if (model_id >= ML_CNXK_MAX_MODELS) - break; - xstats_mode_count = cn10k_mldev->xstats.count_per_model[model_id]; - break; - default: - return -EINVAL; - }; - - if (xstats_mode_count > size || xstats_map == NULL) - return xstats_mode_count; - - for (i = 0; i < cn10k_mldev->xstats.count && idx < size; i++) { - if (cn10k_mldev->xstats.entries[i].mode != mode) - continue; - - if (mode != RTE_ML_DEV_XSTATS_DEVICE && - model_id != cn10k_mldev->xstats.entries[i].obj_idx) - continue; - - strncpy(xstats_map[idx].name, cn10k_mldev->xstats.entries[i].map.name, - RTE_ML_STR_MAX); - xstats_map[idx].id = cn10k_mldev->xstats.entries[i].map.id; - idx++; - } - - return idx; -} - -int -cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16_t *stat_id, - uint64_t *value) -{ - struct cnxk_ml_xstats_entry *xs; - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - cnxk_ml_xstats_fn fn; - uint32_t i; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - for (i = 0; i < cn10k_mldev->xstats.count; i++) { - xs = &cn10k_mldev->xstats.entries[i]; - if (strncmp(xs->map.name, name, RTE_ML_STR_MAX) == 0) { - if (stat_id != NULL) - *stat_id = xs->map.id; - - switch (xs->fn_id) { - case CNXK_ML_XSTATS_FN_DEVICE: - fn = cn10k_ml_dev_xstat_get; - break; - case CNXK_ML_XSTATS_FN_MODEL: - fn = cn10k_ml_model_xstat_get; - break; - default: - plt_err("Unexpected 
xstat fn_id = %d", xs->fn_id); - return -EINVAL; - } - - *value = fn(dev, xs->obj_idx, xs->type) - xs->reset_value; - - return 0; - } - } - - if (stat_id != NULL) - *stat_id = (uint16_t)-1; - - return -EINVAL; -} - -int -cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, - const uint16_t stat_ids[], uint64_t values[], uint16_t nb_ids) -{ - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_xstats_entry *xs; - struct cnxk_ml_dev *cnxk_mldev; - uint32_t xstats_mode_count; - cnxk_ml_xstats_fn fn; - uint64_t val; - uint32_t idx; - uint32_t i; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - xstats_mode_count = 0; - - switch (mode) { - case RTE_ML_DEV_XSTATS_DEVICE: - xstats_mode_count = cn10k_mldev->xstats.count_mode_device; - break; - case RTE_ML_DEV_XSTATS_MODEL: - if (model_id >= ML_CNXK_MAX_MODELS) - return -EINVAL; - xstats_mode_count = cn10k_mldev->xstats.count_per_model[model_id]; - break; - default: - return -EINVAL; - }; - - idx = 0; - for (i = 0; i < nb_ids && idx < xstats_mode_count; i++) { - xs = &cn10k_mldev->xstats.entries[stat_ids[i]]; - if (stat_ids[i] > cn10k_mldev->xstats.count || xs->mode != mode) - continue; - - if (mode == RTE_ML_DEV_XSTATS_MODEL && model_id != xs->obj_idx) { - plt_err("Invalid stats_id[%d] = %d for model_id = %d\n", i, stat_ids[i], - model_id); - return -EINVAL; - } - - switch (xs->fn_id) { - case CNXK_ML_XSTATS_FN_DEVICE: - fn = cn10k_ml_dev_xstat_get; - break; - case CNXK_ML_XSTATS_FN_MODEL: - fn = cn10k_ml_model_xstat_get; - break; - default: - plt_err("Unexpected xstat fn_id = %d", xs->fn_id); - return -EINVAL; - } - - val = fn(dev, xs->obj_idx, xs->type); - if (values) - values[idx] = val; - - idx++; - } - - return idx; -} - -int -cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, - int32_t model_id, const uint16_t stat_ids[], uint16_t nb_ids) -{ - switch (mode) { - case RTE_ML_DEV_XSTATS_DEVICE: - return 
cn10k_ml_device_xstats_reset(dev, stat_ids, nb_ids); - case RTE_ML_DEV_XSTATS_MODEL: - return cn10k_ml_model_xstats_reset(dev, model_id, stat_ids, nb_ids); - }; - - return 0; -} - int cn10k_ml_dev_dump(struct cnxk_ml_dev *cnxk_mldev, FILE *fp) { @@ -1215,7 +695,7 @@ cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name, uin sizeof(struct cn10k_ml_layer_xstats)); /* Update xstats names */ - cn10k_ml_xstats_model_name_update(cnxk_mldev->mldev, idx); + cn10k_ml_xstats_layer_name_update(cnxk_mldev, model_id, layer_id); layer->state = ML_CNXK_LAYER_STATE_LOADED; cnxk_mldev->index_map[idx].model_id = model->model_id; diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 47e7cb12af..8a090a3159 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -298,17 +298,6 @@ int cn10k_ml_dev_stop(struct cnxk_ml_dev *cnxk_mldev); int cn10k_ml_dev_dump(struct cnxk_ml_dev *cnxk_mldev, FILE *fp); int cn10k_ml_dev_selftest(struct cnxk_ml_dev *cnxk_mldev); -int cn10k_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, - int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, - uint32_t size); -int cn10k_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16_t *stat_id, - uint64_t *value); -int cn10k_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, - int32_t model_id, const uint16_t stat_ids[], uint64_t values[], - uint16_t nb_ids); -int cn10k_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, - int32_t model_id, const uint16_t stat_ids[], uint16_t nb_ids); - /* Slow-path ops */ int cn10k_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, struct cnxk_ml_model *model); diff --git a/drivers/ml/cnxk/cnxk_ml_dev.h b/drivers/ml/cnxk/cnxk_ml_dev.h index 1590249abd..3ce9338f1f 100644 --- a/drivers/ml/cnxk/cnxk_ml_dev.h +++ b/drivers/ml/cnxk/cnxk_ml_dev.h @@ -9,6 +9,8 @@ #include 
"cn10k_ml_dev.h" +#include "cnxk_ml_xstats.h" + /* ML command timeout in seconds */ #define ML_CNXK_CMD_TIMEOUT 5 @@ -51,6 +53,9 @@ struct cnxk_ml_dev { /* Configuration state */ enum cnxk_ml_dev_state state; + /* Extended stats data */ + struct cnxk_ml_xstats xstats; + /* Number of models loaded */ uint16_t nb_models_loaded; diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index ffeb3f4452..3719331951 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -117,6 +117,344 @@ cnxk_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_desc return NULL; } +static int +cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev) +{ + uint16_t nb_stats; + uint16_t stat_id; + uint16_t model; + uint16_t layer; + uint16_t i; + + /* Allocate memory for xstats entries. Don't allocate during reconfigure */ + nb_stats = RTE_DIM(device_xstats) + + RTE_DIM(layer_xstats) * ML_CNXK_MAX_MODELS * ML_CNXK_MODEL_MAX_LAYERS; + if (cnxk_mldev->xstats.entries == NULL) + cnxk_mldev->xstats.entries = rte_zmalloc( + "cnxk_ml_xstats", sizeof(struct cnxk_ml_xstats_entry) * nb_stats, + PLT_CACHE_LINE_SIZE); + + if (cnxk_mldev->xstats.entries == NULL) + return -ENOMEM; + + /* Initialize device xstats */ + stat_id = 0; + for (i = 0; i < RTE_DIM(device_xstats); i++) { + cnxk_mldev->xstats.entries[stat_id].map.id = stat_id; + snprintf(cnxk_mldev->xstats.entries[stat_id].map.name, + sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), "%s", + device_xstats[i].name); + + cnxk_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_DEVICE; + cnxk_mldev->xstats.entries[stat_id].group = CNXK_ML_XSTATS_GROUP_DEVICE; + cnxk_mldev->xstats.entries[stat_id].type = device_xstats[i].type; + cnxk_mldev->xstats.entries[stat_id].fn_id = CNXK_ML_XSTATS_FN_DEVICE; + cnxk_mldev->xstats.entries[stat_id].obj_idx = 0; + cnxk_mldev->xstats.entries[stat_id].reset_allowed = device_xstats[i].reset_allowed; + stat_id++; + } + cnxk_mldev->xstats.count_mode_device 
= stat_id; + + /* Initialize model xstats */ + for (model = 0; model < ML_CNXK_MAX_MODELS; model++) { + cnxk_mldev->xstats.offset_for_model[model] = stat_id; + + for (layer = 0; layer < ML_CNXK_MODEL_MAX_LAYERS; layer++) { + cnxk_mldev->xstats.offset_for_layer[model][layer] = stat_id; + + for (i = 0; i < RTE_DIM(layer_xstats); i++) { + cnxk_mldev->xstats.entries[stat_id].map.id = stat_id; + cnxk_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_MODEL; + cnxk_mldev->xstats.entries[stat_id].group = + CNXK_ML_XSTATS_GROUP_LAYER; + cnxk_mldev->xstats.entries[stat_id].type = layer_xstats[i].type; + cnxk_mldev->xstats.entries[stat_id].fn_id = CNXK_ML_XSTATS_FN_MODEL; + cnxk_mldev->xstats.entries[stat_id].obj_idx = model; + cnxk_mldev->xstats.entries[stat_id].layer_id = layer; + cnxk_mldev->xstats.entries[stat_id].reset_allowed = + layer_xstats[i].reset_allowed; + + /* Name of xstat is updated during model load */ + snprintf(cnxk_mldev->xstats.entries[stat_id].map.name, + sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), + "Layer-%u-%u-%s", model, layer, layer_xstats[i].name); + + stat_id++; + } + + cnxk_mldev->xstats.count_per_layer[model][layer] = RTE_DIM(layer_xstats); + } + + cnxk_mldev->xstats.count_per_model[model] = RTE_DIM(layer_xstats); + } + + cnxk_mldev->xstats.count_mode_model = stat_id - cnxk_mldev->xstats.count_mode_device; + cnxk_mldev->xstats.count = stat_id; + + return 0; +} + +static void +cnxk_ml_xstats_uninit(struct cnxk_ml_dev *cnxk_mldev) +{ + rte_free(cnxk_mldev->xstats.entries); + cnxk_mldev->xstats.entries = NULL; + + cnxk_mldev->xstats.count = 0; +} + +static uint64_t +cnxk_ml_dev_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx __rte_unused, + int32_t layer_id __rte_unused, enum cnxk_ml_xstats_type type) +{ + switch (type) { + case nb_models_loaded: + return cnxk_mldev->nb_models_loaded; + case nb_models_unloaded: + return cnxk_mldev->nb_models_unloaded; + case nb_models_started: + return cnxk_mldev->nb_models_started; + case 
nb_models_stopped: + return cnxk_mldev->nb_models_stopped; + default: + return -1; + } + + return 0; +} + +#define ML_AVG_FOREACH_QP(cnxk_mldev, layer, qp_id, str, value, count) \ + do { \ + value = 0; \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \ + value += layer->glow.burst_xstats[qp_id].str##_latency_tot; \ + count += layer->glow.burst_xstats[qp_id].dequeued_count - \ + layer->glow.burst_xstats[qp_id].str##_reset_count; \ + } \ + if (count != 0) \ + value = value / count; \ + } while (0) + +#define ML_MIN_FOREACH_QP(cnxk_mldev, layer, qp_id, str, value, count) \ + do { \ + value = UINT64_MAX; \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \ + value = PLT_MIN(value, layer->glow.burst_xstats[qp_id].str##_latency_min); \ + count += layer->glow.burst_xstats[qp_id].dequeued_count - \ + layer->glow.burst_xstats[qp_id].str##_reset_count; \ + } \ + if (count == 0) \ + value = 0; \ + } while (0) + +#define ML_MAX_FOREACH_QP(cnxk_mldev, layer, qp_id, str, value, count) \ + do { \ + value = 0; \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \ + value = PLT_MAX(value, layer->glow.burst_xstats[qp_id].str##_latency_max); \ + count += layer->glow.burst_xstats[qp_id].dequeued_count - \ + layer->glow.burst_xstats[qp_id].str##_reset_count; \ + } \ + if (count == 0) \ + value = 0; \ + } while (0) + +static uint64_t +cnxk_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx, int32_t layer_id, + enum cnxk_ml_xstats_type type) +{ + struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; + uint16_t rclk_freq; /* MHz */ + uint16_t sclk_freq; /* MHz */ + uint64_t count = 0; + uint64_t value = 0; + uint32_t qp_id; + + model = cnxk_mldev->mldev->data->models[obj_idx]; + if (model == NULL) + return 0; + + if (layer_id >= 0) + layer = &model->layer[layer_id]; + else + return 0; + + switch (type) { + case avg_hw_latency: + ML_AVG_FOREACH_QP(cnxk_mldev, layer, qp_id, hw, 
value, count); + break; + case min_hw_latency: + ML_MIN_FOREACH_QP(cnxk_mldev, layer, qp_id, hw, value, count); + break; + case max_hw_latency: + ML_MAX_FOREACH_QP(cnxk_mldev, layer, qp_id, hw, value, count); + break; + case avg_fw_latency: + ML_AVG_FOREACH_QP(cnxk_mldev, layer, qp_id, fw, value, count); + break; + case min_fw_latency: + ML_MIN_FOREACH_QP(cnxk_mldev, layer, qp_id, fw, value, count); + break; + case max_fw_latency: + ML_MAX_FOREACH_QP(cnxk_mldev, layer, qp_id, fw, value, count); + break; + default: + value = 0; + } + + roc_clk_freq_get(&rclk_freq, &sclk_freq); + if (sclk_freq != 0) /* return in ns */ + value = (value * 1000ULL) / sclk_freq; + + return value; +} + +static int +cnxk_ml_device_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, const uint16_t stat_ids[], + uint16_t nb_ids) +{ + struct cnxk_ml_xstats_entry *xs; + uint16_t nb_stats; + uint16_t stat_id; + uint32_t i; + + if (stat_ids == NULL) + nb_stats = cnxk_mldev->xstats.count_mode_device; + else + nb_stats = nb_ids; + + for (i = 0; i < nb_stats; i++) { + if (stat_ids == NULL) + stat_id = i; + else + stat_id = stat_ids[i]; + + if (stat_id >= cnxk_mldev->xstats.count_mode_device) + return -EINVAL; + + xs = &cnxk_mldev->xstats.entries[stat_id]; + if (!xs->reset_allowed) + continue; + + xs->reset_value = + cnxk_ml_dev_xstat_get(cnxk_mldev, xs->obj_idx, xs->layer_id, xs->type); + } + + return 0; +} + +#define ML_AVG_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, str) \ + do { \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \ + layer->glow.burst_xstats[qp_id].str##_latency_tot = 0; \ + layer->glow.burst_xstats[qp_id].str##_reset_count = \ + layer->glow.burst_xstats[qp_id].dequeued_count; \ + } \ + } while (0) + +#define ML_MIN_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, str) \ + do { \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) \ + layer->glow.burst_xstats[qp_id].str##_latency_min = UINT64_MAX; \ + } while (0) + +#define 
ML_MAX_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, str) \ + do { \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) \ + layer->glow.burst_xstats[qp_id].str##_latency_max = 0; \ + } while (0) + +static void +cnxk_ml_reset_model_stat(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, + enum cnxk_ml_xstats_type type) +{ + struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; + uint16_t layer_id = 0; + uint32_t qp_id; + + model = cnxk_mldev->mldev->data->models[model_id]; + layer = &model->layer[layer_id]; + + switch (type) { + case avg_hw_latency: + ML_AVG_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, hw); + break; + case min_hw_latency: + ML_MIN_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, hw); + break; + case max_hw_latency: + ML_MAX_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, hw); + break; + case avg_fw_latency: + ML_AVG_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, fw); + break; + case min_fw_latency: + ML_MIN_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, fw); + break; + case max_fw_latency: + ML_MAX_RESET_FOREACH_QP(cnxk_mldev, layer, qp_id, fw); + break; + default: + return; + } +} + +static int +cnxk_ml_model_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, int32_t model_id, + const uint16_t stat_ids[], uint16_t nb_ids) +{ + struct cnxk_ml_xstats_entry *xs; + struct cnxk_ml_model *model; + int32_t lcl_model_id = 0; + uint16_t layer_id = 0; + uint16_t start_id; + uint16_t end_id; + int32_t i; + int32_t j; + + for (i = 0; i < ML_CNXK_MAX_MODELS; i++) { + if (model_id == -1) { + model = cnxk_mldev->mldev->data->models[i]; + if (model == NULL) /* skip inactive models */ + continue; + } else { + if (model_id != i) + continue; + + model = cnxk_mldev->mldev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %d\n", model_id); + return -EINVAL; + } + } + + start_id = cnxk_mldev->xstats.offset_for_layer[i][layer_id]; + end_id = cnxk_mldev->xstats.offset_for_layer[i][layer_id] + + cnxk_mldev->xstats.count_per_layer[i][layer_id] - 
1; + + if (stat_ids == NULL) { + for (j = start_id; j <= end_id; j++) { + xs = &cnxk_mldev->xstats.entries[j]; + cnxk_ml_reset_model_stat(cnxk_mldev, i, xs->type); + } + } else { + for (j = 0; j < nb_ids; j++) { + if (stat_ids[j] < start_id || stat_ids[j] > end_id) { + plt_err("Invalid stat_ids[%d] = %d for model_id = %d\n", j, + stat_ids[j], lcl_model_id); + return -EINVAL; + } + xs = &cnxk_mldev->xstats.entries[stat_ids[j]]; + cnxk_ml_reset_model_stat(cnxk_mldev, i, xs->type); + } + } + } + + return 0; +} + static int cnxk_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info) { @@ -296,6 +634,13 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co for (i = 0; i < cnxk_mldev->max_nb_layers; i++) cnxk_mldev->index_map[i].active = false; + /* Initialize xstats */ + ret = cnxk_ml_xstats_init(cnxk_mldev); + if (ret != 0) { + plt_err("Failed to initialize xstats"); + goto error; + } + cnxk_mldev->nb_models_loaded = 0; cnxk_mldev->nb_models_started = 0; cnxk_mldev->nb_models_stopped = 0; @@ -325,6 +670,9 @@ cnxk_ml_dev_close(struct rte_ml_dev *dev) cnxk_mldev = dev->data->dev_private; + /* Un-initialize xstats */ + cnxk_ml_xstats_uninit(cnxk_mldev); + if (cn10k_ml_dev_close(cnxk_mldev) != 0) plt_err("Failed to close CN10K ML Device"); @@ -523,6 +871,190 @@ cnxk_ml_dev_stats_reset(struct rte_ml_dev *dev) } } +static int +cnxk_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, + int32_t model_id, struct rte_ml_dev_xstats_map *xstats_map, + uint32_t size) +{ + struct cnxk_ml_xstats_entry *xs; + struct cnxk_ml_dev *cnxk_mldev; + uint32_t xstats_mode_count; + uint16_t layer_id = 0; + uint32_t idx = 0; + uint32_t i; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + xstats_mode_count = 0; + + switch (mode) { + case RTE_ML_DEV_XSTATS_DEVICE: + xstats_mode_count = cnxk_mldev->xstats.count_mode_device; + break; + case RTE_ML_DEV_XSTATS_MODEL: + if (model_id >= 
ML_CNXK_MAX_MODELS) + break; + xstats_mode_count = cnxk_mldev->xstats.count_per_layer[model_id][layer_id]; + break; + default: + return -EINVAL; + }; + + if (xstats_mode_count > size || xstats_map == NULL) + return xstats_mode_count; + + for (i = 0; i < cnxk_mldev->xstats.count && idx < size; i++) { + xs = &cnxk_mldev->xstats.entries[i]; + if (xs->mode != mode) + continue; + + if (mode == RTE_ML_DEV_XSTATS_MODEL && + (model_id != xs->obj_idx || layer_id != xs->layer_id)) + continue; + + strncpy(xstats_map[idx].name, xs->map.name, RTE_ML_STR_MAX); + xstats_map[idx].id = xs->map.id; + idx++; + } + + return idx; +} + +static int +cnxk_ml_dev_xstats_by_name_get(struct rte_ml_dev *dev, const char *name, uint16_t *stat_id, + uint64_t *value) +{ + struct cnxk_ml_xstats_entry *xs; + struct cnxk_ml_dev *cnxk_mldev; + cnxk_ml_xstats_fn fn; + uint32_t i; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + for (i = 0; i < cnxk_mldev->xstats.count; i++) { + xs = &cnxk_mldev->xstats.entries[i]; + if (strncmp(xs->map.name, name, RTE_ML_STR_MAX) == 0) { + if (stat_id != NULL) + *stat_id = xs->map.id; + + switch (xs->fn_id) { + case CNXK_ML_XSTATS_FN_DEVICE: + fn = cnxk_ml_dev_xstat_get; + break; + case CNXK_ML_XSTATS_FN_MODEL: + fn = cnxk_ml_model_xstat_get; + break; + default: + plt_err("Unexpected xstat fn_id = %d", xs->fn_id); + return -EINVAL; + } + + *value = fn(cnxk_mldev, xs->obj_idx, xs->layer_id, xs->type) - + xs->reset_value; + + return 0; + } + } + + if (stat_id != NULL) + *stat_id = (uint16_t)-1; + + return -EINVAL; +} + +static int +cnxk_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, + const uint16_t stat_ids[], uint64_t values[], uint16_t nb_ids) +{ + struct cnxk_ml_xstats_entry *xs; + struct cnxk_ml_dev *cnxk_mldev; + uint32_t xstats_mode_count; + uint16_t layer_id = 0; + cnxk_ml_xstats_fn fn; + uint64_t val; + uint32_t idx; + uint32_t i; + + if (dev == NULL) + return -EINVAL; + + 
cnxk_mldev = dev->data->dev_private; + xstats_mode_count = 0; + + switch (mode) { + case RTE_ML_DEV_XSTATS_DEVICE: + xstats_mode_count = cnxk_mldev->xstats.count_mode_device; + break; + case RTE_ML_DEV_XSTATS_MODEL: + if (model_id >= ML_CNXK_MAX_MODELS) + return -EINVAL; + xstats_mode_count = cnxk_mldev->xstats.count_per_layer[model_id][layer_id]; + break; + default: + return -EINVAL; + }; + + idx = 0; + for (i = 0; i < nb_ids && idx < xstats_mode_count; i++) { + xs = &cnxk_mldev->xstats.entries[stat_ids[i]]; + if (stat_ids[i] > cnxk_mldev->xstats.count || xs->mode != mode) + continue; + + if (mode == RTE_ML_DEV_XSTATS_MODEL && + (model_id != xs->obj_idx || layer_id != xs->layer_id)) { + plt_err("Invalid stats_id[%d] = %d for model_id = %d\n", i, stat_ids[i], + model_id); + return -EINVAL; + } + + switch (xs->fn_id) { + case CNXK_ML_XSTATS_FN_DEVICE: + fn = cnxk_ml_dev_xstat_get; + break; + case CNXK_ML_XSTATS_FN_MODEL: + fn = cnxk_ml_model_xstat_get; + break; + default: + plt_err("Unexpected xstat fn_id = %d", xs->fn_id); + return -EINVAL; + } + + val = fn(cnxk_mldev, xs->obj_idx, xs->layer_id, xs->type); + if (values) + values[idx] = val; + + idx++; + } + + return idx; +} + +static int +cnxk_ml_dev_xstats_reset(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, int32_t model_id, + const uint16_t stat_ids[], uint16_t nb_ids) +{ + struct cnxk_ml_dev *cnxk_mldev; + + if (dev == NULL) + return -EINVAL; + + cnxk_mldev = dev->data->dev_private; + + switch (mode) { + case RTE_ML_DEV_XSTATS_DEVICE: + return cnxk_ml_device_xstats_reset(cnxk_mldev, stat_ids, nb_ids); + case RTE_ML_DEV_XSTATS_MODEL: + return cnxk_ml_model_xstats_reset(cnxk_mldev, model_id, stat_ids, nb_ids); + }; + + return 0; +} + static int cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, uint16_t *model_id) { @@ -808,10 +1340,10 @@ struct rte_ml_dev_ops cnxk_ml_ops = { /* Stats ops */ .dev_stats_get = cnxk_ml_dev_stats_get, .dev_stats_reset = cnxk_ml_dev_stats_reset, 
- .dev_xstats_names_get = cn10k_ml_dev_xstats_names_get, - .dev_xstats_by_name_get = cn10k_ml_dev_xstats_by_name_get, - .dev_xstats_get = cn10k_ml_dev_xstats_get, - .dev_xstats_reset = cn10k_ml_dev_xstats_reset, + .dev_xstats_names_get = cnxk_ml_dev_xstats_names_get, + .dev_xstats_by_name_get = cnxk_ml_dev_xstats_by_name_get, + .dev_xstats_get = cnxk_ml_dev_xstats_get, + .dev_xstats_reset = cnxk_ml_dev_xstats_reset, /* Model ops */ .model_load = cnxk_ml_model_load, diff --git a/drivers/ml/cnxk/cnxk_ml_xstats.h b/drivers/ml/cnxk/cnxk_ml_xstats.h index 0d405679ca..5e02bb876c 100644 --- a/drivers/ml/cnxk/cnxk_ml_xstats.h +++ b/drivers/ml/cnxk/cnxk_ml_xstats.h @@ -7,6 +7,8 @@ #include "cnxk_ml_io.h" +struct cnxk_ml_dev; + /* Extended stats types enum */ enum cnxk_ml_xstats_type { /* Number of models loaded */ @@ -58,9 +60,21 @@ enum cnxk_ml_xstats_fn_type { CNXK_ML_XSTATS_FN_MODEL, }; +/* Extended stats group */ +enum cnxk_ml_xstats_group { + /* Device stats */ + CNXK_ML_XSTATS_GROUP_DEVICE, + + /* Model stats */ + CNXK_ML_XSTATS_GROUP_MODEL, + + /* Layer stats */ + CNXK_ML_XSTATS_GROUP_LAYER, +}; + /* Function pointer to get xstats for a type */ -typedef uint64_t (*cnxk_ml_xstats_fn)(struct rte_ml_dev *cnxk_mldev, uint16_t obj_idx, - enum cnxk_ml_xstats_type stat); +typedef uint64_t (*cnxk_ml_xstats_fn)(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx, + int32_t layer_id, enum cnxk_ml_xstats_type stat); /* Extended stats entry structure */ struct cnxk_ml_xstats_entry { @@ -70,6 +84,9 @@ struct cnxk_ml_xstats_entry { /* xstats mode, device or model */ enum rte_ml_dev_xstats_mode mode; + /* xstats group */ + enum cnxk_ml_xstats_group group; + /* Type of xstats */ enum cnxk_ml_xstats_type type; From patchwork Wed Sep 20 07:25:08 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131691 X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi To: Srikanth Yalavarthi CC: , , , Subject: [PATCH v2 17/34] ml/cnxk: update fast path functions Date: Wed, 20 Sep 2023 00:25:08 -0700 Message-ID: <20230920072528.14185-18-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> Implemented the cnxk-layer fast-path functions and added support for model-specific fast-path functions; the cnxk-layer functions invoke the model-specific handlers. Added support for model-specific poll-handling functions and updated the internal inference sync function. Dropped the use of rte_ml_op as an argument and updated the function arguments so the function can be used as a callback by the TVM HW runtime.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.h | 5 - drivers/ml/cnxk/cn10k_ml_ops.c | 241 ++++++++------------------------ drivers/ml/cnxk/cn10k_ml_ops.h | 13 +- drivers/ml/cnxk/cnxk_ml_model.h | 14 ++ drivers/ml/cnxk/cnxk_ml_ops.c | 128 +++++++++++++++++ drivers/ml/cnxk/cnxk_ml_ops.h | 7 + 6 files changed, 216 insertions(+), 192 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index bde9d08901..94a94d996f 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -143,11 +143,6 @@ struct cn10k_ml_dev { /* JCMD enqueue function handler */ bool (*ml_jcmdq_enqueue)(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd); - - /* Poll handling function pointers */ - void (*set_poll_addr)(struct cnxk_ml_req *req); - void (*set_poll_ptr)(struct cnxk_ml_req *req); - uint64_t (*get_poll_ptr)(struct cnxk_ml_req *req); }; uint64_t cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw); diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index f1431b89a2..7d809d25ae 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -69,24 +69,12 @@ static const struct cn10k_ml_stype_db_driver { {ML_DRIVER_ERR_FW_ERROR, "UNKNOWN FIRMWARE ERROR"}, }; -static inline void +__rte_hot void cn10k_ml_set_poll_addr(struct cnxk_ml_req *req) { req->status = &req->cn10k_req.status; } -static inline void -cn10k_ml_set_poll_ptr(struct cnxk_ml_req *req) -{ - plt_write64(ML_CNXK_POLL_JOB_START, req->status); -} - -static inline uint64_t -cn10k_ml_get_poll_ptr(struct cnxk_ml_req *req) -{ - return plt_read64(req->status); -} - void cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *qp) { @@ -181,7 +169,7 @@ cn10k_ml_prep_sp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_l static __rte_always_inline void cn10k_ml_prep_fp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_req *req, - struct rte_ml_op *op) + uint16_t index, void 
*input, void *output, uint16_t nb_batches) { struct cn10k_ml_dev *cn10k_mldev; @@ -189,17 +177,17 @@ cn10k_ml_prep_fp_job_descriptor(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_r req->cn10k_req.jd.hdr.jce.w0.u64 = 0; req->cn10k_req.jd.hdr.jce.w1.u64 = PLT_U64_CAST(req->status); - req->cn10k_req.jd.hdr.model_id = op->model_id; + req->cn10k_req.jd.hdr.model_id = index; req->cn10k_req.jd.hdr.job_type = ML_CN10K_JOB_TYPE_MODEL_RUN; req->cn10k_req.jd.hdr.fp_flags = ML_FLAGS_POLL_COMPL; req->cn10k_req.jd.hdr.sp_flags = 0x0; req->cn10k_req.jd.hdr.result = roc_ml_addr_ap2mlip(&cn10k_mldev->roc, &req->cn10k_req.result); req->cn10k_req.jd.model_run.input_ddr_addr = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, op->input[0]->addr)); + PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, input)); req->cn10k_req.jd.model_run.output_ddr_addr = - PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, op->output[0]->addr)); - req->cn10k_req.jd.model_run.num_batches = op->nb_batches; + PLT_U64_CAST(roc_ml_addr_ap2mlip(&cn10k_mldev->roc, output)); + req->cn10k_req.jd.model_run.num_batches = nb_batches; } static void @@ -236,30 +224,15 @@ cn10k_ml_xstats_layer_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model static int cn10k_ml_cache_model_data(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer) { - struct rte_ml_buff_seg seg[2]; - struct rte_ml_buff_seg *inp; - struct rte_ml_buff_seg *out; - struct rte_ml_op op; - char str[RTE_MEMZONE_NAMESIZE]; const struct plt_memzone *mz; uint64_t isize = 0; uint64_t osize = 0; int ret = 0; - uint32_t i; - - inp = &seg[0]; - out = &seg[1]; /* Create input and output buffers. 
*/ - for (i = 0; i < layer->info.nb_inputs; i++) - isize += layer->info.input[i].sz_q; - - for (i = 0; i < layer->info.nb_outputs; i++) - osize += layer->info.output[i].sz_q; - - isize = layer->batch_size * isize; - osize = layer->batch_size * osize; + isize = layer->info.total_input_sz_q; + osize = layer->info.total_output_sz_q; snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", "ml_dummy_io", layer->index); mz = plt_memzone_reserve_aligned(str, isize + osize, 0, ML_CN10K_ALIGN_SIZE); @@ -267,25 +240,9 @@ cn10k_ml_cache_model_data(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer * return -ENOMEM; memset(mz->addr, 0, isize + osize); - seg[0].addr = mz->addr; - seg[0].iova_addr = mz->iova; - seg[0].length = isize; - seg[0].next = NULL; - - seg[1].addr = PLT_PTR_ADD(mz->addr, isize); - seg[1].iova_addr = mz->iova + isize; - seg[1].length = osize; - seg[1].next = NULL; - - op.model_id = layer->index; - op.nb_batches = layer->batch_size; - op.mempool = NULL; - - op.input = &inp; - op.output = &out; - memset(layer->glow.req, 0, sizeof(struct cnxk_ml_req)); - ret = cn10k_ml_inference_sync(cnxk_mldev, &op); + ret = cn10k_ml_inference_sync(cnxk_mldev, layer->index, mz->addr, + PLT_PTR_ADD(mz->addr, isize), 1); plt_memzone_free(mz); return ret; @@ -350,13 +307,8 @@ cn10k_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_c else cn10k_mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_lf; - /* Set polling function pointers */ - cn10k_mldev->set_poll_addr = cn10k_ml_set_poll_addr; - cn10k_mldev->set_poll_ptr = cn10k_ml_set_poll_ptr; - cn10k_mldev->get_poll_ptr = cn10k_ml_get_poll_ptr; - - cnxk_mldev->mldev->enqueue_burst = cn10k_ml_enqueue_burst; - cnxk_mldev->mldev->dequeue_burst = cn10k_ml_dequeue_burst; + cnxk_mldev->mldev->enqueue_burst = cnxk_ml_enqueue_burst; + cnxk_mldev->mldev->dequeue_burst = cnxk_ml_dequeue_burst; cnxk_mldev->mldev->op_error_get = cn10k_ml_op_error_get; return 0; @@ -749,6 +701,12 @@ cn10k_ml_model_load(struct cnxk_ml_dev 
*cnxk_mldev, struct rte_ml_model_params * cn10k_ml_model_info_set(cnxk_mldev, model, &model->layer[0].info, &model->glow.metadata); + /* Set fast-path functions */ + model->enqueue_single = cn10k_ml_enqueue_single; + model->result_update = cn10k_ml_result_update; + model->set_error_code = cn10k_ml_set_error_code; + model->set_poll_addr = cn10k_ml_set_poll_addr; + return 0; } @@ -1144,26 +1102,8 @@ cn10k_ml_model_params_update(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_mode return 0; } -static __rte_always_inline void -queue_index_advance(uint64_t *index, uint64_t nb_desc) -{ - *index = (*index + 1) % nb_desc; -} - -static __rte_always_inline uint64_t -queue_pending_count(uint64_t head, uint64_t tail, uint64_t nb_desc) -{ - return (nb_desc + head - tail) % nb_desc; -} - -static __rte_always_inline uint64_t -queue_free_count(uint64_t head, uint64_t tail, uint64_t nb_desc) -{ - return nb_desc - queue_pending_count(head, tail, nb_desc) - 1; -} - -static __rte_always_inline void -cn10k_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, struct cnxk_ml_req *req) +__rte_hot void +cn10k_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, void *request) { union cn10k_ml_error_code *error_code; struct cn10k_ml_layer_xstats *xstats; @@ -1171,6 +1111,7 @@ cn10k_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, struct cnxk_ml struct cn10k_ml_result *result; struct cnxk_ml_model *model; struct cnxk_ml_layer *layer; + struct cnxk_ml_req *req; struct cnxk_ml_qp *qp; struct rte_ml_op *op; uint64_t hw_latency; @@ -1178,9 +1119,9 @@ cn10k_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, struct cnxk_ml uint16_t model_id; uint16_t layer_id; + req = (struct cnxk_ml_req *)request; result = &req->cn10k_req.result; op = req->op; - if (likely(result->error_code == 0)) { model_id = cnxk_mldev->index_map[op->model_id].model_id; layer_id = cnxk_mldev->index_map[op->model_id].layer_id; @@ -1247,119 +1188,48 @@ cn10k_ml_result_update(struct cnxk_ml_dev 
*cnxk_mldev, int qp_id, struct cnxk_ml op->user_ptr = result->user_ptr; } -__rte_hot uint16_t -cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, - uint16_t nb_ops) +__rte_hot void +cn10k_ml_set_error_code(struct cnxk_ml_req *req, uint64_t etype, uint64_t stype) +{ + union cn10k_ml_error_code *error_code; + + error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; + error_code->s.etype = etype; + error_code->s.stype = stype; +} + +__rte_hot bool +cn10k_ml_enqueue_single(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op, uint16_t layer_id, + struct cnxk_ml_qp *qp, uint64_t head) { union cn10k_ml_error_code *error_code; struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; struct cnxk_ml_queue *queue; struct cnxk_ml_req *req; - struct cnxk_ml_qp *qp; - struct rte_ml_op *op; - - uint16_t count; - uint64_t head; - bool enqueued; - cnxk_mldev = dev->data->dev_private; cn10k_mldev = &cnxk_mldev->cn10k_mldev; - qp = dev->data->queue_pairs[qp_id]; queue = &qp->queue; - - head = queue->head; - nb_ops = PLT_MIN(nb_ops, queue_free_count(head, queue->tail, qp->nb_desc)); - count = 0; - - if (unlikely(nb_ops == 0)) - return 0; - -enqueue_req: - op = ops[count]; req = &queue->reqs[head]; - cn10k_mldev->set_poll_addr(req); - cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, op); + model = cnxk_mldev->mldev->data->models[op->model_id]; + model->set_poll_addr(req); + cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, model->layer[layer_id].index, + op->input[0]->addr, op->output[0]->addr, op->nb_batches); memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; error_code->s.etype = ML_ETYPE_UNKNOWN; req->cn10k_req.result.user_ptr = op->user_ptr; - cn10k_mldev->set_poll_ptr(req); - enqueued = cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jcmd); - if (unlikely(!enqueued)) - goto 
jcmdq_full; + cnxk_ml_set_poll_ptr(req); + if (unlikely(!cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jcmd))) + return false; req->timeout = plt_tsc_cycles() + queue->wait_cycles; req->op = op; - queue_index_advance(&head, qp->nb_desc); - count++; - - if (count < nb_ops) - goto enqueue_req; - -jcmdq_full: - queue->head = head; - qp->stats.enqueued_count += count; - - return count; -} - -__rte_hot uint16_t -cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, - uint16_t nb_ops) -{ - union cn10k_ml_error_code *error_code; - struct cn10k_ml_dev *cn10k_mldev; - struct cnxk_ml_dev *cnxk_mldev; - struct cnxk_ml_queue *queue; - struct cnxk_ml_req *req; - struct cnxk_ml_qp *qp; - - uint64_t status; - uint16_t count; - uint64_t tail; - - cnxk_mldev = dev->data->dev_private; - cn10k_mldev = &cnxk_mldev->cn10k_mldev; - qp = dev->data->queue_pairs[qp_id]; - queue = &qp->queue; - - tail = queue->tail; - nb_ops = PLT_MIN(nb_ops, queue_pending_count(queue->head, tail, qp->nb_desc)); - count = 0; - - if (unlikely(nb_ops == 0)) - goto empty_or_active; - -dequeue_req: - req = &queue->reqs[tail]; - status = cn10k_mldev->get_poll_ptr(req); - if (unlikely(status != ML_CNXK_POLL_JOB_FINISH)) { - if (plt_tsc_cycles() < req->timeout) { - goto empty_or_active; - } else { /* Timeout, set indication of driver error */ - error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; - error_code->s.etype = ML_ETYPE_DRIVER; - } - } - - cn10k_ml_result_update(cnxk_mldev, qp_id, req); - ops[count] = req->op; - - queue_index_advance(&tail, qp->nb_desc); - count++; - - if (count < nb_ops) - goto dequeue_req; - -empty_or_active: - queue->tail = tail; - - return count; + return true; } __rte_hot int @@ -1396,41 +1266,48 @@ cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op, struct rte_m } __rte_hot int -cn10k_ml_inference_sync(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op) +cn10k_ml_inference_sync(void 
*device, uint16_t index, void *input, void *output, + uint16_t nb_batches) { union cn10k_ml_error_code *error_code; struct cn10k_ml_dev *cn10k_mldev; + struct cnxk_ml_dev *cnxk_mldev; struct cnxk_ml_model *model; struct cnxk_ml_layer *layer; struct cnxk_ml_req *req; + struct rte_ml_op op; uint16_t model_id; uint16_t layer_id; bool timeout; int ret = 0; + cnxk_mldev = (struct cnxk_ml_dev *)device; cn10k_mldev = &cnxk_mldev->cn10k_mldev; - model_id = cnxk_mldev->index_map[op->model_id].model_id; - layer_id = cnxk_mldev->index_map[op->model_id].layer_id; + model_id = cnxk_mldev->index_map[index].model_id; + layer_id = cnxk_mldev->index_map[index].layer_id; model = cnxk_mldev->mldev->data->models[model_id]; layer = &model->layer[layer_id]; req = layer->glow.req; + op.model_id = index; + op.impl_opaque = 0; + cn10k_ml_set_poll_addr(req); - cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, op); + cn10k_ml_prep_fp_job_descriptor(cnxk_mldev, req, index, input, output, nb_batches); memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; error_code->s.etype = ML_ETYPE_UNKNOWN; - req->cn10k_req.result.user_ptr = op->user_ptr; + req->cn10k_req.result.user_ptr = NULL; - cn10k_mldev->set_poll_ptr(req); + cnxk_ml_set_poll_ptr(req); req->cn10k_req.jcmd.w1.s.jobptr = PLT_U64_CAST(&req->cn10k_req.jd); timeout = true; req->timeout = plt_tsc_cycles() + ML_CNXK_CMD_TIMEOUT * plt_tsc_hz(); do { if (cn10k_mldev->ml_jcmdq_enqueue(&cn10k_mldev->roc, &req->cn10k_req.jcmd)) { - req->op = op; + req->op = &op; timeout = false; break; } @@ -1443,7 +1320,7 @@ cn10k_ml_inference_sync(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op) timeout = true; do { - if (cn10k_mldev->get_poll_ptr(req) == ML_CNXK_POLL_JOB_FINISH) { + if (cnxk_ml_get_poll_ptr(req) == ML_CNXK_POLL_JOB_FINISH) { timeout = false; break; } diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 
8a090a3159..3e75cae65a 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -13,6 +13,7 @@ struct cnxk_ml_dev; struct cnxk_ml_qp; struct cnxk_ml_model; +struct cnxk_ml_req; /* Firmware version string length */ #define MLDEV_FIRMWARE_VERSION_LENGTH 32 @@ -308,13 +309,15 @@ int cn10k_ml_model_params_update(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_ void *buffer); /* Fast-path ops */ -__rte_hot uint16_t cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, - struct rte_ml_op **ops, uint16_t nb_ops); -__rte_hot uint16_t cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, - struct rte_ml_op **ops, uint16_t nb_ops); +__rte_hot bool cn10k_ml_enqueue_single(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op, + uint16_t layer_id, struct cnxk_ml_qp *qp, uint64_t head); __rte_hot int cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op, struct rte_ml_op_error *error); -__rte_hot int cn10k_ml_inference_sync(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op); +__rte_hot int cn10k_ml_inference_sync(void *device, uint16_t index, void *input, void *output, + uint16_t nb_batches); +__rte_hot void cn10k_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, void *request); +__rte_hot void cn10k_ml_set_error_code(struct cnxk_ml_req *req, uint64_t etype, uint64_t stype); +__rte_hot void cn10k_ml_set_poll_addr(struct cnxk_ml_req *req); /* Misc ops */ void cn10k_ml_qp_initialize(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_qp *qp); diff --git a/drivers/ml/cnxk/cnxk_ml_model.h b/drivers/ml/cnxk/cnxk_ml_model.h index 66d979dd3c..f618e5aa5f 100644 --- a/drivers/ml/cnxk/cnxk_ml_model.h +++ b/drivers/ml/cnxk/cnxk_ml_model.h @@ -15,6 +15,8 @@ struct cnxk_ml_dev; struct cnxk_ml_model; +struct cnxk_ml_qp; +struct cnxk_ml_req; /* Model state */ enum cnxk_ml_model_state { @@ -70,6 +72,12 @@ struct cnxk_ml_layer { struct cn10k_ml_layer_data glow; }; +typedef bool (*enqueue_single_t)(struct cnxk_ml_dev *cnxk_mldev, 
struct rte_ml_op *op, + uint16_t layer_id, struct cnxk_ml_qp *qp, uint64_t head); +typedef void (*result_update_t)(struct cnxk_ml_dev *cnxk_mldev, int qp_id, void *request); +typedef void (*set_error_code_t)(struct cnxk_ml_req *req, uint64_t etype, uint64_t stype); +typedef void (*set_poll_addr_t)(struct cnxk_ml_req *req); + /* Model Object */ struct cnxk_ml_model { /* Device reference */ @@ -106,6 +114,12 @@ struct cnxk_ml_model { /* Spinlock, used to update model state */ plt_spinlock_t lock; + + /* Fast-path functions */ + enqueue_single_t enqueue_single; + result_update_t result_update; + set_error_code_t set_error_code; + set_poll_addr_t set_poll_addr; }; void cnxk_ml_model_dump(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, FILE *fp); diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 3719331951..923e603e8e 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -17,6 +17,18 @@ /* ML model macros */ #define CNXK_ML_MODEL_MEMZONE_NAME "ml_cnxk_model_mz" +__rte_hot void +cnxk_ml_set_poll_ptr(struct cnxk_ml_req *req) +{ + plt_write64(ML_CNXK_POLL_JOB_START, req->status); +} + +__rte_hot uint64_t +cnxk_ml_get_poll_ptr(struct cnxk_ml_req *req) +{ + return plt_read64(req->status); +} + static void qp_memzone_name_get(char *name, int size, int dev_id, int qp_id) { @@ -1323,6 +1335,122 @@ cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_b return 0; } +static __rte_always_inline void +queue_index_advance(uint64_t *index, uint64_t nb_desc) +{ + *index = (*index + 1) % nb_desc; +} + +static __rte_always_inline uint64_t +queue_pending_count(uint64_t head, uint64_t tail, uint64_t nb_desc) +{ + return (nb_desc + head - tail) % nb_desc; +} + +static __rte_always_inline uint64_t +queue_free_count(uint64_t head, uint64_t tail, uint64_t nb_desc) +{ + return nb_desc - queue_pending_count(head, tail, nb_desc) - 1; +} + +__rte_hot uint16_t +cnxk_ml_enqueue_burst(struct rte_ml_dev 
*dev, uint16_t qp_id, struct rte_ml_op **ops, + uint16_t nb_ops) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + struct cnxk_ml_queue *queue; + struct cnxk_ml_qp *qp; + struct rte_ml_op *op; + + uint16_t layer_id = 0; + uint16_t count; + uint64_t head; + + cnxk_mldev = dev->data->dev_private; + qp = dev->data->queue_pairs[qp_id]; + queue = &qp->queue; + + head = queue->head; + nb_ops = PLT_MIN(nb_ops, queue_free_count(head, queue->tail, qp->nb_desc)); + count = 0; + + if (unlikely(nb_ops == 0)) + return 0; + +enqueue_req: + op = ops[count]; + model = cnxk_mldev->mldev->data->models[op->model_id]; + + if (unlikely(!model->enqueue_single(cnxk_mldev, op, layer_id, qp, head))) + goto jcmdq_full; + + queue_index_advance(&head, qp->nb_desc); + count++; + + if (count < nb_ops) + goto enqueue_req; + +jcmdq_full: + queue->head = head; + qp->stats.enqueued_count += count; + + return count; +} + +__rte_hot uint16_t +cnxk_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, + uint16_t nb_ops) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_queue *queue; + struct cnxk_ml_model *model; + struct cnxk_ml_req *req; + struct cnxk_ml_qp *qp; + + uint64_t status; + uint16_t count; + uint64_t tail; + + cnxk_mldev = dev->data->dev_private; + qp = dev->data->queue_pairs[qp_id]; + queue = &qp->queue; + + tail = queue->tail; + nb_ops = PLT_MIN(nb_ops, queue_pending_count(queue->head, tail, qp->nb_desc)); + count = 0; + + if (unlikely(nb_ops == 0)) + goto empty_or_active; + +dequeue_req: + + req = &queue->reqs[tail]; + model = cnxk_mldev->mldev->data->models[req->op->model_id]; + + status = cnxk_ml_get_poll_ptr(req); + if (unlikely(status != ML_CNXK_POLL_JOB_FINISH)) { + if (plt_tsc_cycles() < req->timeout) + goto empty_or_active; + else /* Timeout, set indication of driver error */ + model->set_error_code(req, ML_ETYPE_DRIVER, 0); + } + + model->result_update(cnxk_mldev, qp->id, req); + + ops[count] = req->op; + 
queue_index_advance(&tail, qp->nb_desc); + count++; + + if (count < nb_ops) + goto dequeue_req; + +empty_or_active: + queue->tail = tail; + + return count; +} + struct rte_ml_dev_ops cnxk_ml_ops = { /* Device control ops */ .dev_info_get = cnxk_ml_dev_info_get, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index d27ca0d0cb..d0c126f34b 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -65,4 +65,11 @@ extern struct rte_ml_dev_ops cnxk_ml_ops; int cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id); int cnxk_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id); +__rte_hot uint16_t cnxk_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, + struct rte_ml_op **ops, uint16_t nb_ops); +__rte_hot uint16_t cnxk_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, + struct rte_ml_op **ops, uint16_t nb_ops); +__rte_hot void cnxk_ml_set_poll_ptr(struct cnxk_ml_req *req); +__rte_hot uint64_t cnxk_ml_get_poll_ptr(struct cnxk_ml_req *req); + #endif /* _CNXK_ML_OPS_H_ */
From patchwork Wed Sep 20 07:25:09 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131693
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
Subject: [PATCH v2 18/34] ml/cnxk: move error handling to cnxk layer
Date: Wed, 20 Sep 2023 00:25:09 -0700
Message-ID: <20230920072528.14185-19-syalavarthi@marvell.com>

Move error type structures to cnxk layer. cn10k layer to handle fw and hw error sub-types only.

Signed-off-by: Srikanth Yalavarthi
--- drivers/ml/cnxk/cn10k_ml_dev.h | 41 ++++++--------- drivers/ml/cnxk/cn10k_ml_ops.c | 93 +++++++++++++--------------------- drivers/ml/cnxk/cnxk_ml_dev.c | 8 +++ drivers/ml/cnxk/cnxk_ml_dev.h | 18 +++++++ drivers/ml/cnxk/cnxk_ml_ops.c | 2 +- 5 files changed, 78 insertions(+), 84 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index 94a94d996f..2e7eb6c9ef 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -52,38 +52,27 @@ struct cnxk_ml_dev; struct cnxk_ml_req; struct cnxk_ml_qp; -/* Error types enumeration */ -enum cn10k_ml_error_etype { - /* 0x0 */ ML_ETYPE_NO_ERROR = 0, /* No error */ - /* 0x1 */ ML_ETYPE_FW_NONFATAL, /* Firmware non-fatal error */ - /* 0x2 */ ML_ETYPE_HW_NONFATAL, /* Hardware non-fatal error */ - /* 0x3 */ ML_ETYPE_HW_FATAL, /* Hardware fatal error */ - /* 0x4 */ ML_ETYPE_HW_WARNING, /* Hardware warning */ - /* 0x5 */ ML_ETYPE_DRIVER, /* Driver specific error */ - /* 0x6 */ ML_ETYPE_UNKNOWN, /* Unknown error */ -}; - /* Firmware non-fatal error sub-type */ enum cn10k_ml_error_stype_fw_nf { - /* 0x0 */ ML_FW_ERR_NOERR = 0, /* No error */ - /* 0x1 */ ML_FW_ERR_UNLOAD_ID_NOT_FOUND, /* Model ID not found during load */ - /* 0x2 */ ML_FW_ERR_LOAD_LUT_OVERFLOW, /* Lookup table overflow at load */ - /* 0x3 */ ML_FW_ERR_ID_IN_USE, /* Model ID already in use */ - /* 0x4 */ ML_FW_ERR_INVALID_TILEMASK, /* Invalid OCM tilemask */ - /* 0x5 */ ML_FW_ERR_RUN_LUT_OVERFLOW, /*
Lookup table overflow at run */ - /* 0x6 */ ML_FW_ERR_RUN_ID_NOT_FOUND, /* Model ID not found during run */ - /* 0x7 */ ML_FW_ERR_COMMAND_NOTSUP, /* Unsupported command */ - /* 0x8 */ ML_FW_ERR_DDR_ADDR_RANGE, /* DDR address out of range */ - /* 0x9 */ ML_FW_ERR_NUM_BATCHES_INVALID, /* Invalid number of batches */ - /* 0xA */ ML_FW_ERR_INSSYNC_TIMEOUT, /* INS sync timeout */ + /* 0x0 */ ML_CN10K_FW_ERR_NOERR = 0, /* No error */ + /* 0x1 */ ML_CN10K_FW_ERR_UNLOAD_ID_NOT_FOUND, /* Model ID not found during load */ + /* 0x2 */ ML_CN10K_FW_ERR_LOAD_LUT_OVERFLOW, /* Lookup table overflow at load */ + /* 0x3 */ ML_CN10K_FW_ERR_ID_IN_USE, /* Model ID already in use */ + /* 0x4 */ ML_CN10K_FW_ERR_INVALID_TILEMASK, /* Invalid OCM tilemask */ + /* 0x5 */ ML_CN10K_FW_ERR_RUN_LUT_OVERFLOW, /* Lookup table overflow at run */ + /* 0x6 */ ML_CN10K_FW_ERR_RUN_ID_NOT_FOUND, /* Model ID not found during run */ + /* 0x7 */ ML_CN10K_FW_ERR_COMMAND_NOTSUP, /* Unsupported command */ + /* 0x8 */ ML_CN10K_FW_ERR_DDR_ADDR_RANGE, /* DDR address out of range */ + /* 0x9 */ ML_CN10K_FW_ERR_NUM_BATCHES_INVALID, /* Invalid number of batches */ + /* 0xA */ ML_CN10K_FW_ERR_INSSYNC_TIMEOUT, /* INS sync timeout */ }; /* Driver error sub-type */ enum cn10k_ml_error_stype_driver { - /* 0x0 */ ML_DRIVER_ERR_NOERR = 0, /* No error */ - /* 0x1 */ ML_DRIVER_ERR_UNKNOWN, /* Unable to determine error sub-type */ - /* 0x2 */ ML_DRIVER_ERR_EXCEPTION, /* Firmware exception */ - /* 0x3 */ ML_DRIVER_ERR_FW_ERROR, /* Unknown firmware error */ + /* 0x0 */ ML_CN10K_DRIVER_ERR_NOERR = 0, /* No error */ + /* 0x1 */ ML_CN10K_DRIVER_ERR_UNKNOWN, /* Unable to determine error sub-type */ + /* 0x2 */ ML_CN10K_DRIVER_ERR_EXCEPTION, /* Firmware exception */ + /* 0x3 */ ML_CN10K_DRIVER_ERR_FW_ERROR, /* Unknown firmware error */ }; /* Error structure */ diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 7d809d25ae..daeb3b712c 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ 
b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -26,47 +26,27 @@ #define ML_FLAGS_POLL_COMPL BIT(0) #define ML_FLAGS_SSO_COMPL BIT(1) -/* Error message length */ -#define ERRMSG_LEN 32 - -/* Error type database */ -static const struct cn10k_ml_etype_db { - enum cn10k_ml_error_etype etype; - char name[ERRMSG_LEN]; -} ml_etype_db[] = { - {ML_ETYPE_NO_ERROR, "NO_ERROR"}, {ML_ETYPE_FW_NONFATAL, "FW_NON_FATAL"}, - {ML_ETYPE_HW_NONFATAL, "HW_NON_FATAL"}, {ML_ETYPE_HW_FATAL, "HW_FATAL"}, - {ML_ETYPE_HW_WARNING, "HW_WARNING"}, {ML_ETYPE_DRIVER, "DRIVER_ERROR"}, - {ML_ETYPE_UNKNOWN, "UNKNOWN_ERROR"}, -}; - /* Hardware non-fatal error subtype database */ -static const struct cn10k_ml_stype_db_hw_nf { - enum cn10k_ml_error_stype_fw_nf stype; - char msg[ERRMSG_LEN]; -} ml_stype_db_hw_nf[] = { - {ML_FW_ERR_NOERR, "NO ERROR"}, - {ML_FW_ERR_UNLOAD_ID_NOT_FOUND, "UNLOAD MODEL ID NOT FOUND"}, - {ML_FW_ERR_LOAD_LUT_OVERFLOW, "LOAD LUT OVERFLOW"}, - {ML_FW_ERR_ID_IN_USE, "MODEL ID IN USE"}, - {ML_FW_ERR_INVALID_TILEMASK, "INVALID TILEMASK"}, - {ML_FW_ERR_RUN_LUT_OVERFLOW, "RUN LUT OVERFLOW"}, - {ML_FW_ERR_RUN_ID_NOT_FOUND, "RUN MODEL ID NOT FOUND"}, - {ML_FW_ERR_COMMAND_NOTSUP, "COMMAND NOT SUPPORTED"}, - {ML_FW_ERR_DDR_ADDR_RANGE, "DDR ADDRESS OUT OF RANGE"}, - {ML_FW_ERR_NUM_BATCHES_INVALID, "INVALID BATCHES"}, - {ML_FW_ERR_INSSYNC_TIMEOUT, "INSSYNC TIMEOUT"}, +static struct cnxk_ml_error_db ml_stype_db_hw_nf[] = { + {ML_CN10K_FW_ERR_NOERR, "NO ERROR"}, + {ML_CN10K_FW_ERR_UNLOAD_ID_NOT_FOUND, "UNLOAD MODEL ID NOT FOUND"}, + {ML_CN10K_FW_ERR_LOAD_LUT_OVERFLOW, "LOAD LUT OVERFLOW"}, + {ML_CN10K_FW_ERR_ID_IN_USE, "MODEL ID IN USE"}, + {ML_CN10K_FW_ERR_INVALID_TILEMASK, "INVALID TILEMASK"}, + {ML_CN10K_FW_ERR_RUN_LUT_OVERFLOW, "RUN LUT OVERFLOW"}, + {ML_CN10K_FW_ERR_RUN_ID_NOT_FOUND, "RUN MODEL ID NOT FOUND"}, + {ML_CN10K_FW_ERR_COMMAND_NOTSUP, "COMMAND NOT SUPPORTED"}, + {ML_CN10K_FW_ERR_DDR_ADDR_RANGE, "DDR ADDRESS OUT OF RANGE"}, + {ML_CN10K_FW_ERR_NUM_BATCHES_INVALID, "INVALID BATCHES"}, + 
{ML_CN10K_FW_ERR_INSSYNC_TIMEOUT, "INSSYNC TIMEOUT"}, }; /* Driver error subtype database */ -static const struct cn10k_ml_stype_db_driver { - enum cn10k_ml_error_stype_driver stype; - char msg[ERRMSG_LEN]; -} ml_stype_db_driver[] = { - {ML_DRIVER_ERR_NOERR, "NO ERROR"}, - {ML_DRIVER_ERR_UNKNOWN, "UNKNOWN ERROR"}, - {ML_DRIVER_ERR_EXCEPTION, "FW EXCEPTION"}, - {ML_DRIVER_ERR_FW_ERROR, "UNKNOWN FIRMWARE ERROR"}, +static struct cnxk_ml_error_db ml_stype_db_driver[] = { + {ML_CN10K_DRIVER_ERR_NOERR, "NO ERROR"}, + {ML_CN10K_DRIVER_ERR_UNKNOWN, "UNKNOWN ERROR"}, + {ML_CN10K_DRIVER_ERR_EXCEPTION, "FW EXCEPTION"}, + {ML_CN10K_DRIVER_ERR_FW_ERROR, "UNKNOWN FIRMWARE ERROR"}, }; __rte_hot void @@ -1166,19 +1146,19 @@ cn10k_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, void *request) /* Handle driver error */ error_code = (union cn10k_ml_error_code *)&result->error_code; - if (error_code->s.etype == ML_ETYPE_DRIVER) { + if (error_code->s.etype == ML_CNXK_ETYPE_DRIVER) { cn10k_mldev = &cnxk_mldev->cn10k_mldev; /* Check for exception */ if ((roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C0) != 0) || (roc_ml_reg_read64(&cn10k_mldev->roc, ML_SCRATCH_EXCEPTION_SP_C1) != 0)) - error_code->s.stype = ML_DRIVER_ERR_EXCEPTION; + error_code->s.stype = ML_CN10K_DRIVER_ERR_EXCEPTION; else if ((roc_ml_reg_read64(&cn10k_mldev->roc, ML_CORE_INT_LO) != 0) || (roc_ml_reg_read64(&cn10k_mldev->roc, ML_CORE_INT_HI) != 0)) - error_code->s.stype = ML_DRIVER_ERR_FW_ERROR; + error_code->s.stype = ML_CN10K_DRIVER_ERR_FW_ERROR; else - error_code->s.stype = ML_DRIVER_ERR_UNKNOWN; + error_code->s.stype = ML_CN10K_DRIVER_ERR_UNKNOWN; } op->impl_opaque = result->error_code; @@ -1219,7 +1199,7 @@ cn10k_ml_enqueue_single(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op, ui memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; - error_code->s.etype = ML_ETYPE_UNKNOWN; + 
error_code->s.etype = ML_CNXK_ETYPE_UNKNOWN; req->cn10k_req.result.user_ptr = op->user_ptr; cnxk_ml_set_poll_ptr(req); @@ -1236,30 +1216,29 @@ __rte_hot int cn10k_ml_op_error_get(struct rte_ml_dev *dev, struct rte_ml_op *op, struct rte_ml_op_error *error) { union cn10k_ml_error_code *error_code; - char msg[RTE_ML_STR_MAX]; PLT_SET_USED(dev); error_code = (union cn10k_ml_error_code *)&op->impl_opaque; - /* Copy error message */ - plt_strlcpy(msg, ml_etype_db[error_code->s.etype].name, sizeof(msg)); - /* Copy sub error message */ - if (error_code->s.etype == ML_ETYPE_HW_NONFATAL) { - strcat(msg, " : "); + if (error_code->s.etype == ML_CNXK_ETYPE_HW_NONFATAL) { if (error_code->s.stype < PLT_DIM(ml_stype_db_hw_nf)) - strcat(msg, ml_stype_db_hw_nf[error_code->s.stype].msg); + snprintf(error->message, RTE_ML_STR_MAX, "%s : %s", + ml_etype_db[error_code->s.etype].str, + ml_stype_db_hw_nf[error_code->s.stype].str); else - strcat(msg, "UNKNOWN ERROR"); - } - - if (error_code->s.etype == ML_ETYPE_DRIVER) { - strcat(msg, " : "); - strcat(msg, ml_stype_db_driver[error_code->s.stype].msg); + snprintf(error->message, RTE_ML_STR_MAX, "%s : UNKNOWN ERROR", + ml_etype_db[error_code->s.etype].str); + } else if (error_code->s.etype == ML_CNXK_ETYPE_DRIVER) { + snprintf(error->message, RTE_ML_STR_MAX, "%s : %s", + ml_etype_db[error_code->s.etype].str, + ml_stype_db_driver[error_code->s.stype].str); + } else { + snprintf(error->message, RTE_ML_STR_MAX, "%s", + ml_etype_db[error_code->s.etype].str); } - plt_strlcpy(error->message, msg, sizeof(error->message)); error->errcode = error_code->u64; return 0; @@ -1297,7 +1276,7 @@ cn10k_ml_inference_sync(void *device, uint16_t index, void *input, void *output, memset(&req->cn10k_req.result, 0, sizeof(struct cn10k_ml_result)); error_code = (union cn10k_ml_error_code *)&req->cn10k_req.result.error_code; - error_code->s.etype = ML_ETYPE_UNKNOWN; + error_code->s.etype = ML_CNXK_ETYPE_UNKNOWN; req->cn10k_req.result.user_ptr = NULL; 
cnxk_ml_set_poll_ptr(req); diff --git a/drivers/ml/cnxk/cnxk_ml_dev.c b/drivers/ml/cnxk/cnxk_ml_dev.c index 2a5c17c973..63d1c9e417 100644 --- a/drivers/ml/cnxk/cnxk_ml_dev.c +++ b/drivers/ml/cnxk/cnxk_ml_dev.c @@ -9,3 +9,11 @@ /* Dummy operations for ML device */ struct rte_ml_dev_ops ml_dev_dummy_ops = {0}; + +/* Error type database */ +struct cnxk_ml_error_db ml_etype_db[] = { + {ML_CNXK_ETYPE_NO_ERROR, "NO_ERROR"}, {ML_CNXK_ETYPE_FW_NONFATAL, "FW_NON_FATAL"}, + {ML_CNXK_ETYPE_HW_NONFATAL, "HW_NON_FATAL"}, {ML_CNXK_ETYPE_HW_FATAL, "HW_FATAL"}, + {ML_CNXK_ETYPE_HW_WARNING, "HW_WARNING"}, {ML_CNXK_ETYPE_DRIVER, "DRIVER_ERROR"}, + {ML_CNXK_ETYPE_UNKNOWN, "UNKNOWN_ERROR"}, +}; diff --git a/drivers/ml/cnxk/cnxk_ml_dev.h b/drivers/ml/cnxk/cnxk_ml_dev.h index 3ce9338f1f..382fca64be 100644 --- a/drivers/ml/cnxk/cnxk_ml_dev.h +++ b/drivers/ml/cnxk/cnxk_ml_dev.h @@ -18,6 +18,22 @@ #define ML_CNXK_POLL_JOB_START 0 #define ML_CNXK_POLL_JOB_FINISH 1 +/* Error types enumeration */ +enum cnxk_ml_error_etype { + /* 0x0 */ ML_CNXK_ETYPE_NO_ERROR = 0, /* No error */ + /* 0x1 */ ML_CNXK_ETYPE_FW_NONFATAL, /* Firmware non-fatal error */ + /* 0x2 */ ML_CNXK_ETYPE_HW_NONFATAL, /* Hardware non-fatal error */ + /* 0x3 */ ML_CNXK_ETYPE_HW_FATAL, /* Hardware fatal error */ + /* 0x4 */ ML_CNXK_ETYPE_HW_WARNING, /* Hardware warning */ + /* 0x5 */ ML_CNXK_ETYPE_DRIVER, /* Driver specific error */ + /* 0x6 */ ML_CNXK_ETYPE_UNKNOWN, /* Unknown error */ +}; + +struct cnxk_ml_error_db { + uint64_t code; + char str[RTE_ML_STR_MAX]; +}; + /* Device configuration state enum */ enum cnxk_ml_dev_state { /* Probed and not configured */ @@ -78,4 +94,6 @@ struct cnxk_ml_dev { struct cnxk_ml_index_map *index_map; }; +extern struct cnxk_ml_error_db ml_etype_db[]; + #endif /* _CNXK_ML_DEV_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 923e603e8e..e6c67c71f5 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -1433,7 +1433,7 @@ 
cnxk_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op * if (plt_tsc_cycles() < req->timeout) goto empty_or_active; else /* Timeout, set indication of driver error */ - model->set_error_code(req, ML_ETYPE_DRIVER, 0); + model->set_error_code(req, ML_CNXK_ETYPE_DRIVER, 0); } model->result_update(cnxk_mldev, qp->id, req);
From patchwork Wed Sep 20 07:25:10 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131692
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
Subject: [PATCH v2 19/34] ml/cnxk: support config and close of tvmdp library
Date: Wed, 20 Sep 2023 00:25:10 -0700
Message-ID: <20230920072528.14185-20-syalavarthi@marvell.com>

Added support to configure and close TVMDP library based on ML device configuration options. Updated meson build to enable Jansson, TVM runtime, TVMDP library as build dependencies.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cnxk_ml_ops.c | 15 ++++++++++ drivers/ml/cnxk/meson.build | 50 ++++++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_ops.c | 42 ++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_ops.h | 19 +++++++++++++ 4 files changed, 126 insertions(+) create mode 100644 drivers/ml/cnxk/mvtvm_ml_ops.c create mode 100644 drivers/ml/cnxk/mvtvm_ml_ops.h diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index e6c67c71f5..358f16cead 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -9,6 +9,10 @@ #include "cn10k_ml_ops.h" +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#include "mvtvm_ml_ops.h" +#endif + #include "cnxk_ml_dev.h" #include "cnxk_ml_io.h" #include "cnxk_ml_model.h" @@ -625,6 +629,12 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co goto error; } +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + ret = mvtvm_ml_dev_configure(cnxk_mldev, conf); + if (ret != 0) + goto error; +#endif + /* Set device capabilities */ cnxk_mldev->max_nb_layers = cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_models; @@ -685,6 +695,11 @@ cnxk_ml_dev_close(struct rte_ml_dev *dev) /* Un-initialize xstats */ cnxk_ml_xstats_uninit(cnxk_mldev); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + if (mvtvm_ml_dev_close(cnxk_mldev) != 0) + plt_err("Failed to close MVTVM ML Device"); +#endif + if (cn10k_ml_dev_close(cnxk_mldev) != 0) plt_err("Failed to close CN10K ML Device"); diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 575f08f9c0..61f7fa32af 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -7,6 +7,32 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64') subdir_done() endif +enable_mvtvm = true + +if not jansson_dep.found() + message('drivers/ml/cnxk: jansson not found') + enable_mvtvm = false +endif + +if not cc.check_header('dlpack/dlpack.h') + message('drivers/ml/cnxk: dlpack.h not found') + enable_mvtvm = 
false +endif + +tvmrt_lib = cc.find_library('tvm_runtime', required: false) +if tvmrt_lib.found() + tvmrt_dep = declare_dependency(dependencies: tvmrt_lib) +else + message('drivers/ml/cnxk: tvm_runtime not found') + enable_mvtvm = false +endif + +tvmdp_dep = dependency('tvmdp', required: false) +if not tvmdp_dep.found() + message('drivers/ml/cnxk: tvmdp not found') + enable_mvtvm = false +endif + driver_sdk_headers = files( 'cn10k_ml_dev.h', 'cn10k_ml_ops.h', @@ -34,6 +60,30 @@ sources = files( deps += ['mldev', 'common_cnxk', 'kvargs', 'hash'] +if enable_mvtvm + +dpdk_conf.set('RTE_MLDEV_CNXK_ENABLE_MVTVM', true) + +driver_sdk_headers += files( + 'mvtvm_ml_ops.h', +) + +sources += files( + 'mvtvm_ml_ops.c', +) + +ext_deps += tvmrt_dep +ext_deps += tvmdp_dep +ext_deps += cc.find_library('stdc++', required: true) +ext_deps += jansson_dep + +deps += ['bus_vdev'] + +message('drivers/ml/cnxk: Enabled TVM model support') +else +message('drivers/ml/cnxk: Disabled TVM model support') +endif + require_iova_in_mbuf = false if get_option('buildtype').contains('debug') diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c new file mode 100644 index 0000000000..f2b9499cf4 --- /dev/null +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#include +#include +#include +#include + +#include "mvtvm_ml_ops.h" + +#include "cnxk_ml_dev.h" + +int +mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf) +{ + int ret; + + RTE_SET_USED(conf); + + /* Configure TVMDP library */ + ret = tvmdp_configure(cnxk_mldev->mldev->data->nb_models, rte_get_tsc_cycles); + if (ret != 0) + plt_err("TVMDP configuration failed, error = %d\n", ret); + + return ret; +} + +int +mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev) +{ + int ret; + + RTE_SET_USED(cnxk_mldev); + + /* Close TVMDP library configuration */ + ret = tvmdp_close(); + if (ret != 0) + plt_err("TVMDP close failed, error = %d\n", ret); + + return ret; +} diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h new file mode 100644 index 0000000000..305b4681ed --- /dev/null +++ b/drivers/ml/cnxk/mvtvm_ml_ops.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef _MVTVM_ML_OPS_H_ +#define _MVTVM_ML_OPS_H_ + +#include + +#include + +#include + +struct cnxk_ml_dev; + +int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf); +int mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); + +#endif /* _MVTVM_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131695 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 61ED9425E4; Wed, 20 Sep 2023 09:27:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A3BBA42D7D; Wed, 20 Sep 2023 09:26:01 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com 
From: Srikanth Yalavarthi Subject: [PATCH v2 20/34] ml/cnxk: add structures to support TVM model type Date: Wed, 20 Sep 2023 00:25:11 -0700 Message-ID: <20230920072528.14185-21-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
Introduced model type, sub-type and layer type. Added internal structures for TVM model objects. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ocm.c | 3 ++ drivers/ml/cnxk/cn10k_ml_ops.c | 6 ++- drivers/ml/cnxk/cnxk_ml_model.h | 63 +++++++++++++++++++++++++++++++- drivers/ml/cnxk/cnxk_ml_ops.c | 60 +++++++++++++++++++++++++----- drivers/ml/cnxk/meson.build | 1 + drivers/ml/cnxk/mvtvm_ml_model.h | 46 +++++++++++++++++++++++ 6 files changed, 166 insertions(+), 13 deletions(-) create mode 100644 drivers/ml/cnxk/mvtvm_ml_model.h diff --git a/drivers/ml/cnxk/cn10k_ml_ocm.c b/drivers/ml/cnxk/cn10k_ml_ocm.c index 70d207e646..a7b64ddf05 100644 --- a/drivers/ml/cnxk/cn10k_ml_ocm.c +++ b/drivers/ml/cnxk/cn10k_ml_ocm.c @@ -437,6 +437,9 @@ cn10k_ml_ocm_free_pages(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id, uint1 for (j = 0; j < local_model->nb_layers; j++) { local_layer = &local_model->layer[j]; + if (local_layer->type != ML_CNXK_LAYER_TYPE_MRVL) + continue; + if (local_layer != layer && local_layer->glow.ocm_map.ocm_reserved) { if (IS_BIT_SET(local_layer->glow.ocm_map.tilemask, tile_id)) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index daeb3b712c..db18f32052 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -650,6 +650,9 @@ cn10k_ml_model_load(struct cnxk_ml_dev *cnxk_mldev,
struct rte_ml_model_params * if (ret != 0) return ret; + /* Set model sub type */ + model->subtype = ML_CNXK_MODEL_SUBTYPE_GLOW_MRVL; + /* Copy metadata to internal buffer */ rte_memcpy(&model->glow.metadata, params->addr, sizeof(struct cn10k_ml_model_metadata)); cn10k_ml_model_metadata_update(&model->glow.metadata); @@ -671,6 +674,7 @@ cn10k_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * /* Load layer and get the index */ layer = &model->layer[0]; + layer->type = ML_CNXK_LAYER_TYPE_MRVL; ret = cn10k_ml_layer_load(cnxk_mldev, model->model_id, NULL, params->addr, params->size, &layer->index); if (ret != 0) { @@ -894,7 +898,7 @@ cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name) if (ret < 0) { cn10k_ml_layer_stop(device, model_id, layer_name); } else { - if (cn10k_mldev->cache_model_data) + if (cn10k_mldev->cache_model_data && model->type == ML_CNXK_MODEL_TYPE_GLOW) ret = cn10k_ml_cache_model_data(cnxk_mldev, layer); } diff --git a/drivers/ml/cnxk/cnxk_ml_model.h b/drivers/ml/cnxk/cnxk_ml_model.h index f618e5aa5f..b5d6ab2b1e 100644 --- a/drivers/ml/cnxk/cnxk_ml_model.h +++ b/drivers/ml/cnxk/cnxk_ml_model.h @@ -11,6 +11,10 @@ #include "cn10k_ml_model.h" +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#include "mvtvm_ml_model.h" +#endif + #include "cnxk_ml_io.h" struct cnxk_ml_dev; @@ -18,6 +22,45 @@ struct cnxk_ml_model; struct cnxk_ml_qp; struct cnxk_ml_req; +/* Model type */ +enum cnxk_ml_model_type { + /* Invalid model type */ + ML_CNXK_MODEL_TYPE_INVALID, + + /* Glow compiled model, for MLIP target */ + ML_CNXK_MODEL_TYPE_GLOW, + + /* TVM compiled model, for ARM64 / ARM64 + MLIP target */ + ML_CNXK_MODEL_TYPE_TVM, +}; + +/* Model subtype */ +enum cnxk_ml_model_subtype { + /* Marvell Glow model */ + ML_CNXK_MODEL_SUBTYPE_GLOW_MRVL, + + /* TVM model with single MRVL region */ + ML_CNXK_MODEL_SUBTYPE_TVM_MRVL, + + /* TVM model with LLVM regions only */ + ML_CNXK_MODEL_SUBTYPE_TVM_LLVM, + + /* TVM hybrid model, with both 
MRVL and LLVM regions or (> 1) MRVL regions */ + ML_CNXK_MODEL_SUBTYPE_TVM_HYBRID, +}; + +/* Layer type */ +enum cnxk_ml_layer_type { + /* Unknown layer type */ + ML_CNXK_LAYER_TYPE_UNKNOWN = 0, + + /* MRVL layer, for MLIP target */ + ML_CNXK_LAYER_TYPE_MRVL, + + /* LLVM layer, for ARM64 target */ + ML_CNXK_LAYER_TYPE_LLVM, +}; + /* Model state */ enum cnxk_ml_model_state { /* Unknown state */ @@ -53,6 +96,9 @@ struct cnxk_ml_layer { /* Name*/ char name[RTE_ML_STR_MAX]; + /* Type */ + enum cnxk_ml_layer_type type; + /* Model handle */ struct cnxk_ml_model *model; @@ -83,14 +129,27 @@ struct cnxk_ml_model { /* Device reference */ struct cnxk_ml_dev *cnxk_mldev; + /* Type */ + enum cnxk_ml_model_type type; + + /* Model subtype */ + enum cnxk_ml_model_subtype subtype; + /* ID */ uint16_t model_id; /* Name */ char name[RTE_ML_STR_MAX]; - /* Model specific data - glow */ - struct cn10k_ml_model_data glow; + union { + /* Model specific data - glow */ + struct cn10k_ml_model_data glow; + +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + /* Model type specific data - mvtvm */ + struct mvtvm_ml_model_data mvtvm; +#endif + }; /* Batch size */ uint32_t batch_size; diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 358f16cead..a20937ea11 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -1286,6 +1286,8 @@ cnxk_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buf struct cnxk_ml_model *model; uint8_t *lcl_dbuffer; uint8_t *lcl_qbuffer; + uint64_t d_offset; + uint64_t q_offset; uint32_t i; int ret; @@ -1298,17 +1300,35 @@ cnxk_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buf return -EINVAL; } - info = &model->layer[0].info; + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) + info = &model->layer[0].info; +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + info = &model->mvtvm.info; +#endif + + if (info == NULL) + return -EINVAL; - lcl_dbuffer = dbuffer[0]->addr; - lcl_qbuffer =
qbuffer[0]->addr; + d_offset = 0; + q_offset = 0; for (i = 0; i < info->nb_inputs; i++) { + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { + lcl_dbuffer = dbuffer[i]->addr; + lcl_qbuffer = qbuffer[i]->addr; + } else { + lcl_dbuffer = RTE_PTR_ADD(dbuffer[0]->addr, d_offset); + lcl_qbuffer = RTE_PTR_ADD(qbuffer[0]->addr, q_offset); + } + ret = cnxk_ml_io_quantize_single(&info->input[i], lcl_dbuffer, lcl_qbuffer); if (ret < 0) return ret; - lcl_dbuffer += info->input[i].sz_d; - lcl_qbuffer += info->input[i].sz_q; + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) { + d_offset += info->input[i].sz_d; + q_offset += info->input[i].sz_q; + } } return 0; @@ -1322,6 +1342,8 @@ cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_b struct cnxk_ml_model *model; uint8_t *lcl_qbuffer; uint8_t *lcl_dbuffer; + uint64_t q_offset; + uint64_t d_offset; uint32_t i; int ret; @@ -1334,17 +1356,35 @@ cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_b return -EINVAL; } - info = &model->layer[model->nb_layers - 1].info; + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) + info = &model->layer[model->nb_layers - 1].info; +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + info = &model->mvtvm.info; +#endif + + if (info == NULL) + return -EINVAL; - lcl_qbuffer = qbuffer[0]->addr; - lcl_dbuffer = dbuffer[0]->addr; + q_offset = 0; + d_offset = 0; for (i = 0; i < info->nb_outputs; i++) { + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { + lcl_qbuffer = qbuffer[i]->addr; + lcl_dbuffer = dbuffer[i]->addr; + } else { + lcl_qbuffer = RTE_PTR_ADD(qbuffer[0]->addr, q_offset); + lcl_dbuffer = RTE_PTR_ADD(dbuffer[0]->addr, d_offset); + } + ret = cnxk_ml_io_dequantize_single(&info->output[i], lcl_qbuffer, lcl_dbuffer); if (ret < 0) return ret; - lcl_qbuffer += info->output[i].sz_q; - lcl_dbuffer += info->output[i].sz_d; + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) { + q_offset += info->output[i].sz_q; + d_offset += info->output[i].sz_d; + } } return 0; diff --git 
a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 61f7fa32af..25b72cc8aa 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -66,6 +66,7 @@ dpdk_conf.set('RTE_MLDEV_CNXK_ENABLE_MVTVM', true) driver_sdk_headers += files( 'mvtvm_ml_ops.h', + 'mvtvm_ml_model.h', ) sources += files( diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h new file mode 100644 index 0000000000..1f6b435be0 --- /dev/null +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef _MVTVM_ML_MODEL_H_ +#define _MVTVM_ML_MODEL_H_ + +#include + +#include + +#include "cnxk_ml_io.h" + +/* Maximum number of objects per model */ +#define ML_MVTVM_MODEL_OBJECT_MAX 3 + +/* Objects list */ +extern char mvtvm_object_list[ML_MVTVM_MODEL_OBJECT_MAX][RTE_ML_STR_MAX]; + +/* Model object structure */ +struct mvtvm_ml_model_object { + /* Name */ + char name[RTE_ML_STR_MAX]; + + /* Temporary buffer */ + uint8_t *buffer; + + /* Buffer size */ + int64_t size; +}; + +struct mvtvm_ml_model_data { + /* Model metadata */ + struct tvmdp_model_metadata metadata; + + /* Model objects */ + struct tvmdp_model_object object; + + /* TVM runtime callbacks */ + struct tvmrt_glow_callback cb; + + /* Model I/O info */ + struct cnxk_ml_io_info info; +}; + +#endif /* _MVTVM_ML_MODEL_H_ */ From patchwork Wed Sep 20 07:25:12 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131694
From: Srikanth Yalavarthi Subject: [PATCH v2 21/34] ml/cnxk: add support for identify model type Date: Wed, 20 Sep 2023 00:25:12 -0700 Message-ID:
<20230920072528.14185-22-syalavarthi@marvell.com> Enabled support to parse the model buffer to identify the model type and model sub-type. Enabled basic checks for the Glow model type buffer. Signed-off-by: Srikanth Yalavarthi Signed-off-by: Anup Prabhu --- drivers/ml/cnxk/cnxk_ml_model.c | 96 ++++++++++++++++++++++++++++++++ drivers/ml/cnxk/cnxk_ml_model.h | 1 + drivers/ml/cnxk/cnxk_ml_ops.c | 9 +++ drivers/ml/cnxk/meson.build | 6 ++ drivers/ml/cnxk/mvtvm_ml_model.c | 11 ++++ 5 files changed, 123 insertions(+) create mode 100644 drivers/ml/cnxk/mvtvm_ml_model.c diff --git a/drivers/ml/cnxk/cnxk_ml_model.c b/drivers/ml/cnxk/cnxk_ml_model.c index b069d4e3a5..746d3ca5a9 100644 --- a/drivers/ml/cnxk/cnxk_ml_model.c +++ b/drivers/ml/cnxk/cnxk_ml_model.c @@ -2,11 +2,107 @@ * Copyright (c) 2023 Marvell.
*/ +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#include +#include +#endif + +#include #include + +#include "cn10k_ml_model.h" + +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#include "mvtvm_ml_model.h" +#endif + +#include "cnxk_ml_model.h" +#include "cnxk_ml_utils.h" + +enum cnxk_ml_model_type +cnxk_ml_model_get_type(struct rte_ml_model_params *params) +{ + struct cn10k_ml_model_metadata_header *metadata_header; + uint32_t payload_crc32c; + uint32_t header_crc32c; + +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + bool object_found[ML_MVTVM_MODEL_OBJECT_MAX] = {false, false, false}; + struct archive_entry *entry; + struct archive *a; + uint8_t i; + int ret; + + /* Assume the buffer is an archive and check whether it can be read */ + a = archive_read_new(); + archive_read_support_filter_all(a); + archive_read_support_format_all(a); + + ret = archive_read_open_memory(a, params->addr, params->size); + if (ret == ARCHIVE_OK) + goto check_tvm; + else + goto check_glow; + +check_tvm: + /* Parse buffer for available objects */ + while (archive_read_next_header(a, &entry) == ARCHIVE_OK) { + for (i = 0; i < ML_MVTVM_MODEL_OBJECT_MAX; i++) { + if (!object_found[i] && + (strcmp(archive_entry_pathname(entry), mvtvm_object_list[i]) == 0)) + object_found[i] = true; + } + archive_read_data_skip(a); + } + + /* Check if all objects are available */ + for (i = 0; i < ML_MVTVM_MODEL_OBJECT_MAX; i++) { + if (!object_found[i]) { + plt_err("Object %s not found in archive!\n", mvtvm_object_list[i]); + return ML_CNXK_MODEL_TYPE_INVALID; + } + } + + return ML_CNXK_MODEL_TYPE_TVM; + +check_glow: +#endif + + /* Check model magic string */ + metadata_header = (struct cn10k_ml_model_metadata_header *)params->addr; + if (strncmp((char *)metadata_header->magic, MRVL_ML_MODEL_MAGIC_STRING, 4) != 0) { + plt_err("Invalid Glow model, magic = %s", metadata_header->magic); + return ML_CNXK_MODEL_TYPE_INVALID; + } + + /* Header CRC check */ + if (metadata_header->header_crc32c != 0) { + header_crc32c = rte_hash_crc( + params->addr, + sizeof(struct
cn10k_ml_model_metadata_header) - sizeof(uint32_t), 0); + + if (header_crc32c != metadata_header->header_crc32c) { + plt_err("Invalid Glow model, Header CRC mismatch"); + return ML_CNXK_MODEL_TYPE_INVALID; + } + } + + /* Payload CRC check */ + if (metadata_header->payload_crc32c != 0) { + payload_crc32c = rte_hash_crc( + PLT_PTR_ADD(params->addr, sizeof(struct cn10k_ml_model_metadata_header)), + params->size - sizeof(struct cn10k_ml_model_metadata_header), 0); + + if (payload_crc32c != metadata_header->payload_crc32c) { + plt_err("Invalid Glow model, Payload CRC mismatch"); + return ML_CNXK_MODEL_TYPE_INVALID; + } + } + + return ML_CNXK_MODEL_TYPE_GLOW; +} + void cnxk_ml_model_dump(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, FILE *fp) { diff --git a/drivers/ml/cnxk/cnxk_ml_model.h b/drivers/ml/cnxk/cnxk_ml_model.h index b5d6ab2b1e..577a96dc26 100644 --- a/drivers/ml/cnxk/cnxk_ml_model.h +++ b/drivers/ml/cnxk/cnxk_ml_model.h @@ -181,6 +181,7 @@ struct cnxk_ml_model { set_poll_addr_t set_poll_addr; }; +enum cnxk_ml_model_type cnxk_ml_model_get_type(struct rte_ml_model_params *params); void cnxk_ml_model_dump(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, FILE *fp); #endif /* _CNXK_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index a20937ea11..052c69e510 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -10,6 +10,7 @@ #include "cn10k_ml_ops.h" #ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#include "mvtvm_ml_model.h" #include "mvtvm_ml_ops.h" #endif @@ -1087,6 +1088,7 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u { struct rte_ml_dev_info dev_info; struct cnxk_ml_dev *cnxk_mldev; + enum cnxk_ml_model_type type; struct cnxk_ml_model *model; char str[RTE_MEMZONE_NAMESIZE]; @@ -1102,6 +1104,12 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u cnxk_mldev = dev->data->dev_private; + type = 
cnxk_ml_model_get_type(params); + if (type == ML_CNXK_MODEL_TYPE_INVALID) { + plt_err("Invalid / unsupported model type"); + return -EINVAL; + } + /* Find model ID */ found = false; for (lcl_model_id = 0; lcl_model_id < dev->data->nb_models; lcl_model_id++) { @@ -1135,6 +1143,7 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u model = mz->addr; model->cnxk_mldev = cnxk_mldev; + model->type = type; model->model_id = lcl_model_id; model->info = PLT_PTR_ADD( model, PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), dev_info.align_size)); diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 25b72cc8aa..09a62b5c55 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -9,6 +9,11 @@ endif enable_mvtvm = true +if not libarchive.found() + message('drivers/ml/cnxk: libarchive not found') + enable_mvtvm = false +endif + if not jansson_dep.found() message('drivers/ml/cnxk: jansson not found') enable_mvtvm = false @@ -71,6 +76,7 @@ driver_sdk_headers += files( sources += files( 'mvtvm_ml_ops.c', + 'mvtvm_ml_model.c', ) ext_deps += tvmrt_dep diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c new file mode 100644 index 0000000000..6462267534 --- /dev/null +++ b/drivers/ml/cnxk/mvtvm_ml_model.c @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#include + +#include "mvtvm_ml_model.h" + +/* Objects list */ +char mvtvm_object_list[ML_MVTVM_MODEL_OBJECT_MAX][RTE_ML_STR_MAX] = {"mod.so", "mod.json", + "mod.params"}; From patchwork Wed Sep 20 07:25:13 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131696
From: Srikanth Yalavarthi Subject: [PATCH v2 22/34] ml/cnxk: add support to parse TVM model objects Date: Wed, 20 Sep 2023 00:25:13 -0700 Message-ID: <20230920072528.14185-23-syalavarthi@marvell.com> Added support to parse TVM model objects from the model archive buffer, check that all expected objects are present, and copy the TVM model objects to internal buffers.
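The load path in this patch concatenates the three extracted objects into one contiguous memzone, rounding each object's start address up to a cache-line boundary with RTE_ALIGN_CEIL. Outside DPDK, that packing scheme can be sketched in plain C; `align_ceil` stands in for RTE_ALIGN_CEIL, `CACHE_LINE` for RTE_CACHE_LINE_MIN_SIZE, and all names here are illustrative, not the driver's:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Round x up to the next multiple of a (a must be a power of two),
 * mirroring what DPDK's RTE_ALIGN_CEIL macro does. */
static size_t align_ceil(size_t x, size_t a)
{
	return (x + a - 1) & ~(a - 1);
}

#define CACHE_LINE 64 /* stand-in for RTE_CACHE_LINE_MIN_SIZE */

/* Pack n object buffers into one contiguous allocation, each object
 * starting on a cache-line boundary. offsets[] receives the start of
 * each object inside the returned blob; the caller frees the blob. */
static uint8_t *pack_objects(const uint8_t **bufs, const size_t *sizes,
			     size_t n, size_t *offsets)
{
	size_t total = 0;
	size_t i;

	/* First pass: compute aligned offsets and the total size. */
	for (i = 0; i < n; i++) {
		offsets[i] = total;
		total += align_ceil(sizes[i], CACHE_LINE);
	}

	uint8_t *blob = calloc(1, total);
	if (blob == NULL)
		return NULL;

	/* Second pass: copy each object to its aligned slot. */
	for (i = 0; i < n; i++)
		memcpy(blob + offsets[i], bufs[i], sizes[i]);

	return blob;
}
```

With object sizes of 10, 100 and 7 bytes and 64-byte lines, the objects land at offsets 0, 64 and 192, which is the same layout the driver builds for mod.so, mod.json and mod.params before freeing the temporary parse buffers.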
Signed-off-by: Srikanth Yalavarthi Signed-off-by: Anup Prabhu --- drivers/ml/cnxk/cnxk_ml_ops.c | 14 +++++-- drivers/ml/cnxk/mvtvm_ml_model.c | 62 +++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_model.h | 3 ++ drivers/ml/cnxk/mvtvm_ml_ops.c | 63 ++++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_ops.h | 3 ++ 5 files changed, 142 insertions(+), 3 deletions(-) diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 052c69e510..8e17f597af 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -1149,9 +1149,17 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u model, PLT_ALIGN_CEIL(sizeof(struct cnxk_ml_model), dev_info.align_size)); dev->data->models[lcl_model_id] = model; - ret = cn10k_ml_model_load(cnxk_mldev, params, model); - if (ret != 0) - goto error; + if (type == ML_CNXK_MODEL_TYPE_GLOW) { + ret = cn10k_ml_model_load(cnxk_mldev, params, model); + if (ret != 0) + goto error; +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + } else { + ret = mvtvm_ml_model_load(cnxk_mldev, params, model); + if (ret != 0) + goto error; +#endif + } plt_spinlock_init(&model->lock); model->state = ML_CNXK_MODEL_STATE_LOADED; diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c index 6462267534..425a682209 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.c +++ b/drivers/ml/cnxk/mvtvm_ml_model.c @@ -2,10 +2,72 @@ * Copyright (c) 2023 Marvell. 
*/ +#include +#include + #include +#include + #include "mvtvm_ml_model.h" /* Objects list */ char mvtvm_object_list[ML_MVTVM_MODEL_OBJECT_MAX][RTE_ML_STR_MAX] = {"mod.so", "mod.json", "mod.params"}; + +int +mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, struct mvtvm_ml_model_object *object) +{ + bool object_found[ML_MVTVM_MODEL_OBJECT_MAX] = {false, false, false}; + struct archive_entry *entry; + struct archive *a; + uint8_t i; + int ret; + + /* Open archive */ + a = archive_read_new(); + archive_read_support_filter_all(a); + archive_read_support_format_all(a); + + ret = archive_read_open_memory(a, params->addr, params->size); + if (ret != ARCHIVE_OK) + return archive_errno(a); + + /* Read archive */ + while (archive_read_next_header(a, &entry) == ARCHIVE_OK) { + for (i = 0; i < ML_MVTVM_MODEL_OBJECT_MAX; i++) { + if (!object_found[i] && + (strcmp(archive_entry_pathname(entry), mvtvm_object_list[i]) == 0)) { + memcpy(object[i].name, mvtvm_object_list[i], RTE_ML_STR_MAX); + object[i].size = archive_entry_size(entry); + object[i].buffer = rte_malloc(NULL, object[i].size, 0); + + if (archive_read_data(a, object[i].buffer, object[i].size) != + object[i].size) { + plt_err("Failed to read object from model archive: %s", + object[i].name); + goto error; + } + object_found[i] = true; + } + } + archive_read_data_skip(a); + } + + /* Check if all objects are parsed */ + for (i = 0; i < ML_MVTVM_MODEL_OBJECT_MAX; i++) { + if (!object_found[i]) { + plt_err("Object %s not found in archive!\n", mvtvm_object_list[i]); + goto error; + } + } + return 0; + +error: + for (i = 0; i < ML_MVTVM_MODEL_OBJECT_MAX; i++) { + if (object[i].buffer != NULL) + rte_free(object[i].buffer); + } + + return -EINVAL; +} diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index 1f6b435be0..73a45a91d6 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -43,4 +43,7 @@ struct mvtvm_ml_model_data { struct cnxk_ml_io_info info; 
}; +int mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, + struct mvtvm_ml_model_object *object); + #endif /* _MVTVM_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index f2b9499cf4..baa9099084 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -7,9 +7,14 @@ #include #include +#include "mvtvm_ml_model.h" #include "mvtvm_ml_ops.h" #include "cnxk_ml_dev.h" +#include "cnxk_ml_model.h" + +/* ML model macros */ +#define MVTVM_ML_MODEL_MEMZONE_NAME "ml_mvtvm_model_mz" int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf) @@ -40,3 +45,61 @@ mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev) return ret; } + +int +mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, + struct cnxk_ml_model *model) +{ + struct mvtvm_ml_model_object object[ML_MVTVM_MODEL_OBJECT_MAX]; + char str[RTE_MEMZONE_NAMESIZE]; + const struct plt_memzone *mz; + size_t model_object_size = 0; + uint64_t mz_size = 0; + int ret; + + RTE_SET_USED(cnxk_mldev); + + ret = mvtvm_ml_model_blob_parse(params, object); + if (ret != 0) + return ret; + + model_object_size = RTE_ALIGN_CEIL(object[0].size, RTE_CACHE_LINE_MIN_SIZE) + + RTE_ALIGN_CEIL(object[1].size, RTE_CACHE_LINE_MIN_SIZE) + + RTE_ALIGN_CEIL(object[2].size, RTE_CACHE_LINE_MIN_SIZE); + mz_size += model_object_size; + + /* Allocate memzone for model object */ + snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", MVTVM_ML_MODEL_MEMZONE_NAME, model->model_id); + mz = plt_memzone_reserve_aligned(str, mz_size, 0, ML_CN10K_ALIGN_SIZE); + if (!mz) { + plt_err("plt_memzone_reserve failed : %s", str); + return -ENOMEM; + } + + /* Copy mod.so */ + model->mvtvm.object.so.addr = mz->addr; + model->mvtvm.object.so.size = object[0].size; + rte_memcpy(model->mvtvm.object.so.name, object[0].name, TVMDP_NAME_STRLEN); + rte_memcpy(model->mvtvm.object.so.addr, object[0].buffer, object[0].size); + 
rte_free(object[0].buffer); + + /* Copy mod.json */ + model->mvtvm.object.json.addr = + RTE_PTR_ADD(model->mvtvm.object.so.addr, + RTE_ALIGN_CEIL(model->mvtvm.object.so.size, RTE_CACHE_LINE_MIN_SIZE)); + model->mvtvm.object.json.size = object[1].size; + rte_memcpy(model->mvtvm.object.json.name, object[1].name, TVMDP_NAME_STRLEN); + rte_memcpy(model->mvtvm.object.json.addr, object[1].buffer, object[1].size); + rte_free(object[1].buffer); + + /* Copy mod.params */ + model->mvtvm.object.params.addr = + RTE_PTR_ADD(model->mvtvm.object.json.addr, + RTE_ALIGN_CEIL(model->mvtvm.object.json.size, RTE_CACHE_LINE_MIN_SIZE)); + model->mvtvm.object.params.size = object[2].size; + rte_memcpy(model->mvtvm.object.params.name, object[2].name, TVMDP_NAME_STRLEN); + rte_memcpy(model->mvtvm.object.params.addr, object[2].buffer, object[2].size); + rte_free(object[2].buffer); + + return 0; +} diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h index 305b4681ed..6607537599 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.h +++ b/drivers/ml/cnxk/mvtvm_ml_ops.h @@ -12,8 +12,11 @@ #include struct cnxk_ml_dev; +struct cnxk_ml_model; int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf); int mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); +int mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, + struct cnxk_ml_model *model); #endif /* _MVTVM_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:14 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131698
From: Srikanth Yalavarthi Subject: [PATCH v2 23/34] ml/cnxk: fetch layer info and load TVM model Date: Wed, 20 Sep 2023 00:25:14 -0700
Message-ID: <20230920072528.14185-24-syalavarthi@marvell.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: X9P8NkQOXT1iFe87tC7zeK9Zeyl4v2r9 X-Proofpoint-GUID: X9P8NkQOXT1iFe87tC7zeK9Zeyl4v2r9 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-20_02,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added support to fetch TVM model layer information and update internal structures based on the layer information Set callback functions for layer load and unload and enable model loading using TVMDP library. Added support to fetch full metadata after model load. 
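The layer-name resolution this patch adds to `cn10k_ml_layer_load()` can be sketched in isolation as follows. The struct and enum here are simplified, hypothetical stand-ins for the driver's `cnxk_ml_model`/`cnxk_ml_layer` bookkeeping, not the real DPDK definitions:

```c
#include <string.h>

/* Hypothetical, simplified stand-ins for the driver's layer table. */
enum layer_type { LAYER_TYPE_MRVL, LAYER_TYPE_LLVM };

struct layer {
	char name[64];
	enum layer_type type;
};

/*
 * Resolve layer_name to an index the way the TVM path does: scan the
 * model's layer table by name, reject unknown names, and reject layers
 * that are not MRVL layers (only MRVL layers are loaded to the
 * accelerator; LLVM layers run on the cores).
 */
static int
layer_id_get(const struct layer *layers, int nb_layers, const char *layer_name)
{
	int layer_id;

	for (layer_id = 0; layer_id < nb_layers; layer_id++)
		if (strcmp(layers[layer_id].name, layer_name) == 0)
			break;

	if (layer_id == nb_layers)
		return -1; /* invalid layer name */

	if (layers[layer_id].type != LAYER_TYPE_MRVL)
		return -2; /* valid name, but not an offloadable layer */

	return layer_id;
}
```

In the driver the two failure cases map to `-EINVAL` with distinct `plt_err()` messages, which is why the lookup and the type check are reported separately.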
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 22 ++++++++- drivers/ml/cnxk/mvtvm_ml_model.h | 2 + drivers/ml/cnxk/mvtvm_ml_ops.c | 83 ++++++++++++++++++++++++++++++++ 3 files changed, 106 insertions(+), 1 deletion(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index db18f32052..79217165cd 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -508,8 +508,10 @@ cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name, uin int qp_id; int ret; - PLT_SET_USED(size); +#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM PLT_SET_USED(layer_name); +#endif + PLT_SET_USED(size); cnxk_mldev = (struct cnxk_ml_dev *)device; if (cnxk_mldev == NULL) { @@ -523,6 +525,24 @@ cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name, uin return -EINVAL; } +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if (strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } + } +#endif layer = &model->layer[layer_id]; ret = cn10k_ml_model_metadata_check(buffer, size); diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index 73a45a91d6..6c38217c15 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -11,6 +11,8 @@ #include "cnxk_ml_io.h" +struct cnxk_ml_model; + /* Maximum number of objects per model */ #define ML_MVTVM_MODEL_OBJECT_MAX 3 diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index baa9099084..d9ec411385 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ 
-7,6 +7,8 @@ #include #include +#include "cn10k_ml_ops.h" + #include "mvtvm_ml_model.h" #include "mvtvm_ml_ops.h" @@ -51,9 +53,13 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * struct cnxk_ml_model *model) { struct mvtvm_ml_model_object object[ML_MVTVM_MODEL_OBJECT_MAX]; + struct tvmrt_glow_callback *callback; char str[RTE_MEMZONE_NAMESIZE]; const struct plt_memzone *mz; size_t model_object_size = 0; + uint16_t nb_mrvl_layers; + uint16_t nb_llvm_layers; + uint8_t layer_id = 0; uint64_t mz_size = 0; int ret; @@ -101,5 +107,82 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * rte_memcpy(model->mvtvm.object.params.addr, object[2].buffer, object[2].size); rte_free(object[2].buffer); + /* Get metadata - stage 1 */ + ret = tvmdp_model_metadata_get_stage1(model->mvtvm.object.json.addr, + model->mvtvm.object.json.size, + &model->mvtvm.metadata); + if (ret != 0) { + plt_err("TVMDP: Failed to parse metadata - stage 1, model_id = %u, error = %d", + model->model_id, ret); + goto error; + } + + /* Set model fields */ + plt_strlcpy(model->name, model->mvtvm.metadata.model.name, TVMDP_NAME_STRLEN); + model->batch_size = 1; + model->nb_layers = model->mvtvm.metadata.model.nb_layers; + + /* Update layer info */ + nb_mrvl_layers = 0; + nb_llvm_layers = 0; + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + strncpy(model->layer[layer_id].name, + model->mvtvm.metadata.model.layer[layer_id].name, TVMDP_NAME_STRLEN); + if (strcmp(model->mvtvm.metadata.model.layer[layer_id].type, "mrvl") == 0 || + strcmp(model->mvtvm.metadata.model.layer[layer_id].type, "MRVL") == 0) { + model->layer[layer_id].type = ML_CNXK_LAYER_TYPE_MRVL; + nb_mrvl_layers++; + } else if (strcmp(model->mvtvm.metadata.model.layer[layer_id].type, "llvm") == 0 || + strcmp(model->mvtvm.metadata.model.layer[layer_id].type, "LLVM") == 0) { + model->layer[layer_id].type = ML_CNXK_LAYER_TYPE_LLVM; + nb_llvm_layers++; + } + 
} + + if ((nb_llvm_layers == 0) && (nb_mrvl_layers == 0)) { + plt_err("Invalid model, nb_llvm_layers = %u, nb_mrvl_layers = %u", nb_llvm_layers, + nb_mrvl_layers); + goto error; + } + + /* Set model subtype */ + if ((nb_llvm_layers == 0) && (nb_mrvl_layers == 1)) + model->subtype = ML_CNXK_MODEL_SUBTYPE_TVM_MRVL; + else if ((nb_llvm_layers > 0) && (nb_mrvl_layers == 0)) + model->subtype = ML_CNXK_MODEL_SUBTYPE_TVM_LLVM; + else + model->subtype = ML_CNXK_MODEL_SUBTYPE_TVM_HYBRID; + + /* Set callback function array */ + if (model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_LLVM) { + callback = &model->mvtvm.cb; + callback->tvmrt_glow_layer_load = cn10k_ml_layer_load; + callback->tvmrt_glow_layer_unload = cn10k_ml_layer_unload; + } else { + callback = NULL; + } + + /* Initialize model in TVMDP */ + ret = tvmdp_model_load(cnxk_mldev, model->model_id, (void *)(&model->mvtvm.object), + callback); + if (ret != 0) { + plt_err("TVMDP: Model load failed, model_id = %u, error = %d", model->model_id, + ret); + goto error; + } + + /* Get model metadata - stage 2 */ + ret = tvmdp_model_metadata_get_stage2(model->model_id, &model->mvtvm.metadata); + if (ret != 0) { + plt_err("TVMDP: Failed to get metadata, model_id = %u, error = %d\n", + model->model_id, ret); + goto error; + } + return 0; + +error: + rte_memzone_free(mz); + + return ret; } From patchwork Wed Sep 20 07:25:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131700 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CA4D5425E4; Wed, 20 Sep 2023 09:28:30 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9ED0A42DA6; Wed, 20 Sep 2023 09:26:07 +0200 (CEST) Received: from 
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 24/34] ml/cnxk: update internal info for TVM model
Date: Wed, 20 Sep 2023 00:25:15 -0700
Message-ID: <20230920072528.14185-25-syalavarthi@marvell.com>
In-Reply-To:
<20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: 8XRpyzE9tAwbt-sovVC8-D2VPjRmO7dC X-Proofpoint-GUID: 8XRpyzE9tAwbt-sovVC8-D2VPjRmO7dC X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-20_02,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Enabled updating internal IO info structures for TVM model. Compute static fields related to the model I/O. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/mvtvm_ml_model.c | 105 +++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_model.h | 1 + drivers/ml/cnxk/mvtvm_ml_ops.c | 3 + 3 files changed, 109 insertions(+) diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c index 425a682209..86f465a645 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.c +++ b/drivers/ml/cnxk/mvtvm_ml_model.c @@ -7,10 +7,14 @@ #include +#include + #include #include "mvtvm_ml_model.h" +#include "cnxk_ml_model.h" + /* Objects list */ char mvtvm_object_list[ML_MVTVM_MODEL_OBJECT_MAX][RTE_ML_STR_MAX] = {"mod.so", "mod.json", "mod.params"}; @@ -71,3 +75,104 @@ mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, struct mvtvm_ml_mo return -EINVAL; } + +static enum rte_ml_io_type +mvtvm_ml_io_type_map(uint8_t type) +{ + switch (type) { + case kDLInt: + return RTE_ML_IO_TYPE_INT32; + case kDLUInt: + return RTE_ML_IO_TYPE_UINT32; + case kDLFloat: + return RTE_ML_IO_TYPE_FP32; + case kDLBfloat: + return RTE_ML_IO_TYPE_BFLOAT16; + } + + return RTE_ML_IO_TYPE_UNKNOWN; +} + +void +mvtvm_ml_model_io_info_update(struct cnxk_ml_model *model) +{ + struct tvmdp_model_metadata *metadata; + int32_t i; + int32_t 
j; + + if (model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) + goto tvm_mrvl_model; + + metadata = &model->mvtvm.metadata; + + /* Inputs, set for layer_id = 0 */ + model->mvtvm.info.nb_inputs = metadata->model.num_input; + model->mvtvm.info.total_input_sz_d = 0; + model->mvtvm.info.total_input_sz_q = 0; + for (i = 0; i < metadata->model.num_input; i++) { + strncpy(model->mvtvm.info.input[i].name, metadata->input[i].name, + TVMDP_NAME_STRLEN); + model->mvtvm.info.input[i].dtype = + mvtvm_ml_io_type_map(metadata->input[i].datatype.code); + model->mvtvm.info.input[i].qtype = + mvtvm_ml_io_type_map(metadata->input[i].model_datatype.code); + model->mvtvm.info.input[i].nb_dims = metadata->input[i].ndim; + + model->mvtvm.info.input[i].nb_elements = 1; + for (j = 0; j < metadata->input[i].ndim; j++) { + model->mvtvm.info.input[i].shape[j] = metadata->input[i].shape[j]; + model->mvtvm.info.input[i].nb_elements *= metadata->input[i].shape[j]; + } + + model->mvtvm.info.input[i].sz_d = + model->mvtvm.info.input[i].nb_elements * + rte_ml_io_type_size_get(model->mvtvm.info.input[i].dtype); + model->mvtvm.info.input[i].sz_q = + model->mvtvm.info.input[i].nb_elements * + rte_ml_io_type_size_get(model->mvtvm.info.input[i].qtype); + + model->mvtvm.info.total_input_sz_d += model->mvtvm.info.input[i].sz_d; + model->mvtvm.info.total_input_sz_q += model->mvtvm.info.input[i].sz_q; + + plt_ml_dbg("model_id = %u, input[%u] - sz_d = %u sz_q = %u", model->model_id, i, + model->mvtvm.info.input[i].sz_d, model->mvtvm.info.input[i].sz_q); + } + + /* Outputs, set for nb_layers - 1 */ + model->mvtvm.info.nb_outputs = metadata->model.num_output; + model->mvtvm.info.total_output_sz_d = 0; + model->mvtvm.info.total_output_sz_q = 0; + for (i = 0; i < metadata->model.num_output; i++) { + strncpy(model->mvtvm.info.output[i].name, metadata->output[i].name, + TVMDP_NAME_STRLEN); + model->mvtvm.info.output[i].dtype = + mvtvm_ml_io_type_map(metadata->output[i].datatype.code); + 
model->mvtvm.info.output[i].qtype = + mvtvm_ml_io_type_map(metadata->output[i].model_datatype.code); + model->mvtvm.info.output[i].nb_dims = metadata->output[i].ndim; + + model->mvtvm.info.output[i].nb_elements = 1; + for (j = 0; j < metadata->output[i].ndim; j++) { + model->mvtvm.info.output[i].shape[j] = metadata->output[i].shape[j]; + model->mvtvm.info.output[i].nb_elements *= metadata->output[i].shape[j]; + } + + model->mvtvm.info.output[i].sz_d = + model->mvtvm.info.output[i].nb_elements * + rte_ml_io_type_size_get(model->mvtvm.info.output[i].dtype); + model->mvtvm.info.output[i].sz_q = + model->mvtvm.info.output[i].nb_elements * + rte_ml_io_type_size_get(model->mvtvm.info.output[i].qtype); + + model->mvtvm.info.total_output_sz_d += model->mvtvm.info.output[i].sz_d; + model->mvtvm.info.total_output_sz_q += model->mvtvm.info.output[i].sz_q; + + plt_ml_dbg("model_id = %u, output[%u] - sz_d = %u sz_q = %u", model->model_id, i, + model->mvtvm.info.output[i].sz_d, model->mvtvm.info.output[i].sz_q); + } + + return; + +tvm_mrvl_model: + cn10k_ml_layer_io_info_update(&model->mvtvm.info, &model->layer[0].glow.metadata); +} diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index 6c38217c15..2b25a7b568 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -47,5 +47,6 @@ struct mvtvm_ml_model_data { int mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, struct mvtvm_ml_model_object *object); +void mvtvm_ml_model_io_info_update(struct cnxk_ml_model *model); #endif /* _MVTVM_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index d9ec411385..1d585a57ff 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -179,6 +179,9 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * goto error; } + /* Update model I/O data */ + mvtvm_ml_model_io_info_update(model); + return 0; error: From patchwork Wed Sep 20 
07:25:16 2023
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 25/34] ml/cnxk: enable model unload in tvmdp library
Date: Wed, 20 Sep 2023 00:25:16 -0700
Message-ID: <20230920072528.14185-26-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>

Enabled model unload using the external TVMDP library. Updated the layer unload callback to support multiple layers.
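The unload path in this patch pairs `tvmdp_model_unload()` with freeing the model memzone, which it finds by rebuilding the per-model zone name from a prefix and the model id. The naming convention can be sketched standalone; the prefix value below is an assumption for illustration, not the driver's actual `MVTVM_ML_MODEL_MEMZONE_NAME` definition:

```c
#include <stddef.h>
#include <stdio.h>

/* Assumed prefix, for illustration only. */
#define MVTVM_ML_MODEL_MEMZONE_NAME "ml_mvtvm_model_mz"

/*
 * Build the memzone name the same "%s_%u" way mvtvm_ml_model_unload()
 * does before calling rte_memzone_lookup(); load and unload must agree
 * on this string or the lookup fails with -EINVAL.
 */
static void
model_mz_name(char *str, size_t len, unsigned int model_id)
{
	snprintf(str, len, "%s_%u", MVTVM_ML_MODEL_MEMZONE_NAME, model_id);
}
```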
Signed-off-by: Srikanth Yalavarthi Signed-off-by: Anup Prabhu --- drivers/ml/cnxk/cn10k_ml_ops.c | 20 ++++++++++++++++++++ drivers/ml/cnxk/cnxk_ml_ops.c | 9 +++++++-- drivers/ml/cnxk/mvtvm_ml_ops.c | 28 ++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_ops.h | 1 + 4 files changed, 56 insertions(+), 2 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 79217165cd..85d0a9e18b 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -725,7 +725,9 @@ cn10k_ml_layer_unload(void *device, uint16_t model_id, const char *layer_name) uint16_t layer_id = 0; int ret; +#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM PLT_SET_USED(layer_name); +#endif cnxk_mldev = (struct cnxk_ml_dev *)device; if (cnxk_mldev == NULL) { @@ -739,6 +741,24 @@ cn10k_ml_layer_unload(void *device, uint16_t model_id, const char *layer_name) return -EINVAL; } +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if (strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } + } +#endif layer = &model->layer[layer_id]; snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u_%u", CN10K_ML_LAYER_MEMZONE_NAME, diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 8e17f597af..512bac641e 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -1182,7 +1182,7 @@ cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id) struct cnxk_ml_model *model; char str[RTE_MEMZONE_NAMESIZE]; - int ret; + int ret = 0; if (dev == NULL) return -EINVAL; @@ -1200,7 +1200,12 @@ cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t 
model_id) return -EBUSY; } - ret = cn10k_ml_model_unload(cnxk_mldev, model); + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) + ret = cn10k_ml_model_unload(cnxk_mldev, model); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + ret = mvtvm_ml_model_unload(cnxk_mldev, model); +#endif if (ret != 0) return ret; diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index 1d585a57ff..073773e409 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -189,3 +189,31 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * return ret; } + +int +mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) +{ + char str[RTE_MEMZONE_NAMESIZE]; + const struct plt_memzone *mz; + int ret; + + RTE_SET_USED(cnxk_mldev); + + /* Initialize model in TVMDP */ + ret = tvmdp_model_unload(model->model_id); + if (ret != 0) { + plt_err("TVMDP: Model unload failed, model_id = %u, error = %d", model->model_id, + ret); + return ret; + } + + snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", MVTVM_ML_MODEL_MEMZONE_NAME, model->model_id); + mz = rte_memzone_lookup(str); + if (mz == NULL) { + plt_err("Memzone lookup failed for TVM model: model_id = %u, mz = %s", + model->model_id, str); + return -EINVAL; + } + + return plt_memzone_free(mz); +} diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h index 6607537599..770794fe7d 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.h +++ b/drivers/ml/cnxk/mvtvm_ml_ops.h @@ -18,5 +18,6 @@ int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_d int mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); int mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, struct cnxk_ml_model *model); +int mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); #endif /* _MVTVM_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v2 26/34] ml/cnxk: support start and stop for TVM models
Date: Wed, 20 Sep 2023 00:25:17 -0700
Message-ID: <20230920072528.14185-27-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>

Added support to start and stop TVM models. Starting a TVM model invokes layer start for every Glow layer that is part of the model; stopping it invokes layer stop for those layers.
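The per-layer walk described above (written with a `next_layer:` goto in the patch) can be sketched as a plain loop over the layer table with a start callback. The types and the callback signature here are illustrative stand-ins, not the driver's real `cn10k_ml_layer_start()` prototype:

```c
/* Illustrative stand-ins for the driver's layer bookkeeping. */
enum layer_type { LAYER_TYPE_MRVL, LAYER_TYPE_LLVM };

struct layer {
	const char *name;
	enum layer_type type;
};

typedef int (*layer_start_fn)(const char *layer_name);

/*
 * Walk all layers of a model: only MRVL (Glow) layers need a start on
 * the accelerator, LLVM layers execute on the cores and are skipped.
 * Abort on the first layer whose start fails, as the driver does.
 */
static int
model_start(const struct layer *layers, int nb_layers, layer_start_fn start)
{
	int layer_id;
	int ret;

	for (layer_id = 0; layer_id < nb_layers; layer_id++) {
		if (layers[layer_id].type != LAYER_TYPE_MRVL)
			continue;
		ret = start(layers[layer_id].name);
		if (ret != 0)
			return ret;
	}

	return 0;
}
```

Stop follows the same shape with a stop callback; in the driver both loops log the failing `model_id`/`layer_name` before returning.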
Signed-off-by: Srikanth Yalavarthi Signed-off-by: Anup Prabhu --- drivers/ml/cnxk/cn10k_ml_ops.c | 42 +++++++++++++++++++++++++++ drivers/ml/cnxk/cnxk_ml_ops.c | 18 ++++++++++-- drivers/ml/cnxk/mvtvm_ml_ops.c | 52 ++++++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_ops.h | 2 ++ 4 files changed, 112 insertions(+), 2 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 85d0a9e18b..f70383b128 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -798,7 +798,9 @@ cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name) bool locked; int ret = 0; +#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM PLT_SET_USED(layer_name); +#endif cnxk_mldev = (struct cnxk_ml_dev *)device; if (cnxk_mldev == NULL) { @@ -812,6 +814,25 @@ cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name) return -EINVAL; } +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if (strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } + } +#endif + layer = &model->layer[layer_id]; cn10k_mldev = &cnxk_mldev->cn10k_mldev; ocm = &cn10k_mldev->ocm; @@ -981,7 +1002,9 @@ cn10k_ml_layer_stop(void *device, uint16_t model_id, const char *layer_name) bool locked; int ret = 0; +#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM PLT_SET_USED(layer_name); +#endif cnxk_mldev = (struct cnxk_ml_dev *)device; if (cnxk_mldev == NULL) { @@ -995,6 +1018,25 @@ cn10k_ml_layer_stop(void *device, uint16_t model_id, const char *layer_name) return -EINVAL; } +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { 
+ for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if (strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } + } +#endif + layer = &model->layer[layer_id]; cn10k_mldev = &cnxk_mldev->cn10k_mldev; ocm = &cn10k_mldev->ocm; diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 512bac641e..1e567ad45c 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -1233,7 +1233,14 @@ cnxk_ml_model_start(struct rte_ml_dev *dev, uint16_t model_id) return -EINVAL; } - return cn10k_ml_model_start(cnxk_mldev, model); + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) + return cn10k_ml_model_start(cnxk_mldev, model); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + return mvtvm_ml_model_start(cnxk_mldev, model); +#endif + + return 0; } int @@ -1253,7 +1260,14 @@ cnxk_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id) return -EINVAL; } - return cn10k_ml_model_stop(cnxk_mldev, model); + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) + return cn10k_ml_model_stop(cnxk_mldev, model); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + return mvtvm_ml_model_stop(cnxk_mldev, model); +#endif + + return 0; } static int diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index 073773e409..4015374b0d 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -217,3 +217,55 @@ mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *mode return plt_memzone_free(mz); } + +int +mvtvm_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) +{ + struct cnxk_ml_layer *layer; + + uint16_t layer_id = 0; + int ret = 0; + +next_layer: + layer = 
&model->layer[layer_id]; + if (layer->type == ML_CNXK_LAYER_TYPE_MRVL) { + ret = cn10k_ml_layer_start(cnxk_mldev, model->model_id, layer->name); + if (ret != 0) { + plt_err("Layer start failed, model_id = %u, layer_name = %s, error = %d", + model->model_id, layer->name, ret); + return ret; + } + } + layer_id++; + + if (layer_id < model->nb_layers) + goto next_layer; + + return 0; +} + +int +mvtvm_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) +{ + struct cnxk_ml_layer *layer; + + uint16_t layer_id = 0; + int ret = 0; + +next_layer: + layer = &model->layer[layer_id]; + if (layer->type == ML_CNXK_LAYER_TYPE_MRVL) { + ret = cn10k_ml_layer_stop(cnxk_mldev, model->model_id, layer->name); + if (ret != 0) { + plt_err("Layer stop failed, model_id = %u, layer_name = %s, error = %d", + model->model_id, layer->name, ret); + return ret; + } + } + layer_id++; + + if (layer_id < model->nb_layers) + goto next_layer; + + return 0; +} diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h index 770794fe7d..55459f9f7f 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.h +++ b/drivers/ml/cnxk/mvtvm_ml_ops.h @@ -19,5 +19,7 @@ int mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); int mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, struct cnxk_ml_model *model); int mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); +int mvtvm_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); +int mvtvm_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); #endif /* _MVTVM_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131701 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org 
(mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F0383425E4; Wed, 20 Sep 2023 09:28:36 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CA84742DB2; Wed, 20 Sep 2023 09:26:08 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 769BD410D5 for ; Wed, 20 Sep 2023 09:25:43 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38K7JW14008355 for ; Wed, 20 Sep 2023 00:25:43 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=r15Vs4k4rEN5/gl2EXeFuie+oK7uNOBo/Jn8qF9BmSQ=; b=K7i7vDuyq5EBLhHJibaMNDuB6HG/wWPZ6TP3PuDl8Nqo8lTZL4f9yv2JVONkhblIgpaT t5gSfn6zbpsM4JLdqee6VZYXq63Tun1LSUGFdnzC7XxCh1nuzdyAiCTXbKhuO0z41FME HlNJnImTDznD7PnaSwYdYgITQOsHM66u7L6LSBZOjdcIPu7q3nTmq2YBkUdnycHKmiHL BrACKE3NyPEKAiyP5B3PW+J2751pZy1kN46cmtmrDkw4xCy5ZqxmYCHeov2B4xEZF8mF xKjfvsmaXahOepNrAlkkoXX6is/aaKn+3afrEY4K9gB+2Bh4xh/O5I0n53XG66FFQJNF 1w== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3t7htasykw-12 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 20 Sep 2023 00:25:42 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 20 Sep 2023 00:25:41 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 20 Sep 2023 00:25:40 -0700 Received: from ml-host-33.caveonetworks.com (unknown [10.110.143.233]) by maili.marvell.com (Postfix) with ESMTP id D99D55B6927; Wed, 20 Sep 2023 
From: Srikanth Yalavarthi To: Srikanth Yalavarthi CC: , , , Subject: [PATCH v2 27/34] ml/cnxk: update internal TVM model info structure Date: Wed, 20 Sep 2023 00:25:18 -0700 Message-ID: <20230920072528.14185-28-syalavarthi@marvell.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> From: Prince Takkar Added support to update internal model info structure for TVM models.
Signed-off-by: Prince Takkar Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/mvtvm_ml_model.c | 65 ++++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_model.h | 2 + drivers/ml/cnxk/mvtvm_ml_ops.c | 3 ++ 3 files changed, 70 insertions(+) diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c index 86f465a645..8c04d4652f 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.c +++ b/drivers/ml/cnxk/mvtvm_ml_model.c @@ -13,6 +13,7 @@ #include "mvtvm_ml_model.h" +#include "cnxk_ml_dev.h" #include "cnxk_ml_model.h" /* Objects list */ @@ -176,3 +177,67 @@ mvtvm_ml_model_io_info_update(struct cnxk_ml_model *model) tvm_mrvl_model: cn10k_ml_layer_io_info_update(&model->mvtvm.info, &model->layer[0].glow.metadata); } + +void +mvtvm_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) +{ + struct tvmdp_model_metadata *metadata; + struct rte_ml_model_info *info; + struct rte_ml_io_info *output; + struct rte_ml_io_info *input; + uint8_t i; + + info = PLT_PTR_CAST(model->info); + input = PLT_PTR_ADD(info, sizeof(struct rte_ml_model_info)); + output = PLT_PTR_ADD(input, ML_CNXK_MODEL_MAX_INPUT_OUTPUT * sizeof(struct rte_ml_io_info)); + + /* Reset model info */ + memset(info, 0, sizeof(struct rte_ml_model_info)); + + if (model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) + goto tvm_mrvl_model; + + metadata = &model->mvtvm.metadata; + rte_memcpy(info->name, metadata->model.name, TVMDP_NAME_STRLEN); + snprintf(info->version, RTE_ML_STR_MAX, "%u.%u.%u.%u", metadata->model.version[0], + metadata->model.version[1], metadata->model.version[2], + metadata->model.version[3]); + info->model_id = model->model_id; + info->device_id = cnxk_mldev->mldev->data->dev_id; + info->io_layout = RTE_ML_IO_LAYOUT_SPLIT; + info->min_batches = model->batch_size; + info->max_batches = model->batch_size; + info->nb_inputs = metadata->model.num_input; + info->input_info = input; + info->nb_outputs = metadata->model.num_output; + info->output_info = output; 
+ info->wb_size = 0; + + /* Set input info */ + for (i = 0; i < info->nb_inputs; i++) { + rte_memcpy(input[i].name, metadata->input[i].name, MRVL_ML_INPUT_NAME_LEN); + input[i].nb_dims = metadata->input[i].ndim; + input[i].shape = &model->mvtvm.info.input[i].shape[0]; + input[i].type = model->mvtvm.info.input[i].qtype; + input[i].nb_elements = model->mvtvm.info.input[i].nb_elements; + input[i].size = model->mvtvm.info.input[i].nb_elements * + rte_ml_io_type_size_get(model->mvtvm.info.input[i].qtype); + } + + /* Set output info */ + for (i = 0; i < info->nb_outputs; i++) { + rte_memcpy(output[i].name, metadata->output[i].name, MRVL_ML_OUTPUT_NAME_LEN); + output[i].nb_dims = metadata->output[i].ndim; + output[i].shape = &model->mvtvm.info.output[i].shape[0]; + output[i].type = model->mvtvm.info.output[i].qtype; + output[i].nb_elements = model->mvtvm.info.output[i].nb_elements; + output[i].size = model->mvtvm.info.output[i].nb_elements * + rte_ml_io_type_size_get(model->mvtvm.info.output[i].qtype); + } + + return; + +tvm_mrvl_model: + cn10k_ml_model_info_set(cnxk_mldev, model, &model->mvtvm.info, + &model->layer[0].glow.metadata); +} diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index 2b25a7b568..eef424b5c2 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -11,6 +11,7 @@ #include "cnxk_ml_io.h" +struct cnxk_ml_dev; struct cnxk_ml_model; /* Maximum number of objects per model */ @@ -48,5 +49,6 @@ struct mvtvm_ml_model_data { int mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, struct mvtvm_ml_model_object *object); void mvtvm_ml_model_io_info_update(struct cnxk_ml_model *model); +void mvtvm_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); #endif /* _MVTVM_ML_MODEL_H_ */ diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index 4015374b0d..213151e68b 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c 
@@ -182,6 +182,9 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * /* Update model I/O data */ mvtvm_ml_model_io_info_update(model); + /* Set model info */ + mvtvm_ml_model_info_set(cnxk_mldev, model); + return 0; error: From patchwork Wed Sep 20 07:25:19 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131702 X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi To: Srikanth Yalavarthi CC: , , , Subject: [PATCH v2 28/34] ml/cnxk: support device dump for TVM models Date: Wed, 20 Sep 2023 00:25:19 -0700 Message-ID: <20230920072528.14185-29-syalavarthi@marvell.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> Enabled support to print TVM model layer info.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cnxk_ml_model.c | 9 ++++- drivers/ml/cnxk/cnxk_ml_ops.c | 1 + drivers/ml/cnxk/mvtvm_ml_model.c | 59 ++++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_model.h | 2 ++ 4 files changed, 70 insertions(+), 1 deletion(-) diff --git a/drivers/ml/cnxk/cnxk_ml_model.c b/drivers/ml/cnxk/cnxk_ml_model.c index 746d3ca5a9..e63ee58ab2 100644 --- a/drivers/ml/cnxk/cnxk_ml_model.c +++ b/drivers/ml/cnxk/cnxk_ml_model.c @@ -115,6 +115,8 @@ cnxk_ml_model_dump(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, cnxk_ml_print_line(fp, LINE_LEN); fprintf(fp, "%*s : %u\n", FIELD_LEN, "model_id", model->model_id); fprintf(fp, "%*s : %s\n", FIELD_LEN, "name", model->name); + fprintf(fp, "%*s : %d\n", FIELD_LEN, "type", model->type); + fprintf(fp, "%*s : %d\n", FIELD_LEN, "subtype", model->subtype); fprintf(fp, "%*s : 0x%016lx\n", FIELD_LEN, "model", PLT_U64_CAST(model)); fprintf(fp, "%*s : %u\n", FIELD_LEN, "batch_size", model->batch_size); fprintf(fp, "%*s : %u\n", FIELD_LEN, "nb_layers", model->nb_layers); @@ -131,6 +133,11 @@ cnxk_ml_model_dump(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model, for (layer_id = 0; layer_id < model->nb_layers; layer_id++) { layer = &model->layer[layer_id]; - cn10k_ml_layer_print(cnxk_mldev, layer, fp); + if (layer->type == ML_CNXK_LAYER_TYPE_MRVL) + cn10k_ml_layer_print(cnxk_mldev, layer, fp); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + mvtvm_ml_layer_print(cnxk_mldev, layer, fp); +#endif } } diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 1e567ad45c..361184620b 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -18,6 +18,7 @@ #include "cnxk_ml_io.h" #include "cnxk_ml_model.h" #include "cnxk_ml_ops.h" +#include "cnxk_ml_utils.h" /* ML model macros */ #define CNXK_ML_MODEL_MEMZONE_NAME "ml_cnxk_model_mz" diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c index 8c04d4652f..7086c7a407 
100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.c +++ b/drivers/ml/cnxk/mvtvm_ml_model.c @@ -15,6 +15,7 @@ #include "cnxk_ml_dev.h" #include "cnxk_ml_model.h" +#include "cnxk_ml_utils.h" /* Objects list */ char mvtvm_object_list[ML_MVTVM_MODEL_OBJECT_MAX][RTE_ML_STR_MAX] = {"mod.so", "mod.json", @@ -241,3 +242,61 @@ mvtvm_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *mo cn10k_ml_model_info_set(cnxk_mldev, model, &model->mvtvm.info, &model->layer[0].glow.metadata); } + +void +mvtvm_ml_layer_print(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, FILE *fp) +{ + char str[STR_LEN]; + uint8_t i; + + /* Print debug info */ + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, " Layer Information (Layer ID: %u, Name: %s)\n", + cnxk_mldev->index_map[layer->index].layer_id, layer->name); + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "layer_id", + cnxk_mldev->index_map[layer->index].layer_id); + fprintf(fp, "%*s : %s\n", FIELD_LEN, "name", layer->name); + fprintf(fp, "%*s : %d\n", FIELD_LEN, "type", layer->type); + fprintf(fp, "%*s : 0x%016lx\n", FIELD_LEN, "layer", PLT_U64_CAST(layer)); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "batch_size", layer->batch_size); + + /* Print model state */ + if (layer->state == ML_CNXK_LAYER_STATE_LOADED) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "loaded"); + if (layer->state == ML_CNXK_LAYER_STATE_JOB_ACTIVE) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "job_active"); + if (layer->state == ML_CNXK_LAYER_STATE_STARTED) + fprintf(fp, "%*s : %s\n", FIELD_LEN, "state", "started"); + + fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_inputs", layer->info.nb_inputs); + fprintf(fp, "%*s : %u\n", FIELD_LEN, "num_outputs", layer->info.nb_outputs); + fprintf(fp, "\n"); + + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "%8s %16s %12s\n", "input", "input_name", "input_type"); + cnxk_ml_print_line(fp, LINE_LEN); + for (i = 0; i < layer->info.nb_inputs; i++) { + fprintf(fp, "%8u ", i); + 
fprintf(fp, "%*s ", 16, layer->info.input[i].name); + rte_ml_io_type_to_str(layer->info.input[i].qtype, str, STR_LEN); + fprintf(fp, "%*s ", 12, str); + } + fprintf(fp, "\n"); + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "\n"); + + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "%8s %16s %12s\n", "output", "output_name", "output_type"); + cnxk_ml_print_line(fp, LINE_LEN); + for (i = 0; i < layer->info.nb_outputs; i++) { + fprintf(fp, "%8u ", i); + fprintf(fp, "%*s ", 16, layer->info.output[i].name); + rte_ml_io_type_to_str(layer->info.output[i].qtype, str, STR_LEN); + fprintf(fp, "%*s ", 12, str); + fprintf(fp, "\n"); + } + fprintf(fp, "\n"); + cnxk_ml_print_line(fp, LINE_LEN); + fprintf(fp, "\n"); +} diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index eef424b5c2..fa7735cfaa 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -13,6 +13,7 @@ struct cnxk_ml_dev; struct cnxk_ml_model; +struct cnxk_ml_layer; /* Maximum number of objects per model */ #define ML_MVTVM_MODEL_OBJECT_MAX 3 @@ -50,5 +51,6 @@ int mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, struct mvtvm_ml_model_object *object); void mvtvm_ml_model_io_info_update(struct cnxk_ml_model *model); void mvtvm_ml_model_info_set(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); +void mvtvm_ml_layer_print(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_layer *layer, FILE *fp); #endif /* _MVTVM_ML_MODEL_H_ */ From patchwork Wed Sep 20 07:25:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131703 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 66A53425E4; Wed, 20 Sep 2023 09:28:50 +0200 (CEST) Received: from 
From: Srikanth Yalavarthi To: Srikanth Yalavarthi CC: , , , Subject: [PATCH v2 29/34] ml/cnxk: enable reporting model runtime
as xstats Date: Wed, 20 Sep 2023 00:25:20 -0700 Message-ID: <20230920072528.14185-30-syalavarthi@marvell.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com> References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com> Added model xstats entries to compute runtime latency. Allocated internal resources for TVM model xstats. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cnxk_ml_ops.c | 200 ++++++++++++++++++++++++++++--- drivers/ml/cnxk/cnxk_ml_ops.h | 1 + drivers/ml/cnxk/cnxk_ml_xstats.h | 7 ++ drivers/ml/cnxk/mvtvm_ml_model.h | 24 ++ drivers/ml/cnxk/mvtvm_ml_ops.c | 24 +++- 5 files changed, 238 insertions(+), 18 deletions(-) diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 361184620b..f281e6070f 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -146,7 +146,8 @@ cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev) /* Allocate memory for xstats entries.
Don't allocate during reconfigure */ nb_stats = RTE_DIM(device_xstats) + - RTE_DIM(layer_xstats) * ML_CNXK_MAX_MODELS * ML_CNXK_MODEL_MAX_LAYERS; + RTE_DIM(layer_xstats) * ML_CNXK_MAX_MODELS * ML_CNXK_MODEL_MAX_LAYERS + + RTE_DIM(model_xstats) * ML_CNXK_MAX_MODELS; if (cnxk_mldev->xstats.entries == NULL) cnxk_mldev->xstats.entries = rte_zmalloc( "cnxk_ml_xstats", sizeof(struct cnxk_ml_xstats_entry) * nb_stats, @@ -177,6 +178,25 @@ cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev) for (model = 0; model < ML_CNXK_MAX_MODELS; model++) { cnxk_mldev->xstats.offset_for_model[model] = stat_id; + for (i = 0; i < RTE_DIM(model_xstats); i++) { + cnxk_mldev->xstats.entries[stat_id].map.id = stat_id; + cnxk_mldev->xstats.entries[stat_id].mode = RTE_ML_DEV_XSTATS_MODEL; + cnxk_mldev->xstats.entries[stat_id].group = CNXK_ML_XSTATS_GROUP_MODEL; + cnxk_mldev->xstats.entries[stat_id].type = model_xstats[i].type; + cnxk_mldev->xstats.entries[stat_id].fn_id = CNXK_ML_XSTATS_FN_MODEL; + cnxk_mldev->xstats.entries[stat_id].obj_idx = model; + cnxk_mldev->xstats.entries[stat_id].layer_id = -1; + cnxk_mldev->xstats.entries[stat_id].reset_allowed = + model_xstats[i].reset_allowed; + + /* Name of xstat is updated during model load */ + snprintf(cnxk_mldev->xstats.entries[stat_id].map.name, + sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), + "Model-%u-%s", model, model_xstats[i].name); + + stat_id++; + } + for (layer = 0; layer < ML_CNXK_MODEL_MAX_LAYERS; layer++) { cnxk_mldev->xstats.offset_for_layer[model][layer] = stat_id; @@ -203,7 +223,8 @@ cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev) cnxk_mldev->xstats.count_per_layer[model][layer] = RTE_DIM(layer_xstats); } - cnxk_mldev->xstats.count_per_model[model] = RTE_DIM(layer_xstats); + cnxk_mldev->xstats.count_per_model[model] = + RTE_DIM(layer_xstats) + ML_CNXK_MODEL_MAX_LAYERS * RTE_DIM(model_xstats); } cnxk_mldev->xstats.count_mode_model = stat_id - cnxk_mldev->xstats.count_mode_device; @@ -212,6 +233,42 @@ 
cnxk_ml_xstats_init(struct cnxk_ml_dev *cnxk_mldev) return 0; } +void +cnxk_ml_xstats_model_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id) +{ + struct cnxk_ml_model *model; + uint16_t rclk_freq; + uint16_t sclk_freq; + uint16_t stat_id; + char suffix[8]; + uint16_t i; + + model = cnxk_mldev->mldev->data->models[model_id]; + stat_id = cnxk_mldev->xstats.offset_for_model[model_id]; + + roc_clk_freq_get(&rclk_freq, &sclk_freq); + if (sclk_freq == 0) + strcpy(suffix, "cycles"); + else + strcpy(suffix, "ns"); + + /* Update xstat name based on layer name and sclk availability */ + for (i = 0; i < RTE_DIM(model_xstats); i++) { + if (model->type == ML_CNXK_MODEL_TYPE_GLOW) + snprintf(cnxk_mldev->xstats.entries[stat_id].map.name, + sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s", + model->glow.metadata.model.name, model_xstats[i].name, suffix); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + snprintf(cnxk_mldev->xstats.entries[stat_id].map.name, + sizeof(cnxk_mldev->xstats.entries[stat_id].map.name), "%s-%s-%s", + model->mvtvm.metadata.model.name, model_xstats[i].name, suffix); +#endif + + stat_id++; + } +} + static void cnxk_ml_xstats_uninit(struct cnxk_ml_dev *cnxk_mldev) { @@ -249,6 +306,9 @@ cnxk_ml_dev_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx __rte_unu count += layer->glow.burst_xstats[qp_id].dequeued_count - \ layer->glow.burst_xstats[qp_id].str##_reset_count; \ } \ + value += layer->glow.sync_xstats->str##_latency_tot; \ + count += layer->glow.sync_xstats->dequeued_count - \ + layer->glow.sync_xstats->str##_reset_count; \ if (count != 0) \ value = value / count; \ } while (0) @@ -261,6 +321,9 @@ cnxk_ml_dev_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx __rte_unu count += layer->glow.burst_xstats[qp_id].dequeued_count - \ layer->glow.burst_xstats[qp_id].str##_reset_count; \ } \ + value = PLT_MIN(value, layer->glow.sync_xstats->str##_latency_min); \ + count += layer->glow.sync_xstats->dequeued_count - \ + 
layer->glow.sync_xstats->str##_reset_count; \ if (count == 0) \ value = 0; \ } while (0) @@ -273,9 +336,52 @@ cnxk_ml_dev_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx __rte_unu count += layer->glow.burst_xstats[qp_id].dequeued_count - \ layer->glow.burst_xstats[qp_id].str##_reset_count; \ } \ + value = PLT_MAX(value, layer->glow.sync_xstats->str##_latency_max); \ + count += layer->glow.sync_xstats->dequeued_count - \ + layer->glow.sync_xstats->str##_reset_count; \ + if (count == 0) \ + value = 0; \ + } while (0) + +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#define ML_AVG_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count) \ + do { \ + value = 0; \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \ + value += model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_tot; \ + count += model->mvtvm.burst_xstats[qp_id].dequeued_count - \ + model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count; \ + } \ + if (count != 0) \ + value = value / count; \ + } while (0) + +#define ML_MIN_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count) \ + do { \ + value = UINT64_MAX; \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \ + value = PLT_MIN(value, \ + model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_min); \ + count += model->mvtvm.burst_xstats[qp_id].dequeued_count - \ + model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count; \ + } \ + if (count == 0) \ + value = 0; \ + } while (0) + +#define ML_MAX_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count) \ + do { \ + value = 0; \ + for (qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { \ + value = PLT_MAX(value, \ + model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_max); \ + count += model->mvtvm.burst_xstats[qp_id].dequeued_count - \ + model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count; \ + } \ if (count == 0) \ value = 0; \ } while (0) +#endif static uint64_t cnxk_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx, int32_t 
layer_id, @@ -293,11 +399,15 @@ cnxk_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx, int32_ if (model == NULL) return 0; - if (layer_id >= 0) + if (layer_id >= 0) { layer = &model->layer[layer_id]; - else - return 0; + goto layer_xstats; + } else { + layer = NULL; + goto model_xstats; + } +layer_xstats: switch (type) { case avg_hw_latency: ML_AVG_FOREACH_QP(cnxk_mldev, layer, qp_id, hw, value, count); @@ -320,7 +430,26 @@ cnxk_ml_model_xstat_get(struct cnxk_ml_dev *cnxk_mldev, uint16_t obj_idx, int32_ default: value = 0; } + goto exit_xstats; +model_xstats: +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + switch (type) { + case avg_rt_latency: + ML_AVG_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count); + break; + case min_rt_latency: + ML_MIN_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count); + break; + case max_rt_latency: + ML_MAX_FOREACH_QP_MVTVM(cnxk_mldev, model, qp_id, value, count); + break; + default: + value = 0; + } +#endif + +exit_xstats: roc_clk_freq_get(&rclk_freq, &sclk_freq); if (sclk_freq != 0) /* return in ns */ value = (value * 1000ULL) / sclk_freq; @@ -907,8 +1036,9 @@ cnxk_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode { struct cnxk_ml_xstats_entry *xs; struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; uint32_t xstats_mode_count; - uint16_t layer_id = 0; + uint16_t layer_id; uint32_t idx = 0; uint32_t i; @@ -925,7 +1055,17 @@ cnxk_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode case RTE_ML_DEV_XSTATS_MODEL: if (model_id >= ML_CNXK_MAX_MODELS) break; - xstats_mode_count = cnxk_mldev->xstats.count_per_layer[model_id][layer_id]; + + model = cnxk_mldev->mldev->data->models[model_id]; + for (layer_id = 0; layer_id < model->nb_layers; layer_id++) { + if (model->layer[layer_id].type == ML_CNXK_LAYER_TYPE_MRVL) + xstats_mode_count += + cnxk_mldev->xstats.count_per_layer[model_id][layer_id]; + } + + if ((model->type == ML_CNXK_MODEL_TYPE_TVM) && + 
(model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL)) + xstats_mode_count += RTE_DIM(model_xstats); break; default: return -EINVAL; @@ -939,9 +1079,20 @@ cnxk_ml_dev_xstats_names_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode if (xs->mode != mode) continue; - if (mode == RTE_ML_DEV_XSTATS_MODEL && - (model_id != xs->obj_idx || layer_id != xs->layer_id)) - continue; + if (mode == RTE_ML_DEV_XSTATS_MODEL) { + if (model_id != xs->obj_idx) + continue; + + model = cnxk_mldev->mldev->data->models[model_id]; + if ((model->type == ML_CNXK_MODEL_TYPE_GLOW || + model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) && + xs->group == CNXK_ML_XSTATS_GROUP_MODEL) + continue; + + if (model->type == ML_CNXK_MODEL_TYPE_TVM && + model->layer[xs->layer_id].type == ML_CNXK_LAYER_TYPE_LLVM) + continue; + } strncpy(xstats_map[idx].name, xs->map.name, RTE_ML_STR_MAX); xstats_map[idx].id = xs->map.id; @@ -1002,9 +1153,10 @@ cnxk_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, { struct cnxk_ml_xstats_entry *xs; struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; uint32_t xstats_mode_count; - uint16_t layer_id = 0; cnxk_ml_xstats_fn fn; + uint16_t layer_id; uint64_t val; uint32_t idx; uint32_t i; @@ -1022,7 +1174,14 @@ cnxk_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, case RTE_ML_DEV_XSTATS_MODEL: if (model_id >= ML_CNXK_MAX_MODELS) return -EINVAL; - xstats_mode_count = cnxk_mldev->xstats.count_per_layer[model_id][layer_id]; + + model = cnxk_mldev->mldev->data->models[model_id]; + for (layer_id = 0; layer_id < model->nb_layers; layer_id++) + xstats_mode_count += cnxk_mldev->xstats.count_per_layer[model_id][layer_id]; + + if ((model->type == ML_CNXK_MODEL_TYPE_TVM) && + (model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL)) + xstats_mode_count += RTE_DIM(model_xstats); break; default: return -EINVAL; @@ -1034,11 +1193,18 @@ cnxk_ml_dev_xstats_get(struct rte_ml_dev *dev, enum rte_ml_dev_xstats_mode mode, if (stat_ids[i] > 
cnxk_mldev->xstats.count || xs->mode != mode) continue; - if (mode == RTE_ML_DEV_XSTATS_MODEL && - (model_id != xs->obj_idx || layer_id != xs->layer_id)) { - plt_err("Invalid stats_id[%d] = %d for model_id = %d\n", i, stat_ids[i], - model_id); - return -EINVAL; + if (mode == RTE_ML_DEV_XSTATS_MODEL) { + if (model_id != xs->obj_idx) + continue; + + model = cnxk_mldev->mldev->data->models[xs->obj_idx]; + if ((model->type == ML_CNXK_MODEL_TYPE_GLOW || + model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) && + xs->group == CNXK_ML_XSTATS_GROUP_MODEL) + continue; + + if (xs->layer_id == -1 && xs->group == CNXK_ML_XSTATS_GROUP_LAYER) + continue; } switch (xs->fn_id) { diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index d0c126f34b..2575f4c6e1 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -64,6 +64,7 @@ extern struct rte_ml_dev_ops cnxk_ml_ops; int cnxk_ml_model_unload(struct rte_ml_dev *dev, uint16_t model_id); int cnxk_ml_model_stop(struct rte_ml_dev *dev, uint16_t model_id); +void cnxk_ml_xstats_model_name_update(struct cnxk_ml_dev *cnxk_mldev, uint16_t model_id); __rte_hot uint16_t cnxk_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op **ops, uint16_t nb_ops); diff --git a/drivers/ml/cnxk/cnxk_ml_xstats.h b/drivers/ml/cnxk/cnxk_ml_xstats.h index 5e02bb876c..a2c9adfe4a 100644 --- a/drivers/ml/cnxk/cnxk_ml_xstats.h +++ b/drivers/ml/cnxk/cnxk_ml_xstats.h @@ -142,4 +142,11 @@ static const struct cnxk_ml_xstat_info layer_xstats[] = { {"Min-FW-Latency", min_fw_latency, 1}, {"Max-FW-Latency", max_fw_latency, 1}, }; +/* Model xstats */ +static const struct cnxk_ml_xstat_info model_xstats[] = { + {"Avg-RT-Latency", avg_rt_latency, 1}, + {"Min-RT-Latency", min_rt_latency, 1}, + {"Max-RT-Latency", max_rt_latency, 1}, +}; + #endif /* _CNXK_ML_XSTATS_H_ */ diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index fa7735cfaa..d71df36f5a 100644 --- 
a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -33,6 +33,27 @@ struct mvtvm_ml_model_object { int64_t size; }; +/* Model fast-path stats */ +struct mvtvm_ml_model_xstats { + /* Total TVM runtime latency, sum of all inferences */ + uint64_t tvm_rt_latency_tot; + + /* TVM runtime latency */ + uint64_t tvm_rt_latency; + + /* Minimum TVM runtime latency */ + uint64_t tvm_rt_latency_min; + + /* Maximum TVM runtime latency */ + uint64_t tvm_rt_latency_max; + + /* Total jobs dequeued */ + uint64_t dequeued_count; + + /* Hardware stats reset index */ + uint64_t tvm_rt_reset_count; +}; + struct mvtvm_ml_model_data { /* Model metadata */ struct tvmdp_model_metadata metadata; @@ -45,6 +66,9 @@ struct mvtvm_ml_model_data { /* Model I/O info */ struct cnxk_ml_io_info info; + + /* Stats for burst ops */ + struct mvtvm_ml_model_xstats *burst_xstats; }; int mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index 213151e68b..d4518412be 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -14,6 +14,7 @@ #include "cnxk_ml_dev.h" #include "cnxk_ml_model.h" +#include "cnxk_ml_ops.h" /* ML model macros */ #define MVTVM_ML_MODEL_MEMZONE_NAME "ml_mvtvm_model_mz" @@ -57,6 +58,7 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * char str[RTE_MEMZONE_NAMESIZE]; const struct plt_memzone *mz; size_t model_object_size = 0; + size_t model_xstats_size = 0; uint16_t nb_mrvl_layers; uint16_t nb_llvm_layers; uint8_t layer_id = 0; @@ -72,7 +74,11 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * model_object_size = RTE_ALIGN_CEIL(object[0].size, RTE_CACHE_LINE_MIN_SIZE) + RTE_ALIGN_CEIL(object[1].size, RTE_CACHE_LINE_MIN_SIZE) + RTE_ALIGN_CEIL(object[2].size, RTE_CACHE_LINE_MIN_SIZE); - mz_size += model_object_size; + + model_xstats_size = + cnxk_mldev->mldev->data->nb_queue_pairs * 
sizeof(struct mvtvm_ml_model_xstats); + + mz_size += model_object_size + model_xstats_size; /* Allocate memzone for model object */ snprintf(str, RTE_MEMZONE_NAMESIZE, "%s_%u", MVTVM_ML_MODEL_MEMZONE_NAME, model->model_id); @@ -185,6 +191,22 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * /* Set model info */ mvtvm_ml_model_info_set(cnxk_mldev, model); + /* Update model xstats name */ + cnxk_ml_xstats_model_name_update(cnxk_mldev, model->model_id); + + model->mvtvm.burst_xstats = RTE_PTR_ADD( + model->mvtvm.object.params.addr, + RTE_ALIGN_CEIL(model->mvtvm.object.params.size, RTE_CACHE_LINE_MIN_SIZE)); + + for (int qp_id = 0; qp_id < cnxk_mldev->mldev->data->nb_queue_pairs; qp_id++) { + model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_tot = 0; + model->mvtvm.burst_xstats[qp_id].tvm_rt_latency = 0; + model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_min = UINT64_MAX; + model->mvtvm.burst_xstats[qp_id].tvm_rt_latency_max = 0; + model->mvtvm.burst_xstats[qp_id].tvm_rt_reset_count = 0; + model->mvtvm.burst_xstats[qp_id].dequeued_count = 0; + } + return 0; error:
From patchwork Wed Sep 20 07:25:21 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131704 X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 30/34] ml/cnxk: implement I/O alloc and free callbacks Date: Wed, 20 Sep 2023 00:25:21 -0700 Message-ID: <20230920072528.14185-31-syalavarthi@marvell.com> In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
Implemented callback functions for I/O allocation and free for Glow layers. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 123 +++++++++++++++++++++++++++++++++ drivers/ml/cnxk/cn10k_ml_ops.h | 3 + drivers/ml/cnxk/mvtvm_ml_ops.c | 2 + 3 files changed, 128 insertions(+) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index f70383b128..23e98b96c5 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -1399,3 +1399,126 @@ cn10k_ml_inference_sync(void *device, uint16_t index, void *input, void *output, error_enqueue: return ret; } + +int +cn10k_ml_io_alloc(void *device, uint16_t model_id, const char *layer_name, uint64_t **input_qbuffer, + uint64_t **output_qbuffer) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + struct cnxk_ml_layer *layer; + + char str[RTE_MEMZONE_NAMESIZE]; + const struct plt_memzone *mz; + uint16_t layer_id = 0; + uint64_t output_size; + uint64_t input_size; + +#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM + PLT_SET_USED(layer_name); +#endif + + cnxk_mldev = (struct cnxk_ml_dev *)device; + if (cnxk_mldev == NULL) { + plt_err("Invalid device = %p", cnxk_mldev); + return -EINVAL; + } + + model = cnxk_mldev->mldev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if
(strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } + } +#endif + + layer = &model->layer[layer_id]; + input_size = PLT_ALIGN_CEIL(layer->info.total_input_sz_q, ML_CN10K_ALIGN_SIZE); + output_size = PLT_ALIGN_CEIL(layer->info.total_output_sz_q, ML_CN10K_ALIGN_SIZE); + + sprintf(str, "cn10k_ml_io_mz_%u_%u", model_id, layer_id); + mz = plt_memzone_reserve_aligned(str, input_size + output_size, 0, ML_CN10K_ALIGN_SIZE); + if (mz == NULL) { + plt_err("io_alloc failed: Unable to allocate memory: model_id = %u, layer_name = %s", + model_id, layer_name); + return -ENOMEM; + } + + *input_qbuffer = mz->addr; + *output_qbuffer = PLT_PTR_ADD(mz->addr, input_size); + + return 0; +} + +int +cn10k_ml_io_free(void *device, uint16_t model_id, const char *layer_name) +{ + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + + char str[RTE_MEMZONE_NAMESIZE]; + const struct plt_memzone *mz; + uint16_t layer_id = 0; + +#ifndef RTE_MLDEV_CNXK_ENABLE_MVTVM + PLT_SET_USED(layer_name); +#endif + + cnxk_mldev = (struct cnxk_ml_dev *)device; + if (cnxk_mldev == NULL) { + plt_err("Invalid device = %p", cnxk_mldev); + return -EINVAL; + } + + model = cnxk_mldev->mldev->data->models[model_id]; + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } + +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + if (model->type == ML_CNXK_MODEL_TYPE_TVM) { + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if (strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != 
ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } + } +#endif + + sprintf(str, "cn10k_ml_io_mz_%u_%u", model_id, layer_id); + mz = plt_memzone_lookup(str); + if (mz == NULL) { + plt_err("io_free failed: Memzone not found: model_id = %u, layer_name = %s", + model_id, layer_name); + return -EINVAL; + } + + return plt_memzone_free(mz); +} diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 3e75cae65a..055651eaa2 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -328,5 +328,8 @@ int cn10k_ml_layer_load(void *device, uint16_t model_id, const char *layer_name, int cn10k_ml_layer_unload(void *device, uint16_t model_id, const char *layer_name); int cn10k_ml_layer_start(void *device, uint16_t model_id, const char *layer_name); int cn10k_ml_layer_stop(void *device, uint16_t model_id, const char *layer_name); +int cn10k_ml_io_alloc(void *device, uint16_t model_id, const char *layer_name, + uint64_t **input_qbuffer, uint64_t **output_qbuffer); +int cn10k_ml_io_free(void *device, uint16_t model_id, const char *layer_name); #endif /* _CN10K_ML_OPS_H_ */ diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index d4518412be..a41ba4d343 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -164,6 +164,8 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * callback = &model->mvtvm.cb; callback->tvmrt_glow_layer_load = cn10k_ml_layer_load; callback->tvmrt_glow_layer_unload = cn10k_ml_layer_unload; + callback->tvmrt_io_alloc = cn10k_ml_io_alloc; + callback->tvmrt_io_free = cn10k_ml_io_free; } else { callback = NULL; } From patchwork Wed Sep 20 07:25:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131705 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: 
From: Srikanth Yalavarthi Subject: [PATCH v2 31/34] ml/cnxk: add generic ML malloc and free callback Date: Wed, 20 Sep 2023 00:25:22 -0700 Message-ID: <20230920072528.14185-32-syalavarthi@marvell.com>
Implemented generic ML malloc and free callbacks. Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 30 ++++++++++++++++++++++++++++++ drivers/ml/cnxk/cn10k_ml_ops.h | 3 +++ drivers/ml/cnxk/mvtvm_ml_ops.c | 2 ++ 3 files changed, 35 insertions(+) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 23e98b96c5..140f7a343f 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -1522,3 +1522,33 @@ cn10k_ml_io_free(void *device, uint16_t model_id, const char *layer_name) return plt_memzone_free(mz); } + +int +cn10k_ml_malloc(const char *name, size_t size, uint32_t align, void **addr) +{ + const struct plt_memzone *mz; + + mz = plt_memzone_reserve_aligned(name, size, 0, align); + if (mz == NULL) { + plt_err("ml_malloc failed: Unable to allocate memory: name = %s", name); + return -ENOMEM; + } + + *addr = mz->addr; + + return 0; +} + +int
+cn10k_ml_free(const char *name) +{ + const struct plt_memzone *mz; + + mz = plt_memzone_lookup(name); + if (mz == NULL) { + plt_err("ml_free failed: Memzone not found: name = %s", name); + return -EINVAL; + } + + return plt_memzone_free(mz); +} diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h index 055651eaa2..d7df1d003a 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.h +++ b/drivers/ml/cnxk/cn10k_ml_ops.h @@ -332,4 +332,7 @@ int cn10k_ml_io_alloc(void *device, uint16_t model_id, const char *layer_name, uint64_t **input_qbuffer, uint64_t **output_qbuffer); int cn10k_ml_io_free(void *device, uint16_t model_id, const char *layer_name); +int cn10k_ml_malloc(const char *name, size_t size, uint32_t align, void **addr); +int cn10k_ml_free(const char *name); + #endif /* _CN10K_ML_OPS_H_ */ diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index a41ba4d343..95238d43d8 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -166,6 +166,8 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * callback->tvmrt_glow_layer_unload = cn10k_ml_layer_unload; callback->tvmrt_io_alloc = cn10k_ml_io_alloc; callback->tvmrt_io_free = cn10k_ml_io_free; + callback->tvmrt_malloc = cn10k_ml_malloc; + callback->tvmrt_free = cn10k_ml_free; } else { callback = NULL; }
From patchwork Wed Sep 20 07:25:23 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131706 X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 32/34] ml/cnxk: support quantize and dequantize callback Date: Wed, 20 Sep 2023 00:25:23 -0700 Message-ID: <20230920072528.14185-33-syalavarthi@marvell.com>
From: Prince Takkar Added support for quantize and dequantize callback functions for TVM models. Signed-off-by: Prince Takkar --- drivers/ml/cnxk/mvtvm_ml_model.h | 2 + drivers/ml/cnxk/mvtvm_ml_ops.c | 127 +++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_ops.h | 4 + 3 files changed, 133 insertions(+) diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index d71df36f5a..57a6ce0bb1 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -5,6 +5,8 @@ #ifndef _MVTVM_ML_MODEL_H_ #define _MVTVM_ML_MODEL_H_ +#include + #include #include diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index 95238d43d8..5292ac97fe 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -7,6 +7,8 @@ #include #include +#include + #include "cn10k_ml_ops.h" #include "mvtvm_ml_model.h" @@ -168,6 +170,8 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * callback->tvmrt_io_free = cn10k_ml_io_free; callback->tvmrt_malloc = cn10k_ml_malloc; callback->tvmrt_free = cn10k_ml_free; + callback->tvmrt_quantize = mvtvm_ml_io_quantize; + callback->tvmrt_dequantize = mvtvm_ml_io_dequantize; } else { callback = NULL; } @@ -298,3 +302,126 @@
mvtvm_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model) return 0; } + +int +mvtvm_ml_io_quantize(void *device, uint16_t model_id, const char *layer_name, + const DLTensor **deq_tensor, void *qbuffer) +{ + struct cnxk_ml_io_info *info = NULL; + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + uint16_t layer_id = 0; + uint8_t *lcl_dbuffer; + uint8_t *lcl_qbuffer; + uint32_t i; + int ret; + +#ifdef CNXK_ML_DEV_DEBUG + if ((device == NULL) || (deq_tensor == NULL) || (qbuffer == NULL)) + return -EINVAL; +#endif + + cnxk_mldev = (struct cnxk_ml_dev *)device; + + model = cnxk_mldev->mldev->data->models[model_id]; +#ifdef CNXK_ML_DEV_DEBUG + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } +#endif + + /* Get layer id */ + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if (strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + +#ifdef CNXK_ML_DEV_DEBUG + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } +#endif + + info = &model->layer[layer_id].info; + lcl_qbuffer = (uint8_t *)qbuffer; + + for (i = 0; i < info->nb_inputs; i++) { + lcl_dbuffer = PLT_PTR_ADD(deq_tensor[i]->data, deq_tensor[i]->byte_offset); + + ret = cnxk_ml_io_quantize_single(&info->input[i], lcl_dbuffer, lcl_qbuffer); + if (ret < 0) + return ret; + + lcl_qbuffer += info->input[i].sz_q; + } + + return 0; +} + +int +mvtvm_ml_io_dequantize(void *device, uint16_t model_id, const char *layer_name, void *qbuffer, + const DLTensor **deq_tensor) +{ + struct cnxk_ml_io_info *info = NULL; + struct cnxk_ml_dev *cnxk_mldev; + struct cnxk_ml_model *model; + uint16_t layer_id = 0; + uint8_t *lcl_dbuffer; + uint8_t *lcl_qbuffer; + uint32_t i; + int ret; + +#ifdef 
CNXK_ML_DEV_DEBUG + if ((device == NULL) || (deq_tensor == NULL) || (qbuffer == NULL)) + return -EINVAL; +#endif + + cnxk_mldev = (struct cnxk_ml_dev *)device; + + model = cnxk_mldev->mldev->data->models[model_id]; +#ifdef CNXK_ML_DEV_DEBUG + if (model == NULL) { + plt_err("Invalid model_id = %u", model_id); + return -EINVAL; + } +#endif + + for (layer_id = 0; layer_id < model->mvtvm.metadata.model.nb_layers; layer_id++) { + if (strcmp(model->layer[layer_id].name, layer_name) == 0) + break; + } + +#ifdef CNXK_ML_DEV_DEBUG + if (layer_id == model->mvtvm.metadata.model.nb_layers) { + plt_err("Invalid layer name: %s", layer_name); + return -EINVAL; + } + + if (model->layer[layer_id].type != ML_CNXK_LAYER_TYPE_MRVL) { + plt_err("Invalid layer name / type: %s", layer_name); + return -EINVAL; + } +#endif + + info = &model->layer[layer_id].info; + lcl_qbuffer = (uint8_t *)qbuffer; + + for (i = 0; i < info->nb_outputs; i++) { + lcl_dbuffer = PLT_PTR_ADD(deq_tensor[i]->data, deq_tensor[i]->byte_offset); + + ret = cnxk_ml_io_dequantize_single(&info->output[i], lcl_qbuffer, lcl_dbuffer); + if (ret < 0) + return ret; + + lcl_qbuffer += info->output[i].sz_q; + } + + return 0; +} diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h index 55459f9f7f..a1a868ef4b 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.h +++ b/drivers/ml/cnxk/mvtvm_ml_ops.h @@ -21,5 +21,9 @@ int mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_para int mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); int mvtvm_ml_model_start(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); int mvtvm_ml_model_stop(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model); +int mvtvm_ml_io_quantize(void *device, uint16_t model_id, const char *layer_name, + const DLTensor **deq_tensor, void *qbuffer); +int mvtvm_ml_io_dequantize(void *device, uint16_t model_id, const char *layer_name, void *qbuffer, + const DLTensor **deq_tensor); 
#endif /* _MVTVM_ML_OPS_H_ */
From patchwork Wed Sep 20 07:25:24 2023 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 131707 X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi Subject: [PATCH v2 33/34] ml/cnxk: enable fast-path ops for TVM models Date: Wed, 20 Sep 2023 00:25:24 -0700 Message-ID: <20230920072528.14185-34-syalavarthi@marvell.com>
From: Anup Prabhu Enable fast-path ops support for TVM models. Models would use TVMDP library function calls to execute inference operations for Hybrid and LLVM model sub-types. For TVM MRVL model subtypes that have a single MRVL layer, the inference requests are directly enqueued to hardware by the driver.
Signed-off-by: Anup Prabhu Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_ops.c | 4 - drivers/ml/cnxk/cnxk_ml_io.h | 6 ++ drivers/ml/cnxk/cnxk_ml_ops.c | 4 + drivers/ml/cnxk/cnxk_ml_ops.h | 9 +++ drivers/ml/cnxk/mvtvm_ml_model.c | 20 +++++ drivers/ml/cnxk/mvtvm_ml_model.h | 6 ++ drivers/ml/cnxk/mvtvm_ml_ops.c | 124 +++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_ops.h | 43 +++++++++++ 8 files changed, 212 insertions(+), 4 deletions(-) diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c index 140f7a343f..c1353fb0c8 100644 --- a/drivers/ml/cnxk/cn10k_ml_ops.c +++ b/drivers/ml/cnxk/cn10k_ml_ops.c @@ -287,10 +287,6 @@ cn10k_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_c else cn10k_mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_lf; - cnxk_mldev->mldev->enqueue_burst = cnxk_ml_enqueue_burst; - cnxk_mldev->mldev->dequeue_burst = cnxk_ml_dequeue_burst; - cnxk_mldev->mldev->op_error_get = cn10k_ml_op_error_get; - return 0; } diff --git a/drivers/ml/cnxk/cnxk_ml_io.h b/drivers/ml/cnxk/cnxk_ml_io.h index 5de166c252..6d5d25a7c9 100644 --- a/drivers/ml/cnxk/cnxk_ml_io.h +++ b/drivers/ml/cnxk/cnxk_ml_io.h @@ -47,6 +47,12 @@ struct cnxk_ml_io { /* Scale */ float scale; + + /* Dequantized offset */ + uint32_t offset_d; + + /* Quantized offset */ + uint32_t offset_q; }; /* Model / Layer IO structure */ diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index f281e6070f..274d152b81 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -770,6 +770,10 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co cnxk_mldev->max_nb_layers = cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_models; + cnxk_mldev->mldev->enqueue_burst = cnxk_ml_enqueue_burst; + cnxk_mldev->mldev->dequeue_burst = cnxk_ml_dequeue_burst; + cnxk_mldev->mldev->op_error_get = cn10k_ml_op_error_get; + /* Allocate and initialize index_map */ if 
(cnxk_mldev->index_map == NULL) { cnxk_mldev->index_map = diff --git a/drivers/ml/cnxk/cnxk_ml_ops.h b/drivers/ml/cnxk/cnxk_ml_ops.h index 2575f4c6e1..62e2b17e35 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.h +++ b/drivers/ml/cnxk/cnxk_ml_ops.h @@ -12,12 +12,21 @@ #include "cn10k_ml_ops.h" +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#include "mvtvm_ml_ops.h" +#endif + /* Request structure */ struct cnxk_ml_req { /* Device specific request */ union { /* CN10K */ struct cn10k_ml_req cn10k_req; + +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + /* MVTVM */ + struct mvtvm_ml_req mvtvm_req; +#endif }; /* Address of status field */ diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c index 7086c7a407..8af84b6972 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.c +++ b/drivers/ml/cnxk/mvtvm_ml_model.c @@ -136,6 +136,16 @@ mvtvm_ml_model_io_info_update(struct cnxk_ml_model *model) model->mvtvm.info.total_input_sz_d += model->mvtvm.info.input[i].sz_d; model->mvtvm.info.total_input_sz_q += model->mvtvm.info.input[i].sz_q; + model->mvtvm.info.input[i].offset_d = model->mvtvm.info.total_input_sz_d; + model->mvtvm.info.input[i].offset_q = model->mvtvm.info.total_input_sz_q; + + model->mvtvm.input_tensor[i].device = metadata->input[i].device; + model->mvtvm.input_tensor[i].ndim = metadata->input[i].ndim; + model->mvtvm.input_tensor[i].dtype = metadata->input[i].datatype; + model->mvtvm.input_tensor[i].shape = metadata->input[i].shape; + model->mvtvm.input_tensor[i].strides = NULL; + model->mvtvm.input_tensor[i].byte_offset = model->mvtvm.info.input[i].offset_q; + plt_ml_dbg("model_id = %u, input[%u] - sz_d = %u sz_q = %u", model->model_id, i, model->mvtvm.info.input[i].sz_d, model->mvtvm.info.input[i].sz_q); } @@ -169,6 +179,16 @@ mvtvm_ml_model_io_info_update(struct cnxk_ml_model *model) model->mvtvm.info.total_output_sz_d += model->mvtvm.info.output[i].sz_d; model->mvtvm.info.total_output_sz_q += model->mvtvm.info.output[i].sz_q; + model->mvtvm.info.output[i].offset_d = 
model->mvtvm.info.total_output_sz_d; + model->mvtvm.info.output[i].offset_q = model->mvtvm.info.total_output_sz_q; + + model->mvtvm.output_tensor[i].device = metadata->output[i].device; + model->mvtvm.output_tensor[i].ndim = metadata->output[i].ndim; + model->mvtvm.output_tensor[i].dtype = metadata->output[i].datatype; + model->mvtvm.output_tensor[i].shape = metadata->output[i].shape; + model->mvtvm.output_tensor[i].strides = NULL; + model->mvtvm.output_tensor[i].byte_offset = model->mvtvm.info.output[i].offset_q; + plt_ml_dbg("model_id = %u, output[%u] - sz_d = %u sz_q = %u", model->model_id, i, model->mvtvm.info.output[i].sz_d, model->mvtvm.info.output[i].sz_q); } diff --git a/drivers/ml/cnxk/mvtvm_ml_model.h b/drivers/ml/cnxk/mvtvm_ml_model.h index 57a6ce0bb1..08e101bbe7 100644 --- a/drivers/ml/cnxk/mvtvm_ml_model.h +++ b/drivers/ml/cnxk/mvtvm_ml_model.h @@ -71,6 +71,12 @@ struct mvtvm_ml_model_data { /* Stats for burst ops */ struct mvtvm_ml_model_xstats *burst_xstats; + + /* Input Tensor */ + DLTensor input_tensor[ML_CNXK_MODEL_MAX_INPUT_OUTPUT]; + + /* Output Tensor */ + DLTensor output_tensor[ML_CNXK_MODEL_MAX_INPUT_OUTPUT]; }; int mvtvm_ml_model_blob_parse(struct rte_ml_model_params *params, diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index 5292ac97fe..2baac8f72f 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -21,6 +21,12 @@ /* ML model macros */ #define MVTVM_ML_MODEL_MEMZONE_NAME "ml_mvtvm_model_mz" +__rte_hot static void +mvtvm_ml_set_poll_addr(struct cnxk_ml_req *req) +{ + req->status = &req->mvtvm_req.status; +} + int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf) { @@ -172,6 +178,7 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * callback->tvmrt_free = cn10k_ml_free; callback->tvmrt_quantize = mvtvm_ml_io_quantize; callback->tvmrt_dequantize = mvtvm_ml_io_dequantize; + callback->tvmrt_inference = 
cn10k_ml_inference_sync; } else { callback = NULL; } @@ -215,6 +222,19 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * model->mvtvm.burst_xstats[qp_id].dequeued_count = 0; } + /* Set model specific fast path functions */ + if (model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) { + model->enqueue_single = cn10k_ml_enqueue_single; + model->result_update = cn10k_ml_result_update; + model->set_error_code = cn10k_ml_set_error_code; + model->set_poll_addr = cn10k_ml_set_poll_addr; + } else { + model->enqueue_single = mvtvm_ml_enqueue_single; + model->result_update = mvtvm_ml_result_update; + model->set_error_code = mvtvm_ml_set_error_code; + model->set_poll_addr = mvtvm_ml_set_poll_addr; + } + return 0; error: @@ -425,3 +445,107 @@ mvtvm_ml_io_dequantize(void *device, uint16_t model_id, const char *layer_name, return 0; } + +static int +mvtvm_ml_model_run(struct cnxk_ml_model *model, struct rte_ml_op *op, struct cnxk_ml_req *req) +{ + uint8_t i; + + rte_memcpy(req->mvtvm_req.input_tensor, model->mvtvm.input_tensor, + model->mvtvm.metadata.model.num_input * sizeof(DLTensor)); + for (i = 0; i < model->mvtvm.metadata.model.num_input; i++) { + req->mvtvm_req.input_tensor[i].data = op->input[i]->addr; + req->mvtvm_req.input_tensor[i].byte_offset = 0; + } + + rte_memcpy(req->mvtvm_req.output_tensor, model->mvtvm.output_tensor, + model->mvtvm.metadata.model.num_output * sizeof(DLTensor)); + for (i = 0; i < model->mvtvm.metadata.model.num_output; i++) { + req->mvtvm_req.output_tensor[i].data = op->output[i]->addr; + req->mvtvm_req.output_tensor[i].byte_offset = 0; + } + + tvmdp_model_run(model->model_id, model->mvtvm.metadata.model.num_input, + req->mvtvm_req.input_tensor, model->mvtvm.metadata.model.num_output, + req->mvtvm_req.output_tensor, &req->mvtvm_req.result, + &req->mvtvm_req.status); + + plt_write64(ML_CNXK_POLL_JOB_FINISH, req->status); + + return 0; +} + +__rte_hot void +mvtvm_ml_set_error_code(struct cnxk_ml_req *req, uint64_t 
etype, uint64_t stype) +{ + RTE_SET_USED(stype); + + req->mvtvm_req.result.error_code = etype; +} + +__rte_hot bool +mvtvm_ml_enqueue_single(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op, uint16_t layer_id, + struct cnxk_ml_qp *qp, uint64_t head) +{ + struct cnxk_ml_model *model; + struct cnxk_ml_queue *queue; + struct cnxk_ml_req *req; + + RTE_SET_USED(layer_id); + + queue = &qp->queue; + req = &queue->reqs[head]; + model = cnxk_mldev->mldev->data->models[op->model_id]; + + model->set_poll_addr(req); + memset(&req->mvtvm_req.result, 0, sizeof(struct mvtvm_ml_result)); + req->mvtvm_req.result.error_code = 0x0; + req->mvtvm_req.result.user_ptr = op->user_ptr; + + cnxk_ml_set_poll_ptr(req); + mvtvm_ml_model_run(model, op, req); + req->timeout = plt_tsc_cycles() + queue->wait_cycles; + req->op = op; + + return true; +} + +__rte_hot void +mvtvm_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, void *request) +{ + struct mvtvm_ml_model_xstats *xstats; + struct mvtvm_ml_result *result; + struct cnxk_ml_model *model; + struct cnxk_ml_req *req; + uint64_t tvm_rt_latency; + struct cnxk_ml_qp *qp; + struct rte_ml_op *op; + + req = (struct cnxk_ml_req *)request; + result = &req->mvtvm_req.result; + op = req->op; + qp = cnxk_mldev->mldev->data->queue_pairs[qp_id]; + op->impl_opaque = result->error_code; + + if (likely(result->error_code == 0)) { + qp->stats.dequeued_count++; + op->status = RTE_ML_OP_STATUS_SUCCESS; + + model = cnxk_mldev->mldev->data->models[op->model_id]; + xstats = &model->mvtvm.burst_xstats[qp_id]; + + if (unlikely(xstats->dequeued_count == xstats->tvm_rt_reset_count)) { + xstats->tvm_rt_latency_min = UINT64_MAX; + xstats->tvm_rt_latency_max = 0; + } + tvm_rt_latency = result->stats.end_ns - result->stats.start_ns; + xstats->tvm_rt_latency = tvm_rt_latency; + xstats->tvm_rt_latency_tot += tvm_rt_latency; + xstats->tvm_rt_latency_min = RTE_MIN(xstats->tvm_rt_latency_min, tvm_rt_latency); + xstats->tvm_rt_latency_max = 
RTE_MAX(xstats->tvm_rt_latency_max, tvm_rt_latency); + xstats->dequeued_count++; + } else { + qp->stats.dequeue_err_count++; + op->status = RTE_ML_OP_STATUS_ERROR; + } +} diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h index a1a868ef4b..82292ceadd 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.h +++ b/drivers/ml/cnxk/mvtvm_ml_ops.h @@ -13,6 +13,44 @@ struct cnxk_ml_dev; struct cnxk_ml_model; +struct cnxk_ml_qp; +struct cnxk_ml_req; + +/* Inference stats */ +struct mvtvm_ml_stats { + /* Start ns */ + uint64_t start_ns; + + /* End ns */ + uint64_t end_ns; +}; + +/* Result structure */ +struct mvtvm_ml_result { + /* Job error code */ + uint64_t error_code; + + /* Inference stats */ + struct mvtvm_ml_stats stats; + + /* User context pointer */ + void *user_ptr; +}; + +/* MVTVM specific request */ +struct mvtvm_ml_req { + /* Input tensors */ + DLTensor input_tensor[ML_CNXK_MODEL_MAX_INPUT_OUTPUT]; + + /* Output tensors */ + DLTensor output_tensor[ML_CNXK_MODEL_MAX_INPUT_OUTPUT]; + + /* Status field for poll mode requests */ + volatile uint64_t status; + + /* Result */ + struct mvtvm_ml_result result; +}; int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf); int mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); @@ -26,4 +64,9 @@ int mvtvm_ml_io_quantize(void *device, uint16_t model_id, const char *layer_name int mvtvm_ml_io_dequantize(void *device, uint16_t model_id, const char *layer_name, void *qbuffer, const DLTensor **deq_tensor); +__rte_hot bool mvtvm_ml_enqueue_single(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_op *op, + uint16_t layer_id, struct cnxk_ml_qp *qp, uint64_t head); +__rte_hot void mvtvm_ml_result_update(struct cnxk_ml_dev *cnxk_mldev, int qp_id, void *request); +__rte_hot void mvtvm_ml_set_error_code(struct cnxk_ml_req *req, uint64_t etype, uint64_t stype); + #endif /* _MVTVM_ML_OPS_H_ */ From patchwork Wed Sep 20 07:25:25 2023
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 131708
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
CC: , , ,
Subject: [PATCH v2 34/34] ml/cnxk: enable creation of mvtvm virtual device
Date: Wed, 20 Sep 2023 00:25:25 -0700
Message-ID: <20230920072528.14185-35-syalavarthi@marvell.com>
In-Reply-To: <20230920072528.14185-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com> <20230920072528.14185-1-syalavarthi@marvell.com>

Enable support to create an MVTVM virtual device on systems without a PCI-based ML HW accelerator.
Signed-off-by: Srikanth Yalavarthi --- drivers/ml/cnxk/cn10k_ml_dev.c | 8 ++ drivers/ml/cnxk/cn10k_ml_dev.h | 3 + drivers/ml/cnxk/cnxk_ml_dev.c | 3 + drivers/ml/cnxk/cnxk_ml_dev.h | 21 ++++ drivers/ml/cnxk/cnxk_ml_ops.c | 86 ++++++++++---- drivers/ml/cnxk/meson.build | 2 + drivers/ml/cnxk/mvtvm_ml_dev.c | 198 +++++++++++++++++++++++++++++++++ drivers/ml/cnxk/mvtvm_ml_dev.h | 40 +++++++ drivers/ml/cnxk/mvtvm_ml_ops.c | 34 +++++- drivers/ml/cnxk/mvtvm_ml_ops.h | 2 + 10 files changed, 372 insertions(+), 25 deletions(-) create mode 100644 drivers/ml/cnxk/mvtvm_ml_dev.c create mode 100644 drivers/ml/cnxk/mvtvm_ml_dev.h diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c index 20c114b8bf..e6dc87e353 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.c +++ b/drivers/ml/cnxk/cn10k_ml_dev.c @@ -368,6 +368,12 @@ cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_de PLT_SET_USED(pci_drv); + if (cnxk_ml_dev_initialized == 1) { + plt_err("ML CNXK device already initialized!"); + plt_err("Cannot initialize CN10K PCI dev"); + rte_exit(-EINVAL, "Invalid EAL arguments "); + } + init_params = (struct rte_ml_dev_pmd_init_params){ .socket_id = rte_socket_id(), .private_data_size = sizeof(struct cnxk_ml_dev)}; @@ -414,6 +420,8 @@ cn10k_ml_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_de dev->dequeue_burst = NULL; dev->op_error_get = NULL; + cnxk_ml_dev_initialized = 1; + cnxk_mldev->type = CNXK_ML_DEV_TYPE_PCI; cnxk_mldev->state = ML_CNXK_DEV_STATE_PROBED; return 0; diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h index 2e7eb6c9ef..cee405f3f5 100644 --- a/drivers/ml/cnxk/cn10k_ml_dev.h +++ b/drivers/ml/cnxk/cn10k_ml_dev.h @@ -11,6 +11,9 @@ #include "cnxk_ml_io.h" +/* Device status */ +extern int cnxk_ml_dev_initialized; + /* Dummy Device ops */ extern struct rte_ml_dev_ops ml_dev_dummy_ops; diff --git a/drivers/ml/cnxk/cnxk_ml_dev.c b/drivers/ml/cnxk/cnxk_ml_dev.c index 63d1c9e417..dc4512223c 
100644 --- a/drivers/ml/cnxk/cnxk_ml_dev.c +++ b/drivers/ml/cnxk/cnxk_ml_dev.c @@ -7,6 +7,9 @@ #include "cnxk_ml_dev.h" +/* Device status */ +int cnxk_ml_dev_initialized; + /* Dummy operations for ML device */ struct rte_ml_dev_ops ml_dev_dummy_ops = {0}; diff --git a/drivers/ml/cnxk/cnxk_ml_dev.h b/drivers/ml/cnxk/cnxk_ml_dev.h index 382fca64be..491c4c4aea 100644 --- a/drivers/ml/cnxk/cnxk_ml_dev.h +++ b/drivers/ml/cnxk/cnxk_ml_dev.h @@ -9,6 +9,10 @@ #include "cn10k_ml_dev.h" +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM +#include "mvtvm_ml_dev.h" +#endif + #include "cnxk_ml_xstats.h" /* ML command timeout in seconds */ @@ -34,6 +38,15 @@ struct cnxk_ml_error_db { char str[RTE_ML_STR_MAX]; }; +/* Device type */ +enum cnxk_ml_dev_type { + /* PCI based Marvell's ML HW accelerator device */ + CNXK_ML_DEV_TYPE_PCI, + + /* Generic Virtual device */ + CNXK_ML_DEV_TYPE_VDEV, +}; + /* Device configuration state enum */ enum cnxk_ml_dev_state { /* Probed and not configured */ @@ -66,6 +79,9 @@ struct cnxk_ml_dev { /* RTE device */ struct rte_ml_dev *mldev; + /* Device type */ + enum cnxk_ml_dev_type type; + /* Configuration state */ enum cnxk_ml_dev_state state; @@ -87,6 +103,11 @@ struct cnxk_ml_dev { /* CN10K device structure */ struct cn10k_ml_dev cn10k_mldev; +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + /* MVTVM device structure */ + struct mvtvm_ml_dev mvtvm_mldev; +#endif + /* Maximum number of layers */ uint64_t max_nb_layers; diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c index 274d152b81..9a59e3b40b 100644 --- a/drivers/ml/cnxk/cnxk_ml_ops.c +++ b/drivers/ml/cnxk/cnxk_ml_ops.c @@ -125,7 +125,8 @@ cnxk_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_desc qp->stats.enqueue_err_count = 0; qp->stats.dequeue_err_count = 0; - cn10k_ml_qp_initialize(cnxk_mldev, qp); + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) + cn10k_ml_qp_initialize(cnxk_mldev, qp); return qp; @@ -616,7 +617,14 @@ cnxk_ml_dev_info_get(struct rte_ml_dev *dev, struct 
rte_ml_dev_info *dev_info) dev_info->driver_name = dev->device->driver->name; dev_info->max_models = ML_CNXK_MAX_MODELS; - return cn10k_ml_dev_info_get(cnxk_mldev, dev_info); + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) + return cn10k_ml_dev_info_get(cnxk_mldev, dev_info); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + return mvtvm_ml_dev_info_get(cnxk_mldev, dev_info); +#endif + + return 0; } static int @@ -654,9 +662,11 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co conf->nb_queue_pairs, conf->nb_models); /* Load firmware */ - ret = cn10k_ml_fw_load(cnxk_mldev); - if (ret != 0) - return ret; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) { + ret = cn10k_ml_fw_load(cnxk_mldev); + if (ret != 0) + return ret; + } } else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CONFIGURED) { plt_ml_dbg("Re-configuring ML device, nb_queue_pairs = %u, nb_models = %u", conf->nb_queue_pairs, conf->nb_models); @@ -754,10 +764,12 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co } dev->data->nb_models = conf->nb_models; - ret = cn10k_ml_dev_configure(cnxk_mldev, conf); - if (ret != 0) { - plt_err("Failed to configure CN10K ML Device"); - goto error; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) { + ret = cn10k_ml_dev_configure(cnxk_mldev, conf); + if (ret != 0) { + plt_err("Failed to configure CN10K ML Device"); + goto error; + } } #ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM @@ -767,12 +779,17 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co #endif /* Set device capabilities */ - cnxk_mldev->max_nb_layers = - cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_models; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) + cnxk_mldev->max_nb_layers = + cnxk_mldev->cn10k_mldev.fw.req->cn10k_req.jd.fw_load.cap.s.max_models; + else + cnxk_mldev->max_nb_layers = ML_CNXK_MAX_MODELS; cnxk_mldev->mldev->enqueue_burst = cnxk_ml_enqueue_burst; cnxk_mldev->mldev->dequeue_burst = 
cnxk_ml_dequeue_burst; - cnxk_mldev->mldev->op_error_get = cn10k_ml_op_error_get; + + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) + cnxk_mldev->mldev->op_error_get = cn10k_ml_op_error_get; /* Allocate and initialize index_map */ if (cnxk_mldev->index_map == NULL) { @@ -835,8 +852,10 @@ cnxk_ml_dev_close(struct rte_ml_dev *dev) plt_err("Failed to close MVTVM ML Device"); #endif - if (cn10k_ml_dev_close(cnxk_mldev) != 0) - plt_err("Failed to close CN10K ML Device"); + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) { + if (cn10k_ml_dev_close(cnxk_mldev) != 0) + plt_err("Failed to close CN10K ML Device"); + } if (cnxk_mldev->index_map) rte_free(cnxk_mldev->index_map); @@ -888,10 +907,12 @@ cnxk_ml_dev_start(struct rte_ml_dev *dev) cnxk_mldev = dev->data->dev_private; - ret = cn10k_ml_dev_start(cnxk_mldev); - if (ret != 0) { - plt_err("Failed to start CN10K ML Device"); - return ret; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) { + ret = cn10k_ml_dev_start(cnxk_mldev); + if (ret != 0) { + plt_err("Failed to start CN10K ML Device"); + return ret; + } } cnxk_mldev->state = ML_CNXK_DEV_STATE_STARTED; @@ -910,10 +931,12 @@ cnxk_ml_dev_stop(struct rte_ml_dev *dev) cnxk_mldev = dev->data->dev_private; - ret = cn10k_ml_dev_stop(cnxk_mldev); - if (ret != 0) { - plt_err("Failed to stop CN10K ML Device"); - return ret; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) { + ret = cn10k_ml_dev_stop(cnxk_mldev); + if (ret != 0) { + plt_err("Failed to stop CN10K ML Device"); + return ret; + } } cnxk_mldev->state = ML_CNXK_DEV_STATE_CONFIGURED; @@ -940,7 +963,14 @@ cnxk_ml_dev_dump(struct rte_ml_dev *dev, FILE *fp) cnxk_ml_model_dump(cnxk_mldev, model, fp); } - return cn10k_ml_dev_dump(cnxk_mldev, fp); + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_PCI) + return cn10k_ml_dev_dump(cnxk_mldev, fp); +#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM + else + return mvtvm_ml_dev_dump(cnxk_mldev, fp); +#endif + + return 0; } static int @@ -953,6 +983,9 @@ cnxk_ml_dev_selftest(struct rte_ml_dev *dev) 
cnxk_mldev = dev->data->dev_private; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_VDEV) + return -ENOTSUP; + return cn10k_ml_dev_selftest(cnxk_mldev); } @@ -1281,6 +1314,11 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u return -EINVAL; } + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_VDEV && type != ML_CNXK_MODEL_TYPE_TVM) { + plt_err("Unsupported model type"); + return -ENOTSUP; + } + /* Find model ID */ found = false; for (lcl_model_id = 0; lcl_model_id < dev->data->nb_models; lcl_model_id++) { @@ -1475,6 +1513,8 @@ cnxk_ml_model_params_update(struct rte_ml_dev *dev, uint16_t model_id, void *buf return -EINVAL; cnxk_mldev = dev->data->dev_private; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_VDEV) + return -ENOTSUP; model = dev->data->models[model_id]; if (model == NULL) { diff --git a/drivers/ml/cnxk/meson.build b/drivers/ml/cnxk/meson.build index 09a62b5c55..f5989c5caf 100644 --- a/drivers/ml/cnxk/meson.build +++ b/drivers/ml/cnxk/meson.build @@ -70,11 +70,13 @@ if enable_mvtvm dpdk_conf.set('RTE_MLDEV_CNXK_ENABLE_MVTVM', true) driver_sdk_headers += files( + 'mvtvm_ml_dev.h', 'mvtvm_ml_ops.h', 'mvtvm_ml_model.h', ) sources += files( + 'mvtvm_ml_dev.c', 'mvtvm_ml_ops.c', 'mvtvm_ml_model.c', ) diff --git a/drivers/ml/cnxk/mvtvm_ml_dev.c b/drivers/ml/cnxk/mvtvm_ml_dev.c new file mode 100644 index 0000000000..8ca0e959e3 --- /dev/null +++ b/drivers/ml/cnxk/mvtvm_ml_dev.c @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#include +#include +#include + +#include + +#include + +#include "mvtvm_ml_dev.h" + +#include "cnxk_ml_dev.h" + +#define MVTVM_ML_DEV_MAX_QPS "max_qps" +#define MVTVM_ML_DEV_CACHE_MODEL_DATA "cache_model_data" + +#define MVTVM_ML_DEV_MAX_QPS_DEFAULT 32 +#define CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT 1 + +static const char *const valid_args[] = {MVTVM_ML_DEV_MAX_QPS, MVTVM_ML_DEV_CACHE_MODEL_DATA, NULL}; + +static int +parse_integer_arg(const char *key __rte_unused, const char *value, void *extra_args) +{ + int *i = (int *)extra_args; + + *i = atoi(value); + if (*i < 0) { + plt_err("Argument has to be positive."); + return -EINVAL; + } + + return 0; +} + +static int +parse_uint_arg(const char *key __rte_unused, const char *value, void *extra_args) +{ + int i; + char *end; + errno = 0; + + i = strtol(value, &end, 10); + if (*end != 0 || errno != 0 || i < 0) + return -EINVAL; + + *((uint32_t *)extra_args) = i; + + return 0; +} + +static int +mvtvm_mldev_parse_devargs(const char *args, struct mvtvm_ml_dev *mvtvm_mldev) +{ + bool cache_model_data_set = false; + struct rte_kvargs *kvlist = NULL; + bool max_qps_set = false; + int ret = 0; + + if (args == NULL) + goto check_args; + + kvlist = rte_kvargs_parse(args, valid_args); + if (kvlist == NULL) { + plt_err("Error parsing %s devargs\n", "MLDEV_NAME_MVTVM_PMD"); + return -EINVAL; + } + + if (rte_kvargs_count(kvlist, MVTVM_ML_DEV_MAX_QPS) == 1) { + ret = rte_kvargs_process(kvlist, MVTVM_ML_DEV_MAX_QPS, &parse_uint_arg, + &mvtvm_mldev->max_nb_qpairs); + if (ret < 0) { + plt_err("Error processing arguments, key = %s\n", MVTVM_ML_DEV_MAX_QPS); + ret = -EINVAL; + goto exit; + } + max_qps_set = true; + } + + if (rte_kvargs_count(kvlist, MVTVM_ML_DEV_CACHE_MODEL_DATA) == 1) { + ret = rte_kvargs_process(kvlist, MVTVM_ML_DEV_CACHE_MODEL_DATA, &parse_integer_arg, + &mvtvm_mldev->cache_model_data); + if (ret < 0) { + plt_err("Error processing arguments, key = %s\n", + MVTVM_ML_DEV_CACHE_MODEL_DATA); + ret = -EINVAL; + goto 
exit; + } + cache_model_data_set = true; + } + +check_args: + if (!max_qps_set) + mvtvm_mldev->max_nb_qpairs = MVTVM_ML_DEV_MAX_QPS_DEFAULT; + plt_ml_dbg("ML: %s = %u", MVTVM_ML_DEV_MAX_QPS, mvtvm_mldev->max_nb_qpairs); + + if (!cache_model_data_set) { + mvtvm_mldev->cache_model_data = CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT; + } else { + if ((mvtvm_mldev->cache_model_data < 0) || (mvtvm_mldev->cache_model_data > 1)) { + plt_err("Invalid argument, %s = %d\n", MVTVM_ML_DEV_CACHE_MODEL_DATA, + mvtvm_mldev->cache_model_data); + ret = -EINVAL; + goto exit; + } + } + plt_ml_dbg("ML: %s = %d", MVTVM_ML_DEV_CACHE_MODEL_DATA, mvtvm_mldev->cache_model_data); + +exit: + if (kvlist) + rte_kvargs_free(kvlist); + + return ret; +} + +static int +mvtvm_ml_vdev_probe(struct rte_vdev_device *vdev) +{ + struct rte_ml_dev_pmd_init_params init_params; + struct mvtvm_ml_dev *mvtvm_mldev; + struct cnxk_ml_dev *cnxk_mldev; + struct rte_ml_dev *dev; + const char *input_args; + const char *name; + int ret = 0; + + if (cnxk_ml_dev_initialized == 1) { + plt_err("ML CNXK device already initialized!"); + plt_err("Cannot initialize MVTVM vdev"); + rte_exit(-EINVAL, "Invalid EAL arguments "); + } + + init_params = (struct rte_ml_dev_pmd_init_params){ + .socket_id = rte_socket_id(), .private_data_size = sizeof(struct cnxk_ml_dev)}; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + input_args = rte_vdev_device_args(vdev); + + dev = rte_ml_dev_pmd_create(name, &vdev->device, &init_params); + if (dev == NULL) { + ret = -EFAULT; + goto error_exit; + } + + cnxk_mldev = dev->data->dev_private; + cnxk_mldev->mldev = dev; + mvtvm_mldev = &cnxk_mldev->mvtvm_mldev; + mvtvm_mldev->vdev = vdev; + + ret = mvtvm_mldev_parse_devargs(input_args, mvtvm_mldev); + if (ret < 0) + goto error_exit; + + dev->dev_ops = &cnxk_ml_ops; + dev->enqueue_burst = NULL; + dev->dequeue_burst = NULL; + dev->op_error_get = NULL; + + cnxk_ml_dev_initialized = 1; + cnxk_mldev->type = CNXK_ML_DEV_TYPE_VDEV; 
+ + return 0; + +error_exit: + plt_err("Could not create device: ml_mvtvm"); + + return ret; +} + +static int +mvtvm_ml_vdev_remove(struct rte_vdev_device *vdev) +{ + struct rte_ml_dev *dev; + const char *name; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + dev = rte_ml_dev_pmd_get_named_dev(name); + if (dev == NULL) + return -ENODEV; + + return rte_ml_dev_pmd_destroy(dev); +} + +static struct rte_vdev_driver mvtvm_mldev_pmd = {.probe = mvtvm_ml_vdev_probe, + .remove = mvtvm_ml_vdev_remove}; + +RTE_PMD_REGISTER_VDEV(MLDEV_NAME_MVTVM_PMD, mvtvm_mldev_pmd); + +RTE_PMD_REGISTER_PARAM_STRING(MLDEV_NAME_MVTVM_PMD, + MVTVM_ML_DEV_MAX_QPS "=" MVTVM_ML_DEV_CACHE_MODEL_DATA "=<0|1>"); diff --git a/drivers/ml/cnxk/mvtvm_ml_dev.h b/drivers/ml/cnxk/mvtvm_ml_dev.h new file mode 100644 index 0000000000..6922c19337 --- /dev/null +++ b/drivers/ml/cnxk/mvtvm_ml_dev.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef _MVTVM_ML_DEV_H_ +#define _MVTVM_ML_DEV_H_ + +#include + +/* Device status */ +extern int cnxk_ml_dev_initialized; + +/* CNXK Device ops */ +extern struct rte_ml_dev_ops cnxk_ml_ops; + +/* Marvell MVTVM ML PMD device name */ +#define MLDEV_NAME_MVTVM_PMD ml_mvtvm + +/* Maximum number of descriptors per queue-pair */ +#define ML_MVTVM_MAX_DESC_PER_QP 1024 + +/* Maximum number of inputs / outputs per model */ +#define ML_MVTVM_MAX_INPUT_OUTPUT 32 + +/* Maximum number of segments for IO data */ +#define ML_MVTVM_MAX_SEGMENTS 1 + +/* Device private data */ +struct mvtvm_ml_dev { + /* Virtual device */ + struct rte_vdev_device *vdev; + + /* Maximum number of queue pairs */ + uint16_t max_nb_qpairs; + + /* Enable / disable model data caching */ + int cache_model_data; +}; + +#endif /* _MVTVM_ML_DEV_H_ */ diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.c b/drivers/ml/cnxk/mvtvm_ml_ops.c index 2baac8f72f..f4cd51f872 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.c +++ 
b/drivers/ml/cnxk/mvtvm_ml_ops.c @@ -9,8 +9,7 @@ #include -#include "cn10k_ml_ops.h" - +#include "mvtvm_ml_dev.h" #include "mvtvm_ml_model.h" #include "mvtvm_ml_ops.h" @@ -27,6 +26,22 @@ mvtvm_ml_set_poll_addr(struct cnxk_ml_req *req) req->status = &req->mvtvm_req.status; } +int +mvtvm_ml_dev_info_get(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_dev_info *dev_info) +{ + struct mvtvm_ml_dev *mvtvm_mldev; + + mvtvm_mldev = &cnxk_mldev->mvtvm_mldev; + + dev_info->max_queue_pairs = mvtvm_mldev->max_nb_qpairs; + dev_info->max_desc = ML_MVTVM_MAX_DESC_PER_QP; + dev_info->max_io = ML_MVTVM_MAX_INPUT_OUTPUT; + dev_info->max_segments = ML_MVTVM_MAX_SEGMENTS; + dev_info->align_size = RTE_CACHE_LINE_SIZE; + + return 0; +} + int mvtvm_ml_dev_configure(struct cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf) { @@ -57,6 +72,15 @@ mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev) return ret; } +int +mvtvm_ml_dev_dump(struct cnxk_ml_dev *cnxk_mldev, FILE *fp) +{ + RTE_SET_USED(cnxk_mldev); + RTE_SET_USED(fp); + + return 0; +} + int mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, struct cnxk_ml_model *model) @@ -167,6 +191,12 @@ mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params * else model->subtype = ML_CNXK_MODEL_SUBTYPE_TVM_HYBRID; + if (cnxk_mldev->type == CNXK_ML_DEV_TYPE_VDEV && + model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_LLVM) { + plt_err("Unsupported model sub-type"); + return -ENOTSUP; + } + /* Set callback function array */ if (model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_LLVM) { callback = &model->mvtvm.cb; diff --git a/drivers/ml/cnxk/mvtvm_ml_ops.h b/drivers/ml/cnxk/mvtvm_ml_ops.h index 82292ceadd..1247f80c2d 100644 --- a/drivers/ml/cnxk/mvtvm_ml_ops.h +++ b/drivers/ml/cnxk/mvtvm_ml_ops.h @@ -52,8 +52,10 @@ struct mvtvm_ml_req { struct mvtvm_ml_result result; }; +int mvtvm_ml_dev_info_get(struct cnxk_ml_dev *mldev, struct rte_ml_dev_info *dev_info); int mvtvm_ml_dev_configure(struct 
cnxk_ml_dev *cnxk_mldev, const struct rte_ml_dev_config *conf); int mvtvm_ml_dev_close(struct cnxk_ml_dev *cnxk_mldev); +int mvtvm_ml_dev_dump(struct cnxk_ml_dev *cnxk_mldev, FILE *fp); int mvtvm_ml_model_load(struct cnxk_ml_dev *cnxk_mldev, struct rte_ml_model_params *params, struct cnxk_ml_model *model); int mvtvm_ml_model_unload(struct cnxk_ml_dev *cnxk_mldev, struct cnxk_ml_model *model);
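The vdev devargs introduced by this patch (`max_qps`, `cache_model_data`) are validated by strtol-based kvargs callbacks such as `parse_uint_arg()` in mvtvm_ml_dev.c. A standalone sketch of that same validation, outside DPDK — the signature is simplified (no `rte_kvargs` key/`extra_args` plumbing), and this version is slightly stricter in that it also rejects empty strings:

```c
#include <errno.h>
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical standalone re-creation of the patch's parse_uint_arg()
 * devargs callback: accept only a fully-consumed, non-negative decimal
 * string that fits in a uint32_t. Returns 0 on success, -EINVAL on any
 * malformed or out-of-range input. */
static int
parse_uint_arg(const char *value, uint32_t *out)
{
	char *end;
	long v;

	errno = 0;
	v = strtol(value, &end, 10);

	/* Reject: no digits consumed, trailing garbage, overflow, negatives,
	 * or values that do not fit in 32 bits. */
	if (end == value || *end != '\0' || errno != 0 || v < 0 ||
	    (unsigned long)v > UINT32_MAX)
		return -EINVAL;

	*out = (uint32_t)v;
	return 0;
}
```

With the driver built with `enable_mvtvm`, the virtual device would then typically be created with EAL arguments along the lines of `--vdev="ml_mvtvm,max_qps=8,cache_model_data=1"` (device and argument names taken from `MLDEV_NAME_MVTVM_PMD` and the devargs macros in the patch).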