From patchwork Wed Oct 18 06:47:29 2023
X-Patchwork-Submitter: Srikanth Yalavarthi <syalavarthi@marvell.com>
X-Patchwork-Id: 132842
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi <syalavarthi@marvell.com>
Subject: [PATCH v5 01/34] ml/cnxk: drop support for register polling
Date: Tue, 17 Oct 2023 23:47:29 -0700
Message-ID: <20231018064806.24145-2-syalavarthi@marvell.com>
In-Reply-To: <20231018064806.24145-1-syalavarthi@marvell.com>
References: <20230830155927.3566-1-syalavarthi@marvell.com>
 <20231018064806.24145-1-syalavarthi@marvell.com>

Dropped support for the "poll_mem" device argument of the cnxk ML driver.
Polling through ML scratch registers is no longer supported; DDR addresses
are now always used to poll for request completion.
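[Editor's note] For readers unfamiliar with the fast path, the DDR-based polling that remains after this change works roughly as sketched below. This is an illustrative sketch only, not part of the patch: the struct, the poll-word values and the helper names are simplified stand-ins for the driver's cn10k_ml_req, ML_CN10K_POLL_JOB_START/FINISH and the plt_write64()/plt_read64() helpers.

```c
#include <stdint.h>

/* Stand-in values; the driver uses ML_CN10K_POLL_JOB_START/FINISH. */
#define POLL_JOB_START  0x1ULL
#define POLL_JOB_FINISH 0x2ULL

/* Simplified stand-in for struct cn10k_ml_req. */
struct req {
	volatile uint64_t status; /* completion word located in DDR */
	uint64_t compl_W1;        /* address polled in the fast path */
};

/* Enqueue side: point the completion word at the request's own status
 * field in DDR and mark the job as started before handing it to firmware.
 */
static inline void set_poll_addr(struct req *r)
{
	r->compl_W1 = (uint64_t)(uintptr_t)&r->status;
}

static inline void set_poll_ptr(struct req *r)
{
	*(volatile uint64_t *)(uintptr_t)r->compl_W1 = POLL_JOB_START;
}

/* Dequeue side: read the DDR word until firmware flips it to FINISH. */
static inline int job_done(struct req *r)
{
	return *(volatile uint64_t *)(uintptr_t)r->compl_W1 == POLL_JOB_FINISH;
}
```

Because the poll word now always lives in ordinary memory next to the request, the register-mode variants, the per-queue-pair scratch-register partitioning and the extra enqueue/dequeue barriers removed below are no longer needed.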
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
 doc/guides/mldevs/cnxk.rst     |  16 -----
 drivers/ml/cnxk/cn10k_ml_dev.c |  36 +----------
 drivers/ml/cnxk/cn10k_ml_dev.h |  13 +---
 drivers/ml/cnxk/cn10k_ml_ops.c | 111 ++++-----------------------------
 drivers/ml/cnxk/cn10k_ml_ops.h |   6 --
 5 files changed, 18 insertions(+), 164 deletions(-)

diff --git a/doc/guides/mldevs/cnxk.rst b/doc/guides/mldevs/cnxk.rst
index b79bc540d9..1834b1f905 100644
--- a/doc/guides/mldevs/cnxk.rst
+++ b/doc/guides/mldevs/cnxk.rst
@@ -180,22 +180,6 @@ Runtime Config Options
   in the fast path enqueue burst operation.
 
 
-**Polling memory location** (default ``ddr``)
-
-  ML cnxk driver provides the option to select the memory location to be used
-  for polling to check the inference request completion.
-  Driver supports using either the DDR address space (``ddr``)
-  or ML registers (``register``) as polling locations.
-  The parameter ``poll_mem`` is used to specify the poll location.
-
-  For example::
-
-     -a 0000:00:10.0,poll_mem="register"
-
-  With the above configuration, ML cnxk driver is configured to use ML registers
-  for polling in fastpath requests.
-
-
 Debugging Options
 -----------------
 
diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c
index 983138a7f2..e3c2badcef 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.c
+++ b/drivers/ml/cnxk/cn10k_ml_dev.c
@@ -23,7 +23,6 @@
 #define CN10K_ML_DEV_CACHE_MODEL_DATA "cache_model_data"
 #define CN10K_ML_OCM_ALLOC_MODE       "ocm_alloc_mode"
 #define CN10K_ML_DEV_HW_QUEUE_LOCK    "hw_queue_lock"
-#define CN10K_ML_FW_POLL_MEM          "poll_mem"
 #define CN10K_ML_OCM_PAGE_SIZE        "ocm_page_size"
 
 #define CN10K_ML_FW_PATH_DEFAULT "/lib/firmware/mlip-fw.bin"
@@ -32,7 +31,6 @@
 #define CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT 1
 #define CN10K_ML_OCM_ALLOC_MODE_DEFAULT       "lowest"
 #define CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT    1
-#define CN10K_ML_FW_POLL_MEM_DEFAULT          "ddr"
 #define CN10K_ML_OCM_PAGE_SIZE_DEFAULT        16384
 
 /* ML firmware macros */
@@ -54,7 +52,6 @@ static const char *const valid_args[] = {CN10K_ML_FW_PATH,
                                          CN10K_ML_DEV_CACHE_MODEL_DATA,
                                          CN10K_ML_OCM_ALLOC_MODE,
                                          CN10K_ML_DEV_HW_QUEUE_LOCK,
-                                         CN10K_ML_FW_POLL_MEM,
                                          CN10K_ML_OCM_PAGE_SIZE,
                                          NULL};
 
@@ -103,9 +100,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
         bool hw_queue_lock_set = false;
         bool ocm_page_size_set = false;
         char *ocm_alloc_mode = NULL;
-        bool poll_mem_set = false;
         bool fw_path_set = false;
-        char *poll_mem = NULL;
         char *fw_path = NULL;
         int ret = 0;
         bool found;
@@ -189,17 +184,6 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
                 hw_queue_lock_set = true;
         }
 
-        if (rte_kvargs_count(kvlist, CN10K_ML_FW_POLL_MEM) == 1) {
-                ret = rte_kvargs_process(kvlist, CN10K_ML_FW_POLL_MEM, &parse_string_arg,
-                                         &poll_mem);
-                if (ret < 0) {
-                        plt_err("Error processing arguments, key = %s\n", CN10K_ML_FW_POLL_MEM);
-                        ret = -EINVAL;
-                        goto exit;
-                }
-                poll_mem_set = true;
-        }
-
         if (rte_kvargs_count(kvlist, CN10K_ML_OCM_PAGE_SIZE) == 1) {
                 ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_PAGE_SIZE, &parse_integer_arg,
                                          &mldev->ocm_page_size);
@@ -280,18 +264,6 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *mlde
         }
         plt_info("ML: %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK, mldev->hw_queue_lock);
 
-        if (!poll_mem_set) {
-                mldev->fw.poll_mem = CN10K_ML_FW_POLL_MEM_DEFAULT;
-        } else {
-                if (!((strcmp(poll_mem, "ddr") == 0) || (strcmp(poll_mem, "register") == 0))) {
-                        plt_err("Invalid argument, %s = %s\n", CN10K_ML_FW_POLL_MEM, poll_mem);
-                        ret = -EINVAL;
-                        goto exit;
-                }
-                mldev->fw.poll_mem = poll_mem;
-        }
-        plt_info("ML: %s = %s", CN10K_ML_FW_POLL_MEM, mldev->fw.poll_mem);
-
         if (!ocm_page_size_set) {
                 mldev->ocm_page_size = CN10K_ML_OCM_PAGE_SIZE_DEFAULT;
         } else {
@@ -450,10 +422,7 @@ cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw)
         if (fw->report_dpe_warnings)
                 flags = flags | FW_REPORT_DPE_WARNING_BITMASK;
 
-        if (strcmp(fw->poll_mem, "ddr") == 0)
-                flags = flags | FW_USE_DDR_POLL_ADDR_FP;
-        else if (strcmp(fw->poll_mem, "register") == 0)
-                flags = flags & ~FW_USE_DDR_POLL_ADDR_FP;
+        flags = flags | FW_USE_DDR_POLL_ADDR_FP;
 
         return flags;
 }
@@ -863,5 +832,4 @@ RTE_PMD_REGISTER_PARAM_STRING(MLDEV_NAME_CN10K_PMD, CN10K_ML_FW_PATH
                               "=<0|1>" CN10K_ML_DEV_CACHE_MODEL_DATA
                               "=<0|1>" CN10K_ML_OCM_ALLOC_MODE
                               "=<lowest|largest>" CN10K_ML_DEV_HW_QUEUE_LOCK
-                              "=<0|1>" CN10K_ML_FW_POLL_MEM "=<ddr|register>" CN10K_ML_OCM_PAGE_SIZE
-                              "=<1024|2048|4096|8192|16384>");
+                              "=<0|1>" CN10K_ML_OCM_PAGE_SIZE "=<1024|2048|4096|8192|16384>");
diff --git a/drivers/ml/cnxk/cn10k_ml_dev.h b/drivers/ml/cnxk/cn10k_ml_dev.h
index c73bf7d001..4aaeecff03 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.h
+++ b/drivers/ml/cnxk/cn10k_ml_dev.h
@@ -390,9 +390,6 @@ struct cn10k_ml_fw {
         /* Report DPE warnings */
         int report_dpe_warnings;
 
-        /* Memory to be used for polling in fast-path requests */
-        const char *poll_mem;
-
         /* Data buffer */
         uint8_t *data;
 
@@ -525,13 +522,9 @@ struct cn10k_ml_dev {
         bool (*ml_jcmdq_enqueue)(struct roc_ml *roc_ml, struct ml_job_cmd_s *job_cmd);
 
         /* Poll handling function pointers */
-        void (*set_poll_addr)(struct cn10k_ml_qp *qp, struct cn10k_ml_req *req, uint64_t idx);
-        void (*set_poll_ptr)(struct roc_ml *roc_ml, struct cn10k_ml_req *req);
-        uint64_t (*get_poll_ptr)(struct roc_ml *roc_ml, struct cn10k_ml_req *req);
-
-        /* Memory barrier function pointers to handle synchronization */
-        void (*set_enq_barrier)(void);
-        void (*set_deq_barrier)(void);
+        void (*set_poll_addr)(struct cn10k_ml_req *req);
+        void (*set_poll_ptr)(struct cn10k_ml_req *req);
+        uint64_t (*get_poll_ptr)(struct cn10k_ml_req *req);
 };
 
 uint64_t cn10k_ml_fw_flags_get(struct cn10k_ml_fw *fw);
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.c b/drivers/ml/cnxk/cn10k_ml_ops.c
index 4abf4ae0d3..11531afd8c 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.c
+++ b/drivers/ml/cnxk/cn10k_ml_ops.c
@@ -23,11 +23,6 @@
 #define ML_FLAGS_POLL_COMPL BIT(0)
 #define ML_FLAGS_SSO_COMPL  BIT(1)
 
-/* Scratch register range for poll mode requests */
-#define ML_POLL_REGISTER_SYNC  1023
-#define ML_POLL_REGISTER_START 1024
-#define ML_POLL_REGISTER_END   2047
-
 /* Error message length */
 #define ERRMSG_LEN 32
 
@@ -82,79 +77,23 @@ print_line(FILE *fp, int len)
 }
 
 static inline void
-cn10k_ml_set_poll_addr_ddr(struct cn10k_ml_qp *qp, struct cn10k_ml_req *req, uint64_t idx)
+cn10k_ml_set_poll_addr(struct cn10k_ml_req *req)
 {
-        PLT_SET_USED(qp);
-        PLT_SET_USED(idx);
-
         req->compl_W1 = PLT_U64_CAST(&req->status);
 }
 
 static inline void
-cn10k_ml_set_poll_addr_reg(struct cn10k_ml_qp *qp, struct cn10k_ml_req *req, uint64_t idx)
-{
-        req->compl_W1 = ML_SCRATCH(qp->block_start + idx % qp->block_size);
-}
-
-static inline void
-cn10k_ml_set_poll_ptr_ddr(struct roc_ml *roc_ml, struct cn10k_ml_req *req)
+cn10k_ml_set_poll_ptr(struct cn10k_ml_req *req)
 {
-        PLT_SET_USED(roc_ml);
-
         plt_write64(ML_CN10K_POLL_JOB_START, req->compl_W1);
 }
 
-static inline void
-cn10k_ml_set_poll_ptr_reg(struct roc_ml *roc_ml, struct cn10k_ml_req *req)
-{
-        roc_ml_reg_write64(roc_ml, ML_CN10K_POLL_JOB_START, req->compl_W1);
-}
-
 static inline uint64_t
-cn10k_ml_get_poll_ptr_ddr(struct roc_ml *roc_ml, struct cn10k_ml_req *req)
+cn10k_ml_get_poll_ptr(struct cn10k_ml_req *req)
 {
-        PLT_SET_USED(roc_ml);
-
         return plt_read64(req->compl_W1);
 }
 
-static inline uint64_t
-cn10k_ml_get_poll_ptr_reg(struct roc_ml *roc_ml, struct cn10k_ml_req *req)
-{
-        return roc_ml_reg_read64(roc_ml, req->compl_W1);
-}
-
-static inline void
-cn10k_ml_set_sync_addr(struct cn10k_ml_dev *mldev, struct cn10k_ml_req *req)
-{
-        if (strcmp(mldev->fw.poll_mem, "ddr") == 0)
-                req->compl_W1 = PLT_U64_CAST(&req->status);
-        else if (strcmp(mldev->fw.poll_mem, "register") == 0)
-                req->compl_W1 = ML_SCRATCH(ML_POLL_REGISTER_SYNC);
-}
-
-static inline void
-cn10k_ml_enq_barrier_ddr(void)
-{
-}
-
-static inline void
-cn10k_ml_deq_barrier_ddr(void)
-{
-}
-
-static inline void
-cn10k_ml_enq_barrier_register(void)
-{
-        dmb_st;
-}
-
-static inline void
-cn10k_ml_deq_barrier_register(void)
-{
-        dsb_st;
-}
-
 static void
 qp_memzone_name_get(char *name, int size, int dev_id, int qp_id)
 {
@@ -242,9 +181,6 @@ cn10k_ml_qp_create(const struct rte_ml_dev *dev, uint16_t qp_id, uint32_t nb_des
         qp->stats.dequeued_count = 0;
         qp->stats.enqueue_err_count = 0;
         qp->stats.dequeue_err_count = 0;
-        qp->block_size =
-                (ML_POLL_REGISTER_END - ML_POLL_REGISTER_START + 1) / dev->data->nb_queue_pairs;
-        qp->block_start = ML_POLL_REGISTER_START + qp_id * qp->block_size;
 
         /* Initialize job command */
         for (i = 0; i < qp->nb_desc; i++) {
@@ -933,11 +869,7 @@ cn10k_ml_dev_info_get(struct rte_ml_dev *dev, struct rte_ml_dev_info *dev_info)
         else
                 dev_info->max_queue_pairs = ML_CN10K_MAX_QP_PER_DEVICE_LF;
 
-        if (strcmp(mldev->fw.poll_mem, "register") == 0)
-                dev_info->max_desc = ML_CN10K_MAX_DESC_PER_QP / dev_info->max_queue_pairs;
-        else if (strcmp(mldev->fw.poll_mem, "ddr") == 0)
-                dev_info->max_desc = ML_CN10K_MAX_DESC_PER_QP;
-
+        dev_info->max_desc = ML_CN10K_MAX_DESC_PER_QP;
         dev_info->max_io = ML_CN10K_MAX_INPUT_OUTPUT;
         dev_info->max_segments = ML_CN10K_MAX_SEGMENTS;
         dev_info->align_size = ML_CN10K_ALIGN_SIZE;
@@ -1118,24 +1050,9 @@ cn10k_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *c
                 mldev->ml_jcmdq_enqueue = roc_ml_jcmdq_enqueue_lf;
 
         /* Set polling function pointers */
-        if (strcmp(mldev->fw.poll_mem, "ddr") == 0) {
-                mldev->set_poll_addr = cn10k_ml_set_poll_addr_ddr;
-                mldev->set_poll_ptr = cn10k_ml_set_poll_ptr_ddr;
-                mldev->get_poll_ptr = cn10k_ml_get_poll_ptr_ddr;
-        } else if (strcmp(mldev->fw.poll_mem, "register") == 0) {
-                mldev->set_poll_addr = cn10k_ml_set_poll_addr_reg;
-                mldev->set_poll_ptr = cn10k_ml_set_poll_ptr_reg;
-                mldev->get_poll_ptr = cn10k_ml_get_poll_ptr_reg;
-        }
-
-        /* Set barrier function pointers */
-        if (strcmp(mldev->fw.poll_mem, "ddr") == 0) {
-                mldev->set_enq_barrier = cn10k_ml_enq_barrier_ddr;
-                mldev->set_deq_barrier = cn10k_ml_deq_barrier_ddr;
-        } else if (strcmp(mldev->fw.poll_mem, "register") == 0) {
-                mldev->set_enq_barrier = cn10k_ml_enq_barrier_register;
-                mldev->set_deq_barrier = cn10k_ml_deq_barrier_register;
-        }
+        mldev->set_poll_addr = cn10k_ml_set_poll_addr;
+        mldev->set_poll_ptr = cn10k_ml_set_poll_ptr;
+        mldev->get_poll_ptr = cn10k_ml_get_poll_ptr;
 
         dev->enqueue_burst = cn10k_ml_enqueue_burst;
         dev->dequeue_burst = cn10k_ml_dequeue_burst;
@@ -2390,15 +2307,14 @@ cn10k_ml_enqueue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op
         op = ops[count];
         req = &queue->reqs[head];
 
-        mldev->set_poll_addr(qp, req, head);
+        mldev->set_poll_addr(req);
         cn10k_ml_prep_fp_job_descriptor(dev, req, op);
 
         memset(&req->result, 0, sizeof(struct cn10k_ml_result));
         req->result.error_code.s.etype = ML_ETYPE_UNKNOWN;
         req->result.user_ptr = op->user_ptr;
-        mldev->set_enq_barrier();
 
-        mldev->set_poll_ptr(&mldev->roc, req);
+        mldev->set_poll_ptr(req);
         enqueued = mldev->ml_jcmdq_enqueue(&mldev->roc, &req->jcmd);
         if (unlikely(!enqueued))
                 goto jcmdq_full;
@@ -2445,7 +2361,7 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op
 
 dequeue_req:
         req = &queue->reqs[tail];
-        status = mldev->get_poll_ptr(&mldev->roc, req);
+        status = mldev->get_poll_ptr(req);
         if (unlikely(status != ML_CN10K_POLL_JOB_FINISH)) {
                 if (plt_tsc_cycles() < req->timeout)
                         goto empty_or_active;
@@ -2453,7 +2369,6 @@ cn10k_ml_dequeue_burst(struct rte_ml_dev *dev, uint16_t qp_id, struct rte_ml_op
                         req->result.error_code.s.etype = ML_ETYPE_DRIVER;
         }
 
-        mldev->set_deq_barrier();
         cn10k_ml_result_update(dev, qp_id, &req->result, req->op);
         ops[count] = req->op;
 
@@ -2515,14 +2430,14 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op)
         model = dev->data->models[op->model_id];
         req = model->req;
 
-        cn10k_ml_set_sync_addr(mldev, req);
+        cn10k_ml_set_poll_addr(req);
         cn10k_ml_prep_fp_job_descriptor(dev, req, op);
 
         memset(&req->result, 0, sizeof(struct cn10k_ml_result));
         req->result.error_code.s.etype = ML_ETYPE_UNKNOWN;
         req->result.user_ptr = op->user_ptr;
 
-        mldev->set_poll_ptr(&mldev->roc, req);
+        mldev->set_poll_ptr(req);
         req->jcmd.w1.s.jobptr = PLT_U64_CAST(&req->jd);
 
         timeout = true;
@@ -2542,7 +2457,7 @@ cn10k_ml_inference_sync(struct rte_ml_dev *dev, struct rte_ml_op *op)
 
         timeout = true;
         do {
-                if (mldev->get_poll_ptr(&mldev->roc, req) == ML_CN10K_POLL_JOB_FINISH) {
+                if (mldev->get_poll_ptr(req) == ML_CN10K_POLL_JOB_FINISH) {
                         timeout = false;
                         break;
                 }
diff --git a/drivers/ml/cnxk/cn10k_ml_ops.h b/drivers/ml/cnxk/cn10k_ml_ops.h
index d64a9f27e6..005b093e45 100644
--- a/drivers/ml/cnxk/cn10k_ml_ops.h
+++ b/drivers/ml/cnxk/cn10k_ml_ops.h
@@ -67,12 +67,6 @@ struct cn10k_ml_qp {
 
         /* Statistics per queue-pair */
         struct rte_ml_dev_stats stats;
-
-        /* Register block start for polling */
-        uint32_t block_start;
-
-        /* Register block end for polling */
-        uint32_t block_size;
 };
 
 /* Device ops */