From patchwork Tue Nov 29 06:50:29 2022
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 120220
From: Srikanth Yalavarthi
To: Thomas Monjalon, Srikanth Yalavarthi
Subject: [PATCH v1 01/12] app/mldev: implement test framework for mldev
Date: Mon, 28 Nov 2022 22:50:29 -0800
Message-ID: <20221129065040.5875-2-syalavarthi@marvell.com>
In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Implement a test framework for the mldev test application. New test
cases can be added using the framework, and options specific to each
test case are supported. Users launch a test by passing its name in the
launch arguments. The command-line parsing code is imported from
test-eventdev, with support for parsing additional data types.
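The framework described above keeps test cases in a global registry. As a simplified, self-contained mirror of the STAILQ-based registry that ml_test.c and ml_test.h implement later in this patch (exact-match lookup is used here where the patch uses strncmp, and the `demo` test in the usage below is hypothetical):

```c
#include <stdio.h>
#include <string.h>
#include <sys/queue.h>

/* Simplified mirror of the registry in ml_test.c/ml_test.h. */
struct ml_test {
	const char *name;
};

struct ml_test_entry {
	struct ml_test test;
	STAILQ_ENTRY(ml_test_entry) next;
};

static STAILQ_HEAD(, ml_test_entry) head = STAILQ_HEAD_INITIALIZER(head);

/* Called once per test at startup (the patch does this via RTE_INIT). */
static void
ml_test_register(struct ml_test_entry *entry)
{
	STAILQ_INSERT_TAIL(&head, entry, next);
}

/* Resolve a test by the name given in --test. */
static struct ml_test *
ml_test_get(const char *name)
{
	struct ml_test_entry *entry;

	if (name == NULL)
		return NULL;

	STAILQ_FOREACH(entry, &head, next)
		if (strcmp(entry->test.name, name) == 0)
			return &entry->test;

	return NULL;
}
```

A test file registers itself once, and ml_main.c later resolves it from the `--test` option; unknown names fall through to NULL, which main() reports as an error.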
Common arguments supported include:
	test      : name of the test application to run
	dev_id    : device ID of the ML device
	socket_id : socket ID of application resources
	debug     : enable debugging
	help      : print help

Sample launch command:
	./dpdk-test-mldev -- --test <name> --dev_id <dev_id> \
		--socket_id <socket_id>

Signed-off-by: Srikanth Yalavarthi
Change-Id: I67a1d8187f7b3af55a444deadb60079f8596191c
---
 MAINTAINERS                 |   1 +
 app/meson.build             |   1 +
 app/test-mldev/meson.build  |  17 ++
 app/test-mldev/ml_common.h  |  29 +++
 app/test-mldev/ml_main.c    | 118 +++++++++++
 app/test-mldev/ml_options.c | 160 +++++++++++++++
 app/test-mldev/ml_options.h |  31 +++
 app/test-mldev/ml_test.c    |  45 +++++
 app/test-mldev/ml_test.h    |  75 +++++++
 app/test-mldev/parser.c     | 380 ++++++++++++++++++++++++++++++++++++
 app/test-mldev/parser.h     |  55 ++++++
 11 files changed, 912 insertions(+)
 create mode 100644 app/test-mldev/meson.build
 create mode 100644 app/test-mldev/ml_common.h
 create mode 100644 app/test-mldev/ml_main.c
 create mode 100644 app/test-mldev/ml_options.c
 create mode 100644 app/test-mldev/ml_options.h
 create mode 100644 app/test-mldev/ml_test.c
 create mode 100644 app/test-mldev/ml_test.h
 create mode 100644 app/test-mldev/parser.c
 create mode 100644 app/test-mldev/parser.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 0c3e6d28e9..1edea42fad 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -538,6 +538,7 @@ F: doc/guides/prog_guide/rawdev.rst
 ML device API - EXPERIMENTAL
 M: Srikanth Yalavarthi
 F: lib/mldev/
+F: app/test-mldev/
 F: doc/guides/prog_guide/mldev.rst
diff --git a/app/meson.build b/app/meson.build
index e32ea4bd5c..74d2420f67 100644
--- a/app/meson.build
+++ b/app/meson.build
@@ -23,6 +23,7 @@ apps = [
         'test-fib',
         'test-flow-perf',
         'test-gpudev',
+        'test-mldev',
         'test-pipeline',
         'test-pmd',
         'test-regex',
diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build
new file mode 100644
index 0000000000..8ca2e1a1c1
--- /dev/null
+++ b/app/test-mldev/meson.build
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (c) 2022 Marvell. + +if is_windows + build = false + reason = 'not supported on Windows' + subdir_done() +endif + +sources = files( + 'ml_main.c', + 'ml_options.c', + 'ml_test.c', + 'parser.c', +) + +deps += ['mldev'] diff --git a/app/test-mldev/ml_common.h b/app/test-mldev/ml_common.h new file mode 100644 index 0000000000..065180b619 --- /dev/null +++ b/app/test-mldev/ml_common.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#ifndef _ML_COMMON_ +#define _ML_COMMON_ + +#include + +#define CLNRM "\x1b[0m" +#define CLRED "\x1b[31m" +#define CLGRN "\x1b[32m" +#define CLYEL "\x1b[33m" + +#define ML_STR_FMT 20 + +#define ml_err(fmt, args...) fprintf(stderr, CLRED "error: %s() " fmt CLNRM "\n", __func__, ##args) + +#define ml_info(fmt, args...) fprintf(stdout, CLYEL "" fmt CLNRM "\n", ##args) + +#define ml_dump(str, fmt, val...) printf("\t%-*s : " fmt "\n", ML_STR_FMT, str, ##val) + +#define ml_dump_begin(str) printf("\t%-*s :\n\t{\n", ML_STR_FMT, str) + +#define ml_dump_list(str, id, val) printf("\t%*s[%2u] : %s\n", ML_STR_FMT - 4, str, id, val) + +#define ml_dump_end printf("\b\t}\n\n") + +#endif /* _ML_COMMON_*/ diff --git a/app/test-mldev/ml_main.c b/app/test-mldev/ml_main.c new file mode 100644 index 0000000000..d6652cd7b7 --- /dev/null +++ b/app/test-mldev/ml_main.c @@ -0,0 +1,118 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */ + +#include +#include +#include + +#include +#include +#include + +#include "ml_common.h" +#include "ml_options.h" +#include "ml_test.h" + +struct ml_options opt; +struct ml_test *test; + +int +main(int argc, char **argv) +{ + uint16_t mldevs; + int ret; + + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_panic("invalid EAL arguments\n"); + argc -= ret; + argv += ret; + + mldevs = rte_ml_dev_count(); + if (!mldevs) + rte_panic("no mldev devices found\n"); + + /* set default values for options */ + ml_options_default(&opt); + + /* parse the command line arguments */ + ret = ml_options_parse(&opt, argc, argv); + if (ret) { + ml_err("parsing one or more user options failed"); + goto error; + } + + /* get test struct from name */ + test = ml_test_get(opt.test_name); + if (test == NULL) { + ml_err("failed to find requested test: %s", opt.test_name); + goto error; + } + + if (test->ops.test_result == NULL) { + ml_err("%s: ops.test_result not found", opt.test_name); + goto error; + } + + /* check test options */ + if (test->ops.opt_check) { + if (test->ops.opt_check(&opt)) { + ml_err("invalid command line argument"); + goto error; + } + } + + /* check the device capability */ + if (test->ops.cap_check) { + if (test->ops.cap_check(&opt) == false) { + ml_info("unsupported test: %s", opt.test_name); + ret = ML_TEST_UNSUPPORTED; + goto no_cap; + } + } + + /* dump options */ + if (opt.debug) { + if (test->ops.opt_dump) + test->ops.opt_dump(&opt); + } + + /* test specific setup */ + if (test->ops.test_setup) { + if (test->ops.test_setup(test, &opt)) { + ml_err("failed to setup test: %s", opt.test_name); + goto error; + } + } + + /* test driver */ + if (test->ops.test_driver) + test->ops.test_driver(test, &opt); + + /* get result */ + if (test->ops.test_result) + ret = test->ops.test_result(test, &opt); + + if (test->ops.test_destroy) + test->ops.test_destroy(test, &opt); + +no_cap: + if (ret == ML_TEST_SUCCESS) { + printf("Result: " CLGRN "%s" CLNRM "\n", "Success"); 
+ } else if (ret == ML_TEST_FAILED) { + printf("Result: " CLRED "%s" CLNRM "\n", "Failed"); + return EXIT_FAILURE; + } else if (ret == ML_TEST_UNSUPPORTED) { + printf("Result: " CLYEL "%s" CLNRM "\n", "Unsupported"); + } + + rte_eal_cleanup(); + + return 0; + +error: + rte_eal_cleanup(); + + return EXIT_FAILURE; +} diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c new file mode 100644 index 0000000000..8fd7760e36 --- /dev/null +++ b/app/test-mldev/ml_options.c @@ -0,0 +1,160 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include "ml_common.h" +#include "ml_options.h" +#include "ml_test.h" +#include "parser.h" + +typedef int (*option_parser_t)(struct ml_options *opt, const char *arg); + +void +ml_options_default(struct ml_options *opt) +{ + memset(opt, 0, sizeof(*opt)); + strlcpy(opt->test_name, "ml_test", ML_TEST_NAME_MAX_LEN); + opt->dev_id = 0; + opt->socket_id = SOCKET_ID_ANY; + opt->debug = false; +} + +struct long_opt_parser { + const char *lgopt_name; + option_parser_t parser_fn; +}; + +static int +ml_parse_test_name(struct ml_options *opt, const char *arg) +{ + strlcpy(opt->test_name, arg, ML_TEST_NAME_MAX_LEN); + return 0; +} + +static int +ml_parse_dev_id(struct ml_options *opt, const char *arg) +{ + int ret; + + ret = parser_read_int16(&opt->dev_id, arg); + + if (ret < 0) + return -EINVAL; + + return ret; +} + +static int +ml_parse_socket_id(struct ml_options *opt, const char *arg) +{ + opt->socket_id = atoi(arg); + + return 0; +} + +static void +ml_dump_test_options(const char *testname) +{ + RTE_SET_USED(testname); +} + +static void +print_usage(char *program) +{ + printf("\nusage : %s [EAL options] -- [application options]\n", program); + printf("application options:\n"); + printf("\t--test : name of the test application to run\n" + "\t--dev_id : device id of the ML device\n" + "\t--socket_id : 
socket_id of application resources\n" + "\t--debug : enable debug mode\n" + "\t--help : print help\n"); + printf("\n"); + printf("available tests and test specific application options:\n"); + ml_test_dump_names(ml_dump_test_options); +} + +static struct option lgopts[] = {{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, + {ML_SOCKET_ID, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, + {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; + +static int +ml_opts_parse_long(int opt_idx, struct ml_options *opt) +{ + unsigned int i; + + struct long_opt_parser parsermap[] = { + {ML_TEST, ml_parse_test_name}, + {ML_DEVICE_ID, ml_parse_dev_id}, + {ML_SOCKET_ID, ml_parse_socket_id}, + }; + + for (i = 0; i < RTE_DIM(parsermap); i++) { + if (strncmp(lgopts[opt_idx].name, parsermap[i].lgopt_name, + strlen(lgopts[opt_idx].name)) == 0) + return parsermap[i].parser_fn(opt, optarg); + } + + return -EINVAL; +} + +int +ml_options_parse(struct ml_options *opt, int argc, char **argv) +{ + int opt_idx; + int retval; + int opts; + + while ((opts = getopt_long(argc, argv, "", lgopts, &opt_idx)) != EOF) { + switch (opts) { + case 0: /* parse long options */ + if (!strcmp(lgopts[opt_idx].name, "debug")) { + opt->debug = true; + break; + } + + if (!strcmp(lgopts[opt_idx].name, "help")) { + print_usage(argv[0]); + exit(EXIT_SUCCESS); + } + + retval = ml_opts_parse_long(opt_idx, opt); + if (retval != 0) + return retval; + break; + default: + return -EINVAL; + } + } + + return 0; +} + +void +ml_options_dump(struct ml_options *opt) +{ + struct rte_ml_dev_info dev_info; + + rte_ml_dev_info_get(opt->dev_id, &dev_info); + + ml_dump("driver", "%s", dev_info.driver_name); + ml_dump("test", "%s", opt->test_name); + ml_dump("dev_id", "%d", opt->dev_id); + + if (opt->socket_id == SOCKET_ID_ANY) + ml_dump("socket_id", "%d (SOCKET_ID_ANY)", opt->socket_id); + else + ml_dump("socket_id", "%d", opt->socket_id); + + ml_dump("debug", "%s", (opt->debug ? 
"true" : "false")); +} diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h new file mode 100644 index 0000000000..05311a9a47 --- /dev/null +++ b/app/test-mldev/ml_options.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#ifndef _ML_OPTIONS_ +#define _ML_OPTIONS_ + +#include +#include + +#define ML_TEST_NAME_MAX_LEN 32 + +/* Options names */ +#define ML_TEST ("test") +#define ML_DEVICE_ID ("dev_id") +#define ML_SOCKET_ID ("socket_id") +#define ML_DEBUG ("debug") +#define ML_HELP ("help") + +struct ml_options { + char test_name[ML_TEST_NAME_MAX_LEN]; + int16_t dev_id; + int socket_id; + bool debug; +}; + +void ml_options_default(struct ml_options *opt); +int ml_options_parse(struct ml_options *opt, int argc, char **argv); +void ml_options_dump(struct ml_options *opt); + +#endif /* _ML_OPTIONS_ */ diff --git a/app/test-mldev/ml_test.c b/app/test-mldev/ml_test.c new file mode 100644 index 0000000000..2304712764 --- /dev/null +++ b/app/test-mldev/ml_test.c @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */ + +#include +#include +#include + +#include "ml_test.h" + +static STAILQ_HEAD(, ml_test_entry) head = STAILQ_HEAD_INITIALIZER(head); + +void +ml_test_register(struct ml_test_entry *entry) +{ + STAILQ_INSERT_TAIL(&head, entry, next); +} + +struct ml_test * +ml_test_get(const char *name) +{ + struct ml_test_entry *entry; + + if (!name) + return NULL; + + STAILQ_FOREACH(entry, &head, next) + if (!strncmp(entry->test.name, name, strlen(name))) + return &entry->test; + + return NULL; +} + +void +ml_test_dump_names(void (*f)(const char *name)) +{ + struct ml_test_entry *entry; + + STAILQ_FOREACH(entry, &head, next) + { + if (entry->test.name) + printf("\t %s\n", entry->test.name); + f(entry->test.name); + } +} diff --git a/app/test-mldev/ml_test.h b/app/test-mldev/ml_test.h new file mode 100644 index 0000000000..4a1430ec1b --- /dev/null +++ b/app/test-mldev/ml_test.h @@ -0,0 +1,75 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#ifndef _ML_TEST_ +#define _ML_TEST_ + +#include +#include +#include + +#include + +#include "ml_options.h" + +#define ML_TEST_MAX_POOL_SIZE 256 + +enum ml_test_result { + ML_TEST_SUCCESS, + ML_TEST_FAILED, + ML_TEST_UNSUPPORTED, +}; + +struct ml_test; + +typedef bool (*ml_test_capability_check_t)(struct ml_options *opt); +typedef int (*ml_test_options_check_t)(struct ml_options *opt); +typedef void (*ml_test_options_dump_t)(struct ml_options *opt); +typedef int (*ml_test_setup_t)(struct ml_test *test, struct ml_options *opt); +typedef void (*ml_test_destroy_t)(struct ml_test *test, struct ml_options *opt); +typedef int (*ml_test_driver_t)(struct ml_test *test, struct ml_options *opt); +typedef int (*ml_test_result_t)(struct ml_test *test, struct ml_options *opt); + +struct ml_test_ops { + ml_test_capability_check_t cap_check; + ml_test_options_check_t opt_check; + ml_test_options_dump_t opt_dump; + ml_test_setup_t test_setup; + ml_test_destroy_t test_destroy; + ml_test_driver_t test_driver; + 
ml_test_result_t test_result; +}; + +struct ml_test { + const char *name; + void *test_priv; + struct ml_test_ops ops; +}; + +struct ml_test_entry { + struct ml_test test; + + STAILQ_ENTRY(ml_test_entry) next; +}; + +static inline void * +ml_test_priv(struct ml_test *test) +{ + return test->test_priv; +} + +struct ml_test *ml_test_get(const char *name); +void ml_test_register(struct ml_test_entry *test); +void ml_test_dump_names(void (*f)(const char *)); + +#define ML_TEST_REGISTER(nm) \ + static struct ml_test_entry _ml_test_entry_##nm; \ + RTE_INIT(ml_test_##nm) \ + { \ + _ml_test_entry_##nm.test.name = RTE_STR(nm); \ + memcpy(&_ml_test_entry_##nm.test.ops, &nm, sizeof(struct ml_test_ops)); \ + ml_test_register(&_ml_test_entry_##nm); \ + } + +#endif /* _ML_TEST_ */ diff --git a/app/test-mldev/parser.c b/app/test-mldev/parser.c new file mode 100644 index 0000000000..0b7fb63fe5 --- /dev/null +++ b/app/test-mldev/parser.c @@ -0,0 +1,380 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2016 Intel Corporation. + * Copyright (c) 2017 Cavium, Inc. + * Copyright (c) 2022 Marvell. 
+ */ + +#include +#include +#include +#include +#include +#include + +#include + +#include "parser.h" + +static uint32_t +get_hex_val(char c) +{ + switch (c) { + case '0': + case '1': + case '2': + case '3': + case '4': + case '5': + case '6': + case '7': + case '8': + case '9': + return c - '0'; + case 'A': + case 'B': + case 'C': + case 'D': + case 'E': + case 'F': + return c - 'A' + 10; + case 'a': + case 'b': + case 'c': + case 'd': + case 'e': + case 'f': + return c - 'a' + 10; + default: + return 0; + } +} + +int +parser_read_arg_bool(const char *p) +{ + p = skip_white_spaces(p); + int result = -EINVAL; + + if (((p[0] == 'y') && (p[1] == 'e') && (p[2] == 's')) || + ((p[0] == 'Y') && (p[1] == 'E') && (p[2] == 'S'))) { + p += 3; + result = 1; + } + + if (((p[0] == 'o') && (p[1] == 'n')) || ((p[0] == 'O') && (p[1] == 'N'))) { + p += 2; + result = 1; + } + + if (((p[0] == 'n') && (p[1] == 'o')) || ((p[0] == 'N') && (p[1] == 'O'))) { + p += 2; + result = 0; + } + + if (((p[0] == 'o') && (p[1] == 'f') && (p[2] == 'f')) || + ((p[0] == 'O') && (p[1] == 'F') && (p[2] == 'F'))) { + p += 3; + result = 0; + } + + p = skip_white_spaces(p); + + if (p[0] != '\0') + return -EINVAL; + + return result; +} + +int +parser_read_uint64(uint64_t *value, const char *p) +{ + char *next; + uint64_t val; + + p = skip_white_spaces(p); + if (!isdigit(*p)) + return -EINVAL; + + val = strtoul(p, &next, 10); + if (p == next) + return -EINVAL; + + p = next; + switch (*p) { + case 'T': + val *= 1024ULL; + /* fall through */ + case 'G': + val *= 1024ULL; + /* fall through */ + case 'M': + val *= 1024ULL; + /* fall through */ + case 'k': + case 'K': + val *= 1024ULL; + p++; + break; + } + + p = skip_white_spaces(p); + if (*p != '\0') + return -EINVAL; + + *value = val; + return 0; +} + +int +parser_read_int32(int32_t *value, const char *p) +{ + char *next; + int32_t val; + + p = skip_white_spaces(p); + if (!isdigit(*p)) + return -EINVAL; + + val = strtol(p, &next, 10); + if (p == next) + return 
-EINVAL; + + *value = val; + return 0; +} + +int +parser_read_int16(int16_t *value, const char *p) +{ + char *next; + int16_t val; + + p = skip_white_spaces(p); + if (!isdigit(*p)) + return -EINVAL; + + val = strtol(p, &next, 10); + if (p == next) + return -EINVAL; + + *value = val; + return 0; +} + +int +parser_read_uint64_hex(uint64_t *value, const char *p) +{ + char *next; + uint64_t val; + + p = skip_white_spaces(p); + + val = strtoul(p, &next, 16); + if (p == next) + return -EINVAL; + + p = skip_white_spaces(next); + if (*p != '\0') + return -EINVAL; + + *value = val; + return 0; +} + +int +parser_read_uint32(uint32_t *value, const char *p) +{ + uint64_t val = 0; + int ret = parser_read_uint64(&val, p); + + if (ret < 0) + return ret; + + if (val > UINT32_MAX) + return -ERANGE; + + *value = val; + return 0; +} + +int +parser_read_uint32_hex(uint32_t *value, const char *p) +{ + uint64_t val = 0; + int ret = parser_read_uint64_hex(&val, p); + + if (ret < 0) + return ret; + + if (val > UINT32_MAX) + return -ERANGE; + + *value = val; + return 0; +} + +int +parser_read_uint16(uint16_t *value, const char *p) +{ + uint64_t val = 0; + int ret = parser_read_uint64(&val, p); + + if (ret < 0) + return ret; + + if (val > UINT16_MAX) + return -ERANGE; + + *value = val; + return 0; +} + +int +parser_read_uint16_hex(uint16_t *value, const char *p) +{ + uint64_t val = 0; + int ret = parser_read_uint64_hex(&val, p); + + if (ret < 0) + return ret; + + if (val > UINT16_MAX) + return -ERANGE; + + *value = val; + return 0; +} + +int +parser_read_uint8(uint8_t *value, const char *p) +{ + uint64_t val = 0; + int ret = parser_read_uint64(&val, p); + + if (ret < 0) + return ret; + + if (val > UINT8_MAX) + return -ERANGE; + + *value = val; + return 0; +} + +int +parser_read_uint8_hex(uint8_t *value, const char *p) +{ + uint64_t val = 0; + int ret = parser_read_uint64_hex(&val, p); + + if (ret < 0) + return ret; + + if (val > UINT8_MAX) + return -ERANGE; + + *value = val; + return 0; +} 
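The size-suffix handling in parser_read_uint64() deserves a note: the switch cases deliberately fall through so each larger unit applies every smaller multiplier (T = 1024 × G = 1024² × M, and so on). A standalone sketch of just that logic — `demo_read_uint64` is a trimmed re-derivation for illustration, not the patch's exact function:

```c
#include <ctype.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a decimal value with an optional k/K/M/G/T binary suffix:
 * "4K" -> 4096, "2M" -> 2097152. Fall-through chains the multipliers. */
static int
demo_read_uint64(uint64_t *value, const char *p)
{
	char *next;
	uint64_t val;

	while (isspace((unsigned char)*p))
		p++;
	if (!isdigit((unsigned char)*p))
		return -EINVAL;

	val = strtoul(p, &next, 10);
	if (p == next)
		return -EINVAL;

	p = next;
	switch (*p) {
	case 'T':
		val *= 1024ULL;
		/* fall through */
	case 'G':
		val *= 1024ULL;
		/* fall through */
	case 'M':
		val *= 1024ULL;
		/* fall through */
	case 'k':
	case 'K':
		val *= 1024ULL;
		p++;
		break;
	}

	if (*p != '\0')
		return -EINVAL;

	*value = val;
	return 0;
}
```

So "1G" parses as 1 × 1024 × 1024 × 1024 = 1073741824, and trailing garbage after the suffix is rejected with -EINVAL.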
+ +int +parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens) +{ + uint32_t i; + + if ((string == NULL) || (tokens == NULL) || (*n_tokens < 1)) + return -EINVAL; + + for (i = 0; i < *n_tokens; i++) { + tokens[i] = strtok_r(string, PARSE_DELIMITER, &string); + if (tokens[i] == NULL) + break; + } + + if ((i == *n_tokens) && (strtok_r(string, PARSE_DELIMITER, &string) != NULL)) + return -E2BIG; + + *n_tokens = i; + return 0; +} + +int +parse_hex_string(char *src, uint8_t *dst, uint32_t *size) +{ + char *c; + uint32_t len, i; + + /* Check input parameters */ + if ((src == NULL) || (dst == NULL) || (size == NULL) || (*size == 0)) + return -1; + + len = strlen(src); + if (((len & 3) != 0) || (len > (*size) * 2)) + return -1; + *size = len / 2; + + for (c = src; *c != 0; c++) { + if ((((*c) >= '0') && ((*c) <= '9')) || (((*c) >= 'A') && ((*c) <= 'F')) || + (((*c) >= 'a') && ((*c) <= 'f'))) + continue; + + return -1; + } + + /* Convert chars to bytes */ + for (i = 0; i < *size; i++) + dst[i] = get_hex_val(src[2 * i]) * 16 + get_hex_val(src[2 * i + 1]); + + return 0; +} + +int +parse_lcores_list(bool lcores[], int lcores_num, const char *corelist) +{ + int i, idx = 0; + int min, max; + char *end = NULL; + + if (corelist == NULL) + return -1; + while (isblank(*corelist)) + corelist++; + i = strlen(corelist); + while ((i > 0) && isblank(corelist[i - 1])) + i--; + + /* Get list of lcores */ + min = RTE_MAX_LCORE; + do { + while (isblank(*corelist)) + corelist++; + if (*corelist == '\0') + return -1; + idx = strtoul(corelist, &end, 10); + if (idx < 0 || idx > lcores_num) + return -1; + + if (end == NULL) + return -1; + while (isblank(*end)) + end++; + if (*end == '-') { + min = idx; + } else if ((*end == ',') || (*end == '\0')) { + max = idx; + if (min == RTE_MAX_LCORE) + min = idx; + for (idx = min; idx <= max; idx++) { + if (lcores[idx] == 1) + return -E2BIG; + lcores[idx] = 1; + } + + min = RTE_MAX_LCORE; + } else + return -1; + corelist = end + 1; + } 
while (*end != '\0'); + + return 0; +} diff --git a/app/test-mldev/parser.h b/app/test-mldev/parser.h new file mode 100644 index 0000000000..f0d5e79e4b --- /dev/null +++ b/app/test-mldev/parser.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2010-2016 Intel Corporation. + * Copyright (c) 2022 Marvell. + */ + +#ifndef __INCLUDE_PARSER_H__ +#define __INCLUDE_PARSER_H__ + +#include +#include +#include + +#define PARSE_DELIMITER " \f\n\r\t\v" + +#define skip_white_spaces(pos) \ + ({ \ + __typeof__(pos) _p = (pos); \ + for (; isspace(*_p); _p++) \ + ; \ + _p; \ + }) + +static inline size_t +skip_digits(const char *src) +{ + size_t i; + + for (i = 0; isdigit(src[i]); i++) + ; + + return i; +} + +int parser_read_arg_bool(const char *p); + +int parser_read_uint64(uint64_t *value, const char *p); +int parser_read_uint32(uint32_t *value, const char *p); +int parser_read_uint16(uint16_t *value, const char *p); +int parser_read_uint8(uint8_t *value, const char *p); + +int parser_read_uint64_hex(uint64_t *value, const char *p); +int parser_read_uint32_hex(uint32_t *value, const char *p); +int parser_read_uint16_hex(uint16_t *value, const char *p); +int parser_read_uint8_hex(uint8_t *value, const char *p); + +int parser_read_int32(int32_t *value, const char *p); +int parser_read_int16(int16_t *value, const char *p); + +int parse_hex_string(char *src, uint8_t *dst, uint32_t *size); + +int parse_tokenize_string(char *string, char *tokens[], uint32_t *n_tokens); + +int parse_lcores_list(bool lcores[], int lcores_num, const char *corelist); + +#endif /* __INCLUDE_PARSER_H__ */ From patchwork Tue Nov 29 06:50:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 120221 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org 
[217.70.189.124])
From: Srikanth Yalavarthi
To: Srikanth
Yalavarthi
Subject: [PATCH v1 02/12] app/mldev: add common test functions
Date: Mon, 28 Nov 2022 22:50:30 -0800
Message-ID: <20221129065040.5875-3-syalavarthi@marvell.com>
In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Add common functions used by all tests. The common code includes
functions to check device capabilities, validate options, and
configure, start, stop, and close ML devices.

Signed-off-by: Srikanth Yalavarthi
Change-Id: I5f99b57f97ca5b317450b63bff86ff9fdadf388f
---
 app/test-mldev/meson.build   |   1 +
 app/test-mldev/test_common.c | 139 +++++++++++++++++++++++++++++++++++
 app/test-mldev/test_common.h |  27 +++++++
 3 files changed, 167 insertions(+)
 create mode 100644 app/test-mldev/test_common.c
 create mode 100644 app/test-mldev/test_common.h

diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build
index 8ca2e1a1c1..964bb9ddc4 100644
--- a/app/test-mldev/meson.build
+++ b/app/test-mldev/meson.build
@@ -12,6 +12,7 @@ sources = files(
         'ml_options.c',
         'ml_test.c',
         'parser.c',
+        'test_common.c',
 )

 deps += ['mldev']
diff --git a/app/test-mldev/test_common.c b/app/test-mldev/test_common.c
new file mode 100644
index 0000000000..b6b32904e4
--- /dev/null
+++ b/app/test-mldev/test_common.c
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 Marvell.
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include "ml_common.h"
+#include "ml_options.h"
+#include "test_common.h"
+
+bool
+ml_test_cap_check(struct ml_options *opt)
+{
+	struct rte_ml_dev_info dev_info;
+
+	rte_ml_dev_info_get(opt->dev_id, &dev_info);
+	if (dev_info.max_models == 0) {
+		ml_err("Not enough mldev models supported = %d", dev_info.max_models);
+		return false;
+	}
+
+	return true;
+}
+
+int
+ml_test_opt_check(struct ml_options *opt)
+{
+	uint16_t dev_count;
+	int socket_id;
+
+	RTE_SET_USED(opt);
+
+	dev_count = rte_ml_dev_count();
+	if (dev_count == 0) {
+		ml_err("No ML devices found");
+		return -ENODEV;
+	}
+
+	if (opt->dev_id >= dev_count) {
+		ml_err("Invalid option dev_id = %d", opt->dev_id);
+		return -EINVAL;
+	}
+
+	socket_id = rte_ml_dev_socket_id(opt->dev_id);
+	if ((opt->socket_id != SOCKET_ID_ANY) && (opt->socket_id != socket_id)) {
+		ml_err("Invalid option, socket_id = %d\n", opt->socket_id);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+ml_test_opt_dump(struct ml_options *opt)
+{
+	ml_options_dump(opt);
+}
+
+int
+ml_test_device_configure(struct ml_test *test, struct ml_options *opt)
+{
+	struct test_common *t = ml_test_priv(test);
+	struct rte_ml_dev_config dev_config;
+	int ret;
+
+	ret = rte_ml_dev_info_get(opt->dev_id, &t->dev_info);
+	if (ret != 0) {
+		ml_err("Failed to get mldev info, dev_id = %d\n", opt->dev_id);
+		return ret;
+	}
+
+	/* configure device */
+	dev_config.socket_id = opt->socket_id;
+	dev_config.nb_models = t->dev_info.max_models;
+	dev_config.nb_queue_pairs = t->dev_info.max_queue_pairs;
+	ret = rte_ml_dev_configure(opt->dev_id, &dev_config);
+	if (ret != 0) {
+		ml_err("Failed to configure ml device, dev_id = %d\n", opt->dev_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+int
+ml_test_device_close(struct ml_test *test, struct ml_options *opt)
+{
+	struct test_common *t = ml_test_priv(test);
+	int ret = 0;
+
+	RTE_SET_USED(t);
+
+	/* close device */
+	ret = rte_ml_dev_close(opt->dev_id);
+ if (ret != 0) + ml_err("Failed to close ML device, dev_id = %d\n", opt->dev_id); + + return ret; +} + +int +ml_test_device_start(struct ml_test *test, struct ml_options *opt) +{ + struct test_common *t = ml_test_priv(test); + int ret; + + RTE_SET_USED(t); + + /* start device */ + ret = rte_ml_dev_start(opt->dev_id); + if (ret != 0) { + ml_err("Failed to start ml device, dev_id = %d\n", opt->dev_id); + return ret; + } + + return 0; +} + +int +ml_test_device_stop(struct ml_test *test, struct ml_options *opt) +{ + struct test_common *t = ml_test_priv(test); + int ret = 0; + + RTE_SET_USED(t); + + /* stop device */ + ret = rte_ml_dev_stop(opt->dev_id); + if (ret != 0) + ml_err("Failed to stop ML device, dev_id = %d\n", opt->dev_id); + + return ret; +} diff --git a/app/test-mldev/test_common.h b/app/test-mldev/test_common.h new file mode 100644 index 0000000000..05a2e43e2f --- /dev/null +++ b/app/test-mldev/test_common.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */
+
+#ifndef _ML_TEST_COMMON_
+#define _ML_TEST_COMMON_
+
+#include
+
+#include "ml_options.h"
+#include "ml_test.h"
+
+struct test_common {
+	struct ml_options *opt;
+	enum ml_test_result result;
+	struct rte_ml_dev_info dev_info;
+};
+
+bool ml_test_cap_check(struct ml_options *opt);
+int ml_test_opt_check(struct ml_options *opt);
+void ml_test_opt_dump(struct ml_options *opt);
+int ml_test_device_configure(struct ml_test *test, struct ml_options *opt);
+int ml_test_device_close(struct ml_test *test, struct ml_options *opt);
+int ml_test_device_start(struct ml_test *test, struct ml_options *opt);
+int ml_test_device_stop(struct ml_test *test, struct ml_options *opt);
+
+#endif /* _ML_TEST_COMMON_ */

From patchwork Tue Nov 29 06:50:31 2022
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 120222
X-Patchwork-Delegate: thomas@monjalon.net
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v1 03/12] app/mldev: add test case to validate device ops
Date: Mon, 28 Nov 2022 22:50:31 -0800
Message-ID: <20221129065040.5875-4-syalavarthi@marvell.com>
In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com>
Added test case to validate device handling operations. Device ops test is a collection of multiple sub-tests. Enabled sub-test to validate device reconfiguration. Set device_ops as the default test. Signed-off-by: Srikanth Yalavarthi Change-Id: I4e9e4ac0e04df25b99df91330a566d963dcfc686 --- app/test-mldev/meson.build | 1 + app/test-mldev/ml_options.c | 5 +- app/test-mldev/test_device_ops.c | 234 +++++++++++++++++++++++++++++++ app/test-mldev/test_device_ops.h | 17 +++ 4 files changed, 255 insertions(+), 2 deletions(-) create mode 100644 app/test-mldev/test_device_ops.c create mode 100644 app/test-mldev/test_device_ops.h diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build index 964bb9ddc4..60ea23d142 100644 --- a/app/test-mldev/meson.build +++ b/app/test-mldev/meson.build @@ -13,6 +13,7 @@ sources = files( 'ml_test.c', 'parser.c', 'test_common.c', + 'test_device_ops.c', ) deps += ['mldev'] diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 8fd7760e36..2e5f11bca2 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -24,7 +24,7 @@ void ml_options_default(struct ml_options *opt) { memset(opt, 0, sizeof(*opt)); - strlcpy(opt->test_name, "ml_test", ML_TEST_NAME_MAX_LEN); + strlcpy(opt->test_name, "device_ops", ML_TEST_NAME_MAX_LEN); opt->dev_id = 0; opt->socket_id = SOCKET_ID_ANY; opt->debug = false; @@ -66,7 +66,8 @@ ml_parse_socket_id(struct ml_options *opt, const char *arg) static void ml_dump_test_options(const char *testname) { - RTE_SET_USED(testname); + if (strcmp(testname, "device_ops") == 0) + printf("\n"); } static void diff --git a/app/test-mldev/test_device_ops.c b/app/test-mldev/test_device_ops.c new file mode 100644 index 0000000000..4cafcf41a6 --- /dev/null +++ b/app/test-mldev/test_device_ops.c @@ -0,0 +1,234 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include "ml_common.h"
+#include "ml_options.h"
+#include "test_device_ops.h"
+
+static bool
+test_device_cap_check(struct ml_options *opt)
+{
+	if (!ml_test_cap_check(opt))
+		return false;
+
+	return true;
+}
+
+static int
+test_device_opt_check(struct ml_options *opt)
+{
+	int ret;
+
+	/* check common opts */
+	ret = ml_test_opt_check(opt);
+	if (ret != 0)
+		return ret;
+
+	return 0;
+}
+
+static void
+test_device_opt_dump(struct ml_options *opt)
+{
+	/* dump common opts */
+	ml_test_opt_dump(opt);
+}
+
+static int
+test_device_setup(struct ml_test *test, struct ml_options *opt)
+{
+	struct test_device *t;
+	void *test_device;
+	int ret = 0;
+
+	/* allocate for test structure */
+	test_device = rte_zmalloc_socket(test->name, sizeof(struct test_device),
+					 RTE_CACHE_LINE_SIZE, opt->socket_id);
+	if (test_device == NULL) {
+		ml_err("failed to allocate memory for test_device");
+		ret = -ENOMEM;
+		goto error;
+	}
+	test->test_priv = test_device;
+	t = ml_test_priv(test);
+
+	t->cmn.result = ML_TEST_FAILED;
+	t->cmn.opt = opt;
+
+	/* get device info */
+	ret = rte_ml_dev_info_get(opt->dev_id, &t->cmn.dev_info);
+	if (ret < 0) {
+		ml_err("failed to get device info");
+		goto error;
+	}
+
+	return 0;
+
+error:
+	if (test_device != NULL)
+		rte_free(test_device);
+
+	return ret;
+}
+
+static void
+test_device_destroy(struct ml_test *test, struct ml_options *opt)
+{
+	struct test_device *t;
+
+	RTE_SET_USED(opt);
+
+	t = ml_test_priv(test);
+	if (t != NULL)
+		rte_free(t);
+}
+
+static int
+test_device_reconfigure(struct ml_test *test, struct ml_options *opt)
+{
+	struct rte_ml_dev_config dev_config;
+	struct rte_ml_dev_qp_conf qp_conf;
+	struct test_device *t;
+	uint16_t qp_id = 0;
+	int ret = 0;
+
+	t = ml_test_priv(test);
+
+	/* configure with default options */
+	ret = ml_test_device_configure(test, opt);
+	if (ret != 0)
+		return ret;
+
+	/* setup one queue pair with nb_desc = 1 */
+ qp_conf.nb_desc = 1; + qp_conf.cb = NULL; + + ret = rte_ml_dev_queue_pair_setup(opt->dev_id, qp_id, &qp_conf, opt->socket_id); + if (ret != 0) { + ml_err("Failed to setup ML device queue-pair, dev_id = %d, qp_id = %u\n", + opt->dev_id, qp_id); + goto error; + } + + /* start device */ + ret = ml_test_device_start(test, opt); + if (ret != 0) + goto error; + + /* stop device */ + ret = ml_test_device_stop(test, opt); + if (ret != 0) { + ml_err("Failed to stop device"); + goto error; + } + + /* reconfigure device based on dev_info */ + dev_config.socket_id = opt->socket_id; + dev_config.nb_models = t->cmn.dev_info.max_models; + dev_config.nb_queue_pairs = t->cmn.dev_info.max_queue_pairs; + ret = rte_ml_dev_configure(opt->dev_id, &dev_config); + if (ret != 0) { + ml_err("Failed to reconfigure ML device, dev_id = %d\n", opt->dev_id); + return ret; + } + + /* setup queue pairs */ + for (qp_id = 0; qp_id < t->cmn.dev_info.max_queue_pairs; qp_id++) { + qp_conf.nb_desc = t->cmn.dev_info.max_desc; + qp_conf.cb = NULL; + + ret = rte_ml_dev_queue_pair_setup(opt->dev_id, qp_id, &qp_conf, opt->socket_id); + if (ret != 0) { + ml_err("Failed to setup ML device queue-pair, dev_id = %d, qp_id = %u\n", + opt->dev_id, qp_id); + goto error; + } + } + + /* start device */ + ret = ml_test_device_start(test, opt); + if (ret != 0) + goto error; + + /* stop device */ + ret = ml_test_device_stop(test, opt); + if (ret != 0) + goto error; + + /* close device */ + ret = ml_test_device_close(test, opt); + if (ret != 0) + return ret; + + return 0; + +error: + ml_test_device_close(test, opt); + + return ret; +} + +static int +test_device_driver(struct ml_test *test, struct ml_options *opt) +{ + struct test_device *t; + int ret = 0; + + t = ml_test_priv(test); + + /* sub-test: device reconfigure */ + ret = test_device_reconfigure(test, opt); + if (ret != 0) { + printf("\n"); + printf("Model Device Reconfigure Test: " CLRED "%s" CLNRM "\n", "Failed"); + goto error; + } else { + printf("\n"); + 
printf("Model Device Reconfigure Test: " CLYEL "%s" CLNRM "\n", "Passed"); + } + + printf("\n"); + + t->cmn.result = ML_TEST_SUCCESS; + + return 0; + +error: + t->cmn.result = ML_TEST_FAILED; + return -1; +} + +static int +test_device_result(struct ml_test *test, struct ml_options *opt) +{ + struct test_device *t; + + RTE_SET_USED(opt); + + t = ml_test_priv(test); + + return t->cmn.result; +} + +static const struct ml_test_ops device_ops = { + .cap_check = test_device_cap_check, + .opt_check = test_device_opt_check, + .opt_dump = test_device_opt_dump, + .test_setup = test_device_setup, + .test_destroy = test_device_destroy, + .test_driver = test_device_driver, + .test_result = test_device_result, +}; + +ML_TEST_REGISTER(device_ops); diff --git a/app/test-mldev/test_device_ops.h b/app/test-mldev/test_device_ops.h new file mode 100644 index 0000000000..115b1072a2 --- /dev/null +++ b/app/test-mldev/test_device_ops.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */
+
+#ifndef _ML_TEST_DEVICE_OPS_
+#define _ML_TEST_DEVICE_OPS_
+
+#include
+
+#include "test_common.h"
+
+struct test_device {
+	/* common data */
+	struct test_common cmn;
+} __rte_cache_aligned;
+
+#endif /* _ML_TEST_DEVICE_OPS_ */

From patchwork Tue Nov 29 06:50:32 2022
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 120223
X-Patchwork-Delegate: thomas@monjalon.net
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi
Subject: [PATCH v1 04/12] app/mldev: add test case to validate model ops
Date: Mon, 28 Nov 2022 22:50:32 -0800
Message-ID: <20221129065040.5875-5-syalavarthi@marvell.com>
In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com>

Added test case to validate model operations. Model ops test is a
collection of sub-tests. Each sub-test invokes the model operations in a
specific order.

Sub-test A: (load -> start -> stop -> unload) x n
Sub-test B: load x n -> start x n -> stop x n -> unload x n
Sub-test C: load x n + (start + stop) x n + unload x n
Sub-test D: (load + start) x n -> (stop + unload) x n

Added internal functions to handle model load, start, stop and unload.
List of models to be used for testing can be specified through application argument "--models" Signed-off-by: Srikanth Yalavarthi Change-Id: Ia3a4fae03473480ff2971d3add6a79af16e1335c --- app/test-mldev/meson.build | 2 + app/test-mldev/ml_options.c | 45 ++- app/test-mldev/ml_options.h | 9 + app/test-mldev/test_model_common.c | 162 +++++++++++ app/test-mldev/test_model_common.h | 37 +++ app/test-mldev/test_model_ops.c | 433 +++++++++++++++++++++++++++++ app/test-mldev/test_model_ops.h | 21 ++ 7 files changed, 706 insertions(+), 3 deletions(-) create mode 100644 app/test-mldev/test_model_common.c create mode 100644 app/test-mldev/test_model_common.h create mode 100644 app/test-mldev/test_model_ops.c create mode 100644 app/test-mldev/test_model_ops.h diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build index 60ea23d142..b09e1ccc8a 100644 --- a/app/test-mldev/meson.build +++ b/app/test-mldev/meson.build @@ -14,6 +14,8 @@ sources = files( 'parser.c', 'test_common.c', 'test_device_ops.c', + 'test_model_common.c', + 'test_model_ops.c', ) deps += ['mldev'] diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 2e5f11bca2..8e40a33ed0 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -4,6 +4,7 @@ #include #include +#include #include #include #include @@ -27,6 +28,7 @@ ml_options_default(struct ml_options *opt) strlcpy(opt->test_name, "device_ops", ML_TEST_NAME_MAX_LEN); opt->dev_id = 0; opt->socket_id = SOCKET_ID_ANY; + opt->nb_filelist = 0; opt->debug = false; } @@ -63,11 +65,47 @@ ml_parse_socket_id(struct ml_options *opt, const char *arg) return 0; } +static int +ml_parse_models(struct ml_options *opt, const char *arg) +{ + const char *delim = ","; + char models[PATH_MAX]; + char *token; + int ret = 0; + + strlcpy(models, arg, PATH_MAX); + + token = strtok(models, delim); + while (token != NULL) { + strlcpy(opt->filelist[opt->nb_filelist].model, token, PATH_MAX); + opt->nb_filelist++; + + if 
(opt->nb_filelist >= ML_TEST_MAX_MODELS) { + ml_err("Exceeded model count, max = %d\n", ML_TEST_MAX_MODELS); + ret = -EINVAL; + break; + } + token = strtok(NULL, delim); + } + + if (opt->nb_filelist == 0) { + ml_err("Model list is empty. Need at least one model for the test"); + ret = -EINVAL; + } + + return ret; +} + static void ml_dump_test_options(const char *testname) { if (strcmp(testname, "device_ops") == 0) printf("\n"); + + if (strcmp(testname, "model_ops") == 0) { + printf("\t\t--models : comma separated list of models\n"); + printf("\n"); + } } static void @@ -85,9 +123,9 @@ print_usage(char *program) ml_test_dump_names(ml_dump_test_options); } -static struct option lgopts[] = {{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, - {ML_SOCKET_ID, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, - {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; +static struct option lgopts[] = { + {ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0}, + {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; static int ml_opts_parse_long(int opt_idx, struct ml_options *opt) @@ -98,6 +136,7 @@ ml_opts_parse_long(int opt_idx, struct ml_options *opt) {ML_TEST, ml_parse_test_name}, {ML_DEVICE_ID, ml_parse_dev_id}, {ML_SOCKET_ID, ml_parse_socket_id}, + {ML_MODELS, ml_parse_models}, }; for (i = 0; i < RTE_DIM(parsermap); i++) { diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h index 05311a9a47..8faf3b5deb 100644 --- a/app/test-mldev/ml_options.h +++ b/app/test-mldev/ml_options.h @@ -5,22 +5,31 @@ #ifndef _ML_OPTIONS_ #define _ML_OPTIONS_ +#include #include #include #define ML_TEST_NAME_MAX_LEN 32 +#define ML_TEST_MAX_MODELS 8 /* Options names */ #define ML_TEST ("test") #define ML_DEVICE_ID ("dev_id") #define ML_SOCKET_ID ("socket_id") +#define ML_MODELS ("models") #define ML_DEBUG ("debug") #define ML_HELP ("help") +struct ml_filelist { + char model[PATH_MAX]; +}; + struct ml_options { char test_name[ML_TEST_NAME_MAX_LEN]; int16_t dev_id; int socket_id;
+ struct ml_filelist filelist[ML_TEST_MAX_MODELS]; + uint8_t nb_filelist; bool debug; }; diff --git a/app/test-mldev/test_model_common.c b/app/test-mldev/test_model_common.c new file mode 100644 index 0000000000..5368be17fe --- /dev/null +++ b/app/test-mldev/test_model_common.c @@ -0,0 +1,162 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#include +#include + +#include +#include +#include + +#include "ml_common.h" +#include "ml_options.h" +#include "ml_test.h" +#include "test_common.h" +#include "test_model_common.h" + +int +ml_model_load(struct ml_test *test, struct ml_options *opt, struct ml_model *model, int16_t fid) +{ + struct test_common *t = ml_test_priv(test); + struct rte_ml_model_params model_params; + FILE *fp; + int ret; + + if (model->state == MODEL_LOADED) + return 0; + + if (model->state != MODEL_INITIAL) + return -EINVAL; + + /* read model binary */ + fp = fopen(opt->filelist[fid].model, "r"); + if (fp == NULL) { + ml_err("Failed to open model file : %s\n", opt->filelist[fid].model); + return -1; + } + + fseek(fp, 0, SEEK_END); + model_params.size = ftell(fp); + fseek(fp, 0, SEEK_SET); + + model_params.addr = rte_malloc_socket("ml_model", model_params.size, + t->dev_info.min_align_size, opt->socket_id); + if (model_params.addr == NULL) { + ml_err("Failed to allocate memory for model: %s\n", opt->filelist[fid].model); + fclose(fp); + return -ENOMEM; + } + + if (fread(model_params.addr, 1, model_params.size, fp) != model_params.size) { + ml_err("Failed to read model file : %s\n", opt->filelist[fid].model); + rte_free(model_params.addr); + fclose(fp); + return -1; + } + fclose(fp); + + /* load model to device */ + ret = rte_ml_model_load(opt->dev_id, &model_params, &model->id); + if (ret != 0) { + ml_err("Failed to load model : %s\n", opt->filelist[fid].model); + model->state = MODEL_ERROR; + rte_free(model_params.addr); + return ret; + } + + /* release mz */ + rte_free(model_params.addr); + + /* get model info */ 
+ ret = rte_ml_model_info_get(opt->dev_id, model->id, &model->info); + if (ret != 0) { + ml_err("Failed to get model info : %s\n", opt->filelist[fid].model); + return ret; + } + + model->state = MODEL_LOADED; + + return 0; +} + +int +ml_model_unload(struct ml_test *test, struct ml_options *opt, struct ml_model *model, int16_t fid) +{ + struct test_common *t = ml_test_priv(test); + int ret; + + RTE_SET_USED(t); + + if (model->state == MODEL_INITIAL) + return 0; + + if (model->state != MODEL_LOADED) + return -EINVAL; + + /* unload model */ + ret = rte_ml_model_unload(opt->dev_id, model->id); + if (ret != 0) { + ml_err("Failed to unload model: %s\n", opt->filelist[fid].model); + model->state = MODEL_ERROR; + return ret; + } + + model->state = MODEL_INITIAL; + + return 0; +} + +int +ml_model_start(struct ml_test *test, struct ml_options *opt, struct ml_model *model, int16_t fid) +{ + struct test_common *t = ml_test_priv(test); + int ret; + + RTE_SET_USED(t); + + if (model->state == MODEL_STARTED) + return 0; + + if (model->state != MODEL_LOADED) + return -EINVAL; + + /* start model */ + ret = rte_ml_model_start(opt->dev_id, model->id); + if (ret != 0) { + ml_err("Failed to start model : %s\n", opt->filelist[fid].model); + model->state = MODEL_ERROR; + return ret; + } + + model->state = MODEL_STARTED; + + return 0; +} + +int +ml_model_stop(struct ml_test *test, struct ml_options *opt, struct ml_model *model, int16_t fid) +{ + struct test_common *t = ml_test_priv(test); + int ret; + + RTE_SET_USED(t); + + if (model->state == MODEL_LOADED) + return 0; + + if (model->state != MODEL_STARTED) + return -EINVAL; + + /* stop model */ + ret = rte_ml_model_stop(opt->dev_id, model->id); + if (ret != 0) { + ml_err("Failed to stop model: %s\n", opt->filelist[fid].model); + model->state = MODEL_ERROR; + return ret; + } + + model->state = MODEL_LOADED; + + return 0; +} diff --git a/app/test-mldev/test_model_common.h b/app/test-mldev/test_model_common.h new file mode 100644 index 
0000000000..302e4eb45f --- /dev/null +++ b/app/test-mldev/test_model_common.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#ifndef _ML_TEST_MODEL_COMMON_ +#define _ML_TEST_MODEL_COMMON_ + +#include + +#include + +#include "ml_options.h" +#include "ml_test.h" + +enum model_state { + MODEL_INITIAL, + MODEL_LOADED, + MODEL_STARTED, + MODEL_ERROR, +}; + +struct ml_model { + int16_t id; + struct rte_ml_model_info info; + enum model_state state; +}; + +int ml_model_load(struct ml_test *test, struct ml_options *opt, struct ml_model *model, + int16_t fid); +int ml_model_unload(struct ml_test *test, struct ml_options *opt, struct ml_model *model, + int16_t fid); +int ml_model_start(struct ml_test *test, struct ml_options *opt, struct ml_model *model, + int16_t fid); +int ml_model_stop(struct ml_test *test, struct ml_options *opt, struct ml_model *model, + int16_t fid); + +#endif /* _ML_TEST_MODEL_COMMON_ */ diff --git a/app/test-mldev/test_model_ops.c b/app/test-mldev/test_model_ops.c new file mode 100644 index 0000000000..69c9df8ed6 --- /dev/null +++ b/app/test-mldev/test_model_ops.c @@ -0,0 +1,433 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */ + +#include +#include +#include + +#include +#include +#include + +#include "ml_common.h" +#include "ml_options.h" +#include "ml_test.h" +#include "test_model_ops.h" + +static bool +test_model_ops_cap_check(struct ml_options *opt) +{ + if (!ml_test_cap_check(opt)) + return false; + + return true; +} + +static int +test_model_ops_opt_check(struct ml_options *opt) +{ + uint32_t i; + int ret; + + /* check common opts */ + ret = ml_test_opt_check(opt); + if (ret != 0) + return ret; + + /* check model file availability */ + for (i = 0; i < opt->nb_filelist; i++) { + if (access(opt->filelist[i].model, F_OK) == -1) { + ml_err("Model file not available: id = %u, file = %s", i, + opt->filelist[i].model); + return -ENOENT; + } + } + + return 0; +} + +static void +test_model_ops_opt_dump(struct ml_options *opt) +{ + uint32_t i; + + /* dump common opts */ + ml_test_opt_dump(opt); + + /* dump test specific opts */ + ml_dump_begin("models"); + for (i = 0; i < opt->nb_filelist; i++) + ml_dump_list("model", i, opt->filelist[i].model); + ml_dump_end; +} + +static int +test_model_ops_setup(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + void *test_model_ops; + int ret = 0; + uint32_t i; + + /* allocate model ops test structure */ + test_model_ops = rte_zmalloc_socket(test->name, sizeof(struct test_model_ops), + RTE_CACHE_LINE_SIZE, opt->socket_id); + if (test_model_ops == NULL) { + ml_err("Failed to allocate memory for test_model"); + ret = -ENOMEM; + goto error; + } + test->test_priv = test_model_ops; + t = ml_test_priv(test); + + t->cmn.result = ML_TEST_FAILED; + t->cmn.opt = opt; + + /* get device info */ + ret = rte_ml_dev_info_get(opt->dev_id, &t->cmn.dev_info); + if (ret < 0) { + ml_err("Failed to get device info"); + goto error; + } + + /* set model initial state */ + for (i = 0; i < opt->nb_filelist; i++) + t->model[i].state = MODEL_INITIAL; + + return 0; + +error: + if (test_model_ops != NULL) + rte_free(test_model_ops); + + return ret; 
+} + +static void +test_model_ops_destroy(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + + RTE_SET_USED(opt); + + t = ml_test_priv(test); + if (t != NULL) + rte_free(t); +} + +static int +test_model_ops_mldev_setup(struct ml_test *test, struct ml_options *opt) +{ + int ret; + + ret = ml_test_device_configure(test, opt); + if (ret != 0) + return ret; + + ret = ml_test_device_start(test, opt); + if (ret != 0) + goto error; + + return 0; + +error: + ml_test_device_close(test, opt); + + return ret; +} + +static int +test_model_ops_mldev_destroy(struct ml_test *test, struct ml_options *opt) +{ + int ret; + + ret = ml_test_device_stop(test, opt); + if (ret != 0) + goto error; + + ret = ml_test_device_close(test, opt); + if (ret != 0) + return ret; + + return 0; + +error: + ml_test_device_close(test, opt); + + return ret; +} + +/* Sub-test A: (load -> start -> stop -> unload) x n */ +static int +test_model_ops_subtest_a(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + int ret = 0; + uint32_t i; + + t = ml_test_priv(test); + + /* load + start + stop + unload */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_load(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + + ret = ml_model_start(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + + ret = ml_model_stop(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + + ret = ml_model_unload(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + +error: + for (i = 0; i < opt->nb_filelist; i++) + ml_model_stop(test, opt, &t->model[i], i); + + for (i = 0; i < opt->nb_filelist; i++) + ml_model_unload(test, opt, &t->model[i], i); + + return ret; +} + +/* Sub-test B: load x n -> start x n -> stop x n -> unload x n */ +static int +test_model_ops_subtest_b(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + int ret = 0; + uint32_t i; + + t = ml_test_priv(test); + + /* load */ + for (i = 0; i < 
opt->nb_filelist; i++) { + ret = ml_model_load(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + /* start */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_start(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + /* stop */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_stop(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + /* unload */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_unload(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + return 0; + +error: + for (i = 0; i < opt->nb_filelist; i++) + ml_model_stop(test, opt, &t->model[i], i); + + for (i = 0; i < opt->nb_filelist; i++) + ml_model_unload(test, opt, &t->model[i], i); + + return ret; +} + +/* Sub-test C: load x n + (start + stop) x n + unload x n */ +static int +test_model_ops_subtest_c(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + int ret = 0; + uint32_t i; + + t = ml_test_priv(test); + + /* load */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_load(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + /* start + stop */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_start(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + + ret = ml_model_stop(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + /* unload */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_unload(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + return 0; + +error: + for (i = 0; i < opt->nb_filelist; i++) + ml_model_stop(test, opt, &t->model[i], i); + + for (i = 0; i < opt->nb_filelist; i++) + ml_model_unload(test, opt, &t->model[i], i); + + return ret; +} + +/* Sub-test D: (load + start) x n -> (stop + unload) x n */ +static int +test_model_ops_subtest_d(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + int ret = 0; + uint32_t i; + + t = ml_test_priv(test); + + /* load + start 
*/ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_load(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + + ret = ml_model_start(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + /* stop + unload */ + for (i = 0; i < opt->nb_filelist; i++) { + ret = ml_model_stop(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + + ret = ml_model_unload(test, opt, &t->model[i], i); + if (ret != 0) + goto error; + } + + return 0; + +error: + for (i = 0; i < opt->nb_filelist; i++) + ml_model_stop(test, opt, &t->model[i], i); + + for (i = 0; i < opt->nb_filelist; i++) + ml_model_unload(test, opt, &t->model[i], i); + + return ret; +} + +static int +test_model_ops_driver(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + int ret = 0; + + t = ml_test_priv(test); + + /* device setup */ + ret = test_model_ops_mldev_setup(test, opt); + if (ret != 0) + return ret; + + printf("\n"); + + /* sub-test A */ + ret = test_model_ops_subtest_a(test, opt); + if (ret != 0) { + printf("Model Ops Sub-test A: " CLRED "%s" CLNRM "\n", "Failed"); + goto error; + } else { + printf("Model Ops Sub-test A: " CLYEL "%s" CLNRM "\n", "Passed"); + } + + /* sub-test B */ + ret = test_model_ops_subtest_b(test, opt); + if (ret != 0) { + printf("Model Ops Sub-test B: " CLRED "%s" CLNRM "\n", "Failed"); + goto error; + } else { + printf("Model Ops Sub-test B: " CLYEL "%s" CLNRM "\n", "Passed"); + } + + /* sub-test C */ + ret = test_model_ops_subtest_c(test, opt); + if (ret != 0) { + printf("Model Ops Sub-test C: " CLRED "%s" CLNRM "\n", "Failed"); + goto error; + } else { + printf("Model Ops Sub-test C: " CLYEL "%s" CLNRM "\n", "Passed"); + } + + /* sub-test D */ + ret = test_model_ops_subtest_d(test, opt); + if (ret != 0) { + printf("Model Ops Sub-test D: " CLRED "%s" CLNRM "\n", "Failed"); + goto error; + } else { + printf("Model Ops Sub-test D: " CLYEL "%s" CLNRM "\n", "Passed"); + } + + printf("\n"); + + /* device destroy */ + ret = 
test_model_ops_mldev_destroy(test, opt); + if (ret != 0) + return ret; + + t->cmn.result = ML_TEST_SUCCESS; + + return 0; + +error: + test_model_ops_mldev_destroy(test, opt); + + t->cmn.result = ML_TEST_FAILED; + + return ret; +} + +static int +test_model_ops_result(struct ml_test *test, struct ml_options *opt) +{ + struct test_model_ops *t; + + RTE_SET_USED(opt); + + t = ml_test_priv(test); + + return t->cmn.result; +} + +static const struct ml_test_ops model_ops = { + .cap_check = test_model_ops_cap_check, + .opt_check = test_model_ops_opt_check, + .opt_dump = test_model_ops_opt_dump, + .test_setup = test_model_ops_setup, + .test_destroy = test_model_ops_destroy, + .test_driver = test_model_ops_driver, + .test_result = test_model_ops_result, +}; + +ML_TEST_REGISTER(model_ops); diff --git a/app/test-mldev/test_model_ops.h b/app/test-mldev/test_model_ops.h new file mode 100644 index 0000000000..9dd8402390 --- /dev/null +++ b/app/test-mldev/test_model_ops.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */ + +#ifndef _ML_TEST_MODEL_OPS_ +#define _ML_TEST_MODEL_OPS_ + +#include + +#include "test_common.h" +#include "test_model_common.h" + +struct test_model_ops { + /* common data */ + struct test_common cmn; + + /* test specific data */ + struct ml_model model[ML_TEST_MAX_MODELS]; +} __rte_cache_aligned; + +#endif /* _ML_TEST_MODEL_OPS_ */ From patchwork Tue Nov 29 06:50:33 2022 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 120224 X-Patchwork-Delegate: thomas@monjalon.net From: Srikanth Yalavarthi Subject: [PATCH v1 05/12] app/mldev: add ordered inference test case Date: Mon, 28 Nov 2022 22:50:33 -0800 Message-ID: <20221129065040.5875-6-syalavarthi@marvell.com> In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com> Added an ordered test case to execute inferences with single or multiple models. In this test case inference requests for a model are enqueued after completion of all requests for the previous model. Test supports inference repetitions.
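As a minimal illustration of the ordered flow (all repetitions for one model complete before the next model is touched), the control structure can be sketched in self-contained C. The helpers below are hypothetical counting stubs, not the mldev test framework API:

```c
#include <assert.h>

/* Hypothetical stand-ins for the mldev test helpers; they only count
 * calls so the ordered control flow can be illustrated and checked. */
static int loads, starts, infers, stops, unloads;

static int model_load(int fid)    { (void)fid; loads++;   return 0; }
static int model_start(int fid)   { (void)fid; starts++;  return 0; }
static int run_inference(int fid) { (void)fid; infers++;  return 0; }
static int model_stop(int fid)    { (void)fid; stops++;   return 0; }
static int model_unload(int fid)  { (void)fid; unloads++; return 0; }

/* Ordered flow: (load -> start -> (enqueue + dequeue) x R -> stop -> unload) x N.
 * All R repetitions for model fid finish before model fid + 1 is loaded. */
static int
run_ordered(int nb_models, int reps)
{
	int fid, r, ret;

	for (fid = 0; fid < nb_models; fid++) {
		ret = model_load(fid);
		if (ret != 0)
			return ret;
		ret = model_start(fid);
		if (ret != 0)
			return ret;
		for (r = 0; r < reps; r++) {
			ret = run_inference(fid);
			if (ret != 0)
				return ret;
		}
		ret = model_stop(fid);
		if (ret != 0)
			return ret;
		ret = model_unload(fid);
		if (ret != 0)
			return ret;
	}
	return 0;
}
```

Error paths (stop/unload of already-started models on failure) are omitted here for brevity; the actual test driver handles them with a goto-based cleanup sequence.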
Operations sequence when testing with N models and R reps: (load -> start -> (enqueue + dequeue) x R -> stop -> unload) x N. The test case can be executed by selecting the "inference_ordered" test; the number of repetitions can be specified through the "--repetitions" argument. Signed-off-by: Srikanth Yalavarthi --- app/test-mldev/meson.build | 2 + app/test-mldev/ml_options.c | 73 ++- app/test-mldev/ml_options.h | 17 +- app/test-mldev/test_inference_common.c | 565 ++++++++++++++++++++++++ app/test-mldev/test_inference_common.h | 65 +++ app/test-mldev/test_inference_ordered.c | 119 +++++ app/test-mldev/test_model_common.h | 10 + 7 files changed, 839 insertions(+), 12 deletions(-) create mode 100644 app/test-mldev/test_inference_common.c create mode 100644 app/test-mldev/test_inference_common.h create mode 100644 app/test-mldev/test_inference_ordered.c diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build index b09e1ccc8a..475d76d126 100644 --- a/app/test-mldev/meson.build +++ b/app/test-mldev/meson.build @@ -16,6 +16,8 @@ sources = files( 'test_device_ops.c', 'test_model_common.c', 'test_model_ops.c', + 'test_inference_common.c', + 'test_inference_ordered.c', ) deps += ['mldev'] diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 8e40a33ed0..59a5d16584 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -29,6 +29,7 @@ ml_options_default(struct ml_options *opt) opt->dev_id = 0; opt->socket_id = SOCKET_ID_ANY; opt->nb_filelist = 0; + opt->repetitions = 1; opt->debug = false; } @@ -96,6 +97,60 @@ ml_parse_models(struct ml_options *opt, const char *arg) return ret; } +static int +ml_parse_filelist(struct ml_options *opt, const char *arg) +{ + const char *delim = ","; + char filelist[PATH_MAX]; + char *token; + + if (opt->nb_filelist >= ML_TEST_MAX_MODELS) { + ml_err("Exceeded filelist count, max = %d\n", ML_TEST_MAX_MODELS); + return -1; + } + + strlcpy(filelist, arg,
PATH_MAX); + + /* model */ + token = strtok(filelist, delim); + if (token == NULL) { + ml_err("Invalid filelist, model not specified = %s\n", arg); + return -EINVAL; + } + strlcpy(opt->filelist[opt->nb_filelist].model, token, PATH_MAX); + + /* input */ + token = strtok(NULL, delim); + if (token == NULL) { + ml_err("Invalid filelist, input not specified = %s\n", arg); + return -EINVAL; + } + strlcpy(opt->filelist[opt->nb_filelist].input, token, PATH_MAX); + + /* output */ + token = strtok(NULL, delim); + if (token == NULL) { + ml_err("Invalid filelist, output not specified = %s\n", arg); + return -EINVAL; + } + strlcpy(opt->filelist[opt->nb_filelist].output, token, PATH_MAX); + + opt->nb_filelist++; + + if (opt->nb_filelist == 0) { + ml_err("Empty filelist. Need at least one filelist entry for the test."); + return -EINVAL; + } + + return 0; +} + +static int +ml_parse_repetitions(struct ml_options *opt, const char *arg) +{ + return parser_read_uint64(&opt->repetitions, arg); +} + static void ml_dump_test_options(const char *testname) { @@ -106,6 +161,12 @@ ml_dump_test_options(const char *testname) printf("\t\t--models : comma separated list of models\n"); printf("\n"); } + + if (strcmp(testname, "inference_ordered") == 0) { + printf("\t\t--filelist : comma separated list of model, input and output\n" + "\t\t--repetitions : number of inference repetitions\n"); + printf("\n"); + } } static void @@ -124,8 +185,9 @@ print_usage(char *program) } static struct option lgopts[] = { - {ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0}, - {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; + {ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, + {ML_MODELS, 1, 0, 0}, {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, + {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; static int ml_opts_parse_long(int opt_idx, struct ml_options
*opt) unsigned int i; struct long_opt_parser parsermap[] = { - {ML_TEST, ml_parse_test_name}, - {ML_DEVICE_ID, ml_parse_dev_id}, - {ML_SOCKET_ID, ml_parse_socket_id}, - {ML_MODELS, ml_parse_models}, + {ML_TEST, ml_parse_test_name}, {ML_DEVICE_ID, ml_parse_dev_id}, + {ML_SOCKET_ID, ml_parse_socket_id}, {ML_MODELS, ml_parse_models}, + {ML_FILELIST, ml_parse_filelist}, {ML_REPETITIONS, ml_parse_repetitions}, }; for (i = 0; i < RTE_DIM(parsermap); i++) { diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h index 8faf3b5deb..ad8aee5964 100644 --- a/app/test-mldev/ml_options.h +++ b/app/test-mldev/ml_options.h @@ -13,15 +13,19 @@ #define ML_TEST_MAX_MODELS 8 /* Options names */ -#define ML_TEST ("test") -#define ML_DEVICE_ID ("dev_id") -#define ML_SOCKET_ID ("socket_id") -#define ML_MODELS ("models") -#define ML_DEBUG ("debug") -#define ML_HELP ("help") +#define ML_TEST ("test") +#define ML_DEVICE_ID ("dev_id") +#define ML_SOCKET_ID ("socket_id") +#define ML_MODELS ("models") +#define ML_FILELIST ("filelist") +#define ML_REPETITIONS ("repetitions") +#define ML_DEBUG ("debug") +#define ML_HELP ("help") struct ml_filelist { char model[PATH_MAX]; + char input[PATH_MAX]; + char output[PATH_MAX]; }; struct ml_options { @@ -30,6 +34,7 @@ struct ml_options { int socket_id; struct ml_filelist filelist[ML_TEST_MAX_MODELS]; uint8_t nb_filelist; + uint64_t repetitions; bool debug; }; diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c new file mode 100644 index 0000000000..e5e300ffdc --- /dev/null +++ b/app/test-mldev/test_inference_common.c @@ -0,0 +1,565 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */ + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#include "ml_common.h" +#include "ml_options.h" +#include "ml_test.h" +#include "test_common.h" +#include "test_inference_common.h" + +/* Enqueue inference requests with burst size equal to 1 */ +static int +ml_enqueue_single(void *arg) +{ + struct test_inference *t = ml_test_priv((struct ml_test *)arg); + struct ml_request *req = NULL; + struct rte_ml_op *op = NULL; + struct ml_core_args *args; + uint64_t model_enq = 0; + uint32_t burst_enq; + uint32_t lcore_id; + int16_t fid; + int ret; + + lcore_id = rte_lcore_id(); + args = &t->args[lcore_id]; + model_enq = 0; + + if (args->nb_reqs == 0) + return 0; + +next_rep: + fid = args->start_fid; + +next_model: + ret = rte_mempool_get(t->op_pool, (void **)&op); + if (ret != 0) + goto next_model; + +retry: + ret = rte_mempool_get(t->model[fid].io_pool, (void **)&req); + if (ret != 0) + goto retry; + + op->model_id = t->model[fid].id; + op->nb_batches = t->model[fid].info.batch_size; + op->mempool = t->op_pool; + + op->input.addr = req->input; + op->input.length = t->model[fid].inp_qsize; + op->input.next = NULL; + + op->output.addr = req->output; + op->output.length = t->model[fid].out_qsize; + op->output.next = NULL; + + op->user_ptr = req; + req->niters++; + req->fid = fid; + +enqueue_req: + burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, 0, &op, 1); + if (burst_enq == 0) + goto enqueue_req; + + fid++; + if (likely(fid <= args->end_fid)) + goto next_model; + + model_enq++; + if (likely(model_enq < args->nb_reqs)) + goto next_rep; + + return 0; +} + +/* Dequeue inference requests with burst size equal to 1 */ +static int +ml_dequeue_single(void *arg) +{ + struct test_inference *t = ml_test_priv((struct ml_test *)arg); + struct rte_ml_op_error error; + struct rte_ml_op *op = NULL; + struct ml_core_args *args; + struct ml_request *req; + uint64_t total_deq = 0; + uint8_t nb_filelist; + uint32_t 
burst_deq; + uint32_t lcore_id; + + lcore_id = rte_lcore_id(); + args = &t->args[lcore_id]; + nb_filelist = args->end_fid - args->start_fid + 1; + + if (args->nb_reqs == 0) + return 0; + +dequeue_req: + burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, 0, &op, 1); + + if (likely(burst_deq == 1)) { + total_deq += burst_deq; + if (unlikely(op->status == RTE_ML_OP_STATUS_ERROR)) { + rte_ml_op_error_get(t->cmn.opt->dev_id, op, &error); + ml_err("error_code = 0x%016lx, error_message = %s\n", error.errcode, + error.message); + } + req = (struct ml_request *)op->user_ptr; + rte_mempool_put(t->model[req->fid].io_pool, req); + rte_mempool_put(t->op_pool, op); + } + + if (likely(total_deq < args->nb_reqs * nb_filelist)) + goto dequeue_req; + + return 0; +} + +bool +test_inference_cap_check(struct ml_options *opt) +{ + struct rte_ml_dev_info dev_info; + + if (!ml_test_cap_check(opt)) + return false; + + rte_ml_dev_info_get(opt->dev_id, &dev_info); + if (opt->nb_filelist > dev_info.max_models) { + ml_err("Insufficient capabilities: Filelist count exceeded device limit, count = %u (max limit = %u)", + opt->nb_filelist, dev_info.max_models); + return false; + } + + return true; +} + +int +test_inference_opt_check(struct ml_options *opt) +{ + uint32_t i; + int ret; + + /* check common opts */ + ret = ml_test_opt_check(opt); + if (ret != 0) + return ret; + + /* check file availability */ + for (i = 0; i < opt->nb_filelist; i++) { + if (access(opt->filelist[i].model, F_OK) == -1) { + ml_err("Model file not accessible: id = %u, file = %s", i, + opt->filelist[i].model); + return -ENOENT; + } + + if (access(opt->filelist[i].input, F_OK) == -1) { + ml_err("Input file not accessible: id = %u, file = %s", i, + opt->filelist[i].input); + return -ENOENT; + } + } + + if (opt->repetitions == 0) { + ml_err("Invalid option, repetitions = %" PRIu64 "\n", opt->repetitions); + return -EINVAL; + } + + /* check number of available lcores. 
*/ + if (rte_lcore_count() < 3) { + ml_err("Insufficient lcores = %u\n", rte_lcore_count()); + ml_err("Minimum lcores required to create %u queue-pairs = %u\n", 1, 3); + return -EINVAL; + } + + return 0; +} + +void +test_inference_opt_dump(struct ml_options *opt) +{ + uint32_t i; + + /* dump common opts */ + ml_test_opt_dump(opt); + + /* dump test opts */ + ml_dump("repetitions", "%" PRIu64, opt->repetitions); + + ml_dump_begin("filelist"); + for (i = 0; i < opt->nb_filelist; i++) { + ml_dump_list("model", i, opt->filelist[i].model); + ml_dump_list("input", i, opt->filelist[i].input); + ml_dump_list("output", i, opt->filelist[i].output); + } + ml_dump_end; +} + +int +test_inference_setup(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t; + void *test_inference; + int ret = 0; + uint32_t i; + + test_inference = rte_zmalloc_socket(test->name, sizeof(struct test_inference), + RTE_CACHE_LINE_SIZE, opt->socket_id); + if (test_inference == NULL) { + ml_err("failed to allocate memory for test_model"); + ret = -ENOMEM; + goto error; + } + test->test_priv = test_inference; + t = ml_test_priv(test); + + t->nb_used = 0; + t->cmn.result = ML_TEST_FAILED; + t->cmn.opt = opt; + + /* get device info */ + ret = rte_ml_dev_info_get(opt->dev_id, &t->cmn.dev_info); + if (ret < 0) { + ml_err("failed to get device info"); + goto error; + } + + t->enqueue = ml_enqueue_single; + t->dequeue = ml_dequeue_single; + + /* set model initial state */ + for (i = 0; i < opt->nb_filelist; i++) + t->model[i].state = MODEL_INITIAL; + + return 0; + +error: + if (test_inference != NULL) + rte_free(test_inference); + + return ret; +} + +void +test_inference_destroy(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t; + + RTE_SET_USED(opt); + + t = ml_test_priv(test); + if (t != NULL) + rte_free(t); +} + +int +ml_inference_mldev_setup(struct ml_test *test, struct ml_options *opt) +{ + struct rte_ml_dev_qp_conf qp_conf; + struct test_inference *t; + int 
ret; + + t = ml_test_priv(test); + + ret = ml_test_device_configure(test, opt); + if (ret != 0) + return ret; + + /* setup queue pairs */ + qp_conf.nb_desc = t->cmn.dev_info.max_desc; + qp_conf.cb = NULL; + + ret = rte_ml_dev_queue_pair_setup(opt->dev_id, 0, &qp_conf, opt->socket_id); + if (ret != 0) { + ml_err("Failed to setup ml device queue-pair, dev_id = %d, qp_id = %u\n", + opt->dev_id, 0); + goto error; + } + + ret = ml_test_device_start(test, opt); + if (ret != 0) + goto error; + + return 0; + +error: + ml_test_device_close(test, opt); + + return ret; +} + +int +ml_inference_mldev_destroy(struct ml_test *test, struct ml_options *opt) +{ + int ret; + + ret = ml_test_device_stop(test, opt); + if (ret != 0) + goto error; + + ret = ml_test_device_close(test, opt); + if (ret != 0) + return ret; + + return 0; + +error: + ml_test_device_close(test, opt); + + return ret; +} + +/* Callback for IO pool create. This function would compute the fields of ml_request + * structure and prepare the quantized input data. 
+ */ +static void +ml_request_initialize(struct rte_mempool *mp, void *opaque, void *obj, unsigned int obj_idx) +{ + struct test_inference *t = ml_test_priv((struct ml_test *)opaque); + struct ml_request *req = (struct ml_request *)obj; + + RTE_SET_USED(mp); + RTE_SET_USED(obj_idx); + + req->input = RTE_PTR_ADD( + obj, RTE_ALIGN_CEIL(sizeof(struct ml_request), t->cmn.dev_info.min_align_size)); + req->output = RTE_PTR_ADD(req->input, RTE_ALIGN_CEIL(t->model[t->fid].inp_qsize, + t->cmn.dev_info.min_align_size)); + req->niters = 0; + + /* quantize data */ + rte_ml_io_quantize(t->cmn.opt->dev_id, t->model[t->fid].id, + t->model[t->fid].info.batch_size, t->model[t->fid].input, req->input); +} + +int +ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t fid) +{ + struct test_inference *t = ml_test_priv(test); + char mz_name[RTE_MEMZONE_NAMESIZE]; + char mp_name[RTE_MEMPOOL_NAMESIZE]; + const struct rte_memzone *mz; + uint64_t nb_buffers; + uint32_t buff_size; + uint32_t mz_size; + uint32_t fsize; + FILE *fp; + int ret; + + /* get input buffer size */ + ret = rte_ml_io_input_size_get(opt->dev_id, t->model[fid].id, t->model[fid].info.batch_size, + &t->model[fid].inp_qsize, &t->model[fid].inp_dsize); + if (ret != 0) { + ml_err("Failed to get input size, model : %s\n", opt->filelist[fid].model); + return ret; + } + + /* get output buffer size */ + ret = rte_ml_io_output_size_get(opt->dev_id, t->model[fid].id, + t->model[fid].info.batch_size, &t->model[fid].out_qsize, + &t->model[fid].out_dsize); + if (ret != 0) { + ml_err("Failed to get output size, model : %s\n", opt->filelist[fid].model); + return ret; + } + + /* allocate buffer for user data */ + mz_size = t->model[fid].inp_dsize + t->model[fid].out_dsize; + sprintf(mz_name, "ml_user_data_%d", fid); + mz = rte_memzone_reserve(mz_name, mz_size, opt->socket_id, 0); + if (mz == NULL) { + ml_err("Memzone allocation failed for ml_user_data\n"); + ret = -ENOMEM; + goto error; + } + + t->model[fid].input
= mz->addr; + t->model[fid].output = RTE_PTR_ADD(t->model[fid].input, t->model[fid].inp_dsize); + + /* load input file */ + fp = fopen(opt->filelist[fid].input, "r"); + if (fp == NULL) { + ml_err("Failed to open input file : %s\n", opt->filelist[fid].input); + ret = -errno; + goto error; + } + + fseek(fp, 0, SEEK_END); + fsize = ftell(fp); + fseek(fp, 0, SEEK_SET); + if (fsize != t->model[fid].inp_dsize) { + ml_err("Invalid input file, size = %u (expected size = %" PRIu64 ")\n", fsize, + t->model[fid].inp_dsize); + ret = -EINVAL; + fclose(fp); + goto error; + } + + if (fread(t->model[fid].input, 1, t->model[fid].inp_dsize, fp) != t->model[fid].inp_dsize) { + ml_err("Failed to read input file : %s\n", opt->filelist[fid].input); + ret = -errno; + fclose(fp); + goto error; + } + fclose(fp); + + /* create mempool for quantized input and output buffers. ml_request_initialize is + * used as a callback for object creation. + */ + buff_size = RTE_ALIGN_CEIL(sizeof(struct ml_request), t->cmn.dev_info.min_align_size) + + RTE_ALIGN_CEIL(t->model[fid].inp_qsize, t->cmn.dev_info.min_align_size) + + RTE_ALIGN_CEIL(t->model[fid].out_qsize, t->cmn.dev_info.min_align_size); + nb_buffers = RTE_MIN((uint64_t)ML_TEST_MAX_POOL_SIZE, opt->repetitions); + + t->fid = fid; + sprintf(mp_name, "ml_io_pool_%d", fid); + t->model[fid].io_pool = rte_mempool_create(mp_name, nb_buffers, buff_size, 0, 0, NULL, NULL, + ml_request_initialize, test, opt->socket_id, 0); + if (t->model[fid].io_pool == NULL) { + ml_err("Failed to create io pool : %s\n", "ml_io_pool"); + ret = -ENOMEM; + goto error; + } + + return 0; + +error: + if (mz != NULL) + rte_memzone_free(mz); + + if (t->model[fid].io_pool != NULL) { + rte_mempool_free(t->model[fid].io_pool); + t->model[fid].io_pool = NULL; + } + + return ret; +} + +void +ml_inference_iomem_destroy(struct ml_test *test, struct ml_options *opt, int16_t fid) +{ + char mz_name[RTE_MEMZONE_NAMESIZE]; + char mp_name[RTE_MEMPOOL_NAMESIZE]; + const struct rte_memzone 
*mz; + struct rte_mempool *mp; + + RTE_SET_USED(test); + RTE_SET_USED(opt); + + /* release user data memzone */ + sprintf(mz_name, "ml_user_data_%d", fid); + mz = rte_memzone_lookup(mz_name); + if (mz != NULL) + rte_memzone_free(mz); + + /* destroy io pool */ + sprintf(mp_name, "ml_io_pool_%d", fid); + mp = rte_mempool_lookup(mp_name); + if (mp != NULL) + rte_mempool_free(mp); +} + +int +ml_inference_mem_setup(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t = ml_test_priv(test); + + /* create op pool */ + t->op_pool = rte_ml_op_pool_create("ml_test_op_pool", ML_TEST_MAX_POOL_SIZE, 0, 0, + opt->socket_id); + if (t->op_pool == NULL) { + ml_err("Failed to create op pool : %s\n", "ml_op_pool"); + return -ENOMEM; + } + + return 0; +} + +void +ml_inference_mem_destroy(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t = ml_test_priv(test); + + RTE_SET_USED(opt); + + /* release op pool */ + if (t->op_pool != NULL) + rte_mempool_free(t->op_pool); +} + +/* Callback for mempool object iteration. This call dequantizes the output data.
*/ +static void +ml_request_finish(struct rte_mempool *mp, void *opaque, void *obj, unsigned int obj_idx) +{ + struct test_inference *t = ml_test_priv((struct ml_test *)opaque); + struct ml_request *req = (struct ml_request *)obj; + struct ml_model *model = &t->model[req->fid]; + + RTE_SET_USED(mp); + RTE_SET_USED(obj_idx); + + if (req->niters == 0) + return; + + t->nb_used++; + rte_ml_io_dequantize(t->cmn.opt->dev_id, model->id, t->model[req->fid].info.batch_size, + req->output, model->output); +} + +int +ml_inference_result(struct ml_test *test, struct ml_options *opt, int16_t fid) +{ + struct test_inference *t = ml_test_priv(test); + + RTE_SET_USED(opt); + + rte_mempool_obj_iter(t->model[fid].io_pool, ml_request_finish, test); + + if (t->nb_used > 0) + t->cmn.result = ML_TEST_SUCCESS; + else + t->cmn.result = ML_TEST_FAILED; + + return t->cmn.result; +} + +int +ml_inference_launch_cores(struct ml_test *test, struct ml_options *opt, int16_t start_fid, + int16_t end_fid) +{ + struct test_inference *t = ml_test_priv(test); + uint32_t lcore_id; + uint32_t id = 0; + + RTE_LCORE_FOREACH_WORKER(lcore_id) + { + if (id == 2) + break; + + t->args[lcore_id].nb_reqs = opt->repetitions; + t->args[lcore_id].start_fid = start_fid; + t->args[lcore_id].end_fid = end_fid; + + if (id % 2 == 0) + rte_eal_remote_launch(t->enqueue, test, lcore_id); + else + rte_eal_remote_launch(t->dequeue, test, lcore_id); + + id++; + } + + return 0; +} diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h new file mode 100644 index 0000000000..91007954b4 --- /dev/null +++ b/app/test-mldev/test_inference_common.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. 
+ */ + +#ifndef _ML_TEST_INFERENCE_COMMON_ +#define _ML_TEST_INFERENCE_COMMON_ + +#include +#include + +#include +#include + +#include "ml_options.h" +#include "ml_test.h" +#include "test_common.h" +#include "test_model_common.h" + +struct ml_request { + void *input; + void *output; + int16_t fid; + uint64_t niters; +}; + +struct ml_core_args { + uint64_t nb_reqs; + int16_t start_fid; + int16_t end_fid; +}; + +struct test_inference { + /* common data */ + struct test_common cmn; + + /* test specific data */ + struct ml_model model[ML_TEST_MAX_MODELS]; + struct rte_mempool *op_pool; + + uint64_t nb_used; + int16_t fid; + + int (*enqueue)(void *arg); + int (*dequeue)(void *arg); + + struct ml_core_args args[RTE_MAX_LCORE]; +} __rte_cache_aligned; + +bool test_inference_cap_check(struct ml_options *opt); +int test_inference_opt_check(struct ml_options *opt); +void test_inference_opt_dump(struct ml_options *opt); +int test_inference_setup(struct ml_test *test, struct ml_options *opt); +void test_inference_destroy(struct ml_test *test, struct ml_options *opt); + +int ml_inference_mldev_setup(struct ml_test *test, struct ml_options *opt); +int ml_inference_mldev_destroy(struct ml_test *test, struct ml_options *opt); +int ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t fid); +void ml_inference_iomem_destroy(struct ml_test *test, struct ml_options *opt, int16_t fid); +int ml_inference_mem_setup(struct ml_test *test, struct ml_options *opt); +void ml_inference_mem_destroy(struct ml_test *test, struct ml_options *opt); +int ml_inference_result(struct ml_test *test, struct ml_options *opt, int16_t fid); +int ml_inference_launch_cores(struct ml_test *test, struct ml_options *opt, int16_t start_fid, + int16_t end_fid); + +#endif /* _ML_TEST_INFERENCE_COMMON_ */ diff --git a/app/test-mldev/test_inference_ordered.c b/app/test-mldev/test_inference_ordered.c new file mode 100644 index 0000000000..84e6bf9109 --- /dev/null +++ 
b/app/test-mldev/test_inference_ordered.c @@ -0,0 +1,119 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#include + +#include +#include + +#include "ml_common.h" +#include "ml_test.h" +#include "test_inference_common.h" +#include "test_model_common.h" + +static int +test_inference_ordered_driver(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t; + int16_t fid = 0; + int ret = 0; + + t = ml_test_priv(test); + + ret = ml_inference_mldev_setup(test, opt); + if (ret != 0) + return ret; + + ret = ml_inference_mem_setup(test, opt); + if (ret != 0) + return ret; + +next_model: + /* load model */ + ret = ml_model_load(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + + /* start model */ + ret = ml_model_start(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + + ret = ml_inference_iomem_setup(test, opt, fid); + if (ret != 0) + goto error; + + /* launch inferences for one model using available queue pairs */ + ret = ml_inference_launch_cores(test, opt, fid, fid); + if (ret != 0) { + ml_err("failed to launch cores"); + goto error; + } + + rte_eal_mp_wait_lcore(); + + ret = ml_inference_result(test, opt, fid); + if (ret != ML_TEST_SUCCESS) + goto error; + + ml_inference_iomem_destroy(test, opt, fid); + + /* stop model */ + ret = ml_model_stop(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + + /* unload model */ + ret = ml_model_unload(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + + fid++; + if (fid < opt->nb_filelist) + goto next_model; + + ml_inference_mem_destroy(test, opt); + + ret = ml_inference_mldev_destroy(test, opt); + if (ret != 0) + return ret; + + t->cmn.result = ML_TEST_SUCCESS; + + return 0; + +error: + ml_inference_iomem_destroy(test, opt, fid); + ml_inference_mem_destroy(test, opt); + ml_model_stop(test, opt, &t->model[fid], fid); + ml_model_unload(test, opt, &t->model[fid], fid); + + t->cmn.result = ML_TEST_FAILED; + + return ret; +} + 
+static int +test_inference_ordered_result(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t; + + RTE_SET_USED(opt); + + t = ml_test_priv(test); + + return t->cmn.result; +} + +static const struct ml_test_ops inference_ordered = { + .cap_check = test_inference_cap_check, + .opt_check = test_inference_opt_check, + .opt_dump = test_inference_opt_dump, + .test_setup = test_inference_setup, + .test_destroy = test_inference_destroy, + .test_driver = test_inference_ordered_driver, + .test_result = test_inference_ordered_result, +}; + +ML_TEST_REGISTER(inference_ordered); diff --git a/app/test-mldev/test_model_common.h b/app/test-mldev/test_model_common.h index 302e4eb45f..c45ae80853 100644 --- a/app/test-mldev/test_model_common.h +++ b/app/test-mldev/test_model_common.h @@ -23,6 +23,16 @@ struct ml_model { int16_t id; struct rte_ml_model_info info; enum model_state state; + + uint64_t inp_dsize; + uint64_t inp_qsize; + uint64_t out_dsize; + uint64_t out_qsize; + + uint8_t *input; + uint8_t *output; + + struct rte_mempool *io_pool; }; int ml_model_load(struct ml_test *test, struct ml_options *opt, struct ml_model *model, From patchwork Tue Nov 29 06:50:34 2022 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 120225 X-Patchwork-Delegate: thomas@monjalon.net
From: Srikanth Yalavarthi Subject: [PATCH v1 06/12] app/mldev: add test case to interleave inferences Date: Mon, 28 Nov 2022 22:50:34 -0800 Message-ID: <20221129065040.5875-7-syalavarthi@marvell.com> In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com>
Added test case to interleave inference requests from multiple models. Interleaving loads and starts all models, then launches inference requests for the models using the available queue-pairs. Operations sequence when testing with N models and R reps: (load + start) x N -> (enqueue + dequeue) x N x R ... -> (stop + unload) x N. The test can be executed by selecting the "inference_interleave" test. Signed-off-by: Srikanth Yalavarthi --- app/test-mldev/meson.build | 1 + app/test-mldev/ml_options.c | 3 +- app/test-mldev/test_inference_common.c | 12 +-- app/test-mldev/test_inference_common.h | 4 +- app/test-mldev/test_inference_interleave.c | 118 +++++++++++++++++++++ 5 files changed, 129 insertions(+), 9 deletions(-) create mode 100644 app/test-mldev/test_inference_interleave.c diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build index 475d76d126..41d22fb22c 100644 --- a/app/test-mldev/meson.build +++ b/app/test-mldev/meson.build @@ -18,6 +18,7 @@ sources = files( 'test_model_ops.c', 'test_inference_common.c', 'test_inference_ordered.c', + 'test_inference_interleave.c', ) deps += ['mldev'] diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 59a5d16584..9a006ff7c8 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -162,7 +162,8 @@ ml_dump_test_options(const char *testname) printf("\n"); } - if (strcmp(testname, "inference_ordered") == 0) { + if ((strcmp(testname, "inference_ordered") == 0) || + (strcmp(testname, "inference_interleave") == 0)) {
printf("\t\t--filelist : comma separated list of model, input and output\n" "\t\t--repetitions : number of inference repetitions\n"); printf("\n"); diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c index e5e300ffdc..1e0e30637f 100644 --- a/app/test-mldev/test_inference_common.c +++ b/app/test-mldev/test_inference_common.c @@ -115,7 +115,7 @@ ml_dequeue_single(void *arg) total_deq += burst_deq; if (unlikely(op->status == RTE_ML_OP_STATUS_ERROR)) { rte_ml_op_error_get(t->cmn.opt->dev_id, op, &error); - ml_err("error_code = 0x%016lx, error_message = %s\n", error.errcode, + ml_err("error_code = 0x%" PRIx64 ", error_message = %s\n", error.errcode, error.message); } req = (struct ml_request *)op->user_ptr; @@ -334,10 +334,10 @@ ml_request_initialize(struct rte_mempool *mp, void *opaque, void *obj, unsigned RTE_SET_USED(mp); RTE_SET_USED(obj_idx); - req->input = RTE_PTR_ADD( - obj, RTE_ALIGN_CEIL(sizeof(struct ml_request), t->cmn.dev_info.min_align_size)); - req->output = RTE_PTR_ADD(req->input, RTE_ALIGN_CEIL(t->model[t->fid].inp_qsize, - t->cmn.dev_info.min_align_size)); + req->input = (uint8_t *)obj + + RTE_ALIGN_CEIL(sizeof(struct ml_request), t->cmn.dev_info.min_align_size); + req->output = req->input + + RTE_ALIGN_CEIL(t->model[t->fid].inp_qsize, t->cmn.dev_info.min_align_size); req->niters = 0; /* quantize data */ @@ -387,7 +387,7 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t f } t->model[fid].input = mz->addr; - t->model[fid].output = RTE_PTR_ADD(t->model[fid].input, t->model[fid].inp_dsize); + t->model[fid].output = t->model[fid].input + t->model[fid].inp_dsize; /* load input file */ fp = fopen(opt->filelist[fid].input, "r"); diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h index 91007954b4..b058abada4 100644 --- a/app/test-mldev/test_inference_common.h +++ b/app/test-mldev/test_inference_common.h @@ -17,8 +17,8 @@ #include 
"test_model_common.h" struct ml_request { - void *input; - void *output; + uint8_t *input; + uint8_t *output; int16_t fid; uint64_t niters; }; diff --git a/app/test-mldev/test_inference_interleave.c b/app/test-mldev/test_inference_interleave.c new file mode 100644 index 0000000000..74ad0c597f --- /dev/null +++ b/app/test-mldev/test_inference_interleave.c @@ -0,0 +1,118 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 Marvell. + */ + +#include + +#include +#include + +#include "ml_common.h" +#include "ml_test.h" +#include "test_inference_common.h" +#include "test_model_common.h" + +static int +test_inference_interleave_driver(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t; + int16_t fid = 0; + int ret = 0; + + t = ml_test_priv(test); + + ret = ml_inference_mldev_setup(test, opt); + if (ret != 0) + return ret; + + ret = ml_inference_mem_setup(test, opt); + if (ret != 0) + return ret; + + /* load and start all models */ + for (fid = 0; fid < opt->nb_filelist; fid++) { + ret = ml_model_load(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + + ret = ml_model_start(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + + ret = ml_inference_iomem_setup(test, opt, fid); + if (ret != 0) + goto error; + } + + /* launch inference requests */ + ret = ml_inference_launch_cores(test, opt, 0, opt->nb_filelist - 1); + if (ret != 0) { + ml_err("failed to launch cores"); + goto error; + } + + rte_eal_mp_wait_lcore(); + + /* stop and unload all models */ + for (fid = 0; fid < opt->nb_filelist; fid++) { + ret = ml_inference_result(test, opt, fid); + if (ret != ML_TEST_SUCCESS) + goto error; + + ml_inference_iomem_destroy(test, opt, fid); + + ret = ml_model_stop(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + + ret = ml_model_unload(test, opt, &t->model[fid], fid); + if (ret != 0) + goto error; + } + + ml_inference_mem_destroy(test, opt); + + ret = ml_inference_mldev_destroy(test, opt); + if (ret 
!= 0) + return ret; + + t->cmn.result = ML_TEST_SUCCESS; + + return 0; + +error: + ml_inference_mem_destroy(test, opt); + for (fid = 0; fid < opt->nb_filelist; fid++) { + ml_inference_iomem_destroy(test, opt, fid); + ml_model_stop(test, opt, &t->model[fid], fid); + ml_model_unload(test, opt, &t->model[fid], fid); + } + + t->cmn.result = ML_TEST_FAILED; + + return ret; +} + +static int +test_inference_interleave_result(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t; + + RTE_SET_USED(opt); + + t = ml_test_priv(test); + + return t->cmn.result; +} + +static const struct ml_test_ops inference_interleave = { + .cap_check = test_inference_cap_check, + .opt_check = test_inference_opt_check, + .opt_dump = test_inference_opt_dump, + .test_setup = test_inference_setup, + .test_destroy = test_inference_destroy, + .test_driver = test_inference_interleave_driver, + .test_result = test_inference_interleave_result, +}; + +ML_TEST_REGISTER(inference_interleave); From patchwork Tue Nov 29 06:50:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 120226 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EAE01A0093; Tue, 29 Nov 2022 07:51:43 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E976142D47; Tue, 29 Nov 2022 07:51:04 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 291C242D34 for ; Tue, 29 Nov 2022 07:51:01 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AT3NrjK005707; Mon, 28 Nov 2022 22:51:00 -0800 DKIM-Signature: 
From: Srikanth Yalavarthi To: Srikanth Yalavarthi Subject: [PATCH v1 07/12] app/mldev: enable support for burst inferences Date: Mon, 28 Nov 2022 22:50:35 -0800 Message-ID: <20221129065040.5875-8-syalavarthi@marvell.com> In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com> References: <20221129065040.5875-1-syalavarthi@marvell.com>
definitions=2022-11-29_05,2022-11-28_02,2022-06-22_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added 'burst_size' support for inference tests. Burst size controls the number of inference requests handled during the burst enqueue and dequeue operations of the test case. Signed-off-by: Srikanth Yalavarthi Change-Id: I086c5d966b754d21c22d971d19780fa096b1e61c --- app/test-mldev/ml_options.c | 26 ++-- app/test-mldev/ml_options.h | 2 + app/test-mldev/test_inference_common.c | 159 ++++++++++++++++++++++++- app/test-mldev/test_inference_common.h | 4 + 4 files changed, 181 insertions(+), 10 deletions(-) diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 9a006ff7c8..957218af3c 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -30,6 +30,7 @@ ml_options_default(struct ml_options *opt) opt->socket_id = SOCKET_ID_ANY; opt->nb_filelist = 0; opt->repetitions = 1; + opt->burst_size = 1; opt->debug = false; } @@ -151,6 +152,12 @@ ml_parse_repetitions(struct ml_options *opt, const char *arg) return parser_read_uint64(&opt->repetitions, arg); } +static int +ml_parse_burst_size(struct ml_options *opt, const char *arg) +{ + return parser_read_uint16(&opt->burst_size, arg); +} + static void ml_dump_test_options(const char *testname) { @@ -165,7 +172,8 @@ ml_dump_test_options(const char *testname) if ((strcmp(testname, "inference_ordered") == 0) || (strcmp(testname, "inference_interleave") == 0)) { printf("\t\t--filelist : comma separated list of model, input and output\n" - "\t\t--repetitions : number of inference repetitions\n"); + "\t\t--repetitions : number of inference repetitions\n" + "\t\t--burst_size : inference burst size\n"); printf("\n"); } } @@ -185,10 +193,11 @@ print_usage(char *program) ml_test_dump_names(ml_dump_test_options); } -static struct option 
lgopts[] = { - {ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, - {ML_MODELS, 1, 0, 0}, {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, - {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; +static struct option lgopts[] = {{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, + {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0}, + {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, + {ML_BURST_SIZE, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, + {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; static int ml_opts_parse_long(int opt_idx, struct ml_options *opt) @@ -196,9 +205,10 @@ ml_opts_parse_long(int opt_idx, struct ml_options *opt) unsigned int i; struct long_opt_parser parsermap[] = { - {ML_TEST, ml_parse_test_name}, {ML_DEVICE_ID, ml_parse_dev_id}, - {ML_SOCKET_ID, ml_parse_socket_id}, {ML_MODELS, ml_parse_models}, - {ML_FILELIST, ml_parse_filelist}, {ML_REPETITIONS, ml_parse_repetitions}, + {ML_TEST, ml_parse_test_name}, {ML_DEVICE_ID, ml_parse_dev_id}, + {ML_SOCKET_ID, ml_parse_socket_id}, {ML_MODELS, ml_parse_models}, + {ML_FILELIST, ml_parse_filelist}, {ML_REPETITIONS, ml_parse_repetitions}, + {ML_BURST_SIZE, ml_parse_burst_size}, }; for (i = 0; i < RTE_DIM(parsermap); i++) { diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h index ad8aee5964..305b39629a 100644 --- a/app/test-mldev/ml_options.h +++ b/app/test-mldev/ml_options.h @@ -19,6 +19,7 @@ #define ML_MODELS ("models") #define ML_FILELIST ("filelist") #define ML_REPETITIONS ("repetitions") +#define ML_BURST_SIZE ("burst_size") #define ML_DEBUG ("debug") #define ML_HELP ("help") @@ -35,6 +36,7 @@ struct ml_options { struct ml_filelist filelist[ML_TEST_MAX_MODELS]; uint8_t nb_filelist; uint64_t repetitions; + uint16_t burst_size; bool debug; }; diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c index 1e0e30637f..ea8106c4ec 100644 --- a/app/test-mldev/test_inference_common.c +++ b/app/test-mldev/test_inference_common.c @@ -129,6 +129,131 @@ 
ml_dequeue_single(void *arg) return 0; } +/* Enqueue inference requests with burst size greater than 1 */ +static int +ml_enqueue_burst(void *arg) +{ + struct test_inference *t = ml_test_priv((struct ml_test *)arg); + struct ml_core_args *args; + uint16_t ops_count; + uint64_t model_enq; + uint16_t burst_enq; + uint32_t lcore_id; + uint16_t pending; + uint16_t idx; + int16_t fid; + uint16_t i; + int ret; + + lcore_id = rte_lcore_id(); + args = &t->args[lcore_id]; + model_enq = 0; + + if (args->nb_reqs == 0) + return 0; + +next_rep: + fid = args->start_fid; + +next_model: + ops_count = RTE_MIN(t->cmn.opt->burst_size, args->nb_reqs - model_enq); + ret = rte_mempool_get_bulk(t->op_pool, (void **)args->enq_ops, ops_count); + if (ret != 0) + goto next_model; + +retry: + ret = rte_mempool_get_bulk(t->model[fid].io_pool, (void **)args->reqs, ops_count); + if (ret != 0) + goto retry; + + for (i = 0; i < ops_count; i++) { + args->enq_ops[i]->model_id = t->model[fid].id; + args->enq_ops[i]->nb_batches = t->model[fid].info.batch_size; + args->enq_ops[i]->mempool = t->op_pool; + + args->enq_ops[i]->input.addr = args->reqs[i]->input; + args->enq_ops[i]->input.length = t->model[fid].inp_qsize; + args->enq_ops[i]->input.next = NULL; + + args->enq_ops[i]->output.addr = args->reqs[i]->output; + args->enq_ops[i]->output.length = t->model[fid].out_qsize; + args->enq_ops[i]->output.next = NULL; + + args->enq_ops[i]->user_ptr = args->reqs[i]; + args->reqs[i]->niters++; + args->reqs[i]->fid = fid; + } + + idx = 0; + pending = ops_count; + +enqueue_reqs: + burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, 0, &args->enq_ops[idx], pending); + pending = pending - burst_enq; + + if (pending > 0) { + idx = idx + burst_enq; + goto enqueue_reqs; + } + + fid++; + if (fid <= args->end_fid) + goto next_model; + + model_enq = model_enq + ops_count; + if (model_enq < args->nb_reqs) + goto next_rep; + + return 0; +} + +/* Dequeue inference requests with burst size greater than 1 */ +static int 
+ml_dequeue_burst(void *arg) +{ + struct test_inference *t = ml_test_priv((struct ml_test *)arg); + struct rte_ml_op_error error; + struct ml_core_args *args; + struct ml_request *req; + uint64_t total_deq = 0; + uint16_t burst_deq = 0; + uint8_t nb_filelist; + uint32_t lcore_id; + uint32_t i; + + lcore_id = rte_lcore_id(); + args = &t->args[lcore_id]; + nb_filelist = args->end_fid - args->start_fid + 1; + + if (args->nb_reqs == 0) + return 0; + +dequeue_burst: + burst_deq = + rte_ml_dequeue_burst(t->cmn.opt->dev_id, 0, args->deq_ops, t->cmn.opt->burst_size); + + if (likely(burst_deq > 0)) { + total_deq += burst_deq; + + for (i = 0; i < burst_deq; i++) { + if (unlikely(args->deq_ops[i]->status == RTE_ML_OP_STATUS_ERROR)) { + rte_ml_op_error_get(t->cmn.opt->dev_id, args->deq_ops[i], &error); + ml_err("error_code = 0x%" PRIx64 ", error_message = %s\n", + error.errcode, error.message); + } + req = (struct ml_request *)args->deq_ops[i]->user_ptr; + if (req != NULL) + rte_mempool_put(t->model[req->fid].io_pool, req); + } + rte_mempool_put_bulk(t->op_pool, (void *)args->deq_ops, burst_deq); + } + + if (total_deq < args->nb_reqs * nb_filelist) + goto dequeue_burst; + + return 0; +} + bool test_inference_cap_check(struct ml_options *opt) { @@ -178,6 +303,17 @@ test_inference_opt_check(struct ml_options *opt) return -EINVAL; } + if (opt->burst_size == 0) { + ml_err("Invalid option, burst_size = %u\n", opt->burst_size); + return -EINVAL; + } + + if (opt->burst_size > ML_TEST_MAX_POOL_SIZE) { + ml_err("Invalid option, burst_size = %u (> max supported = %d)\n", opt->burst_size, + ML_TEST_MAX_POOL_SIZE); + return -EINVAL; + } + /* check number of available lcores. 
*/ if (rte_lcore_count() < 3) { ml_err("Insufficient lcores = %u\n", rte_lcore_count()); @@ -198,6 +334,7 @@ test_inference_opt_dump(struct ml_options *opt) /* dump test opts */ ml_dump("repetitions", "%" PRIu64, opt->repetitions); + ml_dump("burst_size", "%u", opt->burst_size); ml_dump_begin("filelist"); for (i = 0; i < opt->nb_filelist; i++) { @@ -213,6 +350,7 @@ test_inference_setup(struct ml_test *test, struct ml_options *opt) { struct test_inference *t; void *test_inference; + uint32_t lcore_id; int ret = 0; uint32_t i; @@ -237,13 +375,30 @@ test_inference_setup(struct ml_test *test, struct ml_options *opt) goto error; } - t->enqueue = ml_enqueue_single; - t->dequeue = ml_dequeue_single; + if (opt->burst_size == 1) { + t->enqueue = ml_enqueue_single; + t->dequeue = ml_dequeue_single; + } else { + t->enqueue = ml_enqueue_burst; + t->dequeue = ml_dequeue_burst; + } /* set model initial state */ for (i = 0; i < opt->nb_filelist; i++) t->model[i].state = MODEL_INITIAL; + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + t->args[lcore_id].enq_ops = rte_zmalloc_socket( + "ml_test_enq_ops", opt->burst_size * sizeof(struct rte_ml_op *), + RTE_CACHE_LINE_SIZE, opt->socket_id); + t->args[lcore_id].deq_ops = rte_zmalloc_socket( + "ml_test_deq_ops", opt->burst_size * sizeof(struct rte_ml_op *), + RTE_CACHE_LINE_SIZE, opt->socket_id); + t->args[lcore_id].reqs = rte_zmalloc_socket( + "ml_test_requests", opt->burst_size * sizeof(struct ml_request *), + RTE_CACHE_LINE_SIZE, opt->socket_id); + } + return 0; error: diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h index b058abada4..75d588308b 100644 --- a/app/test-mldev/test_inference_common.h +++ b/app/test-mldev/test_inference_common.h @@ -27,6 +27,10 @@ struct ml_core_args { uint64_t nb_reqs; int16_t start_fid; int16_t end_fid; + + struct rte_ml_op **enq_ops; + struct rte_ml_op **deq_ops; + struct ml_request **reqs; }; struct test_inference { From patchwork Tue Nov 29 
06:50:36 2022 X-Patchwork-Id: 120227 X-Patchwork-Delegate: thomas@monjalon.net
From: Srikanth Yalavarthi To: Srikanth Yalavarthi Subject: [PATCH v1 08/12] app/mldev: enable support for queue pairs and size Date: Mon, 28 Nov 2022 22:50:36 -0800 Message-ID: <20221129065040.5875-9-syalavarthi@marvell.com> In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com> References: <20221129065040.5875-1-syalavarthi@marvell.com>
Added support to create multiple queue-pairs per device for enqueuing and dequeuing inference requests. The number of queue-pairs to create can be specified through the "--queue_pairs" option, and the number of descriptors per queue-pair can be controlled through the "--queue_size" option. Inference requests for a model are distributed across all available queue-pairs.
Signed-off-by: Srikanth Yalavarthi Change-Id: I28549fc0c56e6583e466a2ded1c00a2257396aaf --- app/test-mldev/ml_options.c | 40 ++++++++++--- app/test-mldev/ml_options.h | 4 ++ app/test-mldev/test_common.c | 2 +- app/test-mldev/test_inference_common.c | 79 +++++++++++++++++++++----- app/test-mldev/test_inference_common.h | 1 + 5 files changed, 102 insertions(+), 24 deletions(-) diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 957218af3c..27b628c8b3 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -31,6 +31,8 @@ ml_options_default(struct ml_options *opt) opt->nb_filelist = 0; opt->repetitions = 1; opt->burst_size = 1; + opt->queue_pairs = 1; + opt->queue_size = 1; opt->debug = false; } @@ -158,11 +160,30 @@ ml_parse_burst_size(struct ml_options *opt, const char *arg) return parser_read_uint16(&opt->burst_size, arg); } +static int +ml_parse_queue_pairs(struct ml_options *opt, const char *arg) +{ + int ret; + + ret = parser_read_uint16(&opt->queue_pairs, arg); + + return ret; +} + +static int +ml_parse_queue_size(struct ml_options *opt, const char *arg) +{ + return parser_read_uint16(&opt->queue_size, arg); +} + static void ml_dump_test_options(const char *testname) { - if (strcmp(testname, "device_ops") == 0) + if (strcmp(testname, "device_ops") == 0) { + printf("\t\t--queue_pairs : number of queue pairs to create\n" + "\t\t--queue_size : size of queue-pair\n"); printf("\n"); + } if (strcmp(testname, "model_ops") == 0) { printf("\t\t--models : comma separated list of models\n"); @@ -173,7 +194,9 @@ ml_dump_test_options(const char *testname) (strcmp(testname, "inference_interleave") == 0)) { printf("\t\t--filelist : comma separated list of model, input and output\n" "\t\t--repetitions : number of inference repetitions\n" - "\t\t--burst_size : inference burst size\n"); + "\t\t--burst_size : inference burst size\n" + "\t\t--queue_pairs : number of queue pairs to create\n" + "\t\t--queue_size : size of queue-pair\n");
printf("\n"); } } @@ -193,11 +216,11 @@ print_usage(char *program) ml_test_dump_names(ml_dump_test_options); } -static struct option lgopts[] = {{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, - {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0}, - {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, - {ML_BURST_SIZE, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, - {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; +static struct option lgopts[] = { + {ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, + {ML_MODELS, 1, 0, 0}, {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, + {ML_BURST_SIZE, 1, 0, 0}, {ML_QUEUE_PAIRS, 1, 0, 0}, {ML_QUEUE_SIZE, 1, 0, 0}, + {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; static int ml_opts_parse_long(int opt_idx, struct ml_options *opt) @@ -208,7 +231,8 @@ ml_opts_parse_long(int opt_idx, struct ml_options *opt) {ML_TEST, ml_parse_test_name}, {ML_DEVICE_ID, ml_parse_dev_id}, {ML_SOCKET_ID, ml_parse_socket_id}, {ML_MODELS, ml_parse_models}, {ML_FILELIST, ml_parse_filelist}, {ML_REPETITIONS, ml_parse_repetitions}, - {ML_BURST_SIZE, ml_parse_burst_size}, + {ML_BURST_SIZE, ml_parse_burst_size}, {ML_QUEUE_PAIRS, ml_parse_queue_pairs}, + {ML_QUEUE_SIZE, ml_parse_queue_size}, }; for (i = 0; i < RTE_DIM(parsermap); i++) { diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h index 305b39629a..6bfef1b979 100644 --- a/app/test-mldev/ml_options.h +++ b/app/test-mldev/ml_options.h @@ -20,6 +20,8 @@ #define ML_FILELIST ("filelist") #define ML_REPETITIONS ("repetitions") #define ML_BURST_SIZE ("burst_size") +#define ML_QUEUE_PAIRS ("queue_pairs") +#define ML_QUEUE_SIZE ("queue_size") #define ML_DEBUG ("debug") #define ML_HELP ("help") @@ -37,6 +39,8 @@ struct ml_options { uint8_t nb_filelist; uint64_t repetitions; uint16_t burst_size; + uint16_t queue_pairs; + uint16_t queue_size; bool debug; }; diff --git a/app/test-mldev/test_common.c b/app/test-mldev/test_common.c index b6b32904e4..22e6acb3b6 100644 --- a/app/test-mldev/test_common.c 
+++ b/app/test-mldev/test_common.c @@ -78,7 +78,7 @@ ml_test_device_configure(struct ml_test *test, struct ml_options *opt) /* configure device */ dev_config.socket_id = opt->socket_id; dev_config.nb_models = t->dev_info.max_models; - dev_config.nb_queue_pairs = t->dev_info.max_queue_pairs; + dev_config.nb_queue_pairs = opt->queue_pairs; ret = rte_ml_dev_configure(opt->dev_id, &dev_config); if (ret != 0) { ml_err("Failed to configure ml device, dev_id = %d\n", opt->dev_id); diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c index ea8106c4ec..a1111d9119 100644 --- a/app/test-mldev/test_inference_common.c +++ b/app/test-mldev/test_inference_common.c @@ -72,7 +72,7 @@ ml_enqueue_single(void *arg) req->fid = fid; enqueue_req: - burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, 0, &op, 1); + burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, args->qp_id, &op, 1); if (burst_enq == 0) goto enqueue_req; @@ -109,7 +109,7 @@ ml_dequeue_single(void *arg) return 0; dequeue_req: - burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, 0, &op, 1); + burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, args->qp_id, &op, 1); if (likely(burst_deq == 1)) { total_deq += burst_deq; @@ -188,7 +188,8 @@ ml_enqueue_burst(void *arg) pending = ops_count; enqueue_reqs: - burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, 0, &args->enq_ops[idx], pending); + burst_enq = + rte_ml_enqueue_burst(t->cmn.opt->dev_id, args->qp_id, &args->enq_ops[idx], pending); pending = pending - burst_enq; if (pending > 0) { @@ -229,8 +230,8 @@ ml_dequeue_burst(void *arg) return 0; dequeue_burst: - burst_deq = - rte_ml_dequeue_burst(t->cmn.opt->dev_id, 0, args->deq_ops, t->cmn.opt->burst_size); + burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, args->qp_id, args->deq_ops, + t->cmn.opt->burst_size); if (likely(burst_deq > 0)) { total_deq += burst_deq; @@ -263,6 +264,19 @@ test_inference_cap_check(struct ml_options *opt) return false; 
rte_ml_dev_info_get(opt->dev_id, &dev_info); + + if (opt->queue_pairs > dev_info.max_queue_pairs) { + ml_err("Insufficient capabilities: queue_pairs = %u, max_queue_pairs = %u", + opt->queue_pairs, dev_info.max_queue_pairs); + return false; + } + + if (opt->queue_size > dev_info.max_desc) { + ml_err("Insufficient capabilities: queue_size = %u, max_desc = %u", opt->queue_size, + dev_info.max_desc); + return false; + } + if (opt->nb_filelist > dev_info.max_models) { ml_err("Insufficient capabilities: Filelist count exceeded device limit, count = %u (max limit = %u)", opt->nb_filelist, dev_info.max_models); @@ -314,10 +328,21 @@ test_inference_opt_check(struct ml_options *opt) return -EINVAL; } + if (opt->queue_pairs == 0) { + ml_err("Invalid option, queue_pairs = %u\n", opt->queue_pairs); + return -EINVAL; + } + + if (opt->queue_size == 0) { + ml_err("Invalid option, queue_size = %u\n", opt->queue_size); + return -EINVAL; + } + /* check number of available lcores. */ - if (rte_lcore_count() < 3) { + if (rte_lcore_count() < (uint32_t)(opt->queue_pairs * 2 + 1)) { ml_err("Insufficient lcores = %u\n", rte_lcore_count()); - ml_err("Minimum lcores required to create %u queue-pairs = %u\n", 1, 3); + ml_err("Minimum lcores required to create %u queue-pairs = %u\n", opt->queue_pairs, + (opt->queue_pairs * 2 + 1)); return -EINVAL; } @@ -335,6 +360,8 @@ test_inference_opt_dump(struct ml_options *opt) /* dump test opts */ ml_dump("repetitions", "%" PRIu64, opt->repetitions); ml_dump("burst_size", "%u", opt->burst_size); + ml_dump("queue_pairs", "%u", opt->queue_pairs); + ml_dump("queue_size", "%u", opt->queue_size); ml_dump_begin("filelist"); for (i = 0; i < opt->nb_filelist; i++) { @@ -425,23 +452,31 @@ ml_inference_mldev_setup(struct ml_test *test, struct ml_options *opt) { struct rte_ml_dev_qp_conf qp_conf; struct test_inference *t; + uint16_t qp_id; int ret; t = ml_test_priv(test); + RTE_SET_USED(t); + ret = ml_test_device_configure(test, opt); if (ret != 0) return ret; /* 
setup queue pairs */ - qp_conf.nb_desc = t->cmn.dev_info.max_desc; + qp_conf.nb_desc = opt->queue_size; qp_conf.cb = NULL; - ret = rte_ml_dev_queue_pair_setup(opt->dev_id, 0, &qp_conf, opt->socket_id); - if (ret != 0) { - ml_err("Failed to setup ml device queue-pair, dev_id = %d, qp_id = %u\n", - opt->dev_id, 0); - goto error; + for (qp_id = 0; qp_id < opt->queue_pairs; qp_id++) { + qp_conf.nb_desc = opt->queue_size; + qp_conf.cb = NULL; + + ret = rte_ml_dev_queue_pair_setup(opt->dev_id, qp_id, &qp_conf, opt->socket_id); + if (ret != 0) { + ml_err("Failed to setup ml device queue-pair, dev_id = %d, qp_id = %u\n", + opt->dev_id, qp_id); + return ret; + } } ret = ml_test_device_start(test, opt); @@ -697,14 +732,28 @@ ml_inference_launch_cores(struct ml_test *test, struct ml_options *opt, int16_t { struct test_inference *t = ml_test_priv(test); uint32_t lcore_id; + uint32_t nb_reqs; uint32_t id = 0; + uint32_t qp_id; + + nb_reqs = opt->repetitions / opt->queue_pairs; RTE_LCORE_FOREACH_WORKER(lcore_id) { - if (id == 2) + if (id >= opt->queue_pairs * 2) break; - t->args[lcore_id].nb_reqs = opt->repetitions; + qp_id = id / 2; + t->args[lcore_id].qp_id = qp_id; + t->args[lcore_id].nb_reqs = nb_reqs; + if (qp_id == 0) + t->args[lcore_id].nb_reqs += opt->repetitions - nb_reqs * opt->queue_pairs; + + if (t->args[lcore_id].nb_reqs == 0) { + id++; + break; + } + t->args[lcore_id].start_fid = start_fid; t->args[lcore_id].end_fid = end_fid; diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h index 75d588308b..1bac2dcfa0 100644 --- a/app/test-mldev/test_inference_common.h +++ b/app/test-mldev/test_inference_common.h @@ -27,6 +27,7 @@ struct ml_core_args { uint64_t nb_reqs; int16_t start_fid; int16_t end_fid; + uint32_t qp_id; struct rte_ml_op **enq_ops; struct rte_ml_op **deq_ops; From patchwork Tue Nov 29 06:50:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Srikanth Yalavarthi X-Patchwork-Id: 120228 X-Patchwork-Delegate: thomas@monjalon.net
22:51:00 -0800 Received: from ml-host-33.caveonetworks.com (unknown [10.110.143.233]) by maili.marvell.com (Postfix) with ESMTP id 4D6333F707A; Mon, 28 Nov 2022 22:51:00 -0800 (PST) From: Srikanth Yalavarthi To: Srikanth Yalavarthi CC: , , Subject: [PATCH v1 09/12] app/mldev: enable support for inference batches Date: Mon, 28 Nov 2022 22:50:37 -0800 Message-ID: <20221129065040.5875-10-syalavarthi@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com> References: <20221129065040.5875-1-syalavarthi@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: ee4boTBBLr4I_b24VULnRiti_4nrayOM X-Proofpoint-ORIG-GUID: ee4boTBBLr4I_b24VULnRiti_4nrayOM X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-29_05,2022-11-28_02,2022-06-22_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Enabled support to execute multiple batches of inferences per each enqueue request. Input and reference for the test should be appropriately provided for multi-batch run. Number of batches can be specified through "--batches" option. 
Signed-off-by: Srikanth Yalavarthi Change-Id: I03860f39e43762cd0d0e4478209e153127631735 --- app/test-mldev/ml_options.c | 15 ++++++++++++--- app/test-mldev/ml_options.h | 2 ++ app/test-mldev/test_inference_common.c | 22 +++++++++++++--------- app/test-mldev/test_model_common.c | 6 ++++++ app/test-mldev/test_model_common.h | 1 + 5 files changed, 34 insertions(+), 12 deletions(-) diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 27b628c8b3..a27a919a0d 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -33,6 +33,7 @@ ml_options_default(struct ml_options *opt) opt->burst_size = 1; opt->queue_pairs = 1; opt->queue_size = 1; + opt->batches = 0; opt->debug = false; } @@ -176,6 +177,12 @@ ml_parse_queue_size(struct ml_options *opt, const char *arg) return parser_read_uint16(&opt->queue_size, arg); } +static int +ml_parse_batches(struct ml_options *opt, const char *arg) +{ + return parser_read_uint16(&opt->batches, arg); +} + static void ml_dump_test_options(const char *testname) { @@ -196,7 +203,8 @@ ml_dump_test_options(const char *testname) "\t\t--repetitions : number of inference repetitions\n" "\t\t--burst_size : inference burst size\n" "\t\t--queue_pairs : number of queue pairs to create\n" - "\t\t--queue_size : size fo queue-pair\n"); + "\t\t--queue_size : size fo queue-pair\n" + "\t\t--batches : number of batches of input\n"); printf("\n"); } } @@ -220,7 +228,8 @@ static struct option lgopts[] = { {ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0}, {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, {ML_BURST_SIZE, 1, 0, 0}, {ML_QUEUE_PAIRS, 1, 0, 0}, {ML_QUEUE_SIZE, 1, 0, 0}, - {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; + {ML_BATCHES, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, + {NULL, 0, 0, 0}}; static int ml_opts_parse_long(int opt_idx, struct ml_options *opt) @@ -232,7 +241,7 @@ ml_opts_parse_long(int opt_idx, struct ml_options *opt) 
{ML_SOCKET_ID, ml_parse_socket_id}, {ML_MODELS, ml_parse_models}, {ML_FILELIST, ml_parse_filelist}, {ML_REPETITIONS, ml_parse_repetitions}, {ML_BURST_SIZE, ml_parse_burst_size}, {ML_QUEUE_PAIRS, ml_parse_queue_pairs}, - {ML_QUEUE_SIZE, ml_parse_queue_size}, + {ML_QUEUE_SIZE, ml_parse_queue_size}, {ML_BATCHES, ml_parse_batches}, }; for (i = 0; i < RTE_DIM(parsermap); i++) { diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h index 6bfef1b979..d23e842895 100644 --- a/app/test-mldev/ml_options.h +++ b/app/test-mldev/ml_options.h @@ -22,6 +22,7 @@ #define ML_BURST_SIZE ("burst_size") #define ML_QUEUE_PAIRS ("queue_pairs") #define ML_QUEUE_SIZE ("queue_size") +#define ML_BATCHES ("batches") #define ML_DEBUG ("debug") #define ML_HELP ("help") @@ -41,6 +42,7 @@ struct ml_options { uint16_t burst_size; uint16_t queue_pairs; uint16_t queue_size; + uint16_t batches; bool debug; }; diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c index a1111d9119..fc7e162514 100644 --- a/app/test-mldev/test_inference_common.c +++ b/app/test-mldev/test_inference_common.c @@ -56,7 +56,7 @@ ml_enqueue_single(void *arg) goto retry; op->model_id = t->model[fid].id; - op->nb_batches = t->model[fid].info.batch_size; + op->nb_batches = t->model[fid].nb_batches; op->mempool = t->op_pool; op->input.addr = req->input; @@ -168,7 +168,7 @@ ml_enqueue_burst(void *arg) for (i = 0; i < ops_count; i++) { args->enq_ops[i]->model_id = t->model[fid].id; - args->enq_ops[i]->nb_batches = t->model[fid].info.batch_size; + args->enq_ops[i]->nb_batches = t->model[fid].nb_batches; args->enq_ops[i]->mempool = t->op_pool; args->enq_ops[i]->input.addr = args->reqs[i]->input; @@ -363,6 +363,11 @@ test_inference_opt_dump(struct ml_options *opt) ml_dump("queue_pairs", "%u", opt->queue_pairs); ml_dump("queue_size", "%u", opt->queue_size); + if (opt->batches == 0) + ml_dump("batches", "%u (default)", opt->batches); + else + ml_dump("batches", "%u", opt->batches); + 
ml_dump_begin("filelist"); for (i = 0; i < opt->nb_filelist; i++) { ml_dump_list("model", i, opt->filelist[i].model); @@ -531,8 +536,8 @@ ml_request_initialize(struct rte_mempool *mp, void *opaque, void *obj, unsigned req->niters = 0; /* quantize data */ - rte_ml_io_quantize(t->cmn.opt->dev_id, t->model[t->fid].id, - t->model[t->fid].info.batch_size, t->model[t->fid].input, req->input); + rte_ml_io_quantize(t->cmn.opt->dev_id, t->model[t->fid].id, t->model[t->fid].nb_batches, + t->model[t->fid].input, req->input); } int @@ -550,7 +555,7 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t f int ret; /* get input buffer size */ - ret = rte_ml_io_input_size_get(opt->dev_id, t->model[fid].id, t->model[fid].info.batch_size, + ret = rte_ml_io_input_size_get(opt->dev_id, t->model[fid].id, t->model[fid].nb_batches, &t->model[fid].inp_qsize, &t->model[fid].inp_dsize); if (ret != 0) { ml_err("Failed to get input size, model : %s\n", opt->filelist[fid].model); @@ -558,9 +563,8 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t f } /* get output buffer size */ - ret = rte_ml_io_output_size_get(opt->dev_id, t->model[fid].id, - t->model[fid].info.batch_size, &t->model[fid].out_qsize, - &t->model[fid].out_dsize); + ret = rte_ml_io_output_size_get(opt->dev_id, t->model[fid].id, t->model[fid].nb_batches, + &t->model[fid].out_qsize, &t->model[fid].out_dsize); if (ret != 0) { ml_err("Failed to get input size, model : %s\n", opt->filelist[fid].model); return ret; @@ -705,7 +709,7 @@ ml_request_finish(struct rte_mempool *mp, void *opaque, void *obj, unsigned int return; t->nb_used++; - rte_ml_io_dequantize(t->cmn.opt->dev_id, model->id, t->model[req->fid].info.batch_size, + rte_ml_io_dequantize(t->cmn.opt->dev_id, model->id, t->model[req->fid].nb_batches, req->output, model->output); } diff --git a/app/test-mldev/test_model_common.c b/app/test-mldev/test_model_common.c index 5368be17fe..51260c0789 100644 --- 
a/app/test-mldev/test_model_common.c +++ b/app/test-mldev/test_model_common.c @@ -75,6 +75,12 @@ ml_model_load(struct ml_test *test, struct ml_options *opt, struct ml_model *mod return ret; } + /* Update number of batches */ + if (opt->batches == 0) + model->nb_batches = model->info.batch_size; + else + model->nb_batches = opt->batches; + model->state = MODEL_LOADED; return 0; diff --git a/app/test-mldev/test_model_common.h b/app/test-mldev/test_model_common.h index c45ae80853..dfbf568f0b 100644 --- a/app/test-mldev/test_model_common.h +++ b/app/test-mldev/test_model_common.h @@ -33,6 +33,7 @@ struct ml_model { uint8_t *output; struct rte_mempool *io_pool; + uint32_t nb_batches; }; int ml_model_load(struct ml_test *test, struct ml_options *opt, struct ml_model *model,

From patchwork Tue Nov 29 06:50:38 2022
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 120229
From: Srikanth Yalavarthi
Subject: [PATCH v1 10/12] app/mldev: enable support for inference validation
Date: Mon, 28 Nov 2022 22:50:38 -0800
Message-ID: <20221129065040.5875-11-syalavarthi@marvell.com>
In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com>

Enabled support to validate inference output against reference output provided by the user. Validation succeeds only when the inference outputs are within the tolerance specified through the command-line option "--tolerance".

Signed-off-by: Srikanth Yalavarthi
Change-Id: I3776cf37a434079862c08c3c6aa2a0af771bcdae
--- app/test-mldev/meson.build | 2 +- app/test-mldev/ml_options.c | 36 +++- app/test-mldev/ml_options.h | 3 + app/test-mldev/test_inference_common.c | 218 ++++++++++++++++++++++++- app/test-mldev/test_inference_common.h | 1 + app/test-mldev/test_model_common.h | 1 + 6 files changed, 250 insertions(+), 11 deletions(-) diff --git a/app/test-mldev/meson.build b/app/test-mldev/meson.build index 41d22fb22c..15db534dc2 100644 --- a/app/test-mldev/meson.build +++ b/app/test-mldev/meson.build @@ -21,4 +21,4 @@ sources = files( 'test_inference_interleave.c', ) -deps += ['mldev'] +deps += ['mldev', 'hash'] diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index a27a919a0d..4087ab52db 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -34,6 +35,7 @@ ml_options_default(struct ml_options *opt) opt->queue_pairs = 1; opt->queue_size = 1; opt->batches = 0; + opt->tolerance = 0.0; opt->debug = false; } @@ -139,6 +141,13 @@ ml_parse_filelist(struct ml_options *opt, const char *arg) } strlcpy(opt->filelist[opt->nb_filelist].output, token, PATH_MAX); + /* reference - optional */ + token = strtok(NULL, delim); + if (token != NULL) + strlcpy(opt->filelist[opt->nb_filelist].reference, token, PATH_MAX); + else + memset(opt->filelist[opt->nb_filelist].reference, 0, PATH_MAX); + opt->nb_filelist++; if (opt->nb_filelist == 0) { @@ -183,6 +192,14 @@ ml_parse_batches(struct ml_options *opt, const char *arg) return parser_read_uint16(&opt->batches, arg); } +static int +ml_parse_tolerance(struct ml_options
*opt, const char *arg) +{ + opt->tolerance = fabs(atof(arg)); + + return 0; +} + static void ml_dump_test_options(const char *testname) { @@ -199,12 +216,13 @@ ml_dump_test_options(const char *testname) if ((strcmp(testname, "inference_ordered") == 0) || (strcmp(testname, "inference_interleave") == 0)) { - printf("\t\t--filelist : comma separated list of model, input and output\n" + printf("\t\t--filelist : comma separated list of model, input, output and reference\n" "\t\t--repetitions : number of inference repetitions\n" "\t\t--burst_size : inference burst size\n" "\t\t--queue_pairs : number of queue pairs to create\n" "\t\t--queue_size : size fo queue-pair\n" - "\t\t--batches : number of batches of input\n"); + "\t\t--batches : number of batches of input\n" + "\t\t--tolerance : maximum tolerance (%%) for output validation\n"); printf("\n"); } } @@ -224,12 +242,13 @@ print_usage(char *program) ml_test_dump_names(ml_dump_test_options); } -static struct option lgopts[] = { - {ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, - {ML_MODELS, 1, 0, 0}, {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, - {ML_BURST_SIZE, 1, 0, 0}, {ML_QUEUE_PAIRS, 1, 0, 0}, {ML_QUEUE_SIZE, 1, 0, 0}, - {ML_BATCHES, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, - {NULL, 0, 0, 0}}; +static struct option lgopts[] = {{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, + {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0}, + {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, + {ML_BURST_SIZE, 1, 0, 0}, {ML_QUEUE_PAIRS, 1, 0, 0}, + {ML_QUEUE_SIZE, 1, 0, 0}, {ML_BATCHES, 1, 0, 0}, + {ML_TOLERANCE, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, + {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; static int ml_opts_parse_long(int opt_idx, struct ml_options *opt) @@ -242,6 +261,7 @@ ml_opts_parse_long(int opt_idx, struct ml_options *opt) {ML_FILELIST, ml_parse_filelist}, {ML_REPETITIONS, ml_parse_repetitions}, {ML_BURST_SIZE, ml_parse_burst_size}, {ML_QUEUE_PAIRS, ml_parse_queue_pairs}, {ML_QUEUE_SIZE, 
ml_parse_queue_size}, {ML_BATCHES, ml_parse_batches}, + {ML_TOLERANCE, ml_parse_tolerance}, }; for (i = 0; i < RTE_DIM(parsermap); i++) { diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h index d23e842895..79ac54de98 100644 --- a/app/test-mldev/ml_options.h +++ b/app/test-mldev/ml_options.h @@ -23,6 +23,7 @@ #define ML_QUEUE_PAIRS ("queue_pairs") #define ML_QUEUE_SIZE ("queue_size") #define ML_BATCHES ("batches") +#define ML_TOLERANCE ("tolerance") #define ML_DEBUG ("debug") #define ML_HELP ("help") @@ -30,6 +31,7 @@ struct ml_filelist { char model[PATH_MAX]; char input[PATH_MAX]; char output[PATH_MAX]; + char reference[PATH_MAX]; }; struct ml_options { @@ -43,6 +45,7 @@ struct ml_options { uint16_t queue_pairs; uint16_t queue_size; uint16_t batches; + float tolerance; bool debug; }; diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c index fc7e162514..cdd1667c71 100644 --- a/app/test-mldev/test_inference_common.c +++ b/app/test-mldev/test_inference_common.c @@ -3,12 +3,15 @@ */ #include +#include #include #include #include +#include #include #include +#include #include #include #include @@ -21,6 +24,27 @@ #include "test_common.h" #include "test_inference_common.h" +#define ML_TEST_READ_TYPE(buffer, type) (*((type *)buffer)) + +#define ML_TEST_CHECK_OUTPUT(output, reference, tolerance) \ + (((float)output - (float)reference) <= (((float)reference * tolerance) / 100.0)) + +#define ML_OPEN_WRITE_GET_ERR(name, buffer, size, err) \ + do { \ + FILE *fp = fopen(name, "w+"); \ + if (fp == NULL) { \ + ml_err("Unable to create file: %s, error: %s", name, strerror(errno)); \ + err = true; \ + } else { \ + if (fwrite(buffer, 1, size, fp) != size) { \ + ml_err("Error writing output, file: %s, error: %s", name, \ + strerror(errno)); \ + err = true; \ + } \ + fclose(fp); \ + } \ + } while (0) + /* Enqueue inference requests with burst size equal to 1 */ static int ml_enqueue_single(void *arg) @@ -362,6 +386,7 @@ 
test_inference_opt_dump(struct ml_options *opt) ml_dump("burst_size", "%u", opt->burst_size); ml_dump("queue_pairs", "%u", opt->queue_pairs); ml_dump("queue_size", "%u", opt->queue_size); + ml_dump("tolerance", "%-7.3f", opt->tolerance); if (opt->batches == 0) ml_dump("batches", "%u (default)", opt->batches); @@ -373,6 +398,8 @@ test_inference_opt_dump(struct ml_options *opt) ml_dump_list("model", i, opt->filelist[i].model); ml_dump_list("input", i, opt->filelist[i].input); ml_dump_list("output", i, opt->filelist[i].output); + if (strcmp(opt->filelist[i].reference, "\0") != 0) + ml_dump_list("reference", i, opt->filelist[i].reference); } ml_dump_end; } @@ -397,6 +424,7 @@ test_inference_setup(struct ml_test *test, struct ml_options *opt) t = ml_test_priv(test); t->nb_used = 0; + t->nb_valid = 0; t->cmn.result = ML_TEST_FAILED; t->cmn.opt = opt; @@ -572,6 +600,9 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t f /* allocate buffer for user data */ mz_size = t->model[fid].inp_dsize + t->model[fid].out_dsize; + if (strcmp(opt->filelist[fid].reference, "\0") != 0) + mz_size += t->model[fid].out_dsize; + sprintf(mz_name, "ml_user_data_%d", fid); mz = rte_memzone_reserve(mz_name, mz_size, opt->socket_id, 0); if (mz == NULL) { @@ -582,6 +613,10 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t f t->model[fid].input = mz->addr; t->model[fid].output = t->model[fid].input + t->model[fid].inp_dsize; + if (strcmp(opt->filelist[fid].reference, "\0") != 0) + t->model[fid].reference = t->model[fid].output + t->model[fid].out_dsize; + else + t->model[fid].reference = NULL; /* load input file */ fp = fopen(opt->filelist[fid].input, "r"); @@ -610,6 +645,27 @@ ml_inference_iomem_setup(struct ml_test *test, struct ml_options *opt, int16_t f } fclose(fp); + /* load reference file */ + if (t->model[fid].reference != NULL) { + fp = fopen(opt->filelist[fid].reference, "r"); + if (fp == NULL) { + ml_err("Failed to open 
reference file : %s\n", + opt->filelist[fid].reference); + ret = -errno; + goto error; + } + + if (fread(t->model[fid].reference, 1, t->model[fid].out_dsize, fp) != + t->model[fid].out_dsize) { + ml_err("Failed to read reference file : %s\n", + opt->filelist[fid].reference); + ret = -errno; + fclose(fp); + goto error; + } + fclose(fp); + } + /* create mempool for quantized input and output buffers. ml_request_initialize is * used as a callback for object creation. */ @@ -694,6 +750,121 @@ ml_inference_mem_destroy(struct ml_test *test, struct ml_options *opt) rte_mempool_free(t->op_pool); } +static bool +ml_inference_validation(struct ml_test *test, struct ml_request *req) +{ + struct test_inference *t = ml_test_priv((struct ml_test *)test); + struct ml_model *model; + uint32_t nb_elements; + uint8_t *reference; + uint8_t *output; + bool match; + uint32_t i; + uint32_t j; + + model = &t->model[req->fid]; + + /* compare crc when tolerance is 0 */ + if (t->cmn.opt->tolerance == 0.0) { + match = (rte_hash_crc(model->output, model->out_dsize, 0) == + rte_hash_crc(model->reference, model->out_dsize, 0)); + } else { + output = model->output; + reference = model->reference; + + i = 0; +next_output: + nb_elements = + model->info.output_info[i].shape.w * model->info.output_info[i].shape.x * + model->info.output_info[i].shape.y * model->info.output_info[i].shape.z; + j = 0; +next_element: + match = false; + switch (model->info.output_info[i].dtype) { + case RTE_ML_IO_TYPE_INT8: + if (ML_TEST_CHECK_OUTPUT(ML_TEST_READ_TYPE(output, int8_t), + ML_TEST_READ_TYPE(reference, int8_t), + t->cmn.opt->tolerance)) + match = true; + + output += sizeof(int8_t); + reference += sizeof(int8_t); + break; + case RTE_ML_IO_TYPE_UINT8: + if (ML_TEST_CHECK_OUTPUT(ML_TEST_READ_TYPE(output, uint8_t), + ML_TEST_READ_TYPE(reference, uint8_t), + t->cmn.opt->tolerance)) + match = true; + + output += sizeof(float); + reference += sizeof(float); + break; + case RTE_ML_IO_TYPE_INT16: + if 
(ML_TEST_CHECK_OUTPUT(ML_TEST_READ_TYPE(output, int16_t), + ML_TEST_READ_TYPE(reference, int16_t), + t->cmn.opt->tolerance)) + match = true; + + output += sizeof(int16_t); + reference += sizeof(int16_t); + break; + case RTE_ML_IO_TYPE_UINT16: + if (ML_TEST_CHECK_OUTPUT(ML_TEST_READ_TYPE(output, uint16_t), + ML_TEST_READ_TYPE(reference, uint16_t), + t->cmn.opt->tolerance)) + match = true; + + output += sizeof(uint16_t); + reference += sizeof(uint16_t); + break; + case RTE_ML_IO_TYPE_INT32: + if (ML_TEST_CHECK_OUTPUT(ML_TEST_READ_TYPE(output, int32_t), + ML_TEST_READ_TYPE(reference, int32_t), + t->cmn.opt->tolerance)) + match = true; + + output += sizeof(int32_t); + reference += sizeof(int32_t); + break; + case RTE_ML_IO_TYPE_UINT32: + if (ML_TEST_CHECK_OUTPUT(ML_TEST_READ_TYPE(output, uint32_t), + ML_TEST_READ_TYPE(reference, uint32_t), + t->cmn.opt->tolerance)) + match = true; + + output += sizeof(uint32_t); + reference += sizeof(uint32_t); + break; + case RTE_ML_IO_TYPE_FP32: + if (ML_TEST_CHECK_OUTPUT(ML_TEST_READ_TYPE(output, float), + ML_TEST_READ_TYPE(reference, float), + t->cmn.opt->tolerance)) + match = true; + + output += sizeof(float); + reference += sizeof(float); + break; + default: /* other types, fp8, fp16, bfloat16 */ + match = true; + } + + if (!match) + goto done; + j++; + if (j < nb_elements) + goto next_element; + + i++; + if (i < model->info.nb_outputs) + goto next_output; + } +done: + if (match) + t->nb_valid++; + + return match; +} + /* Callback for mempool object iteration. This call would dequantize ouput data. 
*/ static void ml_request_finish(struct rte_mempool *mp, void *opaque, void *obj, unsigned int obj_idx) @@ -701,9 +872,10 @@ ml_request_finish(struct rte_mempool *mp, void *opaque, void *obj, unsigned int struct test_inference *t = ml_test_priv((struct ml_test *)opaque); struct ml_request *req = (struct ml_request *)obj; struct ml_model *model = &t->model[req->fid]; + char str[PATH_MAX]; + bool error = false; RTE_SET_USED(mp); - RTE_SET_USED(obj_idx); if (req->niters == 0) return; @@ -711,6 +883,48 @@ ml_request_finish(struct rte_mempool *mp, void *opaque, void *obj, unsigned int t->nb_used++; rte_ml_io_dequantize(t->cmn.opt->dev_id, model->id, t->model[req->fid].nb_batches, req->output, model->output); + + if (model->reference == NULL) { + t->nb_valid++; + goto dump_output_pass; + } + + if (!ml_inference_validation(opaque, req)) + goto dump_output_fail; + else + goto dump_output_pass; + +dump_output_pass: + if (obj_idx == 0) { + /* write quantized output */ + snprintf(str, PATH_MAX, "%s.q", t->cmn.opt->filelist[req->fid].output); + ML_OPEN_WRITE_GET_ERR(str, req->output, model->out_qsize, error); + if (error) + return; + + /* write dequantized output */ + snprintf(str, PATH_MAX, "%s", t->cmn.opt->filelist[req->fid].output); + ML_OPEN_WRITE_GET_ERR(str, model->output, model->out_dsize, error); + if (error) + return; + } + + return; + +dump_output_fail: + if (t->cmn.opt->debug) { + /* dump quantized output buffer */ + snprintf(str, PATH_MAX, "%s.q.%d", t->cmn.opt->filelist[req->fid].output, obj_idx); + ML_OPEN_WRITE_GET_ERR(str, req->output, model->out_qsize, error); + if (error) + return; + + /* dump dequantized output buffer */ + snprintf(str, PATH_MAX, "%s.%d", t->cmn.opt->filelist[req->fid].output, obj_idx); + ML_OPEN_WRITE_GET_ERR(str, model->output, model->out_dsize, error); + if (error) + return; + } } int @@ -722,7 +936,7 @@ ml_inference_result(struct ml_test *test, struct ml_options *opt, int16_t fid) rte_mempool_obj_iter(t->model[fid].io_pool, 
ml_request_finish, test); - if (t->nb_used > 0) + if (t->nb_used == t->nb_valid) t->cmn.result = ML_TEST_SUCCESS; else t->cmn.result = ML_TEST_FAILED; diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h index 1bac2dcfa0..3f2b042360 100644 --- a/app/test-mldev/test_inference_common.h +++ b/app/test-mldev/test_inference_common.h @@ -43,6 +43,7 @@ struct test_inference { struct rte_mempool *op_pool; uint64_t nb_used; + uint64_t nb_valid; int16_t fid; int (*enqueue)(void *arg); diff --git a/app/test-mldev/test_model_common.h b/app/test-mldev/test_model_common.h index dfbf568f0b..ce12cbfecc 100644 --- a/app/test-mldev/test_model_common.h +++ b/app/test-mldev/test_model_common.h @@ -31,6 +31,7 @@ struct ml_model { uint8_t *input; uint8_t *output; + uint8_t *reference; struct rte_mempool *io_pool; uint32_t nb_batches;

From patchwork Tue Nov 29 06:50:39 2022
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 120230
From: Srikanth Yalavarthi
Subject: [PATCH v1 11/12] app/mldev: enable reporting stats in mldev app
Date: Mon, 28 Nov 2022 22:50:39 -0800
Message-ID: <20221129065040.5875-12-syalavarthi@marvell.com>
In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com>

Enable reporting of driver xstats and inference end-to-end latency and throughput in the mldev inference tests. Stats reporting can be enabled using the "--stats" option.

Signed-off-by: Srikanth Yalavarthi
Change-Id: I1af73ce1e361e88d2ccf2a4ff9916f0d5d8ca6f2
--- app/test-mldev/ml_options.c | 22 ++-- app/test-mldev/ml_options.h | 2 + app/test-mldev/test_inference_common.c | 139 +++++++++++++++++++++ app/test-mldev/test_inference_common.h | 8 ++ app/test-mldev/test_inference_interleave.c | 4 + app/test-mldev/test_inference_ordered.c | 1 + 6 files changed, 168 insertions(+), 8 deletions(-) diff --git a/app/test-mldev/ml_options.c b/app/test-mldev/ml_options.c index 4087ab52db..91052d4593 100644 --- a/app/test-mldev/ml_options.c +++ b/app/test-mldev/ml_options.c @@ -36,6 +36,7 @@ ml_options_default(struct ml_options *opt) opt->queue_size = 1; opt->batches = 0; opt->tolerance = 0.0; + opt->stats = false; opt->debug = false; } @@ -222,7 +223,8 @@ ml_dump_test_options(const char *testname) "\t\t--queue_pairs : number of queue pairs to create\n" "\t\t--queue_size : size fo queue-pair\n" "\t\t--batches : number of batches of input\n" - "\t\t--tolerance : maximum tolerance (%%) for output validation\n"); + "\t\t--tolerance : maximum tolerance (%%) for output validation\n" + "\t\t--stats : enable reporting performance statistics\n"); printf("\n"); } } @@ -242,13 +244,12 @@ print_usage(char *program) ml_test_dump_names(ml_dump_test_options); } -static struct option lgopts[] = {{ML_TEST, 1, 0, 0}, {ML_DEVICE_ID, 1, 0, 0}, - {ML_SOCKET_ID, 1, 0, 0}, {ML_MODELS, 1, 0, 0}, - {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, - {ML_BURST_SIZE, 1, 0, 0}, {ML_QUEUE_PAIRS, 1, 0, 0}, - {ML_QUEUE_SIZE, 1, 0, 0}, {ML_BATCHES, 1, 0, 0}, - {ML_TOLERANCE, 1, 0, 0}, {ML_DEBUG, 0, 0, 0}, - {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; +static struct option lgopts[] = { + {ML_TEST, 1, 0, 0},
{ML_DEVICE_ID, 1, 0, 0}, {ML_SOCKET_ID, 1, 0, 0}, + {ML_MODELS, 1, 0, 0}, {ML_FILELIST, 1, 0, 0}, {ML_REPETITIONS, 1, 0, 0}, + {ML_BURST_SIZE, 1, 0, 0}, {ML_QUEUE_PAIRS, 1, 0, 0}, {ML_QUEUE_SIZE, 1, 0, 0}, + {ML_BATCHES, 1, 0, 0}, {ML_TOLERANCE, 1, 0, 0}, {ML_STATS, 0, 0, 0}, + {ML_DEBUG, 0, 0, 0}, {ML_HELP, 0, 0, 0}, {NULL, 0, 0, 0}}; static int ml_opts_parse_long(int opt_idx, struct ml_options *opt) @@ -283,6 +284,11 @@ ml_options_parse(struct ml_options *opt, int argc, char **argv) while ((opts = getopt_long(argc, argv, "", lgopts, &opt_idx)) != EOF) { switch (opts) { case 0: /* parse long options */ + if (!strcmp(lgopts[opt_idx].name, "stats")) { + opt->stats = true; + break; + } + if (!strcmp(lgopts[opt_idx].name, "debug")) { opt->debug = true; break; diff --git a/app/test-mldev/ml_options.h b/app/test-mldev/ml_options.h index 79ac54de98..a375ae6750 100644 --- a/app/test-mldev/ml_options.h +++ b/app/test-mldev/ml_options.h @@ -24,6 +24,7 @@ #define ML_QUEUE_SIZE ("queue_size") #define ML_BATCHES ("batches") #define ML_TOLERANCE ("tolerance") +#define ML_STATS ("stats") #define ML_DEBUG ("debug") #define ML_HELP ("help") @@ -46,6 +47,7 @@ struct ml_options { uint16_t queue_size; uint16_t batches; float tolerance; + bool stats; bool debug; }; diff --git a/app/test-mldev/test_inference_common.c b/app/test-mldev/test_inference_common.c index cdd1667c71..d3f0211852 100644 --- a/app/test-mldev/test_inference_common.c +++ b/app/test-mldev/test_inference_common.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include @@ -45,6 +46,17 @@ } \ } while (0) +static void +print_line(uint16_t len) +{ + uint16_t i; + + for (i = 0; i < len; i++) + printf("-"); + + printf("\n"); +} + /* Enqueue inference requests with burst size equal to 1 */ static int ml_enqueue_single(void *arg) @@ -54,6 +66,7 @@ ml_enqueue_single(void *arg) struct rte_ml_op *op = NULL; struct ml_core_args *args; uint64_t model_enq = 0; + uint64_t start_cycle; uint32_t burst_enq; uint32_t 
lcore_id; int16_t fid; @@ -61,6 +74,7 @@ ml_enqueue_single(void *arg) lcore_id = rte_lcore_id(); args = &t->args[lcore_id]; + args->start_cycles = 0; model_enq = 0; if (args->nb_reqs == 0) @@ -96,10 +110,12 @@ ml_enqueue_single(void *arg) req->fid = fid; enqueue_req: + start_cycle = rte_get_tsc_cycles(); burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, args->qp_id, &op, 1); if (burst_enq == 0) goto enqueue_req; + args->start_cycles += start_cycle; fid++; if (likely(fid <= args->end_fid)) goto next_model; @@ -123,10 +139,12 @@ ml_dequeue_single(void *arg) uint64_t total_deq = 0; uint8_t nb_filelist; uint32_t burst_deq; + uint64_t end_cycle; uint32_t lcore_id; lcore_id = rte_lcore_id(); args = &t->args[lcore_id]; + args->end_cycles = 0; nb_filelist = args->end_fid - args->start_fid + 1; if (args->nb_reqs == 0) @@ -134,9 +152,11 @@ ml_dequeue_single(void *arg) dequeue_req: burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, args->qp_id, &op, 1); + end_cycle = rte_get_tsc_cycles(); if (likely(burst_deq == 1)) { total_deq += burst_deq; + args->end_cycles += end_cycle; if (unlikely(op->status == RTE_ML_OP_STATUS_ERROR)) { rte_ml_op_error_get(t->cmn.opt->dev_id, op, &error); ml_err("error_code = 0x%" PRIx64 ", error_message = %s\n", error.errcode, @@ -159,6 +179,7 @@ ml_enqueue_burst(void *arg) { struct test_inference *t = ml_test_priv((struct ml_test *)arg); struct ml_core_args *args; + uint64_t start_cycle; uint16_t ops_count; uint64_t model_enq; uint16_t burst_enq; @@ -171,6 +192,7 @@ ml_enqueue_burst(void *arg) lcore_id = rte_lcore_id(); args = &t->args[lcore_id]; + args->start_cycles = 0; model_enq = 0; if (args->nb_reqs == 0) @@ -212,8 +234,10 @@ ml_enqueue_burst(void *arg) pending = ops_count; enqueue_reqs: + start_cycle = rte_get_tsc_cycles(); burst_enq = rte_ml_enqueue_burst(t->cmn.opt->dev_id, args->qp_id, &args->enq_ops[idx], pending); + args->start_cycles += burst_enq * start_cycle; pending = pending - burst_enq; if (pending > 0) { @@ -243,11 +267,13 @@ 
ml_dequeue_burst(void *arg) uint64_t total_deq = 0; uint16_t burst_deq = 0; uint8_t nb_filelist; + uint64_t end_cycle; uint32_t lcore_id; uint32_t i; lcore_id = rte_lcore_id(); args = &t->args[lcore_id]; + args->end_cycles = 0; nb_filelist = args->end_fid - args->start_fid + 1; if (args->nb_reqs == 0) @@ -256,9 +282,11 @@ ml_dequeue_burst(void *arg) dequeue_burst: burst_deq = rte_ml_dequeue_burst(t->cmn.opt->dev_id, args->qp_id, args->deq_ops, t->cmn.opt->burst_size); + end_cycle = rte_get_tsc_cycles(); if (likely(burst_deq > 0)) { total_deq += burst_deq; + args->end_cycles += burst_deq * end_cycle; for (i = 0; i < burst_deq; i++) { if (unlikely(args->deq_ops[i]->status == RTE_ML_OP_STATUS_ERROR)) { @@ -387,6 +415,7 @@ test_inference_opt_dump(struct ml_options *opt) ml_dump("queue_pairs", "%u", opt->queue_pairs); ml_dump("queue_size", "%u", opt->queue_size); ml_dump("tolerance", "%-7.3f", opt->tolerance); + ml_dump("stats", "%s", (opt->stats ? "true" : "false")); if (opt->batches == 0) ml_dump("batches", "%u (default)", opt->batches); @@ -459,6 +488,11 @@ test_inference_setup(struct ml_test *test, struct ml_options *opt) RTE_CACHE_LINE_SIZE, opt->socket_id); } + for (i = 0; i < RTE_MAX_LCORE; i++) { + t->args[i].start_cycles = 0; + t->args[i].end_cycles = 0; + } + return 0; error: @@ -985,3 +1019,108 @@ ml_inference_launch_cores(struct ml_test *test, struct ml_options *opt, int16_t return 0; } + +int +ml_inference_stats_get(struct ml_test *test, struct ml_options *opt) +{ + struct test_inference *t = ml_test_priv(test); + uint64_t total_cycles = 0; + uint32_t nb_filelist; + uint64_t throughput; + uint64_t avg_e2e; + uint32_t qp_id; + uint64_t freq; + int ret; + int i; + + if (!opt->stats) + return 0; + + /* get xstats size */ + t->xstats_size = rte_ml_dev_xstats_names_get(opt->dev_id, NULL, 0); + if (t->xstats_size >= 0) { + /* allocate for xstats_map and values */ + t->xstats_map = rte_malloc( + "ml_xstats_map", t->xstats_size * sizeof(struct 
rte_ml_dev_xstats_map), 0); + if (t->xstats_map == NULL) { + ret = -ENOMEM; + goto error; + } + + t->xstats_values = + rte_malloc("ml_xstats_values", t->xstats_size * sizeof(uint64_t), 0); + if (t->xstats_values == NULL) { + ret = -ENOMEM; + goto error; + } + + ret = rte_ml_dev_xstats_names_get(opt->dev_id, t->xstats_map, t->xstats_size); + if (ret != t->xstats_size) { + printf("Unable to get xstats names, ret = %d\n", ret); + ret = -1; + goto error; + } + + for (i = 0; i < t->xstats_size; i++) + rte_ml_dev_xstats_get(opt->dev_id, &t->xstats_map[i].id, + &t->xstats_values[i], 1); + } + + /* print xstats*/ + printf("\n"); + print_line(80); + printf(" ML Device Extended Statistics\n"); + print_line(80); + for (i = 0; i < t->xstats_size; i++) + printf(" %-64s = %" PRIu64 "\n", t->xstats_map[i].name, t->xstats_values[i]); + print_line(80); + + /* release buffers */ + if (t->xstats_map) + rte_free(t->xstats_map); + + if (t->xstats_values) + rte_free(t->xstats_values); + + /* print end-to-end stats */ + freq = rte_get_tsc_hz(); + for (qp_id = 0; qp_id < RTE_MAX_LCORE; qp_id++) + total_cycles += t->args[qp_id].end_cycles - t->args[qp_id].start_cycles; + avg_e2e = total_cycles / opt->repetitions; + + if (freq == 0) { + avg_e2e = total_cycles / opt->repetitions; + printf(" %-64s = %" PRIu64 "\n", "Average End-to-End Latency (cycles)", avg_e2e); + } else { + avg_e2e = (total_cycles * NS_PER_S) / (opt->repetitions * freq); + printf(" %-64s = %" PRIu64 "\n", "Average End-to-End Latency (ns)", avg_e2e); + } + + if (strcmp(opt->test_name, "inference_ordered") == 0) + nb_filelist = 1; + else + nb_filelist = t->cmn.opt->nb_filelist; + + if (freq == 0) { + throughput = (nb_filelist * t->cmn.opt->repetitions * 1000000) / total_cycles; + printf(" %-64s = %" PRIu64 "\n", "Average Throughput (inferences / million cycles)", + throughput); + } else { + throughput = (nb_filelist * t->cmn.opt->repetitions * freq) / total_cycles; + printf(" %-64s = %" PRIu64 "\n", "Average Throughput 
(inferences / second)", + throughput); + } + + print_line(80); + + return 0; + +error: + if (t->xstats_map) + rte_free(t->xstats_map); + + if (t->xstats_values) + rte_free(t->xstats_values); + + return ret; +} diff --git a/app/test-mldev/test_inference_common.h b/app/test-mldev/test_inference_common.h index 3f2b042360..bb2920cc30 100644 --- a/app/test-mldev/test_inference_common.h +++ b/app/test-mldev/test_inference_common.h @@ -32,6 +32,9 @@ struct ml_core_args { struct rte_ml_op **enq_ops; struct rte_ml_op **deq_ops; struct ml_request **reqs; + + uint64_t start_cycles; + uint64_t end_cycles; }; struct test_inference { @@ -50,6 +53,10 @@ struct test_inference { int (*dequeue)(void *arg); struct ml_core_args args[RTE_MAX_LCORE]; + + struct rte_ml_dev_xstats_map *xstats_map; + uint64_t *xstats_values; + int xstats_size; } __rte_cache_aligned; bool test_inference_cap_check(struct ml_options *opt); @@ -67,5 +74,6 @@ void ml_inference_mem_destroy(struct ml_test *test, struct ml_options *opt); int ml_inference_result(struct ml_test *test, struct ml_options *opt, int16_t fid); int ml_inference_launch_cores(struct ml_test *test, struct ml_options *opt, int16_t start_fid, int16_t end_fid); +int ml_inference_stats_get(struct ml_test *test, struct ml_options *opt); #endif /* _ML_TEST_INFERENCE_COMMON_ */ diff --git a/app/test-mldev/test_inference_interleave.c b/app/test-mldev/test_inference_interleave.c index 74ad0c597f..d86838c3fa 100644 --- a/app/test-mldev/test_inference_interleave.c +++ b/app/test-mldev/test_inference_interleave.c @@ -60,7 +60,11 @@ test_inference_interleave_driver(struct ml_test *test, struct ml_options *opt) goto error; ml_inference_iomem_destroy(test, opt, fid); + } + + ml_inference_stats_get(test, opt); + for (fid = 0; fid < opt->nb_filelist; fid++) { ret = ml_model_stop(test, opt, &t->model[fid], fid); if (ret != 0) goto error; diff --git a/app/test-mldev/test_inference_ordered.c b/app/test-mldev/test_inference_ordered.c index 84e6bf9109..3826121a65 
100644 --- a/app/test-mldev/test_inference_ordered.c +++ b/app/test-mldev/test_inference_ordered.c @@ -58,6 +58,7 @@ test_inference_ordered_driver(struct ml_test *test, struct ml_options *opt) goto error; ml_inference_iomem_destroy(test, opt, fid); + ml_inference_stats_get(test, opt); /* stop model */ ret = ml_model_stop(test, opt, &t->model[fid], fid);

From patchwork Tue Nov 29 06:50:40 2022 X-Patchwork-Submitter: Srikanth Yalavarthi X-Patchwork-Id: 120231 X-Patchwork-Delegate: thomas@monjalon.net From: Srikanth Yalavarthi To: Thomas Monjalon, Srikanth Yalavarthi Subject: [PATCH v1 12/12] app/mldev: add documentation for mldev test cases Date: Mon, 28 Nov 2022 22:50:40 -0800 Message-ID: <20221129065040.5875-13-syalavarthi@marvell.com> In-Reply-To: <20221129065040.5875-1-syalavarthi@marvell.com> References: <20221129065040.5875-1-syalavarthi@marvell.com>

Added documentation specific to mldev test cases. Added details about all test cases and option supported by individual tests.
Signed-off-by: Srikanth Yalavarthi Change-Id: I29f7bfe991e0f9ceb375bc4073dd7fd0fbcddaac --- MAINTAINERS | 1 + .../tools/img/mldev_inference_interleave.svg | 667 ++++++++++++++++++ .../tools/img/mldev_inference_ordered.svg | 526 ++++++++++++++ .../tools/img/mldev_model_ops_subtest_a.svg | 418 +++++++++++ .../tools/img/mldev_model_ops_subtest_b.svg | 421 +++++++++++ .../tools/img/mldev_model_ops_subtest_c.svg | 364 ++++++++++ .../tools/img/mldev_model_ops_subtest_d.svg | 422 +++++++++++ doc/guides/tools/index.rst | 1 + doc/guides/tools/testmldev.rst | 441 ++++++++++++ 9 files changed, 3261 insertions(+) create mode 100644 doc/guides/tools/img/mldev_inference_interleave.svg create mode 100644 doc/guides/tools/img/mldev_inference_ordered.svg create mode 100644 doc/guides/tools/img/mldev_model_ops_subtest_a.svg create mode 100644 doc/guides/tools/img/mldev_model_ops_subtest_b.svg create mode 100644 doc/guides/tools/img/mldev_model_ops_subtest_c.svg create mode 100644 doc/guides/tools/img/mldev_model_ops_subtest_d.svg create mode 100644 doc/guides/tools/testmldev.rst diff --git a/MAINTAINERS b/MAINTAINERS index 1edea42fad..1cddd6ead2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -540,6 +540,7 @@ M: Srikanth Yalavarthi F: lib/mldev/ F: app/test-mldev/ F: doc/guides/prog_guide/mldev.rst +F: doc/guides/tools/testmldev.rst Memory Pool Drivers

diff --git a/doc/guides/tools/img/mldev_inference_interleave.svg b/doc/guides/tools/img/mldev_inference_interleave.svg new file mode 100644 index 0000000000..517c53d294 [new SVG figure; markup omitted. Content: "test: inference_interleave" — enqueue workers (lcores 1, 3, 5) and dequeue workers (lcores 2, 4, 6) mapped to queue pairs 0-2 of the machine learning hardware engine, interleaving models 0-3; nb_worker_threads = 2 * MIN(nb_queue_pairs, (lcore_count - 1) / 2); inferences_per_queue_pair = nb_models * (repetitions / nb_queue_pairs)]

diff --git a/doc/guides/tools/img/mldev_inference_ordered.svg b/doc/guides/tools/img/mldev_inference_ordered.svg new file mode 100644 index 0000000000..9d2b2c9246 [new SVG figure; markup omitted. Content: "test: inference_ordered" — a single model X served by queue pairs 0-2 with enqueue workers (lcores 1, 3, 5) and dequeue workers (lcores 2, 4, 6); nb_worker_threads = 2 * MIN(nb_queue_pairs, (lcore_count - 1) / 2); inferences_per_queue_pair = repetitions / nb_queue_pairs]

diff --git a/doc/guides/tools/img/mldev_model_ops_subtest_a.svg b/doc/guides/tools/img/mldev_model_ops_subtest_a.svg new file mode 100644 index 0000000000..cce5c3be7c [new SVG figure; markup omitted. Content: model_ops sub-test A — load / start / stop / unload executed for each model in order, model 0 through model N]

diff --git a/doc/guides/tools/img/mldev_model_ops_subtest_b.svg b/doc/guides/tools/img/mldev_model_ops_subtest_b.svg new file mode 100644 index 0000000000..53a49a2823 [new SVG figure; markup omitted. Content: model_ops sub-test B — load for all models, then start for all models, then stop for all models, then unload for all models]

diff --git a/doc/guides/tools/img/mldev_model_ops_subtest_c.svg b/doc/guides/tools/img/mldev_model_ops_subtest_c.svg new file mode 100644 index 0000000000..320d4978e3 [new SVG figure; markup omitted. Content: model_ops sub-test C — load for all models, then start / stop per model in order, then unload for all models]

diff --git a/doc/guides/tools/img/mldev_model_ops_subtest_d.svg b/doc/guides/tools/img/mldev_model_ops_subtest_d.svg new file mode 100644 index 0000000000..80c1798d99 [new SVG figure; markup omitted. Content: model_ops sub-test D — load and start executed per model for all models, then stop and unload executed per model]

diff --git a/doc/guides/tools/index.rst b/doc/guides/tools/index.rst index f1f5b94c8c..6f84fc31ff 100644 --- a/doc/guides/tools/index.rst +++ b/doc/guides/tools/index.rst @@ -21,4 +21,5 @@ DPDK Tools User Guides comp_perf testeventdev testregex + testmldev dts

diff --git a/doc/guides/tools/testmldev.rst b/doc/guides/tools/testmldev.rst new file mode 100644 index 0000000000..845c2d9381 --- /dev/null +++ b/doc/guides/tools/testmldev.rst @@ -0,0 +1,441 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (c) 2022 Marvell. + +dpdk-test-mldev Application +=========================== + +The ``dpdk-test-mldev`` tool is a Data Plane Development Kit (DPDK) application that allows testing +various mldev use cases. This application has a generic framework to add new mldev based test cases +to verify functionality and measure the performance of inference execution on DPDK ML devices.
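The worker-thread arithmetic shown in the test diagrams can be sketched as follows. This is a hypothetical Python illustration of the formulas from the figures (the application itself computes this in C); the helper names are not part of the application:

```python
# Hypothetical helpers sketching the worker/lcore arithmetic used by the
# mldev inference tests; the real application computes this internally in C.

def nb_worker_threads(nb_queue_pairs: int, lcore_count: int) -> int:
    # One enqueue and one dequeue worker per serviced queue pair; one lcore
    # is reserved for the main thread, so at most (lcore_count - 1) // 2
    # queue pairs can be serviced.
    return 2 * min(nb_queue_pairs, (lcore_count - 1) // 2)

def inferences_per_queue_pair(repetitions: int, nb_queue_pairs: int,
                              nb_models: int = 1) -> int:
    # inference_ordered runs one model at a time (nb_models = 1);
    # inference_interleave spreads all loaded models over the queue pairs.
    return nb_models * (repetitions // nb_queue_pairs)
```

For example, with 4 queue pairs on 16 lcores, 8 worker lcores are used, which matches the minimum lcore requirement of ``queue_pairs * 2 + 1`` stated in the inference-tests section.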
+ + +Application and Options +----------------------- + +The application has a number of command line options: + +.. code-block:: console + + dpdk-test-mldev [EAL Options] -- [application options] + +EAL Options +~~~~~~~~~~~ + +The following are the EAL command-line options that can be used with the ``dpdk-test-mldev`` +application. See the DPDK Getting Started Guides for more information on these options. + +* ``-c <COREMASK>`` or ``-l <CORELIST>`` + + Set the hexadecimal bitmask of the cores to run on. The corelist is a list of cores to use. + +* ``-a <PCI_ID>`` + + Attach a PCI based ML device. Specific to drivers using PCI based ML devices. + +* ``--vdev <driver_name>`` + + Add a virtual mldev device. Specific to drivers using an ML virtual device. + + +Application Options +~~~~~~~~~~~~~~~~~~~ + +The following are the command-line options supported by the test application. + +* ``--test <name>`` + + ML tests are divided into three groups: Device tests, Model tests and Inference tests. The + test name should be one of the following supported tests. + + **ML Device Tests** :: + + device_ops + + **ML Model Tests** :: + + model_ops + + **ML Inference Tests** :: + + inference_ordered + inference_interleave + +* ``--dev_id <id>`` + + Set the device id of the ML device to be used for the test. Default value is `0`. + +* ``--socket_id <socket_id>`` + + Set the socket id of the application resources. Default value is `SOCKET_ID_ANY`. + +* ``--debug`` + + Enable the tests to run in debug mode. + +* ``--models <model_list>`` + + Set the list of model files to be used for the tests. Application expects the + ``model_list`` in comma separated form (i.e. ``--models model_A.bin,model_B.bin``). + Maximum number of models supported by the test is ``8``. + +* ``--filelist <file_list>`` + + Set the list of model, input, output and reference files to be used for the tests. + Application expects the ``file_list`` to be in comma separated form + (i.e. ``--filelist <model,input,output[,reference]>``). + + Multiple filelist entries can be specified when running the tests with multiple models.
+ + Both quantized and dequantized outputs are written to the disk. Dequantized output file + would have the name specified by the user through the ``--filelist`` option. A suffix ``.q`` + is appended to the quantized output filename. Maximum number of filelist entries supported + by the test is ``8``. + +* ``--repetitions <num>`` + + Set the number of inference repetitions to be executed in the test for each model. Default + value is `1`. + +* ``--burst_size <num>`` + + Set the burst size to be used when enqueuing / dequeuing inferences. Default value is `1`. + +* ``--queue_pairs <n>`` + + Set the number of queue-pairs to be used for inference enqueue and dequeue operations. + Default value is `1`. + +* ``--queue_size <n>`` + + Set the size of queue-pair to be created for inference enqueue / dequeue operations. + Queue size would translate into the `rte_ml_dev_qp_conf::nb_desc` field during queue-pair + creation. Default value is `1`. + +* ``--batches <num>`` + + Set the number of batches in the input file provided for the inference run. When not specified, + the test assumes the number of batches is equal to the batch size of the model. + +* ``--tolerance <val>`` + + Set the tolerance value in percentage to be used for output validation. Default value + is `0`. + +* ``--stats`` + + Enable reporting device extended stats. + + +ML Device Tests +--------------- + +ML device tests are functional tests to validate ML device APIs. Device tests validate the ML +device handling APIs: configure, close, start and stop. + + +Application Options +~~~~~~~~~~~~~~~~~~~ + +Supported command line options for the `device_ops` test are the following:: + + --debug + --test + --dev_id + --socket_id + --queue_pairs + --queue_size + + +DEVICE_OPS Test +~~~~~~~~~~~~~~~ + +Device ops test validates the device configuration and reconfiguration support.
The test configures the +ML device based on the ``--queue_pairs`` and ``--queue_size`` options specified by the user, and +later reconfigures the ML device with the number of queue pairs and queue size based on the +maximum values reported through the device info. + + +Example +^^^^^^^ + +Command to run device_ops test: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=device_ops + + +Command to run device_ops test with user options: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=device_ops --queue_pairs <n> --queue_size <size> + + +ML Model Tests +-------------- + +Model tests are functional tests to validate ML model APIs. Model tests validate the functioning +of APIs to load, start, stop and unload ML models. + + +Application Options +~~~~~~~~~~~~~~~~~~~ + +Supported command line options for the `model_ops` test are the following:: + + --debug + --test + --dev_id + --socket_id + --models + + +List of model files to be used for the `model_ops` test can be specified through the option +``--models <model_list>`` as a comma separated list. Maximum number of models supported in +the test is `8`. + +.. Note:: + + * The ``--models <model_list>`` is a mandatory option for running this test. + * Options not supported by the test are ignored if specified. + + +MODEL_OPS Test +~~~~~~~~~~~~~~ + +The test is a collection of multiple sub-tests, each with a different order of slow-path +operations when handling `N` models. + + +**Sub-test A:** executes the sequence of load / start / stop / unload for a model in order, +followed by the next model. + +.. _figure_mldev_model_ops_subtest_a: + +.. figure:: img/mldev_model_ops_subtest_a.* + + Execution sequence of model_ops subtest A. + + +**Sub-test B:** executes load for all models, followed by a start for all models. Upon successful +start of all models, stop is invoked for all models followed by unload. + +.. _figure_mldev_model_ops_subtest_b: + +..
figure:: img/mldev_model_ops_subtest_b.* + + Execution sequence of model_ops subtest B. + + +**Sub-test C:** loads all models, followed by a start and stop of all models in order. Upon +completion of stop, unload is invoked for all models. + +.. _figure_mldev_model_ops_subtest_c: + +.. figure:: img/mldev_model_ops_subtest_c.* + + Execution sequence of model_ops subtest C. + + +**Sub-test D:** executes load and start for all models available. Upon successful start of all +models, stop and unload are executed for the models. + +.. _figure_mldev_model_ops_subtest_d: + +.. figure:: img/mldev_model_ops_subtest_d.* + + Execution sequence of model_ops subtest D. + + +Example +^^^^^^^ + +Command to run model_ops test: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=model_ops --models model_1.bin,model_2.bin,model_3.bin,model_4.bin + + +ML Inference Tests +------------------ + +Inference tests are a set of tests to validate end-to-end inference execution on an ML device. +These tests execute the full sequence of operations required to run inferences with one or +multiple models. + +Application Options +~~~~~~~~~~~~~~~~~~~ + +Supported command line options for inference tests are the following:: + + --debug + --test + --dev_id + --socket_id + --filelist + --repetitions + --burst_size + --queue_pairs + --queue_size + --batches + --tolerance + --stats + + +List of files to be used for the inference tests can be specified through the option +``--filelist <file_list>`` as a comma separated list. A filelist entry would be of the format +``--filelist <model_file,input_file,output_file[,reference_file]>`` and is used to specify the +list of files required to test with a single model. Multiple filelist entries are supported by +the test, one entry per model. Maximum number of file entries supported by the test is `8`. + +When the ``--burst_size <num>`` option is specified for the test, the enqueue and dequeue bursts +attempt to enqueue or dequeue ``num`` inferences per call respectively.
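The filelist entry format described above can be illustrated with a small sketch. The helper below is hypothetical Python (the application parses these entries in C), shown only to make the ``model,input,output[,reference]`` layout concrete:

```python
# Hypothetical helper mirroring the --filelist entry format described
# above: model,input,output[,reference]. Illustration only; not the
# application's C parser.

def parse_filelist_entry(entry: str) -> dict:
    parts = entry.split(",")
    if len(parts) not in (3, 4):
        raise ValueError("expected model,input,output[,reference]")
    return dict(zip(("model", "input", "output", "reference"), parts))

entry = parse_filelist_entry("model_A.bin,input_A.bin,output_A.bin,ref_A.bin")
# Output validation is enabled only when the optional reference file
# is present in the entry.
validation_enabled = "reference" in entry
```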
+ +In the inference test, a pair of lcores is mapped to each queue pair. Minimum number of lcores +required for the tests is equal to ``(queue_pairs * 2 + 1)``. + +Output validation of inference would be enabled only when a reference file is specified through +the ``--filelist`` option. The application would additionally consider the tolerance value provided +through the ``--tolerance`` option during validation. When the tolerance value is 0, the CRC32 hashes +of the inference output and the reference output are compared. When the tolerance is non-zero, +element-wise comparison of the output is performed. Validation is considered successful only when +all the elements of the output tensor are within the specified tolerance range. + +When the ``--debug`` option is specified, tests are run in debug mode. + +Enabling ``--stats`` would print the extended stats supported by the driver. + +.. Note:: + + * The ``--filelist <file_list>`` is a mandatory option for running inference tests. + * Options not supported by the tests are ignored if specified. + * Element-wise comparison is not supported when the output dtype is either fp8, fp16 + or bfloat16. This is applicable only when the tolerance is greater than zero and for + pre-quantized models only. + + +INFERENCE_ORDERED Test +~~~~~~~~~~~~~~~~~~~~~~ + +This is a functional test for validating end-to-end inference execution on an ML device. This +test configures the ML device and queue pairs as per the queue-pair related options (queue_pairs and +queue_size) specified by the user. Upon successful configuration of the device and queue pairs, +the first model specified through the filelist is loaded to the device and inferences are enqueued +by a pool of worker threads to the ML device. The total number of inferences enqueued for the model +is equal to the repetitions specified. A dedicated pool of worker threads would dequeue the +inferences from the device. The model is unloaded upon completion of all inferences for the model.
+The test continues loading and executing inference requests for all models specified +through the ``--filelist`` option in an ordered manner. + +.. _figure_mldev_inference_ordered: + +.. figure:: img/mldev_inference_ordered.* + + Execution of inference_ordered on a single model. + + +Example +^^^^^^^ + +Example command to run inference_ordered test: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_ordered --filelist model.bin,input.bin,output.bin + +Example command to run inference_ordered with output validation using a tolerance of ``1%``: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_ordered --filelist model.bin,input.bin,output.bin,reference.bin \ + --tolerance 1.0 + +Example command to run inference_ordered test with multiple queue-pairs and queue size: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_ordered --filelist model.bin,input.bin,output.bin \ + --queue_pairs 4 --queue_size 16 + +Example command to run inference_ordered test with a specific burst size: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_ordered --filelist model.bin,input.bin,output.bin \ + --burst_size 12 + + +INFERENCE_INTERLEAVE Test +~~~~~~~~~~~~~~~~~~~~~~~~~ + +This is a stress test for validating end-to-end inference execution on an ML device. The test +configures the ML device and queue pairs as per the queue-pair related options (queue_pairs +and queue_size) specified by the user. Upon successful configuration of the device and queue +pairs, all models specified through the filelist are loaded to the device. Inferences for multiple +models are enqueued by a pool of worker threads in parallel. Inference execution by the device is +interleaved between multiple models. The total number of inferences enqueued for a model is equal to +the repetitions specified.
An additional pool of threads would dequeue the inferences from the +device. Models are unloaded upon completion of inferences for all models loaded. + + +.. _figure_mldev_inference_interleave: + +.. figure:: img/mldev_inference_interleave.* + + Execution of inference_interleave on multiple models. + + +Example +^^^^^^^ + +Example command to run inference_interleave test: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_interleave --filelist model.bin,input.bin,output.bin + + +Example command to run inference_interleave test with multiple models: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_interleave --filelist model_A.bin,input_A.bin,output_A.bin \ + --filelist model_B.bin,input_B.bin,output_B.bin + + +Example command to run inference_interleave test with multiple models and output validation +using a tolerance of ``2.0%``: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_interleave \ + --filelist model_A.bin,input_A.bin,output_A.bin,reference_A.bin \ + --filelist model_B.bin,input_B.bin,output_B.bin,reference_B.bin \ + --tolerance 2.0 + +Example command to run inference_interleave test with multiple queue-pairs and queue size +and burst size: + +.. code-block:: console + + sudo <build_dir>/app/dpdk-test-mldev -c 0xf -a <PCI_ID> -- \ + --test=inference_interleave --filelist model.bin,input.bin,output.bin \ + --queue_pairs 8 --queue_size 12 --burst_size 16 + + +Debug mode +---------- + +ML tests can be executed in debug mode by enabling the option ``--debug``. Execution of tests in +debug mode enables additional prints. + +When a validation failure is observed, output from that buffer is written to the disk, with +filenames following the same convention as when the test passes. Additionally, the index of the +buffer is appended to the filenames.
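The output-validation rule described in the inference-tests section above (CRC32 match at zero tolerance, element-wise comparison otherwise) can be sketched as a hypothetical Python model. The application implements this in C over raw output buffers; the helper below only illustrates the decision logic:

```python
import struct
import zlib

# Hypothetical sketch of the mldev output-validation rule described above:
# with zero tolerance, the CRC32 of the output and reference buffers must
# match; with non-zero tolerance, every element must lie within
# tolerance % of the reference value.

def validate_output(output, reference, tolerance_pct):
    if tolerance_pct == 0.0:
        # CRC32 comparison of the raw buffers (here: packed float32).
        out = struct.pack(f"{len(output)}f", *output)
        ref = struct.pack(f"{len(reference)}f", *reference)
        return zlib.crc32(out) == zlib.crc32(ref)
    # Element-wise comparison within the tolerance range.
    for o, r in zip(output, reference):
        if abs(o - r) > abs(r) * tolerance_pct / 100.0:
            return False
    return True
```

With ``--tolerance 1.0``, an output element of 1.005 against a reference of 1.0 passes, while 1.02 fails; with ``--tolerance 0`` any bit-level difference fails.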