From patchwork Wed Jul 1 19:46:24 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 72638
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Wed, 1 Jul 2020 12:46:24 -0700
Message-Id: <20200701194650.10705-2-stephen@networkplumber.org>
In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org>
References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v3 01/27] eal: rename terms used for DPDK lcores

Replace the old master/slave lcore terminology with the more inclusive names initial/worker lcore. The old, visible API will stay for now.
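For illustration only (this snippet is not part of the patch), here is a minimal sketch of how an application would use the renamed calls after this series; the lcore_hello() worker function and overall flow are made up, while SKIP_INITIAL, RTE_LCORE_FOREACH_WORKER(), rte_get_initial_lcore() and the other EAL calls are the symbols touched in the diffs below:

#include <stdio.h>

#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static int
lcore_hello(void *arg)
{
	(void)arg; /* unused */

	/* runs once on each worker lcore */
	printf("hello from lcore %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	unsigned int lcore_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* formerly SKIP_MASTER: launch on every worker lcore while the
	 * initial (previously "master") lcore keeps running main() */
	rte_eal_mp_remote_launch(lcore_hello, NULL, SKIP_INITIAL);

	printf("initial lcore is %u\n", rte_get_initial_lcore());

	/* formerly RTE_LCORE_FOREACH_SLAVE */
	RTE_LCORE_FOREACH_WORKER(lcore_id)
		rte_eal_wait_lcore(lcore_id);

	return 0;
}
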
Change master2slave to new init2worker and vice-versa. This patch breaks the expected practice for new API's. The new rte_get_initial_lcore() will not go through the standard experimental API phase; there is no functional difference from the previous name. Signed-off-by: Stephen Hemminger --- doc/guides/nics/memif.rst | 4 +- lib/librte_eal/common/eal_common_launch.c | 36 ++++++------- lib/librte_eal/common/eal_common_lcore.c | 11 ++-- lib/librte_eal/common/eal_common_options.c | 62 +++++++++++----------- lib/librte_eal/common/eal_options.h | 4 +- lib/librte_eal/common/eal_private.h | 6 +-- lib/librte_eal/common/eal_thread.h | 6 +-- lib/librte_eal/common/rte_random.c | 2 +- lib/librte_eal/common/rte_service.c | 2 +- lib/librte_eal/freebsd/eal.c | 31 +++++++---- lib/librte_eal/freebsd/eal_thread.c | 34 ++++++------ lib/librte_eal/include/rte_eal.h | 4 +- lib/librte_eal/include/rte_eal_trace.h | 4 +- lib/librte_eal/include/rte_launch.h | 62 ++++++++++++---------- lib/librte_eal/include/rte_lcore.h | 29 +++++++--- lib/librte_eal/linux/eal.c | 24 ++++----- lib/librte_eal/linux/eal_memory.c | 10 ++-- lib/librte_eal/linux/eal_thread.c | 34 ++++++------ lib/librte_eal/rte_eal_version.map | 1 + lib/librte_eal/windows/eal.c | 18 ++++--- lib/librte_eal/windows/eal_thread.c | 32 +++++------ 21 files changed, 230 insertions(+), 186 deletions(-) diff --git a/doc/guides/nics/memif.rst b/doc/guides/nics/memif.rst index ddeebed25ccd..9c67d7141cbe 100644 --- a/doc/guides/nics/memif.rst +++ b/doc/guides/nics/memif.rst @@ -106,13 +106,13 @@ region n (no-zero-copy): +-----------------------+-------------------------------------------------------------------------+ | Rings | Buffers | +-----------+-----------+-----------------+---+---------------------------------------------------+ -| S2M rings | M2S rings | packet buffer 0 | . | pb ((1 << pmd->run.log2_ring_size)*(s2m + m2s))-1 | +| S2M rings | M2S rings | packet buffer 0 | . | pb ((1 << pmd->run.log2_ring_size)*(w2i + i2w))-1 | +-----------+-----------+-----------------+---+---------------------------------------------------+ S2M OR M2S Rings: +--------+--------+-----------------------+ -| ring 0 | ring 1 | ring num_s2m_rings - 1| +| ring 0 | ring 1 | ring num_w2i_rings - 1| +--------+--------+-----------------------+ ring 0: diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c index cf52d717f68e..43a0af196db2 100644 --- a/lib/librte_eal/common/eal_common_launch.c +++ b/lib/librte_eal/common/eal_common_launch.c @@ -21,55 +21,55 @@ * Wait until a lcore finished its job. */ int -rte_eal_wait_lcore(unsigned slave_id) +rte_eal_wait_lcore(unsigned worker_id) { - if (lcore_config[slave_id].state == WAIT) + if (lcore_config[worker_id].state == WAIT) return 0; - while (lcore_config[slave_id].state != WAIT && - lcore_config[slave_id].state != FINISHED) + while (lcore_config[worker_id].state != WAIT && + lcore_config[worker_id].state != FINISHED) rte_pause(); rte_rmb(); /* we are in finished state, go to wait state */ - lcore_config[slave_id].state = WAIT; - return lcore_config[slave_id].ret; + lcore_config[worker_id].state = WAIT; + return lcore_config[worker_id].ret; } /* - * Check that every SLAVE lcores are in WAIT state, then call - * rte_eal_remote_launch() for all of them. If call_master is true - * (set to CALL_MASTER), also call the function on the master lcore. + * Check that every WORKER lcores are in WAIT state, then call + * rte_eal_remote_launch() for all of them. 
If call_initial is true + * (set to CALL_INITIAL), also call the function on the initial lcore. */ int rte_eal_mp_remote_launch(int (*f)(void *), void *arg, - enum rte_rmt_call_master_t call_master) + enum rte_rmt_call_initial_t call_initial) { int lcore_id; - int master = rte_get_master_lcore(); + int initial = rte_get_initial_lcore(); /* check state of lcores */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (lcore_config[lcore_id].state != WAIT) return -EBUSY; } /* send messages to cores */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(f, arg, lcore_id); } - if (call_master == CALL_MASTER) { - lcore_config[master].ret = f(arg); - lcore_config[master].state = FINISHED; + if (call_initial == CALL_INITIAL) { + lcore_config[initial].ret = f(arg); + lcore_config[initial].state = FINISHED; } return 0; } /* - * Return the state of the lcore identified by slave_id. + * Return the state of the lcore identified by worker_id. */ enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned lcore_id) @@ -86,7 +86,7 @@ rte_eal_mp_wait_lcore(void) { unsigned lcore_id; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_wait_lcore(lcore_id); } } diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c index 5404922a87d2..a8c8b7206992 100644 --- a/lib/librte_eal/common/eal_common_lcore.c +++ b/lib/librte_eal/common/eal_common_lcore.c @@ -16,9 +16,14 @@ #include "eal_private.h" #include "eal_thread.h" +unsigned int rte_get_initial_lcore(void) +{ + return rte_eal_get_configuration()->initial_lcore; +} + unsigned int rte_get_master_lcore(void) { - return rte_eal_get_configuration()->master_lcore; + return rte_eal_get_configuration()->initial_lcore; } unsigned int rte_lcore_count(void) @@ -72,7 +77,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id) return cfg->lcore_role[lcore_id] == ROLE_RTE; } -unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap) +unsigned int rte_get_next_lcore(unsigned int i, int skip_initial, int wrap) { i++; if (wrap) @@ -80,7 +85,7 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap) while (i < RTE_MAX_LCORE) { if (!rte_lcore_is_enabled(i) || - (skip_master && (i == rte_get_master_lcore()))) { + (skip_initial && (i == rte_get_initial_lcore()))) { i++; if (wrap) i %= RTE_MAX_LCORE; diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index 24b223ebfd0f..48a82cce9ad7 100644 --- a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -79,7 +79,7 @@ eal_long_options[] = { {OPT_TRACE_DIR, 1, NULL, OPT_TRACE_DIR_NUM }, {OPT_TRACE_BUF_SIZE, 1, NULL, OPT_TRACE_BUF_SIZE_NUM }, {OPT_TRACE_MODE, 1, NULL, OPT_TRACE_MODE_NUM }, - {OPT_MASTER_LCORE, 1, NULL, OPT_MASTER_LCORE_NUM }, + {OPT_INITIAL_LCORE, 1, NULL, OPT_INITIAL_LCORE_NUM }, {OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM}, {OPT_NO_HPET, 0, NULL, OPT_NO_HPET_NUM }, {OPT_NO_HUGE, 0, NULL, OPT_NO_HUGE_NUM }, @@ -142,7 +142,7 @@ struct device_option { static struct device_option_list devopt_list = TAILQ_HEAD_INITIALIZER(devopt_list); -static int master_lcore_parsed; +static int initial_lcore_parsed; static int mem_parsed; static int core_parsed; @@ -485,12 +485,12 @@ eal_parse_service_coremask(const char *coremask) for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE; j++, idx++) { if ((1 << j) & val) { - /* handle master lcore already 
parsed */ + /* handle initial lcore already parsed */ uint32_t lcore = idx; - if (master_lcore_parsed && - cfg->master_lcore == lcore) { + if (initial_lcore_parsed && + cfg->initial_lcore == lcore) { RTE_LOG(ERR, EAL, - "lcore %u is master lcore, cannot use as service core\n", + "lcore %u is initial lcore, cannot use as service core\n", idx); return -1; } @@ -658,12 +658,12 @@ eal_parse_service_corelist(const char *corelist) min = idx; for (idx = min; idx <= max; idx++) { if (cfg->lcore_role[idx] != ROLE_SERVICE) { - /* handle master lcore already parsed */ + /* handle initial lcore already parsed */ uint32_t lcore = idx; - if (cfg->master_lcore == lcore && - master_lcore_parsed) { + if (cfg->initial_lcore == lcore && + initial_lcore_parsed) { RTE_LOG(ERR, EAL, - "Error: lcore %u is master lcore, cannot use as service core\n", + "Error: lcore %u is initial lcore, cannot use as service core\n", idx); return -1; } @@ -746,25 +746,25 @@ eal_parse_corelist(const char *corelist, int *cores) return 0; } -/* Changes the lcore id of the master thread */ +/* Changes the lcore id of the initial thread */ static int -eal_parse_master_lcore(const char *arg) +eal_parse_initial_lcore(const char *arg) { char *parsing_end; struct rte_config *cfg = rte_eal_get_configuration(); errno = 0; - cfg->master_lcore = (uint32_t) strtol(arg, &parsing_end, 0); + cfg->initial_lcore = (uint32_t) strtol(arg, &parsing_end, 0); if (errno || parsing_end[0] != 0) return -1; - if (cfg->master_lcore >= RTE_MAX_LCORE) + if (cfg->initial_lcore >= RTE_MAX_LCORE) return -1; - master_lcore_parsed = 1; + initial_lcore_parsed = 1; - /* ensure master core is not used as service core */ - if (lcore_config[cfg->master_lcore].core_role == ROLE_SERVICE) { + /* ensure initial core is not used as service core */ + if (lcore_config[cfg->initial_lcore].core_role == ROLE_SERVICE) { RTE_LOG(ERR, EAL, - "Error: Master lcore is used as a service core\n"); + "Error: Initial lcore is used as a service core\n"); return -1; } @@ -1502,10 +1502,10 @@ eal_parse_common_option(int opt, const char *optarg, conf->process_type = eal_parse_proc_type(optarg); break; - case OPT_MASTER_LCORE_NUM: - if (eal_parse_master_lcore(optarg) < 0) { + case OPT_INITIAL_LCORE_NUM: + if (eal_parse_initial_lcore(optarg) < 0) { RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_MASTER_LCORE "\n"); + OPT_INITIAL_LCORE "\n"); return -1; } break; @@ -1673,9 +1673,9 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg) RTE_CPU_AND(cpuset, cpuset, &default_set); - /* if no remaining cpu, use master lcore cpu affinity */ + /* if no remaining cpu, use initial lcore cpu affinity */ if (!CPU_COUNT(cpuset)) { - memcpy(cpuset, &lcore_config[rte_get_master_lcore()].cpuset, + memcpy(cpuset, &lcore_config[rte_get_initial_lcore()].cpuset, sizeof(*cpuset)); } } @@ -1707,12 +1707,12 @@ eal_adjust_config(struct internal_config *internal_cfg) if (internal_conf->process_type == RTE_PROC_AUTO) internal_conf->process_type = eal_proc_type_detect(); - /* default master lcore is the first one */ - if (!master_lcore_parsed) { - cfg->master_lcore = rte_get_next_lcore(-1, 0, 0); - if (cfg->master_lcore >= RTE_MAX_LCORE) + /* default initial lcore is the first one */ + if (!initial_lcore_parsed) { + cfg->initial_lcore = rte_get_next_lcore(-1, 0, 0); + if (cfg->initial_lcore >= RTE_MAX_LCORE) return -1; - lcore_config[cfg->master_lcore].core_role = ROLE_RTE; + lcore_config[cfg->initial_lcore].core_role = ROLE_RTE; } compute_ctrl_threads_cpuset(internal_cfg); @@ -1732,8 +1732,8 @@ 
eal_check_common_options(struct internal_config *internal_cfg) const struct internal_config *internal_conf = eal_get_internal_configuration(); - if (cfg->lcore_role[cfg->master_lcore] != ROLE_RTE) { - RTE_LOG(ERR, EAL, "Master lcore is not enabled for DPDK\n"); + if (cfg->lcore_role[cfg->initial_lcore] != ROLE_RTE) { + RTE_LOG(ERR, EAL, "Initial lcore is not enabled for DPDK\n"); return -1; } @@ -1831,7 +1831,7 @@ eal_common_usage(void) " '( )' can be omitted for single element group,\n" " '@' can be omitted if cpus and lcores have the same value\n" " -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n" - " --"OPT_MASTER_LCORE" ID Core ID that is used as master\n" + " --"OPT_INITIAL_LCORE" ID Core ID that is used as initial\n" " --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n" " -n CHANNELS Number of memory channels\n" " -m MB Memory to allocate (see also --"OPT_SOCKET_MEM")\n" diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h index 18e6da9ab37b..0ffe1a5ec0a3 100644 --- a/lib/librte_eal/common/eal_options.h +++ b/lib/librte_eal/common/eal_options.h @@ -43,8 +43,8 @@ enum { OPT_TRACE_BUF_SIZE_NUM, #define OPT_TRACE_MODE "trace-mode" OPT_TRACE_MODE_NUM, -#define OPT_MASTER_LCORE "master-lcore" - OPT_MASTER_LCORE_NUM, +#define OPT_INITIAL_LCORE "initial-lcore" + OPT_INITIAL_LCORE_NUM, #define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name" OPT_MBUF_POOL_OPS_NAME_NUM, #define OPT_PROC_TYPE "proc-type" diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h index 46bcae930590..9ef057d9b0f3 100644 --- a/lib/librte_eal/common/eal_private.h +++ b/lib/librte_eal/common/eal_private.h @@ -20,8 +20,8 @@ */ struct lcore_config { pthread_t thread_id; /**< pthread identifier */ - int pipe_master2slave[2]; /**< communication pipe with master */ - int pipe_slave2master[2]; /**< communication pipe with master */ + int pipe_init2worker[2]; /**< communication pipe with initial core */ + int pipe_worker2init[2]; /**< communication pipe with initial core */ lcore_function_t * volatile f; /**< function to call */ void * volatile arg; /**< argument of function */ @@ -42,7 +42,7 @@ extern struct lcore_config lcore_config[RTE_MAX_LCORE]; * The global RTE configuration structure. */ struct rte_config { - uint32_t master_lcore; /**< Id of the master lcore */ + uint32_t initial_lcore; /**< Id of the initial lcore */ uint32_t lcore_count; /**< Number of available logical cores. */ uint32_t numa_node_count; /**< Number of detected NUMA nodes. */ uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */ diff --git a/lib/librte_eal/common/eal_thread.h b/lib/librte_eal/common/eal_thread.h index b40ed249edab..35301d852bc2 100644 --- a/lib/librte_eal/common/eal_thread.h +++ b/lib/librte_eal/common/eal_thread.h @@ -16,12 +16,12 @@ __rte_noreturn void *eal_thread_loop(void *arg); /** - * Init per-lcore info for master thread + * Init per-lcore info for initial thread * * @param lcore_id - * identifier of master lcore + * identifier of initial lcore */ -void eal_thread_init_master(unsigned lcore_id); +void eal_thread_initial_lcore(unsigned lcore_id); /** * Get the NUMA socket id from cpu id. 
diff --git a/lib/librte_eal/common/rte_random.c b/lib/librte_eal/common/rte_random.c index b7a089ac4fe0..6bae53bdf659 100644 --- a/lib/librte_eal/common/rte_random.c +++ b/lib/librte_eal/common/rte_random.c @@ -122,7 +122,7 @@ struct rte_rand_state *__rte_rand_get_state(void) lcore_id = rte_lcore_id(); if (unlikely(lcore_id == LCORE_ID_ANY)) - lcore_id = rte_get_master_lcore(); + lcore_id = rte_get_initial_lcore(); return &rand_states[lcore_id]; } diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c index 6123a2124d33..6a80b1675559 100644 --- a/lib/librte_eal/common/rte_service.c +++ b/lib/librte_eal/common/rte_service.c @@ -106,7 +106,7 @@ rte_service_init(void) struct rte_config *cfg = rte_eal_get_configuration(); for (i = 0; i < RTE_MAX_LCORE; i++) { if (lcore_config[i].core_role == ROLE_SERVICE) { - if ((unsigned int)i == cfg->master_lcore) + if ((unsigned int)i == cfg->initial_lcore) continue; rte_service_lcore_add(i); count++; diff --git a/lib/librte_eal/freebsd/eal.c b/lib/librte_eal/freebsd/eal.c index 8c75cba79a71..c7cc79ce65c3 100644 --- a/lib/librte_eal/freebsd/eal.c +++ b/lib/librte_eal/freebsd/eal.c @@ -622,10 +622,14 @@ eal_check_mem_on_local_socket(void) int socket_id; const struct rte_config *config = rte_eal_get_configuration(); +<<<<<<< HEAD socket_id = rte_lcore_to_socket_id(config->master_lcore); +======= + socket_id = rte_lcore_to_socket_id(rte_config.initial_lcore); +>>>>>>> 28604a3e5a3a... eal: rename terms used for DPDK lcores if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n"); + RTE_LOG(WARNING, EAL, "WARNING: Initial core has no memory on local socket!\n"); } @@ -845,23 +849,32 @@ rte_eal_init(int argc, char **argv) eal_check_mem_on_local_socket(); - eal_thread_init_master(config->master_lcore); +<<<<<<< HEAD + eal_thread_initial_lcore(config->master_lcore); ret = eal_thread_dump_affinity(cpuset, sizeof(cpuset)); RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%p;cpuset=[%s%s])\n", config->master_lcore, thread_id, cpuset, +======= + eal_thread_set_initial_lcore(rte_config.initial_lcore); + + ret = eal_thread_dump_affinity(cpuset, sizeof(cpuset)); + + RTE_LOG(DEBUG, EAL, "Initial lcore %u is ready (tid=%p;cpuset=[%s%s])\n", + rte_config.initial_lcore, thread_id, cpuset, +>>>>>>> 28604a3e5a3a... eal: rename terms used for DPDK lcores ret == 0 ? "" : "..."); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { /* - * create communication pipes between master thread + * create communication pipes between initial thread * and children */ - if (pipe(lcore_config[i].pipe_master2slave) < 0) + if (pipe(lcore_config[i].pipe_init2worker) < 0) rte_panic("Cannot create pipe\n"); - if (pipe(lcore_config[i].pipe_slave2master) < 0) + if (pipe(lcore_config[i].pipe_worker2init) < 0) rte_panic("Cannot create pipe\n"); lcore_config[i].state = WAIT; @@ -874,15 +887,15 @@ rte_eal_init(int argc, char **argv) /* Set thread_name for aid in debugging. */ snprintf(thread_name, sizeof(thread_name), - "lcore-slave-%d", i); + "lcore-work-%d", i); rte_thread_setname(lcore_config[i].thread_id, thread_name); } /* - * Launch a dummy function on all slave lcores, so that master lcore + * Launch a dummy function on all worker lcores, so that initial lcore * knows they are all ready when this function returns. 
*/ - rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(sync_func, NULL, SKIP_INITIAL); rte_eal_mp_wait_lcore(); /* initialize services so vdevs register service during bus_probe. */ diff --git a/lib/librte_eal/freebsd/eal_thread.c b/lib/librte_eal/freebsd/eal_thread.c index b52019782ac6..5fff09c017c1 100644 --- a/lib/librte_eal/freebsd/eal_thread.c +++ b/lib/librte_eal/freebsd/eal_thread.c @@ -30,35 +30,35 @@ RTE_DEFINE_PER_LCORE(unsigned, _socket_id) = (unsigned)SOCKET_ID_ANY; RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset); /* - * Send a message to a slave lcore identified by slave_id to call a + * Send a message to a worker lcore identified by worker_id to call a * function f with argument arg. Once the execution is done, the * remote lcore switch in FINISHED state. */ int -rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id) +rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id) { int n; char c = 0; - int m2s = lcore_config[slave_id].pipe_master2slave[1]; - int s2m = lcore_config[slave_id].pipe_slave2master[0]; + int i2w = lcore_config[worker_id].pipe_init2worker[1]; + int w2i = lcore_config[worker_id].pipe_worker2init[0]; int rc = -EBUSY; - if (lcore_config[slave_id].state != WAIT) + if (lcore_config[worker_id].state != WAIT) goto finish; - lcore_config[slave_id].f = f; - lcore_config[slave_id].arg = arg; + lcore_config[worker_id].f = f; + lcore_config[worker_id].arg = arg; /* send message */ n = 0; while (n == 0 || (n < 0 && errno == EINTR)) - n = write(m2s, &c, 1); + n = write(i2w, &c, 1); if (n < 0) rte_panic("cannot write on configuration pipe\n"); /* wait ack */ do { - n = read(s2m, &c, 1); + n = read(w2i, &c, 1); } while (n < 0 && errno == EINTR); if (n <= 0) @@ -66,7 +66,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id) rc = 0; finish: - rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc); + rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc); return rc; } @@ -83,7 +83,7 @@ eal_thread_set_affinity(void) return rte_thread_set_affinity(&lcore_config[lcore_id].cpuset); } -void eal_thread_init_master(unsigned lcore_id) +void eal_thread_set_initial_lcore(unsigned lcore_id) { /* set the lcore ID in per-lcore memory area */ RTE_PER_LCORE(_lcore_id) = lcore_id; @@ -101,21 +101,21 @@ eal_thread_loop(__rte_unused void *arg) int n, ret; unsigned lcore_id; pthread_t thread_id; - int m2s, s2m; + int i2w, w2i; char cpuset[RTE_CPU_AFFINITY_STR_LEN]; thread_id = pthread_self(); /* retrieve our lcore_id from the configuration structure */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (thread_id == lcore_config[lcore_id].thread_id) break; } if (lcore_id == RTE_MAX_LCORE) rte_panic("cannot retrieve lcore id\n"); - m2s = lcore_config[lcore_id].pipe_master2slave[0]; - s2m = lcore_config[lcore_id].pipe_slave2master[1]; + i2w = lcore_config[lcore_id].pipe_init2worker[0]; + w2i = lcore_config[lcore_id].pipe_worker2init[1]; /* set the lcore ID in per-lcore memory area */ RTE_PER_LCORE(_lcore_id) = lcore_id; @@ -138,7 +138,7 @@ eal_thread_loop(__rte_unused void *arg) /* wait command */ do { - n = read(m2s, &c, 1); + n = read(i2w, &c, 1); } while (n < 0 && errno == EINTR); if (n <= 0) @@ -149,7 +149,7 @@ eal_thread_loop(__rte_unused void *arg) /* send ack */ n = 0; while (n == 0 || (n < 0 && errno == EINTR)) - n = write(s2m, &c, 1); + n = write(w2i, &c, 1); if (n < 0) rte_panic("cannot write on configuration pipe\n"); diff --git a/lib/librte_eal/include/rte_eal.h 
b/lib/librte_eal/include/rte_eal.h index 2f9ed298de63..c01b9e913d48 100644 --- a/lib/librte_eal/include/rte_eal.h +++ b/lib/librte_eal/include/rte_eal.h @@ -73,11 +73,11 @@ int rte_eal_iopl_init(void); /** * Initialize the Environment Abstraction Layer (EAL). * - * This function is to be executed on the MASTER lcore only, as soon + * This function is to be executed on the initial lcore only, as soon * as possible in the application's main() function. * * The function finishes the initialization process before main() is called. - * It puts the SLAVE lcores in the WAIT state. + * It puts the worker lcores in the WAIT state. * * When the multi-partition feature is supported, depending on the * configuration (if CONFIG_RTE_EAL_MAIN_PARTITION is disabled), this diff --git a/lib/librte_eal/include/rte_eal_trace.h b/lib/librte_eal/include/rte_eal_trace.h index bcfef0cfaa62..f2b47184d72f 100644 --- a/lib/librte_eal/include/rte_eal_trace.h +++ b/lib/librte_eal/include/rte_eal_trace.h @@ -210,10 +210,10 @@ RTE_TRACE_POINT( RTE_TRACE_POINT( rte_eal_trace_thread_remote_launch, RTE_TRACE_POINT_ARGS(int (*f)(void *), void *arg, - unsigned int slave_id, int rc), + unsigned int worker_id, int rc), rte_trace_point_emit_ptr(f); rte_trace_point_emit_ptr(arg); - rte_trace_point_emit_u32(slave_id); + rte_trace_point_emit_u32(worker_id); rte_trace_point_emit_int(rc); ) RTE_TRACE_POINT( diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h index 06a671752ace..9b68685d99d4 100644 --- a/lib/librte_eal/include/rte_launch.h +++ b/lib/librte_eal/include/rte_launch.h @@ -32,12 +32,12 @@ typedef int (lcore_function_t)(void *); /** * Launch a function on another lcore. * - * To be executed on the MASTER lcore only. + * To be executed on the INITIAL lcore only. * - * Sends a message to a slave lcore (identified by the slave_id) that + * Sends a message to a worker lcore (identified by the id) that * is in the WAIT state (this is true after the first call to * rte_eal_init()). This can be checked by first calling - * rte_eal_wait_lcore(slave_id). + * rte_eal_wait_lcore(id). * * When the remote lcore receives the message, it switches to * the RUNNING state, then calls the function f with argument arg. Once the @@ -45,7 +45,7 @@ typedef int (lcore_function_t)(void *); * the return value of f is stored in a local variable to be read using * rte_eal_wait_lcore(). * - * The MASTER lcore returns as soon as the message is sent and knows + * The INITIAL lcore returns as soon as the message is sent and knows * nothing about the completion of f. * * Note: This function is not designed to offer optimum @@ -56,37 +56,43 @@ typedef int (lcore_function_t)(void *); * The function to be called. * @param arg * The argument for the function. - * @param slave_id + * @param id * The identifier of the lcore on which the function should be executed. * @return * - 0: Success. Execution of function f started on the remote lcore. * - (-EBUSY): The remote lcore is not in a WAIT state. */ -int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id); +int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned id); /** - * This enum indicates whether the master core must execute the handler + * This enum indicates whether the initial core must execute the handler * launched on all logical cores. */ -enum rte_rmt_call_master_t { - SKIP_MASTER = 0, /**< lcore handler not executed by master core. */ - CALL_MASTER, /**< lcore handler executed by master core. 
*/ +enum rte_rmt_call_initial_t { + SKIP_INITIAL = 0, /**< lcore handler not executed by initial core. */ + CALL_INITIAL, /**< lcore handler executed by initial core. */ }; +/** + * Deprecated backward compatiable definitions + */ +#define SKIP_MASTER SKIP_INITIAL +#define CALL_MASTER CALL_INITIAL + /** * Launch a function on all lcores. * - * Check that each SLAVE lcore is in a WAIT state, then call + * Check that each worker lcore is in a WAIT state, then call * rte_eal_remote_launch() for each lcore. * * @param f * The function to be called. * @param arg * The argument for the function. - * @param call_master - * If call_master set to SKIP_MASTER, the MASTER lcore does not call - * the function. If call_master is set to CALL_MASTER, the function - * is also called on master before returning. In any case, the master + * @param call_initial + * If call_initial set to SKIP_INITIAL, the INITIAL lcore does not call + * the function. If call_initial is set to CALL_INITIAL, the function + * is also called on initial before returning. In any case, the initial * lcore returns as soon as it finished its job and knows nothing * about the completion of f on the other lcores. * @return @@ -95,49 +101,49 @@ enum rte_rmt_call_master_t { * case, no message is sent to any of the lcores. */ int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg, - enum rte_rmt_call_master_t call_master); + enum rte_rmt_call_initial_t call_initial); /** - * Get the state of the lcore identified by slave_id. + * Get the state of the lcore identified by id. * - * To be executed on the MASTER lcore only. + * To be executed on the INITIAL lcore only. * - * @param slave_id + * @param id * The identifier of the lcore. * @return * The state of the lcore. */ -enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned slave_id); +enum rte_lcore_state_t rte_eal_get_lcore_state(unsigned id); /** * Wait until an lcore finishes its job. * - * To be executed on the MASTER lcore only. + * To be executed on the INITIAL lcore only. * - * If the slave lcore identified by the slave_id is in a FINISHED state, + * If the lcore identified by the id is in a FINISHED state, * switch to the WAIT state. If the lcore is in RUNNING state, wait until * the lcore finishes its job and moves to the FINISHED state. * - * @param slave_id + * @param id * The identifier of the lcore. * @return - * - 0: If the lcore identified by the slave_id is in a WAIT state. + * - 0: If the lcore identified by the id is in a WAIT state. * - The value that was returned by the previous remote launch - * function call if the lcore identified by the slave_id was in a + * function call if the lcore identified by the id was in a * FINISHED or RUNNING state. In this case, it changes the state * of the lcore to WAIT. */ -int rte_eal_wait_lcore(unsigned slave_id); +int rte_eal_wait_lcore(unsigned id); /** * Wait until all lcores finish their jobs. * - * To be executed on the MASTER lcore only. Issue an + * To be executed on the INITIAL lcore only. Issue an * rte_eal_wait_lcore() for every lcore. The return values are * ignored. * * After a call to rte_eal_mp_wait_lcore(), the caller can assume - * that all slave lcores are in a WAIT state. + * that all worker lcores are in a WAIT state. 
*/ void rte_eal_mp_wait_lcore(void); diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h index 339046bc8691..069cb1f427b9 100644 --- a/lib/librte_eal/include/rte_lcore.h +++ b/lib/librte_eal/include/rte_lcore.h @@ -54,10 +54,18 @@ rte_lcore_id(void) } /** - * Get the id of the master lcore + * Get the id of the initial lcore * * @return - * the id of the master lcore + * the id of the initial lcore + */ +unsigned int rte_get_initial_lcore(void); + +/** + * Deprecated API to get the id of the initial lcore + * + * @return + * the id of the initial lcore */ unsigned int rte_get_master_lcore(void); @@ -179,15 +187,15 @@ int rte_lcore_is_enabled(unsigned int lcore_id); * * @param i * The current lcore (reference). - * @param skip_master - * If true, do not return the ID of the master lcore. + * @param skip_initial + * If true, do not return the ID of the initial lcore. * @param wrap * If true, go back to 0 when RTE_MAX_LCORE is reached; otherwise, * return RTE_MAX_LCORE. * @return * The next lcore_id or RTE_MAX_LCORE if not found. */ -unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap); +unsigned int rte_get_next_lcore(unsigned int i, int skip_initial, int wrap); /** * Macro to browse all running lcores. @@ -198,13 +206,20 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap); i = rte_get_next_lcore(i, 0, 0)) /** - * Macro to browse all running lcores except the master lcore. + * Macro to browse all running lcores except the initial lcore. */ -#define RTE_LCORE_FOREACH_SLAVE(i) \ +#define RTE_LCORE_FOREACH_WORKER(i) \ for (i = rte_get_next_lcore(-1, 1, 0); \ imaster_lcore); + socket_id = rte_lcore_to_socket_id(config->initial_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Master core has no memory on local socket!\n"); + RTE_LOG(WARNING, EAL, "WARNING: Initial core has no memory on local socket!\n"); } static int @@ -1184,23 +1184,23 @@ rte_eal_init(int argc, char **argv) eal_check_mem_on_local_socket(); - eal_thread_init_master(config->master_lcore); + eal_thread_initial_lcore(config->initial_lcore); ret = eal_thread_dump_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Master lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", - config->master_lcore, (uintptr_t)thread_id, cpuset, + RTE_LOG(DEBUG, EAL, "Initial lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + config->initial_lcore, (uintptr_t)thread_id, cpuset, ret == 0 ? "" : "..."); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { /* - * create communication pipes between master thread + * create communication pipes between initial thread * and children */ - if (pipe(lcore_config[i].pipe_master2slave) < 0) + if (pipe(lcore_config[i].pipe_init2worker) < 0) rte_panic("Cannot create pipe\n"); - if (pipe(lcore_config[i].pipe_slave2master) < 0) + if (pipe(lcore_config[i].pipe_worker2init) < 0) rte_panic("Cannot create pipe\n"); lcore_config[i].state = WAIT; @@ -1213,7 +1213,7 @@ rte_eal_init(int argc, char **argv) /* Set thread_name for aid in debugging. */ snprintf(thread_name, sizeof(thread_name), - "lcore-slave-%d", i); + "lcore-work-%d", i); ret = rte_thread_setname(lcore_config[i].thread_id, thread_name); if (ret != 0) @@ -1222,10 +1222,10 @@ rte_eal_init(int argc, char **argv) } /* - * Launch a dummy function on all slave lcores, so that master lcore + * Launch a dummy function on all worker lcores, so that initial lcore * knows they are all ready when this function returns. 
*/ - rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(sync_func, NULL, SKIP_INITIAL); rte_eal_mp_wait_lcore(); /* initialize services so vdevs register service during bus_probe. */ diff --git a/lib/librte_eal/linux/eal_memory.c b/lib/librte_eal/linux/eal_memory.c index 89725291b0ce..1cbb8aab6016 100644 --- a/lib/librte_eal/linux/eal_memory.c +++ b/lib/librte_eal/linux/eal_memory.c @@ -1737,7 +1737,7 @@ memseg_primary_init_32(void) /* the allocation logic is a little bit convoluted, but here's how it * works, in a nutshell: * - if user hasn't specified on which sockets to allocate memory via - * --socket-mem, we allocate all of our memory on master core socket. + * --socket-mem, we allocate all of our memory on initial core socket. * - if user has specified sockets to allocate memory on, there may be * some "unused" memory left (e.g. if user has specified --socket-mem * such that not all memory adds up to 2 gigabytes), so add it to all @@ -1751,7 +1751,7 @@ memseg_primary_init_32(void) for (i = 0; i < rte_socket_count(); i++) { int hp_sizes = (int) internal_conf->num_hugepage_sizes; uint64_t max_socket_mem, cur_socket_mem; - unsigned int master_lcore_socket; + unsigned int initial_lcore_socket; struct rte_config *cfg = rte_eal_get_configuration(); bool skip; @@ -1767,10 +1767,10 @@ memseg_primary_init_32(void) skip = active_sockets != 0 && internal_conf->socket_mem[socket_id] == 0; /* ...or if we didn't specifically request memory on *any* - * socket, and this is not master lcore + * socket, and this is not initial lcore */ - master_lcore_socket = rte_lcore_to_socket_id(cfg->master_lcore); - skip |= active_sockets == 0 && socket_id != master_lcore_socket; + initial_lcore_socket = rte_lcore_to_socket_id(cfg->initial_lcore); + skip |= active_sockets == 0 && socket_id != initial_lcore_socket; if (skip) { RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n", diff --git a/lib/librte_eal/linux/eal_thread.c b/lib/librte_eal/linux/eal_thread.c index cd9d6e0ebf5b..324e9a73060b 100644 --- a/lib/librte_eal/linux/eal_thread.c +++ b/lib/librte_eal/linux/eal_thread.c @@ -30,35 +30,35 @@ RTE_DEFINE_PER_LCORE(unsigned, _socket_id) = (unsigned)SOCKET_ID_ANY; RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset); /* - * Send a message to a slave lcore identified by slave_id to call a + * Send a message to a worker lcore identified by worker_id to call a * function f with argument arg. Once the execution is done, the * remote lcore switch in FINISHED state. 
*/ int -rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id) +rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned worker_id) { int n; char c = 0; - int m2s = lcore_config[slave_id].pipe_master2slave[1]; - int s2m = lcore_config[slave_id].pipe_slave2master[0]; + int i2w = lcore_config[worker_id].pipe_init2worker[1]; + int w2i = lcore_config[worker_id].pipe_worker2init[0]; int rc = -EBUSY; - if (lcore_config[slave_id].state != WAIT) + if (lcore_config[worker_id].state != WAIT) goto finish; - lcore_config[slave_id].f = f; - lcore_config[slave_id].arg = arg; + lcore_config[worker_id].f = f; + lcore_config[worker_id].arg = arg; /* send message */ n = 0; while (n == 0 || (n < 0 && errno == EINTR)) - n = write(m2s, &c, 1); + n = write(i2w, &c, 1); if (n < 0) rte_panic("cannot write on configuration pipe\n"); /* wait ack */ do { - n = read(s2m, &c, 1); + n = read(w2i, &c, 1); } while (n < 0 && errno == EINTR); if (n <= 0) @@ -66,7 +66,7 @@ rte_eal_remote_launch(int (*f)(void *), void *arg, unsigned slave_id) rc = 0; finish: - rte_eal_trace_thread_remote_launch(f, arg, slave_id, rc); + rte_eal_trace_thread_remote_launch(f, arg, worker_id, rc); return rc; } @@ -83,7 +83,7 @@ eal_thread_set_affinity(void) return rte_thread_set_affinity(&lcore_config[lcore_id].cpuset); } -void eal_thread_init_master(unsigned lcore_id) +void eal_thread_initial_lcore(unsigned lcore_id) { /* set the lcore ID in per-lcore memory area */ RTE_PER_LCORE(_lcore_id) = lcore_id; @@ -101,21 +101,21 @@ eal_thread_loop(__rte_unused void *arg) int n, ret; unsigned lcore_id; pthread_t thread_id; - int m2s, s2m; + int i2w, w2i; char cpuset[RTE_CPU_AFFINITY_STR_LEN]; thread_id = pthread_self(); /* retrieve our lcore_id from the configuration structure */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (thread_id == lcore_config[lcore_id].thread_id) break; } if (lcore_id == RTE_MAX_LCORE) rte_panic("cannot retrieve lcore id\n"); - m2s = lcore_config[lcore_id].pipe_master2slave[0]; - s2m = lcore_config[lcore_id].pipe_slave2master[1]; + i2w = lcore_config[lcore_id].pipe_init2worker[0]; + w2i = lcore_config[lcore_id].pipe_worker2init[1]; /* set the lcore ID in per-lcore memory area */ RTE_PER_LCORE(_lcore_id) = lcore_id; @@ -138,7 +138,7 @@ eal_thread_loop(__rte_unused void *arg) /* wait command */ do { - n = read(m2s, &c, 1); + n = read(i2w, &c, 1); } while (n < 0 && errno == EINTR); if (n <= 0) @@ -149,7 +149,7 @@ eal_thread_loop(__rte_unused void *arg) /* send ack */ n = 0; while (n == 0 || (n < 0 && errno == EINTR)) - n = write(s2m, &c, 1); + n = write(w2i, &c, 1); if (n < 0) rte_panic("cannot write on configuration pipe\n"); diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map index 196eef5afab7..fb8f8a32beaf 100644 --- a/lib/librte_eal/rte_eal_version.map +++ b/lib/librte_eal/rte_eal_version.map @@ -79,6 +79,7 @@ DPDK_20.0 { rte_hexdump; rte_hypervisor_get; rte_hypervisor_get_name; + rte_init_lcore_id; rte_intr_allow_others; rte_intr_callback_register; rte_intr_callback_unregister; diff --git a/lib/librte_eal/windows/eal.c b/lib/librte_eal/windows/eal.c index eb10b4ef9689..f3ff4f921bcc 100644 --- a/lib/librte_eal/windows/eal.c +++ b/lib/librte_eal/windows/eal.c @@ -273,6 +273,7 @@ rte_eal_init(int argc, char **argv) if (fctret < 0) exit(1); +<<<<<<< HEAD /* Prevent creation of shared memory files. 
*/ if (internal_conf->in_memory == 0) { RTE_LOG(WARNING, EAL, "Multi-process support is requested, " @@ -333,7 +334,7 @@ rte_eal_init(int argc, char **argv) return -1; } - eal_thread_init_master(config->master_lcore); + eal_thread_initial_lcore(config->master_lcore); bscan = rte_bus_scan(); if (bscan < 0) { @@ -341,17 +342,20 @@ rte_eal_init(int argc, char **argv) rte_errno = ENODEV; return -1; } +======= + eal_thread_set_initial_lcore(rte_config.initial_lcore); +>>>>>>> 28604a3e5a3a... eal: rename terms used for DPDK lcores - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { /* - * create communication pipes between master thread + * create communication pipes between initial thread * and children */ - if (_pipe(lcore_config[i].pipe_master2slave, + if (_pipe(lcore_config[i].pipe_init2worker, sizeof(char), _O_BINARY) < 0) rte_panic("Cannot create pipe\n"); - if (_pipe(lcore_config[i].pipe_slave2master, + if (_pipe(lcore_config[i].pipe_worker2init, sizeof(char), _O_BINARY) < 0) rte_panic("Cannot create pipe\n"); @@ -363,10 +367,10 @@ rte_eal_init(int argc, char **argv) } /* - * Launch a dummy function on all slave lcores, so that master lcore + * Launch a dummy function on all worker lcores, so that initial lcore * knows they are all ready when this function returns. */ - rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(sync_func, NULL, SKIP_INITIAL); rte_eal_mp_wait_lcore(); return fctret; } diff --git a/lib/librte_eal/windows/eal_thread.c b/lib/librte_eal/windows/eal_thread.c index 3dd56519c99a..b3a8a47be313 100644 --- a/lib/librte_eal/windows/eal_thread.c +++ b/lib/librte_eal/windows/eal_thread.c @@ -21,34 +21,34 @@ RTE_DEFINE_PER_LCORE(unsigned int, _socket_id) = (unsigned int)SOCKET_ID_ANY; RTE_DEFINE_PER_LCORE(rte_cpuset_t, _cpuset); /* - * Send a message to a slave lcore identified by slave_id to call a + * Send a message to a worker lcore identified by worker_id to call a * function f with argument arg. Once the execution is done, the * remote lcore switch in FINISHED state. 
*/ int -rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id) +rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id) { int n; char c = 0; - int m2s = lcore_config[slave_id].pipe_master2slave[1]; - int s2m = lcore_config[slave_id].pipe_slave2master[0]; + int i2w = lcore_config[worker_id].pipe_init2worker[1]; + int w2i = lcore_config[worker_id].pipe_worker2init[0]; - if (lcore_config[slave_id].state != WAIT) + if (lcore_config[worker_id].state != WAIT) return -EBUSY; - lcore_config[slave_id].f = f; - lcore_config[slave_id].arg = arg; + lcore_config[worker_id].f = f; + lcore_config[worker_id].arg = arg; /* send message */ n = 0; while (n == 0 || (n < 0 && errno == EINTR)) - n = _write(m2s, &c, 1); + n = _write(i2w, &c, 1); if (n < 0) rte_panic("cannot write on configuration pipe\n"); /* wait ack */ do { - n = _read(s2m, &c, 1); + n = _read(w2i, &c, 1); } while (n < 0 && errno == EINTR); if (n <= 0) @@ -58,7 +58,7 @@ rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int slave_id) } void -eal_thread_init_master(unsigned int lcore_id) +eal_thread_set_initial_lcore(unsigned int lcore_id) { /* set the lcore ID in per-lcore memory area */ RTE_PER_LCORE(_lcore_id) = lcore_id; @@ -72,21 +72,21 @@ eal_thread_loop(void *arg __rte_unused) int n, ret; unsigned int lcore_id; pthread_t thread_id; - int m2s, s2m; + int i2w, w2i; char cpuset[RTE_CPU_AFFINITY_STR_LEN]; thread_id = pthread_self(); /* retrieve our lcore_id from the configuration structure */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (thread_id == lcore_config[lcore_id].thread_id) break; } if (lcore_id == RTE_MAX_LCORE) rte_panic("cannot retrieve lcore id\n"); - m2s = lcore_config[lcore_id].pipe_master2slave[0]; - s2m = lcore_config[lcore_id].pipe_slave2master[1]; + i2w = lcore_config[lcore_id].pipe_init2worker[0]; + w2i = lcore_config[lcore_id].pipe_worker2init[1]; /* set the lcore ID in per-lcore memory area */ RTE_PER_LCORE(_lcore_id) = lcore_id; @@ -100,7 +100,7 @@ eal_thread_loop(void *arg __rte_unused) /* wait command */ do { - n = _read(m2s, &c, 1); + n = _read(i2w, &c, 1); } while (n < 0 && errno == EINTR); if (n <= 0) @@ -111,7 +111,7 @@ eal_thread_loop(void *arg __rte_unused) /* send ack */ n = 0; while (n == 0 || (n < 0 && errno == EINTR)) - n = _write(s2m, &c, 1); + n = _write(w2i, &c, 1); if (n < 0) rte_panic("cannot write on configuration pipe\n"); From patchwork Wed Jul 1 19:46:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72639 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 05658A0350; Wed, 1 Jul 2020 21:47:22 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9C5801C1E3; Wed, 1 Jul 2020 21:47:05 +0200 (CEST) Received: from mail-pj1-f66.google.com (mail-pj1-f66.google.com [209.85.216.66]) by dpdk.org (Postfix) with ESMTP id A265F1C1C0 for ; Wed, 1 Jul 2020 21:47:03 +0200 (CEST) Received: by mail-pj1-f66.google.com with SMTP id k71so7881435pje.0 for ; Wed, 01 Jul 2020 12:47:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Wed, 1 Jul 2020 12:46:25 -0700
Message-Id: <20200701194650.10705-3-stephen@networkplumber.org>
In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org>
References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v3 02/27] kni: fix reference to master/slave process

In DPDK, the correct terms for processes are primary/secondary. This is a bug fix, not a terminology change for the new release.

Fixes: f2e7592c474c ("kni: fix multi-process support")

Signed-off-by: Stephen Hemminger
---
 lib/librte_kni/rte_kni.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h index f1bb782c68ea..855facd1a319 100644 --- a/lib/librte_kni/rte_kni.h +++ b/lib/librte_kni/rte_kni.h @@ -212,7 +212,7 @@ const char *rte_kni_get_name(const struct rte_kni *kni); /** * Register KNI request handling for a specified port,and it can - * be called by master process or slave process. + * be called by primary process or secondary process. * * @param kni * pointer to struct rte_kni. 
From patchwork Wed Jul 1 19:46:26 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 72640
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Wed, 1 Jul 2020 12:46:26 -0700
Message-Id: <20200701194650.10705-4-stephen@networkplumber.org>
In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org>
References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v3 03/27] bbdev: rename master to initial lcore

Conform to new API. 
Signed-off-by: Stephen Hemminger --- examples/bbdev_app/main.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c index 68a46050c048..d1db739d4e2f 100644 --- a/examples/bbdev_app/main.c +++ b/examples/bbdev_app/main.c @@ -1042,7 +1042,7 @@ main(int argc, char **argv) struct stats_lcore_params stats_lcore; struct rte_ring *enc_to_dec_ring; bool stats_thread_started = false; - unsigned int master_lcore_id = rte_get_master_lcore(); + unsigned int initial_lcore_id = rte_get_initial_lcore(); rte_atomic16_init(&global_exit_flag); @@ -1145,9 +1145,9 @@ main(int argc, char **argv) stats_lcore.app_params = &app_params; stats_lcore.lconf = lcore_conf; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (lcore_conf[lcore_id].core_type != 0) - /* launch per-lcore processing loop on slave lcores */ + /* launch per-lcore processing loop on worker lcores */ rte_eal_remote_launch(processing_loop, &lcore_conf[lcore_id], lcore_id); else if (!stats_thread_started) { @@ -1159,15 +1159,15 @@ main(int argc, char **argv) } if (!stats_thread_started && - lcore_conf[master_lcore_id].core_type != 0) + lcore_conf[initial_lcore_id].core_type != 0) rte_exit(EXIT_FAILURE, "Not enough lcores to run the statistics printing loop!"); - else if (lcore_conf[master_lcore_id].core_type != 0) - processing_loop(&lcore_conf[master_lcore_id]); + else if (lcore_conf[initial_lcore_id].core_type != 0) + processing_loop(&lcore_conf[initial_lcore_id]); else if (!stats_thread_started) stats_loop(&stats_lcore); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { ret |= rte_eal_wait_lcore(lcore_id); } From patchwork Wed Jul 1 19:46:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72641 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 25FC7A0350; Wed, 1 Jul 2020 21:47:41 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4ADFA1C28F; Wed, 1 Jul 2020 21:47:09 +0200 (CEST) Received: from mail-pj1-f68.google.com (mail-pj1-f68.google.com [209.85.216.68]) by dpdk.org (Postfix) with ESMTP id 21B721C1EB for ; Wed, 1 Jul 2020 21:47:06 +0200 (CEST) Received: by mail-pj1-f68.google.com with SMTP id o22so6402567pjw.2 for ; Wed, 01 Jul 2020 12:47:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=TdNNjCwT7lxNMzLIE/8EZU8zsFA4GxHsEZbek+TXv48=; b=Aftf7QVio8JYa3fOHSwS5KuOsbs5t4UmiqQX/yvzGWBk2KeEDer54xg7ATOoa9YO4S VGjqt2c7Pla6LaNn0/uRPKLInzUA3RuVPcIHhsnt+Qy56LhTeKmfBQBbn+G/lgYOgISw z/7O1PkNgBl87nm+nuZ8Ovn1U7ASaHyFOjGeTy56w59L+/AHzljxs8NXlxggoLPNuSQw xRyEjPZh13NiJFov3JQER4T5SB8cdpYsvEGmfJCUguRA4kXZgry5DI+8uXqEguLSaTKb 2Ah4QgMjzbi4qyck5Ka12jR7HOPKlJ8mjGNm6empwHryj06+BrzIO6imxkr499ikhW6A 5txg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=TdNNjCwT7lxNMzLIE/8EZU8zsFA4GxHsEZbek+TXv48=; b=TKkLu1w5PDLrAfsIdMU+LdtP9ji3h6U+30g49VYoDHe3XhJpqvClAUxJD/citweTVJ 
LddlImZGcgM0gfipTLumTjVTpFk5ta5XYeOlYz4G7bsJrt6f4GMq94Hn1nTT03Cxm5GP tyapVEuAJmtJF0n9qWOf+YQe63+mDDyRtcZOyLtZkVTWWPDV1fYK19hlWOPIzZhlewCJ c+kA9fcaPZrdOkWyVxMvHHLTMk4061nsfsdJsAeU22uawUY+/aY3cM7wJMsmDUwatv0E 5QZG6WvIK5H3luX4UC7J/kXUq7s22qZIG7Rhzr2AE3hujOUon+q+XG4nna5LkZOpA2G9 XJuQ== X-Gm-Message-State: AOAM533fzYxXhY9s4lRtpBB0VuRsjpzVrSx89dnqjxUVOek4u0tWB5Co c5aFGALJouvHmLh2zxoGBhD++5eIWWM= X-Google-Smtp-Source: ABdhPJwCfpnumg9rlRbRjldh/F11oiVhAbQEMlAbyzus9rinF8XrHsYIJ5A0VVFqumCOYY9ym648Zw== X-Received: by 2002:a17:90a:d998:: with SMTP id d24mr28553472pjv.43.1593632825011; Wed, 01 Jul 2020 12:47:05 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:04 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:27 -0700 Message-Id: <20200701194650.10705-5-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 04/27] librte_power: change reference to rte_master_lcore X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" New API name is rte_get_initial_lcore() Signed-off-by: Stephen Hemminger --- lib/librte_power/rte_power_empty_poll.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/librte_power/rte_power_empty_poll.c b/lib/librte_power/rte_power_empty_poll.c index 70c07b1533f3..d54a09b9d6a4 100644 --- a/lib/librte_power/rte_power_empty_poll.c +++ b/lib/librte_power/rte_power_empty_poll.c @@ -452,7 +452,7 @@ rte_power_empty_poll_stat_init(struct ep_params **eptr, uint8_t *freq_tlb, if (get_freq_index(LOW) > total_avail_freqs[i]) return -1; - if (rte_get_master_lcore() != i) { + if (rte_get_initial_lcore() != i) { w->wrk_stats[i].lcore_id = i; set_policy(&w->wrk_stats[i], policy); } From patchwork Wed Jul 1 19:46:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72642 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 96D5EA0350; Wed, 1 Jul 2020 21:47:49 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 730611C2AA; Wed, 1 Jul 2020 21:47:10 +0200 (CEST) Received: from mail-pl1-f194.google.com (mail-pl1-f194.google.com [209.85.214.194]) by dpdk.org (Postfix) with ESMTP id ADF2A1C23C for ; Wed, 1 Jul 2020 21:47:07 +0200 (CEST) Received: by mail-pl1-f194.google.com with SMTP id x8so9414725plm.10 for ; Wed, 01 Jul 2020 12:47:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HfnmXZXh3bGBw/B/5Wx3sUTDIvXrpYdLmyIxCPqtVko=; b=ajfuwJiB7UHJHLUiF5E7ME0Exn+OYUPc4RvbZj5tM8drpYgfebOB7i1KPODmTo/sQe 0fLmgJsEEXBMlWZn0p46QLz+AVtXJmqjoCa7tCaH5LHn6b0o/KG/3T1tdvvrXlUmLlXq 
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Wed, 1 Jul 2020 12:46:28 -0700
Message-Id: <20200701194650.10705-6-stephen@networkplumber.org>
In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org>
References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v3 05/27] drivers: replace master/slave terminology

Replace rte_get_master_lcore() with rte_get_initial_lcore().
Replace RTE_LCORE_FOREACH_SLAVE with RTE_LCORE_FOREACH_WORKER Signed-off-by: Stephen Hemminger --- drivers/bus/dpaa/dpaa_bus.c | 2 +- drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 6 +++--- drivers/net/bnxt/bnxt_ring.c | 4 ++-- drivers/net/mvpp2/mrvl_ethdev.c | 6 +++--- drivers/net/qede/base/bcm_osal.c | 4 ++-- drivers/net/softnic/rte_eth_softnic_thread.c | 4 ++-- 6 files changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index 591e28c1e709..81d62b02428f 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -260,7 +260,7 @@ int rte_dpaa_portal_init(void *arg) BUS_INIT_FUNC_TRACE(); if ((size_t)arg == 1 || lcore == LCORE_ID_ANY) - lcore = rte_get_master_lcore(); + lcore = rte_get_initial_lcore(); else if (lcore >= RTE_MAX_LCORE) return -1; diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c index 21c535f2fbad..d498161fe427 100644 --- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c +++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c @@ -195,7 +195,7 @@ dpaa2_configure_stashing(struct dpaa2_dpio_dev *dpio_dev, int lcoreid) /* Set the Stashing Destination */ if (lcoreid < 0) { - lcoreid = rte_get_master_lcore(); + lcoreid = rte_get_initial_lcore(); if (lcoreid < 0) { DPAA2_BUS_ERR("Getting CPU Index failed"); return -1; @@ -259,7 +259,7 @@ dpaa2_affine_qbman_swp(void) uint64_t tid = syscall(SYS_gettid); if (lcore_id == LCORE_ID_ANY) - lcore_id = rte_get_master_lcore(); + lcore_id = rte_get_initial_lcore(); /* if the core id is not supported */ else if (lcore_id >= RTE_MAX_LCORE) return -1; @@ -307,7 +307,7 @@ dpaa2_affine_qbman_ethrx_swp(void) uint64_t tid = syscall(SYS_gettid); if (lcore_id == LCORE_ID_ANY) - lcore_id = rte_get_master_lcore(); + lcore_id = rte_get_initial_lcore(); /* if the core id is not supported */ else if (lcore_id >= RTE_MAX_LCORE) return -1; diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index 24a947f27823..b649e3317ea1 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -429,7 +429,7 @@ int bnxt_alloc_rxtx_nq_ring(struct bnxt *bp) if (!BNXT_HAS_NQ(bp) || bp->rxtx_nq_ring) return 0; - socket_id = rte_lcore_to_socket_id(rte_get_master_lcore()); + socket_id = rte_lcore_to_socket_id(rte_get_initial_lcore()); nqr = rte_zmalloc_socket("nqr", sizeof(struct bnxt_cp_ring_info), @@ -819,7 +819,7 @@ int bnxt_alloc_async_ring_struct(struct bnxt *bp) if (BNXT_NUM_ASYNC_CPR(bp) == 0) return 0; - socket_id = rte_lcore_to_socket_id(rte_get_master_lcore()); + socket_id = rte_lcore_to_socket_id(rte_get_initial_lcore()); cpr = rte_zmalloc_socket("cpr", sizeof(struct bnxt_cp_ring_info), diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c index 9037274327b1..4ffd81d903c4 100644 --- a/drivers/net/mvpp2/mrvl_ethdev.c +++ b/drivers/net/mvpp2/mrvl_ethdev.c @@ -816,7 +816,7 @@ mrvl_flush_bpool(struct rte_eth_dev *dev) unsigned int core_id = rte_lcore_id(); if (core_id == LCORE_ID_ANY) - core_id = rte_get_master_lcore(); + core_id = rte_get_initial_lcore(); hif = mrvl_get_hif(priv, core_id); @@ -1620,7 +1620,7 @@ mrvl_fill_bpool(struct mrvl_rxq *rxq, int num) core_id = rte_lcore_id(); if (core_id == LCORE_ID_ANY) - core_id = rte_get_master_lcore(); + core_id = rte_get_initial_lcore(); hif = mrvl_get_hif(rxq->priv, core_id); if (!hif) @@ -1770,7 +1770,7 @@ mrvl_rx_queue_release(void *rxq) unsigned int core_id = rte_lcore_id(); if (core_id == LCORE_ID_ANY) - core_id = rte_get_master_lcore(); + core_id 
= rte_get_initial_lcore(); if (!q) return; diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c index 54e5e4f98159..8ecbf82c5e7c 100644 --- a/drivers/net/qede/base/bcm_osal.c +++ b/drivers/net/qede/base/bcm_osal.c @@ -112,7 +112,7 @@ void *osal_dma_alloc_coherent(struct ecore_dev *p_dev, snprintf(mz_name, sizeof(mz_name), "%lx", (unsigned long)rte_get_timer_cycles()); if (core_id == (unsigned int)LCORE_ID_ANY) - core_id = rte_get_master_lcore(); + core_id = rte_get_initial_lcore(); socket_id = rte_lcore_to_socket_id(core_id); mz = rte_memzone_reserve_aligned(mz_name, size, socket_id, RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE); @@ -151,7 +151,7 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *p_dev, snprintf(mz_name, sizeof(mz_name), "%lx", (unsigned long)rte_get_timer_cycles()); if (core_id == (unsigned int)LCORE_ID_ANY) - core_id = rte_get_master_lcore(); + core_id = rte_get_initial_lcore(); socket_id = rte_lcore_to_socket_id(core_id); mz = rte_memzone_reserve_aligned(mz_name, size, socket_id, RTE_MEMZONE_IOVA_CONTIG, align); diff --git a/drivers/net/softnic/rte_eth_softnic_thread.c b/drivers/net/softnic/rte_eth_softnic_thread.c index dcfb5eb82c18..a2b6f522427a 100644 --- a/drivers/net/softnic/rte_eth_softnic_thread.c +++ b/drivers/net/softnic/rte_eth_softnic_thread.c @@ -25,7 +25,7 @@ softnic_thread_free(struct pmd_internals *softnic) { uint32_t i; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { struct softnic_thread *t = &softnic->thread[i]; /* MSGQs */ @@ -99,7 +99,7 @@ softnic_thread_init(struct pmd_internals *softnic) static inline int thread_is_valid(struct pmd_internals *softnic, uint32_t thread_id) { - if (thread_id == rte_get_master_lcore()) + if (thread_id == rte_get_initial_lcore()) return 0; /* FALSE */ if (softnic->params.sc && rte_lcore_has_role(thread_id, ROLE_SERVICE)) From patchwork Wed Jul 1 19:46:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72643 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0207DA0350; Wed, 1 Jul 2020 21:47:58 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A87031C2F5; Wed, 1 Jul 2020 21:47:11 +0200 (CEST) Received: from mail-pg1-f193.google.com (mail-pg1-f193.google.com [209.85.215.193]) by dpdk.org (Postfix) with ESMTP id E1D141C23C for ; Wed, 1 Jul 2020 21:47:08 +0200 (CEST) Received: by mail-pg1-f193.google.com with SMTP id m22so1845497pgv.9 for ; Wed, 01 Jul 2020 12:47:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kRm49XF4hKFzTEWrX3nDjLnv6Y4r2/DEQTOyn8KygpE=; b=BBJUDO2+WtLUVJbywXm4knxWQQp47w7LM9hF+Q5PZOb598yFX5Wz79uFsS6MuEiX9m fv3d51/WQAPqjU0Z4Zq7oHFHMvC23GpsA4A5d/vgdHLayc5stU+Gj/z2qaqlwsTUER1a 3vjtD+ebe8YidxQj24Dy5Qo5COFz6zZcvOWCNITQC0AtP1tBFgiflv2zOB3vGfCCRyJK Et/XXJINkXMg+Mxo1f+sUJphC+TMad/QspoKOJOZ+NsCRvB17RwK9psjYFEREn0k5W6J oLpoi5SRCxYtWfOwNe7l+oS/7HqGOY6BrDqGlT2b1WIJ8ijhFEIeQekyTUV4FxFtiaK2 w9ew== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=kRm49XF4hKFzTEWrX3nDjLnv6Y4r2/DEQTOyn8KygpE=; b=D06BEJPniJE5KGm6fNAHFtxXhGe7kYpJQUpUHSsTrcUc98Fa/EN1RfIcsFo2TeaJjE gQ5cS2vrE34U/DLMoiZ6xpRipTQlwvfyLleGvuJ9kLzdXqgJiqUrko1w+TD3GNdzVhaC 6Td4gz5dYa5bt7hoEGrycvd3yJWVsuYTeUt8u6qt1d6xzvka25rLLU8Nij6YIzz2J669 ++jS1VKPHz0eNLFe/fh75cQoI3ifzvI2QCTrd8mU2KIXLPFioig6BKDgYEAM4ZXUmjNY zSvh46XINCJz4H7cGppDyKGjKnBWLgJJueDVFir5B0S/tRX8UHNITTXH8JfBAqv/L5fN TfYQ== X-Gm-Message-State: AOAM532/XNHwGaBNlohKGcN81USnpizqzWL+5n6fKA0kbWz2W+gKN2Za TQrfgOxjtcGL5UOq5DBWGheTKD23dAo= X-Google-Smtp-Source: ABdhPJx+TCm3lUlXyBawA5Tnxn2Aupw8arYicHjezNsUXosdhmiWi+zENAC5O8AiJJ4B8l8DwhzMcg== X-Received: by 2002:a65:64c5:: with SMTP id t5mr16411172pgv.28.1593632827731; Wed, 01 Jul 2020 12:47:07 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:06 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:29 -0700 Message-Id: <20200701194650.10705-7-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 06/27] examples/distrutor: rename master to initial X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Conform to new API Signed-off-by: Stephen Hemminger --- examples/distributor/main.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/examples/distributor/main.c b/examples/distributor/main.c index 567c5e98919d..d3906bede6fc 100644 --- a/examples/distributor/main.c +++ b/examples/distributor/main.c @@ -612,7 +612,7 @@ static int init_power_library(void) { int ret = 0, lcore_id; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { /* init power management library */ ret = rte_power_init(lcore_id); if (ret) { @@ -808,7 +808,7 @@ main(int argc, char *argv[]) * available, the higher frequency cores will go to the * distributor first, then rx, then tx. */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_power_get_capabilities(lcore_id, &lcore_cap); @@ -841,7 +841,7 @@ main(int argc, char *argv[]) * after the high performing core assignment above, pre-assign * them here. */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (lcore_id == (unsigned int)distr_core_id || lcore_id == (unsigned int)rx_core_id || lcore_id == (unsigned int)tx_core_id) @@ -872,7 +872,7 @@ main(int argc, char *argv[]) * Kick off all the worker threads first, avoiding the pre-assigned * lcore_ids for tx, rx and distributor workloads. 
*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (lcore_id == (unsigned int)distr_core_id || lcore_id == (unsigned int)rx_core_id || lcore_id == (unsigned int)tx_core_id) @@ -925,7 +925,7 @@ main(int argc, char *argv[]) usleep(1000); } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } From patchwork Wed Jul 1 19:46:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72644 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C43A5A0350; Wed, 1 Jul 2020 21:48:06 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BFBDA1D148; Wed, 1 Jul 2020 21:47:12 +0200 (CEST) Received: from mail-pl1-f196.google.com (mail-pl1-f196.google.com [209.85.214.196]) by dpdk.org (Postfix) with ESMTP id 29D631C2A7 for ; Wed, 1 Jul 2020 21:47:10 +0200 (CEST) Received: by mail-pl1-f196.google.com with SMTP id bf7so987002plb.2 for ; Wed, 01 Jul 2020 12:47:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=XSpJNa6t5RwC0gPoDXSI7UXiiMlOYAHd+IVlpTd7Q64=; b=fmWQxSDAOvLN/GLz4NOR2IKXYSybMqQ/tQj7Q4O3Tt6NvjoIxNuV6rYgetcnfcCft8 bd+aydGoLN323PW9X9s/DoqPZlJZRdhJHkKyQX09u9cuzivrZ0LPYxPl7A7TFWIMdgOR Oz1LjrHlBpcqRfk4L7VpZ52qPDqH7elb09qO0TwwG6Ql7I2I+o4kJY7rhgMfjYwGdUg/ A51QTVS1iCKQrr+DxNpL7dqr95ksfKRDdHnGNpNF//dhlSV+v0ogmvclY/ST8y1FcPdn 3QOZZ+3nSyEg9I9nmQxyBk+MTPmyL4k7px/yzlvvfpdztdBTztZNVVBlzsK+dq1tkSan zpzA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=XSpJNa6t5RwC0gPoDXSI7UXiiMlOYAHd+IVlpTd7Q64=; b=JVW1HlPX1843+cy+hZwQ/TRY/GA/974gBfSOe0OGk1gtysOVnsnau9lYxCBpEJ6Wqd X5zZVsVK4AiGfUS5gwjOnuct9tN2VsmreHldRngkUcwrdV7eqfvAi7zqoghAzAkB8ji8 kAE8sWp2u3SisTzhL9KynaUAx+YYx5Svmkd6XwQWfyAlxZa5iG56yUn9bXbVXuuwyYha TK4z5hMzTMYjmPVD1Gn4Lk8AbK++/jirqjkUd6F7aFtdurRNiw6E/wJOZ2iKRlaxnwFg mxYodvjNnK8zN14HUEFZgakdo1rAtpizBlg3GBs892PfC4HaiJRZaLX9bsn8ovyfXA8f mWSA== X-Gm-Message-State: AOAM530jNT2Z/xDYRNXRPyGkP0Y4QwRmjbFbjqrp+MfZhNq7JjI9/2/e UnOZiyD2e11m0CScuZmGuNUkq563EGk= X-Google-Smtp-Source: ABdhPJwa71gmXYlOZGl3FKvLUtqZMTiIDJmHsGloKMpaiAZenSmnUtL66JuPGEY41Zx18WX5l51SGw== X-Received: by 2002:a17:90a:fa09:: with SMTP id cm9mr20772519pjb.146.1593632828979; Wed, 01 Jul 2020 12:47:08 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:08 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:30 -0700 Message-Id: <20200701194650.10705-8-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 07/27] examples/bond: replace references to master lcore X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Change the references to master lcore. For now, leave refernces to slave for bonding since that is part of the API not addressed yet. Signed-off-by: Stephen Hemminger --- examples/bond/main.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/examples/bond/main.c b/examples/bond/main.c index 8608285b686e..e415d7bc9c8c 100644 --- a/examples/bond/main.c +++ b/examples/bond/main.c @@ -590,7 +590,7 @@ static void cmd_start_parsed(__rte_unused void *parsed_result, return; } - /* start lcore main on core != master_core - ARP response thread */ + /* start lcore main on core != initial_core - ARP response thread */ slave_core_id = rte_get_next_lcore(rte_lcore_id(), 1, 0); if ((slave_core_id >= RTE_MAX_LCORE) || (slave_core_id == 0)) return; @@ -802,7 +802,7 @@ cmdline_parse_ctx_t main_ctx[] = { NULL, }; -/* prompt function, called from main on MASTER lcore */ +/* prompt function, called from main on initial lcore */ static void prompt(__rte_unused void *arg1) { struct cmdline *cl; @@ -852,12 +852,12 @@ main(int argc, char *argv[]) rte_spinlock_init(&global_flag_stru_p->lock); /* check state of lcores */ - RTE_LCORE_FOREACH_SLAVE(slave_core_id) { + RTE_LCORE_FOREACH_WORKER(slave_core_id) { if (rte_eal_get_lcore_state(slave_core_id) != WAIT) return -EBUSY; } - /* start lcore main on core != master_core - ARP response thread */ + /* start lcore main on core != initial_core - ARP response thread */ slave_core_id = rte_get_next_lcore(rte_lcore_id(), 1, 0); if ((slave_core_id >= RTE_MAX_LCORE) || (slave_core_id == 0)) return -EPERM; From patchwork Wed Jul 1 19:46:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72645 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C7B46A0350; Wed, 1 Jul 2020 21:48:17 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 77F051D16A; Wed, 1 Jul 2020 21:47:14 +0200 (CEST) Received: from mail-pl1-f193.google.com (mail-pl1-f193.google.com [209.85.214.193]) by dpdk.org (Postfix) with ESMTP id 648711C2BB for ; Wed, 1 Jul 2020 21:47:11 +0200 (CEST) Received: by mail-pl1-f193.google.com with SMTP id u9so6697676pls.13 for ; Wed, 01 Jul 2020 12:47:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
bh=iURrDw4mcqjpXLPqyxOm87giupxuZKgBPH4SxLnz8w8=; b=KiixRbgve6AfO5rLY0eOAM+l6LolcAns+bqhkX7q1jCq81SJGYJxnh+CXg3dnaBC+u l+4hnMXWK9knMnwlEUQgH5AykR4N+98cc+nuSLalheohlm48rmFN1Ci4JVysc5sAnkpJ vY81jx/Ga4e9Vbjeg2kQFZFUb5PavEIO2Rmh3VDzFTW5cZAuVqdG9Wse6UE6qrLifhN1 Pig/3Lx2TAttIPlWCIt0y4gwRZUE7R+lR1gT9D3nG6u8lCYbSWmegRwNJv2up+RxOn7V Gq84/hXYFc1Jbf+3ADya1veQRM1BNOWCgX9uPAT7CAiyDNPFMVrVjp3dBBSer0ozwxcV XW6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=iURrDw4mcqjpXLPqyxOm87giupxuZKgBPH4SxLnz8w8=; b=kmrqNJixdtvldR4gc8oiNam50Tumt0qEoMx8Y1Ww/bBoA2reS7x/5+4n5B99JQ1Dgu lLixwp+v7NeFMM1RT9oxGIRora+Fu8vxJypmoGqvLQMJuIxniTjBPIkaeJuN18MUVimE 15H95dXxZNZLCTfZEttXJtkuTAWUYJdYYMLEWYWCgWC3cfqb5T4OSULniWkKNUso5PLu H9js+Ciqj9eHran0ZJwAlzBry8ULOt3tf/z3puk66q8oiJxqJ0YbzGYYsjU8E16jlJw7 +4A5ZQf88tJgiykh/qHb4gbsWzM6x0sgTXjgxdi6Sa9UMlKgZvCkFXK9xAAYwAHDCGcN p4mw== X-Gm-Message-State: AOAM532IrYBI44RynnS9QVtXRhz++RTsZkh9csZQ7yi47QM4M/L4zhYz tPGjX8UigJbz+vFTS5v5tCn164bexoQ= X-Google-Smtp-Source: ABdhPJwYtgmoriqCTzSQvHf1Uqj9gR/I6gKHWR+tWHOdlsRDmvL6XdNvS3q1eG7XzorE30OmV73Bgw== X-Received: by 2002:a17:90a:1544:: with SMTP id y4mr29036628pja.130.1593632830210; Wed, 01 Jul 2020 12:47:10 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:09 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:31 -0700 Message-Id: <20200701194650.10705-9-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 08/27] examples/ethtool-app: replace references to slave with worker X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Conforms to new API naming conventions. Signed-off-by: Stephen Hemminger --- examples/ethtool/ethtool-app/main.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c index 7383413215d6..a55d31d891df 100644 --- a/examples/ethtool/ethtool-app/main.c +++ b/examples/ethtool/ethtool-app/main.c @@ -176,7 +176,7 @@ static void process_frame(struct app_port *ptr_port, rte_ether_addr_copy(&ptr_port->mac_addr, &ptr_mac_hdr->s_addr); } -static int slave_main(__rte_unused void *ptr_data) +static int worker_main(__rte_unused void *ptr_data) { struct app_port *ptr_port; struct rte_mbuf *ptr_frame; @@ -284,16 +284,16 @@ int main(int argc, char **argv) app_cfg.cnt_ports = cnt_ports; if (rte_lcore_count() < 2) - rte_exit(EXIT_FAILURE, "No available slave core!\n"); - /* Assume there is an available slave.. */ + rte_exit(EXIT_FAILURE, "No available worker core!\n"); + /* Assume there is an available worker.. 
*/ id_core = rte_lcore_id(); id_core = rte_get_next_lcore(id_core, 1, 1); - rte_eal_remote_launch(slave_main, NULL, id_core); + rte_eal_remote_launch(worker_main, NULL, id_core); ethapp_main(); app_cfg.exit_now = 1; - RTE_LCORE_FOREACH_SLAVE(id_core) { + RTE_LCORE_FOREACH_WORKER(id_core) { if (rte_eal_wait_lcore(id_core) < 0) return -1; } From patchwork Wed Jul 1 19:46:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72646 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id BEFE9A0350; Wed, 1 Jul 2020 21:48:25 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B65FC1D183; Wed, 1 Jul 2020 21:47:15 +0200 (CEST) Received: from mail-pl1-f177.google.com (mail-pl1-f177.google.com [209.85.214.177]) by dpdk.org (Postfix) with ESMTP id CF1EC1D151 for ; Wed, 1 Jul 2020 21:47:12 +0200 (CEST) Received: by mail-pl1-f177.google.com with SMTP id p1so1041265pls.4 for ; Wed, 01 Jul 2020 12:47:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=VkvR/tJszlILjuUfOY6/qEntVb2eFhf5fZ8BA6J80P4=; b=MxeX36TQRtLdGQyJxlJ+O5NpwStDV1Gvsc2ekJi5wJX7YKXcONWz7yPVt0nu9vXcQa jKoYViRCcjBJ0AA5wuXSyLXEHKs5J68OW2Tmais5AT6+nKiTXKYM79wqYCNc9UMtCiYY IZzk9r8Xv55dK35vasf4YPPMAgPEg390w7/ccqaM24yR5Wvau5NFOQJlzJgQV3yqZsXi FZWXEr8pQmiUwe7gfBHdUhT9P0+/gBQmNFCRmz/7bmbaBGGqkIrkkdanfEMoelTrv8i9 JBfycHcXTgWQGR2knDNruijmeinKWcJeoVsXP5CdeWotAnPnVT3gOiEtEdolpQ/oWJxt E0og== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=VkvR/tJszlILjuUfOY6/qEntVb2eFhf5fZ8BA6J80P4=; b=e71jicNB+1gswuEPSj2zmk2UEf/5PX/qm7BKe4ZKuWnwZdciU45++jwauOHPKileKp /ltuqiolX8sokiiIBvmb4S/YSsBmFN0rGMYyGKOa3HD6Ztuvlkd2wqMGzNgwpmjWF6kd UWyMj8bs3HnkQS+6M6YKDwU0bTjnj2ae99p5fHArexk7MVdzKGVi5Hj5tFFlp5/d/tDX beMzgJSv0vI5zFu4MRLjL16zzE1XW2LloNUfdbHrYi2GtC2i12DUwY7uyPjfIeIsUpw0 mRj8G7nMaWrSC5fUKAKK3ODraxC5MRebNJZeP1BBD4GV1LLzwATXpBcEQGVTeG+zoTlN 7vlw== X-Gm-Message-State: AOAM5302xoAvmuF+fRz/XiYoofo/Ln4tICBOIEouSVlQPRVst0ZG5f7O SIJFyElN6Ji9s0cvY24Zra0K0tjZydY= X-Google-Smtp-Source: ABdhPJzMHUXC9XPdySuk4RX7TdVstXNPBIfJKjbZPNeN541W3J/I0s2CNoyx7aKsGmw1DUkn7RQ14g== X-Received: by 2002:a17:90a:e60b:: with SMTP id j11mr30353483pjy.189.1593632831619; Wed, 01 Jul 2020 12:47:11 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:10 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:32 -0700 Message-Id: <20200701194650.10705-10-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 09/27] examples/ip_pipeline: replace references to master_lcore X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use initial lcore instead of primary lcore Signed-off-by: Stephen Hemminger --- examples/ip_pipeline/main.c | 2 +- examples/ip_pipeline/thread.c | 16 ++++++++-------- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/examples/ip_pipeline/main.c b/examples/ip_pipeline/main.c index 97d1e91c2b4b..060c6311ddc6 100644 --- a/examples/ip_pipeline/main.c +++ b/examples/ip_pipeline/main.c @@ -250,7 +250,7 @@ main(int argc, char **argv) rte_eal_mp_remote_launch( thread_main, NULL, - SKIP_MASTER); + SKIP_INITIAL); /* Script */ if (app.script_name) diff --git a/examples/ip_pipeline/thread.c b/examples/ip_pipeline/thread.c index adb83167cd84..feefdca4eba2 100644 --- a/examples/ip_pipeline/thread.c +++ b/examples/ip_pipeline/thread.c @@ -32,7 +32,7 @@ #endif /** - * Master thead: data plane thread context + * Initial thread: data plane thread context */ struct thread { struct rte_ring *msgq_req; @@ -78,7 +78,7 @@ struct thread_data { static struct thread_data thread_data[RTE_MAX_LCORE]; /** - * Master thread: data plane thread init + * Initial thread: data plane thread init */ static void thread_free(void) @@ -105,7 +105,7 @@ thread_init(void) { uint32_t i; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { char name[NAME_MAX]; struct rte_ring *msgq_req, *msgq_rsp; struct thread *t = &thread[i]; @@ -137,7 +137,7 @@ thread_init(void) return -1; } - /* Master thread records */ + /* Initial thread records */ t->msgq_req = msgq_req; t->msgq_rsp = msgq_rsp; t->enabled = 1; @@ -179,7 +179,7 @@ pipeline_is_running(struct pipeline *p) } /** - * Master thread & data plane threads: message passing + * Initial thread & data plane threads: message passing */ enum thread_req_type { THREAD_REQ_PIPELINE_ENABLE = 0, @@ -213,7 +213,7 @@ struct thread_msg_rsp { }; /** - * Master thread + * Initial thread */ static struct thread_msg_req * thread_msg_alloc(void) @@ -556,7 +556,7 @@ thread_msg_handle(struct thread_data *t) } /** - * Master thread & data plane threads: message passing + * Initial thread & data plane threads: message passing */ enum pipeline_req_type { /* Port IN */ @@ -730,7 +730,7 @@ struct pipeline_msg_rsp { }; /** - * Master thread + * Initial thread */ static struct pipeline_msg_req * pipeline_msg_alloc(void) From patchwork Wed Jul 1 19:46:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72647 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Wed, 1 Jul 2020 12:46:33 -0700
Message-Id: <20200701194650.10705-11-stephen@networkplumber.org>
In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org>
References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v3 10/27] examples/qos_{meter/sched}: replace references to master lcore

Use initial lcore instead.
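For reference, a minimal sketch of the converted launch/wait pattern, written against the renamed symbols this series introduces (CALL_INITIAL, RTE_LCORE_FOREACH_WORKER); lcore_hello() is only a stand-in job, not code taken from this patch:

#include <stdio.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_launch.h>
#include <rte_debug.h>

/* stand-in per-lcore job (not part of the patch) */
static int
lcore_hello(void *arg)
{
        (void)arg;
        printf("hello from lcore %u\n", rte_lcore_id());
        return 0;
}

int
main(int argc, char **argv)
{
        unsigned int lcore_id;

        if (rte_eal_init(argc, argv) < 0)
                rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* run the job on every worker lcore and on the initial lcore */
        rte_eal_mp_remote_launch(lcore_hello, NULL, CALL_INITIAL);

        /* reap the workers before tearing the EAL down */
        RTE_LCORE_FOREACH_WORKER(lcore_id)
                rte_eal_wait_lcore(lcore_id);

        rte_eal_cleanup();
        return 0;
}

The wait loop is what the converted examples keep doing after the launch: the initial lcore must collect every worker's return value before exiting.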
Signed-off-by: Stephen Hemminger --- examples/qos_meter/main.c | 4 ++-- examples/qos_sched/args.c | 22 +++++++++++----------- examples/qos_sched/cmdline.c | 2 +- examples/qos_sched/main.c | 2 +- 4 files changed, 15 insertions(+), 15 deletions(-) diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c index 6d057abfe3e7..9f1497121105 100644 --- a/examples/qos_meter/main.c +++ b/examples/qos_meter/main.c @@ -457,8 +457,8 @@ main(int argc, char **argv) rte_exit(EXIT_FAILURE, "Invalid configure flow table\n"); /* Launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(main_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/qos_sched/args.c b/examples/qos_sched/args.c index 7431b29816aa..3fdd7ddf1651 100644 --- a/examples/qos_sched/args.c +++ b/examples/qos_sched/args.c @@ -22,7 +22,7 @@ #define MAX_OPT_VALUES 8 #define SYS_CPU_DIR "/sys/devices/system/cpu/cpu%u/topology/" -static uint32_t app_master_core = 1; +static uint32_t app_initial_core = 1; static uint32_t app_numa_mask; static uint64_t app_used_core_mask = 0; static uint64_t app_used_port_mask = 0; @@ -40,7 +40,7 @@ static const char usage[] = " \n" "Application optional parameters: \n" " --i : run in interactive mode (default value is %u) \n" - " --mst I : master core index (default value is %u) \n" + " --mst I : initial core index (default value is %u) \n" " --rsz \"A, B, C\" : Ring sizes \n" " A = Size (in number of buffer descriptors) of each of the NIC RX \n" " rings read by the I/O RX lcores (default value is %u) \n" @@ -72,7 +72,7 @@ static const char usage[] = static void app_usage(const char *prgname) { - printf(usage, prgname, APP_INTERACTIVE_DEFAULT, app_master_core, + printf(usage, prgname, APP_INTERACTIVE_DEFAULT, app_initial_core, APP_RX_DESC_DEFAULT, APP_RING_SIZE, APP_TX_DESC_DEFAULT, MAX_PKT_RX_BURST, PKT_ENQUEUE, PKT_DEQUEUE, MAX_PKT_TX_BURST, NB_MBUF, @@ -98,7 +98,7 @@ app_eal_core_mask(void) cm |= (1ULL << i); } - cm |= (1ULL << rte_get_master_lcore()); + cm |= (1ULL << rte_get_initial_lcore()); return cm; } @@ -353,7 +353,7 @@ app_parse_args(int argc, char **argv) break; } if (str_is(optname, "mst")) { - app_master_core = (uint32_t)atoi(optarg); + app_initial_core = (uint32_t)atoi(optarg); break; } if (str_is(optname, "rsz")) { @@ -408,18 +408,18 @@ app_parse_args(int argc, char **argv) } } - /* check master core index validity */ - for(i = 0; i <= app_master_core; i++) { - if (app_used_core_mask & (1u << app_master_core)) { - RTE_LOG(ERR, APP, "Master core index is not configured properly\n"); + /* check initial core index validity */ + for(i = 0; i <= app_initial_core; i++) { + if (app_used_core_mask & (1u << app_initial_core)) { + RTE_LOG(ERR, APP, "Initial core index is not configured properly\n"); app_usage(prgname); return -1; } } - app_used_core_mask |= 1u << app_master_core; + app_used_core_mask |= 1u << app_initial_core; if ((app_used_core_mask != app_eal_core_mask()) || - (app_master_core != rte_get_master_lcore())) { + (app_initial_core != rte_get_initial_lcore())) { RTE_LOG(ERR, APP, "EAL core mask not configured properly, must be %" PRIx64 " instead of %" PRIx64 "\n" , app_used_core_mask, app_eal_core_mask()); return -1; diff --git a/examples/qos_sched/cmdline.c b/examples/qos_sched/cmdline.c index ba68e0d02693..67e7695b6558 100644 --- a/examples/qos_sched/cmdline.c +++ b/examples/qos_sched/cmdline.c 
@@ -599,7 +599,7 @@ cmdline_parse_ctx_t main_ctx[] = { NULL, }; -/* prompt function, called from main on MASTER lcore */ +/* prompt function, called from main on initial lcore */ void prompt(void) { diff --git a/examples/qos_sched/main.c b/examples/qos_sched/main.c index 73864d66dbc8..9c6be9fa17c0 100644 --- a/examples/qos_sched/main.c +++ b/examples/qos_sched/main.c @@ -204,7 +204,7 @@ main(int argc, char **argv) return -1; /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(app_main_loop, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(app_main_loop, NULL, SKIP_INITIAL); if (interactive) { sleep(1); From patchwork Wed Jul 1 19:46:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72648 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 781A0A0350; Wed, 1 Jul 2020 21:48:43 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E548D1D41A; Wed, 1 Jul 2020 21:47:17 +0200 (CEST) Received: from mail-pf1-f195.google.com (mail-pf1-f195.google.com [209.85.210.195]) by dpdk.org (Postfix) with ESMTP id 8783D1D176 for ; Wed, 1 Jul 2020 21:47:15 +0200 (CEST) Received: by mail-pf1-f195.google.com with SMTP id x72so2006856pfc.6 for ; Wed, 01 Jul 2020 12:47:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=U61yi4Jo6845yX1Z0+JgAL9qSjnWA/M9rZ4S1AWeZ3g=; b=EtJ2De32Gu04orpleVMBSvJbvV08PfsyNEFlLCHGff8PtIylKJHcRdRztxzItgf7yz xYKysmlIyFCyJffHunHAZPKSC8Y5UVOlfhqo/K1ktk9pCML1qJZeVrr8ZAAKqT0aMoOS anJlzhAF+g/7RKd3vppFb5qcTqWqjoOV7u6MpeWAr0MiNaU9CipW9RxpX3Kn7F/8gk0r HeHU+BvMV0PHrjGruVXV9Rc2rSE/02pfHX3ml5KyIj7z9fMkOW0uBFgpBcoVfJT2ZMDj ES6UNb4Nwn5JMvRDnNABif6KWvoKv5quPLzghg7eO1j1uk9tOpxrZB0h8jm8W0qIBdxI EZRg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=U61yi4Jo6845yX1Z0+JgAL9qSjnWA/M9rZ4S1AWeZ3g=; b=P+bkIR1zBZpN/g3pBZbnT4p+A+4QDGT0SGQXG3ZXDDiQKeD6pxVsd/wyO6AzPj9Uc8 81CCy/tL7tnSH4BerKJv3HwCuaFKJrKE7DGrZyoY2d0w9WaZneZCrrYwr83jD+M7edJ+ ZsozYrrOj+wA413SblRYjdDYh8CDANySnzB0PMjL69/DWfo+Tu0cO/17Dr+6BS+6W5+7 cpPborAOiCu3dyvTzzHqIvRadxI0uHq1/k4JuGaRDYKW0IBIQ6lNmdXysECt7qbjvAcs 6mQaz67wU9ofRnK5tEo0mIRpERtdCSARCS6hW0WfBxbSagWoKGChqk7qdeXr4Pxzq0mc 3aYw== X-Gm-Message-State: AOAM532JwLXCYn0ngUek9HmbiW3Xi99vKEpFGPc6EhDu3ASGuNRerhI+ qa6bVeL7s7LT5khXXpIjpipOOQgmo0w= X-Google-Smtp-Source: ABdhPJyaAgoYsdx9QVsZrSZWgsOfdheCgHi5dOv/ROPNtr2j/dMSoELnX+lvhyfy/cR3RUVAmR5S6Q== X-Received: by 2002:a63:1b4b:: with SMTP id b11mr20695323pgm.243.1593632834261; Wed, 01 Jul 2020 12:47:14 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:13 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:34 -0700 Message-Id: <20200701194650.10705-12-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 11/27] examples/l3fwd: replace references to master lcore X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use initial lcore instead. Signed-off-by: Stephen Hemminger --- examples/l3fwd-acl/main.c | 4 ++-- examples/l3fwd-graph/main.c | 14 +++++++------- examples/l3fwd-power/main.c | 20 ++++++++++---------- examples/l3fwd/main.c | 2 +- 4 files changed, 20 insertions(+), 20 deletions(-) diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c index f22fca732892..c351f87bc159 100644 --- a/examples/l3fwd-acl/main.c +++ b/examples/l3fwd-acl/main.c @@ -2110,8 +2110,8 @@ main(int argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(main_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c index c70270c4d131..8b9f61fd8e4a 100644 --- a/examples/l3fwd-graph/main.c +++ b/examples/l3fwd-graph/main.c @@ -167,8 +167,8 @@ check_lcore_params(void) return -1; } - if (lcore == rte_get_master_lcore()) { - printf("Error: lcore %u is master lcore\n", lcore); + if (lcore == rte_get_initial_lcore()) { + printf("Error: lcore %u is initial lcore\n", lcore); return -1; } socketid = rte_lcore_to_socket_id(lcore); @@ -1099,16 +1099,16 @@ main(int argc, char **argv) route_str, i); } - /* Launch per-lcore init on every slave lcore */ - rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_MASTER); + /* Launch per-lcore init on every worker lcore */ + rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_INITIAL); - /* Accumulate and print stats on master until exit */ + /* Accumulate and print stats on initial until exit */ if (rte_graph_has_stats_feature()) print_stats(); - /* Wait for slave cores to exit */ + /* Wait for worker cores to exit */ ret = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { ret = rte_eal_wait_lcore(lcore_id); /* Destroy graph */ if (ret < 0 || rte_graph_destroy( diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c index 9db94ce044c2..71a60bb93a37 100644 --- a/examples/l3fwd-power/main.c +++ b/examples/l3fwd-power/main.c @@ -1351,7 +1351,7 @@ check_lcore_params(void) "off\n", lcore, socketid); } if (app_mode == APP_MODE_TELEMETRY && lcore == rte_lcore_id()) { - printf("cannot enable master core %d in config for telemetry mode\n", + printf("cannot enable initial core %d in config for telemetry mode\n", rte_lcore_id()); return -1; } @@ -2089,7 +2089,7 @@ get_current_stat_values(uint64_t *values) uint64_t app_eps = 0, app_fps = 0, app_br = 0; uint64_t 
count = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { qconf = &lcore_conf[lcore_id]; if (qconf->n_rx_queue == 0) continue; @@ -2181,10 +2181,10 @@ launch_timer(unsigned int lcore_id) RTE_SET_USED(lcore_id); - if (rte_get_master_lcore() != lcore_id) { - rte_panic("timer on lcore:%d which is not master core:%d\n", + if (rte_get_initial_lcore() != lcore_id) { + rte_panic("timer on lcore:%d which is not initial core:%d\n", lcore_id, - rte_get_master_lcore()); + rte_get_initial_lcore()); } RTE_LOG(INFO, POWER, "Bring up the Timer\n"); @@ -2515,11 +2515,11 @@ main(int argc, char **argv) /* launch per-lcore init on every lcore */ if (app_mode == APP_MODE_LEGACY) { - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(main_loop, NULL, CALL_INITIAL); } else if (app_mode == APP_MODE_EMPTY_POLL) { empty_poll_stop = false; rte_eal_mp_remote_launch(main_empty_poll_loop, NULL, - SKIP_MASTER); + SKIP_INITIAL); } else { unsigned int i; @@ -2535,7 +2535,7 @@ main(int argc, char **argv) else rte_exit(EXIT_FAILURE, "failed to register metrics names"); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_spinlock_init(&stats[lcore_id].telemetry_lock); } rte_timer_init(&telemetry_timer); @@ -2543,13 +2543,13 @@ main(int argc, char **argv) handle_app_stats, "Returns global power stats. Parameters: None"); rte_eal_mp_remote_launch(main_telemetry_loop, NULL, - SKIP_MASTER); + SKIP_INITIAL); } if (app_mode == APP_MODE_EMPTY_POLL || app_mode == APP_MODE_TELEMETRY) launch_timer(rte_lcore_id()); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c index 24ede42903db..5ad2256e68c8 100644 --- a/examples/l3fwd/main.c +++ b/examples/l3fwd/main.c @@ -1278,7 +1278,7 @@ main(int argc, char **argv) ret = 0; /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_INITIAL); if (evt_rsrc->enabled) { for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) rte_event_eth_rx_adapter_stop( From patchwork Wed Jul 1 19:46:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72649 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 11AEBA0350; Wed, 1 Jul 2020 21:48:52 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 16F751D423; Wed, 1 Jul 2020 21:47:19 +0200 (CEST) Received: from mail-pg1-f194.google.com (mail-pg1-f194.google.com [209.85.215.194]) by dpdk.org (Postfix) with ESMTP id 1093F1D40E for ; Wed, 1 Jul 2020 21:47:17 +0200 (CEST) Received: by mail-pg1-f194.google.com with SMTP id z5so12215320pgb.6 for ; Wed, 01 Jul 2020 12:47:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=KYj6YTRIaN5wsKRGQ/c7T6flCCiXxCRCfP8kXMiZ/6g=; b=k3fk/XbxF7uQqIqW3Wdi3GiZT0iYJZEnHhoJhBo/fiRV2claAqzdf9cH80m7Ar9m+n d8DEJaWDfYLkQNmCRx4yxaEYkkZdwiPkUqt7wnCeh/I5XFZWVQlivx4cydikoeiBXTvY sbeDWWtsXmO2lIuaJ0zYMotbWF9tkgD8AP5Pram16atHhRXq59VJNja3N3eKUspwkexd 
UFdiYvlm39qs8Ovq6/xDsPIs4rg5R1h1rx05AX9kp24s4IdiEF0UBJ5xMHFMrMIOpkkv ut6rHM+Ce1FswygpjB/N80U7ItdHHjAaTIO2M5Bk2qjYDgDq22MpyHfgfk7tji/BuxxE eFoA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=KYj6YTRIaN5wsKRGQ/c7T6flCCiXxCRCfP8kXMiZ/6g=; b=rdwAhjjKgrl8SM6QuALQyPofw+tgFhgV1SHSJCQDfQre+MhJzHdSFF0uWXyyuk2n7+ 9FfyFHIEFYHu4jvZWRlzbTrDr7gyqO3ZuVtnfHlwe+1IXn1N4Mdd0vuc1i5L96nwAShH OETTDzEmFdwJD348qEkCkHdr3L+jZv6mTh+C67cnAVhNvHcM+1L0c581pu/6wVSxESGB W7mqFoqq3vR9qnKjllULOD8WKl31O95lhE7PvFZMvBTKeOwnsR2UxpbcZkT/yofXbafZ Ua93EgmzOarI9NppgviCOHwG6QjtsYI8rYKxNeh6pesQxwE+ur9Q3DCTjkFewPj/iwc7 7YEw== X-Gm-Message-State: AOAM533P84XZhqcpEt9OKDaeTNWfM+4dY8mpxM6352TE74JhHgKH1QVm UlVdlz7dNn6LvxKpqQs+pz1oahaQSts= X-Google-Smtp-Source: ABdhPJzxAYnz/mSxPzvHIbsU2RLMbtaHYHw/KzElZ3E8nywRRCjj+y7WG+WOipRyfayxAmPMinbR+g== X-Received: by 2002:a65:640c:: with SMTP id a12mr21237769pgv.88.1593632835778; Wed, 01 Jul 2020 12:47:15 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:14 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:35 -0700 Message-Id: <20200701194650.10705-13-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 12/27] examples/l2fwd: replace references to master lcore X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use term initial lcore instead. Signed-off-by: Stephen Hemminger --- examples/l2fwd-cat/l2fwd-cat.c | 2 +- examples/l2fwd-crypto/main.c | 8 ++++---- examples/l2fwd-event/l2fwd_event_generic.c | 2 +- examples/l2fwd-event/l2fwd_event_internal_port.c | 2 +- examples/l2fwd-event/l2fwd_poll.c | 2 +- examples/l2fwd-event/main.c | 2 +- examples/l2fwd-jobstats/main.c | 4 ++-- examples/l2fwd-keepalive/main.c | 6 +++--- examples/l2fwd/main.c | 8 ++++---- 9 files changed, 18 insertions(+), 18 deletions(-) diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c index 45a497c082da..a8a6e0c6ca91 100644 --- a/examples/l2fwd-cat/l2fwd-cat.c +++ b/examples/l2fwd-cat/l2fwd-cat.c @@ -198,7 +198,7 @@ main(int argc, char *argv[]) if (rte_lcore_count() > 1) printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); - /* Call lcore_main on the master core only. */ + /* Call lcore_main on the initial core only. 
*/ lcore_main(); return 0; diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c index 827da9b3e38b..3d35b67e6a89 100644 --- a/examples/l2fwd-crypto/main.c +++ b/examples/l2fwd-crypto/main.c @@ -874,8 +874,8 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options) if (unlikely(timer_tsc >= (uint64_t)timer_period)) { - /* do this only on master core */ - if (lcore_id == rte_get_master_lcore() + /* do this only on initial core */ + if (lcore_id == rte_get_initial_lcore() && options->refresh_period) { print_stats(); timer_tsc = 0; @@ -2802,8 +2802,8 @@ main(int argc, char **argv) /* launch per-lcore init on every lcore */ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, (void *)&options, - CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c index 2dc95e5f7d1a..63de608db459 100644 --- a/examples/l2fwd-event/l2fwd_event_generic.c +++ b/examples/l2fwd-event/l2fwd_event_generic.c @@ -72,7 +72,7 @@ l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc) event_d_conf.nb_event_port_enqueue_depth = dev_info.max_event_port_enqueue_depth; - /* Ignore Master core and service cores. */ + /* Ignore initial core and service cores. */ num_workers = rte_lcore_count() - 1 - rte_service_lcore_count(); if (dev_info.max_event_ports < num_workers) num_workers = dev_info.max_event_ports; diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c index 63d57b46c2da..376c44d662a1 100644 --- a/examples/l2fwd-event/l2fwd_event_internal_port.c +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c @@ -71,7 +71,7 @@ l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc) event_d_conf.nb_event_port_enqueue_depth = dev_info.max_event_port_enqueue_depth; - /* Ignore Master core. */ + /* Ignore initial core. 
*/ num_workers = rte_lcore_count() - 1; if (dev_info.max_event_ports < num_workers) num_workers = dev_info.max_event_ports; diff --git a/examples/l2fwd-event/l2fwd_poll.c b/examples/l2fwd-event/l2fwd_poll.c index 2033c65e54b1..6a16819ad5e8 100644 --- a/examples/l2fwd-event/l2fwd_poll.c +++ b/examples/l2fwd-event/l2fwd_poll.c @@ -116,7 +116,7 @@ l2fwd_poll_lcore_config(struct l2fwd_resources *rsrc) /* get the lcore_id for this port */ while (rte_lcore_is_enabled(rx_lcore_id) == 0 || - rx_lcore_id == rte_get_master_lcore() || + rx_lcore_id == rte_get_initial_lcore() || poll_rsrc->lcore_queue_conf[rx_lcore_id].n_rx_port == rsrc->rx_queue_per_lcore) { rx_lcore_id++; diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c index 4fe500333cdc..02d08569a757 100644 --- a/examples/l2fwd-event/main.c +++ b/examples/l2fwd-event/main.c @@ -673,7 +673,7 @@ main(int argc, char **argv) /* launch per-lcore init on every lcore */ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, rsrc, - SKIP_MASTER); + SKIP_INITIAL); l2fwd_event_print_stats(rsrc); if (rsrc->event_mode) { struct l2fwd_event_resources *evt_rsrc = diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c index 47a3b0976546..c06027964a64 100644 --- a/examples/l2fwd-jobstats/main.c +++ b/examples/l2fwd-jobstats/main.c @@ -1022,8 +1022,8 @@ main(int argc, char **argv) RTE_LOG(INFO, L2FWD, "Stats display disabled\n"); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c index b2742633bc65..c0a99a3b3da5 100644 --- a/examples/l2fwd-keepalive/main.c +++ b/examples/l2fwd-keepalive/main.c @@ -792,8 +792,8 @@ main(int argc, char **argv) ) != 0 ) rte_exit(EXIT_FAILURE, "Stats setup failure.\n"); } - /* launch per-lcore init on every slave lcore */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + /* launch per-lcore init on every worker lcore */ + RTE_LCORE_FOREACH_WORKER(lcore_id) { struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id]; if (qconf->n_rx_port == 0) @@ -816,7 +816,7 @@ main(int argc, char **argv) rte_delay_ms(5); } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c index 4a41aac63841..b07476fb7a38 100644 --- a/examples/l2fwd/main.c +++ b/examples/l2fwd/main.c @@ -250,8 +250,8 @@ l2fwd_main_loop(void) /* if timer has reached its timeout */ if (unlikely(timer_tsc >= timer_period)) { - /* do this only on master core */ - if (lcore_id == rte_get_master_lcore()) { + /* do this only on initial core */ + if (lcore_id == rte_get_initial_lcore()) { print_stats(); /* reset the timer */ timer_tsc = 0; @@ -756,8 +756,8 @@ main(int argc, char **argv) ret = 0; /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) { ret = -1; break; From patchwork Wed Jul 1 19:46:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger 
X-Patchwork-Id: 72650
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Wed, 1 Jul 2020 12:46:36 -0700
Message-Id: <20200701194650.10705-14-stephen@networkplumber.org>
In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org>
References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v3 13/27] examples/multi_process: replace references to master lcore

Use initial lcore instead.
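As a rough illustration only (not code from this patch), the mp_server split looks like this after the rename: SKIP_INITIAL puts every other lcore into the sleep/statistics loop while the initial lcore keeps the forwarding job; sleep_lcore() and exit_now are simplified stand-ins:

#include <stdlib.h>

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_launch.h>
#include <rte_debug.h>
#include <rte_pause.h>

static volatile int exit_now;

/* stand-in for the per-worker sleep/statistics loop */
static int
sleep_lcore(void *arg)
{
        (void)arg;
        while (!exit_now)
                rte_pause();
        return 0;
}

int
main(int argc, char **argv)
{
        unsigned int lcore_id;

        if (rte_eal_init(argc, argv) < 0)
                rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* every lcore except the initial one runs sleep_lcore() */
        rte_eal_mp_remote_launch(sleep_lcore, NULL, SKIP_INITIAL);

        /* the initial lcore keeps the packet forwarding loop (omitted) */

        exit_now = 1;
        RTE_LCORE_FOREACH_WORKER(lcore_id)
                rte_eal_wait_lcore(lcore_id);

        rte_eal_cleanup();
        return 0;
}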
Signed-off-by: Stephen Hemminger --- .../multi_process/client_server_mp/mp_server/main.c | 10 +++++----- examples/multi_process/simple_mp/main.c | 6 +++--- examples/multi_process/symmetric_mp/main.c | 2 +- 3 files changed, 9 insertions(+), 9 deletions(-) diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c index 280dab867281..8b9c6ca12a66 100644 --- a/examples/multi_process/client_server_mp/mp_server/main.c +++ b/examples/multi_process/client_server_mp/mp_server/main.c @@ -84,7 +84,7 @@ get_printable_mac_addr(uint16_t port) /* * This function displays the recorded statistics for each port * and for each client. It uses ANSI terminal codes to clear - * screen when called. It is called from a single non-master + * screen when called. It is called from a single worker * thread in the server process, when the process is run with more * than one lcore enabled. */ @@ -146,7 +146,7 @@ do_stats_display(void) } /* - * The function called from each non-master lcore used by the process. + * The function called from each worker lcore used by the process. * The test_and_set function is used to randomly pick a single lcore on which * the code to display the statistics will run. Otherwise, the code just * repeatedly sleeps. @@ -244,7 +244,7 @@ process_packets(uint32_t port_num __rte_unused, } /* - * Function called by the master lcore of the DPDK process. + * Function called by the initial lcore of the DPDK process. */ static void do_packet_forwarding(void) @@ -297,8 +297,8 @@ main(int argc, char *argv[]) /* clear statistics */ clear_stats(); - /* put all other cores to sleep bar master */ - rte_eal_mp_remote_launch(sleep_lcore, NULL, SKIP_MASTER); + /* put all other cores to sleep except the initial core */ + rte_eal_mp_remote_launch(sleep_lcore, NULL, SKIP_INITIAL); do_packet_forwarding(); return 0; diff --git a/examples/multi_process/simple_mp/main.c b/examples/multi_process/simple_mp/main.c index fc79528462e9..f89d46a720b2 100644 --- a/examples/multi_process/simple_mp/main.c +++ b/examples/multi_process/simple_mp/main.c @@ -108,12 +108,12 @@ main(int argc, char **argv) RTE_LOG(INFO, APP, "Finished Process Init.\n"); - /* call lcore_recv() on every slave lcore */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + /* call lcore_recv() on every worker lcore */ + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(lcore_recv, NULL, lcore_id); } - /* call cmd prompt on master lcore */ + /* call cmd prompt on initial lcore */ struct cmdline *cl = cmdline_stdin_new(simple_mp_ctx, "\nsimple_mp > "); if (cl == NULL) rte_exit(EXIT_FAILURE, "Cannot create cmdline instance\n"); diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c index 9a16e198cbf4..683062a627a2 100644 --- a/examples/multi_process/symmetric_mp/main.c +++ b/examples/multi_process/symmetric_mp/main.c @@ -473,7 +473,7 @@ main(int argc, char **argv) RTE_LOG(INFO, APP, "Finished Process Init.\n"); - rte_eal_mp_remote_launch(lcore_main, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(lcore_main, NULL, CALL_INITIAL); return 0; }
From patchwork Wed Jul 1 19:46:37 2020 X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72651
From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:37 -0700 Message-Id: <20200701194650.10705-15-stephen@networkplumber.org> In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> Subject: [dpdk-dev] [PATCH v3 14/27] examples/performance-thread: replace reference to master lcore
Use initial lcore instead.
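The collective launch variant that l3fwd-thread and pthread_shim use can be summarized by the sketch below: an illustration only, under the naming proposed in this series, where CALL_INITIAL also runs the function on the initial lcore and the worker lcores are reaped afterwards; per_lcore_init() is a hypothetical entry point.

#include <stdio.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Hypothetical per-lcore entry point used only for this sketch. */
static int
per_lcore_init(void *arg)
{
	(void)arg;
	printf("per-lcore init on lcore %u\n", rte_lcore_id());
	return 0;
}

/* Launch per_lcore_init() on all lcores, including the initial one,
 * then collect the return value of every worker lcore. */
static int
launch_and_wait(void)
{
	unsigned int lcore_id;

	rte_eal_mp_remote_launch(per_lcore_init, NULL, CALL_INITIAL);

	RTE_LCORE_FOREACH_WORKER(lcore_id) {
		if (rte_eal_wait_lcore(lcore_id) < 0)
			return -1;
	}
	return 0;
}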
Signed-off-by: Stephen Hemminger --- examples/performance-thread/l3fwd-thread/main.c | 12 ++++++------ examples/performance-thread/pthread_shim/main.c | 4 ++-- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c index 84c1d7b3a232..ee8c8622ad8e 100644 --- a/examples/performance-thread/l3fwd-thread/main.c +++ b/examples/performance-thread/l3fwd-thread/main.c @@ -2211,7 +2211,7 @@ lthread_rx(void *dummy) /* * Start scheduler with initial lthread on lcore * - * This lthread loop spawns all rx and tx lthreads on master lcore + * This lthread loop spawns all rx and tx lthreads on the initial lcore */ static void * @@ -2265,7 +2265,7 @@ lthread_spawner(__rte_unused void *arg) * (main_lthread_master). */ static int -lthread_master_spawner(__rte_unused void *arg) { +lthread_initial_spawner(__rte_unused void *arg) { struct lthread *lt; int lcore_id = rte_lcore_id(); @@ -3765,14 +3765,14 @@ main(int argc, char **argv) #endif lthread_num_schedulers_set(nb_lcores); - rte_eal_mp_remote_launch(sched_spawner, NULL, SKIP_MASTER); - lthread_master_spawner(NULL); + rte_eal_mp_remote_launch(sched_spawner, NULL, SKIP_INITIAL); + lthread_initial_spawner(NULL); } else { printf("Starting P-Threading Model\n"); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(pthread_run, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(pthread_run, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/performance-thread/pthread_shim/main.c b/examples/performance-thread/pthread_shim/main.c index 18f83059bc17..fa6c52209065 100644 --- a/examples/performance-thread/pthread_shim/main.c +++ b/examples/performance-thread/pthread_shim/main.c @@ -252,10 +252,10 @@ int main(int argc, char **argv) lthread_num_schedulers_set(num_sched); /* launch all threads */ - rte_eal_mp_remote_launch(lthread_scheduler, (void *)NULL, CALL_MASTER); + rte_eal_mp_remote_launch(lthread_scheduler, (void *)NULL, CALL_INITIAL); /* wait for threads to stop */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_wait_lcore(lcore_id); } return 0;
From patchwork Wed Jul 1 19:46:38 2020 X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72652
From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:38 -0700 Message-Id: <20200701194650.10705-16-stephen@networkplumber.org> In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> Subject: [dpdk-dev] [PATCH v3 15/27] examples/ptpclient: replace references to master lcore
Replace master lcore with initial lcore. This API still has an issue with its use of the term master clock.
Signed-off-by: Stephen Hemminger --- examples/ptpclient/ptpclient.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c index bfa86eec5a4a..119f4f6e5d89 100644 --- a/examples/ptpclient/ptpclient.c +++ b/examples/ptpclient/ptpclient.c @@ -785,7 +785,7 @@ main(int argc, char *argv[]) if (rte_lcore_count() > 1) printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); - /* Call lcore_main on the master core only. */ + /* Call lcore_main on the initial core only.
*/ lcore_main(); return 0; From patchwork Wed Jul 1 19:46:39 2020 X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72653 From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:39 -0700 Message-Id: <20200701194650.10705-17-stephen@networkplumber.org> In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> Subject: [dpdk-dev] [PATCH v3 16/27] examples/ipsec-secgw: replace references to master lcore
Use initial lcore instead.
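The ipsec-secgw event helper picks its Rx and Tx service cores with rte_get_next_lcore(), whose second argument, when non-zero, skips the initial lcore (see the hunk that follows). A minimal sketch of that selection, with hypothetical names and no handling of the case where too few lcores are enabled:

#include <rte_lcore.h>

/* Pick the first two enabled lcores other than the initial one,
 * e.g. to dedicate them to Rx and Tx. Starting the search at -1 is
 * the usual RTE_LCORE_FOREACH idiom; a result of RTE_MAX_LCORE would
 * mean no such lcore exists (not handled here). */
static void
pick_eth_cores(unsigned int *rx_core, unsigned int *tx_core)
{
	*rx_core = rte_get_next_lcore(-1, /* skip initial lcore */ 1, /* no wrap */ 0);
	*tx_core = rte_get_next_lcore(*rx_core, 1, 0);
}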
Signed-off-by: Stephen Hemminger --- examples/ipsec-secgw/event_helper.c | 6 +++--- examples/ipsec-secgw/ipsec-secgw.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 865dc911b864..239c35985bd6 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1461,16 +1461,16 @@ eh_conf_init(void) /* Set two cores as eth cores for Rx & Tx */ - /* Use first core other than master core as Rx core */ + /* Use first core other than initial core as Rx core */ eth_core_id = rte_get_next_lcore(0, /* curr core */ - 1, /* skip master core */ + 1, /* skip initial core */ 0 /* wrap */); rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); /* Use next core as Tx core */ eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ - 1, /* skip master core */ + 1, /* skip initial core */ 0 /* wrap */); rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index f777ce2afe41..388f7f614ae0 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2917,8 +2917,8 @@ main(int32_t argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } From patchwork Wed Jul 1 19:46:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72654 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7D07EA0350; Wed, 1 Jul 2020 21:49:34 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A455A1D53C; Wed, 1 Jul 2020 21:47:25 +0200 (CEST) Received: from mail-pg1-f194.google.com (mail-pg1-f194.google.com [209.85.215.194]) by dpdk.org (Postfix) with ESMTP id 468D61D529 for ; Wed, 1 Jul 2020 21:47:24 +0200 (CEST) Received: by mail-pg1-f194.google.com with SMTP id d4so12219085pgk.4 for ; Wed, 01 Jul 2020 12:47:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=BtBkKHJFnciPhfToHNMH6YtyieBQBvKBHayos1rQXL8=; b=VqBjyYtoRyxyx2EpR4qppEt/W8A2bEQ0nr3NIsgE/1/7oBAy+55N4NU7Gf1mpYXLKS P1o8ndjdzKeOlKSXb/61y/Kswp+yQ6tQ4Xyti5ejxaLnKxXfvadAI+m2o37d5qjRWpGO 0dXLIAXIG/s32g1ryUF8bPTQXY5dSnyo3y55DPvZGsWZbMb83kAbv77rLGNlS/xPifbF 8PYbUl7pbqZiH0YYdgk+yoQ2LFzMlms8uXuyO1rHRQpSrqpc8WjrRU5oGzAZFA4k7CQU N+KbrpBoxFH62DXMuytfXhF4kUmL6BsfAV63yEVB12di86utpQYDv0tmX9hdAL3oxULP BfdA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=BtBkKHJFnciPhfToHNMH6YtyieBQBvKBHayos1rQXL8=; b=Qn6KYCjsVvcPwn84itwSLym1HRExXIRsq6CmjYqOKobbQ72gR7il/4SlTRioOd2TYW PbRnwPNFT3VZ8eEZTgV6wVvDzYGuoubvP+DvvoK/6R+BIjx6uykb5xqqnipXI8t4qVGI deoqf+/riXjMD/AoFRh11R5DhGD2Q0r47RYJxwAV9+F/Aq+ztjU425fLcbIa5bZMYWd7 
sq3aQBdJNbzPBSY8qYJCnjpzN80W871fFGEjuyfN9zHpvQ7XxU7zVQJXXk7gbo5baIiq WcxjY3httsoDWASFvfo2um2fhPWQI1BuIV2mfMbM3+YYuNCM/5IDOfmSQ2hqj3KHhZEw Z7lQ== X-Gm-Message-State: AOAM533r7YYI6BwT199iCxmHP5zmNENSvYGgfY0spVXguq4qbUswGuje sTHEU4Un8aSFax+EoYpyItOa8ukyHMs= X-Google-Smtp-Source: ABdhPJyHipu56TUITdhXGLkAmFK4dBDdJTxW6xcBguVYcv4yNh5baK9zyqrX4UmpORJkT8DaZS/McA== X-Received: by 2002:a63:2b93:: with SMTP id r141mr21242677pgr.171.1593632842516; Wed, 01 Jul 2020 12:47:22 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:21 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:40 -0700 Message-Id: <20200701194650.10705-18-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 17/27] examples: replace reference to master lcore X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Small changes in several examples to replace master_lcore with initial_lcore across multiple small examples. Signed-off-by: Stephen Hemminger --- examples/eventdev_pipeline/main.c | 2 +- examples/flow_classify/flow_classify.c | 2 +- examples/helloworld/main.c | 6 +++--- examples/ioat/ioatfwd.c | 6 +++--- examples/ip_fragmentation/main.c | 4 ++-- examples/ip_reassembly/main.c | 4 ++-- examples/ipv4_multicast/main.c | 4 ++-- examples/kni/main.c | 7 +++---- examples/link_status_interrupt/main.c | 8 ++++---- examples/ntb/ntb_fwd.c | 14 +++++++------- examples/packet_ordering/main.c | 22 +++++++++++----------- examples/rxtx_callbacks/main.c | 2 +- examples/server_node_efd/server/main.c | 10 +++++----- examples/skeleton/basicfwd.c | 2 +- examples/tep_termination/main.c | 12 ++++++------ examples/timer/main.c | 8 ++++---- examples/vhost/main.c | 10 +++++----- examples/vmdq/main.c | 4 ++-- examples/vmdq_dcb/main.c | 6 +++--- 19 files changed, 66 insertions(+), 67 deletions(-) diff --git a/examples/eventdev_pipeline/main.c b/examples/eventdev_pipeline/main.c index 21958269f743..91969ce039cf 100644 --- a/examples/eventdev_pipeline/main.c +++ b/examples/eventdev_pipeline/main.c @@ -395,7 +395,7 @@ main(int argc, char **argv) } int worker_idx = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (lcore_id >= MAX_NUM_CORE) break; diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c index 433e64d3f901..e8d23225c4b8 100644 --- a/examples/flow_classify/flow_classify.c +++ b/examples/flow_classify/flow_classify.c @@ -850,7 +850,7 @@ main(int argc, char *argv[]) rte_exit(EXIT_FAILURE, "Failed to add rules\n"); } - /* Call lcore_main on the master core only. */ + /* Call lcore_main on the initial core only. 
*/ lcore_main(cls_app); return 0; diff --git a/examples/helloworld/main.c b/examples/helloworld/main.c index 968045f1b042..029bee26cba2 100644 --- a/examples/helloworld/main.c +++ b/examples/helloworld/main.c @@ -34,12 +34,12 @@ main(int argc, char **argv) if (ret < 0) rte_panic("Cannot init EAL\n"); - /* call lcore_hello() on every slave lcore */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + /* call lcore_hello() on every worker lcore */ + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(lcore_hello, NULL, lcore_id); } - /* call it on master lcore too */ + /* call it on initial lcore too */ lcore_hello(NULL); rte_eal_mp_wait_lcore(); diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c index b66ee73bcec4..919c95e425b2 100644 --- a/examples/ioat/ioatfwd.c +++ b/examples/ioat/ioatfwd.c @@ -520,7 +520,7 @@ tx_main_loop(void) ioat_tx_port(&cfg.ports[i]); } -/* Main rx and tx loop if only one slave lcore available */ +/* Main rx and tx loop if only one worker lcore available */ static void rxtx_main_loop(void) { @@ -984,7 +984,7 @@ main(int argc, char **argv) cfg.nb_lcores = rte_lcore_count() - 1; if (cfg.nb_lcores < 1) rte_exit(EXIT_FAILURE, - "There should be at least one slave lcore.\n"); + "There should be at least one worker lcore.\n"); if (copy_mode == COPY_MODE_IOAT_NUM) assign_rawdevs(); @@ -992,7 +992,7 @@ main(int argc, char **argv) assign_rings(); start_forwarding_cores(); - /* master core prints stats while other cores forward */ + /* initial core prints stats while other cores forward */ print_stats(argv[0]); /* force_quit is true when we get here */ diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c index 4afb97109fed..13b9f3ca0b66 100644 --- a/examples/ip_fragmentation/main.c +++ b/examples/ip_fragmentation/main.c @@ -1072,8 +1072,8 @@ main(int argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(main_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c index 494d7ee77641..01d89eef2f70 100644 --- a/examples/ip_reassembly/main.c +++ b/examples/ip_reassembly/main.c @@ -1201,8 +1201,8 @@ main(int argc, char **argv) signal(SIGINT, signal_handler); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(main_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c index 7e255c35a301..52dee9f68bcf 100644 --- a/examples/ipv4_multicast/main.c +++ b/examples/ipv4_multicast/main.c @@ -801,8 +801,8 @@ main(int argc, char **argv) rte_exit(EXIT_FAILURE, "Cannot build the multicast hash\n"); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(main_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/kni/main.c b/examples/kni/main.c index f5d12a5b8676..fde7db45e657 100644 --- a/examples/kni/main.c +++ b/examples/kni/main.c @@ -956,8 +956,7 @@ kni_alloc(uint16_t port_id) conf.mbuf_size = 
MAX_PACKET_SZ; /* * The first KNI device associated to a port - * is the master, for multiple kernel thread - * environment. + * is special, for multiple kernel thread environment. */ if (i == 0) { struct rte_kni_ops ops; @@ -1105,8 +1104,8 @@ main(int argc, char** argv) "Could not create link status thread!\n"); /* Launch per-lcore function on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(i) { + rte_eal_mp_remote_launch(main_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(i) { if (rte_eal_wait_lcore(i) < 0) return -1; } diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c index 9bbcadfcf8b9..e22c264fda71 100644 --- a/examples/link_status_interrupt/main.c +++ b/examples/link_status_interrupt/main.c @@ -255,8 +255,8 @@ lsi_main_loop(void) /* if timer has reached its timeout */ if (unlikely(timer_tsc >= (uint64_t) timer_period)) { - /* do this only on master core */ - if (lcore_id == rte_get_master_lcore()) { + /* do this only on initial core */ + if (lcore_id == rte_get_initial_lcore()) { print_stats(); /* reset the timer */ timer_tsc = 0; @@ -735,8 +735,8 @@ main(int argc, char **argv) check_all_ports_link_status(nb_ports, lsi_enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(lsi_launch_one_lcore, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(lsi_launch_one_lcore, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c index eba8ebf9fab0..c60b6f3f944b 100644 --- a/examples/ntb/ntb_fwd.c +++ b/examples/ntb/ntb_fwd.c @@ -162,7 +162,7 @@ cmd_quit_parsed(__rte_unused void *parsed_result, uint32_t lcore_id; /* Stop transmission first. */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { conf = &fwd_lcore_conf[lcore_id]; if (!conf->nb_stream) @@ -668,7 +668,7 @@ assign_stream_to_lcores(void) uint8_t lcore_num, nb_extra; lcore_num = rte_lcore_count(); - /* Exclude master core */ + /* Exclude initial core */ lcore_num--; nb_streams = (fwd_mode == IOFWD) ? num_queues * 2 : num_queues; @@ -678,7 +678,7 @@ assign_stream_to_lcores(void) sm_id = 0; i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { conf = &fwd_lcore_conf[lcore_id]; if (i < nb_extra) { @@ -697,7 +697,7 @@ assign_stream_to_lcores(void) } /* Print packet forwading config. 
*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { conf = &fwd_lcore_conf[lcore_id]; if (!conf->nb_stream) @@ -765,7 +765,7 @@ start_pkt_fwd(void) assign_stream_to_lcores(); in_test = 1; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { conf = &fwd_lcore_conf[lcore_id]; if (!conf->nb_stream) @@ -826,7 +826,7 @@ cmd_stop_parsed(__rte_unused void *parsed_result, struct ntb_fwd_lcore_conf *conf; uint32_t lcore_id; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { conf = &fwd_lcore_conf[lcore_id]; if (!conf->nb_stream) @@ -1074,7 +1074,7 @@ cmdline_parse_ctx_t main_ctx[] = { NULL, }; -/* prompt function, called from main on MASTER lcore */ +/* prompt function, called from main on initial lcore */ static void prompt(void) { diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c index b397b318e651..bcf3cbf89af2 100644 --- a/examples/packet_ordering/main.c +++ b/examples/packet_ordering/main.c @@ -348,10 +348,10 @@ print_stats(void) { uint16_t i; struct rte_eth_stats eth_stats; - unsigned int lcore_id, last_lcore_id, master_lcore_id, end_w_lcore_id; + unsigned int lcore_id, last_lcore_id, initial_lcore_id, end_w_lcore_id; last_lcore_id = get_last_lcore_id(); - master_lcore_id = rte_get_master_lcore(); + initial_lcore_id = rte_get_initial_lcore(); end_w_lcore_id = get_previous_lcore_id(last_lcore_id); printf("\nRX thread stats:\n"); @@ -363,7 +363,7 @@ print_stats(void) for (lcore_id = 0; lcore_id <= end_w_lcore_id; lcore_id++) { if (insight_worker && rte_lcore_is_enabled(lcore_id) - && lcore_id != master_lcore_id) { + && lcore_id != initial_lcore_id) { printf("\nWorker thread stats on core [%u]:\n", lcore_id); printf(" - Pkts deqd from workers ring: %"PRIu64"\n", @@ -661,7 +661,7 @@ main(int argc, char **argv) { int ret; unsigned nb_ports; - unsigned int lcore_id, last_lcore_id, master_lcore_id; + unsigned int lcore_id, last_lcore_id, initial_lcore_id; uint16_t port_id; uint16_t nb_ports_available; struct worker_thread_args worker_args = {NULL, NULL}; @@ -748,32 +748,32 @@ main(int argc, char **argv) } last_lcore_id = get_last_lcore_id(); - master_lcore_id = rte_get_master_lcore(); + initial_lcore_id = rte_get_initial_lcore(); worker_args.ring_in = rx_to_workers; worker_args.ring_out = workers_to_tx; - /* Start worker_thread() on all the available slave cores but the last 1 */ + /* Start worker_thread() on all the available worker cores but the last 1 */ for (lcore_id = 0; lcore_id <= get_previous_lcore_id(last_lcore_id); lcore_id++) - if (rte_lcore_is_enabled(lcore_id) && lcore_id != master_lcore_id) + if (rte_lcore_is_enabled(lcore_id) && lcore_id != initial_lcore_id) rte_eal_remote_launch(worker_thread, (void *)&worker_args, lcore_id); if (disable_reorder) { - /* Start tx_thread() on the last slave core */ + /* Start tx_thread() on the last worker core */ rte_eal_remote_launch((lcore_function_t *)tx_thread, workers_to_tx, last_lcore_id); } else { send_args.ring_in = workers_to_tx; - /* Start send_thread() on the last slave core */ + /* Start send_thread() on the last worker core */ rte_eal_remote_launch((lcore_function_t *)send_thread, (void *)&send_args, last_lcore_id); } - /* Start rx_thread() on the master core */ + /* Start rx_thread() on the initial core */ rx_thread(rx_to_workers); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c 
index 54d124b00bc9..562abb87d7ff 100644 --- a/examples/rxtx_callbacks/main.c +++ b/examples/rxtx_callbacks/main.c @@ -302,7 +302,7 @@ main(int argc, char *argv[]) printf("\nWARNING: Too much enabled lcores - " "App uses only 1 lcore\n"); - /* call lcore_main on master core only */ + /* call lcore_main on initial core only */ lcore_main(); return 0; } diff --git a/examples/server_node_efd/server/main.c b/examples/server_node_efd/server/main.c index 05f961cff5d0..912dcd06c70f 100644 --- a/examples/server_node_efd/server/main.c +++ b/examples/server_node_efd/server/main.c @@ -95,7 +95,7 @@ get_printable_mac_addr(uint16_t port) /* * This function displays the recorded statistics for each port * and for each node. It uses ANSI terminal codes to clear - * screen when called. It is called from a single non-master + * screen when called. It is called from a single worker * thread in the server process, when the process is run with more * than one lcore enabled. */ @@ -168,7 +168,7 @@ do_stats_display(void) } /* - * The function called from each non-master lcore used by the process. + * The function called from each worker lcore used by the process. * The test_and_set function is used to randomly pick a single lcore on which * the code to display the statistics will run. Otherwise, the code just * repeatedly sleeps. @@ -290,7 +290,7 @@ process_packets(uint32_t port_num __rte_unused, struct rte_mbuf *pkts[], } /* - * Function called by the master lcore of the DPDK process. + * Function called by the initial lcore of the DPDK process. */ static void do_packet_forwarding(void) @@ -330,8 +330,8 @@ main(int argc, char *argv[]) /* clear statistics */ clear_stats(); - /* put all other cores to sleep bar master */ - rte_eal_mp_remote_launch(sleep_lcore, NULL, SKIP_MASTER); + /* put all other cores to sleep */ + rte_eal_mp_remote_launch(sleep_lcore, NULL, SKIP_INITIAL); do_packet_forwarding(); return 0; diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c index 72ba85fa1fe5..3062ac2f596c 100644 --- a/examples/skeleton/basicfwd.c +++ b/examples/skeleton/basicfwd.c @@ -202,7 +202,7 @@ main(int argc, char *argv[]) if (rte_lcore_count() > 1) printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); - /* Call lcore_main on the master core only. */ + /* Call lcore_main on the initial core only. */ lcore_main(); return 0; diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c index b9fffca020a9..55fb3335538f 100644 --- a/examples/tep_termination/main.c +++ b/examples/tep_termination/main.c @@ -838,7 +838,7 @@ init_data_ll(void) { int lcore; - RTE_LCORE_FOREACH_SLAVE(lcore) { + RTE_LCORE_FOREACH_WORKER(lcore) { lcore_info[lcore].lcore_ll = malloc(sizeof(struct lcore_ll_info)); if (lcore_info[lcore].lcore_ll == NULL) { @@ -930,7 +930,7 @@ destroy_device(int vid) rm_data_ll_entry(&ll_root_used, ll_main_dev_cur, ll_main_dev_last); /* Set the dev_removal_flag on each lcore. */ - RTE_LCORE_FOREACH_SLAVE(lcore) { + RTE_LCORE_FOREACH_WORKER(lcore) { lcore_info[lcore].lcore_ll->dev_removal_flag = REQUEST_DEV_REMOVAL; } @@ -941,7 +941,7 @@ destroy_device(int vid) * the device removed from the linked lists and that the devices * are no longer in use. */ - RTE_LCORE_FOREACH_SLAVE(lcore) { + RTE_LCORE_FOREACH_WORKER(lcore) { while (lcore_info[lcore].lcore_ll->dev_removal_flag != ACK_DEV_REMOVAL) rte_pause(); @@ -1001,7 +1001,7 @@ new_device(int vid) vdev->remove = 0; /* Find a suitable lcore to add the device. 
*/ - RTE_LCORE_FOREACH_SLAVE(lcore) { + RTE_LCORE_FOREACH_WORKER(lcore) { if (lcore_info[lcore].lcore_ll->device_num < device_num_min) { device_num_min = lcore_info[lcore].lcore_ll->device_num; core_add = lcore; @@ -1207,7 +1207,7 @@ main(int argc, char *argv[]) } /* Launch all data cores. */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(switch_worker, mbuf_pool, lcore_id); } @@ -1231,7 +1231,7 @@ main(int argc, char *argv[]) "failed to start vhost driver.\n"); } - RTE_LCORE_FOREACH_SLAVE(lcore_id) + RTE_LCORE_FOREACH_WORKER(lcore_id) rte_eal_wait_lcore(lcore_id); return 0; diff --git a/examples/timer/main.c b/examples/timer/main.c index 0259022f104e..50a0f60951e2 100644 --- a/examples/timer/main.c +++ b/examples/timer/main.c @@ -100,7 +100,7 @@ main(int argc, char **argv) rte_timer_init(&timer0); rte_timer_init(&timer1); - /* load timer0, every second, on master lcore, reloaded automatically */ + /* load timer0, every second, on initial lcore, reloaded automatically */ hz = rte_get_timer_hz(); lcore_id = rte_lcore_id(); rte_timer_reset(&timer0, hz, PERIODICAL, lcore_id, timer0_cb, NULL); @@ -109,12 +109,12 @@ main(int argc, char **argv) lcore_id = rte_get_next_lcore(lcore_id, 0, 1); rte_timer_reset(&timer1, hz/3, SINGLE, lcore_id, timer1_cb, NULL); - /* call lcore_mainloop() on every slave lcore */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + /* call lcore_mainloop() on every worker lcore */ + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(lcore_mainloop, NULL, lcore_id); } - /* call it on master lcore too */ + /* call it on initial lcore too */ (void) lcore_mainloop(NULL); return 0; diff --git a/examples/vhost/main.c b/examples/vhost/main.c index 312829e8b930..012aa6706ca1 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -1205,7 +1205,7 @@ destroy_device(int vid) /* Set the dev_removal_flag on each lcore. */ - RTE_LCORE_FOREACH_SLAVE(lcore) + RTE_LCORE_FOREACH_WORKER(lcore) lcore_info[lcore].dev_removal_flag = REQUEST_DEV_REMOVAL; /* @@ -1213,7 +1213,7 @@ destroy_device(int vid) * we can be sure that they can no longer access the device removed * from the linked lists and that the devices are no longer in use. */ - RTE_LCORE_FOREACH_SLAVE(lcore) { + RTE_LCORE_FOREACH_WORKER(lcore) { while (lcore_info[lcore].dev_removal_flag != ACK_DEV_REMOVAL) rte_pause(); } @@ -1258,7 +1258,7 @@ new_device(int vid) vdev->remove = 0; /* Find a suitable lcore to add the device. */ - RTE_LCORE_FOREACH_SLAVE(lcore) { + RTE_LCORE_FOREACH_WORKER(lcore) { if (lcore_info[lcore].device_num < device_num_min) { device_num_min = lcore_info[lcore].device_num; core_add = lcore; @@ -1507,7 +1507,7 @@ main(int argc, char *argv[]) } /* Launch all data cores. 
*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) + RTE_LCORE_FOREACH_WORKER(lcore_id) rte_eal_remote_launch(switch_worker, NULL, lcore_id); if (client_mode) @@ -1568,7 +1568,7 @@ main(int argc, char *argv[]) } } - RTE_LCORE_FOREACH_SLAVE(lcore_id) + RTE_LCORE_FOREACH_WORKER(lcore_id) rte_eal_wait_lcore(lcore_id); return 0; diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c index d08826c868f8..25fb6fd62f1b 100644 --- a/examples/vmdq/main.c +++ b/examples/vmdq/main.c @@ -656,8 +656,8 @@ main(int argc, char *argv[]) } /* call lcore_main() on every lcore */ - rte_eal_mp_remote_launch(lcore_main, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(lcore_main, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c index f417b2fd9b91..e1c8ed0c381a 100644 --- a/examples/vmdq_dcb/main.c +++ b/examples/vmdq_dcb/main.c @@ -702,12 +702,12 @@ main(int argc, char *argv[]) rte_exit(EXIT_FAILURE, "Cannot initialize network ports\n"); } - /* call lcore_main() on every slave lcore */ + /* call lcore_main() on every worker lcore */ i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(lcore_main, (void*)i++, lcore_id); } - /* call on master too */ + /* call on initial too */ (void) lcore_main((void*)i); return 0; From patchwork Wed Jul 1 19:46:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72655 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B32DDA0350; Wed, 1 Jul 2020 21:49:44 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 287C11D54F; Wed, 1 Jul 2020 21:47:27 +0200 (CEST) Received: from mail-pj1-f67.google.com (mail-pj1-f67.google.com [209.85.216.67]) by dpdk.org (Postfix) with ESMTP id 0ABB51D535 for ; Wed, 1 Jul 2020 21:47:25 +0200 (CEST) Received: by mail-pj1-f67.google.com with SMTP id l6so8293801pjq.1 for ; Wed, 01 Jul 2020 12:47:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=gY95tjbbLspwO1mZqwsh0b2j3kKWOm4FnvQonp+EeR0=; b=cQZa6XeHVwlER+wnzeDY6G+bTNhwPyBytLj23HJy05HnvGEDWUQn7nOO9aKDlG5ZeC lKRu/K+JgXNLDj6e7YM16BI5RzFCQZ2I1OPnejLVFFbmCI1XyCqafIKFGhl5N5y4WIwP VBgrYOrkPgclK6Vf9YgOrAgEiGh+3Im2wEoiWNczXuXCNMy4C48tAoXD66Y5JWm4SceN xnq8yuPYjoCOz7iNOEkZ1CMmgqQoYvXRvNiCGR9MUC3Vs7LnKW9WonSBzu5M5LhdH7Qf MCRwgRxJ8pOv/CiR1dYWWDGYS/6Ax8m0CG6PwAVn3tA7NY+jVz2y3tx8GOegA9NhBDjN Q9Jw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=gY95tjbbLspwO1mZqwsh0b2j3kKWOm4FnvQonp+EeR0=; b=BaFgv6jxiHUVoQ9Vhgpx6vOnVG02nyoJVHFxc3a9l9OgLQ36h+oity3Ft52orS5iL1 2tDyZ8vf5mBb/p0cORwfE8oDe91PGWRiDzgmzqon/skkDd0mPjhQIhF/Djsn57ALM58x Jgg4h+EbVEz+KFlbVnJfcllaJj2vDIjXe4f+A+wE3TsoVZvQyZY17I9D7QBPDoUIAqQV 1k4VbbgqW0Q7ehgiHj3SHz6Lk4k1AXTz119KdXcfdUpAXM59fBjfOO+tUdCD5CYoV5yM sNRqKZPuOtdFI4rSwnZ7SHcMGbyW/MnOM0x5KpDPnA/E+C7aA/GQz22IKQLYIwMPjapS ARBA== X-Gm-Message-State: 
AOAM533/c8lQifX5z+qpW6wqGOZYfTKyNFNSRWPSs4pmGLUPkU5Ku3V+ XCNV4/sooH/BlWkSPxO2V6CIEC8kVNM= X-Google-Smtp-Source: ABdhPJxAFb1b7D1p8I45o5Y6yiGVYFRNuM9NqS4QEV/AJJ89YogVOKbDW2omYC5LLmJsQ2WF7K4f2A== X-Received: by 2002:a17:90b:1b52:: with SMTP id nv18mr24604998pjb.129.1593632843796; Wed, 01 Jul 2020 12:47:23 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:22 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:41 -0700 Message-Id: <20200701194650.10705-19-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 18/27] app/test-pmd: change references to master/slave X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use new API and naming convention Signed-off-by: Stephen Hemminger --- app/test-pmd/config.c | 4 ++-- app/test-pmd/parameters.c | 2 +- app/test-pmd/softnicfwd.c | 2 +- app/test-pmd/testpmd.c | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index a7112c998bdb..816d9e2064ef 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2649,9 +2649,9 @@ set_fwd_lcores_list(unsigned int *lcorelist, unsigned int nb_lc) printf("lcore %u not enabled\n", lcore_cpuid); return -1; } - if (lcore_cpuid == rte_get_master_lcore()) { + if (lcore_cpuid == rte_get_initial_lcore()) { printf("lcore %u cannot be masked on for running " - "packet forwarding, which is the master lcore " + "packet forwarding, which is the initial lcore " "and reserved for command line parsing only\n", lcore_cpuid); return -1; diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index f761e14707bf..48aafa289367 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -88,7 +88,7 @@ usage(char* progname) printf(" --nb-ports=N: set the number of forwarding ports " "(1 <= N <= %d).\n", nb_ports); printf(" --coremask=COREMASK: hexadecimal bitmask of cores running " - "the packet forwarding test. The master lcore is reserved for " + "the packet forwarding test. The initial lcore is reserved for " "command line parsing only, and cannot be masked on for " "packet forwarding.\n"); printf(" --portmask=PORTMASK: hexadecimal bitmask of ports used " diff --git a/app/test-pmd/softnicfwd.c b/app/test-pmd/softnicfwd.c index e9d437364467..d773551845e4 100644 --- a/app/test-pmd/softnicfwd.c +++ b/app/test-pmd/softnicfwd.c @@ -654,7 +654,7 @@ softnic_fwd_begin(portid_t pi) if (!rte_lcore_is_enabled(lcore)) continue; - if (lcore == rte_get_master_lcore()) + if (lcore == rte_get_initial_lcore()) continue; if (fwd_core_present == 0) { diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 4989d22ca86c..a68ef351e37e 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -83,7 +83,7 @@ uint16_t verbose_level = 0; /**< Silent by default. */ int testpmd_logtype; /**< Log type for testpmd logs */ -/* use master core for command line ? 
*/ +/* use initial core for command line ? */ uint8_t interactive = 0; uint8_t auto_start = 0; uint8_t tx_first; @@ -552,7 +552,7 @@ set_default_fwd_lcores_config(void) } socket_ids[num_sockets++] = sock_num; } - if (i == rte_get_master_lcore()) + if (i == rte_get_initial_lcore()) continue; fwd_lcores_cpuids[nb_lc++] = i; } From patchwork Wed Jul 1 19:46:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72656 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1D2C7A0350; Wed, 1 Jul 2020 21:49:52 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 26F1B1D558; Wed, 1 Jul 2020 21:47:28 +0200 (CEST) Received: from mail-pf1-f172.google.com (mail-pf1-f172.google.com [209.85.210.172]) by dpdk.org (Postfix) with ESMTP id 6A38E1D53F for ; Wed, 1 Jul 2020 21:47:26 +0200 (CEST) Received: by mail-pf1-f172.google.com with SMTP id m9so1178141pfh.0 for ; Wed, 01 Jul 2020 12:47:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=6lrew1sBu5xUZxs6DAtj6W7O8h85tPgH5gZLmpD1pTc=; b=eSQH4ukg/M7idqP68lv3GMgwkH3U//ojcXKRGYzYKalmdj4CDPiJEb4Yx2mGe5Y10P 72uSOlkA706bS0PcQTM9b4GizRmlXowxdYMiBddzjHKb5CJqaN8L1n8XRHtmtMmLVTN6 UPKcAFXVTehffKO+SvUTIx2DaLEQRgbeuD4nfKC7POuhYh7AYI0V+aEOBdeowZfO67J1 dz3O9WKatddrXLiUDBeGu2n0ovpxUJQJhDIxFYsRGdn3cZ0qvj4qJqMwelLSwpLrRHU2 4ihxqdonjJdBrHCMAtRTcBZS2giO1T+6mTP3PYc7HksA2ljGJbNi/tMYEyfICaoTfbaB zoTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=6lrew1sBu5xUZxs6DAtj6W7O8h85tPgH5gZLmpD1pTc=; b=hkFqOLME1jxG1R3808c+caq8Kgrkrq+rCSf0H3mDPXhCTGA3t9vvOBlq3U4VCwyz7x U47fNUPa36X83zWK/m95e5EbfYiWdeAGaupv3N0Bf4OsQQhSQchnGXIF8YKqgJ1ZwFaq lHz0Y32blu6VZAgPu15x+9TwrEL9xGCrZD8HeSUwWDSHmjOTkgRryjgP8UTX7d78Onru x6a45Yw+W8/P8bTqT575cOl9RbNcd9HEgaglgwROmWj2i/YC5JAzWum8BP6tH9nmQgIU NiMLqs0zlFVhiQKFiWyJtvWdNRH5RDjmwgcLnLsWP56mHbshLcgPklyYw/Bg4wBOREn2 VZlg== X-Gm-Message-State: AOAM532Aiaz6zTPkIW3nHWiba5rKbDDd+Z55DxTVyXB3lTVNFv/Pk2cw ifQRkfCS+CtK4NXHmWPXCqFnkqjJ0Gc= X-Google-Smtp-Source: ABdhPJxFz+9RzcFaChx0/Nv08pQFd8yCyOQ2a+nPx0nnUrAoe1/Y5fbrRwPuW8qXNWOhZ6zUSQ0z6g== X-Received: by 2002:a62:64ce:: with SMTP id y197mr2728202pfb.19.1593632845094; Wed, 01 Jul 2020 12:47:25 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:24 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:42 -0700 Message-Id: <20200701194650.10705-20-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 19/27] test-eventdev: replace references to slave with worker lcores X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use new API terminolgy Signed-off-by: Stephen Hemminger --- app/test-eventdev/evt_options.c | 2 +- app/test-eventdev/test_order_common.c | 12 ++++++------ app/test-eventdev/test_perf_common.c | 16 ++++++++-------- app/test-eventdev/test_pipeline_common.c | 8 ++++---- 4 files changed, 19 insertions(+), 19 deletions(-) diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c index c60b61a904b0..e09a0592673f 100644 --- a/app/test-eventdev/evt_options.c +++ b/app/test-eventdev/evt_options.c @@ -438,7 +438,7 @@ evt_options_dump(struct evt_options *opt) evt_dump("verbose_level", "%d", opt->verbose_level); evt_dump("socket_id", "%d", opt->socket_id); evt_dump("pool_sz", "%d", opt->pool_sz); - evt_dump("master lcore", "%d", rte_get_master_lcore()); + evt_dump("initial lcore", "%d", rte_get_initial_lcore()); evt_dump("nb_pkts", "%"PRIu64, opt->nb_pkts); evt_dump("nb_timers", "%"PRIu64, opt->nb_timers); evt_dump_begin("available lcores"); diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c index 4190f9ade82b..4082e2f210f8 100644 --- a/app/test-eventdev/test_order_common.c +++ b/app/test-eventdev/test_order_common.c @@ -74,15 +74,15 @@ order_opt_check(struct evt_options *opt) return -1; } - /* 1 producer + N workers + 1 master */ + /* 1 producer + N workers + 1 initial lcore */ if (rte_lcore_count() < 3) { evt_err("test need minimum 3 lcores"); return -1; } /* Validate worker lcores */ - if (evt_lcores_has_overlap(opt->wlcores, rte_get_master_lcore())) { - evt_err("worker lcores overlaps with master lcore"); + if (evt_lcores_has_overlap(opt->wlcores, rte_get_initial_lcore())) { + evt_err("worker lcores overlaps with initial lcore"); return -1; } @@ -117,8 +117,8 @@ order_opt_check(struct evt_options *opt) } /* Validate producer lcore */ - if (plcore == (int)rte_get_master_lcore()) { - evt_err("producer lcore and master lcore should be different"); + if (plcore == (int)rte_get_initial_lcore()) { + evt_err("producer lcore and initial lcore should be different"); return -1; } if (!rte_lcore_is_enabled(plcore)) { @@ -245,7 +245,7 @@ order_launch_lcores(struct evt_test *test, struct evt_options *opt, int wkr_idx = 0; /* launch workers */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (!(opt->wlcores[lcore_id])) continue; diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c index b3af4bfecaff..752603803b25 100644 --- a/app/test-eventdev/test_perf_common.c +++ b/app/test-eventdev/test_perf_common.c @@ -254,7 +254,7 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt, 
int port_idx = 0; /* launch workers */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (!(opt->wlcores[lcore_id])) continue; @@ -268,7 +268,7 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt, } /* launch producers */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (!(opt->plcores[lcore_id])) continue; @@ -541,8 +541,8 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues) { unsigned int lcores; - /* N producer + N worker + 1 master when producer cores are used - * Else N worker + 1 master when Rx adapter is used + /* N producer + N worker + 1 initial lcore when producer cores are used + * Else N worker + 1 initial lcore when Rx adapter is used */ lcores = opt->prod_type == EVT_PROD_TYPE_SYNT ? 3 : 2; @@ -552,8 +552,8 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues) } /* Validate worker lcores */ - if (evt_lcores_has_overlap(opt->wlcores, rte_get_master_lcore())) { - evt_err("worker lcores overlaps with master lcore"); + if (evt_lcores_has_overlap(opt->wlcores, rte_get_initial_lcore())) { + evt_err("worker lcores overlaps with initial lcore"); return -1; } if (evt_lcores_has_overlap_multi(opt->wlcores, opt->plcores)) { @@ -573,8 +573,8 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues) opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR) { /* Validate producer lcores */ if (evt_lcores_has_overlap(opt->plcores, - rte_get_master_lcore())) { - evt_err("producer lcores overlaps with master lcore"); + rte_get_initial_lcore())) { + evt_err("producer lcores overlaps with initial lcore"); return -1; } if (evt_has_disabled_lcore(opt->plcores)) { diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c index 17088b1b48e4..00ec3dad6bf7 100644 --- a/app/test-eventdev/test_pipeline_common.c +++ b/app/test-eventdev/test_pipeline_common.c @@ -60,7 +60,7 @@ pipeline_launch_lcores(struct evt_test *test, struct evt_options *opt, int port_idx = 0; /* launch workers */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (!(opt->wlcores[lcore_id])) continue; @@ -107,7 +107,7 @@ pipeline_opt_check(struct evt_options *opt, uint64_t nb_queues) { unsigned int lcores; /* - * N worker + 1 master + * N worker + 1 initial lcore */ lcores = 2; @@ -129,8 +129,8 @@ pipeline_opt_check(struct evt_options *opt, uint64_t nb_queues) } /* Validate worker lcores */ - if (evt_lcores_has_overlap(opt->wlcores, rte_get_master_lcore())) { - evt_err("worker lcores overlaps with master lcore"); + if (evt_lcores_has_overlap(opt->wlcores, rte_get_initial_lcore())) { + evt_err("worker lcores overlaps with initial lcore"); return -1; } if (evt_has_disabled_lcore(opt->wlcores)) { From patchwork Wed Jul 1 19:46:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72658 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8B688A0350; Wed, 1 Jul 2020 21:50:08 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 962191D56C; Wed, 1 Jul 2020 21:47:33 +0200 (CEST) Received: from mail-pj1-f67.google.com (mail-pj1-f67.google.com [209.85.216.67]) by dpdk.org (Postfix) with ESMTP id C21611D546 for ; Wed, 1 Jul 2020 21:47:29 +0200 (CEST) Received: by mail-pj1-f67.google.com with SMTP id 
o22so6402916pjw.2 for ; Wed, 01 Jul 2020 12:47:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=KwaCsQEx9S9o7Lz2D/4I0p1yE/mNST1I3Kcw64KIHww=; b=pgIXjWT5v1R8pBayhgoz2bV5rZeBi+2ap278p+JSKVYDCY5HwAii1Pi/1uEzdJ/pRy czBsW+JONNRsAiVRdESrLHTtHcFv3VOx1+5x8bztpGH2A4eDoop7OAw8Fyp3AvK3Fssk 68M89tOtkBOW/Zpr8jwdVqMEL2K7/E6+wFtVtqIYXGO1Ae/pZw527eIZjiFLBw/dy916 abvuw+08Zi3U9Qv6Yt0YB+opmQ0z4x2kwUhM7sbZRKGI6ppoSB/BDZ71RQuvnxuApvyo 7QuwAwt1tiPuRhIpcV+FSYmEEc5cnFv3cYtMiwNMy5wNrYgBFH8OhfYQIIPgUMg7i1Ez d0uA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=KwaCsQEx9S9o7Lz2D/4I0p1yE/mNST1I3Kcw64KIHww=; b=VLSxanc6R123mZTR+dwNI0qpuV1YpK77JlA8wB3nmCYxmyhy4XATj2VLGlC7+SLqcj c9ded4ryQPJGUjEThKf1xHwwNF+Ja/FPmyCDN99YNf/rFA+7SDL7oXztzspsACJbCZqm S8m52tnnaGagpbUvnGtUFywv2jqcU1XoP7RPu+evtCO1T+qkViS0LY65WXwr9YDIc5E5 Zh/9sRweqJvNyLdvpiuTN4lr/sv3Y4+GQYHQwUSbJQrdQAt44v/uR3a6yXXYPgwiYBiH 3LCCiYZmJrdJK0WsOmw9I30wuAs3xNqB6wli0obgYNkVRffJ/Dri0d6f1v3WwZwQT9+/ 6sCg== X-Gm-Message-State: AOAM533F9uO6EwbCxbEQlXXegs5j5FO75mloNdjrnQ1VwdPycHVNvgeb BHlIL4VvyN19IN0n5wR9zp91HZpPFC8= X-Google-Smtp-Source: ABdhPJzfT+xdO1rXwFp4CYxCl60vJWtk+YvQanB6NFj6MC8hR/M5RF3bCXTV4Q/Ye4/097ub+y1dyg== X-Received: by 2002:a17:902:bd08:: with SMTP id p8mr13028530pls.154.1593632846722; Wed, 01 Jul 2020 12:47:26 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:25 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:43 -0700 Message-Id: <20200701194650.10705-21-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 20/27] app/test: replace refernces to master/slave X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use initial and worker when referring to lcores Signed-off-by: Stephen Hemminger --- app/test/autotest_test_funcs.py | 2 +- app/test/test.c | 2 +- app/test/test_atomic.c | 26 ++++---- app/test/test_barrier.c | 2 +- app/test/test_cryptodev.c | 16 ++--- app/test/test_distributor.c | 8 +-- app/test/test_distributor_perf.c | 10 +-- app/test/test_eal_flags.c | 32 +++++----- app/test/test_efd.c | 2 +- app/test/test_efd_perf.c | 2 +- app/test/test_func_reentrancy.c | 20 +++--- app/test/test_hash_multiwriter.c | 4 +- app/test/test_hash_readwrite.c | 38 +++++------ app/test/test_kni.c | 16 ++--- app/test/test_malloc.c | 12 ++-- app/test/test_mbuf.c | 36 +++++------ app/test/test_mcslock.c | 28 ++++---- app/test/test_mempool_perf.c | 10 +-- app/test/test_mp_secondary.c | 2 +- app/test/test_pdump.c | 2 +- app/test/test_per_lcore.c | 14 ++-- app/test/test_pmd_perf.c | 20 +++--- app/test/test_rcu_qsbr.c | 2 +- app/test/test_rcu_qsbr_perf.c | 2 +- 
app/test/test_ring_perf.c | 14 ++-- app/test/test_ring_stress_impl.h | 10 +-- app/test/test_rwlock.c | 28 ++++---- app/test/test_service_cores.c | 10 +-- app/test/test_spinlock.c | 34 +++++----- app/test/test_stack.c | 2 +- app/test/test_stack_perf.c | 6 +- app/test/test_ticketlock.c | 36 +++++------ app/test/test_timer.c | 106 +++++++++++++++---------------- app/test/test_timer_racecond.c | 27 ++++---- app/test/test_timer_secondary.c | 2 +- app/test/test_trace_perf.c | 4 +- 36 files changed, 294 insertions(+), 293 deletions(-) diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py index 26688b71323e..23d530f1e18b 100644 --- a/app/test/autotest_test_funcs.py +++ b/app/test/autotest_test_funcs.py @@ -102,7 +102,7 @@ def rwlock_autotest(child, test_name): index = child.expect(["Test OK", "Test Failed", "Hello from core ([0-9]*) !", - "Global write lock taken on master " + "Global write lock taken on initial lcore " "core ([0-9]*)", pexpect.TIMEOUT], timeout=10) # ok diff --git a/app/test/test.c b/app/test/test.c index 94d26ab1f67c..a9fce18ca73e 100644 --- a/app/test/test.c +++ b/app/test/test.c @@ -58,7 +58,7 @@ do_recursive_call(void) #endif #endif { "test_missing_c_flag", no_action }, - { "test_master_lcore_flag", no_action }, + { "test_initial_lcore_flag", no_action }, { "test_invalid_n_flag", no_action }, { "test_no_hpet_flag", no_action }, { "test_whitelist_flag", no_action }, diff --git a/app/test/test_atomic.c b/app/test/test_atomic.c index 214452e54399..88c3639d759d 100644 --- a/app/test/test_atomic.c +++ b/app/test/test_atomic.c @@ -456,7 +456,7 @@ test_atomic(void) printf("usual inc/dec/add/sub functions\n"); - rte_eal_mp_remote_launch(test_atomic_usual, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_usual, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_set(&synchro, 0); @@ -482,7 +482,7 @@ test_atomic(void) rte_atomic32_set(&a32, 0); rte_atomic16_set(&a16, 0); rte_atomic64_set(&count, 0); - rte_eal_mp_remote_launch(test_atomic_tas, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_tas, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_set(&synchro, 0); @@ -499,7 +499,7 @@ test_atomic(void) rte_atomic16_set(&a16, 0); rte_atomic64_set(&count, 0); rte_eal_mp_remote_launch(test_atomic_addsub_and_return, NULL, - SKIP_MASTER); + SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_set(&synchro, 0); @@ -510,8 +510,8 @@ test_atomic(void) } /* - * Set a64, a32 and a16 with the same value of minus "number of slave - * lcores", launch all slave lcores to atomically increase by one and + * Set a64, a32 and a16 with the same value of minus "number of worker + * lcores", launch all worker lcores to atomically increase by one and * test them respectively. * Each lcore should have only one chance to increase a64 by one and * then check if it is equal to 0, but there should be only one lcore @@ -519,7 +519,7 @@ test_atomic(void) * Then a variable of "count", initialized to zero, is increased by * one if a64, a32 or a16 is 0 after being increased and tested * atomically. - * We can check if "count" is finally equal to 3 to see if all slave + * We can check if "count" is finally equal to 3 to see if all worker * lcores performed "atomic inc and test" right. 
*/ printf("inc and test\n"); @@ -533,7 +533,7 @@ test_atomic(void) rte_atomic64_set(&a64, (int64_t)(1 - (int64_t)rte_lcore_count())); rte_atomic32_set(&a32, (int32_t)(1 - (int32_t)rte_lcore_count())); rte_atomic16_set(&a16, (int16_t)(1 - (int16_t)rte_lcore_count())); - rte_eal_mp_remote_launch(test_atomic_inc_and_test, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_inc_and_test, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); @@ -544,7 +544,7 @@ test_atomic(void) } /* - * Same as above, but this time we set the values to "number of slave + * Same as above, but this time we set the values to "number of worker * lcores", and decrement instead of increment. */ printf("dec and test\n"); @@ -555,7 +555,7 @@ test_atomic(void) rte_atomic64_set(&a64, (int64_t)(rte_lcore_count() - 1)); rte_atomic32_set(&a32, (int32_t)(rte_lcore_count() - 1)); rte_atomic16_set(&a16, (int16_t)(rte_lcore_count() - 1)); - rte_eal_mp_remote_launch(test_atomic_dec_and_test, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_dec_and_test, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); @@ -569,10 +569,10 @@ test_atomic(void) /* * This case tests the functionality of rte_atomic128_cmp_exchange * API. It calls rte_atomic128_cmp_exchange with four kinds of memory - * models successively on each slave core. Once each 128-bit atomic + * models successively on each worker core. Once each 128-bit atomic * compare and swap operation is successful, it updates the global * 128-bit counter by 2 for the first 64-bit and 1 for the second - * 64-bit. Each slave core iterates this test N times. + * 64-bit. Each worker core iterates this test N times. * At the end of test, verify whether the first 64-bits of the 128-bit * counter and the second 64bits is differ by the total iterations. If * it is, the test passes. 
@@ -585,7 +585,7 @@ test_atomic(void) count128.val[1] = 0; rte_eal_mp_remote_launch(test_atomic128_cmp_exchange, NULL, - SKIP_MASTER); + SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); @@ -619,7 +619,7 @@ test_atomic(void) token64 = ((uint64_t)get_crc8(&t.u8[0], sizeof(token64) - 1) << 56) | (t.u64 & 0x00ffffffffffffff); - rte_eal_mp_remote_launch(test_atomic_exchange, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_exchange, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); diff --git a/app/test/test_barrier.c b/app/test/test_barrier.c index 43b5f6232c6d..a27a4b0ae06f 100644 --- a/app/test/test_barrier.c +++ b/app/test/test_barrier.c @@ -236,7 +236,7 @@ plock_test(uint64_t iter, enum plock_use_type utype) /* test phase - start and wait for completion on each active lcore */ - rte_eal_mp_remote_launch(plock_test1_lcore, lpt, CALL_MASTER); + rte_eal_mp_remote_launch(plock_test1_lcore, lpt, CALL_INITIAL); rte_eal_mp_wait_lcore(); /* validation phase - make sure that shared and local data match */ diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 8f631468b740..2ebc5c6ac806 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -474,29 +474,29 @@ testsuite_setup(void) char vdev_args[VDEV_ARGS_SIZE] = {""}; char temp_str[VDEV_ARGS_SIZE] = {"mode=multi-core," "ordering=enable,name=cryptodev_test_scheduler,corelist="}; - uint16_t slave_core_count = 0; + uint16_t worker_core_count = 0; uint16_t socket_id = 0; if (gbl_driver_id == rte_cryptodev_driver_id_get( RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD))) { - /* Identify the Slave Cores - * Use 2 slave cores for the device args + /* Identify the Worker Cores + * Use 2 worker cores for the device args */ - RTE_LCORE_FOREACH_SLAVE(i) { - if (slave_core_count > 1) + RTE_LCORE_FOREACH_WORKER(i) { + if (worker_core_count > 1) break; snprintf(vdev_args, sizeof(vdev_args), "%s%d", temp_str, i); strcpy(temp_str, vdev_args); strlcat(temp_str, ";", sizeof(temp_str)); - slave_core_count++; + worker_core_count++; socket_id = rte_lcore_to_socket_id(i); } - if (slave_core_count != 2) { + if (worker_core_count != 2) { RTE_LOG(ERR, USER1, "Cryptodev scheduler test require at least " - "two slave cores to run. " + "two worker cores to run. 
" "Please use the correct coremask.\n"); return TEST_FAILED; } diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c index ba1f81cf8d19..ebc415953872 100644 --- a/app/test/test_distributor.c +++ b/app/test/test_distributor.c @@ -654,13 +654,13 @@ test_distributor(void) sizeof(worker_params.name)); rte_eal_mp_remote_launch(handle_work, - &worker_params, SKIP_MASTER); + &worker_params, SKIP_INITIAL); if (sanity_test(&worker_params, p) < 0) goto err; quit_workers(&worker_params, p); rte_eal_mp_remote_launch(handle_work_with_free_mbufs, - &worker_params, SKIP_MASTER); + &worker_params, SKIP_INITIAL); if (sanity_test_with_mbuf_alloc(&worker_params, p) < 0) goto err; quit_workers(&worker_params, p); @@ -668,7 +668,7 @@ test_distributor(void) if (rte_lcore_count() > 2) { rte_eal_mp_remote_launch(handle_work_for_shutdown_test, &worker_params, - SKIP_MASTER); + SKIP_INITIAL); if (sanity_test_with_worker_shutdown(&worker_params, p) < 0) goto err; @@ -676,7 +676,7 @@ test_distributor(void) rte_eal_mp_remote_launch(handle_work_for_shutdown_test, &worker_params, - SKIP_MASTER); + SKIP_INITIAL); if (test_flush_with_worker_shutdown(&worker_params, p) < 0) goto err; diff --git a/app/test/test_distributor_perf.c b/app/test/test_distributor_perf.c index f153bcf9bd87..06fb2dbb183b 100644 --- a/app/test/test_distributor_perf.c +++ b/app/test/test_distributor_perf.c @@ -54,10 +54,10 @@ time_cache_line_switch(void) /* allocate a full cache line for data, we use only first byte of it */ uint64_t data[RTE_CACHE_LINE_SIZE*3 / sizeof(uint64_t)]; - unsigned i, slaveid = rte_get_next_lcore(rte_lcore_id(), 0, 0); + unsigned i, workerid = rte_get_next_lcore(rte_lcore_id(), 0, 0); volatile uint64_t *pdata = &data[0]; *pdata = 1; - rte_eal_remote_launch((lcore_function_t *)flip_bit, &data[0], slaveid); + rte_eal_remote_launch((lcore_function_t *)flip_bit, &data[0], workerid); while (*pdata) rte_pause(); @@ -72,7 +72,7 @@ time_cache_line_switch(void) while (*pdata) rte_pause(); *pdata = 2; - rte_eal_wait_lcore(slaveid); + rte_eal_wait_lcore(workerid); printf("==== Cache line switch test ===\n"); printf("Time for %u iterations = %"PRIu64" ticks\n", (1<single_read = end / i; for (n = 0; n < NUM_TEST; n++) { - unsigned int tot_slave_lcore = rte_lcore_count() - 1; - if (tot_slave_lcore < core_cnt[n] * 2) + unsigned int tot_worker_lcore = rte_lcore_count() - 1; + if (tot_worker_lcore < core_cnt[n] * 2) goto finish; rte_atomic64_clear(&greads); @@ -467,7 +467,7 @@ test_hash_readwrite_perf(struct perf *perf_results, int use_htm, for (i = 0; i < core_cnt[n]; i++) rte_eal_remote_launch(test_rw_reader, (void *)(uintptr_t)read_cnt, - slave_core_ids[i]); + worker_core_ids[i]); rte_eal_mp_wait_lcore(); @@ -476,7 +476,7 @@ test_hash_readwrite_perf(struct perf *perf_results, int use_htm, for (; i < core_cnt[n] * 2; i++) rte_eal_remote_launch(test_rw_writer, (void *)((uintptr_t)start_coreid), - slave_core_ids[i]); + worker_core_ids[i]); rte_eal_mp_wait_lcore(); @@ -521,20 +521,20 @@ test_hash_readwrite_perf(struct perf *perf_results, int use_htm, for (i = core_cnt[n]; i < core_cnt[n] * 2; i++) rte_eal_remote_launch(test_rw_writer, (void *)((uintptr_t)start_coreid), - slave_core_ids[i]); + worker_core_ids[i]); for (i = 0; i < core_cnt[n]; i++) rte_eal_remote_launch(test_rw_reader, (void *)(uintptr_t)read_cnt, - slave_core_ids[i]); + worker_core_ids[i]); } else { for (i = 0; i < core_cnt[n]; i++) rte_eal_remote_launch(test_rw_reader, (void *)(uintptr_t)read_cnt, - slave_core_ids[i]); + worker_core_ids[i]); for (; i < 
core_cnt[n] * 2; i++) rte_eal_remote_launch(test_rw_writer, (void *)((uintptr_t)start_coreid), - slave_core_ids[i]); + worker_core_ids[i]); } rte_eal_mp_wait_lcore(); @@ -626,8 +626,8 @@ test_hash_rw_perf_main(void) return TEST_SKIPPED; } - RTE_LCORE_FOREACH_SLAVE(core_id) { - slave_core_ids[i] = core_id; + RTE_LCORE_FOREACH_WORKER(core_id) { + worker_core_ids[i] = core_id; i++; } @@ -710,8 +710,8 @@ test_hash_rw_func_main(void) return TEST_SKIPPED; } - RTE_LCORE_FOREACH_SLAVE(core_id) { - slave_core_ids[i] = core_id; + RTE_LCORE_FOREACH_WORKER(core_id) { + worker_core_ids[i] = core_id; i++; } diff --git a/app/test/test_kni.c b/app/test/test_kni.c index e47ab36e0231..dfa28a2a2999 100644 --- a/app/test/test_kni.c +++ b/app/test/test_kni.c @@ -85,7 +85,7 @@ static struct rte_kni_ops kni_ops = { .config_promiscusity = NULL, }; -static unsigned lcore_master, lcore_ingress, lcore_egress; +static unsigned lcore_initial, lcore_ingress, lcore_egress; static struct rte_kni *test_kni_ctx; static struct test_kni_stats stats; @@ -202,7 +202,7 @@ test_kni_link_change(void) * supported by KNI kernel module. The ingress lcore will allocate mbufs and * transmit them to kernel space; while the egress lcore will receive the mbufs * from kernel space and free them. - * On the master lcore, several commands will be run to check handling the + * On the initial lcore, several commands will be run to check handling the * kernel requests. And it will finally set the flag to exit the KNI * transmitting/receiving to/from the kernel space. * @@ -217,7 +217,7 @@ test_kni_loop(__rte_unused void *arg) const unsigned lcore_id = rte_lcore_id(); struct rte_mbuf *pkts_burst[PKT_BURST_SZ]; - if (lcore_id == lcore_master) { + if (lcore_id == lcore_initial) { rte_delay_ms(KNI_TIMEOUT_MS); /* tests of handling kernel request */ if (system(IFCONFIG TEST_KNI_PORT" up") == -1) @@ -276,12 +276,12 @@ test_kni_allocate_lcores(void) { unsigned i, count = 0; - lcore_master = rte_get_master_lcore(); - printf("master lcore: %u\n", lcore_master); + lcore_initial = rte_get_initial_lcore(); + printf("initial lcore: %u\n", lcore_initial); for (i = 0; i < RTE_MAX_LCORE; i++) { if (count >=2 ) break; - if (rte_lcore_is_enabled(i) && i != lcore_master) { + if (rte_lcore_is_enabled(i) && i != lcore_initial) { count ++; if (count == 1) lcore_ingress = i; @@ -487,8 +487,8 @@ test_kni_processing(uint16_t port_id, struct rte_mempool *mp) if (ret != 0) goto fail_kni; - rte_eal_mp_remote_launch(test_kni_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(i) { + rte_eal_mp_remote_launch(test_kni_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(i) { if (rte_eal_wait_lcore(i) < 0) { ret = -1; goto fail_kni; diff --git a/app/test/test_malloc.c b/app/test/test_malloc.c index 71b3cfdde5cf..758e6194a852 100644 --- a/app/test/test_malloc.c +++ b/app/test/test_malloc.c @@ -1007,11 +1007,11 @@ test_malloc(void) else printf("test_realloc() passed\n"); /*----------------------------*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(test_align_overlap_per_lcore, NULL, lcore_id); } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) ret = -1; } @@ -1022,11 +1022,11 @@ test_malloc(void) else printf("test_align_overlap_per_lcore() passed\n"); /*----------------------------*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(test_reordered_free_per_lcore, NULL, lcore_id); } - 
RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) ret = -1; } @@ -1037,11 +1037,11 @@ test_malloc(void) else printf("test_reordered_free_per_lcore() passed\n"); /*----------------------------*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(test_random_alloc_free, NULL, lcore_id); } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) ret = -1; } diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c index 06e44f0a79f8..43922f66854f 100644 --- a/app/test/test_mbuf.c +++ b/app/test/test_mbuf.c @@ -72,7 +72,7 @@ #ifdef RTE_MBUF_REFCNT_ATOMIC -static volatile uint32_t refcnt_stop_slaves; +static volatile uint32_t refcnt_stop_workers; static unsigned refcnt_lcore[RTE_MAX_LCORE]; #endif @@ -1000,7 +1000,7 @@ test_pktmbuf_free_segment(struct rte_mempool *pktmbuf_pool) #ifdef RTE_MBUF_REFCNT_ATOMIC static int -test_refcnt_slave(void *arg) +test_refcnt_worker(void *arg) { unsigned lcore, free; void *mp = 0; @@ -1010,7 +1010,7 @@ test_refcnt_slave(void *arg) printf("%s started at lcore %u\n", __func__, lcore); free = 0; - while (refcnt_stop_slaves == 0) { + while (refcnt_stop_workers == 0) { if (rte_ring_dequeue(refcnt_mbuf_ring, &mp) == 0) { free++; rte_pktmbuf_free(mp); @@ -1038,7 +1038,7 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter, /* For each mbuf in the pool: * - allocate mbuf, * - increment it's reference up to N+1, - * - enqueue it N times into the ring for slave cores to free. + * - enqueue it N times into the ring for worker cores to free. */ for (i = 0, n = rte_mempool_avail_count(refcnt_pool); i != n && (m = rte_pktmbuf_alloc(refcnt_pool)) != NULL; @@ -1062,7 +1062,7 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter, rte_panic("(lcore=%u, iter=%u): was able to allocate only " "%u from %u mbufs\n", lcore, iter, i, n); - /* wait till slave lcores will consume all mbufs */ + /* wait till worker lcores will consume all mbufs */ while (!rte_ring_empty(refcnt_mbuf_ring)) ; @@ -1083,8 +1083,8 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter, } static int -test_refcnt_master(struct rte_mempool *refcnt_pool, - struct rte_ring *refcnt_mbuf_ring) +test_refcnt_main(struct rte_mempool *refcnt_pool, + struct rte_ring *refcnt_mbuf_ring) { unsigned i, lcore; @@ -1094,7 +1094,7 @@ test_refcnt_master(struct rte_mempool *refcnt_pool, for (i = 0; i != REFCNT_MAX_ITER; i++) test_refcnt_iter(lcore, i, refcnt_pool, refcnt_mbuf_ring); - refcnt_stop_slaves = 1; + refcnt_stop_workers = 1; rte_wmb(); printf("%s finished at lcore %u\n", __func__, lcore); @@ -1107,7 +1107,7 @@ static int test_refcnt_mbuf(void) { #ifdef RTE_MBUF_REFCNT_ATOMIC - unsigned int master, slave, tref; + unsigned int initial, worker, tref; int ret = -1; struct rte_mempool *refcnt_pool = NULL; struct rte_ring *refcnt_mbuf_ring = NULL; @@ -1139,26 +1139,26 @@ test_refcnt_mbuf(void) goto err; } - refcnt_stop_slaves = 0; + refcnt_stop_workers = 0; memset(refcnt_lcore, 0, sizeof (refcnt_lcore)); - rte_eal_mp_remote_launch(test_refcnt_slave, refcnt_mbuf_ring, - SKIP_MASTER); + rte_eal_mp_remote_launch(test_refcnt_worker, refcnt_mbuf_ring, + SKIP_INITIAL); - test_refcnt_master(refcnt_pool, refcnt_mbuf_ring); + test_refcnt_main(refcnt_pool, refcnt_mbuf_ring); rte_eal_mp_wait_lcore(); /* check that we porcessed all references */ tref = 0; - master = rte_get_master_lcore(); + initial = rte_get_initial_lcore(); - RTE_LCORE_FOREACH_SLAVE(slave) - 
tref += refcnt_lcore[slave]; + RTE_LCORE_FOREACH_WORKER(worker) + tref += refcnt_lcore[worker]; - if (tref != refcnt_lcore[master]) + if (tref != refcnt_lcore[initial]) rte_panic("referenced mbufs: %u, freed mbufs: %u\n", - tref, refcnt_lcore[master]); + tref, refcnt_lcore[initial]); rte_mempool_dump(stdout, refcnt_pool); rte_ring_dump(stdout, refcnt_mbuf_ring); diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c index ddccaafa9242..9cd9da19dd72 100644 --- a/app/test/test_mcslock.c +++ b/app/test/test_mcslock.c @@ -28,7 +28,7 @@ * These tests are derived from spin lock test cases. * * - The functional test takes all of these locks and launches the - * ''test_mcslock_per_core()'' function on each core (except the master). + * ''test_mcslock_per_core()'' function on each core (except the initial). * * - The function takes the global lock, display something, then releases * the global lock on each core. @@ -123,9 +123,9 @@ test_mcslock_perf(void) printf("\nTest with lock on %u cores...\n", (rte_lcore_count())); rte_atomic32_set(&synchro, 0); - rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_MASTER); + rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_INITIAL); - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(&lock); @@ -154,8 +154,8 @@ test_mcslock_try(__rte_unused void *arg) rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me); rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me); - /* Locked ml_try in the master lcore, so it should fail - * when trying to lock it in the slave lcore. + /* Locked ml_try in the initial lcore, so it should fail + * when trying to lock it in the worker lcore. */ if (rte_mcslock_trylock(&p_ml_try, &ml_try_me) == 0) { rte_mcslock_lock(&p_ml, &ml_me); @@ -185,20 +185,20 @@ test_mcslock(void) * Test mcs lock & unlock on each core */ - /* slave cores should be waiting: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be waiting: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } rte_mcslock_lock(&p_ml, &ml_me); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_mcslock_per_core, NULL, i); } - /* slave cores should be busy: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be busy: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } @@ -210,19 +210,19 @@ test_mcslock(void) /* * Test if it could return immediately from try-locking a locked object. * Here it will lock the mcs lock object first, then launch all the - * slave lcores to trylock the same mcs lock object. - * All the slave lcores should give up try-locking a locked object and + * worker lcores to trylock the same mcs lock object. + * All the worker lcores should give up try-locking a locked object and * return immediately, and then increase the "count" initialized with * zero by one per times. * We can check if the "count" is finally equal to the number of all - * slave lcores to see if the behavior of try-locking a locked + * worker lcores to see if the behavior of try-locking a locked * mcslock object is correct. 
*/ if (rte_mcslock_trylock(&p_ml_try, &ml_try_me) == 0) return -1; count = 0; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_mcslock_try, NULL, i); } rte_eal_mp_wait_lcore(); diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c index 60bda8aadbe8..383f3928f2c1 100644 --- a/app/test/test_mempool_perf.c +++ b/app/test/test_mempool_perf.c @@ -143,8 +143,8 @@ per_lcore_mempool_test(void *arg) stats[lcore_id].enq_count = 0; - /* wait synchro for slaves */ - if (lcore_id != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore_id != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0); start_cycles = rte_get_timer_cycles(); @@ -214,7 +214,7 @@ launch_cores(struct rte_mempool *mp, unsigned int cores) return -1; } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (cores == 1) break; cores--; @@ -222,13 +222,13 @@ launch_cores(struct rte_mempool *mp, unsigned int cores) mp, lcore_id); } - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); ret = per_lcore_mempool_test(mp); cores = cores_save; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (cores == 1) break; cores--; diff --git a/app/test/test_mp_secondary.c b/app/test/test_mp_secondary.c index ac15ddbf2009..2cc1586fcde0 100644 --- a/app/test/test_mp_secondary.c +++ b/app/test/test_mp_secondary.c @@ -94,7 +94,7 @@ run_secondary_instances(void) #endif snprintf(coremask, sizeof(coremask), "%x", \ - (1 << rte_get_master_lcore())); + (1 << rte_get_initial_lcore())); ret |= launch_proc(argv1); ret |= launch_proc(argv2); diff --git a/app/test/test_pdump.c b/app/test/test_pdump.c index 6a1180bcb78e..d2d2df7a8016 100644 --- a/app/test/test_pdump.c +++ b/app/test/test_pdump.c @@ -184,7 +184,7 @@ run_pdump_server_tests(void) }; snprintf(coremask, sizeof(coremask), "%x", - (1 << rte_get_master_lcore())); + (1 << rte_get_initial_lcore())); ret = test_pdump_init(); ret |= launch_p(argv1); diff --git a/app/test/test_per_lcore.c b/app/test/test_per_lcore.c index fcd00212f1eb..ff91a3cf5b2b 100644 --- a/app/test/test_per_lcore.c +++ b/app/test/test_per_lcore.c @@ -73,31 +73,31 @@ test_per_lcore(void) unsigned lcore_id; int ret; - rte_eal_mp_remote_launch(assign_vars, NULL, SKIP_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(assign_vars, NULL, SKIP_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } - rte_eal_mp_remote_launch(display_vars, NULL, SKIP_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(display_vars, NULL, SKIP_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } /* test if it could do remote launch twice at the same time or not */ - ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_MASTER); + ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_INITIAL); if (ret < 0) { printf("It fails to do remote launch but it should able to do\n"); return -1; } /* it should not be able to launch a lcore which is running */ - ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_MASTER); + ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_INITIAL); if (ret == 0) { printf("It does remote launch successfully but it should not at this time\n"); return -1; } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if 
(rte_eal_wait_lcore(lcore_id) < 0) return -1; } diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c index 352cd47156ba..efe0814f175e 100644 --- a/app/test/test_pmd_perf.c +++ b/app/test/test_pmd_perf.c @@ -278,7 +278,7 @@ alloc_lcore(uint16_t socketid) for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { if (LCORE_AVAIL != lcore_conf[lcore_id].status || lcore_conf[lcore_id].socketid != socketid || - lcore_id == rte_get_master_lcore()) + lcore_id == rte_get_initial_lcore()) continue; lcore_conf[lcore_id].status = LCORE_USED; lcore_conf[lcore_id].nb_ports = 0; @@ -664,7 +664,7 @@ exec_burst(uint32_t flags, int lcore) static int test_pmd_perf(void) { - uint16_t nb_ports, num, nb_lcores, slave_id = (uint16_t)-1; + uint16_t nb_ports, num, nb_lcores, worker_id = (uint16_t)-1; uint16_t nb_rxd = MAX_TRAFFIC_BURST; uint16_t nb_txd = MAX_TRAFFIC_BURST; uint16_t portid; @@ -702,13 +702,13 @@ test_pmd_perf(void) RTE_ETH_FOREACH_DEV(portid) { if (socketid == -1) { socketid = rte_eth_dev_socket_id(portid); - slave_id = alloc_lcore(socketid); - if (slave_id == (uint16_t)-1) { + worker_id = alloc_lcore(socketid); + if (worker_id == (uint16_t)-1) { printf("No avail lcore to run test\n"); return -1; } printf("Performance test runs on lcore %u socket %u\n", - slave_id, socketid); + worker_id, socketid); } if (socketid != rte_eth_dev_socket_id(portid)) { @@ -765,8 +765,8 @@ test_pmd_perf(void) "rte_eth_promiscuous_enable: err=%s, port=%d\n", rte_strerror(-ret), portid); - lcore_conf[slave_id].portlist[num++] = portid; - lcore_conf[slave_id].nb_ports++; + lcore_conf[worker_id].portlist[num++] = portid; + lcore_conf[worker_id].nb_ports++; } check_all_ports_link_status(nb_ports, RTE_PORT_ALL); @@ -791,13 +791,13 @@ test_pmd_perf(void) if (NULL == do_measure) do_measure = measure_rxtx; - rte_eal_remote_launch(main_loop, NULL, slave_id); + rte_eal_remote_launch(main_loop, NULL, worker_id); - if (rte_eal_wait_lcore(slave_id) < 0) + if (rte_eal_wait_lcore(worker_id) < 0) return -1; } else if (sc_flag == SC_BURST_POLL_FIRST || sc_flag == SC_BURST_XMIT_FIRST) - if (exec_burst(sc_flag, slave_id) < 0) + if (exec_burst(sc_flag, worker_id) < 0) return -1; /* port tear down */ diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c index 0a9e5ecd1a44..7ae66e4dfb76 100644 --- a/app/test/test_rcu_qsbr.c +++ b/app/test/test_rcu_qsbr.c @@ -1327,7 +1327,7 @@ test_rcu_qsbr_main(void) } num_cores = 0; - RTE_LCORE_FOREACH_SLAVE(core_id) { + RTE_LCORE_FOREACH_WORKER(core_id) { enabled_core_ids[num_cores] = core_id; num_cores++; } diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c index d35a6d089784..3017e71120ad 100644 --- a/app/test/test_rcu_qsbr_perf.c +++ b/app/test/test_rcu_qsbr_perf.c @@ -625,7 +625,7 @@ test_rcu_qsbr_main(void) rte_atomic64_init(&check_cycles); num_cores = 0; - RTE_LCORE_FOREACH_SLAVE(core_id) { + RTE_LCORE_FOREACH_WORKER(core_id) { enabled_core_ids[num_cores] = core_id; num_cores++; } diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index ac9bf5608daa..42d82d85bab2 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -297,7 +297,7 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, const int esize) lcore_count = 0; param1.size = param2.size = bulk_sizes[i]; param1.r = param2.r = r; - if (cores->c1 == rte_get_master_lcore()) { + if (cores->c1 == rte_get_initial_lcore()) { rte_eal_remote_launch(f2, ¶m2, cores->c2); f1(¶m1); rte_eal_wait_lcore(cores->c2); @@ -340,8 +340,8 @@ load_loop_fn_helper(struct 
thread_params *p, const int esize) if (burst == NULL) return -1; - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0) rte_pause(); @@ -397,12 +397,12 @@ run_on_all_cores(struct rte_ring *r, const int esize) param.size = bulk_sizes[i]; param.r = r; - /* clear synchro and start slaves */ + /* clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - if (rte_eal_mp_remote_launch(lcore_f, ¶m, SKIP_MASTER) < 0) + if (rte_eal_mp_remote_launch(lcore_f, ¶m, SKIP_INITIAL) < 0) return -1; - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); lcore_f(¶m); @@ -553,7 +553,7 @@ test_ring_perf_esize(const int esize) goto test_fail; } - printf("\n### Testing using all slave nodes ###\n"); + printf("\n### Testing using all worker nodes ###\n"); if (run_on_all_cores(r, esize) < 0) goto test_fail; diff --git a/app/test/test_ring_stress_impl.h b/app/test/test_ring_stress_impl.h index 222d62bc4f4d..fab924515fc3 100644 --- a/app/test/test_ring_stress_impl.h +++ b/app/test/test_ring_stress_impl.h @@ -6,7 +6,7 @@ /** * Stress test for ring enqueue/dequeue operations. - * Performs the following pattern on each slave worker: + * Performs the following pattern on each worker worker: * dequeue/read-write data from the dequeued objects/enqueue. * Serves as both functional and performance test of ring * enqueue/dequeue operations under high contention @@ -348,8 +348,8 @@ test_mt1(int (*test)(void *)) memset(arg, 0, sizeof(arg)); - /* launch on all slaves */ - RTE_LCORE_FOREACH_SLAVE(lc) { + /* launch on all workers */ + RTE_LCORE_FOREACH_WORKER(lc) { arg[lc].rng = r; arg[lc].stats = init_stat; rte_eal_remote_launch(test, &arg[lc], lc); @@ -365,12 +365,12 @@ test_mt1(int (*test)(void *)) wrk_cmd = WRK_CMD_STOP; rte_smp_wmb(); - /* wait for slaves and collect stats. */ + /* wait for workers and collect stats. */ mc = rte_lcore_id(); arg[mc].stats = init_stat; rc = 0; - RTE_LCORE_FOREACH_SLAVE(lc) { + RTE_LCORE_FOREACH_WORKER(lc) { rc |= rte_eal_wait_lcore(lc); lcore_stat_aggr(&arg[mc].stats, &arg[lc].stats); if (verbose != 0) diff --git a/app/test/test_rwlock.c b/app/test/test_rwlock.c index 61bee7d7c296..ea20318c4c84 100644 --- a/app/test/test_rwlock.c +++ b/app/test/test_rwlock.c @@ -99,8 +99,8 @@ load_loop_fn(__rte_unused void *arg) uint64_t lcount = 0; const unsigned int lcore = rte_lcore_id(); - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0) ; @@ -134,12 +134,12 @@ test_rwlock_perf(void) printf("\nRwlock Perf Test on %u cores...\n", rte_lcore_count()); - /* clear synchro and start slaves */ + /* clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - if (rte_eal_mp_remote_launch(load_loop_fn, NULL, SKIP_MASTER) < 0) + if (rte_eal_mp_remote_launch(load_loop_fn, NULL, SKIP_INITIAL) < 0) return -1; - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(NULL); @@ -161,7 +161,7 @@ test_rwlock_perf(void) * - There is a global rwlock and a table of rwlocks (one per lcore). * * - The test function takes all of these locks and launches the - * ``test_rwlock_per_core()`` function on each core (except the master). 
+ * ``test_rwlock_per_core()`` function on each core (except the initial). * * - The function takes the global write lock, display something, * then releases the global lock. @@ -187,21 +187,21 @@ rwlock_test1(void) rte_rwlock_write_lock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_rwlock_write_lock(&sl_tab[i]); rte_eal_remote_launch(test_rwlock_per_core, NULL, i); } rte_rwlock_write_unlock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_rwlock_write_unlock(&sl_tab[i]); rte_delay_ms(100); } rte_rwlock_write_lock(&sl); /* this message should be the last message of test */ - printf("Global write lock taken on master core %u\n", rte_lcore_id()); + printf("Global write lock taken on initial core %u\n", rte_lcore_id()); rte_rwlock_write_unlock(&sl); rte_eal_mp_wait_lcore(); @@ -462,26 +462,26 @@ try_rwlock_test_rda(void) try_test_reset(); /* start read test on all avaialble lcores */ - rte_eal_mp_remote_launch(try_read_lcore, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(try_read_lcore, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); return process_try_lcore_stats(); } -/* all slave lcores grab RDLOCK, master one grabs WRLOCK */ +/* all worker lcores grab RDLOCK, initial one grabs WRLOCK */ static int try_rwlock_test_rds_wrm(void) { try_test_reset(); - rte_eal_mp_remote_launch(try_read_lcore, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(try_read_lcore, NULL, SKIP_INITIAL); try_write_lcore(NULL); rte_eal_mp_wait_lcore(); return process_try_lcore_stats(); } -/* master and even slave lcores grab RDLOCK, odd lcores grab WRLOCK */ +/* initial and even worker lcores grab RDLOCK, odd lcores grab WRLOCK */ static int try_rwlock_test_rde_wro(void) { @@ -489,7 +489,7 @@ try_rwlock_test_rde_wro(void) try_test_reset(); - mlc = rte_get_master_lcore(); + mlc = rte_get_initial_lcore(); RTE_LCORE_FOREACH(lc) { if (lc != mlc) { diff --git a/app/test/test_service_cores.c b/app/test/test_service_cores.c index 981e212130bf..6b23363425c9 100644 --- a/app/test/test_service_cores.c +++ b/app/test/test_service_cores.c @@ -30,7 +30,7 @@ static int testsuite_setup(void) { slcore_id = rte_get_next_lcore(/* start core */ -1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); return TEST_SUCCESS; @@ -532,12 +532,12 @@ service_lcore_add_del(void) TEST_ASSERT_EQUAL(1, rte_service_lcore_count(), "Service core count not equal to one"); uint32_t slcore_1 = rte_get_next_lcore(/* start core */ -1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_1), "Service core add did not return zero"); uint32_t slcore_2 = rte_get_next_lcore(/* start core */ slcore_1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_2), "Service core add did not return zero"); @@ -583,12 +583,12 @@ service_threaded_test(int mt_safe) /* add next 2 cores */ uint32_t slcore_1 = rte_get_next_lcore(/* start core */ -1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_1), "mt safe lcore add fail"); uint32_t slcore_2 = rte_get_next_lcore(/* start core */ slcore_1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_2), "mt safe lcore add fail"); diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c index 842990ed3b30..87dc8a1f1eeb 100644 --- a/app/test/test_spinlock.c +++ b/app/test/test_spinlock.c @@ -28,7 +28,7 @@ * - There is a global spinlock 
and a table of spinlocks (one per lcore). * * - The test function takes all of these locks and launches the - * ``test_spinlock_per_core()`` function on each core (except the master). + * ``test_spinlock_per_core()`` function on each core (except the initial). * * - The function takes the global lock, display something, then releases * the global lock. @@ -109,8 +109,8 @@ load_loop_fn(void *func_param) const int use_lock = *(int*)func_param; const unsigned lcore = rte_lcore_id(); - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0); begin = rte_get_timer_cycles(); @@ -149,11 +149,11 @@ test_spinlock_perf(void) printf("\nTest with lock on %u cores...\n", rte_lcore_count()); - /* Clear synchro and start slaves */ + /* Clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_MASTER); + rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_INITIAL); - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(&lock); @@ -200,8 +200,8 @@ test_spinlock(void) int ret = 0; int i; - /* slave cores should be waiting: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be waiting: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } @@ -214,19 +214,19 @@ test_spinlock(void) rte_spinlock_lock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_spinlock_lock(&sl_tab[i]); rte_eal_remote_launch(test_spinlock_per_core, NULL, i); } - /* slave cores should be busy: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be busy: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } rte_spinlock_unlock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_spinlock_unlock(&sl_tab[i]); rte_delay_ms(10); } @@ -245,7 +245,7 @@ test_spinlock(void) } else rte_spinlock_recursive_unlock(&slr); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_spinlock_recursive_per_core, NULL, i); } rte_spinlock_recursive_unlock(&slr); @@ -253,12 +253,12 @@ test_spinlock(void) /* * Test if it could return immediately from try-locking a locked object. - * Here it will lock the spinlock object first, then launch all the slave + * Here it will lock the spinlock object first, then launch all the worker * lcores to trylock the same spinlock object. - * All the slave lcores should give up try-locking a locked object and + * All the worker lcores should give up try-locking a locked object and * return immediately, and then increase the "count" initialized with zero * by one per times. - * We can check if the "count" is finally equal to the number of all slave + * We can check if the "count" is finally equal to the number of all worker * lcores to see if the behavior of try-locking a locked spinlock object * is correct. 
*/ @@ -266,7 +266,7 @@ test_spinlock(void) return -1; } count = 0; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_spinlock_try, NULL, i); } rte_eal_mp_wait_lcore(); diff --git a/app/test/test_stack.c b/app/test/test_stack.c index c8dac1f55cdc..0ef5f47874f2 100644 --- a/app/test/test_stack.c +++ b/app/test/test_stack.c @@ -362,7 +362,7 @@ test_stack_multithreaded(uint32_t flags) rte_atomic64_init(&size); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { args[lcore_id].s = s; args[lcore_id].sz = &size; diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c index 3ab7267b1b72..1a49667a91fc 100644 --- a/app/test/test_stack_perf.c +++ b/app/test/test_stack_perf.c @@ -180,7 +180,7 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s, args[0].sz = args[1].sz = bulk_sizes[i]; args[0].s = args[1].s = s; - if (cores->c1 == rte_get_master_lcore()) { + if (cores->c1 == rte_get_initial_lcore()) { rte_eal_remote_launch(fn, &args[1], cores->c2); fn(&args[0]); rte_eal_wait_lcore(cores->c2); @@ -210,7 +210,7 @@ run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n) rte_atomic32_set(&lcore_barrier, n); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (++cnt >= n) break; @@ -235,7 +235,7 @@ run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n) avg = args[rte_lcore_id()].avg; cnt = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (++cnt >= n) break; avg += args[lcore_id].avg; diff --git a/app/test/test_ticketlock.c b/app/test/test_ticketlock.c index 66ab3d1a0248..3b4e68af24ce 100644 --- a/app/test/test_ticketlock.c +++ b/app/test/test_ticketlock.c @@ -28,7 +28,7 @@ * - There is a global ticketlock and a table of ticketlocks (one per lcore). * * - The test function takes all of these locks and launches the - * ``test_ticketlock_per_core()`` function on each core (except the master). + * ``test_ticketlock_per_core()`` function on each core (except the initial). * * - The function takes the global lock, display something, then releases * the global lock. 
@@ -110,8 +110,8 @@ load_loop_fn(void *func_param) const int use_lock = *(int *)func_param; const unsigned int lcore = rte_lcore_id(); - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0) ; @@ -154,11 +154,11 @@ test_ticketlock_perf(void) lcount = 0; printf("\nTest with lock on %u cores...\n", rte_lcore_count()); - /* Clear synchro and start slaves */ + /* Clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_MASTER); + rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_INITIAL); - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(&lock); @@ -208,8 +208,8 @@ test_ticketlock(void) int ret = 0; int i; - /* slave cores should be waiting: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be waiting: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } @@ -217,25 +217,25 @@ test_ticketlock(void) rte_ticketlock_init(&tl); rte_ticketlock_init(&tl_try); rte_ticketlock_recursive_init(&tlr); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_ticketlock_init(&tl_tab[i]); } rte_ticketlock_lock(&tl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_ticketlock_lock(&tl_tab[i]); rte_eal_remote_launch(test_ticketlock_per_core, NULL, i); } - /* slave cores should be busy: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be busy: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } rte_ticketlock_unlock(&tl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_ticketlock_unlock(&tl_tab[i]); rte_delay_ms(10); } @@ -254,7 +254,7 @@ test_ticketlock(void) } else rte_ticketlock_recursive_unlock(&tlr); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_ticketlock_recursive_per_core, NULL, i); } @@ -264,19 +264,19 @@ test_ticketlock(void) /* * Test if it could return immediately from try-locking a locked object. * Here it will lock the ticketlock object first, then launch all the - * slave lcores to trylock the same ticketlock object. - * All the slave lcores should give up try-locking a locked object and + * worker lcores to trylock the same ticketlock object. + * All the worker lcores should give up try-locking a locked object and * return immediately, and then increase the "count" initialized with * zero by one per times. * We can check if the "count" is finally equal to the number of all - * slave lcores to see if the behavior of try-locking a locked + * worker lcores to see if the behavior of try-locking a locked * ticketlock object is correct. */ if (rte_ticketlock_trylock(&tl_try) == 0) return -1; count = 0; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_ticketlock_try, NULL, i); } rte_eal_mp_wait_lcore(); diff --git a/app/test/test_timer.c b/app/test/test_timer.c index 5933f56ed544..37a944e32bcd 100644 --- a/app/test/test_timer.c +++ b/app/test/test_timer.c @@ -37,7 +37,7 @@ * - All cores then simultaneously are set to schedule all the timers at * the same time, so conflicts should occur. 
* - Then there is a delay while we wait for the timers to expire - * - Then the master lcore calls timer_manage() and we check that all + * - Then the initial lcore calls timer_manage() and we check that all * timers have had their callbacks called exactly once - no more no less. * - Then we repeat the process, except after setting up the timers, we have * all cores randomly reschedule them. @@ -58,7 +58,7 @@ * * - timer0 * - * - At initialization, timer0 is loaded by the master core, on master core + * - At initialization, timer0 is loaded by the initial core, on initial lcore core * in "single" mode (time = 1 second). * - In the first 19 callbacks, timer0 is reloaded on the same core, * then, it is explicitly stopped at the 20th call. @@ -66,21 +66,21 @@ * * - timer1 * - * - At initialization, timer1 is loaded by the master core, on the - * master core in "single" mode (time = 2 seconds). + * - At initialization, timer1 is loaded by the initial core, on the + * initial core in "single" mode (time = 2 seconds). * - In the first 9 callbacks, timer1 is reloaded on another * core. After the 10th callback, timer1 is not reloaded anymore. * * - timer2 * - * - At initialization, timer2 is loaded by the master core, on the - * master core in "periodical" mode (time = 1 second). + * - At initialization, timer2 is loaded by the initial core, on the + * initial core in "periodical" mode (time = 1 second). * - In the callback, when t=25s, it stops timer3 and reloads timer0 * on the current core. * * - timer3 * - * - At initialization, timer3 is loaded by the master core, on + * - At initialization, timer3 is loaded by the initial core, on * another core in "periodical" mode (time = 1 second). * - It is stopped at t=25s by timer2. */ @@ -201,68 +201,68 @@ timer_stress_main_loop(__rte_unused void *arg) return 0; } -/* Need to synchronize slave lcores through multiple steps. */ -enum { SLAVE_WAITING = 1, SLAVE_RUN_SIGNAL, SLAVE_RUNNING, SLAVE_FINISHED }; -static rte_atomic16_t slave_state[RTE_MAX_LCORE]; +/* Need to synchronize worker lcores through multiple steps. 
*/ +enum { WORKER_WAITING = 1, WORKER_RUN_SIGNAL, WORKER_RUNNING, WORKER_FINISHED }; +static rte_atomic16_t worker_state[RTE_MAX_LCORE]; static void -master_init_slaves(void) +init_workers(void) { unsigned i; - RTE_LCORE_FOREACH_SLAVE(i) { - rte_atomic16_set(&slave_state[i], SLAVE_WAITING); + RTE_LCORE_FOREACH_WORKER(i) { + rte_atomic16_set(&worker_state[i], WORKER_WAITING); } } static void -master_start_slaves(void) +start_workers(void) { unsigned i; - RTE_LCORE_FOREACH_SLAVE(i) { - rte_atomic16_set(&slave_state[i], SLAVE_RUN_SIGNAL); + RTE_LCORE_FOREACH_WORKER(i) { + rte_atomic16_set(&worker_state[i], WORKER_RUN_SIGNAL); } - RTE_LCORE_FOREACH_SLAVE(i) { - while (rte_atomic16_read(&slave_state[i]) != SLAVE_RUNNING) + RTE_LCORE_FOREACH_WORKER(i) { + while (rte_atomic16_read(&worker_state[i]) != WORKER_RUNNING) rte_pause(); } } static void -master_wait_for_slaves(void) +wait_for_workers(void) { unsigned i; - RTE_LCORE_FOREACH_SLAVE(i) { - while (rte_atomic16_read(&slave_state[i]) != SLAVE_FINISHED) + RTE_LCORE_FOREACH_WORKER(i) { + while (rte_atomic16_read(&worker_state[i]) != WORKER_FINISHED) rte_pause(); } } static void -slave_wait_to_start(void) +worker_wait_to_start(void) { unsigned lcore_id = rte_lcore_id(); - while (rte_atomic16_read(&slave_state[lcore_id]) != SLAVE_RUN_SIGNAL) + while (rte_atomic16_read(&worker_state[lcore_id]) != WORKER_RUN_SIGNAL) rte_pause(); - rte_atomic16_set(&slave_state[lcore_id], SLAVE_RUNNING); + rte_atomic16_set(&worker_state[lcore_id], WORKER_RUNNING); } static void -slave_finish(void) +worker_finish(void) { unsigned lcore_id = rte_lcore_id(); - rte_atomic16_set(&slave_state[lcore_id], SLAVE_FINISHED); + rte_atomic16_set(&worker_state[lcore_id], WORKER_FINISHED); } static volatile int cb_count = 0; /* callback for second stress test. 
will only be called - * on master lcore */ + * on initial lcore */ static void timer_stress2_cb(struct rte_timer *tim __rte_unused, void *arg __rte_unused) { @@ -278,35 +278,35 @@ timer_stress2_main_loop(__rte_unused void *arg) int i, ret; uint64_t delay = rte_get_timer_hz() / 20; unsigned lcore_id = rte_lcore_id(); - unsigned master = rte_get_master_lcore(); + unsigned initial = rte_get_initial_lcore(); int32_t my_collisions = 0; static rte_atomic32_t collisions; - if (lcore_id == master) { + if (lcore_id == initial) { cb_count = 0; test_failed = 0; rte_atomic32_set(&collisions, 0); - master_init_slaves(); + init_workers(); timers = rte_malloc(NULL, sizeof(*timers) * NB_STRESS2_TIMERS, 0); if (timers == NULL) { printf("Test Failed\n"); printf("- Cannot allocate memory for timers\n" ); test_failed = 1; - master_start_slaves(); + start_workers(); goto cleanup; } for (i = 0; i < NB_STRESS2_TIMERS; i++) rte_timer_init(&timers[i]); - master_start_slaves(); + start_workers(); } else { - slave_wait_to_start(); + worker_wait_to_start(); if (test_failed) goto cleanup; } - /* have all cores schedule all timers on master lcore */ + /* have all cores schedule all timers on initial lcore */ for (i = 0; i < NB_STRESS2_TIMERS; i++) { - ret = rte_timer_reset(&timers[i], delay, SINGLE, master, + ret = rte_timer_reset(&timers[i], delay, SINGLE, initial, timer_stress2_cb, NULL); /* there will be collisions when multiple cores simultaneously * configure the same timers */ @@ -320,14 +320,14 @@ timer_stress2_main_loop(__rte_unused void *arg) rte_delay_ms(100); /* all cores rendezvous */ - if (lcore_id == master) { - master_wait_for_slaves(); + if (lcore_id == initial) { + wait_for_workers(); } else { - slave_finish(); + worker_finish(); } /* now check that we get the right number of callbacks */ - if (lcore_id == master) { + if (lcore_id == initial) { my_collisions = rte_atomic32_read(&collisions); if (my_collisions != 0) printf("- %d timer reset collisions (OK)\n", my_collisions); @@ -338,23 +338,23 @@ timer_stress2_main_loop(__rte_unused void *arg) printf("- Expected %d callbacks, got %d\n", NB_STRESS2_TIMERS, cb_count); test_failed = 1; - master_start_slaves(); + start_workers(); goto cleanup; } cb_count = 0; /* proceed */ - master_start_slaves(); + start_workers(); } else { /* proceed */ - slave_wait_to_start(); + worker_wait_to_start(); if (test_failed) goto cleanup; } /* now test again, just stop and restart timers at random after init*/ for (i = 0; i < NB_STRESS2_TIMERS; i++) - rte_timer_reset(&timers[i], delay, SINGLE, master, + rte_timer_reset(&timers[i], delay, SINGLE, initial, timer_stress2_cb, NULL); /* pick random timer to reset, stopping them first half the time */ @@ -362,7 +362,7 @@ timer_stress2_main_loop(__rte_unused void *arg) int r = rand() % NB_STRESS2_TIMERS; if (i % 2) rte_timer_stop(&timers[r]); - rte_timer_reset(&timers[r], delay, SINGLE, master, + rte_timer_reset(&timers[r], delay, SINGLE, initial, timer_stress2_cb, NULL); } @@ -370,8 +370,8 @@ timer_stress2_main_loop(__rte_unused void *arg) rte_delay_ms(100); /* now check that we get the right number of callbacks */ - if (lcore_id == master) { - master_wait_for_slaves(); + if (lcore_id == initial) { + wait_for_workers(); rte_timer_manage(); if (cb_count != NB_STRESS2_TIMERS) { @@ -386,14 +386,14 @@ timer_stress2_main_loop(__rte_unused void *arg) } cleanup: - if (lcore_id == master) { - master_wait_for_slaves(); + if (lcore_id == initial) { + wait_for_workers(); if (timers != NULL) { rte_free(timers); timers = NULL; } } else { - 
slave_finish(); + worker_finish(); } return 0; @@ -465,7 +465,7 @@ timer_basic_main_loop(__rte_unused void *arg) int64_t diff = 0; /* launch all timers on core 0 */ - if (lcore_id == rte_get_master_lcore()) { + if (lcore_id == rte_get_initial_lcore()) { mytimer_reset(&mytiminfo[0], hz/4, SINGLE, lcore_id, timer_basic_cb); mytimer_reset(&mytiminfo[1], hz/2, SINGLE, lcore_id, @@ -563,7 +563,7 @@ test_timer(void) /* start other cores */ printf("Start timer stress tests\n"); - rte_eal_mp_remote_launch(timer_stress_main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(timer_stress_main_loop, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); /* stop timer 0 used for stress test */ @@ -572,7 +572,7 @@ test_timer(void) /* run a second, slightly different set of stress tests */ printf("\nStart timer stress tests 2\n"); test_failed = 0; - rte_eal_mp_remote_launch(timer_stress2_main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(timer_stress2_main_loop, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); if (test_failed) return TEST_FAILED; @@ -584,7 +584,7 @@ test_timer(void) /* start other cores */ printf("\nStart timer basic tests\n"); - rte_eal_mp_remote_launch(timer_basic_main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(timer_basic_main_loop, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); /* stop all timers */ diff --git a/app/test/test_timer_racecond.c b/app/test/test_timer_racecond.c index 4fc917995415..5b8941950c30 100644 --- a/app/test/test_timer_racecond.c +++ b/app/test/test_timer_racecond.c @@ -56,8 +56,8 @@ static struct rte_timer timer[N_TIMERS]; static unsigned timer_lcore_id[N_TIMERS]; -static unsigned master; -static volatile unsigned stop_slaves; +static unsigned int initial_lcore; +static volatile unsigned stop_workers; static int reload_timer(struct rte_timer *tim); @@ -95,7 +95,8 @@ reload_timer(struct rte_timer *tim) (tim - timer); int ret; - ret = rte_timer_reset(tim, ticks, PERIODICAL, master, timer_cb, NULL); + ret = rte_timer_reset(tim, ticks, PERIODICAL, + initial_lcore, timer_cb, NULL); if (ret != 0) { rte_log(RTE_LOG_DEBUG, timer_logtype_test, "- core %u failed to reset timer %" PRIuPTR " (OK)\n", @@ -106,7 +107,7 @@ reload_timer(struct rte_timer *tim) } static int -slave_main_loop(__rte_unused void *arg) +worker_main_loop(__rte_unused void *arg) { unsigned lcore_id = rte_lcore_id(); unsigned i; @@ -115,7 +116,7 @@ slave_main_loop(__rte_unused void *arg) printf("Starting main loop on core %u\n", lcore_id); - while (!stop_slaves) { + while (!stop_workers) { /* Wait until the timer manager is running. * We know it's running when we see timer[0] NOT pending. */ @@ -152,7 +153,7 @@ test_timer_racecond(void) unsigned lcore_id; unsigned i; - master = lcore_id = rte_lcore_id(); + initial_lcore = lcore_id = rte_lcore_id(); hz = rte_get_timer_hz(); /* init and start timers */ @@ -161,8 +162,8 @@ test_timer_racecond(void) ret = reload_timer(&timer[i]); TEST_ASSERT(ret == 0, "reload_timer failed"); - /* Distribute timers to slaves. - * Note that we assign timer[0] to the master. + /* Distribute timers to workers. + * Note that we assign timer[0] to the inital lcore. 
*/ timer_lcore_id[i] = lcore_id; lcore_id = rte_get_next_lcore(lcore_id, 1, 1); @@ -172,11 +173,11 @@ test_timer_racecond(void) cur_time = rte_get_timer_cycles(); end_time = cur_time + (hz * TEST_DURATION_S); - /* start slave cores */ - stop_slaves = 0; + /* start worker cores */ + stop_workers = 0; printf("Start timer manage race condition test (%u seconds)\n", TEST_DURATION_S); - rte_eal_mp_remote_launch(slave_main_loop, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(worker_main_loop, NULL, SKIP_INITIAL); while (diff >= 0) { /* run the timers */ @@ -189,9 +190,9 @@ test_timer_racecond(void) diff = end_time - cur_time; } - /* stop slave cores */ + /* stop worker cores */ printf("Stopping timer manage race condition test\n"); - stop_slaves = 1; + stop_workers = 1; rte_eal_mp_wait_lcore(); /* stop timers */ diff --git a/app/test/test_timer_secondary.c b/app/test/test_timer_secondary.c index 7a3bc873b359..86f187280120 100644 --- a/app/test/test_timer_secondary.c +++ b/app/test/test_timer_secondary.c @@ -141,7 +141,7 @@ test_timer_secondary(void) unsigned int *mgr_lcorep = &test_info->mgr_lcore; unsigned int *sec_lcorep = &test_info->sec_lcore; - *mstr_lcorep = rte_get_master_lcore(); + *mstr_lcorep = rte_get_initial_lcore(); *mgr_lcorep = rte_get_next_lcore(*mstr_lcorep, 1, 1); *sec_lcorep = rte_get_next_lcore(*mgr_lcorep, 1, 1); diff --git a/app/test/test_trace_perf.c b/app/test/test_trace_perf.c index 50c7381b77e7..e1ad8e6f555c 100644 --- a/app/test/test_trace_perf.c +++ b/app/test/test_trace_perf.c @@ -132,7 +132,7 @@ run_test(const char *str, lcore_function_t f, struct test_data *data, size_t sz) memset(data, 0, sz); data->nb_workers = rte_lcore_count() - 1; - RTE_LCORE_FOREACH_SLAVE(id) + RTE_LCORE_FOREACH_WORKER(id) rte_eal_remote_launch(f, &data->ldata[worker++], id); wait_till_workers_are_ready(data); @@ -140,7 +140,7 @@ run_test(const char *str, lcore_function_t f, struct test_data *data, size_t sz) measure_perf(str, data); signal_workers_to_finish(data); - RTE_LCORE_FOREACH_SLAVE(id) + RTE_LCORE_FOREACH_WORKER(id) rte_eal_wait_lcore(id); } From patchwork Wed Jul 1 19:46:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72657 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1BF09A0350; Wed, 1 Jul 2020 21:50:00 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4E61B1D560; Wed, 1 Jul 2020 21:47:30 +0200 (CEST) Received: from mail-pg1-f196.google.com (mail-pg1-f196.google.com [209.85.215.196]) by dpdk.org (Postfix) with ESMTP id 26BA01D546 for ; Wed, 1 Jul 2020 21:47:29 +0200 (CEST) Received: by mail-pg1-f196.google.com with SMTP id g67so11324598pgc.8 for ; Wed, 01 Jul 2020 12:47:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=JC/yXr/kjpdZ89wjQ525wv9Hf2qxUo78XSVSDD6dE3g=; b=kVxqipudSbewlEf9xcOEBCHqKZ5/iku63vDhTDVbeSWqqFnosfEkSQaAGvWaWzk+Ni K3BRQl1ohDNOixs6wO4AI7P535Ud4Og5AcosNrpzcAkhYZh7DycLEzymzbKv/9bbVcCU uwV28XZ9Zyq47Sq2gJh890d4rUAulExDtFXeCFuW3FIsqnMXtB5YiknHqln5kTu8rwAP nWPFdLhaaJoN+QX8YLP2y9Hv3qrt87bnp57Pg/Voh/Kcy96FPsavbyzSNrtZirT7i7at T+2Ks5hX2pjLd4oKla6h3hlG2e2GzRLmx613mSiNL/3Ix6WbLQfYUoFBSpqa5K9W6dTT 
P1QA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=JC/yXr/kjpdZ89wjQ525wv9Hf2qxUo78XSVSDD6dE3g=; b=iGIPJnHv+7+lQFyeiJY/CdldZIfJzz/cdkRUF6dFLRcUDAzKiujGBjuZoWD7uAEgIC GJFMMj5FNb1yIHv74Hn0VfaMGLYQGvPI9lGawoKI+zoodO+7WVpCGYi+guvswMn1+nW4 OqnW+28+oDQNEn8dnOQglPzTtWTekLkyk0eQVKapDvUW3Fhjf7VE+oa1cGpfuTR315PI f+gh7cpitxFiLSoq1e5WNTuTCcO44GMR9OQRbVXFbhwmlIiq+/TfIMZDDqn6IJYQ9CP9 Al/mvhgrzf85oMAhBmfO/J9Ki/320UyCK7zC93ANaIWZ/Doz/dAo92x8pmiE57/9dmIG +YWA== X-Gm-Message-State: AOAM531cqoqt0czo/7EiVpAmI/ePgSi6d+CkA2wM21vEVhloyqjHZcw0 7x7gGpbvmWmUkxWUJeEgCmM9U3IcUeM= X-Google-Smtp-Source: ABdhPJyQjWBJJTO9E2dk/kUyuXlkPjJrm01TIB4fdGU/tDcdy3Ke3c9tgTyF8eQ8xcUc3t1Rg01tgg== X-Received: by 2002:a62:cf42:: with SMTP id b63mr19185806pfg.322.1593632847938; Wed, 01 Jul 2020 12:47:27 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.26 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:27 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger , bernard.iremonger@intel.com Date: Wed, 1 Jul 2020 12:46:44 -0700 Message-Id: <20200701194650.10705-22-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 21/27] doc: fix incorrect reference to master process X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Correct terminolgy here is primary process. This is a bug in original doc. Fixes: fc1f2750a3ec ("doc: programmers guide") Cc: bernard.iremonger@intel.com Signed-off-by: Stephen Hemminger --- doc/guides/prog_guide/thread_safety_dpdk_functions.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/doc/guides/prog_guide/thread_safety_dpdk_functions.rst b/doc/guides/prog_guide/thread_safety_dpdk_functions.rst index 0f539db2b869..5618e25e47fb 100644 --- a/doc/guides/prog_guide/thread_safety_dpdk_functions.rst +++ b/doc/guides/prog_guide/thread_safety_dpdk_functions.rst @@ -61,8 +61,8 @@ rather than subsequently in the forwarding threads. However, the DPDK performs checks to ensure that libraries are only initialized once. If initialization is attempted more than once, an error is returned. -In the multi-process case, the configuration information of shared memory will only be initialized by the master process. -Thereafter, both master and secondary processes can allocate/release any objects of memory that finally rely on rte_malloc or memzones. +In the multi-process case, the configuration information of shared memory will only be initialized by the primary process. +Thereafter, both primary and secondary processes can allocate/release any objects of memory that finally rely on rte_malloc or memzones. 
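As an illustration of the primary/secondary split described above, a hedged sketch (not taken from DPDK sources; the memzone name and size are invented) of how shared state is typically created once by the primary process and merely looked up by secondaries:

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_memzone.h>

    /* The primary process creates the shared object; secondaries attach by name. */
    static const struct rte_memzone *
    get_shared_state(void)
    {
        const char *name = "example_state";    /* hypothetical memzone name */

        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
            return rte_memzone_reserve(name, 4096, rte_socket_id(), 0);

        /* secondary processes must not re-initialize shared configuration */
        return rte_memzone_lookup(name);
    }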
Interrupt Thread ---------------- From patchwork Wed Jul 1 19:46:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72659 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2209BA0350; Wed, 1 Jul 2020 21:50:19 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0CE0C1D575; Wed, 1 Jul 2020 21:47:35 +0200 (CEST) Received: from mail-pf1-f179.google.com (mail-pf1-f179.google.com [209.85.210.179]) by dpdk.org (Postfix) with ESMTP id 4C3281C23C for ; Wed, 1 Jul 2020 21:47:31 +0200 (CEST) Received: by mail-pf1-f179.google.com with SMTP id u5so11443966pfn.7 for ; Wed, 01 Jul 2020 12:47:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=QRqYMcKlSodjgcj4eyy9P4kRm/5MIEI3aaZRHYT/ldw=; b=vp5uIOhEMgWmdKrl5cqDLtWQaSu6FsAWHUTri0uCb9vAYgRUYQoUP11g0hY1jK9L9f 1C0vLXJ93fOC7+mcKHX6/b3qvoE2j/UfEhzs7tBVx97cwUm7/pcZNsDGrV1GSLpnISan +RKEyodAxbK4tY3aNcD1yIvdANkxaCe6jDEKuQ0oq+bnttuwAiVUwWZrPu/ijEqw9arF zJSKtmCE6wPnXRMaU9IwhL/WVx5vQLPz/Kbz8CVdMHbs3N/13O+ZO0lfRa+f/al+inPI MfPtC05+QLaZ5qYcP7jj4cJ9orNk/Ch68FGbWUlgtAI1pZVZIZeYMGCZDaiImNpoMzcF G/Ag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=QRqYMcKlSodjgcj4eyy9P4kRm/5MIEI3aaZRHYT/ldw=; b=ifvPlMD3g/2llxz1hmatNItnGHXxJO5ed4EQ13AGQBQ3KbWE4Lrc4i5mTIfJdPtCgV rVSS4aKefUxQC8ga5hMr84zTkiLudz84i0IExaX7+3g/U1ad/qf1X7lePgOx4GaJrzew Sdj6EBeLCxZeU1fLpZ1/Xt9GCqEjftF6F7pHT8VJsjdub+Cil39SvV5IIigBoaFJUX4Z GNQibfVAp28U9kp5euu9W0wGCYws4GcscLQn20kM9JuaeOjHFoSw7giE+yvU9eo8+Zv1 /Cs4awUDPeerg3pNI6iVtb5L6r3GMxGSGYEY82Actib7OhjaYuFOEs4vlxdIM+LfJvSF 2a9A== X-Gm-Message-State: AOAM532Y17tzOHtVMZH6WC/R3XMEdrmOsAInbPrAjOZ7ePOCgbVJgliX XPjDqSvDo4yd3f2cNQnfICy92nmgYSk= X-Google-Smtp-Source: ABdhPJyCGkXSxGioDIDyMt4FD8GT0dOlMVmmAQFg9K9pXAPLvALivym41Vc+ABh0ievE1eS/ddjiYg== X-Received: by 2002:a62:1b4a:: with SMTP id b71mr15869000pfb.9.1593632849170; Wed, 01 Jul 2020 12:47:29 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:28 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:45 -0700 Message-Id: <20200701194650.10705-23-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 22/27] doc: update references to master/slave lcore in documentation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" New terms are initial and worker lcores. 
Signed-off-by: Stephen Hemminger --- doc/guides/contributing/coding_style.rst | 2 +- doc/guides/faq/faq.rst | 6 +++--- doc/guides/howto/debug_troubleshoot.rst | 2 +- doc/guides/linux_gsg/eal_args.include.rst | 4 ++-- doc/guides/nics/bnxt.rst | 2 +- doc/guides/nics/fail_safe.rst | 3 --- doc/guides/prog_guide/env_abstraction_layer.rst | 6 +++--- doc/guides/prog_guide/event_ethernet_rx_adapter.rst | 2 +- doc/guides/prog_guide/glossary.rst | 8 ++++---- doc/guides/rel_notes/release_20_08.rst | 7 ++++++- doc/guides/sample_app_ug/bbdev_app.rst | 2 +- doc/guides/sample_app_ug/ethtool.rst | 4 ++-- doc/guides/sample_app_ug/hello_world.rst | 8 ++++---- doc/guides/sample_app_ug/ioat.rst | 12 ++++++------ doc/guides/sample_app_ug/ip_pipeline.rst | 4 ++-- doc/guides/sample_app_ug/keep_alive.rst | 2 +- doc/guides/sample_app_ug/l2_forward_event.rst | 4 ++-- .../sample_app_ug/l2_forward_real_virtual.rst | 4 ++-- doc/guides/sample_app_ug/l3_forward_graph.rst | 6 +++--- doc/guides/sample_app_ug/l3_forward_power_man.rst | 2 +- doc/guides/sample_app_ug/link_status_intr.rst | 4 ++-- doc/guides/sample_app_ug/multi_process.rst | 6 +++--- doc/guides/sample_app_ug/packet_ordering.rst | 8 ++++---- doc/guides/sample_app_ug/performance_thread.rst | 6 +++--- doc/guides/sample_app_ug/qos_scheduler.rst | 4 ++-- doc/guides/sample_app_ug/timer.rst | 13 +++++++------ doc/guides/testpmd_app_ug/run_app.rst | 2 +- doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 +- 28 files changed, 69 insertions(+), 66 deletions(-) diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst index 4efde93f6af0..321d54438f7d 100644 --- a/doc/guides/contributing/coding_style.rst +++ b/doc/guides/contributing/coding_style.rst @@ -334,7 +334,7 @@ For example: typedef int (lcore_function_t)(void *); /* launch a function of lcore_function_t type */ - int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id); + int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned id); C Indentation diff --git a/doc/guides/faq/faq.rst b/doc/guides/faq/faq.rst index f19c1389b6af..cb5f35923d64 100644 --- a/doc/guides/faq/faq.rst +++ b/doc/guides/faq/faq.rst @@ -42,13 +42,13 @@ I am running a 32-bit DPDK application on a NUMA system, and sometimes the appli If your system has a lot (>1 GB size) of hugepage memory, not all of it will be allocated. Due to hugepages typically being allocated on a local NUMA node, the hugepages allocation the application gets during the initialization depends on which NUMA node it is running on (the EAL does not affinitize cores until much later in the initialization process). -Sometimes, the Linux OS runs the DPDK application on a core that is located on a different NUMA node from DPDK master core and +Sometimes, the Linux OS runs the DPDK application on a core that is located on a different NUMA node from DPDK initial core and therefore all the hugepages are allocated on the wrong socket. To avoid this scenario, either lower the amount of hugepage memory available to 1 GB size (or less), or run the application with taskset -affinitizing the application to a would-be master core. +affinitizing the application to a would-be initial core. 
-For example, if your EAL coremask is 0xff0, the master core will usually be the first core in the coremask (0x10); this is what you have to supply to taskset:: +For example, if your EAL coremask is 0xff0, the initial core will usually be the first core in the coremask (0x10); this is what you have to supply to taskset:: taskset 0x10 ./l2fwd -l 4-11 -n 2 diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst index cef016b2fef4..fdeaabe62206 100644 --- a/doc/guides/howto/debug_troubleshoot.rst +++ b/doc/guides/howto/debug_troubleshoot.rst @@ -311,7 +311,7 @@ Custom worker function :numref:`dtg_distributor_worker`. SERVICE. Check performance functions are mapped to run on the cores. * For high-performance execution logic ensure running it on correct NUMA - and non-master core. + and worker core. * Analyze run logic with ``rte_dump_stack``, ``rte_dump_registers`` and ``rte_memdump`` for more insights. diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst index 0fe44579689b..ca7508fb423e 100644 --- a/doc/guides/linux_gsg/eal_args.include.rst +++ b/doc/guides/linux_gsg/eal_args.include.rst @@ -33,9 +33,9 @@ Lcore-related options At a given instance only one core option ``--lcores``, ``-l`` or ``-c`` can be used. -* ``--master-lcore `` +* ``--initial-lcore `` - Core ID that is used as master. + Core ID that is used as initial lcore. * ``-s `` diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst index a53cdad21d34..6a7314a91627 100644 --- a/doc/guides/nics/bnxt.rst +++ b/doc/guides/nics/bnxt.rst @@ -385,7 +385,7 @@ The application enables multiple TX and RX queues when it is started. .. code-block:: console -   testpmd -l 1,3,5 --master-lcore 1 --txq=2 –rxq=2 --nb-cores=2 +   testpmd -l 1,3,5 --initial-lcore 1 --txq=2 –rxq=2 --nb-cores=2 **TSS** diff --git a/doc/guides/nics/fail_safe.rst b/doc/guides/nics/fail_safe.rst index b4a92f663b17..3b15d6f0743d 100644 --- a/doc/guides/nics/fail_safe.rst +++ b/doc/guides/nics/fail_safe.rst @@ -236,9 +236,6 @@ Upkeep round (brought down or up accordingly). Additionally, any sub-device marked for removal is cleaned-up. -Slave - In the context of the fail-safe PMD, synonymous to sub-device. - Sub-device A device being utilized by the fail-safe PMD. This is another PMD running underneath the fail-safe PMD. diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst index 48a2fec066db..463245463c52 100644 --- a/doc/guides/prog_guide/env_abstraction_layer.rst +++ b/doc/guides/prog_guide/env_abstraction_layer.rst @@ -64,7 +64,7 @@ It consist of calls to the pthread library (more specifically, pthread_self(), p .. note:: Initialization of objects, such as memory zones, rings, memory pools, lpm tables and hash tables, - should be done as part of the overall application initialization on the master lcore. + should be done as part of the overall application initialization on the initial lcore. The creation and initialization functions for these objects are not multi-thread safe. However, once initialized, the objects themselves can safely be used in multiple threads simultaneously. @@ -186,7 +186,7 @@ very dependent on the memory allocation patterns of the application. Additional restrictions are present when running in 32-bit mode. 
In dynamic memory mode, by default maximum of 2 gigabytes of VA space will be preallocated, -and all of it will be on master lcore NUMA node unless ``--socket-mem`` flag is +and all of it will be on initial lcore NUMA node unless ``--socket-mem`` flag is used. In legacy mode, VA space will only be preallocated for segments that were @@ -603,7 +603,7 @@ controlled with tools like taskset (Linux) or cpuset (FreeBSD), - with affinity restricted to 2-4, the Control Threads will end up on CPU 4. - with affinity restricted to 2-3, the Control Threads will end up on - CPU 2 (master lcore, which is the default when no CPU is available). + CPU 2 (initial lcore, which is the default when no CPU is available). .. _known_issue_label: diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst index c7dda92215ea..5d015fa2d678 100644 --- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst +++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst @@ -172,7 +172,7 @@ converts the received packets to events in the same manner as packets received on a polled Rx queue. The interrupt thread is affinitized to the same CPUs as the lcores of the Rx adapter service function, if the Rx adapter service function has not been mapped to any lcores, the interrupt thread -is mapped to the master lcore. +is mapped to the initial lcore. Rx Callback for SW Rx Adapter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/prog_guide/glossary.rst b/doc/guides/prog_guide/glossary.rst index 21063a414729..3716efd13da2 100644 --- a/doc/guides/prog_guide/glossary.rst +++ b/doc/guides/prog_guide/glossary.rst @@ -124,9 +124,9 @@ LAN LPM Longest Prefix Match -master lcore +initial lcore The execution unit that executes the main() function and that launches - other lcores. + other lcores. Described in older versions as master lcore. mbuf An mbuf is a data structure used internally to carry messages (mainly @@ -184,8 +184,8 @@ RTE Rx Reception -Slave lcore - Any *lcore* that is not the *master lcore*. +Worker lcore + Any *lcore* that is not the *initial lcore*. Socket A physical CPU, that includes several *cores*. diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index 5cbc4ce14446..ecbceb0d05e3 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -107,6 +107,9 @@ New Features * Dump ``rte_flow`` memory consumption. * Measure packet per second forwarding. +* **Renamed master lcore to initial lcore.** + + The name given to the first thread in DPDK is changed from master lcore to initial lcore. Removed Items ------------- @@ -122,7 +125,6 @@ Removed Items * Removed ``RTE_KDRV_NONE`` based PCI device driver probing. - API Changes ----------- @@ -143,6 +145,9 @@ API Changes * vhost: The API of ``rte_vhost_host_notifier_ctrl`` was changed to be per queue and not per device, a qid parameter was added to the arguments list. +* ``rte_get_master_lcore`` was renamed to ``rte_get_initial_lcore`` + The old function is deprecated and will be removed in future release. + ABI Changes ----------- diff --git a/doc/guides/sample_app_ug/bbdev_app.rst b/doc/guides/sample_app_ug/bbdev_app.rst index 405e706a46e4..5917d52ca199 100644 --- a/doc/guides/sample_app_ug/bbdev_app.rst +++ b/doc/guides/sample_app_ug/bbdev_app.rst @@ -94,7 +94,7 @@ device gets linked to a corresponding ethernet port as whitelisted by the parameter -w. 
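To make the API change recorded in the release notes above concrete, migrating an application is a one-line substitution; the helper below is purely illustrative and assumes the rte_get_initial_lcore() name proposed by this series:

    #include <rte_lcore.h>

    static unsigned int
    control_lcore(void)
    {
        /* previously: return rte_get_master_lcore();  (now deprecated) */
        return rte_get_initial_lcore();
    }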
3 cores are allocated to the application, and assigned as: - - core 3 is the master and used to print the stats live on screen, + - core 3 is the initial and used to print the stats live on screen, - core 4 is the encoding lcore performing Rx and Turbo Encode operations diff --git a/doc/guides/sample_app_ug/ethtool.rst b/doc/guides/sample_app_ug/ethtool.rst index 8f7fc6ca66c0..a4b92255c266 100644 --- a/doc/guides/sample_app_ug/ethtool.rst +++ b/doc/guides/sample_app_ug/ethtool.rst @@ -64,8 +64,8 @@ Explanation ----------- The sample program has two parts: A background `packet reflector`_ -that runs on a slave core, and a foreground `Ethtool Shell`_ that -runs on the master core. These are described below. +that runs on a worker core, and a foreground `Ethtool Shell`_ that +runs on the initial core. These are described below. Packet Reflector ~~~~~~~~~~~~~~~~ diff --git a/doc/guides/sample_app_ug/hello_world.rst b/doc/guides/sample_app_ug/hello_world.rst index 46f997a7dce3..f6740b10e385 100644 --- a/doc/guides/sample_app_ug/hello_world.rst +++ b/doc/guides/sample_app_ug/hello_world.rst @@ -75,13 +75,13 @@ The code that launches the function on each lcore is as follows: .. code-block:: c - /* call lcore_hello() on every slave lcore */ + /* call lcore_hello() on every worker lcore */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(lcore_hello, NULL, lcore_id); } - /* call it on master lcore too */ + /* call it on initial lcore too */ lcore_hello(NULL); @@ -89,6 +89,6 @@ The following code is equivalent and simpler: .. code-block:: c - rte_eal_mp_remote_launch(lcore_hello, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(lcore_hello, NULL, CALL_INITIAL); Refer to the *DPDK API Reference* for detailed information on the rte_eal_mp_remote_launch() function. diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst index bab7654b8d4d..c75b91bfa989 100644 --- a/doc/guides/sample_app_ug/ioat.rst +++ b/doc/guides/sample_app_ug/ioat.rst @@ -69,13 +69,13 @@ provided parameters. The app can use up to 2 lcores: one of them receives incoming traffic and makes a copy of each packet. The second lcore then updates MAC address and sends the copy. If one lcore per port is used, both operations are done sequentially. For each configuration an additional -lcore is needed since the master lcore does not handle traffic but is +lcore is needed since the initial lcore does not handle traffic but is responsible for configuration, statistics printing and safe shutdown of all ports and devices. The application can use a maximum of 8 ports. -To run the application in a Linux environment with 3 lcores (the master lcore, +To run the application in a Linux environment with 3 lcores (the initial lcore, plus two forwarding cores), a single port (port 0), software copying and MAC updating issue the command: @@ -83,7 +83,7 @@ updating issue the command: $ ./build/ioatfwd -l 0-2 -n 2 -- -p 0x1 --mac-updating -c sw -To run the application in a Linux environment with 2 lcores (the master lcore, +To run the application in a Linux environment with 2 lcores (the initial lcore, plus one forwarding core), 2 ports (ports 0 and 1), hardware copying and no MAC updating issue the command: @@ -208,7 +208,7 @@ After that each port application assigns resources needed. 
cfg.nb_lcores = rte_lcore_count() - 1; if (cfg.nb_lcores < 1) rte_exit(EXIT_FAILURE, - "There should be at least one slave lcore.\n"); + "There should be at least one worker lcore.\n"); ret = 0; @@ -310,8 +310,8 @@ If initialization is successful, memory for hardware device statistics is allocated. Finally ``main()`` function starts all packet handling lcores and starts -printing stats in a loop on the master lcore. The application can be -interrupted and closed using ``Ctrl-C``. The master lcore waits for +printing stats in a loop on the initial lcore. The application can be +interrupted and closed using ``Ctrl-C``. The initial lcore waits for all slave processes to finish, deallocates resources and exits. The processing lcores launching function are described below. diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst index 56014be17458..f395027b3498 100644 --- a/doc/guides/sample_app_ug/ip_pipeline.rst +++ b/doc/guides/sample_app_ug/ip_pipeline.rst @@ -122,7 +122,7 @@ is displayed and the application is terminated. Run-time ~~~~~~~~ -The master thread is creating and managing all the application objects based on CLI input. +The initial thread is creating and managing all the application objects based on CLI input. Each data plane thread runs one or several pipelines previously assigned to it in round-robin order. Each data plane thread executes two tasks in time-sharing mode: @@ -130,7 +130,7 @@ executes two tasks in time-sharing mode: 1. *Packet processing task*: Process bursts of input packets read from the pipeline input ports. 2. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request - messages send by the master thread. Examples: add/remove pipeline to/from current data plane thread, add/delete rules + messages send by the initial thread. Examples: add/remove pipeline to/from current data plane thread, add/delete rules to/from given table of a specific pipeline owned by the current data plane thread, read statistics, etc. Examples diff --git a/doc/guides/sample_app_ug/keep_alive.rst b/doc/guides/sample_app_ug/keep_alive.rst index 865ba69e5c47..bca5df8ba934 100644 --- a/doc/guides/sample_app_ug/keep_alive.rst +++ b/doc/guides/sample_app_ug/keep_alive.rst @@ -16,7 +16,7 @@ Overview -------- The application demonstrates how to protect against 'silent outages' -on packet processing cores. A Keep Alive Monitor Agent Core (master) +on packet processing cores. A Keep Alive Monitor Agent Core (initial) monitors the state of packet processing cores (worker cores) by dispatching pings at a regular time interval (default is 5ms) and monitoring the state of the cores. 
Cores states are: Alive, MIA, Dead diff --git a/doc/guides/sample_app_ug/l2_forward_event.rst b/doc/guides/sample_app_ug/l2_forward_event.rst index d536eee819d0..f384420cf1f0 100644 --- a/doc/guides/sample_app_ug/l2_forward_event.rst +++ b/doc/guides/sample_app_ug/l2_forward_event.rst @@ -630,8 +630,8 @@ not many packets to send, however it improves performance: /* if timer has reached its timeout */ if (unlikely(timer_tsc >= timer_period)) { - /* do this only on master core */ - if (lcore_id == rte_get_master_lcore()) { + /* do this only on initial core */ + if (lcore_id == rte_get_initial_lcore()) { print_stats(); /* reset the timer */ timer_tsc = 0; diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst index 671d0c7c19d4..615a55c36db9 100644 --- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst +++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst @@ -440,9 +440,9 @@ however it improves performance: /* if timer has reached its timeout */ if (unlikely(timer_tsc >= (uint64_t) timer_period)) { - /* do this only on master core */ + /* do this only on initial core */ - if (lcore_id == rte_get_master_lcore()) { + if (lcore_id == rte_get_initial_lcore()) { print_stats(); /* reset the timer */ diff --git a/doc/guides/sample_app_ug/l3_forward_graph.rst b/doc/guides/sample_app_ug/l3_forward_graph.rst index df50827bab86..4ac96fc0c2f7 100644 --- a/doc/guides/sample_app_ug/l3_forward_graph.rst +++ b/doc/guides/sample_app_ug/l3_forward_graph.rst @@ -22,7 +22,7 @@ Run-time path is main thing that differs from L3 forwarding sample application. Difference is that forwarding logic starting from Rx, followed by LPM lookup, TTL update and finally Tx is implemented inside graph nodes. These nodes are interconnected in graph framework. Application main loop needs to walk over -graph using ``rte_graph_walk()`` with graph objects created one per slave lcore. +graph using ``rte_graph_walk()`` with graph objects created one per worker lcore. The lookup method is as per implementation of ``ip4_lookup`` graph node. The ID of the output interface for the input packet is the next hop returned by @@ -265,7 +265,7 @@ headers will be provided run-time using ``rte_node_ip4_route_add()`` and Since currently ``ip4_lookup`` and ``ip4_rewrite`` nodes don't support lock-less mechanisms(RCU, etc) to add run-time forwarding data like route and rewrite data, forwarding data is added before packet processing loop is - launched on slave lcore. + launched on worker lcore. .. code-block:: c @@ -297,7 +297,7 @@ Packet Forwarding using Graph Walk ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Now that all the device configurations are done, graph creations are done and -forwarding data is updated with nodes, slave lcores will be launched with graph +forwarding data is updated with nodes, worker lcores will be launched with graph main loop. Graph main loop is very simple in the sense that it needs to continuously call a non-blocking API ``rte_graph_walk()`` with it's lcore specific graph object that was already created. 
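The l2fwd hunks above all follow the same pattern; a compilable sketch of it, using the renamed rte_get_initial_lcore() and with print_stats() reduced to a stub (illustrative only, not the sample application's code):

    #include <stdint.h>
    #include <stdio.h>

    #include <rte_cycles.h>
    #include <rte_lcore.h>

    static void
    print_stats(void)
    {
        printf("stats...\n");    /* stub standing in for the sample's version */
    }

    void
    poll_loop(uint64_t timer_period)
    {
        uint64_t prev_tsc = rte_rdtsc(), cur_tsc, timer_tsc = 0;

        for (;;) {
            /* ... receive and forward packets ... */

            cur_tsc = rte_rdtsc();
            timer_tsc += cur_tsc - prev_tsc;
            prev_tsc = cur_tsc;

            if (timer_tsc < timer_period)
                continue;

            /* do this only on the initial core */
            if (rte_lcore_id() == rte_get_initial_lcore())
                print_stats();

            /* reset the timer */
            timer_tsc = 0;
        }
    }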
diff --git a/doc/guides/sample_app_ug/l3_forward_power_man.rst b/doc/guides/sample_app_ug/l3_forward_power_man.rst index 0cc6f2e62e75..f20502c41a37 100644 --- a/doc/guides/sample_app_ug/l3_forward_power_man.rst +++ b/doc/guides/sample_app_ug/l3_forward_power_man.rst @@ -441,7 +441,7 @@ The telemetry mode support for ``l3fwd-power`` is a standalone mode, in this mod ``l3fwd-power`` does simple l3fwding along with calculating empty polls, full polls, and busy percentage for each forwarding core. The aggregation of these values of all cores is reported as application level telemetry to metric -library for every 500ms from the master core. +library for every 500ms from the initial core. The busy percentage is calculated by recording the poll_count and when the count reaches a defined value the total diff --git a/doc/guides/sample_app_ug/link_status_intr.rst b/doc/guides/sample_app_ug/link_status_intr.rst index 04c40f28540d..e31fd2cc7368 100644 --- a/doc/guides/sample_app_ug/link_status_intr.rst +++ b/doc/guides/sample_app_ug/link_status_intr.rst @@ -401,9 +401,9 @@ However, it improves performance: /* if timer has reached its timeout */ if (unlikely(timer_tsc >= (uint64_t) timer_period)) { - /* do this only on master core */ + /* do this only on initial core */ - if (lcore_id == rte_get_master_lcore()) { + if (lcore_id == rte_get_initial_lcore()) { print_stats(); /* reset the timer */ diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst index f2a79a639763..51b8db5cf75a 100644 --- a/doc/guides/sample_app_ug/multi_process.rst +++ b/doc/guides/sample_app_ug/multi_process.rst @@ -66,7 +66,7 @@ The process should start successfully and display a command prompt as follows: EAL: check igb_uio module EAL: check module finished - EAL: Master core 0 is ready (tid=54e41820) + EAL: Initial core 0 is ready (tid=54e41820) EAL: Core 1 is ready (tid=53b32700) Starting core 1 @@ -92,7 +92,7 @@ At any stage, either process can be terminated using the quit command. .. code-block:: console - EAL: Master core 10 is ready (tid=b5f89820) EAL: Master core 8 is ready (tid=864a3820) + EAL: Initial core 10 is ready (tid=b5f89820) EAL: Initial core 8 is ready (tid=864a3820) EAL: Core 11 is ready (tid=84ffe700) EAL: Core 9 is ready (tid=85995700) Starting core 11 Starting core 9 simple_mp > send hello_secondary simple_mp > core 9: Received 'hello_secondary' @@ -273,7 +273,7 @@ In addition to the EAL parameters, the application- specific parameters are: .. note:: - In the server process, a single thread, the master thread, that is, the lowest numbered lcore in the coremask/corelist, performs all packet I/O. + In the server process, a single thread, the initial thread, that is, the lowest numbered lcore in the coremask/corelist, performs all packet I/O. If a coremask/corelist is specified with more than a single lcore bit set in it, an additional lcore will be used for a thread to periodically print packet count statistics. diff --git a/doc/guides/sample_app_ug/packet_ordering.rst b/doc/guides/sample_app_ug/packet_ordering.rst index 1c8ee5d04071..e82938bd7c9c 100644 --- a/doc/guides/sample_app_ug/packet_ordering.rst +++ b/doc/guides/sample_app_ug/packet_ordering.rst @@ -12,14 +12,14 @@ Overview The application uses at least three CPU cores: -* RX core (maser core) receives traffic from the NIC ports and feeds Worker +* RX core (initial core) receives traffic from the NIC ports and feeds Worker cores with traffic through SW queues. 
-* Worker core (slave core) basically do some light work on the packet. +* Worker cores basically do some light work on the packet. Currently it modifies the output port of the packet for configurations with more than one port enabled. -* TX Core (slave core) receives traffic from Worker cores through software queues, +* TX Core receives traffic from Worker cores through software queues, inserts out-of-order packets into reorder buffer, extracts ordered packets from the reorder buffer and sends them to the NIC ports for transmission. @@ -46,7 +46,7 @@ The application execution command line is: ./packet_ordering [EAL options] -- -p PORTMASK [--disable-reorder] [--insight-worker] The -c EAL CPU_COREMASK option has to contain at least 3 CPU cores. -The first CPU core in the core mask is the master core and would be assigned to +The first CPU core in the core mask is the initial core and would be assigned to RX core, the last to TX core and the rest to Worker cores. The PORTMASK parameter must contain either 1 or even enabled port numbers. diff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst index b04d0ba444af..29105f9708eb 100644 --- a/doc/guides/sample_app_ug/performance_thread.rst +++ b/doc/guides/sample_app_ug/performance_thread.rst @@ -280,8 +280,8 @@ functionality into different threads, and the pairs of RX and TX threads are interconnected via software rings. On initialization an L-thread scheduler is started on every EAL thread. On all -but the master EAL thread only a dummy L-thread is initially started. -The L-thread started on the master EAL thread then spawns other L-threads on +but the initial EAL thread only a dummy L-thread is initially started. +The L-thread started on the initial EAL thread then spawns other L-threads on different L-thread schedulers according the command line parameters. The RX threads poll the network interface queues and post received packets @@ -1217,5 +1217,5 @@ Setting ``LTHREAD_DIAG`` also enables counting of statistics about cache and queue usage, and these statistics can be displayed by calling the function ``lthread_diag_stats_display()``. This function also performs a consistency check on the caches and queues. The function should only be called from the -master EAL thread after all slave threads have stopped and returned to the C +initial EAL thread after all worker threads have stopped and returned to the C main program, otherwise the consistency check will fail. diff --git a/doc/guides/sample_app_ug/qos_scheduler.rst b/doc/guides/sample_app_ug/qos_scheduler.rst index b5010657a7d8..345ecbb5905d 100644 --- a/doc/guides/sample_app_ug/qos_scheduler.rst +++ b/doc/guides/sample_app_ug/qos_scheduler.rst @@ -71,7 +71,7 @@ Optional application parameters include: In this mode, the application shows a command line that can be used for obtaining statistics while scheduling is taking place (see interactive mode below for more information). -* --mst n: Master core index (the default value is 1). +* --mst n: Initial core index (the default value is 1). * --rsz "A, B, C": Ring sizes: @@ -329,7 +329,7 @@ Another example with 2 packet flow configurations using different ports but shar Note that independent cores for the packet flow configurations for each of the RX, WT and TX thread are also supported, providing flexibility to balance the work. -The EAL coremask/corelist is constrained to contain the default mastercore 1 and the RX, WT and TX cores only. 
+The EAL coremask/corelist is constrained to contain the default initial lcore 1 and the RX, WT and TX cores only. Explanation ----------- diff --git a/doc/guides/sample_app_ug/timer.rst b/doc/guides/sample_app_ug/timer.rst index 98d762d2388c..59a8ab11e9b6 100644 --- a/doc/guides/sample_app_ug/timer.rst +++ b/doc/guides/sample_app_ug/timer.rst @@ -49,17 +49,18 @@ In addition to EAL initialization, the timer subsystem must be initialized, by c rte_timer_subsystem_init(); After timer creation (see the next paragraph), -the main loop is executed on each slave lcore using the well-known rte_eal_remote_launch() and also on the master. +the main loop is executed on each worker lcore using the well-known rte_eal_remote_launch() and +also on the initial lcore. .. code-block:: c - /* call lcore_mainloop() on every slave lcore */ + /* call lcore_mainloop() on every worker lcore */ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(lcore_mainloop, NULL, lcore_id); } - /* call it on master lcore too */ + /* call it on initial lcore too */ (void) lcore_mainloop(NULL); @@ -105,7 +106,7 @@ This call to rte_timer_init() is necessary before doing any other operation on t Then, the two timers are configured: -* The first timer (timer0) is loaded on the master lcore and expires every second. +* The first timer (timer0) is loaded on the initial lcore and expires every second. Since the PERIODICAL flag is provided, the timer is reloaded automatically by the timer subsystem. The callback function is timer0_cb(). @@ -115,7 +116,7 @@ Then, the two timers are configured: .. code-block:: c - /* load timer0, every second, on master lcore, reloaded automatically */ + /* load timer0, every second, on initial lcore, reloaded automatically */ hz = rte_get_hpet_hz(); diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index f169604752b8..7d6b81de7f46 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -71,7 +71,7 @@ The command line options are: * ``--coremask=0xXX`` Set the hexadecimal bitmask of the cores running the packet forwarding test. - The master lcore is reserved for command line parsing only and cannot be masked on for packet forwarding. + The initial lcore is reserved for command line parsing only and cannot be masked on for packet forwarding. * ``--portmask=0xXX`` diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index a808b6a308f2..7d4db1140092 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -692,7 +692,7 @@ This is equivalent to the ``--coremask`` command-line option. .. note:: - The master lcore is reserved for command line parsing only and cannot be masked on for packet forwarding. + The initial lcore is reserved for command line parsing only and cannot be masked on for packet forwarding. 
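Returning to the timer guide hunks above, the two fragments can also be collapsed into the single-call form used elsewhere in this series; the sketch below assumes the CALL_INITIAL spelling proposed here and stubs out the sample's lcore_mainloop():

    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_timer.h>

    static int
    lcore_mainloop(void *arg)
    {
        (void)arg;
        for (;;)
            rte_timer_manage();    /* run callbacks of expired timers on this lcore */
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        rte_timer_subsystem_init();

        /* equivalent to launching on every worker lcore and then
         * calling lcore_mainloop() on the initial lcore, as shown above */
        rte_eal_mp_remote_launch(lcore_mainloop, NULL, CALL_INITIAL);
        rte_eal_mp_wait_lcore();
        return 0;
    }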
set portmask ~~~~~~~~~~~~ From patchwork Wed Jul 1 19:46:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72660 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7BC9DA0350; Wed, 1 Jul 2020 21:50:29 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 343C71C2E8; Wed, 1 Jul 2020 21:47:36 +0200 (CEST) Received: from mail-pl1-f193.google.com (mail-pl1-f193.google.com [209.85.214.193]) by dpdk.org (Postfix) with ESMTP id 9BE5B1D568 for ; Wed, 1 Jul 2020 21:47:31 +0200 (CEST) Received: by mail-pl1-f193.google.com with SMTP id bf7so987316plb.2 for ; Wed, 01 Jul 2020 12:47:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=nZpKTqS2c89gbb+Jo5xp1qIaqvDRrbGoDxRJTkV84Xw=; b=kaWL7cbMJ264E+cir+pH5ULPp9piSKjmJZdwAmlR7bmcBUbphqbTDAGWDJuhkAIxgi RmIcAKZU2tQwRpIUl0IWjkjiInr6v7k6AsMzmATwLu66AI5GaXU9mI3xgvc89cqn4L4t +DUwRVxbzoJD7obapBsc6jaOXUC9i9MBuuBJSaRW86a2aREgvVs/A4VFq6cadKXJpjZF V/DXt2aq6+lRhtvNZ/9FccbSNmHFKrk9poC8Nm0wItO7ADi7qt28fz0cVO5TsyWsU+tu gysCQdijzzYgH1F00JPnZ2Va9HCTFFollCNuH8mRUYZssuWkMXjf+T8j2egOQQOo0HGG otww== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=nZpKTqS2c89gbb+Jo5xp1qIaqvDRrbGoDxRJTkV84Xw=; b=gj11WBGsukkmMBVhVxczOtXbLCwPw2ANjAvMeeRXtOtqdSYZxLw5EeA1Gbocn5YbNS oBYDmQ2GOWBW5Qf70mHx59viU36PjYc2UX86HN8af3Zuiaf2PxuzjiPQuzo/Ctd/7Nap OTP+CsWL5pCvHfs1gvi+rzgwjZwxqorxh0HSNOeM8wGkw5UWrjPw9UZT3OtrcW6XqgeF cSAvA95ybEcenDqDkW2sfDVLkJmePHN42msytyy2ghD+MAMVw4+J4+wcdrAoUecbtCbD SaFIUPBllUKvK2WLV/BEbOu8lEoeYoh/sY4+VQmMd09R1JrgmUtccC+Uj1HhwOmFstTT mUqg== X-Gm-Message-State: AOAM531An9oq7b6a2Hb81klCB7aUuD3T6zfG5B0rTqOacYjPn/s3wnS7 N9kZPR7/MDDqhd7ZDXhGQ7LoyTV/x8M= X-Google-Smtp-Source: ABdhPJwn+qizPud6kFGjdMMgtdjzFO5swyaMpeViuAOEAUc79S0U8GXmSsYSZoEbhcrHNcc0MlIwPg== X-Received: by 2002:a17:90a:e7cf:: with SMTP id kb15mr30956121pjb.86.1593632850500; Wed, 01 Jul 2020 12:47:30 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. [204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:29 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:46 -0700 Message-Id: <20200701194650.10705-24-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 23/27] app/pdump: replace references to master/slave lcore X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use initial and worker lcore instead. 
Signed-off-by: Stephen Hemminger --- app/pdump/main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/app/pdump/main.c b/app/pdump/main.c index c38c53719e7d..1590a716a31d 100644 --- a/app/pdump/main.c +++ b/app/pdump/main.c @@ -947,7 +947,7 @@ dump_packets(void) rte_exit(EXIT_FAILURE, "failed to wait\n"); } - /* master core */ + /* initial core */ while (!quit_signal) ; } From patchwork Wed Jul 1 19:46:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72661 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1B586A0350; Wed, 1 Jul 2020 21:50:42 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E22BD1D146; Wed, 1 Jul 2020 21:47:37 +0200 (CEST) Received: from mail-pj1-f68.google.com (mail-pj1-f68.google.com [209.85.216.68]) by dpdk.org (Postfix) with ESMTP id 4310B1C2A9 for ; Wed, 1 Jul 2020 21:47:33 +0200 (CEST) Received: by mail-pj1-f68.google.com with SMTP id h22so11463167pjf.1 for ; Wed, 01 Jul 2020 12:47:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=EwseC58lqRaNP3bGpu64ZyOpL/S00h/5lex0HaBgHME=; b=TxIv5mB8MaoPdy+xMPVL0B62UK+PdtfA98qWvd3SgmrBfAGvrq1wpYk+aKhl7ZY4PF l5avxohgv6CZUUDT/AZq4EWVHedorxR4w/lMDnSihgg1Ucg+++aRRsWZ7V0KagMAuMWg WFOTGmPRMhn+al18hpeUtvfPwP70/o0/kq/hf8aBmS+DzgW2YQ1faA83kn9gfDr14IkI hZcFHUPkKUfL8Efi7ygponyw5exJ23JG1hg+NDhwL2T+QCHLL3jfduHWDBhXR2biYa9x vDuKEquIN3Kp04abVRertAAUM0aNXwGdVqlfYGgR5KhWE/QGHVIpsPRlJh60ip4utctf WaIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=EwseC58lqRaNP3bGpu64ZyOpL/S00h/5lex0HaBgHME=; b=gSLkDdVJdZ1Qj4SzjmdhqRyNptgw2U5DYUy6zKui3SwfQn/M2yYTqMjwufLukCSQHQ T+1UrJ0ksfGOUG/PyRaYF5l6cBYHJELF7mqx90qJ+XJQWQHYFWts5DOpNEFjtO02DJip ysG4qPWuCyOjMKCjPB43IAHGdU+crBLPEqQ7KnWeEhJtgz7I/lsXqCsY4IlNYaM5enD2 QO+c7sNBXWZGs2eNGTpLMIsPHH6WbnQA1gxsHsDe/5fJPphze3pcF5eSGf+DFaFa+HC1 skZbYGNKhn/PoCH25ELadNbyF+t/Xe9qhoUzDkdkBMdMLOmYdC6khiTAD+MD9+R55L8W KQmA== X-Gm-Message-State: AOAM533ovshJ894cGv9sDSeBmTGGcKIXfGBWabeLhMO0Qc9haja15h46 tJilj8s87++hokufis17uhhQEvej/aw= X-Google-Smtp-Source: ABdhPJw0P+7+8f7hRP1oVMcSUTgR+r1rvTFgTHB3vaoXg8MPIhiaJF+Dcn3C+X3nKZkIa0Fd3REnxw== X-Received: by 2002:a17:90a:f00d:: with SMTP id bt13mr22880229pjb.109.1593632851708; Wed, 01 Jul 2020 12:47:31 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:30 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:47 -0700 Message-Id: <20200701194650.10705-25-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 24/27] app/test-XXX: replace reference to master/slave X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use new terminology for lcore's Signed-off-by: Stephen Hemminger --- app/test-acl/main.c | 2 +- app/test-bbdev/test_bbdev_perf.c | 16 ++++++++-------- app/test-compress-perf/main.c | 8 ++++---- app/test-crypto-perf/main.c | 14 +++++++------- app/test-flow-perf/main.c | 2 +- app/test-pipeline/main.c | 4 ++-- app/test-sad/main.c | 4 ++-- 7 files changed, 25 insertions(+), 25 deletions(-) diff --git a/app/test-acl/main.c b/app/test-acl/main.c index 0a5dfb621d5e..72ff26674dac 100644 --- a/app/test-acl/main.c +++ b/app/test-acl/main.c @@ -1085,7 +1085,7 @@ main(int argc, char **argv) if (config.trace_file != NULL) tracef_init(); - RTE_LCORE_FOREACH_SLAVE(lcore) + RTE_LCORE_FOREACH_WORKER(lcore) rte_eal_remote_launch(search_ip5tuples, NULL, lcore); search_ip5tuples(NULL); diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c index 45c0d62acabc..fecd20f72e8a 100644 --- a/app/test-bbdev/test_bbdev_perf.c +++ b/app/test-bbdev/test_bbdev_perf.c @@ -3651,14 +3651,14 @@ bler_test(struct active_device *ad, rte_atomic16_set(&op_params->sync, SYNC_WAIT); - /* Master core is set at first entry */ + /* Initial core is set at first entry */ t_params[0].dev_id = ad->dev_id; t_params[0].lcore_id = rte_lcore_id(); t_params[0].op_params = op_params; t_params[0].queue_id = ad->queue_ids[used_cores++]; t_params[0].iter_count = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (used_cores >= num_lcores) break; @@ -3675,7 +3675,7 @@ bler_test(struct active_device *ad, rte_atomic16_set(&op_params->sync, SYNC_START); ret = bler_function(&t_params[0]); - /* Master core is always used */ + /* Initial core is always used */ for (used_cores = 1; used_cores < num_lcores; used_cores++) ret |= rte_eal_wait_lcore(t_params[used_cores].lcore_id); @@ -3769,14 +3769,14 @@ throughput_test(struct active_device *ad, rte_atomic16_set(&op_params->sync, SYNC_WAIT); - /* Master core is set at first entry */ + /* Initial core is set at first entry */ t_params[0].dev_id = ad->dev_id; t_params[0].lcore_id = rte_lcore_id(); t_params[0].op_params = op_params; t_params[0].queue_id = ad->queue_ids[used_cores++]; t_params[0].iter_count = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (used_cores >= num_lcores) break; @@ -3793,7 +3793,7 @@ throughput_test(struct active_device *ad, rte_atomic16_set(&op_params->sync, SYNC_START); ret = throughput_function(&t_params[0]); - /* Master core is always used */ + /* Initial core is always used */ for (used_cores = 1; used_cores < num_lcores; used_cores++) ret |= rte_eal_wait_lcore(t_params[used_cores].lcore_id); @@ -3817,7 
+3817,7 @@ throughput_test(struct active_device *ad, /* In interrupt TC we need to wait for the interrupt callback to deqeue * all pending operations. Skip waiting for queues which reported an * error using processing_status variable. - * Wait for master lcore operations. + * Wait for initial lcore operations. */ tp = &t_params[0]; while ((rte_atomic16_read(&tp->nb_dequeued) < @@ -3830,7 +3830,7 @@ throughput_test(struct active_device *ad, tp->mbps /= TEST_REPETITIONS; ret |= (int)rte_atomic16_read(&tp->processing_status); - /* Wait for slave lcores operations */ + /* Wait for worker lcores operations */ for (used_cores = 1; used_cores < num_lcores; used_cores++) { tp = &t_params[used_cores]; diff --git a/app/test-compress-perf/main.c b/app/test-compress-perf/main.c index ed21605d89c2..cc9951a9b107 100644 --- a/app/test-compress-perf/main.c +++ b/app/test-compress-perf/main.c @@ -389,7 +389,7 @@ main(int argc, char **argv) i = 0; uint8_t qp_id = 0, cdev_index = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -413,7 +413,7 @@ main(int argc, char **argv) while (test_data->level <= test_data->level_lst.max) { i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -424,7 +424,7 @@ main(int argc, char **argv) i++; } i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -449,7 +449,7 @@ main(int argc, char **argv) case ST_DURING_TEST: i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c index 7bb286ccbe6c..9cab779e164c 100644 --- a/app/test-crypto-perf/main.c +++ b/app/test-crypto-perf/main.c @@ -590,7 +590,7 @@ main(int argc, char **argv) i = 0; uint8_t qp_id = 0, cdev_index = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -654,7 +654,7 @@ main(int argc, char **argv) distribution_total[buffer_size_count - 1]; i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -664,7 +664,7 @@ main(int argc, char **argv) i++; } i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -684,7 +684,7 @@ main(int argc, char **argv) while (opts.test_buffer_size <= opts.max_buffer_size) { i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -694,7 +694,7 @@ main(int argc, char **argv) i++; } i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -718,7 +718,7 @@ main(int argc, char **argv) } i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; @@ -738,7 +738,7 @@ main(int argc, char **argv) err: i = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (i == total_nb_qps) break; diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c index 1ae285655669..99a2e1e4ccc9 100644 --- a/app/test-flow-perf/main.c +++ b/app/test-flow-perf/main.c @@ -1097,7 +1097,7 @@ main(int argc, char **argv) if (enable_fwd) { init_lcore_info(); - rte_eal_mp_remote_launch(start_forwarding, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(start_forwarding, NULL, CALL_INITIAL); } RTE_ETH_FOREACH_DEV(port) { diff --git 
a/app/test-pipeline/main.c b/app/test-pipeline/main.c index 7f0d6d3f1862..a54c32a32d17 100644 --- a/app/test-pipeline/main.c +++ b/app/test-pipeline/main.c @@ -66,8 +66,8 @@ main(int argc, char **argv) app_init(); /* Launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(app_lcore_main_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore) { + rte_eal_mp_remote_launch(app_lcore_main_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore) { if (rte_eal_wait_lcore(lcore) < 0) return -1; } diff --git a/app/test-sad/main.c b/app/test-sad/main.c index b01e84c570bb..38cd40ee87cf 100644 --- a/app/test-sad/main.c +++ b/app/test-sad/main.c @@ -657,11 +657,11 @@ main(int argc, char **argv) add_rules(sad, 10); if (config.parallel_lookup) - rte_eal_mp_remote_launch(lookup, sad, SKIP_MASTER); + rte_eal_mp_remote_launch(lookup, sad, SKIP_INITIAL); lookup(sad); if (config.parallel_lookup) - RTE_LCORE_FOREACH_SLAVE(lcore_id) + RTE_LCORE_FOREACH_WORKER(lcore_id) if (rte_eal_wait_lcore(lcore_id) < 0) return -1; From patchwork Wed Jul 1 19:46:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72662 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 53765A0350; Wed, 1 Jul 2020 21:50:49 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 327E91D59E; Wed, 1 Jul 2020 21:47:39 +0200 (CEST) Received: from mail-pl1-f195.google.com (mail-pl1-f195.google.com [209.85.214.195]) by dpdk.org (Postfix) with ESMTP id 5CC2D1D544 for ; Wed, 1 Jul 2020 21:47:34 +0200 (CEST) Received: by mail-pl1-f195.google.com with SMTP id j4so10312861plk.3 for ; Wed, 01 Jul 2020 12:47:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HVxhS9CDT6vcCZ3vdHAapIoebNGTFRveRuUj4bTdETQ=; b=A8lVe9jc29b05knbDtbyZDpJjRxKOSW4GHjeqeKpvKU+F7dfqX5C6xMNgepSm8iN40 hMDBM7IG68i4Ry2wu7Y00tQHPR2ElT43Wkqg05j2t1+sPekN5bCDLLjKyfupodYbEEfh DsROJgvYrv1Nd62Xtb4+0j5JQ+Vbt6njaUhThwvAM/e6vmCMociLDTA3/HK6ACvzvx2G SgArNOyEIQrfyLcGHUtMwHb68DQBw3bSzacc7xkg3bb4ZXZBgeTrCRF3OIXUZBJ1ZQlK 9MwXWY8rrFodfDmu2VVTOh/hIsDHdI45OL72l8LpX6SpeKKR53yjoXvIye9Waaz0s/HD +Biw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HVxhS9CDT6vcCZ3vdHAapIoebNGTFRveRuUj4bTdETQ=; b=mQP667+NxP7KJ4e1/9MCwUzNjHJcAr7mzM9Kka9XvWb2NcorX+EqcUEqJIkMPJh/8e 874SJ8q8GPE0IhRLzPsUSfItzW5OFP7SJYan0GTuqERYPdE4Ouj9kIvgTCQlEBE73qxS KO5fHVQ30jx5NYI1b0+I2WCYA60MVrl0YAQ2WzIeTuQ5C4KiME//HQ004wjbt8owoDbk tdjUReHLhKR9Y5u4CXSNl7/HHLNOQhwa3OqTyCNEDRTCdk4+OPWP0WYseE1AcBjPiK+i yJka6Vhi2T8ONAENDjAgjZRmkwZYgO4JtYCBxXf/BAxRHMhrX4sTnksIgTPsUh2pHmQ8 YPmg== X-Gm-Message-State: AOAM531mIAQ7I7VnPeESNsTjjmbxbcJo+W7HwY4YvA1Dp9KtOhuUzdg4 gi266RIf7ocBYXkCRwCVS8kkG2w41Vw= X-Google-Smtp-Source: ABdhPJxXPb5yVvoopIwQ7Dla82lSPEXCEGkXn3KtWiJdRmCpaZRZDxD4VlgsyPxXeaq5OEGT8QzjpQ== X-Received: by 2002:a17:90a:24ed:: with SMTP id i100mr25789176pje.22.1593632853146; Wed, 01 Jul 2020 12:47:33 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:32 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:48 -0700 Message-Id: <20200701194650.10705-26-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 25/27] eal: mark old naming as deprecated X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use of old RTE_LCORE_FOREACH_SLAVE and rte_get_master_lcore_id() are marked as deprecated. All uses of these in DPDK itself is gone. This will cause warnings for applications still using them. Signed-off-by: Stephen Hemminger --- lib/librte_eal/include/rte_launch.h | 4 ++-- lib/librte_eal/include/rte_lcore.h | 4 +++- 2 files changed, 5 insertions(+), 3 deletions(-) diff --git a/lib/librte_eal/include/rte_launch.h b/lib/librte_eal/include/rte_launch.h index 9b68685d99d4..8d58ca5302a7 100644 --- a/lib/librte_eal/include/rte_launch.h +++ b/lib/librte_eal/include/rte_launch.h @@ -76,8 +76,8 @@ enum rte_rmt_call_initial_t { /** * Deprecated backward compatiable definitions */ -#define SKIP_MASTER SKIP_INITIAL -#define CALL_MASTER CALL_INITIAL +#define SKIP_MASTER _Pragma("GCC warning \"'SKIP_MASTER' is deprecated\"") SKIP_INITIAL +#define CALL_MASTER _Pragma("GCC warning \"'CALL_MASTER' is deprecated\"") CALL_INITIAL /** * Launch a function on all lcores. 
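For readers unfamiliar with the trick used in the hunk above: _Pragma() lets an object-like macro keep expanding to the new name while emitting a diagnostic at every use site. A standalone, non-DPDK illustration with invented names:

    #define NEW_LIMIT 42
    /* old spelling still works, but every use prints a GCC/Clang warning */
    #define OLD_LIMIT \
        _Pragma("GCC warning \"'OLD_LIMIT' is deprecated, use 'NEW_LIMIT'\"") \
        NEW_LIMIT

    int
    main(void)
    {
        int limit = OLD_LIMIT;    /* warning: 'OLD_LIMIT' is deprecated... */
        return limit;
    }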
diff --git a/lib/librte_eal/include/rte_lcore.h b/lib/librte_eal/include/rte_lcore.h index 069cb1f427b9..eaa7c0f0b67c 100644 --- a/lib/librte_eal/include/rte_lcore.h +++ b/lib/librte_eal/include/rte_lcore.h @@ -67,6 +67,7 @@ unsigned int rte_get_initial_lcore(void); * @return * the id of the initial lcore */ +__rte_deprecated unsigned int rte_get_master_lcore(void); /** @@ -216,7 +217,8 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_initial, int wrap); /** * Backward compatibility */ -#define RTE_LCORE_FOREACH_SLAVE(x) \ +#define RTE_LCORE_FOREACH_SLAVE(x) \ + _Pragma("GCC warning \"'RTE_LCORE_FOREACH_SLAVE' macro is deprecated\"") \ RTE_LCORE_FOREACH_WORKER(x) From patchwork Wed Jul 1 19:46:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72663 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9B2F9A0350; Wed, 1 Jul 2020 21:50:56 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 30B561D5AA; Wed, 1 Jul 2020 21:47:40 +0200 (CEST) Received: from mail-pj1-f68.google.com (mail-pj1-f68.google.com [209.85.216.68]) by dpdk.org (Postfix) with ESMTP id E6D3F1D582 for ; Wed, 1 Jul 2020 21:47:36 +0200 (CEST) Received: by mail-pj1-f68.google.com with SMTP id u8so11084718pje.4 for ; Wed, 01 Jul 2020 12:47:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=5yeUzuDwSRLEvW7EsNi1RDYCkdTIF22O2pAqitBNUQU=; b=e02ATKRiQSJjDN6ZUfaMmn8hzmAaNPYM00KYyJhr8SyLg9cu4olk6XbN/jjp2MZlfc FvGngIETPWzxTwjUo7b+237WTB40+uK+CPgFqY14BpuXizguwaQhFAlb0QBXDPzfCZzm zV48nTY7iryuSKg0HKDpAtXi3FO/ldlUQo0kuiKYcCQz9Cef2mTy5li8zXzv4IL9YumA Y5cOfh7Q4NYSeXJdFCOMTpuywBMZ+M2BcPVCNqatK0ys8Vwoh3srmYtH3kL27qD8lAxm NH8TV31W2c64WZL8AMfMV7rXK/CT1JBKJWqekxCbdoEad62u0VPVbPtvoOGXmahkMyQ1 v5bg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=5yeUzuDwSRLEvW7EsNi1RDYCkdTIF22O2pAqitBNUQU=; b=Yg14H58mg8jPnSgn0NRvsJci4cxy5Wc9nPOW5GAxzXwDH0hTjDUKyP0ysdZdCoCeC1 GRWbYdJXSNC++GbWSuA95g2FuaOXpXW3VEkoK6buLDLvJJ8Bl0RBop6Gvpcef+0Lo0C/ uVT8OSYgSCZikQ1za0zsjClD6Cxo1N+xoHIvq4Ihw3OKdv59TNtLpBd8+wt+9GLgMz8r KgWqtSvxuR2/w1nUXjsxhRfH0CwWWtiZ1aU9UFIk50EBbozOCGNb0dBaWUGNpG4z3JHD RDc90d7lV9ZAL+JLvhf9UTRiM6cGjLLcRre0yNuPjADB4Q3n1yjy/cOokeHk+lk3Uxkn ULPA== X-Gm-Message-State: AOAM532InRAAkTWCKKQbkQYJov0HEvFu5K8Y3MoKxttnGKaLXc5rYk3W 8wiKyiODtkz9QwzByy3XrGRr3QTjDZ0= X-Google-Smtp-Source: ABdhPJxKgsI56+d94asQOAYiHFHTWwKqbkoy9WIuytz6+liI6A/tvEk6Xy1QdIjRoi7OmDLMSEvvJg== X-Received: by 2002:a17:90a:7ac6:: with SMTP id b6mr20003289pjl.213.1593632854398; Wed, 01 Jul 2020 12:47:34 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:33 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:49 -0700 Message-Id: <20200701194650.10705-27-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 26/27] memif: replace master/slave with server/client X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The names master and slave should be changed in this driver. It is really a traditional client server model, so use those names. Signed-off-by: Stephen Hemminger --- doc/guides/nics/memif.rst | 78 +++++++++--------- drivers/net/memif/memif.h | 40 +++++----- drivers/net/memif/memif_socket.c | 54 ++++++------- drivers/net/memif/memif_socket.h | 5 +- drivers/net/memif/rte_eth_memif.c | 128 +++++++++++++++--------------- drivers/net/memif/rte_eth_memif.h | 20 ++--- 6 files changed, 163 insertions(+), 162 deletions(-) diff --git a/doc/guides/nics/memif.rst b/doc/guides/nics/memif.rst index 9c67d7141cbe..5723ac01c7ee 100644 --- a/doc/guides/nics/memif.rst +++ b/doc/guides/nics/memif.rst @@ -13,13 +13,13 @@ The created device transmits packets in a raw format. It can be used with Ethernet mode, IP mode, or Punt/Inject. At this moment, only Ethernet mode is supported in DPDK memif implementation. -Memif works in two roles: master and slave. Slave connects to master over an +Memif works in two roles: server and client. Client connects to server over an existing socket. It is also a producer of shared memory file and initializes the shared memory. Each interface can be connected to one peer interface -at same time. The peer interface is identified by id parameter. Master -creates the socket and listens for any slave connection requests. The socket +at same time. The peer interface is identified by id parameter. Server +creates the socket and listens for any client connection requests. The socket may already exist on the system. Be sure to remove any such sockets, if you -are creating a master interface, or you will see an "Address already in use" +are creating a server interface, or you will see an "Address already in use" error. Function ``rte_pmd_memif_remove()``, which removes memif interface, will also remove a listener socket, if it is not being used by any other interface. @@ -31,57 +31,57 @@ net_memif1, and so on. Memif uses unix domain socket to transmit control messages. Each memif has a unique id per socket. This id is used to identify peer interface. If you are connecting multiple interfaces using same socket, be sure to specify unique ids ``id=0``, ``id=1``, -etc. Note that if you assign a socket to a master interface it becomes a -listener socket. Listener socket can not be used by a slave interface on same +etc. Note that if you assign a socket to a server interface it becomes a +listener socket. Listener socket can not be used by a client interface on same client. .. 
csv-table:: **Memif configuration options** :header: "Option", "Description", "Default", "Valid value" "id=0", "Used to identify peer interface", "0", "uint32_t" - "role=master", "Set memif role", "slave", "master|slave" + "role=server", "Set memif role", "client", "server|client" "bsize=1024", "Size of single packet buffer", "2048", "uint16_t" "rsize=11", "Log2 of ring size. If rsize is 10, actual ring size is 1024", "10", "1-14" "socket=/tmp/memif.sock", "Socket filename", "/tmp/memif.sock", "string len 108" "mac=01:23:45:ab:cd:ef", "Mac address", "01:ab:23:cd:45:ef", "" "secret=abc123", "Secret is an optional security option, which if specified, must be matched by peer", "", "string len 24" - "zero-copy=yes", "Enable/disable zero-copy slave mode. Only relevant to slave, requires '--single-file-segments' eal argument", "no", "yes|no" + "zero-copy=yes", "Enable/disable zero-copy client mode. Only relevant to client, requires '--single-file-segments' eal argument", "no", "yes|no" **Connection establishment** In order to create memif connection, two memif interfaces, each in separate -process, are needed. One interface in ``master`` role and other in -``slave`` role. It is not possible to connect two interfaces in a single +process, are needed. One interface in ``server`` role and other in +``client`` role. It is not possible to connect two interfaces in a single process. Each interface can be connected to one interface at same time, identified by matching id parameter. Memif driver uses unix domain socket to exchange required information between memif interfaces. Socket file path is specified at interface creation see -*Memif configuration options* table above. If socket is used by ``master`` +*Memif configuration options* table above. If socket is used by ``server`` interface, it's marked as listener socket (in scope of current process) and listens to connection requests from other processes. One socket can be used by -multiple interfaces. One process can have ``slave`` and ``master`` interfaces +multiple interfaces. One process can have ``client`` and ``server`` interfaces at the same time, provided each role is assigned unique socket. For detailed information on memif control messages, see: net/memif/memif.h. -Slave interface attempts to make a connection on assigned socket. Process +Client interface attempts to make a connection on assigned socket. Process listening on this socket will extract the connection request and create a new connected socket (control channel). Then it sends the 'hello' message -(``MEMIF_MSG_TYPE_HELLO``), containing configuration boundaries. Slave interface +(``MEMIF_MSG_TYPE_HELLO``), containing configuration boundaries. Client interface adjusts its configuration accordingly, and sends 'init' message (``MEMIF_MSG_TYPE_INIT``). This message among others contains interface id. Driver -uses this id to find master interface, and assigns the control channel to this +uses this id to find server interface, and assigns the control channel to this interface. If such interface is found, 'ack' message (``MEMIF_MSG_TYPE_ACK``) is -sent. Slave interface sends 'add region' message (``MEMIF_MSG_TYPE_ADD_REGION``) for -every region allocated. Master responds to each of these messages with 'ack' -message. Same behavior applies to rings. Slave sends 'add ring' message -(``MEMIF_MSG_TYPE_ADD_RING``) for every initialized ring. Master again responds to -each message with 'ack' message. To finalize the connection, slave interface +sent. 
Client interface sends 'add region' message (``MEMIF_MSG_TYPE_ADD_REGION``) for +every region allocated. Server responds to each of these messages with 'ack' +message. Same behavior applies to rings. Client sends 'add ring' message +(``MEMIF_MSG_TYPE_ADD_RING``) for every initialized ring. Server again responds to +each message with 'ack' message. To finalize the connection, client interface sends 'connect' message (``MEMIF_MSG_TYPE_CONNECT``). Upon receiving this message -master maps regions to its address space, initializes rings and responds with +server maps regions to its address space, initializes rings and responds with 'connected' message (``MEMIF_MSG_TYPE_CONNECTED``). Disconnect -(``MEMIF_MSG_TYPE_DISCONNECT``) can be sent by both master and slave interfaces at +(``MEMIF_MSG_TYPE_DISCONNECT``) can be sent by both server and client interfaces at any time, due to driver error or if the interface is being deleted. Files @@ -95,8 +95,8 @@ Shared memory **Shared memory format** -Slave is producer and master is consumer. Memory regions, are mapped shared memory files, -created by memif slave and provided to master at connection establishment. +Client is producer and server is consumer. Memory regions, are mapped shared memory files, +created by memif client and provided to server at connection establishment. Regions contain rings and buffers. Rings and buffers can also be separated into multiple regions. For no-zero-copy, rings and buffers are stored inside single memory region to reduce the number of opened files. @@ -171,11 +171,11 @@ Files - net/memif/memif.h *- descriptor and ring definitions* - net/memif/rte_eth_memif.c *- eth_memif_rx() eth_memif_tx()* -Zero-copy slave +Zero-copy client ~~~~~~~~~~~~~~~ -Zero-copy slave can be enabled with memif configuration option 'zero-copy=yes'. This option -is only relevant to slave and requires eal argument '--single-file-segments'. +Zero-copy client can be enabled with memif configuration option 'zero-copy=yes'. This option +is only relevant to client and requires eal argument '--single-file-segments'. This limitation is in place, because it is too expensive to identify memseg for each packet buffer, resulting in worse performance than with zero-copy disabled. With single file segments we can calculate offset from the beginning of the file @@ -183,9 +183,9 @@ for each packet buffer. **Shared memory format** -Region 0 is created by memif driver and contains rings. Slave interface exposes DPDK memory (memseg). +Region 0 is created by memif driver and contains rings. Client interface exposes DPDK memory (memseg). Instead of using memfd_create() to create new shared file, existing memsegs are used. -Master interface functions the same as with zero-copy disabled. +Server interface functions the same as with zero-copy disabled. region 0: @@ -211,24 +211,24 @@ Example: testpmd ---------------------------- In this example we run two instances of testpmd application and transmit packets over memif. 
-First create ``master`` interface:: +First create ``server`` interface:: - #./build/app/testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=master -- -i + #./build/app/testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=server -- -i -Now create ``slave`` interface (master must be already running so the slave will connect):: +Now create ``client`` interface (server must be already running so the client will connect):: #./build/app/testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif -- -i -You can also enable ``zero-copy`` on ``slave`` interface:: +You can also enable ``zero-copy`` on ``client`` interface:: #./build/app/testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif,zero-copy=yes --single-file-segments -- -i Start forwarding packets:: - Slave: + Client: testpmd> start - Master: + Server: testpmd> start tx_first Show status:: @@ -241,9 +241,9 @@ Example: testpmd and VPP ------------------------ For information on how to get and run VPP please see ``_. -Start VPP in interactive mode (should be by default). Create memif master interface in VPP:: +Start VPP in interactive mode (should be by default). Create memif server interface in VPP:: - vpp# create interface memif id 0 master no-zero-copy + vpp# create interface memif id 0 server no-zero-copy vpp# set interface state memif0/0 up vpp# set interface ip address memif0/0 192.168.1.1/24 @@ -259,7 +259,7 @@ Now create memif interface by running testpmd with these command line options:: #./testpmd --vdev=net_memif,socket=/run/vpp/memif.sock -- -i -Testpmd should now create memif slave interface and try to connect to master. +Testpmd should now create memif client interface and try to connect to server. In testpmd set forward option to icmpecho and start forwarding:: testpmd> set fwd icmpecho @@ -280,7 +280,7 @@ The situation is analogous to cross connecting 2 ports of the NIC by cable. To set the loopback, just use the same socket and id with different roles:: - #./testpmd --vdev=net_memif0,role=master,id=0 --vdev=net_memif1,role=slave,id=0 -- -i + #./testpmd --vdev=net_memif0,role=server,id=0 --vdev=net_memif1,role=client,id=0 -- -i Then start the communication:: diff --git a/drivers/net/memif/memif.h b/drivers/net/memif/memif.h index b91230890410..cb72c692ba03 100644 --- a/drivers/net/memif/memif.h +++ b/drivers/net/memif/memif.h @@ -12,8 +12,8 @@ #define MEMIF_NAME_SZ 32 /* - * S2M: direction slave -> master - * M2S: direction master -> slave + * C2S: direction client -> server + * S2C: direction server -> client */ /* @@ -33,8 +33,8 @@ typedef enum memif_msg_type { } memif_msg_type_t; typedef enum { - MEMIF_RING_S2M, /**< buffer ring in direction slave -> master */ - MEMIF_RING_M2S, /**< buffer ring in direction master -> slave */ + MEMIF_RING_C2S, /**< buffer ring in direction client -> server */ + MEMIF_RING_S2C, /**< buffer ring in direction server -> client */ } memif_ring_type_t; typedef enum { @@ -56,23 +56,23 @@ typedef uint8_t memif_log2_ring_size_t; */ /** - * M2S - * Contains master interfaces configuration. + * S2C + * Contains server interfaces configuration. */ typedef struct __rte_packed { uint8_t name[MEMIF_NAME_SZ]; /**< Client app name. 
In this case DPDK version */ memif_version_t min_version; /**< lowest supported memif version */ memif_version_t max_version; /**< highest supported memif version */ memif_region_index_t max_region; /**< maximum num of regions */ - memif_ring_index_t max_m2s_ring; /**< maximum num of M2S ring */ - memif_ring_index_t max_s2m_ring; /**< maximum num of S2M rings */ + memif_ring_index_t max_s2c_ring; /**< maximum num of S2C ring */ + memif_ring_index_t max_c2s_ring; /**< maximum num of C2S rings */ memif_log2_ring_size_t max_log2_ring_size; /**< maximum ring size (as log2) */ } memif_msg_hello_t; /** - * S2M + * C2S * Contains information required to identify interface - * to which the slave wants to connect. + * to which the client wants to connect. */ typedef struct __rte_packed { memif_version_t version; /**< memif version */ @@ -83,8 +83,8 @@ typedef struct __rte_packed { } memif_msg_init_t; /** - * S2M - * Request master to add new shared memory region to master interface. + * C2S + * Request server to add new shared memory region to server interface. * Shared files file descriptor is passed in cmsghdr. */ typedef struct __rte_packed { @@ -93,12 +93,12 @@ typedef struct __rte_packed { } memif_msg_add_region_t; /** - * S2M - * Request master to add new ring to master interface. + * C2S + * Request server to add new ring to server interface. */ typedef struct __rte_packed { uint16_t flags; /**< flags */ -#define MEMIF_MSG_ADD_RING_FLAG_S2M 1 /**< ring is in S2M direction */ +#define MEMIF_MSG_ADD_RING_FLAG_C2S 1 /**< ring is in C2S direction */ memif_ring_index_t index; /**< ring index */ memif_region_index_t region; /**< region index on which this ring is located */ memif_region_offset_t offset; /**< buffer start offset */ @@ -107,23 +107,23 @@ typedef struct __rte_packed { } memif_msg_add_ring_t; /** - * S2M + * C2S * Finalize connection establishment. */ typedef struct __rte_packed { - uint8_t if_name[MEMIF_NAME_SZ]; /**< slave interface name */ + uint8_t if_name[MEMIF_NAME_SZ]; /**< client interface name */ } memif_msg_connect_t; /** - * M2S + * S2C * Finalize connection establishment. */ typedef struct __rte_packed { - uint8_t if_name[MEMIF_NAME_SZ]; /**< master interface name */ + uint8_t if_name[MEMIF_NAME_SZ]; /**< server interface name */ } memif_msg_connected_t; /** - * S2M & M2S + * C2S & S2C * Disconnect interfaces. 
*/ typedef struct __rte_packed { diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c index 67794cb6fa8c..b1475374910e 100644 --- a/drivers/net/memif/memif_socket.c +++ b/drivers/net/memif/memif_socket.c @@ -143,8 +143,8 @@ memif_msg_enq_hello(struct memif_control_channel *cc) e->msg.type = MEMIF_MSG_TYPE_HELLO; h->min_version = MEMIF_VERSION; h->max_version = MEMIF_VERSION; - h->max_s2m_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS; - h->max_m2s_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS; + h->max_c2s_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS; + h->max_s2c_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS; h->max_region = ETH_MEMIF_MAX_REGION_NUM - 1; h->max_log2_ring_size = ETH_MEMIF_MAX_LOG2_RING_SIZE; @@ -165,10 +165,10 @@ memif_msg_receive_hello(struct rte_eth_dev *dev, memif_msg_t *msg) } /* Set parameters for active connection */ - pmd->run.num_s2m_rings = RTE_MIN(h->max_s2m_ring + 1, - pmd->cfg.num_s2m_rings); - pmd->run.num_m2s_rings = RTE_MIN(h->max_m2s_ring + 1, - pmd->cfg.num_m2s_rings); + pmd->run.num_c2s_rings = RTE_MIN(h->max_c2s_ring + 1, + pmd->cfg.num_c2s_rings); + pmd->run.num_s2c_rings = RTE_MIN(h->max_s2c_ring + 1, + pmd->cfg.num_s2c_rings); pmd->run.log2_ring_size = RTE_MIN(h->max_log2_ring_size, pmd->cfg.log2_ring_size); pmd->run.pkt_buffer_size = pmd->cfg.pkt_buffer_size; @@ -203,7 +203,7 @@ memif_msg_receive_init(struct memif_control_channel *cc, memif_msg_t *msg) dev = elt->dev; pmd = dev->data->dev_private; if (((pmd->flags & ETH_MEMIF_FLAG_DISABLED) == 0) && - (pmd->id == i->id) && (pmd->role == MEMIF_ROLE_MASTER)) { + (pmd->id == i->id) && (pmd->role == MEMIF_ROLE_SERVER)) { if (pmd->flags & (ETH_MEMIF_FLAG_CONNECTING | ETH_MEMIF_FLAG_CONNECTED)) { memif_msg_enq_disconnect(cc, @@ -300,21 +300,21 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd) } /* check if we have enough queues */ - if (ar->flags & MEMIF_MSG_ADD_RING_FLAG_S2M) { - if (ar->index >= pmd->cfg.num_s2m_rings) { + if (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) { + if (ar->index >= pmd->cfg.num_c2s_rings) { memif_msg_enq_disconnect(pmd->cc, "Invalid ring index", 0); return -1; } - pmd->run.num_s2m_rings++; + pmd->run.num_c2s_rings++; } else { - if (ar->index >= pmd->cfg.num_m2s_rings) { + if (ar->index >= pmd->cfg.num_s2c_rings) { memif_msg_enq_disconnect(pmd->cc, "Invalid ring index", 0); return -1; } - pmd->run.num_m2s_rings++; + pmd->run.num_s2c_rings++; } - mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_S2M) ? + mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ? dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index]; mq->intr_handle.fd = fd; @@ -449,7 +449,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx, return -1; ar = &e->msg.add_ring; - mq = (type == MEMIF_RING_S2M) ? dev->data->tx_queues[idx] : + mq = (type == MEMIF_RING_C2S) ? dev->data->tx_queues[idx] : dev->data->rx_queues[idx]; e->msg.type = MEMIF_MSG_TYPE_ADD_RING; @@ -458,7 +458,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx, ar->offset = mq->ring_offset; ar->region = mq->region; ar->log2_ring_size = mq->log2_ring_size; - ar->flags = (type == MEMIF_RING_S2M) ? MEMIF_MSG_ADD_RING_FLAG_S2M : 0; + ar->flags = (type == MEMIF_RING_C2S) ? 
MEMIF_MSG_ADD_RING_FLAG_C2S : 0; ar->private_hdr_size = 0; return 0; @@ -575,8 +575,8 @@ memif_disconnect(struct rte_eth_dev *dev) rte_spinlock_unlock(&pmd->cc_lock); /* unconfig interrupts */ - for (i = 0; i < pmd->cfg.num_s2m_rings; i++) { - if (pmd->role == MEMIF_ROLE_SLAVE) { + for (i = 0; i < pmd->cfg.num_c2s_rings; i++) { + if (pmd->role == MEMIF_ROLE_CLIENT) { if (dev->data->tx_queues != NULL) mq = dev->data->tx_queues[i]; else @@ -592,8 +592,8 @@ memif_disconnect(struct rte_eth_dev *dev) mq->intr_handle.fd = -1; } } - for (i = 0; i < pmd->cfg.num_m2s_rings; i++) { - if (pmd->role == MEMIF_ROLE_MASTER) { + for (i = 0; i < pmd->cfg.num_s2c_rings; i++) { + if (pmd->role == MEMIF_ROLE_SERVER) { if (dev->data->tx_queues != NULL) mq = dev->data->tx_queues[i]; else @@ -616,7 +616,7 @@ memif_disconnect(struct rte_eth_dev *dev) memset(&pmd->run, 0, sizeof(pmd->run)); MIF_LOG(DEBUG, "Disconnected, id: %d, role: %s.", pmd->id, - (pmd->role == MEMIF_ROLE_MASTER) ? "master" : "slave"); + (pmd->role == MEMIF_ROLE_SERVER) ? "server" : "client"); } static int @@ -694,15 +694,15 @@ memif_msg_receive(struct memif_control_channel *cc) if (ret < 0) goto exit; } - for (i = 0; i < pmd->run.num_s2m_rings; i++) { + for (i = 0; i < pmd->run.num_c2s_rings; i++) { ret = memif_msg_enq_add_ring(cc->dev, i, - MEMIF_RING_S2M); + MEMIF_RING_C2S); if (ret < 0) goto exit; } - for (i = 0; i < pmd->run.num_m2s_rings; i++) { + for (i = 0; i < pmd->run.num_s2c_rings; i++) { ret = memif_msg_enq_add_ring(cc->dev, i, - MEMIF_RING_M2S); + MEMIF_RING_S2C); if (ret < 0) goto exit; } @@ -963,7 +963,7 @@ memif_socket_init(struct rte_eth_dev *dev, const char *socket_filename) ret = rte_hash_lookup_data(hash, key, (void **)&socket); if (ret < 0) { socket = memif_socket_create(key, - (pmd->role == MEMIF_ROLE_SLAVE) ? 0 : 1); + (pmd->role == MEMIF_ROLE_CLIENT) ? 0 : 1); if (socket == NULL) return -1; ret = rte_hash_add_key_data(hash, key, socket); @@ -1039,7 +1039,7 @@ memif_socket_remove_device(struct rte_eth_dev *dev) } int -memif_connect_master(struct rte_eth_dev *dev) +memif_connect_server(struct rte_eth_dev *dev) { struct pmd_internals *pmd = dev->data->dev_private; @@ -1050,7 +1050,7 @@ memif_connect_master(struct rte_eth_dev *dev) } int -memif_connect_slave(struct rte_eth_dev *dev) +memif_connect_client(struct rte_eth_dev *dev) { int sockfd; int ret; diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h index 5c49ec24ecbc..b9b8a151782f 100644 --- a/drivers/net/memif/memif_socket.h +++ b/drivers/net/memif/memif_socket.h @@ -60,7 +60,8 @@ void memif_disconnect(struct rte_eth_dev *dev); * - On success, zero. * - On failure, a negative value. */ -int memif_connect_master(struct rte_eth_dev *dev); +int memif_connect_server(struct rte_eth_dev *dev); + /** * If device is properly configured, send connection request. @@ -71,7 +72,7 @@ int memif_connect_master(struct rte_eth_dev *dev); * - On success, zero. * - On failure, a negative value. 
*/ -int memif_connect_slave(struct rte_eth_dev *dev); +int memif_connect_client(struct rte_eth_dev *dev); struct memif_socket_dev_list_elt { TAILQ_ENTRY(memif_socket_dev_list_elt) next; diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c index b6da9a8b4046..7f49a71f583d 100644 --- a/drivers/net/memif/rte_eth_memif.c +++ b/drivers/net/memif/rte_eth_memif.c @@ -132,7 +132,7 @@ memif_mp_request_regions(struct rte_eth_dev *dev) struct memif_region *r; struct pmd_process_private *proc_private = dev->process_private; struct pmd_internals *pmd = dev->data->dev_private; - /* in case of zero-copy slave, only request region 0 */ + /* in case of zero-copy client, only request region 0 */ uint16_t max_region_num = (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) ? 1 : ETH_MEMIF_MAX_REGION_NUM; @@ -210,7 +210,7 @@ memif_get_ring(struct pmd_internals *pmd, struct pmd_process_private *proc_priva int ring_size = sizeof(memif_ring_t) + sizeof(memif_desc_t) * (1 << pmd->run.log2_ring_size); - p = (uint8_t *)p + (ring_num + type * pmd->run.num_s2m_rings) * ring_size; + p = (uint8_t *)p + (ring_num + type * pmd->run.num_c2s_rings) * ring_size; return (memif_ring_t *)p; } @@ -245,7 +245,7 @@ memif_get_buffer(struct pmd_process_private *proc_private, memif_desc_t *d) return ((uint8_t *)proc_private->regions[d->region]->addr + d->offset); } -/* Free mbufs received by master */ +/* Free mbufs received by server */ static void memif_free_stored_mbufs(struct pmd_process_private *proc_private, struct memif_queue *mq) { @@ -322,7 +322,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) ring_size = 1 << mq->log2_ring_size; mask = ring_size - 1; - if (type == MEMIF_RING_S2M) { + if (type == MEMIF_RING_C2S) { cur_slot = mq->last_head; last_slot = __atomic_load_n(&ring->head, __ATOMIC_ACQUIRE); } else { @@ -396,7 +396,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } no_free_bufs: - if (type == MEMIF_RING_S2M) { + if (type == MEMIF_RING_C2S) { __atomic_store_n(&ring->tail, cur_slot, __ATOMIC_RELEASE); mq->last_head = cur_slot; } else { @@ -404,7 +404,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } refill: - if (type == MEMIF_RING_M2S) { + if (type == MEMIF_RING_S2C) { head = __atomic_load_n(&ring->head, __ATOMIC_ACQUIRE); n_slots = ring_size - head + mq->last_tail; @@ -499,7 +499,7 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) mq->last_tail = cur_slot; -/* Supply master with new buffers */ +/* Supply server with new buffers */ refill: head = ring->head; n_slots = ring_size - head + mq->last_tail; @@ -571,7 +571,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) n_free = __atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE) - mq->last_tail; mq->last_tail += n_free; - if (type == MEMIF_RING_S2M) { + if (type == MEMIF_RING_C2S) { slot = __atomic_load_n(&ring->head, __ATOMIC_ACQUIRE); n_free = ring_size - slot + mq->last_tail; } else { @@ -586,7 +586,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) saved_slot = slot; d0 = &ring->desc[slot & mask]; dst_off = 0; - dst_len = (type == MEMIF_RING_S2M) ? + dst_len = (type == MEMIF_RING_C2S) ? pmd->run.pkt_buffer_size : d0->length; next_in_chain: @@ -601,7 +601,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) d0->flags |= MEMIF_DESC_FLAG_NEXT; d0 = &ring->desc[slot & mask]; dst_off = 0; - dst_len = (type == MEMIF_RING_S2M) ? + dst_len = (type == MEMIF_RING_C2S) ? 
pmd->run.pkt_buffer_size : d0->length; d0->flags = 0; } else { @@ -636,7 +636,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) } no_free_slots: - if (type == MEMIF_RING_S2M) + if (type == MEMIF_RING_C2S) __atomic_store_n(&ring->head, slot, __ATOMIC_RELEASE); else __atomic_store_n(&ring->tail, slot, __ATOMIC_RELEASE); @@ -666,7 +666,7 @@ memif_tx_one_zc(struct pmd_process_private *proc_private, struct memif_queue *mq next_in_chain: /* store pointer to mbuf to free it later */ mq->buffers[slot & mask] = mbuf; - /* Increment refcnt to make sure the buffer is not freed before master + /* Increment refcnt to make sure the buffer is not freed before server * receives it. (current segment) */ rte_mbuf_refcnt_update(mbuf, 1); @@ -719,10 +719,10 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) ring_size = 1 << mq->log2_ring_size; mask = ring_size - 1; - /* free mbufs received by master */ + /* free mbufs received by server */ memif_free_stored_mbufs(proc_private, mq); - /* ring type always MEMIF_RING_S2M */ + /* ring type always MEMIF_RING_C2S */ slot = ring->head; n_free = ring_size - ring->head + mq->last_tail; @@ -780,7 +780,7 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) no_free_slots: rte_mb(); /* update ring pointers */ - if (type == MEMIF_RING_S2M) + if (type == MEMIF_RING_C2S) ring->head = slot; else ring->tail = slot; @@ -895,7 +895,7 @@ memif_region_init_shm(struct rte_eth_dev *dev, uint8_t has_buffers) } /* calculate buffer offset */ - r->pkt_buffer_offset = (pmd->run.num_s2m_rings + pmd->run.num_m2s_rings) * + r->pkt_buffer_offset = (pmd->run.num_c2s_rings + pmd->run.num_s2c_rings) * (sizeof(memif_ring_t) + sizeof(memif_desc_t) * (1 << pmd->run.log2_ring_size)); @@ -904,8 +904,8 @@ memif_region_init_shm(struct rte_eth_dev *dev, uint8_t has_buffers) if (has_buffers == 1) r->region_size += (uint32_t)(pmd->run.pkt_buffer_size * (1 << pmd->run.log2_ring_size) * - (pmd->run.num_s2m_rings + - pmd->run.num_m2s_rings)); + (pmd->run.num_c2s_rings + + pmd->run.num_s2c_rings)); memset(shm_name, 0, sizeof(char) * ETH_MEMIF_SHM_NAME_SIZE); snprintf(shm_name, ETH_MEMIF_SHM_NAME_SIZE, "memif_region_%d", @@ -990,8 +990,8 @@ memif_init_rings(struct rte_eth_dev *dev) int i, j; uint16_t slot; - for (i = 0; i < pmd->run.num_s2m_rings; i++) { - ring = memif_get_ring(pmd, proc_private, MEMIF_RING_S2M, i); + for (i = 0; i < pmd->run.num_c2s_rings; i++) { + ring = memif_get_ring(pmd, proc_private, MEMIF_RING_C2S, i); __atomic_store_n(&ring->head, 0, __ATOMIC_RELAXED); __atomic_store_n(&ring->tail, 0, __ATOMIC_RELAXED); ring->cookie = MEMIF_COOKIE; @@ -1010,8 +1010,8 @@ memif_init_rings(struct rte_eth_dev *dev) } } - for (i = 0; i < pmd->run.num_m2s_rings; i++) { - ring = memif_get_ring(pmd, proc_private, MEMIF_RING_M2S, i); + for (i = 0; i < pmd->run.num_s2c_rings; i++) { + ring = memif_get_ring(pmd, proc_private, MEMIF_RING_S2C, i); __atomic_store_n(&ring->head, 0, __ATOMIC_RELAXED); __atomic_store_n(&ring->tail, 0, __ATOMIC_RELAXED); ring->cookie = MEMIF_COOKIE; @@ -1021,7 +1021,7 @@ memif_init_rings(struct rte_eth_dev *dev) continue; for (j = 0; j < (1 << pmd->run.log2_ring_size); j++) { - slot = (i + pmd->run.num_s2m_rings) * + slot = (i + pmd->run.num_c2s_rings) * (1 << pmd->run.log2_ring_size) + j; ring->desc[j].region = 0; ring->desc[j].offset = @@ -1032,7 +1032,7 @@ memif_init_rings(struct rte_eth_dev *dev) } } -/* called only by slave */ +/* called only by client */ static int memif_init_queues(struct rte_eth_dev *dev) { @@ 
-1040,12 +1040,12 @@ memif_init_queues(struct rte_eth_dev *dev) struct memif_queue *mq; int i; - for (i = 0; i < pmd->run.num_s2m_rings; i++) { + for (i = 0; i < pmd->run.num_c2s_rings; i++) { mq = dev->data->tx_queues[i]; mq->log2_ring_size = pmd->run.log2_ring_size; /* queues located only in region 0 */ mq->region = 0; - mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2M, i); + mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i); mq->last_head = 0; mq->last_tail = 0; mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK); @@ -1063,12 +1063,12 @@ memif_init_queues(struct rte_eth_dev *dev) } } - for (i = 0; i < pmd->run.num_m2s_rings; i++) { + for (i = 0; i < pmd->run.num_s2c_rings; i++) { mq = dev->data->rx_queues[i]; mq->log2_ring_size = pmd->run.log2_ring_size; /* queues located only in region 0 */ mq->region = 0; - mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_M2S, i); + mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i); mq->last_head = 0; mq->last_tail = 0; mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK); @@ -1140,8 +1140,8 @@ memif_connect(struct rte_eth_dev *dev) } if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - for (i = 0; i < pmd->run.num_s2m_rings; i++) { - mq = (pmd->role == MEMIF_ROLE_SLAVE) ? + for (i = 0; i < pmd->run.num_c2s_rings; i++) { + mq = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->tx_queues[i] : dev->data->rx_queues[i]; ring = memif_get_ring_from_queue(proc_private, mq); if (ring == NULL || ring->cookie != MEMIF_COOKIE) { @@ -1153,11 +1153,11 @@ memif_connect(struct rte_eth_dev *dev) mq->last_head = 0; mq->last_tail = 0; /* enable polling mode */ - if (pmd->role == MEMIF_ROLE_MASTER) + if (pmd->role == MEMIF_ROLE_SERVER) ring->flags = MEMIF_RING_FLAG_MASK_INT; } - for (i = 0; i < pmd->run.num_m2s_rings; i++) { - mq = (pmd->role == MEMIF_ROLE_SLAVE) ? + for (i = 0; i < pmd->run.num_s2c_rings; i++) { + mq = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->rx_queues[i] : dev->data->tx_queues[i]; ring = memif_get_ring_from_queue(proc_private, mq); if (ring == NULL || ring->cookie != MEMIF_COOKIE) { @@ -1169,7 +1169,7 @@ memif_connect(struct rte_eth_dev *dev) mq->last_head = 0; mq->last_tail = 0; /* enable polling mode */ - if (pmd->role == MEMIF_ROLE_SLAVE) + if (pmd->role == MEMIF_ROLE_CLIENT) ring->flags = MEMIF_RING_FLAG_MASK_INT; } @@ -1188,11 +1188,11 @@ memif_dev_start(struct rte_eth_dev *dev) int ret = 0; switch (pmd->role) { - case MEMIF_ROLE_SLAVE: - ret = memif_connect_slave(dev); + case MEMIF_ROLE_CLIENT: + ret = memif_connect_client(dev); break; - case MEMIF_ROLE_MASTER: - ret = memif_connect_master(dev); + case MEMIF_ROLE_SERVER: + ret = memif_connect_server(dev); break; default: MIF_LOG(ERR, "Unknown role: %d.", pmd->role); @@ -1232,17 +1232,17 @@ memif_dev_configure(struct rte_eth_dev *dev) struct pmd_internals *pmd = dev->data->dev_private; /* - * SLAVE - TXQ - * MASTER - RXQ + * CLIENT - TXQ + * SERVER - RXQ */ - pmd->cfg.num_s2m_rings = (pmd->role == MEMIF_ROLE_SLAVE) ? + pmd->cfg.num_c2s_rings = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->nb_tx_queues : dev->data->nb_rx_queues; /* - * SLAVE - RXQ - * MASTER - TXQ + * CLIENT - RXQ + * SERVER - TXQ */ - pmd->cfg.num_m2s_rings = (pmd->role == MEMIF_ROLE_SLAVE) ? + pmd->cfg.num_s2c_rings = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->nb_rx_queues : dev->data->nb_tx_queues; return 0; @@ -1265,7 +1265,7 @@ memif_tx_queue_setup(struct rte_eth_dev *dev, } mq->type = - (pmd->role == MEMIF_ROLE_SLAVE) ? 
MEMIF_RING_S2M : MEMIF_RING_M2S; + (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C; mq->n_pkts = 0; mq->n_bytes = 0; mq->intr_handle.fd = -1; @@ -1293,7 +1293,7 @@ memif_rx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } - mq->type = (pmd->role == MEMIF_ROLE_SLAVE) ? MEMIF_RING_M2S : MEMIF_RING_S2M; + mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S; mq->n_pkts = 0; mq->n_bytes = 0; mq->intr_handle.fd = -1; @@ -1348,8 +1348,8 @@ memif_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) stats->opackets = 0; stats->obytes = 0; - tmp = (pmd->role == MEMIF_ROLE_SLAVE) ? pmd->run.num_s2m_rings : - pmd->run.num_m2s_rings; + tmp = (pmd->role == MEMIF_ROLE_CLIENT) ? pmd->run.num_c2s_rings : + pmd->run.num_s2c_rings; nq = (tmp < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? tmp : RTE_ETHDEV_QUEUE_STAT_CNTRS; @@ -1362,8 +1362,8 @@ memif_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) stats->ibytes += mq->n_bytes; } - tmp = (pmd->role == MEMIF_ROLE_SLAVE) ? pmd->run.num_m2s_rings : - pmd->run.num_s2m_rings; + tmp = (pmd->role == MEMIF_ROLE_CLIENT) ? pmd->run.num_s2c_rings : + pmd->run.num_c2s_rings; nq = (tmp < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? tmp : RTE_ETHDEV_QUEUE_STAT_CNTRS; @@ -1385,14 +1385,14 @@ memif_stats_reset(struct rte_eth_dev *dev) int i; struct memif_queue *mq; - for (i = 0; i < pmd->run.num_s2m_rings; i++) { - mq = (pmd->role == MEMIF_ROLE_SLAVE) ? dev->data->tx_queues[i] : + for (i = 0; i < pmd->run.num_c2s_rings; i++) { + mq = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->tx_queues[i] : dev->data->rx_queues[i]; mq->n_pkts = 0; mq->n_bytes = 0; } - for (i = 0; i < pmd->run.num_m2s_rings; i++) { - mq = (pmd->role == MEMIF_ROLE_SLAVE) ? dev->data->rx_queues[i] : + for (i = 0; i < pmd->run.num_s2c_rings; i++) { + mq = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->rx_queues[i] : dev->data->tx_queues[i]; mq->n_pkts = 0; mq->n_bytes = 0; @@ -1473,8 +1473,8 @@ memif_create(struct rte_vdev_device *vdev, enum memif_role_t role, pmd->flags = flags; pmd->flags |= ETH_MEMIF_FLAG_DISABLED; pmd->role = role; - /* Zero-copy flag irelevant to master. */ - if (pmd->role == MEMIF_ROLE_MASTER) + /* Zero-copy flag irelevant to server. 
*/ + if (pmd->role == MEMIF_ROLE_SERVER) pmd->flags &= ~ETH_MEMIF_FLAG_ZERO_COPY; ret = memif_socket_init(eth_dev, socket_filename); @@ -1487,8 +1487,8 @@ memif_create(struct rte_vdev_device *vdev, enum memif_role_t role, pmd->cfg.log2_ring_size = log2_ring_size; /* set in .dev_configure() */ - pmd->cfg.num_s2m_rings = 0; - pmd->cfg.num_m2s_rings = 0; + pmd->cfg.num_c2s_rings = 0; + pmd->cfg.num_s2c_rings = 0; pmd->cfg.pkt_buffer_size = pkt_buffer_size; rte_spinlock_init(&pmd->cc_lock); @@ -1524,10 +1524,10 @@ memif_set_role(const char *key __rte_unused, const char *value, { enum memif_role_t *role = (enum memif_role_t *)extra_args; - if (strstr(value, "master") != NULL) { - *role = MEMIF_ROLE_MASTER; - } else if (strstr(value, "slave") != NULL) { - *role = MEMIF_ROLE_SLAVE; + if (strstr(value, "server") != NULL) { + *role = MEMIF_ROLE_SERVER; + } else if (strstr(value, "client") != NULL) { + *role = MEMIF_ROLE_CLIENT; } else { MIF_LOG(ERR, "Unknown role: %s.", value); return -EINVAL; @@ -1670,7 +1670,7 @@ rte_pmd_memif_probe(struct rte_vdev_device *vdev) int ret = 0; struct rte_kvargs *kvlist; const char *name = rte_vdev_device_name(vdev); - enum memif_role_t role = MEMIF_ROLE_SLAVE; + enum memif_role_t role = MEMIF_ROLE_CLIENT; memif_interface_id_t id = 0; uint16_t pkt_buffer_size = ETH_MEMIF_DEFAULT_PKT_BUFFER_SIZE; memif_log2_ring_size_t log2_ring_size = ETH_MEMIF_DEFAULT_RING_SIZE; @@ -1798,7 +1798,7 @@ RTE_PMD_REGISTER_VDEV(net_memif, pmd_memif_drv); RTE_PMD_REGISTER_PARAM_STRING(net_memif, ETH_MEMIF_ID_ARG "=" - ETH_MEMIF_ROLE_ARG "=master|slave" + ETH_MEMIF_ROLE_ARG "=server|client" ETH_MEMIF_PKT_BUFFER_SIZE_ARG "=" ETH_MEMIF_RING_SIZE_ARG "=" ETH_MEMIF_SOCKET_ARG "=" diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h index 6f45b7072c69..d45dfdd172b7 100644 --- a/drivers/net/memif/rte_eth_memif.h +++ b/drivers/net/memif/rte_eth_memif.h @@ -36,8 +36,8 @@ extern int memif_logtype; "%s(): " fmt "\n", __func__, ##args) enum memif_role_t { - MEMIF_ROLE_MASTER, - MEMIF_ROLE_SLAVE, + MEMIF_ROLE_SERVER, + MEMIF_ROLE_CLIENT, }; struct memif_region { @@ -64,8 +64,8 @@ struct memif_queue { uint16_t last_tail; /**< last ring tail */ struct rte_mbuf **buffers; - /**< Stored mbufs. Used in zero-copy tx. Slave stores transmitted - * mbufs to free them once master has received them. + /**< Stored mbufs. Used in zero-copy tx. Client stores transmitted + * mbufs to free them once server has received them. */ /* rx/tx info */ @@ -102,15 +102,15 @@ struct pmd_internals { struct { memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */ - uint8_t num_s2m_rings; /**< number of slave to master rings */ - uint8_t num_m2s_rings; /**< number of master to slave rings */ + uint8_t num_c2s_rings; /**< number of client to server rings */ + uint8_t num_s2c_rings; /**< number of server to client rings */ uint16_t pkt_buffer_size; /**< buffer size */ } cfg; /**< Configured parameters (max values) */ struct { memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */ - uint8_t num_s2m_rings; /**< number of slave to master rings */ - uint8_t num_m2s_rings; /**< number of master to slave rings */ + uint8_t num_c2s_rings; /**< number of client to server rings */ + uint8_t num_s2c_rings; /**< number of server to client rings */ uint16_t pkt_buffer_size; /**< buffer size */ } run; /**< Parameters used in active connection */ @@ -137,7 +137,7 @@ void memif_free_regions(struct rte_eth_dev *dev); /** * Finalize connection establishment process. 
Map shared memory file - * (master role), initialize ring queue, set link status up. + * (server role), initialize ring queue, set link status up. * * @param dev * memif device @@ -149,7 +149,7 @@ int memif_connect(struct rte_eth_dev *dev); /** * Create shared memory file and initialize ring queue. - * Only called by slave when establishing connection + * Only called by client when establishing connection * * @param dev * memif device From patchwork Wed Jul 1 19:46:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stephen Hemminger X-Patchwork-Id: 72664 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3127AA0350; Wed, 1 Jul 2020 21:51:08 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 851F11D58D; Wed, 1 Jul 2020 21:47:41 +0200 (CEST) Received: from mail-pg1-f196.google.com (mail-pg1-f196.google.com [209.85.215.196]) by dpdk.org (Postfix) with ESMTP id 603C61D146 for ; Wed, 1 Jul 2020 21:47:37 +0200 (CEST) Received: by mail-pg1-f196.google.com with SMTP id p3so12219834pgh.3 for ; Wed, 01 Jul 2020 12:47:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=networkplumber-org.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=zVdL+PJg+rLzWUUWlLMYqG9QH0f/wv7OCEbMrPXVzAQ=; b=CFNSa+bf5OCRdXF24I7S4KZnM9UDIndjTZ97hNVc4U8/EnFKEU71pcjm3C3iWI7J+R WOJuOWzV+Pyb/H6wUVEJn+jEOkHnR+9xvKVdmBfI9pCxGuJBLUxJ1o/btemkE3y6dnIT fGmuVoY0gVx8QUV64wuL2VwQzLsdA/LQmY0q41pxTogCAJ3U+u2KW672GLAepu+wL8gP Prcoc9+7unSE/maXn2fnJJ4h3eFjUXh0JnnAgyZ+I68JcOHqyw4o1mLeKgzfWj7VV01V fSqoaAXUqwa+aFQFFSn63BCDZod+QGmpww6NdYbtmAXCEppBB+CplEkaKFVL9sGG9N8k fX0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=zVdL+PJg+rLzWUUWlLMYqG9QH0f/wv7OCEbMrPXVzAQ=; b=SAM6Tu2F9no01iyCzwfohxaTBiQH3u4EUIjxz9R05BacVas1cy/v7VqThsCHKHmQ5y TK0uvyt2O5NuOWiANNAmr389dKbD0ET0DJ8jJntiI7o6v2bxpFZHXgyiKZu6lyb8rV2z vpCr+qgBkbhB/BJNLGDefkINEhrPoUDGWaiSSd27W1UzNFNnAlvQK6WnFuaE1GjQQFh1 +Y/BCzdYMQ3XfNfQEFdV/SjXBN+oq6Fpe/eLWxTEwFVhBoIrR/wyMNvAgSEscZUY+ZDe +1BH7JOMzaxabeWQGdu/RtdZTNS2pPtG992IoPQcar5dAY4HmCLUQK8juX1A87fj/qfI gPzA== X-Gm-Message-State: AOAM531m6SQu48lP5wv4ZQ8i1Jg8jI4QqvVeaXVyG4t4dGDT98BTljFq +43utaCkNfdVRNTtuyzo653nce1EB6o= X-Google-Smtp-Source: ABdhPJxO+KoUp26aMwuAh+7gOYc3gr5Twg7vELmw8gZob5VQJ6IME7gLwn//2f6k8tHcMLlv7/n5ig== X-Received: by 2002:a63:7f5d:: with SMTP id p29mr21143490pgn.259.1593632855772; Wed, 01 Jul 2020 12:47:35 -0700 (PDT) Received: from hermes.corp.microsoft.com (204-195-22-127.wavecable.com. 
[204.195.22.127]) by smtp.gmail.com with ESMTPSA id n14sm6501870pgd.78.2020.07.01.12.47.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 12:47:34 -0700 (PDT) From: Stephen Hemminger To: dev@dpdk.org Cc: Stephen Hemminger Date: Wed, 1 Jul 2020 12:46:50 -0700 Message-Id: <20200701194650.10705-28-stephen@networkplumber.org> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200701194650.10705-1-stephen@networkplumber.org> References: <20200604210200.25405-1-stephen@networkplumber.org> <20200701194650.10705-1-stephen@networkplumber.org> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 27/27] vhost: rename SLAVE to CLIENT X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The vhost is a client server architecture so replace the term slave with client. Signed-off-by: Stephen Hemminger --- drivers/vdpa/ifc/ifcvf_vdpa.c | 4 +- drivers/vdpa/mlx5/mlx5_vdpa.c | 4 +- lib/librte_vhost/rte_vhost.h | 16 ++--- lib/librte_vhost/rte_vhost_version.map | 2 +- lib/librte_vhost/vhost.c | 4 +- lib/librte_vhost/vhost.h | 4 +- lib/librte_vhost/vhost_crypto.c | 2 +- lib/librte_vhost/vhost_user.c | 96 +++++++++++++------------- lib/librte_vhost/vhost_user.h | 24 +++---- 9 files changed, 78 insertions(+), 78 deletions(-) diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c index de54dc8aab6c..6c02b99f5e44 100644 --- a/drivers/vdpa/ifc/ifcvf_vdpa.c +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c @@ -1071,8 +1071,8 @@ ifcvf_get_vdpa_features(struct rte_vdpa_device *vdev, uint64_t *features) #define VDPA_SUPPORTED_PROTOCOL_FEATURES \ (1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK | \ - 1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ | \ - 1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD | \ + 1ULL << VHOST_USER_PROTOCOL_F_CLIENT_REQ | \ + 1ULL << VHOST_USER_PROTOCOL_F_CLIENT_SEND_FD | \ 1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER | \ 1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD) static int diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c index dbd36ab0c95e..8fd2de9ba98f 100644 --- a/drivers/vdpa/mlx5/mlx5_vdpa.c +++ b/drivers/vdpa/mlx5/mlx5_vdpa.c @@ -34,8 +34,8 @@ (1ULL << VIRTIO_NET_F_MTU)) #define MLX5_VDPA_PROTOCOL_FEATURES \ - ((1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ) | \ - (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD) | \ + ((1ULL << VHOST_USER_PROTOCOL_F_CLIENT_REQ) | \ + (1ULL << VHOST_USER_PROTOCOL_F_CLIENT_SEND_FD) | \ (1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \ (1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD) | \ (1ULL << VHOST_USER_PROTOCOL_F_MQ) | \ diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h index 8a5c332c83ac..b952e6d47021 100644 --- a/lib/librte_vhost/rte_vhost.h +++ b/lib/librte_vhost/rte_vhost.h @@ -74,8 +74,8 @@ extern "C" { #define VHOST_USER_PROTOCOL_F_NET_MTU 4 #endif -#ifndef VHOST_USER_PROTOCOL_F_SLAVE_REQ -#define VHOST_USER_PROTOCOL_F_SLAVE_REQ 5 +#ifndef VHOST_USER_PROTOCOL_F_CLIENT_REQ +#define VHOST_USER_PROTOCOL_F_CLIENT_REQ 5 #endif #ifndef VHOST_USER_PROTOCOL_F_CRYPTO_SESSION @@ -90,8 +90,8 @@ extern "C" { #define VHOST_USER_PROTOCOL_F_CONFIG 9 #endif -#ifndef VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD -#define VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD 10 +#ifndef VHOST_USER_PROTOCOL_F_CLIENT_SEND_FD +#define VHOST_USER_PROTOCOL_F_CLIENT_SEND_FD 10 #endif #ifndef VHOST_USER_PROTOCOL_F_HOST_NOTIFIER @@ -249,9 +249,9 @@ typedef enum rte_vhost_msg_result 
(*rte_vhost_msg_handle)(int vid, void *msg); * Optional vhost user message handlers. */ struct rte_vhost_user_extern_ops { - /* Called prior to the master message handling. */ + /* Called prior to the server message handling. */ rte_vhost_msg_handle pre_msg_handle; - /* Called after the master message handling. */ + /* Called after the server message handling. */ rte_vhost_msg_handle post_msg_handle; }; @@ -1008,13 +1008,13 @@ rte_vhost_get_vdpa_device(int vid); * @param vid * vhost device ID * @param need_reply - * wait for the master response the status of this operation + * wait for the server response the status of this operation * @return * 0 on success, < 0 on failure */ __rte_experimental int -rte_vhost_slave_config_change(int vid, bool need_reply); +rte_vhost_client_config_change(int vid, bool need_reply); #ifdef __cplusplus } diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map index 86784405a1e8..74973ce9b8ec 100644 --- a/lib/librte_vhost/rte_vhost_version.map +++ b/lib/librte_vhost/rte_vhost_version.map @@ -65,7 +65,7 @@ EXPERIMENTAL { rte_vhost_clr_inflight_desc_packed; rte_vhost_get_vhost_ring_inflight; rte_vhost_get_vring_base_from_inflight; - rte_vhost_slave_config_change; + rte_vhost_client_config_change; rte_vdpa_find_device_by_name; rte_vdpa_get_rte_device; rte_vdpa_get_queue_num; diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c index 0d822d6a3f48..3d111d714bf1 100644 --- a/lib/librte_vhost/vhost.c +++ b/lib/librte_vhost/vhost.c @@ -632,9 +632,9 @@ vhost_new_device(void) vhost_devices[i] = dev; dev->vid = i; dev->flags = VIRTIO_DEV_BUILTIN_VIRTIO_NET; - dev->slave_req_fd = -1; + dev->client_req_fd = -1; dev->postcopy_ufd = -1; - rte_spinlock_init(&dev->slave_req_lock); + rte_spinlock_init(&dev->client_req_lock); return i; } diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h index 0344636997a6..a0902d1535c4 100644 --- a/lib/librte_vhost/vhost.h +++ b/lib/librte_vhost/vhost.h @@ -356,8 +356,8 @@ struct virtio_net { uint32_t max_guest_pages; struct guest_page *guest_pages; - int slave_req_fd; - rte_spinlock_t slave_req_lock; + int client_req_fd; + rte_spinlock_t client_req_lock; int postcopy_ufd; int postcopy_listening; diff --git a/lib/librte_vhost/vhost_crypto.c b/lib/librte_vhost/vhost_crypto.c index 0f9df4059d0b..8e4fc1bf015a 100644 --- a/lib/librte_vhost/vhost_crypto.c +++ b/lib/librte_vhost/vhost_crypto.c @@ -460,7 +460,7 @@ vhost_crypto_msg_post_handler(int vid, void *msg) return RTE_VHOST_MSG_RESULT_ERR; } - switch (vmsg->request.master) { + switch (vmsg->request.server) { case VHOST_USER_CRYPTO_CREATE_SESS: vhost_crypto_create_sess(vcrypto, &vmsg->payload.crypto_session); diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c index 6039a8fdb9cb..56da81cd1e17 100644 --- a/lib/librte_vhost/vhost_user.c +++ b/lib/librte_vhost/vhost_user.c @@ -7,11 +7,11 @@ * The vhost-user protocol connection is an external interface, so it must be * robust against invalid inputs. * - * This is important because the vhost-user master is only one step removed + * This is important because the vhost-user server is only one step removed * from the guest. Malicious guests that have escaped will then launch further - * attacks from the vhost-user master. + * attacks from the vhost-user server. * - * Even in deployments where guests are trusted, a bug in the vhost-user master + * Even in deployments where guests are trusted, a bug in the vhost-user server * can still cause invalid messages to be sent. 
Such messages must not * compromise the stability of the DPDK application by causing crashes, memory * corruption, or other problematic behavior. @@ -78,7 +78,7 @@ static const char *vhost_message_str[VHOST_USER_MAX] = { [VHOST_USER_SET_VRING_ENABLE] = "VHOST_USER_SET_VRING_ENABLE", [VHOST_USER_SEND_RARP] = "VHOST_USER_SEND_RARP", [VHOST_USER_NET_SET_MTU] = "VHOST_USER_NET_SET_MTU", - [VHOST_USER_SET_SLAVE_REQ_FD] = "VHOST_USER_SET_SLAVE_REQ_FD", + [VHOST_USER_SET_CLIENT_REQ_FD] = "VHOST_USER_SET_CLIENT_REQ_FD", [VHOST_USER_IOTLB_MSG] = "VHOST_USER_IOTLB_MSG", [VHOST_USER_CRYPTO_CREATE_SESS] = "VHOST_USER_CRYPTO_CREATE_SESS", [VHOST_USER_CRYPTO_CLOSE_SESS] = "VHOST_USER_CRYPTO_CLOSE_SESS", @@ -114,7 +114,7 @@ validate_msg_fds(struct VhostUserMsg *msg, int expected_fds) VHOST_LOG_CONFIG(ERR, " Expect %d FDs for request %s, received %d\n", expected_fds, - vhost_message_str[msg->request.master], + vhost_message_str[msg->request.server], msg->fd_num); close_msg_fds(msg); @@ -215,9 +215,9 @@ vhost_backend_cleanup(struct virtio_net *dev) dev->inflight_info = NULL; } - if (dev->slave_req_fd >= 0) { - close(dev->slave_req_fd); - dev->slave_req_fd = -1; + if (dev->client_req_fd >= 0) { + close(dev->client_req_fd); + dev->client_req_fd = -1; } if (dev->postcopy_ufd >= 0) { @@ -346,7 +346,7 @@ vhost_user_set_features(struct virtio_net **pdev, struct VhostUserMsg *msg, return RTE_VHOST_MSG_RESULT_OK; /* - * Error out if master tries to change features while device is + * Error out if server tries to change features while device is * in running state. The exception being VHOST_F_LOG_ALL, which * is enabled when the live-migration starts. */ @@ -1235,10 +1235,10 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg, if (validate_msg_fds(&ack_msg, 0) != 0) goto err_mmap; - if (ack_msg.request.master != VHOST_USER_SET_MEM_TABLE) { + if (ack_msg.request.server != VHOST_USER_SET_MEM_TABLE) { VHOST_LOG_CONFIG(ERR, "Bad qemu ack on postcopy set-mem-table (%d)\n", - ack_msg.request.master); + ack_msg.request.server); goto err_mmap; } @@ -2049,14 +2049,14 @@ vhost_user_set_protocol_features(struct virtio_net **pdev, { struct virtio_net *dev = *pdev; uint64_t protocol_features = msg->payload.u64; - uint64_t slave_protocol_features = 0; + uint64_t client_protocol_features = 0; if (validate_msg_fds(msg, 0) != 0) return RTE_VHOST_MSG_RESULT_ERR; rte_vhost_driver_get_protocol_features(dev->ifname, - &slave_protocol_features); - if (protocol_features & ~slave_protocol_features) { + &client_protocol_features); + if (protocol_features & ~client_protocol_features) { VHOST_LOG_CONFIG(ERR, "(%d) received invalid protocol features.\n", dev->vid); @@ -2228,15 +2228,15 @@ vhost_user_set_req_fd(struct virtio_net **pdev, struct VhostUserMsg *msg, if (fd < 0) { VHOST_LOG_CONFIG(ERR, - "Invalid file descriptor for slave channel (%d)\n", + "Invalid file descriptor for client channel (%d)\n", fd); return RTE_VHOST_MSG_RESULT_ERR; } - if (dev->slave_req_fd >= 0) - close(dev->slave_req_fd); + if (dev->client_req_fd >= 0) + close(dev->client_req_fd); - dev->slave_req_fd = fd; + dev->client_req_fd = fd; return RTE_VHOST_MSG_RESULT_OK; } @@ -2472,7 +2472,7 @@ static vhost_message_handler_t vhost_message_handlers[VHOST_USER_MAX] = { [VHOST_USER_SET_VRING_ENABLE] = vhost_user_set_vring_enable, [VHOST_USER_SEND_RARP] = vhost_user_send_rarp, [VHOST_USER_NET_SET_MTU] = vhost_user_net_set_mtu, - [VHOST_USER_SET_SLAVE_REQ_FD] = vhost_user_set_req_fd, + [VHOST_USER_SET_CLIENT_REQ_FD] = vhost_user_set_req_fd, 
[VHOST_USER_IOTLB_MSG] = vhost_user_iotlb_msg, [VHOST_USER_POSTCOPY_ADVISE] = vhost_user_set_postcopy_advise, [VHOST_USER_POSTCOPY_LISTEN] = vhost_user_set_postcopy_listen, @@ -2541,16 +2541,16 @@ send_vhost_reply(int sockfd, struct VhostUserMsg *msg) } static int -send_vhost_slave_message(struct virtio_net *dev, struct VhostUserMsg *msg) +send_vhost_client_message(struct virtio_net *dev, struct VhostUserMsg *msg) { int ret; if (msg->flags & VHOST_USER_NEED_REPLY) - rte_spinlock_lock(&dev->slave_req_lock); + rte_spinlock_lock(&dev->client_req_lock); - ret = send_vhost_message(dev->slave_req_fd, msg); + ret = send_vhost_message(dev->client_req_fd, msg); if (ret < 0 && (msg->flags & VHOST_USER_NEED_REPLY)) - rte_spinlock_unlock(&dev->slave_req_lock); + rte_spinlock_unlock(&dev->client_req_lock); return ret; } @@ -2564,7 +2564,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, { uint32_t vring_idx; - switch (msg->request.master) { + switch (msg->request.server) { case VHOST_USER_SET_VRING_KICK: case VHOST_USER_SET_VRING_CALL: case VHOST_USER_SET_VRING_ERR: @@ -2667,7 +2667,7 @@ vhost_user_msg_handler(int vid, int fd) } ret = 0; - request = msg.request.master; + request = msg.request.server; if (request > VHOST_USER_NONE && request < VHOST_USER_MAX && vhost_message_str[request]) { if (request != VHOST_USER_IOTLB_MSG) @@ -2710,7 +2710,7 @@ vhost_user_msg_handler(int vid, int fd) case VHOST_USER_SET_VRING_ENABLE: case VHOST_USER_SEND_RARP: case VHOST_USER_NET_SET_MTU: - case VHOST_USER_SET_SLAVE_REQ_FD: + case VHOST_USER_SET_CLIENT_REQ_FD: if (!(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED)) { vhost_user_lock_all_queue_pairs(dev); unlock_required = 1; @@ -2850,7 +2850,7 @@ vhost_user_msg_handler(int vid, int fd) return 0; } -static int process_slave_message_reply(struct virtio_net *dev, +static int process_client_message_reply(struct virtio_net *dev, const struct VhostUserMsg *msg) { struct VhostUserMsg msg_reply; @@ -2859,11 +2859,11 @@ static int process_slave_message_reply(struct virtio_net *dev, if ((msg->flags & VHOST_USER_NEED_REPLY) == 0) return 0; - ret = read_vhost_message(dev->slave_req_fd, &msg_reply); + ret = read_vhost_message(dev->client_req_fd, &msg_reply); if (ret <= 0) { if (ret < 0) VHOST_LOG_CONFIG(ERR, - "vhost read slave message reply failed\n"); + "vhost read client message reply failed\n"); else VHOST_LOG_CONFIG(INFO, "vhost peer closed\n"); @@ -2872,10 +2872,10 @@ static int process_slave_message_reply(struct virtio_net *dev, } ret = 0; - if (msg_reply.request.slave != msg->request.slave) { + if (msg_reply.request.client != msg->request.client) { VHOST_LOG_CONFIG(ERR, "Received unexpected msg type (%u), expected %u\n", - msg_reply.request.slave, msg->request.slave); + msg_reply.request.client, msg->request.client); ret = -1; goto out; } @@ -2883,7 +2883,7 @@ static int process_slave_message_reply(struct virtio_net *dev, ret = msg_reply.payload.u64 ? 
-1 : 0; out: - rte_spinlock_unlock(&dev->slave_req_lock); + rte_spinlock_unlock(&dev->client_req_lock); return ret; } @@ -2892,7 +2892,7 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) { int ret; struct VhostUserMsg msg = { - .request.slave = VHOST_USER_SLAVE_IOTLB_MSG, + .request.client = VHOST_USER_CLIENT_IOTLB_MSG, .flags = VHOST_USER_VERSION, .size = sizeof(msg.payload.iotlb), .payload.iotlb = { @@ -2902,7 +2902,7 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) }, }; - ret = send_vhost_message(dev->slave_req_fd, &msg); + ret = send_vhost_message(dev->client_req_fd, &msg); if (ret < 0) { VHOST_LOG_CONFIG(ERR, "Failed to send IOTLB miss message (%d)\n", @@ -2914,11 +2914,11 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) } static int -vhost_user_slave_config_change(struct virtio_net *dev, bool need_reply) +vhost_user_client_config_change(struct virtio_net *dev, bool need_reply) { int ret; struct VhostUserMsg msg = { - .request.slave = VHOST_USER_SLAVE_CONFIG_CHANGE_MSG, + .request.client = VHOST_USER_CLIENT_CONFIG_CHANGE_MSG, .flags = VHOST_USER_VERSION, .size = 0, }; @@ -2926,7 +2926,7 @@ vhost_user_slave_config_change(struct virtio_net *dev, bool need_reply) if (need_reply) msg.flags |= VHOST_USER_NEED_REPLY; - ret = send_vhost_slave_message(dev, &msg); + ret = send_vhost_client_message(dev, &msg); if (ret < 0) { VHOST_LOG_CONFIG(ERR, "Failed to send config change (%d)\n", @@ -2934,11 +2934,11 @@ vhost_user_slave_config_change(struct virtio_net *dev, bool need_reply) return ret; } - return process_slave_message_reply(dev, &msg); + return process_client_message_reply(dev, &msg); } int -rte_vhost_slave_config_change(int vid, bool need_reply) +rte_vhost_client_config_change(int vid, bool need_reply) { struct virtio_net *dev; @@ -2946,17 +2946,17 @@ rte_vhost_slave_config_change(int vid, bool need_reply) if (!dev) return -ENODEV; - return vhost_user_slave_config_change(dev, need_reply); + return vhost_user_client_config_change(dev, need_reply); } -static int vhost_user_slave_set_vring_host_notifier(struct virtio_net *dev, +static int vhost_user_client_set_vring_host_notifier(struct virtio_net *dev, int index, int fd, uint64_t offset, uint64_t size) { int ret; struct VhostUserMsg msg = { - .request.slave = VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG, + .request.client = VHOST_USER_CLIENT_VRING_HOST_NOTIFIER_MSG, .flags = VHOST_USER_VERSION | VHOST_USER_NEED_REPLY, .size = sizeof(msg.payload.area), .payload.area = { @@ -2973,14 +2973,14 @@ static int vhost_user_slave_set_vring_host_notifier(struct virtio_net *dev, msg.fd_num = 1; } - ret = send_vhost_slave_message(dev, &msg); + ret = send_vhost_client_message(dev, &msg); if (ret < 0) { VHOST_LOG_CONFIG(ERR, "Failed to set host notifier (%d)\n", ret); return ret; } - return process_slave_message_reply(dev, &msg); + return process_client_message_reply(dev, &msg); } int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable) @@ -3002,9 +3002,9 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable) if (!(dev->features & (1ULL << VIRTIO_F_VERSION_1)) || !(dev->features & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES)) || !(dev->protocol_features & - (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ)) || + (1ULL << VHOST_USER_PROTOCOL_F_CLIENT_REQ)) || !(dev->protocol_features & - (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD)) || + (1ULL << VHOST_USER_PROTOCOL_F_CLIENT_SEND_FD)) || !(dev->protocol_features & (1ULL << 
VHOST_USER_PROTOCOL_F_HOST_NOTIFIER))) return -ENOTSUP; @@ -3034,7 +3034,7 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable) goto disable; } - if (vhost_user_slave_set_vring_host_notifier(dev, i, + if (vhost_user_client_set_vring_host_notifier(dev, i, vfio_device_fd, offset, size) < 0) { ret = -EFAULT; goto disable; @@ -3043,7 +3043,7 @@ int rte_vhost_host_notifier_ctrl(int vid, uint16_t qid, bool enable) } else { disable: for (i = q_start; i <= q_last; i++) { - vhost_user_slave_set_vring_host_notifier(dev, i, -1, + vhost_user_client_set_vring_host_notifier(dev, i, -1, 0, 0); } } diff --git a/lib/librte_vhost/vhost_user.h b/lib/librte_vhost/vhost_user.h index 1f65efa4a935..924da0dc17dd 100644 --- a/lib/librte_vhost/vhost_user.h +++ b/lib/librte_vhost/vhost_user.h @@ -19,9 +19,9 @@ (1ULL << VHOST_USER_PROTOCOL_F_RARP) | \ (1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK) | \ (1ULL << VHOST_USER_PROTOCOL_F_NET_MTU) | \ - (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ) | \ + (1ULL << VHOST_USER_PROTOCOL_F_CLIENT_REQ) | \ (1ULL << VHOST_USER_PROTOCOL_F_CRYPTO_SESSION) | \ - (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD) | \ + (1ULL << VHOST_USER_PROTOCOL_F_CLIENT_SEND_FD) | \ (1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \ (1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT)) @@ -47,7 +47,7 @@ typedef enum VhostUserRequest { VHOST_USER_SET_VRING_ENABLE = 18, VHOST_USER_SEND_RARP = 19, VHOST_USER_NET_SET_MTU = 20, - VHOST_USER_SET_SLAVE_REQ_FD = 21, + VHOST_USER_SET_CLIENT_REQ_FD = 21, VHOST_USER_IOTLB_MSG = 22, VHOST_USER_CRYPTO_CREATE_SESS = 26, VHOST_USER_CRYPTO_CLOSE_SESS = 27, @@ -59,13 +59,13 @@ typedef enum VhostUserRequest { VHOST_USER_MAX = 33 } VhostUserRequest; -typedef enum VhostUserSlaveRequest { - VHOST_USER_SLAVE_NONE = 0, - VHOST_USER_SLAVE_IOTLB_MSG = 1, - VHOST_USER_SLAVE_CONFIG_CHANGE_MSG = 2, - VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG = 3, - VHOST_USER_SLAVE_MAX -} VhostUserSlaveRequest; +typedef enum VhostUserClientRequest { + VHOST_USER_CLIENT_NONE = 0, + VHOST_USER_CLIENT_IOTLB_MSG = 1, + VHOST_USER_CLIENT_CONFIG_CHANGE_MSG = 2, + VHOST_USER_CLIENT_VRING_HOST_NOTIFIER_MSG = 3, + VHOST_USER_CLIENT_MAX +} VhostUserClientRequest; typedef struct VhostUserMemoryRegion { uint64_t guest_phys_addr; @@ -124,8 +124,8 @@ typedef struct VhostUserInflight { typedef struct VhostUserMsg { union { - uint32_t master; /* a VhostUserRequest value */ - uint32_t slave; /* a VhostUserSlaveRequest value*/ + uint32_t server; /* a VhostUserRequest value */ + uint32_t client; /* a VhostUserClientRequest value*/ } request; #define VHOST_USER_VERSION_MASK 0x3