From patchwork Fri Jun 5 22:58:05 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 70909
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 5 Jun 2020 15:58:05 -0700
Message-Id: <20200605225811.26342-21-stephen@networkplumber.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200605225811.26342-1-stephen@networkplumber.org>
References: <20200604210200.25405-1-stephen@networkplumber.org> <20200605225811.26342-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [RFC v2 20/26] app/test: replace references to master/slave

Use initial and worker when referring to lcores.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 app/test/autotest_test_funcs.py  |   2 +-
 app/test/test.c                  |   2 +-
 app/test/test_atomic.c           |  26 ++++----
 app/test/test_barrier.c          |   2 +-
 app/test/test_cryptodev.c        |  16 ++---
 app/test/test_distributor.c      |   8 +--
 app/test/test_distributor_perf.c |  10 +--
 app/test/test_eal_flags.c        |  32 +++++-----
 app/test/test_efd.c              |   2 +-
 app/test/test_efd_perf.c         |   2 +-
 app/test/test_func_reentrancy.c  |  20 +++---
 app/test/test_hash_multiwriter.c |   4 +-
 app/test/test_hash_readwrite.c   |  38 +++++------
 app/test/test_kni.c              |  16 ++---
 app/test/test_malloc.c           |  12 ++--
 app/test/test_mbuf.c             |  36 +++++------
 app/test/test_mcslock.c          |  28 ++++----
 app/test/test_mempool_perf.c     |  10 +--
 app/test/test_mp_secondary.c     |   2 +-
 app/test/test_pdump.c            |   2 +-
 app/test/test_per_lcore.c        |  14 ++--
 app/test/test_pmd_perf.c         |  20 +++---
 app/test/test_rcu_qsbr.c         |   2 +-
 app/test/test_rcu_qsbr_perf.c    |   2 +-
 app/test/test_ring_perf.c        |  14 ++--
 app/test/test_ring_stress_impl.h |  10 +--
 app/test/test_rwlock.c           |  28 ++++----
 app/test/test_service_cores.c    |  10 +--
 app/test/test_spinlock.c         |  34 +++++-----
 app/test/test_stack.c            |   2 +-
 app/test/test_stack_perf.c       |   6 +-
 app/test/test_ticketlock.c       |  36 +++++------
 app/test/test_timer.c            | 106 +++++++++++++++----------------
 app/test/test_timer_racecond.c   |  27 ++++----
 app/test/test_timer_secondary.c  |   2 +-
 app/test/test_trace_perf.c       |   4 +-
 36 files changed, 294 insertions(+), 293 deletions(-)

diff --git a/app/test/autotest_test_funcs.py b/app/test/autotest_test_funcs.py
index 26688b71323e..23d530f1e18b 100644
--- a/app/test/autotest_test_funcs.py
+++ b/app/test/autotest_test_funcs.py
@@ -102,7 +102,7 @@ def rwlock_autotest(child, test_name):
     index = child.expect(["Test OK",
                           "Test Failed",
                           "Hello from core ([0-9]*) !",
-                          "Global write lock taken on master "
+                          "Global write lock taken on initial "
                           "core ([0-9]*)",
                           pexpect.TIMEOUT], timeout=10)
     # ok
diff --git a/app/test/test.c b/app/test/test.c
index 94d26ab1f67c..a9fce18ca73e 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -58,7 +58,7 @@ do_recursive_call(void)
 #endif
 #endif
 		{ "test_missing_c_flag", no_action },
-		{ "test_master_lcore_flag", no_action },
+		{ "test_initial_lcore_flag", no_action },
 		{ "test_invalid_n_flag", no_action },
 		{ "test_no_hpet_flag", no_action },
 		{ "test_whitelist_flag", no_action },
diff --git a/app/test/test_atomic.c b/app/test/test_atomic.c
index 214452e54399..88c3639d759d 100644
--- a/app/test/test_atomic.c
+++ b/app/test/test_atomic.c
@@ -456,7 +456,7 @@ test_atomic(void)

 	printf("usual inc/dec/add/sub functions\n");

-
rte_eal_mp_remote_launch(test_atomic_usual, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_usual, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_set(&synchro, 0); @@ -482,7 +482,7 @@ test_atomic(void) rte_atomic32_set(&a32, 0); rte_atomic16_set(&a16, 0); rte_atomic64_set(&count, 0); - rte_eal_mp_remote_launch(test_atomic_tas, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_tas, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_set(&synchro, 0); @@ -499,7 +499,7 @@ test_atomic(void) rte_atomic16_set(&a16, 0); rte_atomic64_set(&count, 0); rte_eal_mp_remote_launch(test_atomic_addsub_and_return, NULL, - SKIP_MASTER); + SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_set(&synchro, 0); @@ -510,8 +510,8 @@ test_atomic(void) } /* - * Set a64, a32 and a16 with the same value of minus "number of slave - * lcores", launch all slave lcores to atomically increase by one and + * Set a64, a32 and a16 with the same value of minus "number of worker + * lcores", launch all worker lcores to atomically increase by one and * test them respectively. * Each lcore should have only one chance to increase a64 by one and * then check if it is equal to 0, but there should be only one lcore @@ -519,7 +519,7 @@ test_atomic(void) * Then a variable of "count", initialized to zero, is increased by * one if a64, a32 or a16 is 0 after being increased and tested * atomically. - * We can check if "count" is finally equal to 3 to see if all slave + * We can check if "count" is finally equal to 3 to see if all worker * lcores performed "atomic inc and test" right. */ printf("inc and test\n"); @@ -533,7 +533,7 @@ test_atomic(void) rte_atomic64_set(&a64, (int64_t)(1 - (int64_t)rte_lcore_count())); rte_atomic32_set(&a32, (int32_t)(1 - (int32_t)rte_lcore_count())); rte_atomic16_set(&a16, (int16_t)(1 - (int16_t)rte_lcore_count())); - rte_eal_mp_remote_launch(test_atomic_inc_and_test, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_inc_and_test, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); @@ -544,7 +544,7 @@ test_atomic(void) } /* - * Same as above, but this time we set the values to "number of slave + * Same as above, but this time we set the values to "number of worker * lcores", and decrement instead of increment. */ printf("dec and test\n"); @@ -555,7 +555,7 @@ test_atomic(void) rte_atomic64_set(&a64, (int64_t)(rte_lcore_count() - 1)); rte_atomic32_set(&a32, (int32_t)(rte_lcore_count() - 1)); rte_atomic16_set(&a16, (int16_t)(rte_lcore_count() - 1)); - rte_eal_mp_remote_launch(test_atomic_dec_and_test, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_dec_and_test, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); @@ -569,10 +569,10 @@ test_atomic(void) /* * This case tests the functionality of rte_atomic128_cmp_exchange * API. It calls rte_atomic128_cmp_exchange with four kinds of memory - * models successively on each slave core. Once each 128-bit atomic + * models successively on each worker core. Once each 128-bit atomic * compare and swap operation is successful, it updates the global * 128-bit counter by 2 for the first 64-bit and 1 for the second - * 64-bit. Each slave core iterates this test N times. + * 64-bit. Each worker core iterates this test N times. 
* At the end of test, verify whether the first 64-bits of the 128-bit * counter and the second 64bits is differ by the total iterations. If * it is, the test passes. @@ -585,7 +585,7 @@ test_atomic(void) count128.val[1] = 0; rte_eal_mp_remote_launch(test_atomic128_cmp_exchange, NULL, - SKIP_MASTER); + SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); @@ -619,7 +619,7 @@ test_atomic(void) token64 = ((uint64_t)get_crc8(&t.u8[0], sizeof(token64) - 1) << 56) | (t.u64 & 0x00ffffffffffffff); - rte_eal_mp_remote_launch(test_atomic_exchange, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(test_atomic_exchange, NULL, SKIP_INITIAL); rte_atomic32_set(&synchro, 1); rte_eal_mp_wait_lcore(); rte_atomic32_clear(&synchro); diff --git a/app/test/test_barrier.c b/app/test/test_barrier.c index 43b5f6232c6d..a27a4b0ae06f 100644 --- a/app/test/test_barrier.c +++ b/app/test/test_barrier.c @@ -236,7 +236,7 @@ plock_test(uint64_t iter, enum plock_use_type utype) /* test phase - start and wait for completion on each active lcore */ - rte_eal_mp_remote_launch(plock_test1_lcore, lpt, CALL_MASTER); + rte_eal_mp_remote_launch(plock_test1_lcore, lpt, CALL_INITIAL); rte_eal_mp_wait_lcore(); /* validation phase - make sure that shared and local data match */ diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c index 8f631468b740..2ebc5c6ac806 100644 --- a/app/test/test_cryptodev.c +++ b/app/test/test_cryptodev.c @@ -474,29 +474,29 @@ testsuite_setup(void) char vdev_args[VDEV_ARGS_SIZE] = {""}; char temp_str[VDEV_ARGS_SIZE] = {"mode=multi-core," "ordering=enable,name=cryptodev_test_scheduler,corelist="}; - uint16_t slave_core_count = 0; + uint16_t worker_core_count = 0; uint16_t socket_id = 0; if (gbl_driver_id == rte_cryptodev_driver_id_get( RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD))) { - /* Identify the Slave Cores - * Use 2 slave cores for the device args + /* Identify the Worker Cores + * Use 2 worker cores for the device args */ - RTE_LCORE_FOREACH_SLAVE(i) { - if (slave_core_count > 1) + RTE_LCORE_FOREACH_WORKER(i) { + if (worker_core_count > 1) break; snprintf(vdev_args, sizeof(vdev_args), "%s%d", temp_str, i); strcpy(temp_str, vdev_args); strlcat(temp_str, ";", sizeof(temp_str)); - slave_core_count++; + worker_core_count++; socket_id = rte_lcore_to_socket_id(i); } - if (slave_core_count != 2) { + if (worker_core_count != 2) { RTE_LOG(ERR, USER1, "Cryptodev scheduler test require at least " - "two slave cores to run. " + "two worker cores to run. 
" "Please use the correct coremask.\n"); return TEST_FAILED; } diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c index ba1f81cf8d19..ebc415953872 100644 --- a/app/test/test_distributor.c +++ b/app/test/test_distributor.c @@ -654,13 +654,13 @@ test_distributor(void) sizeof(worker_params.name)); rte_eal_mp_remote_launch(handle_work, - &worker_params, SKIP_MASTER); + &worker_params, SKIP_INITIAL); if (sanity_test(&worker_params, p) < 0) goto err; quit_workers(&worker_params, p); rte_eal_mp_remote_launch(handle_work_with_free_mbufs, - &worker_params, SKIP_MASTER); + &worker_params, SKIP_INITIAL); if (sanity_test_with_mbuf_alloc(&worker_params, p) < 0) goto err; quit_workers(&worker_params, p); @@ -668,7 +668,7 @@ test_distributor(void) if (rte_lcore_count() > 2) { rte_eal_mp_remote_launch(handle_work_for_shutdown_test, &worker_params, - SKIP_MASTER); + SKIP_INITIAL); if (sanity_test_with_worker_shutdown(&worker_params, p) < 0) goto err; @@ -676,7 +676,7 @@ test_distributor(void) rte_eal_mp_remote_launch(handle_work_for_shutdown_test, &worker_params, - SKIP_MASTER); + SKIP_INITIAL); if (test_flush_with_worker_shutdown(&worker_params, p) < 0) goto err; diff --git a/app/test/test_distributor_perf.c b/app/test/test_distributor_perf.c index f153bcf9bd87..06fb2dbb183b 100644 --- a/app/test/test_distributor_perf.c +++ b/app/test/test_distributor_perf.c @@ -54,10 +54,10 @@ time_cache_line_switch(void) /* allocate a full cache line for data, we use only first byte of it */ uint64_t data[RTE_CACHE_LINE_SIZE*3 / sizeof(uint64_t)]; - unsigned i, slaveid = rte_get_next_lcore(rte_lcore_id(), 0, 0); + unsigned i, workerid = rte_get_next_lcore(rte_lcore_id(), 0, 0); volatile uint64_t *pdata = &data[0]; *pdata = 1; - rte_eal_remote_launch((lcore_function_t *)flip_bit, &data[0], slaveid); + rte_eal_remote_launch((lcore_function_t *)flip_bit, &data[0], workerid); while (*pdata) rte_pause(); @@ -72,7 +72,7 @@ time_cache_line_switch(void) while (*pdata) rte_pause(); *pdata = 2; - rte_eal_wait_lcore(slaveid); + rte_eal_wait_lcore(workerid); printf("==== Cache line switch test ===\n"); printf("Time for %u iterations = %"PRIu64" ticks\n", (1<single_read = end / i; for (n = 0; n < NUM_TEST; n++) { - unsigned int tot_slave_lcore = rte_lcore_count() - 1; - if (tot_slave_lcore < core_cnt[n] * 2) + unsigned int tot_worker_lcore = rte_lcore_count() - 1; + if (tot_worker_lcore < core_cnt[n] * 2) goto finish; rte_atomic64_clear(&greads); @@ -467,7 +467,7 @@ test_hash_readwrite_perf(struct perf *perf_results, int use_htm, for (i = 0; i < core_cnt[n]; i++) rte_eal_remote_launch(test_rw_reader, (void *)(uintptr_t)read_cnt, - slave_core_ids[i]); + worker_core_ids[i]); rte_eal_mp_wait_lcore(); @@ -476,7 +476,7 @@ test_hash_readwrite_perf(struct perf *perf_results, int use_htm, for (; i < core_cnt[n] * 2; i++) rte_eal_remote_launch(test_rw_writer, (void *)((uintptr_t)start_coreid), - slave_core_ids[i]); + worker_core_ids[i]); rte_eal_mp_wait_lcore(); @@ -521,20 +521,20 @@ test_hash_readwrite_perf(struct perf *perf_results, int use_htm, for (i = core_cnt[n]; i < core_cnt[n] * 2; i++) rte_eal_remote_launch(test_rw_writer, (void *)((uintptr_t)start_coreid), - slave_core_ids[i]); + worker_core_ids[i]); for (i = 0; i < core_cnt[n]; i++) rte_eal_remote_launch(test_rw_reader, (void *)(uintptr_t)read_cnt, - slave_core_ids[i]); + worker_core_ids[i]); } else { for (i = 0; i < core_cnt[n]; i++) rte_eal_remote_launch(test_rw_reader, (void *)(uintptr_t)read_cnt, - slave_core_ids[i]); + worker_core_ids[i]); for (; i < 
core_cnt[n] * 2; i++) rte_eal_remote_launch(test_rw_writer, (void *)((uintptr_t)start_coreid), - slave_core_ids[i]); + worker_core_ids[i]); } rte_eal_mp_wait_lcore(); @@ -626,8 +626,8 @@ test_hash_rw_perf_main(void) return TEST_SKIPPED; } - RTE_LCORE_FOREACH_SLAVE(core_id) { - slave_core_ids[i] = core_id; + RTE_LCORE_FOREACH_WORKER(core_id) { + worker_core_ids[i] = core_id; i++; } @@ -710,8 +710,8 @@ test_hash_rw_func_main(void) return TEST_SKIPPED; } - RTE_LCORE_FOREACH_SLAVE(core_id) { - slave_core_ids[i] = core_id; + RTE_LCORE_FOREACH_WORKER(core_id) { + worker_core_ids[i] = core_id; i++; } diff --git a/app/test/test_kni.c b/app/test/test_kni.c index e47ab36e0231..dfa28a2a2999 100644 --- a/app/test/test_kni.c +++ b/app/test/test_kni.c @@ -85,7 +85,7 @@ static struct rte_kni_ops kni_ops = { .config_promiscusity = NULL, }; -static unsigned lcore_master, lcore_ingress, lcore_egress; +static unsigned lcore_initial, lcore_ingress, lcore_egress; static struct rte_kni *test_kni_ctx; static struct test_kni_stats stats; @@ -202,7 +202,7 @@ test_kni_link_change(void) * supported by KNI kernel module. The ingress lcore will allocate mbufs and * transmit them to kernel space; while the egress lcore will receive the mbufs * from kernel space and free them. - * On the master lcore, several commands will be run to check handling the + * On the initial lcore, several commands will be run to check handling the * kernel requests. And it will finally set the flag to exit the KNI * transmitting/receiving to/from the kernel space. * @@ -217,7 +217,7 @@ test_kni_loop(__rte_unused void *arg) const unsigned lcore_id = rte_lcore_id(); struct rte_mbuf *pkts_burst[PKT_BURST_SZ]; - if (lcore_id == lcore_master) { + if (lcore_id == lcore_initial) { rte_delay_ms(KNI_TIMEOUT_MS); /* tests of handling kernel request */ if (system(IFCONFIG TEST_KNI_PORT" up") == -1) @@ -276,12 +276,12 @@ test_kni_allocate_lcores(void) { unsigned i, count = 0; - lcore_master = rte_get_master_lcore(); - printf("master lcore: %u\n", lcore_master); + lcore_initial = rte_get_initial_lcore(); + printf("initial lcore: %u\n", lcore_initial); for (i = 0; i < RTE_MAX_LCORE; i++) { if (count >=2 ) break; - if (rte_lcore_is_enabled(i) && i != lcore_master) { + if (rte_lcore_is_enabled(i) && i != lcore_initial) { count ++; if (count == 1) lcore_ingress = i; @@ -487,8 +487,8 @@ test_kni_processing(uint16_t port_id, struct rte_mempool *mp) if (ret != 0) goto fail_kni; - rte_eal_mp_remote_launch(test_kni_loop, NULL, CALL_MASTER); - RTE_LCORE_FOREACH_SLAVE(i) { + rte_eal_mp_remote_launch(test_kni_loop, NULL, CALL_INITIAL); + RTE_LCORE_FOREACH_WORKER(i) { if (rte_eal_wait_lcore(i) < 0) { ret = -1; goto fail_kni; diff --git a/app/test/test_malloc.c b/app/test/test_malloc.c index 71b3cfdde5cf..758e6194a852 100644 --- a/app/test/test_malloc.c +++ b/app/test/test_malloc.c @@ -1007,11 +1007,11 @@ test_malloc(void) else printf("test_realloc() passed\n"); /*----------------------------*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(test_align_overlap_per_lcore, NULL, lcore_id); } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) ret = -1; } @@ -1022,11 +1022,11 @@ test_malloc(void) else printf("test_align_overlap_per_lcore() passed\n"); /*----------------------------*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(test_reordered_free_per_lcore, NULL, lcore_id); } - 
RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) ret = -1; } @@ -1037,11 +1037,11 @@ test_malloc(void) else printf("test_reordered_free_per_lcore() passed\n"); /*----------------------------*/ - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { rte_eal_remote_launch(test_random_alloc_free, NULL, lcore_id); } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) ret = -1; } diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c index 71bdab6917b1..f5f092b9478d 100644 --- a/app/test/test_mbuf.c +++ b/app/test/test_mbuf.c @@ -72,7 +72,7 @@ #ifdef RTE_MBUF_REFCNT_ATOMIC -static volatile uint32_t refcnt_stop_slaves; +static volatile uint32_t refcnt_stop_workers; static unsigned refcnt_lcore[RTE_MAX_LCORE]; #endif @@ -1000,7 +1000,7 @@ test_pktmbuf_free_segment(struct rte_mempool *pktmbuf_pool) #ifdef RTE_MBUF_REFCNT_ATOMIC static int -test_refcnt_slave(void *arg) +test_refcnt_worker(void *arg) { unsigned lcore, free; void *mp = 0; @@ -1010,7 +1010,7 @@ test_refcnt_slave(void *arg) printf("%s started at lcore %u\n", __func__, lcore); free = 0; - while (refcnt_stop_slaves == 0) { + while (refcnt_stop_workers == 0) { if (rte_ring_dequeue(refcnt_mbuf_ring, &mp) == 0) { free++; rte_pktmbuf_free(mp); @@ -1038,7 +1038,7 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter, /* For each mbuf in the pool: * - allocate mbuf, * - increment it's reference up to N+1, - * - enqueue it N times into the ring for slave cores to free. + * - enqueue it N times into the ring for worker cores to free. */ for (i = 0, n = rte_mempool_avail_count(refcnt_pool); i != n && (m = rte_pktmbuf_alloc(refcnt_pool)) != NULL; @@ -1062,7 +1062,7 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter, rte_panic("(lcore=%u, iter=%u): was able to allocate only " "%u from %u mbufs\n", lcore, iter, i, n); - /* wait till slave lcores will consume all mbufs */ + /* wait till worker lcores will consume all mbufs */ while (!rte_ring_empty(refcnt_mbuf_ring)) ; @@ -1083,8 +1083,8 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter, } static int -test_refcnt_master(struct rte_mempool *refcnt_pool, - struct rte_ring *refcnt_mbuf_ring) +test_refcnt_main(struct rte_mempool *refcnt_pool, + struct rte_ring *refcnt_mbuf_ring) { unsigned i, lcore; @@ -1094,7 +1094,7 @@ test_refcnt_master(struct rte_mempool *refcnt_pool, for (i = 0; i != REFCNT_MAX_ITER; i++) test_refcnt_iter(lcore, i, refcnt_pool, refcnt_mbuf_ring); - refcnt_stop_slaves = 1; + refcnt_stop_workers = 1; rte_wmb(); printf("%s finished at lcore %u\n", __func__, lcore); @@ -1107,7 +1107,7 @@ static int test_refcnt_mbuf(void) { #ifdef RTE_MBUF_REFCNT_ATOMIC - unsigned int master, slave, tref; + unsigned int initial, worker, tref; int ret = -1; struct rte_mempool *refcnt_pool = NULL; struct rte_ring *refcnt_mbuf_ring = NULL; @@ -1139,26 +1139,26 @@ test_refcnt_mbuf(void) goto err; } - refcnt_stop_slaves = 0; + refcnt_stop_workers = 0; memset(refcnt_lcore, 0, sizeof (refcnt_lcore)); - rte_eal_mp_remote_launch(test_refcnt_slave, refcnt_mbuf_ring, - SKIP_MASTER); + rte_eal_mp_remote_launch(test_refcnt_worker, refcnt_mbuf_ring, + SKIP_INITIAL); - test_refcnt_master(refcnt_pool, refcnt_mbuf_ring); + test_refcnt_main(refcnt_pool, refcnt_mbuf_ring); rte_eal_mp_wait_lcore(); /* check that we porcessed all references */ tref = 0; - master = rte_get_master_lcore(); + initial = rte_get_initial_lcore(); - RTE_LCORE_FOREACH_SLAVE(slave) - 
tref += refcnt_lcore[slave]; + RTE_LCORE_FOREACH_WORKER(worker) + tref += refcnt_lcore[worker]; - if (tref != refcnt_lcore[master]) + if (tref != refcnt_lcore[initial]) rte_panic("referenced mbufs: %u, freed mbufs: %u\n", - tref, refcnt_lcore[master]); + tref, refcnt_lcore[initial]); rte_mempool_dump(stdout, refcnt_pool); rte_ring_dump(stdout, refcnt_mbuf_ring); diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c index ddccaafa9242..9cd9da19dd72 100644 --- a/app/test/test_mcslock.c +++ b/app/test/test_mcslock.c @@ -28,7 +28,7 @@ * These tests are derived from spin lock test cases. * * - The functional test takes all of these locks and launches the - * ''test_mcslock_per_core()'' function on each core (except the master). + * ''test_mcslock_per_core()'' function on each core (except the initial). * * - The function takes the global lock, display something, then releases * the global lock on each core. @@ -123,9 +123,9 @@ test_mcslock_perf(void) printf("\nTest with lock on %u cores...\n", (rte_lcore_count())); rte_atomic32_set(&synchro, 0); - rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_MASTER); + rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_INITIAL); - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(&lock); @@ -154,8 +154,8 @@ test_mcslock_try(__rte_unused void *arg) rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me); rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me); - /* Locked ml_try in the master lcore, so it should fail - * when trying to lock it in the slave lcore. + /* Locked ml_try in the initial lcore, so it should fail + * when trying to lock it in the worker lcore. */ if (rte_mcslock_trylock(&p_ml_try, &ml_try_me) == 0) { rte_mcslock_lock(&p_ml, &ml_me); @@ -185,20 +185,20 @@ test_mcslock(void) * Test mcs lock & unlock on each core */ - /* slave cores should be waiting: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be waiting: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } rte_mcslock_lock(&p_ml, &ml_me); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_mcslock_per_core, NULL, i); } - /* slave cores should be busy: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be busy: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } @@ -210,19 +210,19 @@ test_mcslock(void) /* * Test if it could return immediately from try-locking a locked object. * Here it will lock the mcs lock object first, then launch all the - * slave lcores to trylock the same mcs lock object. - * All the slave lcores should give up try-locking a locked object and + * worker lcores to trylock the same mcs lock object. + * All the worker lcores should give up try-locking a locked object and * return immediately, and then increase the "count" initialized with * zero by one per times. * We can check if the "count" is finally equal to the number of all - * slave lcores to see if the behavior of try-locking a locked + * worker lcores to see if the behavior of try-locking a locked * mcslock object is correct. 
*/ if (rte_mcslock_trylock(&p_ml_try, &ml_try_me) == 0) return -1; count = 0; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_mcslock_try, NULL, i); } rte_eal_mp_wait_lcore(); diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c index 60bda8aadbe8..383f3928f2c1 100644 --- a/app/test/test_mempool_perf.c +++ b/app/test/test_mempool_perf.c @@ -143,8 +143,8 @@ per_lcore_mempool_test(void *arg) stats[lcore_id].enq_count = 0; - /* wait synchro for slaves */ - if (lcore_id != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore_id != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0); start_cycles = rte_get_timer_cycles(); @@ -214,7 +214,7 @@ launch_cores(struct rte_mempool *mp, unsigned int cores) return -1; } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (cores == 1) break; cores--; @@ -222,13 +222,13 @@ launch_cores(struct rte_mempool *mp, unsigned int cores) mp, lcore_id); } - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); ret = per_lcore_mempool_test(mp); cores = cores_save; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (cores == 1) break; cores--; diff --git a/app/test/test_mp_secondary.c b/app/test/test_mp_secondary.c index ac15ddbf2009..2cc1586fcde0 100644 --- a/app/test/test_mp_secondary.c +++ b/app/test/test_mp_secondary.c @@ -94,7 +94,7 @@ run_secondary_instances(void) #endif snprintf(coremask, sizeof(coremask), "%x", \ - (1 << rte_get_master_lcore())); + (1 << rte_get_initial_lcore())); ret |= launch_proc(argv1); ret |= launch_proc(argv2); diff --git a/app/test/test_pdump.c b/app/test/test_pdump.c index 6a1180bcb78e..d2d2df7a8016 100644 --- a/app/test/test_pdump.c +++ b/app/test/test_pdump.c @@ -184,7 +184,7 @@ run_pdump_server_tests(void) }; snprintf(coremask, sizeof(coremask), "%x", - (1 << rte_get_master_lcore())); + (1 << rte_get_initial_lcore())); ret = test_pdump_init(); ret |= launch_p(argv1); diff --git a/app/test/test_per_lcore.c b/app/test/test_per_lcore.c index fcd00212f1eb..ff91a3cf5b2b 100644 --- a/app/test/test_per_lcore.c +++ b/app/test/test_per_lcore.c @@ -73,31 +73,31 @@ test_per_lcore(void) unsigned lcore_id; int ret; - rte_eal_mp_remote_launch(assign_vars, NULL, SKIP_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(assign_vars, NULL, SKIP_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } - rte_eal_mp_remote_launch(display_vars, NULL, SKIP_MASTER); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + rte_eal_mp_remote_launch(display_vars, NULL, SKIP_INITIAL); + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } /* test if it could do remote launch twice at the same time or not */ - ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_MASTER); + ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_INITIAL); if (ret < 0) { printf("It fails to do remote launch but it should able to do\n"); return -1; } /* it should not be able to launch a lcore which is running */ - ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_MASTER); + ret = rte_eal_mp_remote_launch(test_per_lcore_delay, NULL, SKIP_INITIAL); if (ret == 0) { printf("It does remote launch successfully but it should not at this time\n"); return -1; } - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if 
(rte_eal_wait_lcore(lcore_id) < 0) return -1; } } diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c index 352cd47156ba..efe0814f175e 100644 --- a/app/test/test_pmd_perf.c +++ b/app/test/test_pmd_perf.c @@ -278,7 +278,7 @@ alloc_lcore(uint16_t socketid) for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { if (LCORE_AVAIL != lcore_conf[lcore_id].status || lcore_conf[lcore_id].socketid != socketid || - lcore_id == rte_get_master_lcore()) + lcore_id == rte_get_initial_lcore()) continue; lcore_conf[lcore_id].status = LCORE_USED; lcore_conf[lcore_id].nb_ports = 0; @@ -664,7 +664,7 @@ exec_burst(uint32_t flags, int lcore) static int test_pmd_perf(void) { - uint16_t nb_ports, num, nb_lcores, slave_id = (uint16_t)-1; + uint16_t nb_ports, num, nb_lcores, worker_id = (uint16_t)-1; uint16_t nb_rxd = MAX_TRAFFIC_BURST; uint16_t nb_txd = MAX_TRAFFIC_BURST; uint16_t portid; @@ -702,13 +702,13 @@ test_pmd_perf(void) RTE_ETH_FOREACH_DEV(portid) { if (socketid == -1) { socketid = rte_eth_dev_socket_id(portid); - slave_id = alloc_lcore(socketid); - if (slave_id == (uint16_t)-1) { + worker_id = alloc_lcore(socketid); + if (worker_id == (uint16_t)-1) { printf("No avail lcore to run test\n"); return -1; } printf("Performance test runs on lcore %u socket %u\n", - slave_id, socketid); + worker_id, socketid); } if (socketid != rte_eth_dev_socket_id(portid)) { @@ -765,8 +765,8 @@ test_pmd_perf(void) "rte_eth_promiscuous_enable: err=%s, port=%d\n", rte_strerror(-ret), portid); - lcore_conf[slave_id].portlist[num++] = portid; - lcore_conf[slave_id].nb_ports++; + lcore_conf[worker_id].portlist[num++] = portid; + lcore_conf[worker_id].nb_ports++; } check_all_ports_link_status(nb_ports, RTE_PORT_ALL); @@ -791,13 +791,13 @@ test_pmd_perf(void) if (NULL == do_measure) do_measure = measure_rxtx; - rte_eal_remote_launch(main_loop, NULL, slave_id); + rte_eal_remote_launch(main_loop, NULL, worker_id); - if (rte_eal_wait_lcore(slave_id) < 0) + if (rte_eal_wait_lcore(worker_id) < 0) return -1; } else if (sc_flag == SC_BURST_POLL_FIRST || sc_flag == SC_BURST_XMIT_FIRST) - if (exec_burst(sc_flag, slave_id) < 0) + if (exec_burst(sc_flag, worker_id) < 0) return -1; /* port tear down */ diff --git a/app/test/test_rcu_qsbr.c b/app/test/test_rcu_qsbr.c index 0a9e5ecd1a44..7ae66e4dfb76 100644 --- a/app/test/test_rcu_qsbr.c +++ b/app/test/test_rcu_qsbr.c @@ -1327,7 +1327,7 @@ test_rcu_qsbr_main(void) } num_cores = 0; - RTE_LCORE_FOREACH_SLAVE(core_id) { + RTE_LCORE_FOREACH_WORKER(core_id) { enabled_core_ids[num_cores] = core_id; num_cores++; } diff --git a/app/test/test_rcu_qsbr_perf.c b/app/test/test_rcu_qsbr_perf.c index d35a6d089784..3017e71120ad 100644 --- a/app/test/test_rcu_qsbr_perf.c +++ b/app/test/test_rcu_qsbr_perf.c @@ -625,7 +625,7 @@ test_rcu_qsbr_main(void) rte_atomic64_init(&check_cycles); num_cores = 0; - RTE_LCORE_FOREACH_SLAVE(core_id) { + RTE_LCORE_FOREACH_WORKER(core_id) { enabled_core_ids[num_cores] = core_id; num_cores++; } diff --git a/app/test/test_ring_perf.c b/app/test/test_ring_perf.c index ee21faf71380..4d95e3c17f3d 100644 --- a/app/test/test_ring_perf.c +++ b/app/test/test_ring_perf.c @@ -297,7 +297,7 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r, const int esize) lcore_count = 0; param1.size = param2.size = bulk_sizes[i]; param1.r = param2.r = r; - if (cores->c1 == rte_get_master_lcore()) { + if (cores->c1 == rte_get_initial_lcore()) { rte_eal_remote_launch(f2, &param2, cores->c2); f1(&param1); rte_eal_wait_lcore(cores->c2); @@ -340,8 +340,8 @@ load_loop_fn_helper(struct 
thread_params *p, const int esize) if (burst == NULL) return -1; - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0) rte_pause(); @@ -396,12 +396,12 @@ run_on_all_cores(struct rte_ring *r, const int esize) param.size = bulk_sizes[i]; param.r = r; - /* clear synchro and start slaves */ + /* clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - if (rte_eal_mp_remote_launch(lcore_f, &param, SKIP_MASTER) < 0) + if (rte_eal_mp_remote_launch(lcore_f, &param, SKIP_INITIAL) < 0) return -1; - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); lcore_f(&param); @@ -552,7 +552,7 @@ test_ring_perf_esize(const int esize) goto test_fail; } - printf("\n### Testing using all slave nodes ###\n"); + printf("\n### Testing using all worker nodes ###\n"); if (run_on_all_cores(r, esize) < 0) goto test_fail; diff --git a/app/test/test_ring_stress_impl.h b/app/test/test_ring_stress_impl.h index 222d62bc4f4d..fab924515fc3 100644 --- a/app/test/test_ring_stress_impl.h +++ b/app/test/test_ring_stress_impl.h @@ -6,7 +6,7 @@ /** * Stress test for ring enqueue/dequeue operations. - * Performs the following pattern on each slave worker: + * Performs the following pattern on each worker: * dequeue/read-write data from the dequeued objects/enqueue. * Serves as both functional and performance test of ring * enqueue/dequeue operations under high contention @@ -348,8 +348,8 @@ test_mt1(int (*test)(void *)) memset(arg, 0, sizeof(arg)); - /* launch on all slaves */ - RTE_LCORE_FOREACH_SLAVE(lc) { + /* launch on all workers */ + RTE_LCORE_FOREACH_WORKER(lc) { arg[lc].rng = r; arg[lc].stats = init_stat; rte_eal_remote_launch(test, &arg[lc], lc); @@ -365,12 +365,12 @@ wrk_cmd = WRK_CMD_STOP; rte_smp_wmb(); - /* wait for slaves and collect stats. */ + /* wait for workers and collect stats. */ mc = rte_lcore_id(); arg[mc].stats = init_stat; rc = 0; - RTE_LCORE_FOREACH_SLAVE(lc) { + RTE_LCORE_FOREACH_WORKER(lc) { rc |= rte_eal_wait_lcore(lc); lcore_stat_aggr(&arg[mc].stats, &arg[lc].stats); if (verbose != 0) diff --git a/app/test/test_rwlock.c b/app/test/test_rwlock.c index 61bee7d7c296..ea20318c4c84 100644 --- a/app/test/test_rwlock.c +++ b/app/test/test_rwlock.c @@ -99,8 +99,8 @@ load_loop_fn(__rte_unused void *arg) uint64_t lcount = 0; const unsigned int lcore = rte_lcore_id(); - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0) ; @@ -134,12 +134,12 @@ test_rwlock_perf(void) printf("\nRwlock Perf Test on %u cores...\n", rte_lcore_count()); - /* clear synchro and start slaves */ + /* clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - if (rte_eal_mp_remote_launch(load_loop_fn, NULL, SKIP_MASTER) < 0) + if (rte_eal_mp_remote_launch(load_loop_fn, NULL, SKIP_INITIAL) < 0) return -1; - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(NULL); @@ -161,7 +161,7 @@ * - There is a global rwlock and a table of rwlocks (one per lcore). * * - The test function takes all of these locks and launches the - * ``test_rwlock_per_core()`` function on each core (except the master).
+ * ``test_rwlock_per_core()`` function on each core (except the initial). * * - The function takes the global write lock, display something, * then releases the global lock. @@ -187,21 +187,21 @@ rwlock_test1(void) rte_rwlock_write_lock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_rwlock_write_lock(&sl_tab[i]); rte_eal_remote_launch(test_rwlock_per_core, NULL, i); } rte_rwlock_write_unlock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_rwlock_write_unlock(&sl_tab[i]); rte_delay_ms(100); } rte_rwlock_write_lock(&sl); /* this message should be the last message of test */ - printf("Global write lock taken on master core %u\n", rte_lcore_id()); + printf("Global write lock taken on initial core %u\n", rte_lcore_id()); rte_rwlock_write_unlock(&sl); rte_eal_mp_wait_lcore(); @@ -462,26 +462,26 @@ try_rwlock_test_rda(void) try_test_reset(); /* start read test on all avaialble lcores */ - rte_eal_mp_remote_launch(try_read_lcore, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(try_read_lcore, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); return process_try_lcore_stats(); } -/* all slave lcores grab RDLOCK, master one grabs WRLOCK */ +/* all worker lcores grab RDLOCK, initial one grabs WRLOCK */ static int try_rwlock_test_rds_wrm(void) { try_test_reset(); - rte_eal_mp_remote_launch(try_read_lcore, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(try_read_lcore, NULL, SKIP_INITIAL); try_write_lcore(NULL); rte_eal_mp_wait_lcore(); return process_try_lcore_stats(); } -/* master and even slave lcores grab RDLOCK, odd lcores grab WRLOCK */ +/* initial and even worker lcores grab RDLOCK, odd lcores grab WRLOCK */ static int try_rwlock_test_rde_wro(void) { @@ -489,7 +489,7 @@ try_rwlock_test_rde_wro(void) try_test_reset(); - mlc = rte_get_master_lcore(); + mlc = rte_get_initial_lcore(); RTE_LCORE_FOREACH(lc) { if (lc != mlc) { diff --git a/app/test/test_service_cores.c b/app/test/test_service_cores.c index 981e212130bf..6b23363425c9 100644 --- a/app/test/test_service_cores.c +++ b/app/test/test_service_cores.c @@ -30,7 +30,7 @@ static int testsuite_setup(void) { slcore_id = rte_get_next_lcore(/* start core */ -1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); return TEST_SUCCESS; @@ -532,12 +532,12 @@ service_lcore_add_del(void) TEST_ASSERT_EQUAL(1, rte_service_lcore_count(), "Service core count not equal to one"); uint32_t slcore_1 = rte_get_next_lcore(/* start core */ -1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_1), "Service core add did not return zero"); uint32_t slcore_2 = rte_get_next_lcore(/* start core */ slcore_1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_2), "Service core add did not return zero"); @@ -583,12 +583,12 @@ service_threaded_test(int mt_safe) /* add next 2 cores */ uint32_t slcore_1 = rte_get_next_lcore(/* start core */ -1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_1), "mt safe lcore add fail"); uint32_t slcore_2 = rte_get_next_lcore(/* start core */ slcore_1, - /* skip master */ 1, + /* skip initial */ 1, /* wrap */ 0); TEST_ASSERT_EQUAL(0, rte_service_lcore_add(slcore_2), "mt safe lcore add fail"); diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c index 842990ed3b30..87dc8a1f1eeb 100644 --- a/app/test/test_spinlock.c +++ b/app/test/test_spinlock.c @@ -28,7 +28,7 @@ * - There is a global spinlock 
and a table of spinlocks (one per lcore). * * - The test function takes all of these locks and launches the - * ``test_spinlock_per_core()`` function on each core (except the master). + * ``test_spinlock_per_core()`` function on each core (except the initial). * * - The function takes the global lock, display something, then releases * the global lock. @@ -109,8 +109,8 @@ load_loop_fn(void *func_param) const int use_lock = *(int*)func_param; const unsigned lcore = rte_lcore_id(); - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0); begin = rte_get_timer_cycles(); @@ -149,11 +149,11 @@ test_spinlock_perf(void) printf("\nTest with lock on %u cores...\n", rte_lcore_count()); - /* Clear synchro and start slaves */ + /* Clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_MASTER); + rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_INITIAL); - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(&lock); @@ -200,8 +200,8 @@ test_spinlock(void) int ret = 0; int i; - /* slave cores should be waiting: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be waiting: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } @@ -214,19 +214,19 @@ test_spinlock(void) rte_spinlock_lock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_spinlock_lock(&sl_tab[i]); rte_eal_remote_launch(test_spinlock_per_core, NULL, i); } - /* slave cores should be busy: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be busy: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } rte_spinlock_unlock(&sl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_spinlock_unlock(&sl_tab[i]); rte_delay_ms(10); } @@ -245,7 +245,7 @@ test_spinlock(void) } else rte_spinlock_recursive_unlock(&slr); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_spinlock_recursive_per_core, NULL, i); } rte_spinlock_recursive_unlock(&slr); @@ -253,12 +253,12 @@ test_spinlock(void) /* * Test if it could return immediately from try-locking a locked object. - * Here it will lock the spinlock object first, then launch all the slave + * Here it will lock the spinlock object first, then launch all the worker * lcores to trylock the same spinlock object. - * All the slave lcores should give up try-locking a locked object and + * All the worker lcores should give up try-locking a locked object and * return immediately, and then increase the "count" initialized with zero * by one per times. - * We can check if the "count" is finally equal to the number of all slave + * We can check if the "count" is finally equal to the number of all worker * lcores to see if the behavior of try-locking a locked spinlock object * is correct. 
*/ @@ -266,7 +266,7 @@ test_spinlock(void) return -1; } count = 0; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_spinlock_try, NULL, i); } rte_eal_mp_wait_lcore(); diff --git a/app/test/test_stack.c b/app/test/test_stack.c index c8dac1f55cdc..0ef5f47874f2 100644 --- a/app/test/test_stack.c +++ b/app/test/test_stack.c @@ -362,7 +362,7 @@ test_stack_multithreaded(uint32_t flags) rte_atomic64_init(&size); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { args[lcore_id].s = s; args[lcore_id].sz = &size; diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c index 3ab7267b1b72..1a49667a91fc 100644 --- a/app/test/test_stack_perf.c +++ b/app/test/test_stack_perf.c @@ -180,7 +180,7 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s, args[0].sz = args[1].sz = bulk_sizes[i]; args[0].s = args[1].s = s; - if (cores->c1 == rte_get_master_lcore()) { + if (cores->c1 == rte_get_initial_lcore()) { rte_eal_remote_launch(fn, &args[1], cores->c2); fn(&args[0]); rte_eal_wait_lcore(cores->c2); @@ -210,7 +210,7 @@ run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n) rte_atomic32_set(&lcore_barrier, n); - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (++cnt >= n) break; @@ -235,7 +235,7 @@ run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n) avg = args[rte_lcore_id()].avg; cnt = 0; - RTE_LCORE_FOREACH_SLAVE(lcore_id) { + RTE_LCORE_FOREACH_WORKER(lcore_id) { if (++cnt >= n) break; avg += args[lcore_id].avg; diff --git a/app/test/test_ticketlock.c b/app/test/test_ticketlock.c index 66ab3d1a0248..3b4e68af24ce 100644 --- a/app/test/test_ticketlock.c +++ b/app/test/test_ticketlock.c @@ -28,7 +28,7 @@ * - There is a global ticketlock and a table of ticketlocks (one per lcore). * * - The test function takes all of these locks and launches the - * ``test_ticketlock_per_core()`` function on each core (except the master). + * ``test_ticketlock_per_core()`` function on each core (except the initial). * * - The function takes the global lock, display something, then releases * the global lock. 
@@ -110,8 +110,8 @@ load_loop_fn(void *func_param) const int use_lock = *(int *)func_param; const unsigned int lcore = rte_lcore_id(); - /* wait synchro for slaves */ - if (lcore != rte_get_master_lcore()) + /* wait synchro for workers */ + if (lcore != rte_get_initial_lcore()) while (rte_atomic32_read(&synchro) == 0) ; @@ -154,11 +154,11 @@ test_ticketlock_perf(void) lcount = 0; printf("\nTest with lock on %u cores...\n", rte_lcore_count()); - /* Clear synchro and start slaves */ + /* Clear synchro and start workers */ rte_atomic32_set(&synchro, 0); - rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_MASTER); + rte_eal_mp_remote_launch(load_loop_fn, &lock, SKIP_INITIAL); - /* start synchro and launch test on master */ + /* start synchro and launch test on initial lcore */ rte_atomic32_set(&synchro, 1); load_loop_fn(&lock); @@ -208,8 +208,8 @@ test_ticketlock(void) int ret = 0; int i; - /* slave cores should be waiting: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be waiting: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } @@ -217,25 +217,25 @@ test_ticketlock(void) rte_ticketlock_init(&tl); rte_ticketlock_init(&tl_try); rte_ticketlock_recursive_init(&tlr); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_ticketlock_init(&tl_tab[i]); } rte_ticketlock_lock(&tl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_ticketlock_lock(&tl_tab[i]); rte_eal_remote_launch(test_ticketlock_per_core, NULL, i); } - /* slave cores should be busy: print it */ - RTE_LCORE_FOREACH_SLAVE(i) { + /* worker cores should be busy: print it */ + RTE_LCORE_FOREACH_WORKER(i) { printf("lcore %d state: %d\n", i, (int) rte_eal_get_lcore_state(i)); } rte_ticketlock_unlock(&tl); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_ticketlock_unlock(&tl_tab[i]); rte_delay_ms(10); } @@ -254,7 +254,7 @@ test_ticketlock(void) } else rte_ticketlock_recursive_unlock(&tlr); - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_ticketlock_recursive_per_core, NULL, i); } @@ -264,19 +264,19 @@ test_ticketlock(void) /* * Test if it could return immediately from try-locking a locked object. * Here it will lock the ticketlock object first, then launch all the - * slave lcores to trylock the same ticketlock object. - * All the slave lcores should give up try-locking a locked object and + * worker lcores to trylock the same ticketlock object. + * All the worker lcores should give up try-locking a locked object and * return immediately, and then increase the "count" initialized with * zero by one per times. * We can check if the "count" is finally equal to the number of all - * slave lcores to see if the behavior of try-locking a locked + * worker lcores to see if the behavior of try-locking a locked * ticketlock object is correct. */ if (rte_ticketlock_trylock(&tl_try) == 0) return -1; count = 0; - RTE_LCORE_FOREACH_SLAVE(i) { + RTE_LCORE_FOREACH_WORKER(i) { rte_eal_remote_launch(test_ticketlock_try, NULL, i); } rte_eal_mp_wait_lcore(); diff --git a/app/test/test_timer.c b/app/test/test_timer.c index 5933f56ed544..37a944e32bcd 100644 --- a/app/test/test_timer.c +++ b/app/test/test_timer.c @@ -37,7 +37,7 @@ * - All cores then simultaneously are set to schedule all the timers at * the same time, so conflicts should occur. 
* - Then there is a delay while we wait for the timers to expire - * Then the master lcore calls timer_manage() and we check that all + * Then the initial lcore calls timer_manage() and we check that all * timers have had their callbacks called exactly once - no more no less. * - Then we repeat the process, except after setting up the timers, we have * all cores randomly reschedule them. @@ -58,7 +58,7 @@ * * - timer0 * - * - At initialization, timer0 is loaded by the master core, on master core + * - At initialization, timer0 is loaded by the initial core, on initial core * in "single" mode (time = 1 second). * - In the first 19 callbacks, timer0 is reloaded on the same core, * then, it is explicitly stopped at the 20th call. * * - timer1 * - * - At initialization, timer1 is loaded by the master core, on the - * master core in "single" mode (time = 2 seconds). + * - At initialization, timer1 is loaded by the initial core, on the + * initial core in "single" mode (time = 2 seconds). * - In the first 9 callbacks, timer1 is reloaded on another * core. After the 10th callback, timer1 is not reloaded anymore. * * - timer2 * - * - At initialization, timer2 is loaded by the master core, on the - * master core in "periodical" mode (time = 1 second). + * - At initialization, timer2 is loaded by the initial core, on the + * initial core in "periodical" mode (time = 1 second). * - In the callback, when t=25s, it stops timer3 and reloads timer0 * on the current core. * * - timer3 * - * - At initialization, timer3 is loaded by the master core, on + * - At initialization, timer3 is loaded by the initial core, on * another core in "periodical" mode (time = 1 second). * - It is stopped at t=25s by timer2. */ @@ -201,68 +201,68 @@ timer_stress_main_loop(__rte_unused void *arg) return 0; } -/* Need to synchronize slave lcores through multiple steps. */ -enum { SLAVE_WAITING = 1, SLAVE_RUN_SIGNAL, SLAVE_RUNNING, SLAVE_FINISHED }; -static rte_atomic16_t slave_state[RTE_MAX_LCORE]; +/* Need to synchronize worker lcores through multiple steps. 
*/ +enum { WORKER_WAITING = 1, WORKER_RUN_SIGNAL, WORKER_RUNNING, WORKER_FINISHED }; +static rte_atomic16_t worker_state[RTE_MAX_LCORE]; static void -master_init_slaves(void) +init_workers(void) { unsigned i; - RTE_LCORE_FOREACH_SLAVE(i) { - rte_atomic16_set(&slave_state[i], SLAVE_WAITING); + RTE_LCORE_FOREACH_WORKER(i) { + rte_atomic16_set(&worker_state[i], WORKER_WAITING); } } static void -master_start_slaves(void) +start_workers(void) { unsigned i; - RTE_LCORE_FOREACH_SLAVE(i) { - rte_atomic16_set(&slave_state[i], SLAVE_RUN_SIGNAL); + RTE_LCORE_FOREACH_WORKER(i) { + rte_atomic16_set(&worker_state[i], WORKER_RUN_SIGNAL); } - RTE_LCORE_FOREACH_SLAVE(i) { - while (rte_atomic16_read(&slave_state[i]) != SLAVE_RUNNING) + RTE_LCORE_FOREACH_WORKER(i) { + while (rte_atomic16_read(&worker_state[i]) != WORKER_RUNNING) rte_pause(); } } static void -master_wait_for_slaves(void) +wait_for_workers(void) { unsigned i; - RTE_LCORE_FOREACH_SLAVE(i) { - while (rte_atomic16_read(&slave_state[i]) != SLAVE_FINISHED) + RTE_LCORE_FOREACH_WORKER(i) { + while (rte_atomic16_read(&worker_state[i]) != WORKER_FINISHED) rte_pause(); } } static void -slave_wait_to_start(void) +worker_wait_to_start(void) { unsigned lcore_id = rte_lcore_id(); - while (rte_atomic16_read(&slave_state[lcore_id]) != SLAVE_RUN_SIGNAL) + while (rte_atomic16_read(&worker_state[lcore_id]) != WORKER_RUN_SIGNAL) rte_pause(); - rte_atomic16_set(&slave_state[lcore_id], SLAVE_RUNNING); + rte_atomic16_set(&worker_state[lcore_id], WORKER_RUNNING); } static void -slave_finish(void) +worker_finish(void) { unsigned lcore_id = rte_lcore_id(); - rte_atomic16_set(&slave_state[lcore_id], SLAVE_FINISHED); + rte_atomic16_set(&worker_state[lcore_id], WORKER_FINISHED); } static volatile int cb_count = 0; /* callback for second stress test. 
will only be called - * on master lcore */ + * on initial lcore */ static void timer_stress2_cb(struct rte_timer *tim __rte_unused, void *arg __rte_unused) { @@ -278,35 +278,35 @@ timer_stress2_main_loop(__rte_unused void *arg) int i, ret; uint64_t delay = rte_get_timer_hz() / 20; unsigned lcore_id = rte_lcore_id(); - unsigned master = rte_get_master_lcore(); + unsigned initial = rte_get_initial_lcore(); int32_t my_collisions = 0; static rte_atomic32_t collisions; - if (lcore_id == master) { + if (lcore_id == initial) { cb_count = 0; test_failed = 0; rte_atomic32_set(&collisions, 0); - master_init_slaves(); + init_workers(); timers = rte_malloc(NULL, sizeof(*timers) * NB_STRESS2_TIMERS, 0); if (timers == NULL) { printf("Test Failed\n"); printf("- Cannot allocate memory for timers\n" ); test_failed = 1; - master_start_slaves(); + start_workers(); goto cleanup; } for (i = 0; i < NB_STRESS2_TIMERS; i++) rte_timer_init(&timers[i]); - master_start_slaves(); + start_workers(); } else { - slave_wait_to_start(); + worker_wait_to_start(); if (test_failed) goto cleanup; } - /* have all cores schedule all timers on master lcore */ + /* have all cores schedule all timers on initial lcore */ for (i = 0; i < NB_STRESS2_TIMERS; i++) { - ret = rte_timer_reset(&timers[i], delay, SINGLE, master, + ret = rte_timer_reset(&timers[i], delay, SINGLE, initial, timer_stress2_cb, NULL); /* there will be collisions when multiple cores simultaneously * configure the same timers */ @@ -320,14 +320,14 @@ timer_stress2_main_loop(__rte_unused void *arg) rte_delay_ms(100); /* all cores rendezvous */ - if (lcore_id == master) { - master_wait_for_slaves(); + if (lcore_id == initial) { + wait_for_workers(); } else { - slave_finish(); + worker_finish(); } /* now check that we get the right number of callbacks */ - if (lcore_id == master) { + if (lcore_id == initial) { my_collisions = rte_atomic32_read(&collisions); if (my_collisions != 0) printf("- %d timer reset collisions (OK)\n", my_collisions); @@ -338,23 +338,23 @@ timer_stress2_main_loop(__rte_unused void *arg) printf("- Expected %d callbacks, got %d\n", NB_STRESS2_TIMERS, cb_count); test_failed = 1; - master_start_slaves(); + start_workers(); goto cleanup; } cb_count = 0; /* proceed */ - master_start_slaves(); + start_workers(); } else { /* proceed */ - slave_wait_to_start(); + worker_wait_to_start(); if (test_failed) goto cleanup; } /* now test again, just stop and restart timers at random after init*/ for (i = 0; i < NB_STRESS2_TIMERS; i++) - rte_timer_reset(&timers[i], delay, SINGLE, master, + rte_timer_reset(&timers[i], delay, SINGLE, initial, timer_stress2_cb, NULL); /* pick random timer to reset, stopping them first half the time */ @@ -362,7 +362,7 @@ timer_stress2_main_loop(__rte_unused void *arg) int r = rand() % NB_STRESS2_TIMERS; if (i % 2) rte_timer_stop(&timers[r]); - rte_timer_reset(&timers[r], delay, SINGLE, master, + rte_timer_reset(&timers[r], delay, SINGLE, initial, timer_stress2_cb, NULL); } @@ -370,8 +370,8 @@ timer_stress2_main_loop(__rte_unused void *arg) rte_delay_ms(100); /* now check that we get the right number of callbacks */ - if (lcore_id == master) { - master_wait_for_slaves(); + if (lcore_id == initial) { + wait_for_workers(); rte_timer_manage(); if (cb_count != NB_STRESS2_TIMERS) { @@ -386,14 +386,14 @@ timer_stress2_main_loop(__rte_unused void *arg) } cleanup: - if (lcore_id == master) { - master_wait_for_slaves(); + if (lcore_id == initial) { + wait_for_workers(); if (timers != NULL) { rte_free(timers); timers = NULL; } } else { - 
slave_finish(); + worker_finish(); } return 0; @@ -465,7 +465,7 @@ timer_basic_main_loop(__rte_unused void *arg) int64_t diff = 0; /* launch all timers on core 0 */ - if (lcore_id == rte_get_master_lcore()) { + if (lcore_id == rte_get_initial_lcore()) { mytimer_reset(&mytiminfo[0], hz/4, SINGLE, lcore_id, timer_basic_cb); mytimer_reset(&mytiminfo[1], hz/2, SINGLE, lcore_id, @@ -563,7 +563,7 @@ test_timer(void) /* start other cores */ printf("Start timer stress tests\n"); - rte_eal_mp_remote_launch(timer_stress_main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(timer_stress_main_loop, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); /* stop timer 0 used for stress test */ @@ -572,7 +572,7 @@ test_timer(void) /* run a second, slightly different set of stress tests */ printf("\nStart timer stress tests 2\n"); test_failed = 0; - rte_eal_mp_remote_launch(timer_stress2_main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(timer_stress2_main_loop, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); if (test_failed) return TEST_FAILED; @@ -584,7 +584,7 @@ test_timer(void) /* start other cores */ printf("\nStart timer basic tests\n"); - rte_eal_mp_remote_launch(timer_basic_main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(timer_basic_main_loop, NULL, CALL_INITIAL); rte_eal_mp_wait_lcore(); /* stop all timers */ diff --git a/app/test/test_timer_racecond.c b/app/test/test_timer_racecond.c index 4fc917995415..5b8941950c30 100644 --- a/app/test/test_timer_racecond.c +++ b/app/test/test_timer_racecond.c @@ -56,8 +56,8 @@ static struct rte_timer timer[N_TIMERS]; static unsigned timer_lcore_id[N_TIMERS]; -static unsigned master; -static volatile unsigned stop_slaves; +static unsigned int initial_lcore; +static volatile unsigned stop_workers; static int reload_timer(struct rte_timer *tim); @@ -95,7 +95,8 @@ reload_timer(struct rte_timer *tim) (tim - timer); int ret; - ret = rte_timer_reset(tim, ticks, PERIODICAL, master, timer_cb, NULL); + ret = rte_timer_reset(tim, ticks, PERIODICAL, + initial_lcore, timer_cb, NULL); if (ret != 0) { rte_log(RTE_LOG_DEBUG, timer_logtype_test, "- core %u failed to reset timer %" PRIuPTR " (OK)\n", @@ -106,7 +107,7 @@ reload_timer(struct rte_timer *tim) } static int -slave_main_loop(__rte_unused void *arg) +worker_main_loop(__rte_unused void *arg) { unsigned lcore_id = rte_lcore_id(); unsigned i; @@ -115,7 +116,7 @@ printf("Starting main loop on core %u\n", lcore_id); - while (!stop_slaves) { + while (!stop_workers) { /* Wait until the timer manager is running. * We know it's running when we see timer[0] NOT pending. */ @@ -152,7 +153,7 @@ test_timer_racecond(void) unsigned lcore_id; unsigned i; - master = lcore_id = rte_lcore_id(); + initial_lcore = lcore_id = rte_lcore_id(); hz = rte_get_timer_hz(); /* init and start timers */ @@ -161,8 +162,8 @@ test_timer_racecond(void) ret = reload_timer(&timer[i]); TEST_ASSERT(ret == 0, "reload_timer failed"); - /* Distribute timers to slaves. - * Note that we assign timer[0] to the master. + /* Distribute timers to workers. + * Note that we assign timer[0] to the initial lcore. 
*/ timer_lcore_id[i] = lcore_id; lcore_id = rte_get_next_lcore(lcore_id, 1, 1); @@ -172,11 +173,11 @@ test_timer_racecond(void) cur_time = rte_get_timer_cycles(); end_time = cur_time + (hz * TEST_DURATION_S); - /* start slave cores */ - stop_slaves = 0; + /* start worker cores */ + stop_workers = 0; printf("Start timer manage race condition test (%u seconds)\n", TEST_DURATION_S); - rte_eal_mp_remote_launch(slave_main_loop, NULL, SKIP_MASTER); + rte_eal_mp_remote_launch(worker_main_loop, NULL, SKIP_INITIAL); while (diff >= 0) { /* run the timers */ @@ -189,9 +190,9 @@ test_timer_racecond(void) diff = end_time - cur_time; } - /* stop slave cores */ + /* stop worker cores */ printf("Stopping timer manage race condition test\n"); - stop_slaves = 1; + stop_workers = 1; rte_eal_mp_wait_lcore(); /* stop timers */ diff --git a/app/test/test_timer_secondary.c b/app/test/test_timer_secondary.c index 7a3bc873b359..86f187280120 100644 --- a/app/test/test_timer_secondary.c +++ b/app/test/test_timer_secondary.c @@ -141,7 +141,7 @@ test_timer_secondary(void) unsigned int *mgr_lcorep = &test_info->mgr_lcore; unsigned int *sec_lcorep = &test_info->sec_lcore; - *mstr_lcorep = rte_get_master_lcore(); + *mstr_lcorep = rte_get_initial_lcore(); *mgr_lcorep = rte_get_next_lcore(*mstr_lcorep, 1, 1); *sec_lcorep = rte_get_next_lcore(*mgr_lcorep, 1, 1); diff --git a/app/test/test_trace_perf.c b/app/test/test_trace_perf.c index 50c7381b77e7..e1ad8e6f555c 100644 --- a/app/test/test_trace_perf.c +++ b/app/test/test_trace_perf.c @@ -132,7 +132,7 @@ run_test(const char *str, lcore_function_t f, struct test_data *data, size_t sz) memset(data, 0, sz); data->nb_workers = rte_lcore_count() - 1; - RTE_LCORE_FOREACH_SLAVE(id) + RTE_LCORE_FOREACH_WORKER(id) rte_eal_remote_launch(f, &data->ldata[worker++], id); wait_till_workers_are_ready(data); @@ -140,7 +140,7 @@ run_test(const char *str, lcore_function_t f, struct test_data *data, size_t sz) measure_perf(str, data); signal_workers_to_finish(data); - RTE_LCORE_FOREACH_SLAVE(id) + RTE_LCORE_FOREACH_WORKER(id) rte_eal_wait_lcore(id); }
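---
Usage note (illustrative, not part of the patch): the converted tests all follow the same launch/wait idiom. A minimal sketch is below, written against the renamed EAL calls this RFC series introduces (SKIP_INITIAL, RTE_LCORE_FOREACH_WORKER and rte_get_initial_lcore() replacing SKIP_MASTER, RTE_LCORE_FOREACH_SLAVE and rte_get_master_lcore()), so it builds only with the series applied:

#include <stdio.h>

#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Work function run on every worker lcore. */
static int
worker_fn(__rte_unused void *arg)
{
	printf("hello from worker lcore %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	unsigned int lcore_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	printf("initial lcore: %u\n", rte_get_initial_lcore());

	/* Launch worker_fn on every lcore except the initial one. */
	rte_eal_mp_remote_launch(worker_fn, NULL, SKIP_INITIAL);

	/* The initial lcore can run its own share of the work here. */

	/* Reap every worker before launching anything else. */
	RTE_LCORE_FOREACH_WORKER(lcore_id) {
		if (rte_eal_wait_lcore(lcore_id) < 0)
			return -1;
	}

	return rte_eal_cleanup();
}

As in the tests above, each rte_eal_mp_remote_launch() is paired with a wait over all worker lcores before the next launch; launching onto an lcore that is still running fails, which is exactly what the test_per_lcore.c hunk checks.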