From patchwork Tue Feb 21 03:17:38 2017
X-Patchwork-Submitter: "Hunt, David"
X-Patchwork-Id: 20625
X-Patchwork-Delegate: thomas@monjalon.net
From: David Hunt <david.hunt@intel.com>
To: dev@dpdk.org
Cc: bruce.richardson@intel.com
Date: Tue, 21 Feb 2017 03:17:38 +0000
Message-Id: <1487647073-129064-3-git-send-email-david.hunt@intel.com>
In-Reply-To: <1487647073-129064-1-git-send-email-david.hunt@intel.com>
References: <1485163480-156507-2-git-send-email-david.hunt@intel.com> <1487647073-129064-1-git-send-email-david.hunt@intel.com>
Subject: [dpdk-dev] [PATCH v7 02/17] lib: symbol versioning of functions in distributor

Start symbol versioning of the distributor library by renaming all
legacy functions with a _v20 suffix.

Signed-off-by: David Hunt <david.hunt@intel.com>
---
 app/test/test_distributor.c      | 104 +++++++++++----------
 app/test/test_distributor_perf.c |  28 +++---
 examples/distributor/main.c      |  24 ++---
lib/librte_distributor/rte_distributor_v20.c | 54 +++++------ lib/librte_distributor/rte_distributor_v20.h | 33 +++---- lib/librte_distributor/rte_distributor_version.map | 18 ++-- 6 files changed, 132 insertions(+), 129 deletions(-) diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c index ba402e2..6a4e20b 100644 --- a/app/test/test_distributor.c +++ b/app/test/test_distributor.c @@ -81,17 +81,17 @@ static int handle_work(void *arg) { struct rte_mbuf *pkt = NULL; - struct rte_distributor *d = arg; + struct rte_distributor_v20 *d = arg; unsigned count = 0; unsigned id = __sync_fetch_and_add(&worker_idx, 1); - pkt = rte_distributor_get_pkt(d, id, NULL); + pkt = rte_distributor_get_pkt_v20(d, id, NULL); while (!quit) { worker_stats[id].handled_packets++, count++; - pkt = rte_distributor_get_pkt(d, id, pkt); + pkt = rte_distributor_get_pkt_v20(d, id, pkt); } worker_stats[id].handled_packets++, count++; - rte_distributor_return_pkt(d, id, pkt); + rte_distributor_return_pkt_v20(d, id, pkt); return 0; } @@ -107,7 +107,7 @@ handle_work(void *arg) * not necessarily in the same order (as different flows). */ static int -sanity_test(struct rte_distributor *d, struct rte_mempool *p) +sanity_test(struct rte_distributor_v20 *d, struct rte_mempool *p) { struct rte_mbuf *bufs[BURST]; unsigned i; @@ -124,8 +124,8 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p) for (i = 0; i < BURST; i++) bufs[i]->hash.usr = 0; - rte_distributor_process(d, bufs, BURST); - rte_distributor_flush(d); + rte_distributor_process_v20(d, bufs, BURST); + rte_distributor_flush_v20(d); if (total_packet_count() != BURST) { printf("Line %d: Error, not all packets flushed. 
" "Expected %u, got %u\n", @@ -146,8 +146,8 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p) for (i = 0; i < BURST; i++) bufs[i]->hash.usr = (i & 1) << 8; - rte_distributor_process(d, bufs, BURST); - rte_distributor_flush(d); + rte_distributor_process_v20(d, bufs, BURST); + rte_distributor_flush_v20(d); if (total_packet_count() != BURST) { printf("Line %d: Error, not all packets flushed. " "Expected %u, got %u\n", @@ -171,8 +171,8 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p) for (i = 0; i < BURST; i++) bufs[i]->hash.usr = i; - rte_distributor_process(d, bufs, BURST); - rte_distributor_flush(d); + rte_distributor_process_v20(d, bufs, BURST); + rte_distributor_flush_v20(d); if (total_packet_count() != BURST) { printf("Line %d: Error, not all packets flushed. " "Expected %u, got %u\n", @@ -194,8 +194,8 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p) unsigned num_returned = 0; /* flush out any remaining packets */ - rte_distributor_flush(d); - rte_distributor_clear_returns(d); + rte_distributor_flush_v20(d); + rte_distributor_clear_returns_v20(d); if (rte_mempool_get_bulk(p, (void *)many_bufs, BIG_BATCH) != 0) { printf("line %d: Error getting mbufs from pool\n", __LINE__); return -1; @@ -204,13 +204,13 @@ sanity_test(struct rte_distributor *d, struct rte_mempool *p) many_bufs[i]->hash.usr = i << 2; for (i = 0; i < BIG_BATCH/BURST; i++) { - rte_distributor_process(d, &many_bufs[i*BURST], BURST); - num_returned += rte_distributor_returned_pkts(d, + rte_distributor_process_v20(d, &many_bufs[i*BURST], BURST); + num_returned += rte_distributor_returned_pkts_v20(d, &return_bufs[num_returned], BIG_BATCH - num_returned); } - rte_distributor_flush(d); - num_returned += rte_distributor_returned_pkts(d, + rte_distributor_flush_v20(d); + num_returned += rte_distributor_returned_pkts_v20(d, &return_bufs[num_returned], BIG_BATCH - num_returned); if (num_returned != BIG_BATCH) { @@ -249,18 +249,18 @@ static int 
handle_work_with_free_mbufs(void *arg) { struct rte_mbuf *pkt = NULL; - struct rte_distributor *d = arg; + struct rte_distributor_v20 *d = arg; unsigned count = 0; unsigned id = __sync_fetch_and_add(&worker_idx, 1); - pkt = rte_distributor_get_pkt(d, id, NULL); + pkt = rte_distributor_get_pkt_v20(d, id, NULL); while (!quit) { worker_stats[id].handled_packets++, count++; rte_pktmbuf_free(pkt); - pkt = rte_distributor_get_pkt(d, id, pkt); + pkt = rte_distributor_get_pkt_v20(d, id, pkt); } worker_stats[id].handled_packets++, count++; - rte_distributor_return_pkt(d, id, pkt); + rte_distributor_return_pkt_v20(d, id, pkt); return 0; } @@ -270,7 +270,8 @@ handle_work_with_free_mbufs(void *arg) * library. */ static int -sanity_test_with_mbuf_alloc(struct rte_distributor *d, struct rte_mempool *p) +sanity_test_with_mbuf_alloc(struct rte_distributor_v20 *d, + struct rte_mempool *p) { unsigned i; struct rte_mbuf *bufs[BURST]; @@ -280,16 +281,16 @@ sanity_test_with_mbuf_alloc(struct rte_distributor *d, struct rte_mempool *p) for (i = 0; i < ((1<hash.usr = (i+j) << 1; rte_mbuf_refcnt_set(bufs[j], 1); } - rte_distributor_process(d, bufs, BURST); + rte_distributor_process_v20(d, bufs, BURST); } - rte_distributor_flush(d); + rte_distributor_flush_v20(d); if (total_packet_count() < (1<hash.usr = 0; - rte_distributor_process(d, bufs, BURST); + rte_distributor_process_v20(d, bufs, BURST); /* at this point, we will have processed some packets and have a full * backlog for the other ones at worker 0. */ @@ -378,10 +379,10 @@ sanity_test_with_worker_shutdown(struct rte_distributor *d, /* get worker zero to quit */ zero_quit = 1; - rte_distributor_process(d, bufs, BURST); + rte_distributor_process_v20(d, bufs, BURST); /* flush the distributor */ - rte_distributor_flush(d); + rte_distributor_flush_v20(d); if (total_packet_count() != BURST * 2) { printf("Line %d: Error, not all packets flushed. 
" "Expected %u, got %u\n", @@ -401,7 +402,7 @@ sanity_test_with_worker_shutdown(struct rte_distributor *d, * one worker shuts down.. */ static int -test_flush_with_worker_shutdown(struct rte_distributor *d, +test_flush_with_worker_shutdown(struct rte_distributor_v20 *d, struct rte_mempool *p) { struct rte_mbuf *bufs[BURST]; @@ -420,7 +421,7 @@ test_flush_with_worker_shutdown(struct rte_distributor *d, for (i = 0; i < BURST; i++) bufs[i]->hash.usr = 0; - rte_distributor_process(d, bufs, BURST); + rte_distributor_process_v20(d, bufs, BURST); /* at this point, we will have processed some packets and have a full * backlog for the other ones at worker 0. */ @@ -429,7 +430,7 @@ test_flush_with_worker_shutdown(struct rte_distributor *d, zero_quit = 1; /* flush the distributor */ - rte_distributor_flush(d); + rte_distributor_flush_v20(d); zero_quit = 0; if (total_packet_count() != BURST) { @@ -450,10 +451,10 @@ test_flush_with_worker_shutdown(struct rte_distributor *d, static int test_error_distributor_create_name(void) { - struct rte_distributor *d = NULL; + struct rte_distributor_v20 *d = NULL; char *name = NULL; - d = rte_distributor_create(name, rte_socket_id(), + d = rte_distributor_create_v20(name, rte_socket_id(), rte_lcore_count() - 1); if (d != NULL || rte_errno != EINVAL) { printf("ERROR: No error on create() with NULL name param\n"); @@ -467,8 +468,8 @@ int test_error_distributor_create_name(void) static int test_error_distributor_create_numworkers(void) { - struct rte_distributor *d = NULL; - d = rte_distributor_create("test_numworkers", rte_socket_id(), + struct rte_distributor_v20 *d = NULL; + d = rte_distributor_create_v20("test_numworkers", rte_socket_id(), RTE_MAX_LCORE + 10); if (d != NULL || rte_errno != EINVAL) { printf("ERROR: No error on create() with num_workers > MAX\n"); @@ -480,7 +481,7 @@ int test_error_distributor_create_numworkers(void) /* Useful function which ensures that all worker functions terminate */ static void -quit_workers(struct 
rte_distributor *d, struct rte_mempool *p) +quit_workers(struct rte_distributor_v20 *d, struct rte_mempool *p) { const unsigned num_workers = rte_lcore_count() - 1; unsigned i; @@ -491,12 +492,12 @@ quit_workers(struct rte_distributor *d, struct rte_mempool *p) quit = 1; for (i = 0; i < num_workers; i++) bufs[i]->hash.usr = i << 1; - rte_distributor_process(d, bufs, num_workers); + rte_distributor_process_v20(d, bufs, num_workers); rte_mempool_put_bulk(p, (void *)bufs, num_workers); - rte_distributor_process(d, NULL, 0); - rte_distributor_flush(d); + rte_distributor_process_v20(d, NULL, 0); + rte_distributor_flush_v20(d); rte_eal_mp_wait_lcore(); quit = 0; worker_idx = 0; @@ -505,7 +506,7 @@ quit_workers(struct rte_distributor *d, struct rte_mempool *p) static int test_distributor(void) { - static struct rte_distributor *d; + static struct rte_distributor_v20 *d; static struct rte_mempool *p; if (rte_lcore_count() < 2) { @@ -514,15 +515,16 @@ test_distributor(void) } if (d == NULL) { - d = rte_distributor_create("Test_distributor", rte_socket_id(), + d = rte_distributor_create_v20("Test_distributor", + rte_socket_id(), rte_lcore_count() - 1); if (d == NULL) { printf("Error creating distributor\n"); return -1; } } else { - rte_distributor_flush(d); - rte_distributor_clear_returns(d); + rte_distributor_flush_v20(d); + rte_distributor_clear_returns_v20(d); } const unsigned nb_bufs = (511 * rte_lcore_count()) < BIG_BATCH ? 
diff --git a/app/test/test_distributor_perf.c b/app/test/test_distributor_perf.c index fe0c97d..a7e4823 100644 --- a/app/test/test_distributor_perf.c +++ b/app/test/test_distributor_perf.c @@ -130,17 +130,17 @@ static int handle_work(void *arg) { struct rte_mbuf *pkt = NULL; - struct rte_distributor *d = arg; + struct rte_distributor_v20 *d = arg; unsigned count = 0; unsigned id = __sync_fetch_and_add(&worker_idx, 1); - pkt = rte_distributor_get_pkt(d, id, NULL); + pkt = rte_distributor_get_pkt_v20(d, id, NULL); while (!quit) { worker_stats[id].handled_packets++, count++; - pkt = rte_distributor_get_pkt(d, id, pkt); + pkt = rte_distributor_get_pkt_v20(d, id, pkt); } worker_stats[id].handled_packets++, count++; - rte_distributor_return_pkt(d, id, pkt); + rte_distributor_return_pkt_v20(d, id, pkt); return 0; } @@ -149,7 +149,7 @@ handle_work(void *arg) * threads and finally how long per packet the processing took. */ static inline int -perf_test(struct rte_distributor *d, struct rte_mempool *p) +perf_test(struct rte_distributor_v20 *d, struct rte_mempool *p) { unsigned i; uint64_t start, end; @@ -166,12 +166,12 @@ perf_test(struct rte_distributor *d, struct rte_mempool *p) start = rte_rdtsc(); for (i = 0; i < (1<hash.usr = i << 1; - rte_distributor_process(d, bufs, num_workers); + rte_distributor_process_v20(d, bufs, num_workers); rte_mempool_put_bulk(p, (void *)bufs, num_workers); - rte_distributor_process(d, NULL, 0); + rte_distributor_process_v20(d, NULL, 0); rte_eal_mp_wait_lcore(); quit = 0; worker_idx = 0; @@ -215,7 +215,7 @@ quit_workers(struct rte_distributor *d, struct rte_mempool *p) static int test_distributor_perf(void) { - static struct rte_distributor *d; + static struct rte_distributor_v20 *d; static struct rte_mempool *p; if (rte_lcore_count() < 2) { @@ -227,15 +227,15 @@ test_distributor_perf(void) time_cache_line_switch(); if (d == NULL) { - d = rte_distributor_create("Test_perf", rte_socket_id(), + d = rte_distributor_create_v20("Test_perf", 
rte_socket_id(), rte_lcore_count() - 1); if (d == NULL) { printf("Error creating distributor\n"); return -1; } } else { - rte_distributor_flush(d); - rte_distributor_clear_returns(d); + rte_distributor_flush_v20(d); + rte_distributor_clear_returns_v20(d); } const unsigned nb_bufs = (511 * rte_lcore_count()) < BIG_BATCH ? diff --git a/examples/distributor/main.c b/examples/distributor/main.c index fba5446..350d6f6 100644 --- a/examples/distributor/main.c +++ b/examples/distributor/main.c @@ -160,13 +160,13 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool) struct lcore_params { unsigned worker_id; - struct rte_distributor *d; + struct rte_distributor_v20 *d; struct rte_ring *r; struct rte_mempool *mem_pool; }; static int -quit_workers(struct rte_distributor *d, struct rte_mempool *p) +quit_workers(struct rte_distributor_v20 *d, struct rte_mempool *p) { const unsigned num_workers = rte_lcore_count() - 2; unsigned i; @@ -180,7 +180,7 @@ quit_workers(struct rte_distributor *d, struct rte_mempool *p) for (i = 0; i < num_workers; i++) bufs[i]->hash.rss = i << 1; - rte_distributor_process(d, bufs, num_workers); + rte_distributor_process_v20(d, bufs, num_workers); rte_mempool_put_bulk(p, (void *)bufs, num_workers); return 0; @@ -189,7 +189,7 @@ quit_workers(struct rte_distributor *d, struct rte_mempool *p) static int lcore_rx(struct lcore_params *p) { - struct rte_distributor *d = p->d; + struct rte_distributor_v20 *d = p->d; struct rte_mempool *mem_pool = p->mem_pool; struct rte_ring *r = p->r; const uint8_t nb_ports = rte_eth_dev_count(); @@ -228,8 +228,8 @@ lcore_rx(struct lcore_params *p) } app_stats.rx.rx_pkts += nb_rx; - rte_distributor_process(d, bufs, nb_rx); - const uint16_t nb_ret = rte_distributor_returned_pkts(d, + rte_distributor_process_v20(d, bufs, nb_rx); + const uint16_t nb_ret = rte_distributor_returned_pkts_v20(d, bufs, BURST_SIZE*2); app_stats.rx.returned_pkts += nb_ret; if (unlikely(nb_ret == 0)) { @@ -249,9 +249,9 @@ lcore_rx(struct 
lcore_params *p) if (++port == nb_ports) port = 0; } - rte_distributor_process(d, NULL, 0); + rte_distributor_process_v20(d, NULL, 0); /* flush distributor to bring to known state */ - rte_distributor_flush(d); + rte_distributor_flush_v20(d); /* set worker & tx threads quit flag */ quit_signal = 1; /* @@ -403,7 +403,7 @@ print_stats(void) static int lcore_worker(struct lcore_params *p) { - struct rte_distributor *d = p->d; + struct rte_distributor_v20 *d = p->d; const unsigned id = p->worker_id; /* * for single port, xor_val will be zero so we won't modify the output @@ -414,7 +414,7 @@ lcore_worker(struct lcore_params *p) printf("\nCore %u acting as worker core.\n", rte_lcore_id()); while (!quit_signal) { - buf = rte_distributor_get_pkt(d, id, buf); + buf = rte_distributor_get_pkt_v20(d, id, buf); buf->port ^= xor_val; } return 0; @@ -496,7 +496,7 @@ int main(int argc, char *argv[]) { struct rte_mempool *mbuf_pool; - struct rte_distributor *d; + struct rte_distributor_v20 *d; struct rte_ring *output_ring; unsigned lcore_id, worker_id = 0; unsigned nb_ports; @@ -560,7 +560,7 @@ main(int argc, char *argv[]) "All available ports are disabled. Please set portmask.\n"); } - d = rte_distributor_create("PKT_DIST", rte_socket_id(), + d = rte_distributor_create_v20("PKT_DIST", rte_socket_id(), rte_lcore_count() - 2); if (d == NULL) rte_exit(EXIT_FAILURE, "Cannot create distributor\n"); diff --git a/lib/librte_distributor/rte_distributor_v20.c b/lib/librte_distributor/rte_distributor_v20.c index b890947..48a8794 100644 --- a/lib/librte_distributor/rte_distributor_v20.c +++ b/lib/librte_distributor/rte_distributor_v20.c @@ -75,7 +75,7 @@ * the next cache line to worker 0, we pad this out to three cache lines. * Only 64-bits of the memory is actually used though. 
*/ -union rte_distributor_buffer { +union rte_distributor_buffer_v20 { volatile int64_t bufptr64; char pad[RTE_CACHE_LINE_SIZE*3]; } __rte_cache_aligned; @@ -92,8 +92,8 @@ struct rte_distributor_returned_pkts { struct rte_mbuf *mbufs[RTE_DISTRIB_MAX_RETURNS]; }; -struct rte_distributor { - TAILQ_ENTRY(rte_distributor) next; /**< Next in list. */ +struct rte_distributor_v20 { + TAILQ_ENTRY(rte_distributor_v20) next; /**< Next in list. */ char name[RTE_DISTRIBUTOR_NAMESIZE]; /**< Name of the ring. */ unsigned num_workers; /**< Number of workers polling */ @@ -108,12 +108,12 @@ struct rte_distributor { struct rte_distributor_backlog backlog[RTE_DISTRIB_MAX_WORKERS]; - union rte_distributor_buffer bufs[RTE_DISTRIB_MAX_WORKERS]; + union rte_distributor_buffer_v20 bufs[RTE_DISTRIB_MAX_WORKERS]; struct rte_distributor_returned_pkts returns; }; -TAILQ_HEAD(rte_distributor_list, rte_distributor); +TAILQ_HEAD(rte_distributor_list, rte_distributor_v20); static struct rte_tailq_elem rte_distributor_tailq = { .name = "RTE_DISTRIBUTOR", @@ -123,10 +123,10 @@ EAL_REGISTER_TAILQ(rte_distributor_tailq) /**** APIs called by workers ****/ void -rte_distributor_request_pkt(struct rte_distributor *d, +rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d, unsigned worker_id, struct rte_mbuf *oldpkt) { - union rte_distributor_buffer *buf = &d->bufs[worker_id]; + union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id]; int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS) | RTE_DISTRIB_GET_BUF; while (unlikely(buf->bufptr64 & RTE_DISTRIB_FLAGS_MASK)) @@ -135,10 +135,10 @@ rte_distributor_request_pkt(struct rte_distributor *d, } struct rte_mbuf * -rte_distributor_poll_pkt(struct rte_distributor *d, +rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d, unsigned worker_id) { - union rte_distributor_buffer *buf = &d->bufs[worker_id]; + union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id]; if (buf->bufptr64 & RTE_DISTRIB_GET_BUF) return NULL; @@ 
-148,21 +148,21 @@ rte_distributor_poll_pkt(struct rte_distributor *d, } struct rte_mbuf * -rte_distributor_get_pkt(struct rte_distributor *d, +rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d, unsigned worker_id, struct rte_mbuf *oldpkt) { struct rte_mbuf *ret; - rte_distributor_request_pkt(d, worker_id, oldpkt); - while ((ret = rte_distributor_poll_pkt(d, worker_id)) == NULL) + rte_distributor_request_pkt_v20(d, worker_id, oldpkt); + while ((ret = rte_distributor_poll_pkt_v20(d, worker_id)) == NULL) rte_pause(); return ret; } int -rte_distributor_return_pkt(struct rte_distributor *d, +rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d, unsigned worker_id, struct rte_mbuf *oldpkt) { - union rte_distributor_buffer *buf = &d->bufs[worker_id]; + union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id]; uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS) | RTE_DISTRIB_RETURN_BUF; buf->bufptr64 = req; @@ -193,7 +193,7 @@ backlog_pop(struct rte_distributor_backlog *bl) /* stores a packet returned from a worker inside the returns array */ static inline void -store_return(uintptr_t oldbuf, struct rte_distributor *d, +store_return(uintptr_t oldbuf, struct rte_distributor_v20 *d, unsigned *ret_start, unsigned *ret_count) { /* store returns in a circular buffer - code is branch-free */ @@ -204,7 +204,7 @@ store_return(uintptr_t oldbuf, struct rte_distributor *d, } static inline void -handle_worker_shutdown(struct rte_distributor *d, unsigned wkr) +handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr) { d->in_flight_tags[wkr] = 0; d->in_flight_bitmask &= ~(1UL << wkr); @@ -234,7 +234,7 @@ handle_worker_shutdown(struct rte_distributor *d, unsigned wkr) * Note that the tags were set before first level call * to rte_distributor_process. 
*/ - rte_distributor_process(d, pkts, i); + rte_distributor_process_v20(d, pkts, i); bl->count = bl->start = 0; } } @@ -244,7 +244,7 @@ handle_worker_shutdown(struct rte_distributor *d, unsigned wkr) * to do a partial flush. */ static int -process_returns(struct rte_distributor *d) +process_returns(struct rte_distributor_v20 *d) { unsigned wkr; unsigned flushed = 0; @@ -283,7 +283,7 @@ process_returns(struct rte_distributor *d) /* process a set of packets to distribute them to workers */ int -rte_distributor_process(struct rte_distributor *d, +rte_distributor_process_v20(struct rte_distributor_v20 *d, struct rte_mbuf **mbufs, unsigned num_mbufs) { unsigned next_idx = 0; @@ -387,7 +387,7 @@ rte_distributor_process(struct rte_distributor *d, /* return to the caller, packets returned from workers */ int -rte_distributor_returned_pkts(struct rte_distributor *d, +rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d, struct rte_mbuf **mbufs, unsigned max_mbufs) { struct rte_distributor_returned_pkts *returns = &d->returns; @@ -408,7 +408,7 @@ rte_distributor_returned_pkts(struct rte_distributor *d, /* return the number of packets in-flight in a distributor, i.e. packets * being workered on or queued up in a backlog. */ static inline unsigned -total_outstanding(const struct rte_distributor *d) +total_outstanding(const struct rte_distributor_v20 *d) { unsigned wkr, total_outstanding; @@ -423,19 +423,19 @@ total_outstanding(const struct rte_distributor *d) /* flush the distributor, so that there are no outstanding packets in flight or * queued up. 
*/ int -rte_distributor_flush(struct rte_distributor *d) +rte_distributor_flush_v20(struct rte_distributor_v20 *d) { const unsigned flushed = total_outstanding(d); while (total_outstanding(d) > 0) - rte_distributor_process(d, NULL, 0); + rte_distributor_process_v20(d, NULL, 0); return flushed; } /* clears the internal returns array in the distributor */ void -rte_distributor_clear_returns(struct rte_distributor *d) +rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d) { d->returns.start = d->returns.count = 0; #ifndef __OPTIMIZE__ @@ -444,12 +444,12 @@ rte_distributor_clear_returns(struct rte_distributor *d) } /* creates a distributor instance */ -struct rte_distributor * -rte_distributor_create(const char *name, +struct rte_distributor_v20 * +rte_distributor_create_v20(const char *name, unsigned socket_id, unsigned num_workers) { - struct rte_distributor *d; + struct rte_distributor_v20 *d; struct rte_distributor_list *distributor_list; char mz_name[RTE_MEMZONE_NAMESIZE]; const struct rte_memzone *mz; diff --git a/lib/librte_distributor/rte_distributor_v20.h b/lib/librte_distributor/rte_distributor_v20.h index 7d36bc8..6da2ae3 100644 --- a/lib/librte_distributor/rte_distributor_v20.h +++ b/lib/librte_distributor/rte_distributor_v20.h @@ -1,7 +1,7 @@ /*- * BSD LICENSE * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * Copyright(c) 2010-2017 Intel Corporation. All rights reserved. * All rights reserved. * * Redistribution and use in source and binary forms, with or without @@ -31,15 +31,15 @@ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ -#ifndef _RTE_DISTRIBUTE_H_ -#define _RTE_DISTRIBUTE_H_ +#ifndef _RTE_DISTRIBUTE_V20_H_ +#define _RTE_DISTRIBUTE_V20_H_ /** * @file * RTE distributor * - * The distributor is a component which is designed to pass packets - * one-at-a-time to workers, with dynamic load balancing. 
+ * This file contains the legacy single-packet-at-a-time API and is + * here to allow the latest API to provide backward compatibility. */ #ifdef __cplusplus @@ -48,7 +48,7 @@ extern "C" { #define RTE_DISTRIBUTOR_NAMESIZE 32 /**< Length of name for instance */ -struct rte_distributor; +struct rte_distributor_v20; struct rte_mbuf; /** @@ -67,8 +67,8 @@ struct rte_mbuf; * @return * The newly created distributor instance */ -struct rte_distributor * -rte_distributor_create(const char *name, unsigned socket_id, +struct rte_distributor_v20 * +rte_distributor_create_v20(const char *name, unsigned int socket_id, unsigned num_workers); /* *** APIS to be called on the distributor lcore *** */ @@ -103,7 +103,7 @@ rte_distributor_create(const char *name, unsigned socket_id, * The number of mbufs processed. */ int -rte_distributor_process(struct rte_distributor *d, +rte_distributor_process_v20(struct rte_distributor_v20 *d, struct rte_mbuf **mbufs, unsigned num_mbufs); /** @@ -121,7 +121,7 @@ rte_distributor_process(struct rte_distributor *d, * The number of mbufs returned in the mbufs array. */ int -rte_distributor_returned_pkts(struct rte_distributor *d, +rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d, struct rte_mbuf **mbufs, unsigned max_mbufs); /** @@ -136,7 +136,7 @@ rte_distributor_returned_pkts(struct rte_distributor *d, * The number of queued/in-flight packets that were completed by this call. 
*/ int -rte_distributor_flush(struct rte_distributor *d); +rte_distributor_flush_v20(struct rte_distributor_v20 *d); /** * Clears the array of returned packets used as the source for the @@ -148,7 +148,7 @@ rte_distributor_flush(struct rte_distributor *d); * The distributor instance to be used */ void -rte_distributor_clear_returns(struct rte_distributor *d); +rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d); /* *** APIS to be called on the worker lcores *** */ /* @@ -177,7 +177,7 @@ rte_distributor_clear_returns(struct rte_distributor *d); * A new packet to be processed by the worker thread. */ struct rte_mbuf * -rte_distributor_get_pkt(struct rte_distributor *d, +rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d, unsigned worker_id, struct rte_mbuf *oldpkt); /** @@ -193,7 +193,8 @@ rte_distributor_get_pkt(struct rte_distributor *d, * The previous packet being processed by the worker */ int -rte_distributor_return_pkt(struct rte_distributor *d, unsigned worker_id, +rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d, + unsigned int worker_id, struct rte_mbuf *mbuf); /** @@ -217,7 +218,7 @@ rte_distributor_return_pkt(struct rte_distributor *d, unsigned worker_id, * The previous packet, if any, being processed by the worker */ void -rte_distributor_request_pkt(struct rte_distributor *d, +rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d, unsigned worker_id, struct rte_mbuf *oldpkt); /** @@ -237,7 +238,7 @@ rte_distributor_request_pkt(struct rte_distributor *d, * packet is yet available. 
 */
 struct rte_mbuf *
-rte_distributor_poll_pkt(struct rte_distributor *d,
+rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
 		unsigned worker_id);
 
 #ifdef __cplusplus
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 73fdc43..414fdc3 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,15 +1,15 @@
 DPDK_2.0 {
 	global:
 
-	rte_distributor_clear_returns;
-	rte_distributor_create;
-	rte_distributor_flush;
-	rte_distributor_get_pkt;
-	rte_distributor_poll_pkt;
-	rte_distributor_process;
-	rte_distributor_request_pkt;
-	rte_distributor_return_pkt;
-	rte_distributor_returned_pkts;
+	rte_distributor_clear_returns_v20;
+	rte_distributor_create_v20;
+	rte_distributor_flush_v20;
+	rte_distributor_get_pkt_v20;
+	rte_distributor_poll_pkt_v20;
+	rte_distributor_process_v20;
+	rte_distributor_request_pkt_v20;
+	rte_distributor_return_pkt_v20;
+	rte_distributor_returned_pkts_v20;
 
 	local: *;
 };