From patchwork Thu Mar 30 06:18:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125617 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 47EA74286D; Thu, 30 Mar 2023 08:18:51 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AB86442686; Thu, 30 Mar 2023 08:18:46 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 7E52040685 for ; Thu, 30 Mar 2023 08:18:43 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157123; x=1711693123; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fsPsNUzV9zGx7RZaygT5x7J1G9gjHYzuiC2oIiRpJtA=; b=dgMAFD5tzJSvbjuOAg1B5q3GqVqmQTiNdehbJXZuZo+E0PHS2KRe4714 a3jNqnH2louNI20yFvbA1Ow3mDg0KIGEiCbub/oT2ytw9TCUpW/Vq6ShR fQoWSkqP9yd+cBCcBhHsRsmYnIN5IubG4/JWUYGDDMNHhRZqEWwsoO7hX P+4vayfDn1z8J4tNA75Kz7OVgLzuRBarfQ5BX5q1st13fdvTBh6kDDhmx JBtS6S90bJwDPu+LUGxUaGhnQ/HJ+7YJYhc8fVBU3DeAcS/QW215S6uI4 JqiEUnEAgAMnFgY7qy8NqX4Te+NV9VhprYw+GYAj3+9X6XO9H6+dDnS45 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530559" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530559" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:42 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176155" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176155" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:40 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 01/15] graph: rename rte_graph_work as common Date: Thu, 30 Mar 2023 15:18:20 +0900 Message-Id: <20230330061834.3118201-2-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Rename rte_graph_work.h to rte_graph_work_common.h for supporting multiple graph worker model. 
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- MAINTAINERS | 1 + lib/graph/graph_pcap.c | 2 +- lib/graph/graph_private.h | 2 +- lib/graph/meson.build | 2 +- lib/graph/{rte_graph_worker.h => rte_graph_worker_common.h} | 6 +++--- 5 files changed, 7 insertions(+), 6 deletions(-) rename lib/graph/{rte_graph_worker.h => rte_graph_worker_common.h} (99%) diff --git a/MAINTAINERS b/MAINTAINERS index 280058adfc..9d9467dd00 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1714,6 +1714,7 @@ F: doc/guides/prog_guide/bpf_lib.rst Graph - EXPERIMENTAL M: Jerin Jacob M: Kiran Kumar K +M: Zhirun Yan F: lib/graph/ F: doc/guides/prog_guide/graph_lib.rst F: app/test/test_graph* diff --git a/lib/graph/graph_pcap.c b/lib/graph/graph_pcap.c index 6c43330029..8a220370fa 100644 --- a/lib/graph/graph_pcap.c +++ b/lib/graph/graph_pcap.c @@ -10,7 +10,7 @@ #include #include -#include "rte_graph_worker.h" +#include "rte_graph_worker_common.h" #include "graph_pcap_private.h" diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index 7d1b30b8ac..f08dbc7e9d 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -12,7 +12,7 @@ #include #include "rte_graph.h" -#include "rte_graph_worker.h" +#include "rte_graph_worker_common.h" extern int rte_graph_logtype; diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 3526d1b5d4..4e2b612ad3 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -16,6 +16,6 @@ sources = files( 'graph_populate.c', 'graph_pcap.c', ) -headers = files('rte_graph.h', 'rte_graph_worker.h') +headers = files('rte_graph.h', 'rte_graph_worker_common.h') deps += ['eal', 'pcapng'] diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker_common.h similarity index 99% rename from lib/graph/rte_graph_worker.h rename to lib/graph/rte_graph_worker_common.h index 438595b15c..0bad2938f3 100644 --- a/lib/graph/rte_graph_worker.h +++ b/lib/graph/rte_graph_worker_common.h @@ -2,8 +2,8 @@ * Copyright(C) 2020 Marvell International Ltd. 
*/ -#ifndef _RTE_GRAPH_WORKER_H_ -#define _RTE_GRAPH_WORKER_H_ +#ifndef _RTE_GRAPH_WORKER_COMMON_H_ +#define _RTE_GRAPH_WORKER_COMMON_H_ /** * @file rte_graph_worker.h @@ -518,4 +518,4 @@ rte_node_next_stream_move(struct rte_graph *graph, struct rte_node *src, } #endif -#endif /* _RTE_GRAPH_WORKER_H_ */ +#endif /* _RTE_GRAPH_WORKER_COMMON_H_ */ From patchwork Thu Mar 30 06:18:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125618 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 809DA4286D; Thu, 30 Mar 2023 08:18:58 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BECF642BAC; Thu, 30 Mar 2023 08:18:48 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id CE8C941138 for ; Thu, 30 Mar 2023 08:18:45 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157126; x=1711693126; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Lf71wihasENtQbFGYC7KX/IvY97dKz3RS3iszWg6WiU=; b=W5r29AdeQc2QaG621Xk/cul/QMKNlJv+o/5KmqnbA8pReUeYHhErPr3J 32keJJg8w04O6gnHe58ZewicHSsR0beuCFO4TRPZcZ0hilm+ajRkAhnjI iM9WwRhHK/0QJyLcqEZWxdMMh5Mer9zVvX6cggKhjzuEE7pot/tE1NZPa fTZ7z9IXWkYWLyyGpNLlXJn7vo9vB3EhN/N6RMYEo3Ppqppx+0vJHrsuQ Rl44f7AQLdXEbcUieTpwZ3s777Xfpsh7I0zjj7TGxJP/G07DUirE/1Thi BaKA19ZCbp5XYCm/lW9pkLzPtcv1cU4n2618bYx8cde0qAGOKcnLIUJqX g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530565" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530565" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:44 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176160" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176160" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:42 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 02/15] graph: split graph worker into common and default model Date: Thu, 30 Mar 2023 15:18:21 +0900 Message-Id: <20230330061834.3118201-3-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org To support multiple graph worker models, split the graph worker into common and default parts. The current walk function becomes the rte_graph_model_rtc implementation, since the default model is RTC (Run-to-completion).
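For reference, the application fast path is unchanged by this split. A minimal worker-loop sketch (application code, not part of this patch; the quit flag and how the graph pointer is obtained are assumptions):

#include <stdbool.h>
#include <rte_graph_worker.h>

static volatile bool force_quit; /* assumed application quit flag */

static int
graph_worker_main(void *arg)
{
	struct rte_graph *graph = arg; /* graph created/looked up by the application */

	while (!force_quit)
		rte_graph_walk(graph); /* now forwards to rte_graph_walk_rtc() by default */

	return 0;
}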
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_pcap.c | 2 +- lib/graph/graph_private.h | 2 +- lib/graph/meson.build | 2 +- lib/graph/rte_graph_model_rtc.h | 61 +++++++++++++++++++++++++++++ lib/graph/rte_graph_worker.h | 34 ++++++++++++++++ lib/graph/rte_graph_worker_common.h | 57 --------------------------- 6 files changed, 98 insertions(+), 60 deletions(-) create mode 100644 lib/graph/rte_graph_model_rtc.h create mode 100644 lib/graph/rte_graph_worker.h diff --git a/lib/graph/graph_pcap.c b/lib/graph/graph_pcap.c index 8a220370fa..6c43330029 100644 --- a/lib/graph/graph_pcap.c +++ b/lib/graph/graph_pcap.c @@ -10,7 +10,7 @@ #include #include -#include "rte_graph_worker_common.h" +#include "rte_graph_worker.h" #include "graph_pcap_private.h" diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index f08dbc7e9d..7d1b30b8ac 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -12,7 +12,7 @@ #include #include "rte_graph.h" -#include "rte_graph_worker_common.h" +#include "rte_graph_worker.h" extern int rte_graph_logtype; diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 4e2b612ad3..3526d1b5d4 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -16,6 +16,6 @@ sources = files( 'graph_populate.c', 'graph_pcap.c', ) -headers = files('rte_graph.h', 'rte_graph_worker_common.h') +headers = files('rte_graph.h', 'rte_graph_worker.h') deps += ['eal', 'pcapng'] diff --git a/lib/graph/rte_graph_model_rtc.h b/lib/graph/rte_graph_model_rtc.h new file mode 100644 index 0000000000..665560f831 --- /dev/null +++ b/lib/graph/rte_graph_model_rtc.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#include "rte_graph_worker_common.h" + +/** + * Perform graph walk on the circular buffer and invoke the process function + * of the nodes and collect the stats. + * + * @param graph + * Graph pointer returned from rte_graph_lookup function. + * + * @see rte_graph_lookup() + */ +static inline void +rte_graph_walk_rtc(struct rte_graph *graph) +{ + const rte_graph_off_t *cir_start = graph->cir_start; + const rte_node_t mask = graph->cir_mask; + uint32_t head = graph->head; + struct rte_node *node; + uint64_t start; + uint16_t rc; + void **objs; + + /* + * Walk on the source node(s) ((cir_start - head) -> cir_start) and then + * on the pending streams (cir_start -> (cir_start + mask) -> cir_start) + * in a circular buffer fashion. + * + * +-----+ <= cir_start - head [number of source nodes] + * | | + * | ... | <= source nodes + * | | + * +-----+ <= cir_start [head = 0] [tail = 0] + * | | + * | ... | <= pending streams + * | | + * +-----+ <= cir_start + mask + */ + while (likely(head != graph->tail)) { + node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); + objs = node->objs; + rte_prefetch0(objs); + + if (rte_graph_has_stats_feature()) { + start = rte_rdtsc(); + rc = node->process(graph, node, objs, node->idx); + node->total_cycles += rte_rdtsc() - start; + node->total_calls++; + node->total_objs += rc; + } else { + node->process(graph, node, objs, node->idx); + } + node->idx = 0; + head = likely((int32_t)head > 0) ? 
head & mask : head; + } + graph->tail = 0; +} diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h new file mode 100644 index 0000000000..7ea18ba80a --- /dev/null +++ b/lib/graph/rte_graph_worker.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#ifndef _RTE_GRAPH_WORKER_H_ +#define _RTE_GRAPH_WORKER_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "rte_graph_model_rtc.h" + +/** + * Perform graph walk on the circular buffer and invoke the process function + * of the nodes and collect the stats. + * + * @param graph + * Graph pointer returned from rte_graph_lookup function. + * + * @see rte_graph_lookup() + */ +__rte_experimental +static inline void +rte_graph_walk(struct rte_graph *graph) +{ + rte_graph_walk_rtc(graph); +} + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_GRAPH_WORKER_H_ */ diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index 0bad2938f3..b58f8f6947 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -128,63 +128,6 @@ __rte_experimental void __rte_node_stream_alloc_size(struct rte_graph *graph, struct rte_node *node, uint16_t req_size); -/** - * Perform graph walk on the circular buffer and invoke the process function - * of the nodes and collect the stats. - * - * @param graph - * Graph pointer returned from rte_graph_lookup function. - * - * @see rte_graph_lookup() - */ -__rte_experimental -static inline void -rte_graph_walk(struct rte_graph *graph) -{ - const rte_graph_off_t *cir_start = graph->cir_start; - const rte_node_t mask = graph->cir_mask; - uint32_t head = graph->head; - struct rte_node *node; - uint64_t start; - uint16_t rc; - void **objs; - - /* - * Walk on the source node(s) ((cir_start - head) -> cir_start) and then - * on the pending streams (cir_start -> (cir_start + mask) -> cir_start) - * in a circular buffer fashion. - * - * +-----+ <= cir_start - head [number of source nodes] - * | | - * | ... | <= source nodes - * | | - * +-----+ <= cir_start [head = 0] [tail = 0] - * | | - * | ... | <= pending streams - * | | - * +-----+ <= cir_start + mask - */ - while (likely(head != graph->tail)) { - node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); - RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); - objs = node->objs; - rte_prefetch0(objs); - - if (rte_graph_has_stats_feature()) { - start = rte_rdtsc(); - rc = node->process(graph, node, objs, node->idx); - node->total_cycles += rte_rdtsc() - start; - node->total_calls++; - node->total_objs += rc; - } else { - node->process(graph, node, objs, node->idx); - } - node->idx = 0; - head = likely((int32_t)head > 0) ? 
head & mask : head; - } - graph->tail = 0; -} - /* Fast path helper functions */ /** From patchwork Thu Mar 30 06:18:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125619 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 409CC4286D; Thu, 30 Mar 2023 08:19:06 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4B31342D0D; Thu, 30 Mar 2023 08:18:51 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 8E3D742B8C for ; Thu, 30 Mar 2023 08:18:47 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157127; x=1711693127; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EoeDHzSoMCh9gW/nQpMjzg2WI1Xo6w5PxEC7qbrHzkk=; b=SZQGcJkEjR0iM9QKhUHqjAp+73oKWCJVJoYrqBFr65Wfqd+8LJ3nD4nO I8qhUdcpmwFYoj5nBM9v9Bru7XMHWVAm/ZbMQ3tnW9k59o49oJD8pl49V vvvnMPnwC0PCL8nn0yFWyP7ptEwkvm1Ev+2fsbtptDfJDQHXHKZxiI5PL 9EqRvfH1H62InzO+Q9Bz1UIT+Io5080pEZzk8hY0uCIz2NpIhlu5IYyNR Zk7mwXbKucepE8CGjAOiQ+wAfmsuxwB01Gf8Ie4Sk2ldQoxACzqCC4ioj U6my8PM7lJyHKmXTt+VN0LUFfBWtRxmyr8+BF1BPG2cmnDEVmfj+2Fvaj A==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530574" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530574" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:47 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176172" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176172" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:44 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 03/15] graph: move node process into inline function Date: Thu, 30 Mar 2023 15:18:22 +0900 Message-Id: <20230330061834.3118201-4-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Node process is a single and reusable block, move the code into an inline function. 
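As a sketch of the intended reuse (illustrative only, not part of this patch), any other worker model can now drive a node through the shared helper instead of open-coding the stats/process sequence:

/* Hypothetical helper in an alternate worker model. __rte_node_process()
 * performs the fence assert, the objs prefetch, the node->process() call
 * (with optional stats accounting) and the node->idx reset. */
static __rte_always_inline void
run_one_node(struct rte_graph *graph, struct rte_node *node)
{
	__rte_node_process(graph, node);
}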
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/rte_graph_model_rtc.h | 20 ++--------------- lib/graph/rte_graph_worker_common.h | 33 +++++++++++++++++++++++++++++ 2 files changed, 35 insertions(+), 18 deletions(-) diff --git a/lib/graph/rte_graph_model_rtc.h b/lib/graph/rte_graph_model_rtc.h index 665560f831..0dcb7151e9 100644 --- a/lib/graph/rte_graph_model_rtc.h +++ b/lib/graph/rte_graph_model_rtc.h @@ -20,9 +20,6 @@ rte_graph_walk_rtc(struct rte_graph *graph) const rte_node_t mask = graph->cir_mask; uint32_t head = graph->head; struct rte_node *node; - uint64_t start; - uint16_t rc; - void **objs; /* * Walk on the source node(s) ((cir_start - head) -> cir_start) and then @@ -41,21 +38,8 @@ rte_graph_walk_rtc(struct rte_graph *graph) */ while (likely(head != graph->tail)) { node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); - RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); - objs = node->objs; - rte_prefetch0(objs); - - if (rte_graph_has_stats_feature()) { - start = rte_rdtsc(); - rc = node->process(graph, node, objs, node->idx); - node->total_cycles += rte_rdtsc() - start; - node->total_calls++; - node->total_objs += rc; - } else { - node->process(graph, node, objs, node->idx); - } - node->idx = 0; - head = likely((int32_t)head > 0) ? head & mask : head; + __rte_node_process(graph, node); + head = likely((int32_t)head > 0) ? head & mask : head; } graph->tail = 0; } diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index b58f8f6947..41428974db 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -130,6 +130,39 @@ void __rte_node_stream_alloc_size(struct rte_graph *graph, /* Fast path helper functions */ +/** + * @internal + * + * Enqueue a given node to the tail of the graph reel. + * + * @param graph + * Pointer Graph object. + * @param node + * Pointer to node object to be enqueued. 
+ */ +static __rte_always_inline void +__rte_node_process(struct rte_graph *graph, struct rte_node *node) +{ + uint64_t start; + uint16_t rc; + void **objs; + + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); + objs = node->objs; + rte_prefetch0(objs); + + if (rte_graph_has_stats_feature()) { + start = rte_rdtsc(); + rc = node->process(graph, node, objs, node->idx); + node->total_cycles += rte_rdtsc() - start; + node->total_calls++; + node->total_objs += rc; + } else { + node->process(graph, node, objs, node->idx); + } + node->idx = 0; +} + /** * @internal * From patchwork Thu Mar 30 06:18:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125620 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 08E7B4286D; Thu, 30 Mar 2023 08:19:13 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 88AC642D12; Thu, 30 Mar 2023 08:18:52 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 87C5742BDA for ; Thu, 30 Mar 2023 08:18:49 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157129; x=1711693129; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4y65Umh6mepIreusFAJcS4HK+KVCADFMVvvhxc5T9Do=; b=L9ASsRcXNxfTwcykfhMcFOStati/4HwGYqXvxVKSvQGtMkHNIeio0cZe ZwcKpdkhp99l90oYuCTIchMU162KggP+79dnmIroGDUyJt85hW6lQXU89 5xt+n95BRY7gllPHV8gm5SZNFj4nq9TiQoZoAl8y0xsm1EjUwoHVc7byC 7q7G2uoQAQjVaOpANDQwt/tmwPn+HcVdf0TxSuEBMiUt9Amce3ndcouD9 HxBFNSQxVYHRWgY06jUKsAJE3I/wL1n1NyZD3y1NsPTe29oTldcbWKrAi GXxPRI2Uu1j0O70lnnPecdBxP4F8wGqzVj+B8qfdsGmQpfaY/dsRgURGG Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530578" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530578" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:49 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176179" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176179" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:47 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 04/15] graph: add get/set graph worker model APIs Date: Thu, 30 Mar 2023 15:18:23 +0900 Message-Id: <20230330061834.3118201-5-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add new get/set APIs to configure graph worker model which is used to determine which model will be chosen. 
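The selected model is stored in a per-lcore variable, so in this version each worker thread picks it before entering its walk loop. A hedged usage fragment (application code, not part of this patch):

/* On each worker lcore, before the first rte_graph_walk() call: */
if (rte_graph_worker_model_set(RTE_GRAPH_MODEL_MCORE_DISPATCH) != 0)
	/* an invalid model makes the setter fall back to RTE_GRAPH_MODEL_DEFAULT */
	rte_panic("failed to set graph worker model\n");

if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_RTC)
	printf("running the default run-to-completion model\n");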
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/meson.build | 1 + lib/graph/rte_graph_worker.c | 54 +++++++++++++++++++++++++++++ lib/graph/rte_graph_worker_common.h | 19 ++++++++++ lib/graph/version.map | 3 ++ 4 files changed, 77 insertions(+) create mode 100644 lib/graph/rte_graph_worker.c diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 3526d1b5d4..9fab8243da 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -15,6 +15,7 @@ sources = files( 'graph_stats.c', 'graph_populate.c', 'graph_pcap.c', + 'rte_graph_worker.c', ) headers = files('rte_graph.h', 'rte_graph_worker.h') diff --git a/lib/graph/rte_graph_worker.c b/lib/graph/rte_graph_worker.c new file mode 100644 index 0000000000..cabc101262 --- /dev/null +++ b/lib/graph/rte_graph_worker.c @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#include "rte_graph_worker_common.h" + +RTE_DEFINE_PER_LCORE(enum rte_graph_worker_model, worker_model) = RTE_GRAPH_MODEL_DEFAULT; + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * Set the graph worker model + * + * @note This function does not perform any locking, and is only safe to call + * before graph running. + * + * @param name + * Name of the graph worker model. + * + * @return + * 0 on success, -1 otherwise. + */ +int +rte_graph_worker_model_set(enum rte_graph_worker_model model) +{ + if (model >= RTE_GRAPH_MODEL_LIST_END) + goto fail; + + RTE_PER_LCORE(worker_model) = model; + return 0; + +fail: + RTE_PER_LCORE(worker_model) = RTE_GRAPH_MODEL_DEFAULT; + return -1; +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Get the graph worker model + * + * @param name + * Name of the graph worker model. + * + * @return + * Graph worker model on success. + */ +inline +enum rte_graph_worker_model +rte_graph_worker_model_get(void) +{ + return RTE_PER_LCORE(worker_model); +} diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index 41428974db..1526da6e2c 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -19,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -95,6 +96,16 @@ struct rte_node { struct rte_node *nodes[] __rte_cache_min_aligned; /**< Next nodes. 
*/ } __rte_cache_aligned; +/** Graph worker models */ +enum rte_graph_worker_model { + RTE_GRAPH_MODEL_DEFAULT, + RTE_GRAPH_MODEL_RTC = RTE_GRAPH_MODEL_DEFAULT, + RTE_GRAPH_MODEL_MCORE_DISPATCH, + RTE_GRAPH_MODEL_LIST_END +}; + +RTE_DECLARE_PER_LCORE(enum rte_graph_worker_model, worker_model); + /** * @internal * @@ -490,6 +501,14 @@ rte_node_next_stream_move(struct rte_graph *graph, struct rte_node *src, } } +__rte_experimental +enum rte_graph_worker_model +rte_graph_worker_model_get(void); + +__rte_experimental +int +rte_graph_worker_model_set(enum rte_graph_worker_model model); + #ifdef __cplusplus } #endif diff --git a/lib/graph/version.map b/lib/graph/version.map index 13b838752d..eea73ec9ca 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -43,5 +43,8 @@ EXPERIMENTAL { rte_node_next_stream_put; rte_node_next_stream_move; + rte_graph_worker_model_set; + rte_graph_worker_model_get; + local: *; }; From patchwork Thu Mar 30 06:18:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125621 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2BE4C4286D; Thu, 30 Mar 2023 08:19:19 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B175042D29; Thu, 30 Mar 2023 08:18:53 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 8F00742D12 for ; Thu, 30 Mar 2023 08:18:51 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157131; x=1711693131; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=FktimrAftgdUvTqSuKu1EH+pk6wkvDCyjMuWxES05ng=; b=RfmFMeYqSP0BHfaXZnDhO6oHeyLKLoxk1Z9PWC77+BBvabaz96gOUkg2 AROoBrG0zWwdplK0gc4zmkRVhL3RwxwC95SqGtVVg6elwjhf5BTVLT/gd R1v8pedN2Wt3X/RQn6U8BGGuoURVr6fUSRqvKIQkcgZG1sEkEizTE+Oxh /gzFyIisWq9wK9OkVZ1CDWAB8Hh+ldLkonMw17ayeywbTuTDiNsxTjxLE WHcYPYMVySi21qUUJZvYAgoDwcHhokPECrLg53sGGAeuBExuwvswBGmNA zXrdP7ENGOqOm3hdIqnc2+HJ+pa+XiR5T6J2JpeYs+11tC6t5LNb7opgF g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530584" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530584" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:51 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176188" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176188" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:49 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 05/15] graph: introduce graph node core affinity API Date: Thu, 30 Mar 2023 15:18:24 +0900 Message-Id: <20230330061834.3118201-6-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: 
DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add lcore_id for node to hold affinity core id and impl rte_graph_model_dispatch_lcore_affinity_set to set node affinity with specific lcore. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_private.h | 1 + lib/graph/meson.build | 1 + lib/graph/node.c | 1 + lib/graph/rte_graph_model_dispatch.c | 31 ++++++++++++++++++++ lib/graph/rte_graph_model_dispatch.h | 43 ++++++++++++++++++++++++++++ lib/graph/version.map | 2 ++ 6 files changed, 79 insertions(+) create mode 100644 lib/graph/rte_graph_model_dispatch.c create mode 100644 lib/graph/rte_graph_model_dispatch.h diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index 7d1b30b8ac..409eed3284 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -50,6 +50,7 @@ struct node { STAILQ_ENTRY(node) next; /**< Next node in the list. */ char name[RTE_NODE_NAMESIZE]; /**< Name of the node. */ uint64_t flags; /**< Node configuration flag. */ + unsigned int lcore_id; /**< Node runs on the Lcore ID */ rte_node_process_t process; /**< Node process function. */ rte_node_init_t init; /**< Node init function. */ rte_node_fini_t fini; /**< Node fini function. */ diff --git a/lib/graph/meson.build b/lib/graph/meson.build index 9fab8243da..c729d984b6 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -16,6 +16,7 @@ sources = files( 'graph_populate.c', 'graph_pcap.c', 'rte_graph_worker.c', + 'rte_graph_model_dispatch.c', ) headers = files('rte_graph.h', 'rte_graph_worker.h') diff --git a/lib/graph/node.c b/lib/graph/node.c index 149414dcd9..339b4a0da5 100644 --- a/lib/graph/node.c +++ b/lib/graph/node.c @@ -100,6 +100,7 @@ __rte_node_register(const struct rte_node_register *reg) goto free; } + node->lcore_id = RTE_MAX_LCORE; node->id = node_id++; /* Add the node at tail */ diff --git a/lib/graph/rte_graph_model_dispatch.c b/lib/graph/rte_graph_model_dispatch.c new file mode 100644 index 0000000000..4a2f99496d --- /dev/null +++ b/lib/graph/rte_graph_model_dispatch.c @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#include "graph_private.h" +#include "rte_graph_model_dispatch.h" + +int +rte_graph_model_dispatch_lcore_affinity_set(const char *name, unsigned int lcore_id) +{ + struct node *node; + int ret = -EINVAL; + + if (lcore_id >= RTE_MAX_LCORE) + return ret; + + graph_spinlock_lock(); + + STAILQ_FOREACH(node, node_list_head_get(), next) { + if (strncmp(node->name, name, RTE_NODE_NAMESIZE) == 0) { + node->lcore_id = lcore_id; + ret = 0; + break; + } + } + + graph_spinlock_unlock(); + + return ret; +} + diff --git a/lib/graph/rte_graph_model_dispatch.h b/lib/graph/rte_graph_model_dispatch.h new file mode 100644 index 0000000000..179624e972 --- /dev/null +++ b/lib/graph/rte_graph_model_dispatch.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Intel Corporation + */ + +#ifndef _RTE_GRAPH_MODEL_DISPATCH_H_ +#define _RTE_GRAPH_MODEL_DISPATCH_H_ + +/** + * @file rte_graph_model_dispatch.h + * + * @warning + * @b EXPERIMENTAL: + * All functions in this file may be changed or removed without prior notice. + * + * This API allows to set core affinity with the node. + */ +#include "rte_graph_worker_common.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * Set lcore affinity with the node. + * + * @param name + * Valid node name. 
In the case of the cloned node, the name will be + * "parent node name" + "-" + name. + * @param lcore_id + * The lcore ID value. + * + * @return + * 0 on success, error otherwise. + */ +__rte_experimental +int rte_graph_model_dispatch_lcore_affinity_set(const char *name, + unsigned int lcore_id); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_GRAPH_MODEL_DISPATCH_H_ */ diff --git a/lib/graph/version.map b/lib/graph/version.map index eea73ec9ca..1f090be74e 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -46,5 +46,7 @@ EXPERIMENTAL { rte_graph_worker_model_set; rte_graph_worker_model_get; + rte_graph_model_dispatch_lcore_affinity_set; + local: *; }; From patchwork Thu Mar 30 06:18:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125622 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3345F4286D; Thu, 30 Mar 2023 08:19:25 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C78E142D17; Thu, 30 Mar 2023 08:18:55 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 9D02F42D1D for ; Thu, 30 Mar 2023 08:18:53 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157133; x=1711693133; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KvffPVut01BMQ6onAjS4VEulxb5OrYeG76XzuTei3Fs=; b=K1823g3a2wRpohh6Ly1q5oK4STcVPe7rMa3WGTGCEkUXUaxMapq0dnsN NDrSXLsrifJXHLpmEMnLtb9k+ZDc9gxS2bm9E7uS0zvEwyztW0QeqvzBb oJFG3J6ATzrN/g7K4GP29yFAU2FN5dP1kmQcmpNSv9daHIVQLzdPz5ANw K09rIMFdURFK7pGBP5QlxPtNSt5yroflmWhbZzGcdaz6eLFi7w/dsiaX1 pOA92k7BBNyZ85lFL+6V4TV+k9bgY1O9eDG4/lz1ybMGBJYnkWoo6+cDe 8gfF4TKG7FCj5MEQ9x7N5I4pzt+tqgsiv60BHgaDHsRGCSXVOKr7vJ4Eb g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530590" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530590" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:53 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176195" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176195" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:51 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 06/15] graph: introduce graph bind unbind API Date: Thu, 30 Mar 2023 15:18:25 +0900 Message-Id: <20230330061834.3118201-7-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add lcore_id for graph to hold affinity core id where graph would run on. 
Add bind/unbind API to set/unset graph affinity attribute. lcore_id will be set as MAX by default, it means not enable this attribute. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 59 +++++++++++++++++++++++++++++++++++++++ lib/graph/graph_private.h | 2 ++ lib/graph/rte_graph.h | 22 +++++++++++++++ lib/graph/version.map | 2 ++ 4 files changed, 85 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index a839a2803b..b39a99aac6 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -254,6 +254,64 @@ graph_mem_fixup_secondary(struct rte_graph *graph) return graph_mem_fixup_node_ctx(graph); } +static __rte_always_inline bool +graph_src_node_avail(struct graph *graph) +{ + struct graph_node *graph_node; + + STAILQ_FOREACH(graph_node, &graph->node_list, next) + if ((graph_node->node->flags & RTE_NODE_SOURCE_F) && + (graph_node->node->lcore_id == RTE_MAX_LCORE || + graph->lcore_id == graph_node->node->lcore_id)) + return true; + + return false; +} + +int +rte_graph_model_dispatch_core_bind(rte_graph_t id, int lcore) +{ + struct graph *graph; + + GRAPH_ID_CHECK(id); + if (!rte_lcore_is_enabled(lcore)) + SET_ERR_JMP(ENOLINK, fail, + "lcore %d not enabled\n", + lcore); + + STAILQ_FOREACH(graph, &graph_list, next) + if (graph->id == id) + break; + + graph->lcore_id = lcore; + graph->socket = rte_lcore_to_socket_id(lcore); + + /* check the availability of source node */ + if (!graph_src_node_avail(graph)) + graph->graph->head = 0; + + return 0; + +fail: + return -rte_errno; +} + +void +rte_graph_model_dispatch_core_unbind(rte_graph_t id) +{ + struct graph *graph; + + GRAPH_ID_CHECK(id); + STAILQ_FOREACH(graph, &graph_list, next) + if (graph->id == id) + break; + + graph->lcore_id = RTE_MAX_LCORE; + +fail: + return; +} + struct rte_graph * rte_graph_lookup(const char *name) { @@ -340,6 +398,7 @@ rte_graph_create(const char *name, struct rte_graph_param *prm) graph->src_node_count = src_node_count; graph->node_count = graph_nodes_count(graph); graph->id = graph_id; + graph->lcore_id = RTE_MAX_LCORE; graph->num_pkt_to_capture = prm->num_pkt_to_capture; if (prm->pcap_filename) rte_strscpy(graph->pcap_filename, prm->pcap_filename, RTE_GRAPH_PCAP_FILE_SZ); diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index 409eed3284..ad1d058945 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -98,6 +98,8 @@ struct graph { /**< Circular buffer mask for wrap around. */ rte_graph_t id; /**< Graph identifier. */ + unsigned int lcore_id; + /**< Lcore identifier where the graph prefer to run on. */ size_t mem_sz; /**< Memory size of the graph. */ int socket; diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h index c9a77297fc..c523809d1f 100644 --- a/lib/graph/rte_graph.h +++ b/lib/graph/rte_graph.h @@ -285,6 +285,28 @@ char *rte_graph_id_to_name(rte_graph_t id); __rte_experimental int rte_graph_export(const char *name, FILE *f); +/** + * Bind graph with specific lcore + * + * @param id + * Graph id to get the pointer of graph object + * @param lcore + * The lcore where the graph will run on + * @return + * 0 on success, error otherwise. + */ +__rte_experimental +int rte_graph_model_dispatch_core_bind(rte_graph_t id, int lcore); + +/** + * Unbind graph with lcore + * + * @param id + * Graph id to get the pointer of graph object + */ +__rte_experimental +void rte_graph_model_dispatch_core_unbind(rte_graph_t id); + /** * Get graph object from its name. 
* diff --git a/lib/graph/version.map b/lib/graph/version.map index 1f090be74e..7de6f08f59 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -18,6 +18,8 @@ EXPERIMENTAL { rte_graph_node_get_by_name; rte_graph_obj_dump; rte_graph_walk; + rte_graph_model_dispatch_core_bind; + rte_graph_model_dispatch_core_unbind; rte_graph_cluster_stats_create; rte_graph_cluster_stats_destroy; From patchwork Thu Mar 30 06:18:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125623 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5287D4286D; Thu, 30 Mar 2023 08:19:31 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0265342D36; Thu, 30 Mar 2023 08:18:58 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id A037742B8C for ; Thu, 30 Mar 2023 08:18:55 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157135; x=1711693135; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zTJpYeQM6U0NN7ViwmTuVa7KdfrvCst8+3FJ3am20IA=; b=EHeGUSdOcSWavasZ22rfrjo1EjUk7+cU+BGi54IgrWUZzH24ggTmj30M P4bKN9DNEWxUxj/hUedAxqyzyNmSZu8AyIb8HCI8UmapXhxAdhW/imCWZ SEmH3Zvpy5BGraDU87WHwB+S4Jaq8wSAYbGptuOEDnWpJ5Xmkj+lPq7Gb 1IMdCr38jwa0Xvjfc/YJG+W23+c25IgkPtIkNtYG1Et+tSKLsg2pUhjXh +ySSSikyt/D0PVGOWPdWHtcMOxWk9NHsMMFgrcHPhlo+P04fbJcPHkvRV 2ijuDb910C1OQQF4O6WkZv9vpGXZv0mjdxyIE3OY65F526kLQXiXINR0M A==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530602" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530602" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:55 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176198" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176198" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:53 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 07/15] graph: introduce graph clone API for other worker core Date: Thu, 30 Mar 2023 15:18:26 +0900 Message-Id: <20230330061834.3118201-8-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch adds graph API for supporting to clone the graph object for a specified worker core. The new graph will also clone all nodes. 
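Combined with the affinity and bind APIs from the two previous patches, a hedged setup sketch could look as follows (graph name, node name, lcore id and error handling are illustrative assumptions):

struct rte_graph_param prm = {0}; /* node patterns etc. filled by the application */
rte_graph_t parent, clone;

parent = rte_graph_create("app_graph", &prm);
clone = rte_graph_clone(parent, "wk1"); /* final name: "app_graph-wk1" */
if (clone == RTE_GRAPH_ID_INVALID)
	rte_exit(EXIT_FAILURE, "graph clone failed\n");

/* Run the cloned graph on lcore 2 (mcore dispatch model). */
if (rte_graph_model_dispatch_core_bind(clone, 2) < 0)
	rte_exit(EXIT_FAILURE, "core bind failed\n");

/* Pin a node to that lcore; "pkt_cls" is an illustrative node name. */
rte_graph_model_dispatch_lcore_affinity_set("pkt_cls", 2);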
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 110 ++++++++++++++++++++++++++++++++++++++ lib/graph/graph_private.h | 2 + lib/graph/rte_graph.h | 20 +++++++ lib/graph/version.map | 1 + 4 files changed, 133 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index b39a99aac6..90eaad0378 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -398,6 +398,7 @@ rte_graph_create(const char *name, struct rte_graph_param *prm) graph->src_node_count = src_node_count; graph->node_count = graph_nodes_count(graph); graph->id = graph_id; + graph->parent_id = RTE_GRAPH_ID_INVALID; graph->lcore_id = RTE_MAX_LCORE; graph->num_pkt_to_capture = prm->num_pkt_to_capture; if (prm->pcap_filename) @@ -462,6 +463,115 @@ rte_graph_destroy(rte_graph_t id) return rc; } +static int +clone_name(struct graph *graph, struct graph *parent_graph, const char *name) +{ + ssize_t sz, rc; + +#define SZ RTE_GRAPH_NAMESIZE + rc = rte_strscpy(graph->name, parent_graph->name, SZ); + if (rc < 0) + goto fail; + sz = rc; + rc = rte_strscpy(graph->name + sz, "-", RTE_MAX((int16_t)(SZ - sz), 0)); + if (rc < 0) + goto fail; + sz += rc; + sz = rte_strscpy(graph->name + sz, name, RTE_MAX((int16_t)(SZ - sz), 0)); + if (sz < 0) + goto fail; + + return 0; +fail: + rte_errno = E2BIG; + return -rte_errno; +} + +static rte_graph_t +graph_clone(struct graph *parent_graph, const char *name) +{ + struct graph_node *graph_node; + struct graph *graph; + + graph_spinlock_lock(); + + /* Don't allow to clone a node from a cloned graph */ + if (parent_graph->parent_id != RTE_GRAPH_ID_INVALID) + SET_ERR_JMP(EEXIST, fail, "A cloned graph is not allowed to be cloned"); + + /* Create graph object */ + graph = calloc(1, sizeof(*graph)); + if (graph == NULL) + SET_ERR_JMP(ENOMEM, fail, "Failed to calloc cloned graph object"); + + /* Naming ceremony of the new graph. 
name is node->name + "-" + name */ + if (clone_name(graph, parent_graph, name)) + goto free; + + /* Check for existence of duplicate graph */ + if (rte_graph_from_name(graph->name) != RTE_GRAPH_ID_INVALID) + SET_ERR_JMP(EEXIST, free, "Found duplicate graph %s", + graph->name); + + /* Clone nodes from parent graph firstly */ + STAILQ_INIT(&graph->node_list); + STAILQ_FOREACH(graph_node, &parent_graph->node_list, next) { + if (graph_node_add(graph, graph_node->node)) + goto graph_cleanup; + } + + /* Just update adjacency list of all nodes in the graph */ + if (graph_adjacency_list_update(graph)) + goto graph_cleanup; + + /* Initialize the graph object */ + graph->src_node_count = parent_graph->src_node_count; + graph->node_count = parent_graph->node_count; + graph->parent_id = parent_graph->id; + graph->lcore_id = parent_graph->lcore_id; + graph->socket = parent_graph->socket; + graph->id = graph_id; + + /* Allocate the Graph fast path memory and populate the data */ + if (graph_fp_mem_create(graph)) + goto graph_cleanup; + + /* Call init() of the all the nodes in the graph */ + if (graph_node_init(graph)) + goto graph_mem_destroy; + + /* All good, Lets add the graph to the list */ + graph_id++; + STAILQ_INSERT_TAIL(&graph_list, graph, next); + + graph_spinlock_unlock(); + return graph->id; + +graph_mem_destroy: + graph_fp_mem_destroy(graph); +graph_cleanup: + graph_cleanup(graph); +free: + free(graph); +fail: + graph_spinlock_unlock(); + return RTE_GRAPH_ID_INVALID; +} + +rte_graph_t +rte_graph_clone(rte_graph_t id, const char *name) +{ + struct graph *graph; + + GRAPH_ID_CHECK(id); + STAILQ_FOREACH(graph, &graph_list, next) + if (graph->id == id) + return graph_clone(graph, name); + +fail: + return RTE_GRAPH_ID_INVALID; +} + rte_graph_t rte_graph_from_name(const char *name) { diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index ad1d058945..d28a5af93e 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -98,6 +98,8 @@ struct graph { /**< Circular buffer mask for wrap around. */ rte_graph_t id; /**< Graph identifier. */ + rte_graph_t parent_id; + /**< Parent graph identifier. */ unsigned int lcore_id; /**< Lcore identifier where the graph prefer to run on. */ size_t mem_sz; diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h index c523809d1f..2f86c17de7 100644 --- a/lib/graph/rte_graph.h +++ b/lib/graph/rte_graph.h @@ -247,6 +247,26 @@ rte_graph_t rte_graph_create(const char *name, struct rte_graph_param *prm); __rte_experimental int rte_graph_destroy(rte_graph_t id); +/** + * Clone Graph. + * + * Clone a graph from static graph (graph created from rte_graph_create). And + * all cloned graphs attached to the parent graph MUST be destroyed together + * for fast schedule design limitation (stop ALL graph walk firstly). + * + * @param id + * Static graph id to clone from. + * @param name + * Name of the new graph. The library prepends the parent graph name to the + * user-specified name. The final graph name will be, + * "parent graph name" + "-" + name. + * + * @return + * Valid graph id on success, RTE_GRAPH_ID_INVALID otherwise. + */ +__rte_experimental +rte_graph_t rte_graph_clone(rte_graph_t id, const char *name); + /** * Get graph id from graph name. 
* diff --git a/lib/graph/version.map b/lib/graph/version.map index 7de6f08f59..aaa86f66ed 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -7,6 +7,7 @@ EXPERIMENTAL { rte_graph_create; rte_graph_destroy; + rte_graph_clone; rte_graph_dump; rte_graph_export; rte_graph_from_name; From patchwork Thu Mar 30 06:18:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125624 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5BE814286D; Thu, 30 Mar 2023 08:19:37 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 48F9042D39; Thu, 30 Mar 2023 08:19:00 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 9CFF742D32 for ; Thu, 30 Mar 2023 08:18:57 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157137; x=1711693137; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=i4bPHhLYEjnztA7cgmSSgSHPMH/gVksNCdgVwnohoaI=; b=h/980/JgDydGoLeXPWhI5oOK8ZVSCdQ+jE0EiaPBiupQMOMqPcwhSMf0 3SWMNFYSF3BB+73jX5DyLdYjCy/Nwr2lzvK+SZpzUdkW2zStX2q3qkc23 q/05N6+2wDgic1KtyJJ22J2D3Kqv2WMNqL3apt7kPWyDMdWSL1ZEE0RZd XqcBZTtenKznrvj1tFQ23Sfd8KcvuNEZ6Ek3xrAcl+Wi1MewYmyrHwUBT o6qJvP4E9c+klex8xWJGbu+lRY4MIaBgoN7ziDuWGOf16sssnnYwiAUS7 5k0krY4CBb4RABINrEiVni4B5q8f04ZmkiTDTZCqzqF2iuuXk4RhYyHFF w==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530609" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530609" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:57 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176206" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176206" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:55 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 08/15] graph: add struct for stream moving between cores Date: Thu, 30 Mar 2023 15:18:27 +0900 Message-Id: <20230330061834.3118201-9-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add graph_sched_wq_node to hold graph scheduling workqueue node. 
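For clarity, each work-queue entry carries the destination node as an offset into the receiving graph's fast-path memory plus up to one burst of object pointers. A preview sketch of how an entry is filled (the real producer code lands in the next patch of this series):

/* Sketch only: copy one node's pending stream into a work-queue entry. */
static void
wq_entry_fill(struct graph_sched_wq_node *e, const struct rte_node *node)
{
	e->node_off = node->off;                           /* node offset inside the graph */
	e->nb_objs = RTE_MIN(node->idx, RTE_DIM(e->objs)); /* at most RTE_GRAPH_BURST_SIZE */
	rte_memcpy(e->objs, node->objs, e->nb_objs * sizeof(void *));
}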
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 1 + lib/graph/graph_populate.c | 1 + lib/graph/graph_private.h | 12 ++++++++++++ lib/graph/rte_graph_worker_common.h | 21 +++++++++++++++++++++ 4 files changed, 35 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index 90eaad0378..dd3d69dbf7 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -284,6 +284,7 @@ rte_graph_model_dispatch_core_bind(rte_graph_t id, int lcore) break; graph->lcore_id = lcore; + graph->graph->lcore_id = graph->lcore_id; graph->socket = rte_lcore_to_socket_id(lcore); /* check the availability of source node */ diff --git a/lib/graph/graph_populate.c b/lib/graph/graph_populate.c index 2c0844ce92..7dcf1420c1 100644 --- a/lib/graph/graph_populate.c +++ b/lib/graph/graph_populate.c @@ -89,6 +89,7 @@ graph_nodes_populate(struct graph *_graph) } node->id = graph_node->node->id; node->parent_id = pid; + node->lcore_id = graph_node->node->lcore_id; nb_edges = graph_node->node->nb_edges; node->nb_edges = nb_edges; off += sizeof(struct rte_node); diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index d28a5af93e..b66b18ebbc 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -60,6 +60,18 @@ struct node { char next_nodes[][RTE_NODE_NAMESIZE]; /**< Names of next nodes. */ }; +/** + * @internal + * + * Structure that holds the graph scheduling workqueue node stream. + * Used for mcore dispatch model. + */ +struct graph_sched_wq_node { + rte_graph_off_t node_off; + uint16_t nb_objs; + void *objs[RTE_GRAPH_BURST_SIZE]; +} __rte_cache_aligned; + /** * @internal * diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index 1526da6e2c..dc0a0b5554 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -30,6 +30,13 @@ extern "C" { #endif +/** + * @internal + * + * Singly-linked list head for graph schedule run-queue. + */ +SLIST_HEAD(rte_graph_rq_head, rte_graph); + /** * @internal * @@ -41,6 +48,15 @@ struct rte_graph { uint32_t cir_mask; /**< Circular buffer wrap around mask. */ rte_node_t nb_nodes; /**< Number of nodes in the graph. */ rte_graph_off_t *cir_start; /**< Pointer to circular buffer. */ + /* Graph schedule */ + struct rte_graph_rq_head *rq __rte_cache_aligned; /* The run-queue */ + struct rte_graph_rq_head rq_head; /* The head for run-queue list */ + + SLIST_ENTRY(rte_graph) rq_next; /* The next for run-queue list */ + unsigned int lcore_id; /**< The graph running Lcore. */ + struct rte_ring *wq; /**< The work-queue for pending streams. */ + struct rte_mempool *mp; /**< The mempool for scheduling streams. */ + /* Graph schedule area */ rte_graph_off_t nodes_start; /**< Offset at which node memory starts. */ rte_graph_t id; /**< Graph identifier. */ int socket; /**< Socket ID where memory is allocated. */ @@ -74,6 +90,11 @@ struct rte_node { /** Original process function when pcap is enabled. */ rte_node_process_t original_process; + RTE_STD_C11 + union { + /* Fast schedule area for mcore dispatch model */ + unsigned int lcore_id; /**< Node running lcore. */ + }; /* Fast path area */ #define RTE_NODE_CTX_SZ 16 uint8_t ctx[RTE_NODE_CTX_SZ] __rte_cache_aligned; /**< Node Context. 
*/ From patchwork Thu Mar 30 06:18:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125625 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1B3084286D; Thu, 30 Mar 2023 08:19:43 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 836FA410F3; Thu, 30 Mar 2023 08:19:02 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id B3E3B42C76 for ; Thu, 30 Mar 2023 08:18:59 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157139; x=1711693139; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=waG3KCpTFEdTPBc0iI170hB0Nu9NYcVU40lI4bt5H4k=; b=bhDmxGe4uXru9pY59liWZweEMkDaBePAfhfuTtJxQPYxg7ITiLpPAAzm XpYliz56ByJno+uIc0Uzl7FNlmhbK8rjRvMKheDN/bkbN7YwtyErIsTkn zu0xq7E2vXfgWMSlWY6UHvlDlRy9ymx3jyoRfByr10zYYRs1pW6CdMvm1 qJSER2JgidEORWhytqPhZNjEtBJvrjpREPyLyEpKCE+fuRVl0MJNzg1AD puIcIVkUyANza97EewnJV+ycZXe4NqquaqFPrxzHMdZGzIjx1vGZ8X6Gd HGw5Sx7f62MkvEHFwp0WzjA51ITM9BvfgoQszrKZCDmNfNY5Csn29AQ1B Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530615" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530615" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:18:59 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176215" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176215" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:57 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 09/15] graph: introduce stream moving cross cores Date: Thu, 30 Mar 2023 15:18:28 +0900 Message-Id: <20230330061834.3118201-10-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch introduces key functions to allow a worker thread to enable enqueue and move streams of objects to the next nodes over different cores. 
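Putting the pieces together: a graph walking on one lcore that meets a node whose lcore_id belongs elsewhere copies the pending stream into a mempool entry and enqueues it on the ring of the clone bound to that lcore; the clone later drains its ring and processes those nodes locally. A hedged sketch of how a dispatch-model walk is expected to use these helpers (the actual walk function arrives later in this series):

__rte_graph_sched_wq_process(graph); /* drain streams handed to this lcore */

while (likely(head != graph->tail)) {
	node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]);

	if (node->lcore_id != graph->lcore_id)
		/* stream belongs to another lcore: move it to that graph's ring */
		__rte_graph_sched_node_enqueue(node, graph->rq);
	else
		__rte_node_process(graph, node);

	head = likely((int32_t)head > 0) ? head & mask : head;
}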
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_private.h | 27 +++++ lib/graph/meson.build | 2 +- lib/graph/rte_graph_model_dispatch.c | 145 +++++++++++++++++++++++++++ lib/graph/rte_graph_model_dispatch.h | 37 +++++++ lib/graph/version.map | 2 + 5 files changed, 212 insertions(+), 1 deletion(-) diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index b66b18ebbc..e1a2a4bfd8 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -366,4 +366,31 @@ void graph_dump(FILE *f, struct graph *g); */ void node_dump(FILE *f, struct node *n); +/** + * @internal + * + * Create the graph schedule work queue. And all cloned graphs attached to the + * parent graph MUST be destroyed together for fast schedule design limitation. + * + * @param _graph + * The graph object + * @param _parent_graph + * The parent graph object which holds the run-queue head. + * + * @return + * - 0: Success. + * - <0: Graph schedule work queue related error. + */ +int graph_sched_wq_create(struct graph *_graph, struct graph *_parent_graph); + +/** + * @internal + * + * Destroy the graph schedule work queue. + * + * @param _graph + * The graph object + */ +void graph_sched_wq_destroy(struct graph *_graph); + #endif /* _RTE_GRAPH_PRIVATE_H_ */ diff --git a/lib/graph/meson.build b/lib/graph/meson.build index c729d984b6..e21affa280 100644 --- a/lib/graph/meson.build +++ b/lib/graph/meson.build @@ -20,4 +20,4 @@ sources = files( ) headers = files('rte_graph.h', 'rte_graph_worker.h') -deps += ['eal', 'pcapng'] +deps += ['eal', 'pcapng', 'mempool', 'ring'] diff --git a/lib/graph/rte_graph_model_dispatch.c b/lib/graph/rte_graph_model_dispatch.c index 4a2f99496d..a300fefb85 100644 --- a/lib/graph/rte_graph_model_dispatch.c +++ b/lib/graph/rte_graph_model_dispatch.c @@ -5,6 +5,151 @@ #include "graph_private.h" #include "rte_graph_model_dispatch.h" +int +graph_sched_wq_create(struct graph *_graph, struct graph *_parent_graph) +{ + struct rte_graph *parent_graph = _parent_graph->graph; + struct rte_graph *graph = _graph->graph; + unsigned int wq_size; + + wq_size = GRAPH_SCHED_WQ_SIZE(graph->nb_nodes); + wq_size = rte_align32pow2(wq_size + 1); + + graph->wq = rte_ring_create(graph->name, wq_size, graph->socket, + RING_F_SC_DEQ); + if (graph->wq == NULL) + SET_ERR_JMP(EIO, fail, "Failed to allocate graph WQ"); + + graph->mp = rte_mempool_create(graph->name, wq_size, + sizeof(struct graph_sched_wq_node), + 0, 0, NULL, NULL, NULL, NULL, + graph->socket, MEMPOOL_F_SP_PUT); + if (graph->mp == NULL) + SET_ERR_JMP(EIO, fail_mp, + "Failed to allocate graph WQ schedule entry"); + + graph->lcore_id = _graph->lcore_id; + + if (parent_graph->rq == NULL) { + parent_graph->rq = &parent_graph->rq_head; + SLIST_INIT(parent_graph->rq); + } + + graph->rq = parent_graph->rq; + SLIST_INSERT_HEAD(graph->rq, graph, rq_next); + + return 0; + +fail_mp: + rte_ring_free(graph->wq); + graph->wq = NULL; +fail: + return -rte_errno; +} + +void +graph_sched_wq_destroy(struct graph *_graph) +{ + struct rte_graph *graph = _graph->graph; + + if (graph == NULL) + return; + + rte_ring_free(graph->wq); + graph->wq = NULL; + + rte_mempool_free(graph->mp); + graph->mp = NULL; +} + +static __rte_always_inline bool +__graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph) +{ + struct graph_sched_wq_node *wq_node; + uint16_t off = 0; + uint16_t size; + +submit_again: + if (rte_mempool_get(graph->mp, (void **)&wq_node) < 0) + goto fallback; + + size = RTE_MIN(node->idx, 
RTE_DIM(wq_node->objs)); + wq_node->node_off = node->off; + wq_node->nb_objs = size; + rte_memcpy(wq_node->objs, &node->objs[off], size * sizeof(void *)); + + while (rte_ring_mp_enqueue_bulk_elem(graph->wq, (void *)&wq_node, + sizeof(wq_node), 1, NULL) == 0) + rte_pause(); + + off += size; + node->idx -= size; + if (node->idx > 0) + goto submit_again; + + return true; + +fallback: + if (off != 0) + memmove(&node->objs[0], &node->objs[off], + node->idx * sizeof(void *)); + + return false; +} + +bool __rte_noinline +__rte_graph_sched_node_enqueue(struct rte_node *node, + struct rte_graph_rq_head *rq) +{ + const unsigned int lcore_id = node->lcore_id; + struct rte_graph *graph; + + SLIST_FOREACH(graph, rq, rq_next) + if (graph->lcore_id == lcore_id) + break; + + return graph != NULL ? __graph_sched_node_enqueue(node, graph) : false; +} + +void +__rte_graph_sched_wq_process(struct rte_graph *graph) +{ + struct graph_sched_wq_node *wq_node; + struct rte_mempool *mp = graph->mp; + struct rte_ring *wq = graph->wq; + uint16_t idx, free_space; + struct rte_node *node; + unsigned int i, n; + struct graph_sched_wq_node *wq_nodes[32]; + + n = rte_ring_sc_dequeue_burst_elem(wq, wq_nodes, sizeof(wq_nodes[0]), + RTE_DIM(wq_nodes), NULL); + if (n == 0) + return; + + for (i = 0; i < n; i++) { + wq_node = wq_nodes[i]; + node = RTE_PTR_ADD(graph, wq_node->node_off); + RTE_ASSERT(node->fence == RTE_GRAPH_FENCE); + idx = node->idx; + free_space = node->size - idx; + + if (unlikely(free_space < wq_node->nb_objs)) + __rte_node_stream_alloc_size(graph, node, node->size + wq_node->nb_objs); + + memmove(&node->objs[idx], wq_node->objs, wq_node->nb_objs * sizeof(void *)); + memset(wq_node->objs, 0, wq_node->nb_objs * sizeof(void *)); + node->idx = idx + wq_node->nb_objs; + + __rte_node_process(graph, node); + + wq_node->nb_objs = 0; + node->idx = 0; + } + + rte_mempool_put_bulk(mp, (void **)wq_nodes, n); +} + int rte_graph_model_dispatch_lcore_affinity_set(const char *name, unsigned int lcore_id) { diff --git a/lib/graph/rte_graph_model_dispatch.h b/lib/graph/rte_graph_model_dispatch.h index 179624e972..18fa7ce0ab 100644 --- a/lib/graph/rte_graph_model_dispatch.h +++ b/lib/graph/rte_graph_model_dispatch.h @@ -14,12 +14,49 @@ * * This API allows to set core affinity with the node. */ +#include +#include +#include +#include + #include "rte_graph_worker_common.h" #ifdef __cplusplus extern "C" { #endif +#define GRAPH_SCHED_WQ_SIZE_MULTIPLIER 8 +#define GRAPH_SCHED_WQ_SIZE(nb_nodes) \ + ((typeof(nb_nodes))((nb_nodes) * GRAPH_SCHED_WQ_SIZE_MULTIPLIER)) + +/** + * @internal + * + * Schedule the node to the right graph's work queue. + * + * @param node + * Pointer to the scheduled node object. + * @param rq + * Pointer to the scheduled run-queue for all graphs. + * + * @return + * True on success, false otherwise. + */ +__rte_experimental +bool __rte_noinline __rte_graph_sched_node_enqueue(struct rte_node *node, + struct rte_graph_rq_head *rq); + +/** + * @internal + * + * Process all nodes (streams) in the graph's work queue. + * + * @param graph + * Pointer to the graph object. + */ +__rte_experimental +void __rte_graph_sched_wq_process(struct rte_graph *graph); + /** * Set lcore affinity with the node. 
* diff --git a/lib/graph/version.map b/lib/graph/version.map index aaa86f66ed..d511133f39 100644 --- a/lib/graph/version.map +++ b/lib/graph/version.map @@ -48,6 +48,8 @@ EXPERIMENTAL { rte_graph_worker_model_set; rte_graph_worker_model_get; + __rte_graph_sched_wq_process; + __rte_graph_sched_node_enqueue; rte_graph_model_dispatch_lcore_affinity_set; From patchwork Thu Mar 30 06:18:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125626 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id ECB704286D; Thu, 30 Mar 2023 08:19:48 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AF87142D3A; Thu, 30 Mar 2023 08:19:04 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 8117C42D41 for ; Thu, 30 Mar 2023 08:19:01 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157141; x=1711693141; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0LebO7SGoi0DYaksMvOP4ILIt7ts9I6sVp4INv89Ees=; b=W3zU8oKvGjCLYTIiE3McCSHtqj2S98CTmuqSxDVdO/mM8BdwKCbSnk9A dwDdPeF9S+a2pE+l11CAB5W38uLP7Fxs0F+Vmqv7/xguIF/gEcLmlI+4G lKEEZx0kPzAJX4pI0Xl6PLAQ+dwhf8AWXodB4igls0TUnK1Xh+45gGhPT tX2QhDOgFIB6e7dwJOtdudMV71ve7Hc4dTx+2eGkTG3twsda8LcFsB+Jw X8dEUlYAx3Ht2VEzUj7tBMJmv3OTOtkC24HnJ9KpxegGBONtnEd1QWfbq TUcDifW0eRIXyO+alPvTN0KkcL5/EvIMAAy/6UZB54uL6Goiqo6Epp90J g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530621" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530621" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:19:01 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176225" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176225" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:18:59 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 10/15] graph: enable create and destroy graph scheduling workqueue Date: Thu, 30 Mar 2023 15:18:29 +0900 Message-Id: <20230330061834.3118201-11-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch enables to create and destroy scheduling workqueue into common graph operations. 
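For context while reviewing, a hedged sketch of the application-visible call flow this hook enables; identifiers come from this series, error handling and worker launch are omitted, and the clone name is illustrative.

#include <rte_graph.h>
#include <rte_graph_worker.h>

static void
dispatch_graph_lifecycle(struct rte_graph_param *conf)
{
	rte_graph_t parent, clone;

	/* Select the dispatch model before any graph is created. */
	rte_graph_worker_model_set(RTE_GRAPH_MODEL_MCORE_DISPATCH);

	parent = rte_graph_create("worker_0", conf);

	/* Under the dispatch model, cloning also creates the clone's
	 * schedule work queue via graph_sched_wq_create(). */
	clone = rte_graph_clone(parent, "1");

	/* ... bind clones to lcores and walk the graphs ... */

	/* Destroy tears the work queue down again; clones sharing a
	 * parent's run-queue are expected to be destroyed together. */
	rte_graph_destroy(clone);
	rte_graph_destroy(parent);
}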
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/lib/graph/graph.c b/lib/graph/graph.c index dd3d69dbf7..1f1ee9b622 100644 --- a/lib/graph/graph.c +++ b/lib/graph/graph.c @@ -443,6 +443,10 @@ rte_graph_destroy(rte_graph_t id) while (graph != NULL) { tmp = STAILQ_NEXT(graph, next); if (graph->id == id) { + /* Destroy the schedule work queue if has */ + if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) + graph_sched_wq_destroy(graph); + /* Call fini() of the all the nodes in the graph */ graph_node_fini(graph); /* Destroy graph fast path memory */ @@ -537,6 +541,11 @@ graph_clone(struct graph *parent_graph, const char *name) if (graph_fp_mem_create(graph)) goto graph_cleanup; + /* Create the graph schedule work queue */ + if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH && + graph_sched_wq_create(graph, parent_graph)) + goto graph_mem_destroy; + /* Call init() of the all the nodes in the graph */ if (graph_node_init(graph)) goto graph_mem_destroy; From patchwork Thu Mar 30 06:18:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125627 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 540114286D; Thu, 30 Mar 2023 08:19:54 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C1FAD42D49; Thu, 30 Mar 2023 08:19:05 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 6974F42D44 for ; Thu, 30 Mar 2023 08:19:03 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157143; x=1711693143; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=qBGBEn3KoxFQtl2B7/SEmvouD4rCBoDVkcMe8223W7w=; b=Oew/DNmeteG5t1unXeVTqqU1VXepjIO1sbQgktqZ3/bAn/bM8ZsXQ9Eu yc7Bnokm9G/HT0kPGegI4Kxl/tLPq2YXA53V7KDyAFv2VrduEODBqQmfs op1rYAIoQAiOKK4eSuGjATZnTzMehWBZdSIQ18cPxb03sVR5r27eMgcM9 +0tHOgid2Nu4UB6bSLc9noAtmH7NgALHc8qJFtUSyTnAhr35qhKglZ9Cc m7vCb8lWIj0S5pF3sjvfxoY0GwyQ/oOXf+iLKklr6f+jHy/e/BGxBbS6G ZYYHHupdSaRL05X46PePElmCGlhe5pKXVoVPGExgT3ce5cjIw1WfUfo5R w==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530625" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530625" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:19:03 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176229" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176229" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:19:01 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 11/15] graph: introduce graph walk by cross-core dispatch Date: Thu, 30 Mar 2023 15:18:30 +0900 Message-Id: <20230330061834.3118201-12-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: 
<20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch introduces the task scheduler mechanism to enable dispatching tasks to another worker cores. Currently, there is only a local work queue for one graph to walk. We introduce a scheduler worker queue in each worker core for dispatching tasks. It will perform the walk on scheduler work queue first, then handle the local work queue. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/rte_graph_model_dispatch.h | 42 ++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+) diff --git a/lib/graph/rte_graph_model_dispatch.h b/lib/graph/rte_graph_model_dispatch.h index 18fa7ce0ab..65b2cc6d87 100644 --- a/lib/graph/rte_graph_model_dispatch.h +++ b/lib/graph/rte_graph_model_dispatch.h @@ -73,6 +73,48 @@ __rte_experimental int rte_graph_model_dispatch_lcore_affinity_set(const char *name, unsigned int lcore_id); +/** + * Perform graph walk on the circular buffer and invoke the process function + * of the nodes and collect the stats. + * + * @param graph + * Graph pointer returned from rte_graph_lookup function. + * + * @see rte_graph_lookup() + */ +__rte_experimental +static inline void +rte_graph_walk_mcore_dispatch(struct rte_graph *graph) +{ + const rte_graph_off_t *cir_start = graph->cir_start; + const rte_node_t mask = graph->cir_mask; + uint32_t head = graph->head; + struct rte_node *node; + + if (graph->wq != NULL) + __rte_graph_sched_wq_process(graph); + + while (likely(head != graph->tail)) { + node = (struct rte_node *)RTE_PTR_ADD(graph, cir_start[(int32_t)head++]); + + /* skip the src nodes which not bind with current worker */ + if ((int32_t)head < 0 && node->lcore_id != graph->lcore_id) + continue; + + /* Schedule the node until all task/objs are done */ + if (node->lcore_id != RTE_MAX_LCORE && + graph->lcore_id != node->lcore_id && graph->rq != NULL && + __rte_graph_sched_node_enqueue(node, graph->rq)) + continue; + + __rte_node_process(graph, node); + + head = likely((int32_t)head > 0) ? 
head & mask : head; + } + + graph->tail = 0; +} + #ifdef __cplusplus } #endif From patchwork Thu Mar 30 06:18:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125628 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 523EA4286D; Thu, 30 Mar 2023 08:20:01 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E3F2942D53; Thu, 30 Mar 2023 08:19:29 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 96D6940E5A for ; Thu, 30 Mar 2023 08:19:25 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157165; x=1711693165; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=uEYD7AaT6JY85ASk8lTux/2iq3TH/nB6Z++lQS3gzjs=; b=OYmvRtOeNq5U6H+jd5RJg+ejgxne/y0NqoQrNSA58n/A0PAcdXMj7J5r SPpLDsxwvTVpxeF34tm6rE26trAvn6E631EMUYNdRHfNjvcqdnI+dkJT+ 02bL739OVW+8HQhJqhnRCiXyf+Cg6YvBBxsAkE1BZkBJ2cWVK+Tlq6Oql e1WfnsXGp1+5bKUH6T8MhulkqHH50kSoR32FLqwNoSqWMOXSFkT9YCJ2t rvE1pohrIsFXQta+9Mn+sQnERmuGFy2zw5FndDjdtAXB3MZ+56kQnJ92B zHnX/VjPXu/uUTlD70COL+o/57ucl4Mm8X7dppGIzFU3TEsjvsUz+bsGg A==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530633" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530633" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:19:04 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176237" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176237" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:19:03 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 12/15] graph: enable graph multicore dispatch scheduler model Date: Thu, 30 Mar 2023 15:18:31 +0900 Message-Id: <20230330061834.3118201-13-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch enables to chose new scheduler model. 
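A hedged sketch of what a worker loop looks like once rte_graph_walk() honours the configured model; force_quit is an illustrative application flag, not part of the library.

#include <stdbool.h>
#include <rte_graph_worker.h>

static volatile bool force_quit;

static int
graph_worker(void *arg)
{
	struct rte_graph *graph = arg;

	/* Each worker selects its model before walking; with
	 * RTE_GRAPH_MODEL_MCORE_DISPATCH the dispatch walk is used,
	 * otherwise the default RTC walk runs as before. */
	rte_graph_worker_model_set(RTE_GRAPH_MODEL_MCORE_DISPATCH);

	while (!force_quit)
		rte_graph_walk(graph);

	return 0;
}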
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/rte_graph_worker.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/lib/graph/rte_graph_worker.h b/lib/graph/rte_graph_worker.h index 7ea18ba80a..d608c7513e 100644 --- a/lib/graph/rte_graph_worker.h +++ b/lib/graph/rte_graph_worker.h @@ -10,6 +10,7 @@ extern "C" { #endif #include "rte_graph_model_rtc.h" +#include "rte_graph_model_dispatch.h" /** * Perform graph walk on the circular buffer and invoke the process function @@ -24,7 +25,13 @@ __rte_experimental static inline void rte_graph_walk(struct rte_graph *graph) { - rte_graph_walk_rtc(graph); + int model = rte_graph_worker_model_get(); + + if (model == RTE_GRAPH_MODEL_DEFAULT || + model == RTE_GRAPH_MODEL_RTC) + rte_graph_walk_rtc(graph); + else if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) + rte_graph_walk_mcore_dispatch(graph); } #ifdef __cplusplus From patchwork Thu Mar 30 06:18:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125629 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C539C4286D; Thu, 30 Mar 2023 08:20:06 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0948642D5A; Thu, 30 Mar 2023 08:19:31 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 41D7642B7E for ; Thu, 30 Mar 2023 08:19:26 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157166; x=1711693166; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=LVhckUaRkO+6TmsxwmLz/KPOPMLLS8UCjv+USsYZWwA=; b=Zqb+pVJKz6RYjA5j08/yn8AsbNxdVxtEuRAydb5w25o9s8zKC71ze4bY AtaFpXU1K1Iy+Hn1RygDDuZKD07k7MClWEfqT0duKXA9Cm9XXEh/lSFRg qyDVOywBBbLuP8RECtUsVvtVIPwZTmQcKq19/A8hU4LXtNhmAjRe36cIQ 8pr7DL8zVCAn+LWgrQDcrGkLA8psoFn4ujocEzTkDOd2tBI4aXjeS1r5K 4s11upPesApkejjThxNGXLqMkgibd42wfzagfxcb/YdS+4UjBgdVwy+1s 0/wK6lQ+Q4HPxOWdcfnBZC5cbOGrexdfgxkFEzoHKTWyPLy8WfAkt6OAA w==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530639" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530639" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:19:06 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176245" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176245" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:19:04 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 13/15] graph: add stats for cross-core dispatching Date: Thu, 30 Mar 2023 15:18:32 +0900 Message-Id: <20230330061834.3118201-14-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add stats for cross-core dispatching scheduler if stats collection is enabled. Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- lib/graph/graph_debug.c | 6 +++ lib/graph/graph_stats.c | 74 +++++++++++++++++++++++++--- lib/graph/rte_graph.h | 2 + lib/graph/rte_graph_model_dispatch.c | 3 ++ lib/graph/rte_graph_worker_common.h | 2 + 5 files changed, 79 insertions(+), 8 deletions(-) diff --git a/lib/graph/graph_debug.c b/lib/graph/graph_debug.c index b84412f5dd..7dcf07b080 100644 --- a/lib/graph/graph_debug.c +++ b/lib/graph/graph_debug.c @@ -74,6 +74,12 @@ rte_graph_obj_dump(FILE *f, struct rte_graph *g, bool all) fprintf(f, " size=%d\n", n->size); fprintf(f, " idx=%d\n", n->idx); fprintf(f, " total_objs=%" PRId64 "\n", n->total_objs); + if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) { + fprintf(f, " total_sched_objs=%" PRId64 "\n", + n->total_sched_objs); + fprintf(f, " total_sched_fail=%" PRId64 "\n", + n->total_sched_fail); + } fprintf(f, " total_calls=%" PRId64 "\n", n->total_calls); for (i = 0; i < n->nb_edges; i++) fprintf(f, " edge[%d] <%s>\n", i, diff --git a/lib/graph/graph_stats.c b/lib/graph/graph_stats.c index c0140ba922..aa22cc403c 100644 --- a/lib/graph/graph_stats.c +++ b/lib/graph/graph_stats.c @@ -40,13 +40,19 @@ struct rte_graph_cluster_stats { struct cluster_node clusters[]; } __rte_cache_aligned; +#define boarder_model_dispatch() \ + fprintf(f, "+-------------------------------+---------------+--------" \ + "-------+---------------+---------------+---------------+" \ + "---------------+---------------+-" \ + "----------+\n") + #define boarder() \ fprintf(f, "+-------------------------------+---------------+--------" \ "-------+---------------+---------------+---------------+-" \ "----------+\n") static inline void -print_banner(FILE *f) +print_banner_default(FILE *f) { boarder(); fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s\n", "|Node", "|calls", @@ -55,6 +61,27 @@ print_banner(FILE *f) boarder(); } +static inline void +print_banner_dispatch(FILE *f) +{ + boarder_model_dispatch(); + fprintf(f, "%-32s%-16s%-16s%-16s%-16s%-16s%-16s%-16s%-16s\n", + "|Node", "|calls", + "|objs", "|sched objs", "|sched fail", + "|realloc_count", "|objs/call", "|objs/sec(10E6)", + "|cycles/call|"); + boarder_model_dispatch(); +} + +static inline void +print_banner(FILE *f) +{ + if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) + print_banner_dispatch(f); + else + print_banner_default(f); +} + static inline void print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat) { @@ -76,11 +103,21 @@ print_node(FILE *f, const struct rte_graph_cluster_node_stats *stat) objs_per_sec = ts_per_hz ? 
(objs - prev_objs) / ts_per_hz : 0; objs_per_sec /= 1000000; - fprintf(f, - "|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64 - "|%-15.3f|%-15.6f|%-11.4f|\n", - stat->name, calls, objs, stat->realloc_count, objs_per_call, - objs_per_sec, cycles_per_call); + if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH) { + fprintf(f, + "|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64 + "|%-15" PRIu64 "|%-15" PRIu64 + "|%-15.3f|%-15.6f|%-11.4f|\n", + stat->name, calls, objs, stat->sched_objs, + stat->sched_fail, stat->realloc_count, objs_per_call, + objs_per_sec, cycles_per_call); + } else { + fprintf(f, + "|%-31s|%-15" PRIu64 "|%-15" PRIu64 "|%-15" PRIu64 + "|%-15.3f|%-15.6f|%-11.4f|\n", + stat->name, calls, objs, stat->realloc_count, objs_per_call, + objs_per_sec, cycles_per_call); + } } static int @@ -88,13 +125,20 @@ graph_cluster_stats_cb(bool is_first, bool is_last, void *cookie, const struct rte_graph_cluster_node_stats *stat) { FILE *f = cookie; + int model; + + model = rte_graph_worker_model_get(); if (unlikely(is_first)) print_banner(f); if (stat->objs) print_node(f, stat); - if (unlikely(is_last)) - boarder(); + if (unlikely(is_last)) { + if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) + boarder_model_dispatch(); + else + boarder(); + } return 0; }; @@ -332,13 +376,21 @@ static inline void cluster_node_arregate_stats(struct cluster_node *cluster) { uint64_t calls = 0, cycles = 0, objs = 0, realloc_count = 0; + uint64_t sched_objs = 0, sched_fail = 0; struct rte_graph_cluster_node_stats *stat = &cluster->stat; struct rte_node *node; rte_node_t count; + int model; + model = rte_graph_worker_model_get(); for (count = 0; count < cluster->nb_nodes; count++) { node = cluster->nodes[count]; + if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) { + sched_objs += node->total_sched_objs; + sched_fail += node->total_sched_fail; + } + calls += node->total_calls; objs += node->total_objs; cycles += node->total_cycles; @@ -348,6 +400,12 @@ cluster_node_arregate_stats(struct cluster_node *cluster) stat->calls = calls; stat->objs = objs; stat->cycles = cycles; + + if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) { + stat->sched_objs = sched_objs; + stat->sched_fail = sched_fail; + } + stat->ts = rte_get_timer_cycles(); stat->realloc_count = realloc_count; } diff --git a/lib/graph/rte_graph.h b/lib/graph/rte_graph.h index 2f86c17de7..7d77a790ac 100644 --- a/lib/graph/rte_graph.h +++ b/lib/graph/rte_graph.h @@ -208,6 +208,8 @@ struct rte_graph_cluster_node_stats { uint64_t prev_calls; /**< Previous number of calls. */ uint64_t prev_objs; /**< Previous number of processed objs. */ uint64_t prev_cycles; /**< Previous number of cycles. */ + uint64_t sched_objs; /**< Previous number of scheduled objs. */ + uint64_t sched_fail; /**< Previous number of failed schedule objs. */ uint64_t realloc_count; /**< Realloc count. 
*/ diff --git a/lib/graph/rte_graph_model_dispatch.c b/lib/graph/rte_graph_model_dispatch.c index a300fefb85..9db60eb463 100644 --- a/lib/graph/rte_graph_model_dispatch.c +++ b/lib/graph/rte_graph_model_dispatch.c @@ -83,6 +83,7 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph) rte_pause(); off += size; + node->total_sched_objs += size; node->idx -= size; if (node->idx > 0) goto submit_again; @@ -94,6 +95,8 @@ __graph_sched_node_enqueue(struct rte_node *node, struct rte_graph *graph) memmove(&node->objs[0], &node->objs[off], node->idx * sizeof(void *)); + node->total_sched_fail += node->idx; + return false; } diff --git a/lib/graph/rte_graph_worker_common.h b/lib/graph/rte_graph_worker_common.h index dc0a0b5554..d94983589c 100644 --- a/lib/graph/rte_graph_worker_common.h +++ b/lib/graph/rte_graph_worker_common.h @@ -95,6 +95,8 @@ struct rte_node { /* Fast schedule area for mcore dispatch model */ unsigned int lcore_id; /**< Node running lcore. */ }; + uint64_t total_sched_objs; /**< Number of objects scheduled. */ + uint64_t total_sched_fail; /**< Number of scheduled failure. */ /* Fast path area */ #define RTE_NODE_CTX_SZ 16 uint8_t ctx[RTE_NODE_CTX_SZ] __rte_cache_aligned; /**< Node Context. */ From patchwork Thu Mar 30 06:18:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125630 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E57484286D; Thu, 30 Mar 2023 08:20:11 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 144F442BC9; Thu, 30 Mar 2023 08:19:32 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id BD401410D3 for ; Thu, 30 Mar 2023 08:19:26 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157166; x=1711693166; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=eHLXI4NTC2tnJ48tUvIBBjy5etf2ju7CP7f28DoE7vk=; b=j45s72n4JknnOdLi3HxWd03KtK/FOe6eOLZv3wVHnOje2ZiwY5tNgHQy JiDzjy6mjrpEsOygTwsZAtzeIuciiWEZnE68GSSvJEJjiguiRKNL/ftIX MJwik/36Q/KgeAts+xl/E6WHmeT/0x9fXaYm2zfXMdz/syzI4KPGKXhgV IWAj1wLhqytFCHkQSetWDUUMjJQC7jZnXPkwYq+o97ZUs5yD0KE3QZOoS jZuy94FHPIFryd20SUOQj5w3oFNaEH62xsnyK+mLNw6L6vidwiD1/4V/t Fr4N01PNfm0v8TFUhtC6VPqZbdYBeBE8G51ZcAWKUtVZgAMj3Cb2kmArR g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530652" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530652" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:19:09 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176251" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176251" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:19:07 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 14/15] examples/l3fwd-graph: introduce multicore dispatch worker model Date: Thu, 30 Mar 2023 15:18:33 +0900 Message-Id: 
<20230330061834.3118201-15-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add new parameter "model" to choose dispatch or rtc worker model. And in dispatch model, the node will affinity to worker core successively. Note: only support one RX node for remote model in current implementation. ./dpdk-l3fwd-graph -l 8,9,10,11 -n 4 -- -p 0x1 --config="(0,0,9)" -P --model="dispatch" Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- examples/l3fwd-graph/main.c | 237 +++++++++++++++++++++++++++++------- 1 file changed, 195 insertions(+), 42 deletions(-) diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c index 5feeab4f0f..cfa78003f4 100644 --- a/examples/l3fwd-graph/main.c +++ b/examples/l3fwd-graph/main.c @@ -55,6 +55,9 @@ #define NB_SOCKETS 8 +/* Graph module */ +#define WORKER_MODEL_RTC "rtc" +#define WORKER_MODEL_MCORE_DISPATCH "dispatch" /* Static global variables used within this file. */ static uint16_t nb_rxd = RX_DESC_DEFAULT; static uint16_t nb_txd = TX_DESC_DEFAULT; @@ -88,6 +91,10 @@ struct lcore_rx_queue { char node_name[RTE_NODE_NAMESIZE]; }; +struct model_conf { + enum rte_graph_worker_model model; +}; + /* Lcore conf */ struct lcore_conf { uint16_t n_rx_queue; @@ -153,6 +160,19 @@ static struct ipv4_l3fwd_lpm_route ipv4_l3fwd_lpm_route_array[] = { {RTE_IPV4(198, 18, 6, 0), 24, 6}, {RTE_IPV4(198, 18, 7, 0), 24, 7}, }; +static int +check_worker_model_params(void) +{ + if (rte_graph_worker_model_get() == RTE_GRAPH_MODEL_MCORE_DISPATCH && + nb_lcore_params > 1) { + printf("Exceeded max number of lcore params for remote model: %hu\n", + nb_lcore_params); + return -1; + } + + return 0; +} + static int check_lcore_params(void) { @@ -276,6 +296,7 @@ print_usage(const char *prgname) " --eth-dest=X,MM:MM:MM:MM:MM:MM: Ethernet destination for " "port X\n" " --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\n" + " --model NAME: walking model name, dispatch or rtc(by default)\n" " --no-numa: Disable numa awareness\n" " --per-port-pool: Use separate buffer pool per port\n" " --pcap-enable: Enables pcap capture\n" @@ -318,6 +339,20 @@ parse_max_pkt_len(const char *pktlen) return len; } +static int +parse_worker_model(const char *model) +{ + if (strcmp(model, WORKER_MODEL_MCORE_DISPATCH) == 0) { + rte_graph_worker_model_set(RTE_GRAPH_MODEL_MCORE_DISPATCH); + return RTE_GRAPH_MODEL_MCORE_DISPATCH; + } else if (strcmp(model, WORKER_MODEL_RTC) == 0) + return RTE_GRAPH_MODEL_RTC; + + rte_exit(EXIT_FAILURE, "Invalid worker model: %s", model); + + return RTE_GRAPH_MODEL_LIST_END; +} + static int parse_portmask(const char *portmask) { @@ -434,6 +469,8 @@ static const char short_options[] = "p:" /* portmask */ #define CMD_LINE_OPT_PCAP_ENABLE "pcap-enable" #define CMD_LINE_OPT_NUM_PKT_CAP "pcap-num-cap" #define CMD_LINE_OPT_PCAP_FILENAME "pcap-file-name" +#define CMD_LINE_OPT_WORKER_MODEL "model" + enum { /* Long options mapped to a short option */ @@ -449,6 +486,7 @@ enum { CMD_LINE_OPT_PARSE_PCAP_ENABLE, CMD_LINE_OPT_PARSE_NUM_PKT_CAP, CMD_LINE_OPT_PCAP_FILENAME_CAP, + CMD_LINE_OPT_WORKER_MODEL_TYPE, }; static const struct option 
lgopts[] = { @@ -460,6 +498,7 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_PCAP_ENABLE, 0, 0, CMD_LINE_OPT_PARSE_PCAP_ENABLE}, {CMD_LINE_OPT_NUM_PKT_CAP, 1, 0, CMD_LINE_OPT_PARSE_NUM_PKT_CAP}, {CMD_LINE_OPT_PCAP_FILENAME, 1, 0, CMD_LINE_OPT_PCAP_FILENAME_CAP}, + {CMD_LINE_OPT_WORKER_MODEL, 1, 0, CMD_LINE_OPT_WORKER_MODEL_TYPE}, {NULL, 0, 0, 0}, }; @@ -551,6 +590,11 @@ parse_args(int argc, char **argv) printf("Pcap file name: %s\n", pcap_filename); break; + case CMD_LINE_OPT_WORKER_MODEL_TYPE: + printf("Use new worker model: %s\n", optarg); + parse_worker_model(optarg); + break; + default: print_usage(prgname); return -1; @@ -726,15 +770,15 @@ print_stats(void) static int graph_main_loop(void *conf) { + struct model_conf *mconf = conf; struct lcore_conf *qconf; struct rte_graph *graph; uint32_t lcore_id; - RTE_SET_USED(conf); - lcore_id = rte_lcore_id(); qconf = &lcore_conf[lcore_id]; graph = qconf->graph; + rte_graph_worker_model_set(mconf->model); if (!graph) { RTE_LOG(INFO, L3FWD_GRAPH, "Lcore %u has nothing to do\n", @@ -788,6 +832,141 @@ config_port_max_pkt_len(struct rte_eth_conf *conf, return 0; } +static void +graph_config_mcore_dispatch(struct rte_graph_param graph_conf) +{ + uint16_t nb_patterns = graph_conf.nb_node_patterns; + int worker_count = rte_lcore_count() - 1; + int main_lcore_id = rte_get_main_lcore(); + int worker_lcore = main_lcore_id; + rte_graph_t main_graph_id = 0; + struct rte_node *node_tmp; + struct lcore_conf *qconf; + struct rte_graph *graph; + rte_graph_t graph_id; + rte_graph_off_t off; + int n_rx_node = 0; + rte_node_t count; + int i, j; + int ret; + + for (j = 0; j < nb_lcore_params; j++) { + qconf = &lcore_conf[lcore_params[j].lcore_id]; + /* Add rx node patterns of all lcore */ + for (i = 0; i < qconf->n_rx_queue; i++) { + char *node_name = qconf->rx_queue_list[i].node_name; + + graph_conf.node_patterns[nb_patterns + n_rx_node + i] = node_name; + n_rx_node++; + ret = rte_graph_model_dispatch_lcore_affinity_set(node_name, + lcore_params[j].lcore_id); + if (ret == 0) + printf("Set node %s affinity to lcore %u\n", node_name, + lcore_params[j].lcore_id); + } + } + + graph_conf.nb_node_patterns = nb_patterns + n_rx_node; + graph_conf.socket_id = rte_lcore_to_socket_id(main_lcore_id); + + qconf = &lcore_conf[main_lcore_id]; + snprintf(qconf->name, sizeof(qconf->name), "worker_%u", + main_lcore_id); + + /* create main graph */ + main_graph_id = rte_graph_create(qconf->name, &graph_conf); + if (main_graph_id == RTE_GRAPH_ID_INVALID) + rte_exit(EXIT_FAILURE, + "rte_graph_create(): main_graph_id invalid for lcore %u\n", + main_lcore_id); + + qconf->graph_id = main_graph_id; + qconf->graph = rte_graph_lookup(qconf->name); + /* >8 End of graph initialization. 
*/ + if (!qconf->graph) + rte_exit(EXIT_FAILURE, + "rte_graph_lookup(): graph %s not found\n", + qconf->name); + + graph = qconf->graph; + rte_graph_foreach_node(count, off, graph, node_tmp) { + worker_lcore = rte_get_next_lcore(worker_lcore, true, 1); + + /* Need to set the node Lcore affinity before clone graph for each lcore */ + if (node_tmp->lcore_id == RTE_MAX_LCORE) { + ret = rte_graph_model_dispatch_lcore_affinity_set(node_tmp->name, + worker_lcore); + if (ret == 0) + printf("Set node %s affinity to lcore %u\n", + node_tmp->name, worker_lcore); + } + } + + worker_lcore = main_lcore_id; + for (i = 0; i < worker_count; i++) { + worker_lcore = rte_get_next_lcore(worker_lcore, true, 1); + + qconf = &lcore_conf[worker_lcore]; + snprintf(qconf->name, sizeof(qconf->name), "cloned-%u", worker_lcore); + graph_id = rte_graph_clone(main_graph_id, qconf->name); + ret = rte_graph_model_dispatch_core_bind(graph_id, worker_lcore); + if (ret == 0) + printf("bind graph %d to lcore %u\n", graph_id, worker_lcore); + + /* full cloned graph name */ + snprintf(qconf->name, sizeof(qconf->name), "%s", + rte_graph_id_to_name(graph_id)); + qconf->graph_id = graph_id; + qconf->graph = rte_graph_lookup(qconf->name); + if (!qconf->graph) + rte_exit(EXIT_FAILURE, + "Failed to lookup graph %s\n", + qconf->name); + continue; + } +} + +static void +graph_config_rtc(struct rte_graph_param graph_conf) +{ + uint16_t nb_patterns = graph_conf.nb_node_patterns; + struct lcore_conf *qconf; + rte_graph_t graph_id; + uint32_t lcore_id; + rte_edge_t i; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + qconf = &lcore_conf[lcore_id]; + /* Skip graph creation if no source exists */ + if (!qconf->n_rx_queue) + continue; + /* Add rx node patterns of this lcore */ + for (i = 0; i < qconf->n_rx_queue; i++) { + graph_conf.node_patterns[nb_patterns + i] = + qconf->rx_queue_list[i].node_name; + } + graph_conf.nb_node_patterns = nb_patterns + i; + graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id); + snprintf(qconf->name, sizeof(qconf->name), "worker_%u", + lcore_id); + graph_id = rte_graph_create(qconf->name, &graph_conf); + if (graph_id == RTE_GRAPH_ID_INVALID) + rte_exit(EXIT_FAILURE, + "rte_graph_create(): graph_id invalid for lcore %u\n", + lcore_id); + qconf->graph_id = graph_id; + qconf->graph = rte_graph_lookup(qconf->name); + /* >8 End of graph initialization. 
*/ + if (!qconf->graph) + rte_exit(EXIT_FAILURE, + "rte_graph_lookup(): graph %s not found\n", + qconf->name); + } +} + int main(int argc, char **argv) { @@ -808,10 +987,12 @@ main(int argc, char **argv) uint16_t queueid, portid, i; const char **node_patterns; struct lcore_conf *qconf; + struct model_conf mconf; uint16_t nb_graphs = 0; uint16_t nb_patterns; uint8_t rewrite_len; uint32_t lcore_id; + uint16_t model; int ret; /* Init EAL */ @@ -840,6 +1021,9 @@ main(int argc, char **argv) if (check_lcore_params() < 0) rte_exit(EXIT_FAILURE, "check_lcore_params() failed\n"); + if (check_worker_model_params() < 0) + rte_exit(EXIT_FAILURE, "check_worker_model_params() failed\n"); + ret = init_lcore_rx_queues(); if (ret < 0) rte_exit(EXIT_FAILURE, "init_lcore_rx_queues() failed\n"); @@ -1079,51 +1263,18 @@ main(int argc, char **argv) memset(&graph_conf, 0, sizeof(graph_conf)); graph_conf.node_patterns = node_patterns; + graph_conf.nb_node_patterns = nb_patterns; /* Pcap config */ graph_conf.pcap_enable = pcap_trace_enable; graph_conf.num_pkt_to_capture = packet_to_capture; graph_conf.pcap_filename = pcap_filename; - for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { - rte_graph_t graph_id; - rte_edge_t i; - - if (rte_lcore_is_enabled(lcore_id) == 0) - continue; - - qconf = &lcore_conf[lcore_id]; - - /* Skip graph creation if no source exists */ - if (!qconf->n_rx_queue) - continue; - - /* Add rx node patterns of this lcore */ - for (i = 0; i < qconf->n_rx_queue; i++) { - graph_conf.node_patterns[nb_patterns + i] = - qconf->rx_queue_list[i].node_name; - } - - graph_conf.nb_node_patterns = nb_patterns + i; - graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id); - - snprintf(qconf->name, sizeof(qconf->name), "worker_%u", - lcore_id); - - graph_id = rte_graph_create(qconf->name, &graph_conf); - if (graph_id == RTE_GRAPH_ID_INVALID) - rte_exit(EXIT_FAILURE, - "rte_graph_create(): graph_id invalid" - " for lcore %u\n", lcore_id); - - qconf->graph_id = graph_id; - qconf->graph = rte_graph_lookup(qconf->name); - /* >8 End of graph initialization. */ - if (!qconf->graph) - rte_exit(EXIT_FAILURE, - "rte_graph_lookup(): graph %s not found\n", - qconf->name); - } + model = rte_graph_worker_model_get(); + if (model == RTE_GRAPH_MODEL_MCORE_DISPATCH) + graph_config_mcore_dispatch(graph_conf); + else + graph_config_rtc(graph_conf); memset(&rewrite_data, 0, sizeof(rewrite_data)); rewrite_len = sizeof(rewrite_data); @@ -1174,8 +1325,10 @@ main(int argc, char **argv) } /* >8 End of adding route to ip4 graph infa. 
*/ + mconf.model = model; /* Launch per-lcore init on every worker lcore */ - rte_eal_mp_remote_launch(graph_main_loop, NULL, SKIP_MAIN); + rte_eal_mp_remote_launch(graph_main_loop, &mconf, + SKIP_MAIN); /* Accumulate and print stats on main until exit */ if (rte_graph_has_stats_feature()) From patchwork Thu Mar 30 06:18:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Yan, Zhirun" X-Patchwork-Id: 125631 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 896CB4286D; Thu, 30 Mar 2023 08:20:17 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 10C6A42C4D; Thu, 30 Mar 2023 08:19:33 +0200 (CEST) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by mails.dpdk.org (Postfix) with ESMTP id 6716040E25 for ; Thu, 30 Mar 2023 08:19:27 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1680157167; x=1711693167; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=cVBFyFydx/huomX4CzztdGz9CKHuz3+cEWi6ghu8TOU=; b=D7ek+zknL3k9NLYVul86rfX34Qk6qhgAONOp8UFpqs0vbH5VL96YL43l QGML35/Zo6iTFZTiCvGBqL12L2tNdLV8dTHft5LzTSJSUix1B/AB93FZF N/dEiurb1ZpXry+/ddP2m4law/lyuyVb2B9ggqp4aDhtoVV/ESZe3E2Cz TaCN9RMHvbndFhcUbBT5pkiIXlwM+IBg5lWJP3xD0uPSLvTinCWLfoR5f peCJQn8bp0D8P5XWDwpxNCWQLA5Hdk/Bf2UPVmtKWGRrEbqDjQLPcTbje 3nhsVqWvuIMHlDR2uw/iCYEzurfsetRG+6UAA1KDnDTzOT4bGPAUi1jdt g==; X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="343530665" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="343530665" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Mar 2023 23:19:11 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10664"; a="828176261" X-IronPort-AV: E=Sophos;i="5.98,303,1673942400"; d="scan'208";a="828176261" Received: from dpdk-zhirun-lmm.sh.intel.com ([10.67.119.68]) by fmsmga001.fm.intel.com with ESMTP; 29 Mar 2023 23:19:09 -0700 From: Zhirun Yan To: dev@dpdk.org, jerinj@marvell.com, kirankumark@marvell.com, ndabilpuram@marvell.com Cc: cunming.liang@intel.com, haiyue.wang@intel.com, Zhirun Yan Subject: [PATCH v4 15/15] doc: update multicore dispatch model in graph guides Date: Thu, 30 Mar 2023 15:18:34 +0900 Message-Id: <20230330061834.3118201-16-zhirun.yan@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20230330061834.3118201-1-zhirun.yan@intel.com> References: <20230329064340.2550530-1-zhirun.yan@intel.com> <20230330061834.3118201-1-zhirun.yan@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Update graph documentation to introduce new multicore dispatch model. 
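As a quick reference for reviewers, a hedged sketch of the recipe the updated guide describes for the dispatch model: pin nodes to lcores, clone the graph per worker and bind each clone to its core. Node and clone names follow the documentation example; error checking is omitted.

#include <rte_graph.h>
#include <rte_graph_model_dispatch.h>

static void
setup_dispatch_graphs(rte_graph_t parent, unsigned int lcore1,
		      unsigned int lcore2)
{
	rte_graph_t clone;

	/* Pin nodes to lcores: node-1/node-3 on lcore1, node-2 on lcore2. */
	rte_graph_model_dispatch_lcore_affinity_set("node-1", lcore1);
	rte_graph_model_dispatch_lcore_affinity_set("node-3", lcore1);
	rte_graph_model_dispatch_lcore_affinity_set("node-2", lcore2);

	/* Clone the parent graph once per worker and bind each clone. */
	clone = rte_graph_clone(parent, "w1");
	rte_graph_model_dispatch_core_bind(clone, lcore1);

	clone = rte_graph_clone(parent, "w2");
	rte_graph_model_dispatch_core_bind(clone, lcore2);
}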
Signed-off-by: Haiyue Wang Signed-off-by: Cunming Liang Signed-off-by: Zhirun Yan --- doc/guides/prog_guide/graph_lib.rst | 59 +++++++++++++++++++++++++++-- 1 file changed, 55 insertions(+), 4 deletions(-) diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst index 1cfdc86433..72e26f3a5a 100644 --- a/doc/guides/prog_guide/graph_lib.rst +++ b/doc/guides/prog_guide/graph_lib.rst @@ -189,14 +189,65 @@ In the above example, A graph object will be created with ethdev Rx node of port 0 and queue 0, all ipv4* nodes in the system, and ethdev tx node of all ports. -Multicore graph processing -~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the current graph library implementation, specifically, -``rte_graph_walk()`` and ``rte_node_enqueue*`` fast path API functions +graph model chossing +~~~~~~~~~~~~~~~~~~~~ +Currently, there are 2 different walking model. Use +``rte_graph_worker_model_set()`` to set the walking model. + +RTC (Run-To-Completion) +^^^^^^^^^^^^^^^^^^^^^^^ +This is the default graph walking model. specifically, +``rte_graph_walk_rtc()`` and ``rte_node_enqueue*`` fast path API functions are designed to work on single-core to have better performance. The fast path API works on graph object, So the multi-core graph processing strategy would be to create graph object PER WORKER. +Example: + +Graph: node-0 -> node-1 -> node-2 @Core0. + +.. code-block:: diff + + + - - - - - - - - - - - - - - - - - - - - - + + ' Core #0 ' + ' ' + ' +--------+ +---------+ +--------+ ' + ' | Node-0 | --> | Node-1 | --> | Node-2 | ' + ' +--------+ +---------+ +--------+ ' + ' ' + + - - - - - - - - - - - - - - - - - - - - - + + +Dispatch model +^^^^^^^^^^^^^^ +The dispatch model enables a cross-core dispatching mechanism which employs +a scheduling work-queue to dispatch streams to other worker cores which +being associated with the destination node. + +Use ``rte_graph_model_dispatch_lcore_affinity_set()`` to set lcore affinity +with the node. +Each worker core will have a graph repetition. Use ``rte_graph_clone()`` to +clone graph for each worker and use``rte_graph_model_dispatch_core_bind()`` +to bind graph with the worker core. + +Example: + +Graph topo: node-0 -> Core1; node-1 -> node-2; node-2 -> node-3. +Config graph: node-0 @Core0; node-1/3 @Core1; node-2 @Core2. + +.. code-block:: diff + + + - - - - - -+ +- - - - - - - - - - - - - + + - - - - - -+ + ' Core #0 ' ' Core #1 ' ' Core #2 ' + ' ' ' ' ' ' + ' +--------+ ' ' +--------+ +--------+ ' ' +--------+ ' + ' | Node-0 | - - - ->| Node-1 | | Node-3 |<- - - - | Node-2 | ' + ' +--------+ ' ' +--------+ +--------+ ' ' +--------+ ' + ' ' ' | ' ' ^ ' + + - - - - - -+ +- - -|- - - - - - - - - - + + - - -|- - -+ + | | + + - - - - - - - - - - - - - - - - + + + In fast path ~~~~~~~~~~~~ Typical fast-path code looks like below, where the application