From patchwork Fri Apr 26 12:22:03 2024
X-Patchwork-Submitter: Nitin Saxena
X-Patchwork-Id: 139703
X-Patchwork-Delegate: thomas@monjalon.net
From: Nitin Saxena
To: Jerin Jacob, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan
CC:
Subject: [RFC PATCH 2/2] graph: add ip4 output feature arc
Date: Fri, 26 Apr 2024 17:52:03 +0530
Message-ID: <20240426122203.32357-3-nsaxena@marvell.com>
In-Reply-To: <20240426122203.32357-1-nsaxena@marvell.com>
References: <20240426122203.32357-1-nsaxena@marvell.com>
X-Mailer: git-send-email 2.25.1

Signed-off-by: Nitin Saxena
Change-Id: I80021403c343354c7e494c6bc79b83b0d0fe6b7c
---
 lib/node/ip4_rewrite.c      | 278 ++++++++++++++++++++++++++++--------
 lib/node/ip4_rewrite_priv.h |  10 +-
 lib/node/node_private.h     |  10 +-
 lib/node/rte_node_ip4_api.h |   3 +
 4 files changed, 233 insertions(+), 68 deletions(-)

diff --git a/lib/node/ip4_rewrite.c b/lib/node/ip4_rewrite.c
index 34a920df5e..60efd6b171 100644
--- a/lib/node/ip4_rewrite.c
+++ b/lib/node/ip4_rewrite.c
@@ -20,6 +20,7 @@ struct ip4_rewrite_node_ctx {
 	int mbuf_priv1_off;
 	/* Cached next index */
 	uint16_t next_index;
+	rte_graph_feature_arc_t output_feature_arc;
 };
 
 static struct ip4_rewrite_node_main *ip4_rewrite_nm;
@@ -30,21 +31,34 @@ static struct ip4_rewrite_node_main *ip4_rewrite_nm;
 #define IP4_REWRITE_NODE_PRIV1_OFF(ctx) \
 	(((struct ip4_rewrite_node_ctx *)ctx)->mbuf_priv1_off)
 
+#define IP4_REWRITE_NODE_OUTPUT_FEATURE_ARC(ctx) \
+	(((struct ip4_rewrite_node_ctx *)ctx)->output_feature_arc)
+
 static uint16_t
 ip4_rewrite_node_process(struct rte_graph *graph, struct rte_node *node,
 			 void **objs, uint16_t nb_objs)
 {
+	rte_graph_feature_arc_t out_feature_arc = IP4_REWRITE_NODE_OUTPUT_FEATURE_ARC(node->ctx);
+	uint16_t next0 = 0, next1 = 0, next2 = 0, next3 = 0, next_index;
 	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3, **pkts;
 	struct ip4_rewrite_nh_header *nh = ip4_rewrite_nm->nh;
 	const int dyn = IP4_REWRITE_NODE_PRIV1_OFF(node->ctx);
-	uint16_t next0, next1, next2, next3, next_index;
-	struct rte_ipv4_hdr *ip0, *ip1, *ip2, *ip3;
 	uint16_t n_left_from, held = 0, last_spec = 0;
+	struct rte_ipv4_hdr *ip0, *ip1, *ip2, *ip3;
+	int b0_feat, b1_feat, b2_feat, b3_feat;
+	rte_graph_feature_t f0, f1, f2, f3;
+	uint16_t tx0, tx1, tx2, tx3;
+	int64_t fd0, fd1, fd2, fd3;
 	void *d0, *d1, *d2, *d3;
 	void **to_next, **from;
 	rte_xmm_t priv01;
 	rte_xmm_t priv23;
-	int i;
+	int i, has_feat;
+
+	RTE_SET_USED(fd0);
+	RTE_SET_USED(fd1);
+	RTE_SET_USED(fd2);
+	RTE_SET_USED(fd3);
 
 	/* Speculative next as last next */
 	next_index = IP4_REWRITE_NODE_LAST_NEXT(node->ctx);
@@ -83,54 +97,167 @@ ip4_rewrite_node_process(struct rte_graph *graph, struct rte_node *node,
 		priv23.u64[0] = node_mbuf_priv1(mbuf2, dyn)->u;
 		priv23.u64[1] = node_mbuf_priv1(mbuf3, dyn)->u;
 
-		/* Increment checksum by one. */
-		priv01.u32[1] += rte_cpu_to_be_16(0x0100);
-		priv01.u32[3] += rte_cpu_to_be_16(0x0100);
-		priv23.u32[1] += rte_cpu_to_be_16(0x0100);
-		priv23.u32[3] += rte_cpu_to_be_16(0x0100);
-
-		/* Update ttl,cksum rewrite ethernet hdr on mbuf0 */
-		d0 = rte_pktmbuf_mtod(mbuf0, void *);
-		rte_memcpy(d0, nh[priv01.u16[0]].rewrite_data,
-			   nh[priv01.u16[0]].rewrite_len);
-
-		next0 = nh[priv01.u16[0]].tx_node;
-		ip0 = (struct rte_ipv4_hdr *)((uint8_t *)d0 +
-					      sizeof(struct rte_ether_hdr));
-		ip0->time_to_live = priv01.u16[1] - 1;
-		ip0->hdr_checksum = priv01.u16[2] + priv01.u16[3];
-
-		/* Update ttl,cksum rewrite ethernet hdr on mbuf1 */
-		d1 = rte_pktmbuf_mtod(mbuf1, void *);
-		rte_memcpy(d1, nh[priv01.u16[4]].rewrite_data,
-			   nh[priv01.u16[4]].rewrite_len);
-
-		next1 = nh[priv01.u16[4]].tx_node;
-		ip1 = (struct rte_ipv4_hdr *)((uint8_t *)d1 +
-					      sizeof(struct rte_ether_hdr));
-		ip1->time_to_live = priv01.u16[5] - 1;
-		ip1->hdr_checksum = priv01.u16[6] + priv01.u16[7];
-
-		/* Update ttl,cksum rewrite ethernet hdr on mbuf2 */
-		d2 = rte_pktmbuf_mtod(mbuf2, void *);
-		rte_memcpy(d2, nh[priv23.u16[0]].rewrite_data,
-			   nh[priv23.u16[0]].rewrite_len);
-		next2 = nh[priv23.u16[0]].tx_node;
-		ip2 = (struct rte_ipv4_hdr *)((uint8_t *)d2 +
-					      sizeof(struct rte_ether_hdr));
-		ip2->time_to_live = priv23.u16[1] - 1;
-		ip2->hdr_checksum = priv23.u16[2] + priv23.u16[3];
-
-		/* Update ttl,cksum rewrite ethernet hdr on mbuf3 */
-		d3 = rte_pktmbuf_mtod(mbuf3, void *);
-		rte_memcpy(d3, nh[priv23.u16[4]].rewrite_data,
-			   nh[priv23.u16[4]].rewrite_len);
-
-		next3 = nh[priv23.u16[4]].tx_node;
-		ip3 = (struct rte_ipv4_hdr *)((uint8_t *)d3 +
-					      sizeof(struct rte_ether_hdr));
-		ip3->time_to_live = priv23.u16[5] - 1;
-		ip3->hdr_checksum = priv23.u16[6] + priv23.u16[7];
+		f0 = nh[priv01.u16[0]].nh_feature;
+		f1 = nh[priv01.u16[4]].nh_feature;
+		f2 = nh[priv23.u16[0]].nh_feature;
+		f3 = nh[priv23.u16[4]].nh_feature;
+
+		tx0 = nh[priv01.u16[0]].tx_node - 1;
+		tx1 = nh[priv01.u16[4]].tx_node - 1;
+		tx2 = nh[priv23.u16[0]].tx_node - 1;
+		tx3 = nh[priv23.u16[4]].tx_node - 1;
+
+		b0_feat = rte_graph_feature_arc_has_feature(out_feature_arc, tx0, &f0);
+		b1_feat = rte_graph_feature_arc_has_feature(out_feature_arc, tx1, &f1);
+		b2_feat = rte_graph_feature_arc_has_feature(out_feature_arc, tx2, &f2);
+		b3_feat = rte_graph_feature_arc_has_feature(out_feature_arc, tx3, &f3);
+
+		has_feat = b0_feat | b1_feat | b2_feat | b3_feat;
+
+		if (unlikely(has_feat)) {
+			/* Prefetch feature data */
+			rte_graph_feature_data_prefetch(out_feature_arc, tx0, f0);
+			rte_graph_feature_data_prefetch(out_feature_arc, tx1, f1);
+			rte_graph_feature_data_prefetch(out_feature_arc, tx2, f2);
+			rte_graph_feature_data_prefetch(out_feature_arc, tx3, f3);
+
+			/* Save feature into mbuf */
+			node_mbuf_priv1(mbuf0, dyn)->current_feature = f0;
+			node_mbuf_priv1(mbuf1, dyn)->current_feature = f1;
+			node_mbuf_priv1(mbuf2, dyn)->current_feature = f2;
+			node_mbuf_priv1(mbuf3, dyn)->current_feature = f3;
+
+			/* Save index into mbuf for next feature node */
+			node_mbuf_priv1(mbuf0, dyn)->index = tx0;
+			node_mbuf_priv1(mbuf1, dyn)->index = tx1;
+			node_mbuf_priv1(mbuf2, dyn)->index = tx2;
+			node_mbuf_priv1(mbuf3, dyn)->index = tx3;
+
+			/* Do all of them have a feature enabled? */
+			has_feat = b0_feat && b1_feat && b2_feat && b3_feat;
+			if (has_feat) {
+				rte_graph_feature_arc_feature_data_get(out_feature_arc,
+								       f0, tx0, &next0, &fd0);
+				rte_graph_feature_arc_feature_data_get(out_feature_arc,
+								       f1, tx1, &next1, &fd1);
+				rte_graph_feature_arc_feature_data_get(out_feature_arc,
+								       f2, tx2, &next2, &fd2);
+				rte_graph_feature_arc_feature_data_get(out_feature_arc,
+								       f3, tx3, &next3, &fd3);
+			} else {
+				if (b0_feat) {
+					rte_graph_feature_arc_feature_data_get(out_feature_arc, f0,
+										tx0, &next0, &fd0);
+				} else {
+					priv01.u32[1] += rte_cpu_to_be_16(0x0100);
+					/* Update ttl,cksum rewrite ethernet hdr on mbuf0 */
+					d0 = rte_pktmbuf_mtod(mbuf0, void *);
+					rte_memcpy(d0, nh[priv01.u16[0]].rewrite_data,
+						   nh[priv01.u16[0]].rewrite_len);
+
+					next0 = tx0 + 1;
+					ip0 = (struct rte_ipv4_hdr *)((uint8_t *)d0 +
+								      sizeof(struct rte_ether_hdr));
+					ip0->time_to_live = priv01.u16[1] - 1;
+					ip0->hdr_checksum = priv01.u16[2] + priv01.u16[3];
+				}
+				if (b1_feat) {
+					rte_graph_feature_arc_feature_data_get(out_feature_arc, f1,
+										tx1, &next1, &fd1);
+				} else {
+					priv01.u32[3] += rte_cpu_to_be_16(0x0100);
+					/* Update ttl,cksum rewrite ethernet hdr on mbuf1 */
+					d1 = rte_pktmbuf_mtod(mbuf1, void *);
+					rte_memcpy(d1, nh[priv01.u16[4]].rewrite_data,
+						   nh[priv01.u16[4]].rewrite_len);
+
+					next1 = tx1 + 1;
+					ip1 = (struct rte_ipv4_hdr *)((uint8_t *)d1 +
+								      sizeof(struct rte_ether_hdr));
+					ip1->time_to_live = priv01.u16[5] - 1;
+					ip1->hdr_checksum = priv01.u16[6] + priv01.u16[7];
+				}
+				if (b2_feat) {
+					rte_graph_feature_arc_feature_data_get(out_feature_arc, f2,
+										tx2, &next2, &fd2);
+				} else {
+					priv23.u32[1] += rte_cpu_to_be_16(0x0100);
+					/* Update ttl,cksum rewrite ethernet hdr on mbuf2 */
+					d2 = rte_pktmbuf_mtod(mbuf2, void *);
+					rte_memcpy(d2, nh[priv23.u16[0]].rewrite_data,
+						   nh[priv23.u16[0]].rewrite_len);
+					next2 = tx2 + 1;
+					ip2 = (struct rte_ipv4_hdr *)((uint8_t *)d2 +
+								      sizeof(struct rte_ether_hdr));
+					ip2->time_to_live = priv23.u16[1] - 1;
+					ip2->hdr_checksum = priv23.u16[2] + priv23.u16[3];
+				}
+				if (b3_feat) {
+					rte_graph_feature_arc_feature_data_get(out_feature_arc, f3,
+										tx3, &next3, &fd3);
+				} else {
+					priv23.u32[3] += rte_cpu_to_be_16(0x0100);
+					/* Update ttl,cksum rewrite ethernet hdr on mbuf3 */
+					d3 = rte_pktmbuf_mtod(mbuf3, void *);
+					rte_memcpy(d3, nh[priv23.u16[4]].rewrite_data,
+						   nh[priv23.u16[4]].rewrite_len);
+					next3 = tx3 + 1;
+					ip3 = (struct rte_ipv4_hdr *)((uint8_t *)d3 +
+								      sizeof(struct rte_ether_hdr));
+					ip3->time_to_live = priv23.u16[5] - 1;
+					ip3->hdr_checksum = priv23.u16[6] + priv23.u16[7];
+				}
+			}
+		} else {
+			/* Increment checksum by one. */
+			priv01.u32[1] += rte_cpu_to_be_16(0x0100);
+			priv01.u32[3] += rte_cpu_to_be_16(0x0100);
+			priv23.u32[1] += rte_cpu_to_be_16(0x0100);
+			priv23.u32[3] += rte_cpu_to_be_16(0x0100);
+
+			/* Update ttl,cksum rewrite ethernet hdr on mbuf0 */
+			d0 = rte_pktmbuf_mtod(mbuf0, void *);
+			rte_memcpy(d0, nh[priv01.u16[0]].rewrite_data,
+				   nh[priv01.u16[0]].rewrite_len);
+
+			next0 = tx0 + 1;
+			ip0 = (struct rte_ipv4_hdr *)((uint8_t *)d0 +
+						      sizeof(struct rte_ether_hdr));
+			ip0->time_to_live = priv01.u16[1] - 1;
+			ip0->hdr_checksum = priv01.u16[2] + priv01.u16[3];
+
+			/* Update ttl,cksum rewrite ethernet hdr on mbuf1 */
+			d1 = rte_pktmbuf_mtod(mbuf1, void *);
+			rte_memcpy(d1, nh[priv01.u16[4]].rewrite_data,
+				   nh[priv01.u16[4]].rewrite_len);
+
+			next1 = tx1 + 1;
+			ip1 = (struct rte_ipv4_hdr *)((uint8_t *)d1 +
+						      sizeof(struct rte_ether_hdr));
+			ip1->time_to_live = priv01.u16[5] - 1;
+			ip1->hdr_checksum = priv01.u16[6] + priv01.u16[7];
+
+			/* Update ttl,cksum rewrite ethernet hdr on mbuf2 */
+			d2 = rte_pktmbuf_mtod(mbuf2, void *);
+			rte_memcpy(d2, nh[priv23.u16[0]].rewrite_data,
+				   nh[priv23.u16[0]].rewrite_len);
+			next2 = tx2 + 1;
+			ip2 = (struct rte_ipv4_hdr *)((uint8_t *)d2 +
+						      sizeof(struct rte_ether_hdr));
+			ip2->time_to_live = priv23.u16[1] - 1;
+			ip2->hdr_checksum = priv23.u16[2] + priv23.u16[3];
+
+			/* Update ttl,cksum rewrite ethernet hdr on mbuf3 */
+			d3 = rte_pktmbuf_mtod(mbuf3, void *);
+			rte_memcpy(d3, nh[priv23.u16[4]].rewrite_data,
+				   nh[priv23.u16[4]].rewrite_len);
+
+			next3 = tx3 + 1;
+			ip3 = (struct rte_ipv4_hdr *)((uint8_t *)d3 +
+						      sizeof(struct rte_ether_hdr));
+			ip3->time_to_live = priv23.u16[5] - 1;
+			ip3->hdr_checksum = priv23.u16[6] + priv23.u16[7];
+		}
 
 		/* Enqueue four to next node */
 		rte_edge_t fix_spec =
@@ -212,19 +339,28 @@ ip4_rewrite_node_process(struct rte_graph *graph, struct rte_node *node,
 		pkts += 1;
 		n_left_from -= 1;
 
-		d0 = rte_pktmbuf_mtod(mbuf0, void *);
-		rte_memcpy(d0, nh[node_mbuf_priv1(mbuf0, dyn)->nh].rewrite_data,
-			   nh[node_mbuf_priv1(mbuf0, dyn)->nh].rewrite_len);
-
-		next0 = nh[node_mbuf_priv1(mbuf0, dyn)->nh].tx_node;
-		ip0 = (struct rte_ipv4_hdr *)((uint8_t *)d0 +
-					      sizeof(struct rte_ether_hdr));
-		chksum = node_mbuf_priv1(mbuf0, dyn)->cksum +
-			 rte_cpu_to_be_16(0x0100);
-		chksum += chksum >= 0xffff;
-		ip0->hdr_checksum = chksum;
-		ip0->time_to_live = node_mbuf_priv1(mbuf0, dyn)->ttl - 1;
+		tx0 = nh[node_mbuf_priv1(mbuf0, dyn)->nh].tx_node - 1;
+		f0 = nh[node_mbuf_priv1(mbuf0, dyn)->nh].nh_feature;
+		if (unlikely(rte_graph_feature_arc_has_feature(out_feature_arc, tx0, &f0))) {
+			rte_graph_feature_arc_feature_data_get(out_feature_arc, f0, tx0,
+							       &next0, &fd0);
+			node_mbuf_priv1(mbuf0, dyn)->current_feature = f0;
+			node_mbuf_priv1(mbuf0, dyn)->index = tx0;
+		} else {
+			d0 = rte_pktmbuf_mtod(mbuf0, void *);
+			rte_memcpy(d0, nh[node_mbuf_priv1(mbuf0, dyn)->nh].rewrite_data,
+				   nh[node_mbuf_priv1(mbuf0, dyn)->nh].rewrite_len);
+
+			next0 = tx0 + 1;
+			ip0 = (struct rte_ipv4_hdr *)((uint8_t *)d0 +
+						      sizeof(struct rte_ether_hdr));
+			chksum = node_mbuf_priv1(mbuf0, dyn)->cksum +
+				 rte_cpu_to_be_16(0x0100);
+			chksum += chksum >= 0xffff;
+			ip0->hdr_checksum = chksum;
+			ip0->time_to_live = node_mbuf_priv1(mbuf0, dyn)->ttl - 1;
+		}
 
 		if (unlikely(next_index ^ next0)) {
 			/* Copy things successfully speculated till now */
 			rte_memcpy(to_next, from, last_spec * sizeof(from[0]));
@@ -258,19 +394,34 @@ ip4_rewrite_node_process(struct rte_graph *graph, struct rte_node *node,
 static int
 ip4_rewrite_node_init(const struct rte_graph *graph, struct rte_node *node)
 {
+	rte_graph_feature_arc_t feature_arc = RTE_GRAPH_FEATURE_ARC_INITIALIZER;
 	static bool init_once;
 
 	RTE_SET_USED(graph);
 	RTE_BUILD_BUG_ON(sizeof(struct ip4_rewrite_node_ctx) > RTE_NODE_CTX_SZ);
+	RTE_BUILD_BUG_ON(sizeof(struct ip4_rewrite_nh_header) != RTE_CACHE_LINE_MIN_SIZE);
 
 	if (!init_once) {
 		node_mbuf_priv1_dynfield_offset = rte_mbuf_dynfield_register(
 				&node_mbuf_priv1_dynfield_desc);
 		if (node_mbuf_priv1_dynfield_offset < 0)
 			return -rte_errno;
+
+		/* Create the ipv4-output feature arc, if not already created
+		 */
+		if (rte_graph_feature_arc_lookup_by_name(RTE_IP4_OUTPUT_FEATURE_ARC_NAME, NULL) &&
+		    rte_graph_feature_arc_create(RTE_IP4_OUTPUT_FEATURE_ARC_NAME,
+						 RTE_GRAPH_FEATURE_MAX_PER_ARC, /* max features */
+						 RTE_MAX_ETHPORTS + 1, /* max output interfaces */
+						 ip4_rewrite_node_get(),
+						 &feature_arc)) {
+			return -rte_errno;
+		}
+
 		init_once = true;
 	}
 	IP4_REWRITE_NODE_PRIV1_OFF(node->ctx) = node_mbuf_priv1_dynfield_offset;
+	IP4_REWRITE_NODE_OUTPUT_FEATURE_ARC(node->ctx) = feature_arc;
 
 	node_dbg("ip4_rewrite", "Initialized ip4_rewrite node initialized");
@@ -323,6 +474,7 @@ rte_node_ip4_rewrite_add(uint16_t next_hop, uint8_t *rewrite_data,
 	nh->tx_node = ip4_rewrite_nm->next_index[dst_port];
 	nh->rewrite_len = rewrite_len;
 	nh->enabled = true;
+	nh->nh_feature = RTE_GRAPH_FEATURE_INVALID_VALUE;
 
 	return 0;
 }
diff --git a/lib/node/ip4_rewrite_priv.h b/lib/node/ip4_rewrite_priv.h
index 5105ec1d29..8b868026bf 100644
--- a/lib/node/ip4_rewrite_priv.h
+++ b/lib/node/ip4_rewrite_priv.h
@@ -5,9 +5,10 @@
 #define __INCLUDE_IP4_REWRITE_PRIV_H__
 
 #include
+#include
 
 #define RTE_GRAPH_IP4_REWRITE_MAX_NH 64
-#define RTE_GRAPH_IP4_REWRITE_MAX_LEN 56
+#define RTE_GRAPH_IP4_REWRITE_MAX_LEN 53
 
 /**
  * @internal
  *
@@ -15,11 +16,10 @@
  * Ipv4 rewrite next hop header data structure. Used to store port specific
  * rewrite data.
  */
-struct ip4_rewrite_nh_header {
+struct __rte_cache_min_aligned ip4_rewrite_nh_header {
 	uint16_t rewrite_len; /**< Header rewrite length. */
 	uint16_t tx_node;     /**< Tx node next index identifier. */
-	uint16_t enabled;     /**< NH enable flag */
-	uint16_t rsvd;
+	rte_graph_feature_t nh_feature;
 	union {
 		struct {
 			struct rte_ether_addr dst;
@@ -30,6 +30,8 @@ struct ip4_rewrite_nh_header {
 		uint8_t rewrite_data[RTE_GRAPH_IP4_REWRITE_MAX_LEN];
 		/**< Generic rewrite data */
 	};
+	/* used in control path */
+	uint8_t enabled; /**< NH enable flag */
 };
 
 /**
diff --git a/lib/node/node_private.h b/lib/node/node_private.h
index 1de7306792..36f6e05624 100644
--- a/lib/node/node_private.h
+++ b/lib/node/node_private.h
@@ -12,6 +12,9 @@
 #include
 #include
+#include
+#include
+
 extern int rte_node_logtype;
 #define RTE_LOGTYPE_NODE rte_node_logtype
@@ -35,9 +38,14 @@ struct node_mbuf_priv1 {
 			uint16_t ttl;
 			uint32_t cksum;
 		};
-
 		uint64_t u;
 	};
+	struct {
+		/** feature that current mbuf holds */
+		rte_graph_feature_t current_feature;
+		/** interface index */
+		uint32_t index;
+	};
 };
 
 static const struct rte_mbuf_dynfield node_mbuf_priv1_dynfield_desc = {
diff --git a/lib/node/rte_node_ip4_api.h b/lib/node/rte_node_ip4_api.h
index 24f8ec843a..0de06f7fc7 100644
--- a/lib/node/rte_node_ip4_api.h
+++ b/lib/node/rte_node_ip4_api.h
@@ -23,6 +23,7 @@ extern "C" {
 
 #include
 #include
+#include
 
 /**
  * IP4 lookup next nodes.
@@ -67,6 +68,8 @@ struct rte_node_ip4_reassembly_cfg {
 	/**< Node identifier to configure. */
 };
 
+#define RTE_IP4_OUTPUT_FEATURE_ARC_NAME "ipv4-output"
+
 /**
  * Add ipv4 route to lookup table.
  *
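
For reviewers reading this RFC without the companion feature-arc patch 1/2 applied, the short
sketch below shows how a control-path module might locate the "ipv4-output" arc that
ip4_rewrite_node_init() creates above. Only rte_graph_feature_arc_lookup_by_name() and
RTE_IP4_OUTPUT_FEATURE_ARC_NAME come from this patch; the <rte_graph_feature_arc.h> header name,
the rte_graph_feature_enable() call and the "ipsec-output" feature name are assumptions about the
patch 1/2 API and are shown purely for illustration, not as the final interface.

    #include <errno.h>

    #include <rte_node_ip4_api.h>        /* RTE_IP4_OUTPUT_FEATURE_ARC_NAME */
    #include <rte_graph_feature_arc.h>   /* assumed header name from patch 1/2 */

    /* Hypothetical helper: attach a named feature to the ipv4-output arc
     * for one egress interface. The arc is created lazily by
     * ip4_rewrite_node_init(), so the lookup is expected to succeed once the
     * graph containing the ip4_rewrite node has been created.
     */
    static int
    app_enable_ip4_output_feature(uint16_t port_id, const char *feature)
    {
            rte_graph_feature_arc_t arc;

            /* A non-zero return means the arc was not found, matching the
             * convention used by the lookup in ip4_rewrite_node_init() above.
             */
            if (rte_graph_feature_arc_lookup_by_name(RTE_IP4_OUTPUT_FEATURE_ARC_NAME, &arc))
                    return -ENOENT;

            /* Assumed patch 1/2 API: enable "feature" (e.g. "ipsec-output")
             * for this interface index on the arc.
             */
            return rte_graph_feature_enable(arc, port_id, feature);
    }

With such a feature enabled for a port, ip4_rewrite steers matching packets to the feature node
via the saved current_feature/index mbuf fields instead of rewriting and transmitting directly.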