From patchwork Sun May 30 08:59:02 2021
From: Venkat Duvvuru
To: dev@dpdk.org
Cc: Venkat Duvvuru
Date: Sun, 30 May 2021 14:29:02 +0530
Message-Id: <20210530085929.29695-32-venkatkumar.duvvuru@broadcom.com>
In-Reply-To: <20210530085929.29695-1-venkatkumar.duvvuru@broadcom.com>
References: <20210530085929.29695-1-venkatkumar.duvvuru@broadcom.com>
Subject: [dpdk-dev] [PATCH 31/58] net/bnxt: modify VXLAN decap for
 multichannel mode

The driver currently uses the physical port id as the index into the
tunnel inner flow table. However, this does not work in multichannel
mode, where multiple physical functions share the same physical port
id.

When a tunnel inner flow offload request arrives before the tunnel
outer flow offload request, the driver caches the tunnel inner flow
details and programs them in the hardware after installing the tunnel
outer flow. If more than one tunnel inner flow arrives before the
tunnel outer flow is offloaded, the driver rejects the additional
tunnel inner flow offload requests.

This patch fixes the above two problems by:

1. Using the dpdk port id as the index to store the tunnel inner flow
   info.
2. Caching any number of tunnel inner flow offload requests that
   arrive before the tunnel outer flow offload request.

Signed-off-by: Venkat Duvvuru
Reviewed-by: Shahaji Bhosle
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |   3 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   3 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   1 +
 drivers/net/bnxt/tf_ulp/ulp_tun.c             | 192 ++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_tun.h             |  30 ++-
 5 files changed, 150 insertions(+), 79 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 5c805eef97..59fb530fb1 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -22,6 +22,7 @@
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 #include "ulp_port_db.h"
+#include "ulp_tun.h"
 
 /* Linked list of all TF sessions. */
 STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
@@ -533,6 +534,8 @@ ulp_ctx_init(struct bnxt *bp,
 	if (rc)
 		goto error_deinit;
 
+	ulp_tun_tbl_init(ulp_data->tun_tbl);
+
 	bnxt_ulp_cntxt_tfp_set(bp->ulp_ctx, &bp->tfp);
 	return rc;
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index ddf38ed931..836e94bc60 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -79,6 +79,7 @@ bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
 			    struct ulp_rte_parser_params *params,
 			    enum bnxt_ulp_fdb_type flow_type)
 {
+	memset(mapper_cparms, 0, sizeof(*mapper_cparms));
 	mapper_cparms->flow_type = flow_type;
 	mapper_cparms->app_priority = params->priority;
 	mapper_cparms->dir_attr = params->dir_attr;
@@ -176,7 +177,7 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	params.fid = fid;
 	params.func_id = func_id;
 	params.priority = attr->priority;
-	params.port_id = bnxt_get_phy_port_id(dev->data->port_id);
+	params.port_id = dev->data->port_id;
 	/* Perform the rte flow post process */
 	ret = bnxt_ulp_rte_parser_post_process(&params);
 	if (ret == BNXT_TF_RC_ERROR)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index ee17390358..b253aefe8d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -62,6 +62,7 @@ struct ulp_rte_act_prop {
 
 /* Structure to be used for passing all the parser functions */
 struct ulp_rte_parser_params {
+	STAILQ_ENTRY(ulp_rte_parser_params) next;
 	struct ulp_rte_hdr_bitmap hdr_bitmap;
 	struct ulp_rte_hdr_bitmap hdr_fp_bit;
 	struct ulp_rte_field_bitmap fld_bitmap;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_tun.c b/drivers/net/bnxt/tf_ulp/ulp_tun.c
index 884692947a..6c1ae3ced2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_tun.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_tun.c
@@ -3,6 +3,8 @@
  * All rights reserved.
  */
 
+#include <rte_malloc.h>
+
 #include "ulp_tun.h"
@@ -48,19 +50,18 @@ ulp_install_outer_tun_flow(struct ulp_rte_parser_params *params,
 		goto err;
 
 	/* Store the tunnel dmac in the tunnel cache table and use it while
-	 * programming tunnel flow F2.
+	 * programming tunnel inner flow.
 	 */
 	memcpy(tun_entry->t_dmac,
 	       &params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX].spec,
 	       RTE_ETHER_ADDR_LEN);
-	tun_entry->valid = true;
 	tun_entry->tun_flow_info[params->port_id].state =
 				BNXT_ULP_FLOW_STATE_TUN_O_OFFLD;
 	tun_entry->outer_tun_flow_id = params->fid;
 
-	/* F1 and it's related F2s are correlated based on
-	 * Tunnel Destination IP Address.
+	/* Tunnel outer flow and it's related inner flows are correlated
+	 * based on Tunnel Destination IP Address.
 	 */
 	if (tun_entry->t_dst_ip_valid)
 		goto done;
@@ -89,25 +90,27 @@ ulp_install_inner_tun_flow(struct bnxt_tun_cache_entry *tun_entry,
 {
 	struct bnxt_ulp_mapper_create_parms mparms = { 0 };
 	struct ulp_per_port_flow_info *flow_info;
-	struct ulp_rte_parser_params *params;
+	struct ulp_rte_parser_params *inner_params;
 	int ret;
 
-	/* F2 doesn't have tunnel dmac, use the tunnel dmac that was
-	 * stored during F1 programming.
+	/* Tunnel inner flow doesn't have tunnel dmac, use the tunnel
+	 * dmac that was stored during F1 programming.
 	 */
 	flow_info = &tun_entry->tun_flow_info[tun_o_params->port_id];
-	params = &flow_info->first_inner_tun_params;
-	memcpy(&params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX],
-	       tun_entry->t_dmac, RTE_ETHER_ADDR_LEN);
-	params->parent_fid = tun_entry->outer_tun_flow_id;
-	params->fid = flow_info->first_tun_i_fid;
-
-	bnxt_ulp_init_mapper_params(&mparms, params,
-				    BNXT_ULP_FDB_TYPE_REGULAR);
-
-	ret = ulp_mapper_flow_create(params->ulp_ctx, &mparms);
-	if (ret)
-		PMD_DRV_LOG(ERR, "Failed to create F2 flow.");
+	STAILQ_FOREACH(inner_params, &flow_info->tun_i_prms_list, next) {
+		memcpy(&inner_params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX],
+		       tun_entry->t_dmac, RTE_ETHER_ADDR_LEN);
+		inner_params->parent_fid = tun_entry->outer_tun_flow_id;
+
+		bnxt_ulp_init_mapper_params(&mparms, inner_params,
+					    BNXT_ULP_FDB_TYPE_REGULAR);
+
+		ret = ulp_mapper_flow_create(inner_params->ulp_ctx, &mparms);
+		if (ret)
+			PMD_DRV_LOG(ERR,
+				    "Failed to create inner tun flow, FID:%u.",
+				    inner_params->fid);
+	}
 }
 
 /* This function either install outer tunnel flow & inner tunnel flow
@@ -118,21 +121,18 @@ ulp_post_process_outer_tun_flow(struct ulp_rte_parser_params *params,
 				struct bnxt_tun_cache_entry *tun_entry,
 				uint16_t tun_idx)
 {
-	enum bnxt_ulp_tun_flow_state flow_state;
 	int ret;
 
-	flow_state = tun_entry->tun_flow_info[params->port_id].state;
 	ret = ulp_install_outer_tun_flow(params, tun_entry, tun_idx);
 	if (ret == BNXT_TF_RC_ERROR) {
 		PMD_DRV_LOG(ERR, "Failed to create outer tunnel flow.");
 		return ret;
 	}
 
-	/* If flow_state == BNXT_ULP_FLOW_STATE_NORMAL before installing
-	 * F1, that means F2 is not deferred. Hence, no need to install F2.
+	/* Install any cached tunnel inner flows that came before tunnel
+	 * outer flow.
 	 */
-	if (flow_state != BNXT_ULP_FLOW_STATE_NORMAL)
-		ulp_install_inner_tun_flow(tun_entry, params);
+	ulp_install_inner_tun_flow(tun_entry, params);
 
 	return BNXT_TF_RC_FID;
 }
@@ -141,9 +141,10 @@ ulp_post_process_outer_tun_flow(struct ulp_rte_parser_params *params,
  * outer tunnel flow request.
  */
 static int32_t
-ulp_post_process_first_inner_tun_flow(struct ulp_rte_parser_params *params,
+ulp_post_process_cache_inner_tun_flow(struct ulp_rte_parser_params *params,
 				      struct bnxt_tun_cache_entry *tun_entry)
 {
+	struct ulp_rte_parser_params *inner_tun_params;
 	struct ulp_per_port_flow_info *flow_info;
 	int ret;
 
@@ -155,19 +156,22 @@ ulp_post_process_first_inner_tun_flow(struct ulp_rte_parser_params *params,
 	if (ret != BNXT_TF_RC_SUCCESS)
 		return BNXT_TF_RC_ERROR;
 
-	/* If Tunnel F2 flow comes first then we can't install it in the
-	 * hardware, because, F2 flow will not have L2 context information.
-	 * So, just cache the F2 information and program it in the context
-	 * of F1 flow installation.
+	/* If Tunnel inner flow comes first then we can't install it in the
+	 * hardware, because, Tunnel inner flow will not have L2 context
+	 * information. So, just cache the Tunnel inner flow information
+	 * and program it in the context of F1 flow installation.
 	 */
 	flow_info = &tun_entry->tun_flow_info[params->port_id];
-	memcpy(&flow_info->first_inner_tun_params, params,
-	       sizeof(struct ulp_rte_parser_params));
-
-	flow_info->first_tun_i_fid = params->fid;
-	flow_info->state = BNXT_ULP_FLOW_STATE_TUN_I_CACHED;
+	inner_tun_params = rte_zmalloc("ulp_inner_tun_params",
+				       sizeof(struct ulp_rte_parser_params), 0);
+	if (!inner_tun_params)
+		return BNXT_TF_RC_ERROR;
+	memcpy(inner_tun_params, params, sizeof(struct ulp_rte_parser_params));
+	STAILQ_INSERT_TAIL(&flow_info->tun_i_prms_list, inner_tun_params,
+			   next);
+	flow_info->tun_i_cnt++;
 
-	/* F1 and it's related
+	/* F1 and it's related Tunnel inner flows are correlated based on
 	 * Tunnel Destination IP Address. It could be already set, if
 	 * the inner flow got offloaded first.
 	 */
@@ -248,8 +252,8 @@ ulp_get_tun_entry(struct ulp_rte_parser_params *params,
 int32_t
 ulp_post_process_tun_flow(struct ulp_rte_parser_params *params)
 {
-	bool outer_tun_sig, inner_tun_sig, first_inner_tun_flow;
-	bool outer_tun_reject, inner_tun_reject, outer_tun_flow, inner_tun_flow;
+	bool inner_tun_sig, cache_inner_tun_flow;
+	bool outer_tun_reject, outer_tun_flow, inner_tun_flow;
 	enum bnxt_ulp_tun_flow_state flow_state;
 	struct bnxt_tun_cache_entry *tun_entry;
 	uint32_t l3_tun, l3_tun_decap;
@@ -267,40 +271,31 @@ ulp_post_process_tun_flow(struct ulp_rte_parser_params *params)
 	if (rc == BNXT_TF_RC_ERROR)
 		return rc;
 
+	if (params->port_id >= RTE_MAX_ETHPORTS)
+		return BNXT_TF_RC_ERROR;
 	flow_state = tun_entry->tun_flow_info[params->port_id].state;
 	/* Outer tunnel flow validation */
-	outer_tun_sig = BNXT_OUTER_TUN_SIGNATURE(l3_tun, params);
-	outer_tun_flow = BNXT_OUTER_TUN_FLOW(outer_tun_sig);
+	outer_tun_flow = BNXT_OUTER_TUN_FLOW(l3_tun, params);
 	outer_tun_reject = BNXT_REJECT_OUTER_TUN_FLOW(flow_state,
-						      outer_tun_sig);
+						      outer_tun_flow);
 
 	/* Inner tunnel flow validation */
 	inner_tun_sig = BNXT_INNER_TUN_SIGNATURE(l3_tun, l3_tun_decap, params);
-	first_inner_tun_flow = BNXT_FIRST_INNER_TUN_FLOW(flow_state,
+	cache_inner_tun_flow = BNXT_CACHE_INNER_TUN_FLOW(flow_state,
 							 inner_tun_sig);
 	inner_tun_flow = BNXT_INNER_TUN_FLOW(flow_state, inner_tun_sig);
-	inner_tun_reject = BNXT_REJECT_INNER_TUN_FLOW(flow_state,
-						      inner_tun_sig);
 
 	if (outer_tun_reject) {
 		tun_entry->outer_tun_rej_cnt++;
 		BNXT_TF_DBG(ERR,
 			    "Tunnel F1 flow rejected, COUNT: %d\n",
 			    tun_entry->outer_tun_rej_cnt);
-	/* Inner tunnel flow is rejected if it comes between first inner
-	 * tunnel flow and outer flow requests.
-	 */
-	} else if (inner_tun_reject) {
-		tun_entry->inner_tun_rej_cnt++;
-		BNXT_TF_DBG(ERR,
-			    "Tunnel F2 flow rejected, COUNT: %d\n",
-			    tun_entry->inner_tun_rej_cnt);
 	}
 
-	if (outer_tun_reject || inner_tun_reject)
+	if (outer_tun_reject)
 		return BNXT_TF_RC_ERROR;
-	else if (first_inner_tun_flow)
-		return ulp_post_process_first_inner_tun_flow(params, tun_entry);
+	else if (cache_inner_tun_flow)
+		return ulp_post_process_cache_inner_tun_flow(params, tun_entry);
 	else if (outer_tun_flow)
 		return ulp_post_process_outer_tun_flow(params, tun_entry,
 						       tun_idx);
@@ -310,11 +305,86 @@ ulp_post_process_tun_flow(struct ulp_rte_parser_params *params)
 	return BNXT_TF_RC_NORMAL;
 }
 
+void
+ulp_tun_tbl_init(struct bnxt_tun_cache_entry *tun_tbl)
+{
+	struct ulp_per_port_flow_info *flow_info;
+	int i, j;
+
+	for (i = 0; i < BNXT_ULP_MAX_TUN_CACHE_ENTRIES; i++) {
+		for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
+			flow_info = &tun_tbl[i].tun_flow_info[j];
+			STAILQ_INIT(&flow_info->tun_i_prms_list);
+		}
+	}
+}
+
 void
 ulp_clear_tun_entry(struct bnxt_tun_cache_entry *tun_tbl, uint8_t tun_idx)
 {
+	struct ulp_rte_parser_params *inner_params;
+	struct ulp_per_port_flow_info *flow_info;
+	int j;
+
+	for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
+		flow_info = &tun_tbl[tun_idx].tun_flow_info[j];
+		STAILQ_FOREACH(inner_params,
+			       &flow_info->tun_i_prms_list,
+			       next) {
+			STAILQ_REMOVE(&flow_info->tun_i_prms_list,
+				      inner_params,
+				      ulp_rte_parser_params, next);
+			rte_free(inner_params);
+		}
+	}
+
 	memset(&tun_tbl[tun_idx], 0,
-	sizeof(struct bnxt_tun_cache_entry));
+	       sizeof(struct bnxt_tun_cache_entry));
+
+	for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
+		flow_info = &tun_tbl[tun_idx].tun_flow_info[j];
+		STAILQ_INIT(&flow_info->tun_i_prms_list);
+	}
+}
+
+static bool
+ulp_chk_and_rem_tun_i_flow(struct bnxt_tun_cache_entry *tun_entry,
+			   struct ulp_per_port_flow_info *flow_info,
+			   uint32_t fid)
+{
+	struct ulp_rte_parser_params *inner_params;
+	int j;
+
+	STAILQ_FOREACH(inner_params,
+		       &flow_info->tun_i_prms_list,
+		       next) {
+		if (inner_params->fid == fid) {
+			STAILQ_REMOVE(&flow_info->tun_i_prms_list,
+				      inner_params,
+				      ulp_rte_parser_params,
+				      next);
+			rte_free(inner_params);
+			flow_info->tun_i_cnt--;
+			/* When a dpdk application offloads a duplicate
+			 * tunnel inner flow on a port that it is not
+			 * destined to, there won't be a tunnel outer flow
+			 * associated with these duplicate tunnel inner flows.
+			 * So, when the last tunnel inner flow ages out, the
+			 * driver has to clear the tunnel entry, otherwise
+			 * the tunnel entry cannot be reused.
+			 */
+			if (!flow_info->tun_i_cnt &&
+			    flow_info->state != BNXT_ULP_FLOW_STATE_TUN_O_OFFLD) {
+				memset(tun_entry, 0,
+				       sizeof(struct bnxt_tun_cache_entry));
+				for (j = 0; j < RTE_MAX_ETHPORTS; j++)
+					STAILQ_INIT(&flow_info->tun_i_prms_list);
+			}
+			return true;
+		}
+	}
+
+	return false;
 }
 
 /* When a dpdk application offloads the same tunnel inner flow
@@ -330,12 +400,14 @@ ulp_clear_tun_inner_entry(struct bnxt_tun_cache_entry *tun_tbl, uint32_t fid)
 	struct ulp_per_port_flow_info *flow_info;
 	int i, j;
 
-	for (i = 0; i < BNXT_ULP_MAX_TUN_CACHE_ENTRIES ; i++) {
+	for (i = 0; i < BNXT_ULP_MAX_TUN_CACHE_ENTRIES; i++) {
+		if (!tun_tbl[i].t_dst_ip_valid)
+			continue;
 		for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
 			flow_info = &tun_tbl[i].tun_flow_info[j];
-			if (flow_info->first_tun_i_fid == fid &&
-			    flow_info->state == BNXT_ULP_FLOW_STATE_TUN_I_CACHED)
-				memset(flow_info, 0, sizeof(*flow_info));
+			if (ulp_chk_and_rem_tun_i_flow(&tun_tbl[i],
+						       flow_info, fid) == true)
+				return;
 		}
 	}
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_tun.h b/drivers/net/bnxt/tf_ulp/ulp_tun.h
index af6926f0e4..7e31f81f13 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_tun.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_tun.h
@@ -15,7 +15,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 
-#define BNXT_OUTER_TUN_SIGNATURE(l3_tun, params) \
+#define BNXT_OUTER_TUN_FLOW(l3_tun, params) \
 	((l3_tun) && \
 	 ULP_BITMAP_ISSET((params)->act_bitmap.bits, \
 			  BNXT_ULP_ACTION_BIT_JUMP))
@@ -24,22 +24,16 @@
 			    !ULP_BITMAP_ISSET((params)->hdr_bitmap.bits, \
 					      BNXT_ULP_HDR_BIT_O_ETH))
 
-#define BNXT_FIRST_INNER_TUN_FLOW(state, inner_tun_sig) \
+#define BNXT_CACHE_INNER_TUN_FLOW(state, inner_tun_sig) \
 	((state) == BNXT_ULP_FLOW_STATE_NORMAL && (inner_tun_sig))
 #define BNXT_INNER_TUN_FLOW(state, inner_tun_sig) \
 	((state) == BNXT_ULP_FLOW_STATE_TUN_O_OFFLD && (inner_tun_sig))
-#define BNXT_OUTER_TUN_FLOW(outer_tun_sig)	((outer_tun_sig))
 
 /* It is invalid to get another outer flow offload request
  * for the same tunnel, while the outer flow is already offloaded.
  */
 #define BNXT_REJECT_OUTER_TUN_FLOW(state, outer_tun_sig) \
 	((state) == BNXT_ULP_FLOW_STATE_TUN_O_OFFLD && (outer_tun_sig))
-/* It is invalid to get another inner flow offload request
- * for the same tunnel, while the outer flow is not yet offloaded.
- */
-#define BNXT_REJECT_INNER_TUN_FLOW(state, inner_tun_sig) \
-	((state) == BNXT_ULP_FLOW_STATE_TUN_I_CACHED && (inner_tun_sig))
 
 #define ULP_TUN_O_DMAC_HDR_FIELD_INDEX	1
 #define ULP_TUN_O_IPV4_DIP_INDEX	19
@@ -50,10 +44,10 @@
  * requests arrive.
  *
  * If inner tunnel flow offload request arrives first then the flow
- * state will change from BNXT_ULP_FLOW_STATE_NORMAL to
- * BNXT_ULP_FLOW_STATE_TUN_I_CACHED and the following outer tunnel
- * flow offload request will change the state of the flow to
- * BNXT_ULP_FLOW_STATE_TUN_O_OFFLD from BNXT_ULP_FLOW_STATE_TUN_I_CACHED.
+ * state will remain in BNXT_ULP_FLOW_STATE_NORMAL state.
+ * The following outer tunnel flow offload request will change the
+ * state of the flow to BNXT_ULP_FLOW_STATE_TUN_O_OFFLD from
+ * BNXT_ULP_FLOW_STATE_NORMAL.
 *
 * If outer tunnel flow offload request arrives first then the flow state
 * will change from BNXT_ULP_FLOW_STATE_NORMAL to
@@ -67,17 +61,15 @@
 enum bnxt_ulp_tun_flow_state {
 	BNXT_ULP_FLOW_STATE_NORMAL = 0,
 	BNXT_ULP_FLOW_STATE_TUN_O_OFFLD,
-	BNXT_ULP_FLOW_STATE_TUN_I_CACHED
 };
 
 struct ulp_per_port_flow_info {
-	enum bnxt_ulp_tun_flow_state state;
-	uint32_t first_tun_i_fid;
-	struct ulp_rte_parser_params first_inner_tun_params;
+	enum bnxt_ulp_tun_flow_state		state;
+	uint32_t				tun_i_cnt;
+	STAILQ_HEAD(, ulp_rte_parser_params)	tun_i_prms_list;
 };
 
 struct bnxt_tun_cache_entry {
-	bool valid;
 	bool t_dst_ip_valid;
 	uint8_t t_dmac[RTE_ETHER_ADDR_LEN];
 	union {
@@ -86,10 +78,12 @@ struct bnxt_tun_cache_entry {
 	};
 	uint32_t outer_tun_flow_id;
 	uint16_t outer_tun_rej_cnt;
-	uint16_t inner_tun_rej_cnt;
 	struct ulp_per_port_flow_info tun_flow_info[RTE_MAX_ETHPORTS];
 };
 
+void
+ulp_tun_tbl_init(struct bnxt_tun_cache_entry *tun_tbl);
+
 void
 ulp_clear_tun_entry(struct bnxt_tun_cache_entry *tun_tbl, uint8_t tun_idx);
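
As background for reviewers, here is a minimal, self-contained sketch
(not driver code) of the deferred-flow caching pattern this patch
adopts: tunnel inner flows that arrive before the tunnel outer flow are
queued on a per-port STAILQ and replayed once the outer flow is
installed. All names below (inner_flow, per_port_info, cache_inner,
replay_inner) are illustrative stand-ins for the driver's
ulp_rte_parser_params / ulp_per_port_flow_info structures and the
ulp_post_process_cache_inner_tun_flow / ulp_install_inner_tun_flow
paths, not the actual implementation.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

struct inner_flow {                     /* stand-in for ulp_rte_parser_params */
	STAILQ_ENTRY(inner_flow) next;
	unsigned int fid;               /* flow id of the cached inner flow */
};

struct per_port_info {                  /* stand-in for ulp_per_port_flow_info */
	int outer_offloaded;            /* ~BNXT_ULP_FLOW_STATE_TUN_O_OFFLD */
	unsigned int inner_cnt;         /* mirrors tun_i_cnt */
	STAILQ_HEAD(, inner_flow) inner_list;
};

/* Cache an inner flow that arrived before the outer flow; any number
 * of requests can be queued, which is the second fix in this patch.
 */
static int cache_inner(struct per_port_info *pi, unsigned int fid)
{
	struct inner_flow *f = calloc(1, sizeof(*f));

	if (!f)
		return -1;
	f->fid = fid;
	STAILQ_INSERT_TAIL(&pi->inner_list, f, next);
	pi->inner_cnt++;
	return 0;
}

/* Outer flow installed: replay and drain every cached inner flow. */
static void replay_inner(struct per_port_info *pi)
{
	struct inner_flow *f;

	pi->outer_offloaded = 1;
	while ((f = STAILQ_FIRST(&pi->inner_list)) != NULL) {
		STAILQ_REMOVE_HEAD(&pi->inner_list, next);
		printf("installing cached inner flow, FID:%u\n", f->fid);
		free(f);
		pi->inner_cnt--;
	}
}

int main(void)
{
	struct per_port_info pi;

	memset(&pi, 0, sizeof(pi));
	STAILQ_INIT(&pi.inner_list);

	/* Two inner flows arrive before the outer flow; both are cached
	 * instead of the second one being rejected.
	 */
	cache_inner(&pi, 10);
	cache_inner(&pi, 11);

	/* Outer flow arrives; both cached inner flows are installed. */
	replay_inner(&pi);
	return 0;
}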