From patchwork Sun Jun 13 00:06:25 2021
X-Patchwork-Submitter: Ajit Khaparde <ajit.khaparde@broadcom.com>
X-Patchwork-Id: 94125
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde <ajit.khaparde@broadcom.com>
To: dev@dpdk.org
Cc: Venkat Duvvuru, Shahaji Bhosle
Date: Sat, 12 Jun 2021 17:06:25 -0700
Message-Id: <20210613000652.28191-32-ajit.khaparde@broadcom.com>
In-Reply-To: <20210613000652.28191-1-ajit.khaparde@broadcom.com>
References: <20210530085929.29695-1-venkatkumar.duvvuru@broadcom.com>
 <20210613000652.28191-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 31/58] net/bnxt: modify VXLAN decap for
 multichannel mode

From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

The driver currently uses the physical port id as the index into the
tunnel inner flow table. However, this does not work in multichannel
mode, where multiple physical functions share the same physical port id.

When a tunnel inner flow offload request arrives before the tunnel outer
flow offload request, the driver caches the tunnel inner flow details
and programs them in the hardware after installing the tunnel outer
flow. If more than one tunnel inner flow arrives before the tunnel outer
flow is offloaded, the driver rejects any such additional tunnel inner
flow offload requests.

This patch fixes the above two problems by
1. Using the dpdk port id as the index to store tunnel inner flow info.
2. Caching any number of tunnel inner flow offload requests that arrive
   before the tunnel outer flow offload request.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Shahaji Bhosle
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
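Note for reviewers: the deferred-flow cache below is a plain sys/queue.h
STAILQ of parser params, one list per (tunnel, port) pair. The following
standalone sketch (illustrative only; simplified stand-in types, not code
from this patch) shows the list discipline the driver code relies on:
cache entries while the outer flow is absent, flush them all when it
arrives, and pop from the head on teardown so no freed node is touched.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/queue.h>

    /* Simplified stand-in for struct ulp_rte_parser_params. */
    struct inner_flow {
            uint32_t fid;
            STAILQ_ENTRY(inner_flow) next;
    };

    /* Per-port list head, mirroring the tun_i_prms_list field. */
    STAILQ_HEAD(inner_flow_list, inner_flow);

    int main(void)
    {
            struct inner_flow_list list = STAILQ_HEAD_INITIALIZER(list);
            struct inner_flow *f;
            uint32_t fid;

            /* Inner flows arriving before the outer flow are only cached. */
            for (fid = 1; fid <= 3; fid++) {
                    f = calloc(1, sizeof(*f));
                    if (f == NULL)
                            return 1;
                    f->fid = fid;
                    STAILQ_INSERT_TAIL(&list, f, next);
            }

            /* Outer flow arrives: walk the cache, "install" every entry. */
            STAILQ_FOREACH(f, &list, next)
                    printf("install deferred inner flow, FID:%u\n", f->fid);

            /* Teardown: remove from the head, then free. */
            while ((f = STAILQ_FIRST(&list)) != NULL) {
                    STAILQ_REMOVE_HEAD(&list, next);
                    free(f);
            }
            return 0;
    }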
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |   3 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |   3 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   1 +
 drivers/net/bnxt/tf_ulp/ulp_tun.c             | 192 ++++++++++++------
 drivers/net/bnxt/tf_ulp/ulp_tun.h             |  30 ++-
 5 files changed, 150 insertions(+), 79 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 5c805eef97..59fb530fb1 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -22,6 +22,7 @@
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 #include "ulp_port_db.h"
+#include "ulp_tun.h"

 /* Linked list of all TF sessions. */
 STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
@@ -533,6 +534,8 @@ ulp_ctx_init(struct bnxt *bp,
 	if (rc)
 		goto error_deinit;

+	ulp_tun_tbl_init(ulp_data->tun_tbl);
+
 	bnxt_ulp_cntxt_tfp_set(bp->ulp_ctx, &bp->tfp);
 	return rc;

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index ddf38ed931..836e94bc60 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -79,6 +79,7 @@ bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
 			    struct ulp_rte_parser_params *params,
 			    enum bnxt_ulp_fdb_type flow_type)
 {
+	memset(mapper_cparms, 0, sizeof(*mapper_cparms));
 	mapper_cparms->flow_type = flow_type;
 	mapper_cparms->app_priority = params->priority;
 	mapper_cparms->dir_attr = params->dir_attr;
@@ -176,7 +177,7 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	params.fid = fid;
 	params.func_id = func_id;
 	params.priority = attr->priority;
-	params.port_id = bnxt_get_phy_port_id(dev->data->port_id);
+	params.port_id = dev->data->port_id;
 	/* Perform the rte flow post process */
 	ret = bnxt_ulp_rte_parser_post_process(&params);
 	if (ret == BNXT_TF_RC_ERROR)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index ee17390358..b253aefe8d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -62,6 +62,7 @@ struct ulp_rte_act_prop {

 /* Structure to be used for passing all the parser functions */
 struct ulp_rte_parser_params {
+	STAILQ_ENTRY(ulp_rte_parser_params) next;
 	struct ulp_rte_hdr_bitmap	hdr_bitmap;
 	struct ulp_rte_hdr_bitmap	hdr_fp_bit;
 	struct ulp_rte_field_bitmap	fld_bitmap;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_tun.c b/drivers/net/bnxt/tf_ulp/ulp_tun.c
index 884692947a..6c1ae3ced2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_tun.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_tun.c
@@ -3,6 +3,8 @@
  * All rights reserved.
  */

+#include <rte_malloc.h>
+
 #include "ulp_tun.h"
@@ -48,19 +50,18 @@
 		goto err;

 	/* Store the tunnel dmac in the tunnel cache table and use it while
-	 * programming tunnel flow F2.
+	 * programming tunnel inner flow.
 	 */
 	memcpy(tun_entry->t_dmac,
 	       &params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX].spec,
 	       RTE_ETHER_ADDR_LEN);

-	tun_entry->valid = true;
 	tun_entry->tun_flow_info[params->port_id].state =
 		BNXT_ULP_FLOW_STATE_TUN_O_OFFLD;
 	tun_entry->outer_tun_flow_id = params->fid;

-	/* F1 and it's related F2s are correlated based on
-	 * Tunnel Destination IP Address.
+	/* Tunnel outer flow and its related inner flows are correlated
+	 * based on Tunnel Destination IP Address.
 	 */
 	if (tun_entry->t_dst_ip_valid)
 		goto done;
@@ -89,25 +90,27 @@ ulp_install_inner_tun_flow(struct bnxt_tun_cache_entry *tun_entry,
 {
 	struct bnxt_ulp_mapper_create_parms mparms = { 0 };
 	struct ulp_per_port_flow_info *flow_info;
-	struct ulp_rte_parser_params *params;
+	struct ulp_rte_parser_params *inner_params;
 	int ret;

-	/* F2 doesn't have tunnel dmac, use the tunnel dmac that was
-	 * stored during F1 programming.
+	/* Tunnel inner flow doesn't have tunnel dmac, use the tunnel
+	 * dmac that was stored during F1 programming.
 	 */
 	flow_info = &tun_entry->tun_flow_info[tun_o_params->port_id];
-	params = &flow_info->first_inner_tun_params;
-	memcpy(&params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX],
-	       tun_entry->t_dmac, RTE_ETHER_ADDR_LEN);
-	params->parent_fid = tun_entry->outer_tun_flow_id;
-	params->fid = flow_info->first_tun_i_fid;
-
-	bnxt_ulp_init_mapper_params(&mparms, params,
-				    BNXT_ULP_FDB_TYPE_REGULAR);
-
-	ret = ulp_mapper_flow_create(params->ulp_ctx, &mparms);
-	if (ret)
-		PMD_DRV_LOG(ERR, "Failed to create F2 flow.");
+	STAILQ_FOREACH(inner_params, &flow_info->tun_i_prms_list, next) {
+		memcpy(&inner_params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX],
+		       tun_entry->t_dmac, RTE_ETHER_ADDR_LEN);
+		inner_params->parent_fid = tun_entry->outer_tun_flow_id;
+
+		bnxt_ulp_init_mapper_params(&mparms, inner_params,
+					    BNXT_ULP_FDB_TYPE_REGULAR);
+
+		ret = ulp_mapper_flow_create(inner_params->ulp_ctx, &mparms);
+		if (ret)
+			PMD_DRV_LOG(ERR,
+				    "Failed to create inner tun flow, FID:%u.",
+				    inner_params->fid);
+	}
 }

 /* This function either install outer tunnel flow & inner tunnel flow
  * or just the outer tunnel flow based on the flow state.
  */
@@ -118,21 +121,18 @@ ulp_post_process_outer_tun_flow(struct ulp_rte_parser_params *params,
 				struct bnxt_tun_cache_entry *tun_entry,
 				uint16_t tun_idx)
 {
-	enum bnxt_ulp_tun_flow_state flow_state;
 	int ret;

-	flow_state = tun_entry->tun_flow_info[params->port_id].state;
 	ret = ulp_install_outer_tun_flow(params, tun_entry, tun_idx);
 	if (ret == BNXT_TF_RC_ERROR) {
 		PMD_DRV_LOG(ERR, "Failed to create outer tunnel flow.");
 		return ret;
 	}

-	/* If flow_state == BNXT_ULP_FLOW_STATE_NORMAL before installing
-	 * F1, that means F2 is not deferred. Hence, no need to install F2.
+	/* Install any cached tunnel inner flows that came before tunnel
+	 * outer flow.
 	 */
-	if (flow_state != BNXT_ULP_FLOW_STATE_NORMAL)
-		ulp_install_inner_tun_flow(tun_entry, params);
+	ulp_install_inner_tun_flow(tun_entry, params);

 	return BNXT_TF_RC_FID;
 }
@@ -141,9 +141,10 @@
  * outer tunnel flow request.
  */
 static int32_t
-ulp_post_process_first_inner_tun_flow(struct ulp_rte_parser_params *params,
+ulp_post_process_cache_inner_tun_flow(struct ulp_rte_parser_params *params,
 				      struct bnxt_tun_cache_entry *tun_entry)
 {
+	struct ulp_rte_parser_params *inner_tun_params;
 	struct ulp_per_port_flow_info *flow_info;
 	int ret;
@@ -155,19 +156,22 @@
 	if (ret != BNXT_TF_RC_SUCCESS)
 		return BNXT_TF_RC_ERROR;

-	/* If Tunnel F2 flow comes first then we can't install it in the
-	 * hardware, because, F2 flow will not have L2 context information.
-	 * So, just cache the F2 information and program it in the context
-	 * of F1 flow installation.
+	/* If Tunnel inner flow comes first then we can't install it in the
+	 * hardware, because Tunnel inner flow will not have L2 context
+	 * information. So, just cache the Tunnel inner flow information
+	 * and program it in the context of F1 flow installation.
 	 */
 	flow_info = &tun_entry->tun_flow_info[params->port_id];
-	memcpy(&flow_info->first_inner_tun_params, params,
-	       sizeof(struct ulp_rte_parser_params));
-
-	flow_info->first_tun_i_fid = params->fid;
-	flow_info->state = BNXT_ULP_FLOW_STATE_TUN_I_CACHED;
+	inner_tun_params = rte_zmalloc("ulp_inner_tun_params",
+				       sizeof(struct ulp_rte_parser_params), 0);
+	if (!inner_tun_params)
+		return BNXT_TF_RC_ERROR;
+	memcpy(inner_tun_params, params, sizeof(struct ulp_rte_parser_params));
+	STAILQ_INSERT_TAIL(&flow_info->tun_i_prms_list, inner_tun_params,
+			   next);
+	flow_info->tun_i_cnt++;

-	/* F1 and it's related F2s are correlated based on
+	/* F1 and its related Tunnel inner flows are correlated based on
 	 * Tunnel Destination IP Address. It could be already set, if
 	 * the inner flow got offloaded first.
 	 */
@@ -248,8 +252,8 @@ ulp_get_tun_entry(struct ulp_rte_parser_params *params,
 int32_t
 ulp_post_process_tun_flow(struct ulp_rte_parser_params *params)
 {
-	bool outer_tun_sig, inner_tun_sig, first_inner_tun_flow;
-	bool outer_tun_reject, inner_tun_reject, outer_tun_flow, inner_tun_flow;
+	bool inner_tun_sig, cache_inner_tun_flow;
+	bool outer_tun_reject, outer_tun_flow, inner_tun_flow;
 	enum bnxt_ulp_tun_flow_state flow_state;
 	struct bnxt_tun_cache_entry *tun_entry;
 	uint32_t l3_tun, l3_tun_decap;
@@ -267,40 +271,31 @@ ulp_post_process_tun_flow(struct ulp_rte_parser_params *params)
 	if (rc == BNXT_TF_RC_ERROR)
 		return rc;

+	if (params->port_id >= RTE_MAX_ETHPORTS)
+		return BNXT_TF_RC_ERROR;
 	flow_state = tun_entry->tun_flow_info[params->port_id].state;
 	/* Outer tunnel flow validation */
-	outer_tun_sig = BNXT_OUTER_TUN_SIGNATURE(l3_tun, params);
-	outer_tun_flow = BNXT_OUTER_TUN_FLOW(outer_tun_sig);
+	outer_tun_flow = BNXT_OUTER_TUN_FLOW(l3_tun, params);
 	outer_tun_reject = BNXT_REJECT_OUTER_TUN_FLOW(flow_state,
-						      outer_tun_sig);
+						      outer_tun_flow);

 	/* Inner tunnel flow validation */
 	inner_tun_sig = BNXT_INNER_TUN_SIGNATURE(l3_tun, l3_tun_decap, params);
-	first_inner_tun_flow = BNXT_FIRST_INNER_TUN_FLOW(flow_state,
+	cache_inner_tun_flow = BNXT_CACHE_INNER_TUN_FLOW(flow_state,
							 inner_tun_sig);
 	inner_tun_flow = BNXT_INNER_TUN_FLOW(flow_state, inner_tun_sig);
-	inner_tun_reject = BNXT_REJECT_INNER_TUN_FLOW(flow_state,
-						      inner_tun_sig);

 	if (outer_tun_reject) {
 		tun_entry->outer_tun_rej_cnt++;
 		BNXT_TF_DBG(ERR,
 			    "Tunnel F1 flow rejected, COUNT: %d\n",
 			    tun_entry->outer_tun_rej_cnt);
-	/* Inner tunnel flow is rejected if it comes between first inner
-	 * tunnel flow and outer flow requests.
-	 */
-	} else if (inner_tun_reject) {
-		tun_entry->inner_tun_rej_cnt++;
-		BNXT_TF_DBG(ERR,
-			    "Tunnel F2 flow rejected, COUNT: %d\n",
-			    tun_entry->inner_tun_rej_cnt);
 	}

-	if (outer_tun_reject || inner_tun_reject)
+	if (outer_tun_reject)
 		return BNXT_TF_RC_ERROR;
-	else if (first_inner_tun_flow)
-		return ulp_post_process_first_inner_tun_flow(params, tun_entry);
+	else if (cache_inner_tun_flow)
+		return ulp_post_process_cache_inner_tun_flow(params, tun_entry);
 	else if (outer_tun_flow)
 		return ulp_post_process_outer_tun_flow(params, tun_entry,
 						       tun_idx);
@@ -310,11 +305,86 @@
 	return BNXT_TF_RC_NORMAL;
 }

+void
+ulp_tun_tbl_init(struct bnxt_tun_cache_entry *tun_tbl)
+{
+	struct ulp_per_port_flow_info *flow_info;
+	int i, j;
+
+	for (i = 0; i < BNXT_ULP_MAX_TUN_CACHE_ENTRIES; i++) {
+		for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
+			flow_info = &tun_tbl[i].tun_flow_info[j];
+			STAILQ_INIT(&flow_info->tun_i_prms_list);
+		}
+	}
+}
+
 void
 ulp_clear_tun_entry(struct bnxt_tun_cache_entry *tun_tbl, uint8_t tun_idx)
 {
+	struct ulp_rte_parser_params *inner_params;
+	struct ulp_per_port_flow_info *flow_info;
+	int j;
+
+	for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
+		flow_info = &tun_tbl[tun_idx].tun_flow_info[j];
+		STAILQ_FOREACH(inner_params,
+			       &flow_info->tun_i_prms_list,
+			       next) {
+			STAILQ_REMOVE(&flow_info->tun_i_prms_list,
+				      inner_params,
+				      ulp_rte_parser_params, next);
+			rte_free(inner_params);
+		}
+	}
+
 	memset(&tun_tbl[tun_idx], 0,
-		sizeof(struct bnxt_tun_cache_entry));
+	       sizeof(struct bnxt_tun_cache_entry));
+
+	for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
+		flow_info = &tun_tbl[tun_idx].tun_flow_info[j];
+		STAILQ_INIT(&flow_info->tun_i_prms_list);
+	}
+}
+
+static bool
+ulp_chk_and_rem_tun_i_flow(struct bnxt_tun_cache_entry *tun_entry,
+			   struct ulp_per_port_flow_info *flow_info,
+			   uint32_t fid)
+{
+	struct ulp_rte_parser_params *inner_params;
+	int j;
+
+	STAILQ_FOREACH(inner_params,
+		       &flow_info->tun_i_prms_list,
+		       next) {
+		if (inner_params->fid == fid) {
+			STAILQ_REMOVE(&flow_info->tun_i_prms_list,
+				      inner_params,
+				      ulp_rte_parser_params,
+				      next);
+			rte_free(inner_params);
+			flow_info->tun_i_cnt--;
+			/* When a dpdk application offloads a duplicate
+			 * tunnel inner flow on a port that it is not
+			 * destined to, there won't be a tunnel outer flow
+			 * associated with these duplicate tunnel inner flows.
+			 * So, when the last tunnel inner flow ages out, the
+			 * driver has to clear the tunnel entry, otherwise
+			 * the tunnel entry cannot be reused.
+			 */
+			if (!flow_info->tun_i_cnt &&
+			    flow_info->state != BNXT_ULP_FLOW_STATE_TUN_O_OFFLD) {
+				memset(tun_entry, 0,
+				       sizeof(struct bnxt_tun_cache_entry));
+				for (j = 0; j < RTE_MAX_ETHPORTS; j++)
+					STAILQ_INIT(&flow_info->tun_i_prms_list);
+			}
+			return true;
+		}
+	}
+
+	return false;
 }

 /* When a dpdk application offloads the same tunnel inner flow
@@ -330,12 +400,14 @@ ulp_clear_tun_inner_entry(struct bnxt_tun_cache_entry *tun_tbl, uint32_t fid)
 	struct ulp_per_port_flow_info *flow_info;
 	int i, j;

-	for (i = 0; i < BNXT_ULP_MAX_TUN_CACHE_ENTRIES ; i++) {
+	for (i = 0; i < BNXT_ULP_MAX_TUN_CACHE_ENTRIES; i++) {
+		if (!tun_tbl[i].t_dst_ip_valid)
+			continue;
 		for (j = 0; j < RTE_MAX_ETHPORTS; j++) {
 			flow_info = &tun_tbl[i].tun_flow_info[j];
-			if (flow_info->first_tun_i_fid == fid &&
-			    flow_info->state == BNXT_ULP_FLOW_STATE_TUN_I_CACHED)
-				memset(flow_info, 0, sizeof(*flow_info));
+			if (ulp_chk_and_rem_tun_i_flow(&tun_tbl[i],
+						       flow_info, fid) == true)
+				return;
 		}
 	}
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_tun.h b/drivers/net/bnxt/tf_ulp/ulp_tun.h
index af6926f0e4..7e31f81f13 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_tun.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_tun.h
@@ -15,7 +15,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"

-#define BNXT_OUTER_TUN_SIGNATURE(l3_tun, params) \
+#define BNXT_OUTER_TUN_FLOW(l3_tun, params) \
 	((l3_tun) && \
 	 ULP_BITMAP_ISSET((params)->act_bitmap.bits, \
 			  BNXT_ULP_ACTION_BIT_JUMP))
@@ -24,22 +24,16 @@
 	 !ULP_BITMAP_ISSET((params)->hdr_bitmap.bits, \
 			   BNXT_ULP_HDR_BIT_O_ETH))

-#define BNXT_FIRST_INNER_TUN_FLOW(state, inner_tun_sig) \
+#define BNXT_CACHE_INNER_TUN_FLOW(state, inner_tun_sig) \
 	((state) == BNXT_ULP_FLOW_STATE_NORMAL && (inner_tun_sig))
 #define BNXT_INNER_TUN_FLOW(state, inner_tun_sig) \
 	((state) == BNXT_ULP_FLOW_STATE_TUN_O_OFFLD && (inner_tun_sig))
-#define BNXT_OUTER_TUN_FLOW(outer_tun_sig)	((outer_tun_sig))

 /* It is invalid to get another outer flow offload request
  * for the same tunnel, while the outer flow is already offloaded.
  */
 #define BNXT_REJECT_OUTER_TUN_FLOW(state, outer_tun_sig) \
 	((state) == BNXT_ULP_FLOW_STATE_TUN_O_OFFLD && (outer_tun_sig))
-/* It is invalid to get another inner flow offload request
- * for the same tunnel, while the outer flow is not yet offloaded.
- */
-#define BNXT_REJECT_INNER_TUN_FLOW(state, inner_tun_sig) \
-	((state) == BNXT_ULP_FLOW_STATE_TUN_I_CACHED && (inner_tun_sig))

 #define ULP_TUN_O_DMAC_HDR_FIELD_INDEX	1
 #define ULP_TUN_O_IPV4_DIP_INDEX	19
@@ -50,10 +44,10 @@
  * requests arrive.
  *
  * If inner tunnel flow offload request arrives first then the flow
- * state will change from BNXT_ULP_FLOW_STATE_NORMAL to
- * BNXT_ULP_FLOW_STATE_TUN_I_CACHED and the following outer tunnel
- * flow offload request will change the state of the flow to
- * BNXT_ULP_FLOW_STATE_TUN_O_OFFLD from BNXT_ULP_FLOW_STATE_TUN_I_CACHED.
+ * state will remain in BNXT_ULP_FLOW_STATE_NORMAL state.
+ * The following outer tunnel flow offload request will change the
+ * state of the flow to BNXT_ULP_FLOW_STATE_TUN_O_OFFLD from
+ * BNXT_ULP_FLOW_STATE_NORMAL.
  *
  * If outer tunnel flow offload request arrives first then the flow state
  * will change from BNXT_ULP_FLOW_STATE_NORMAL to
@@ -67,17 +61,15 @@
 enum bnxt_ulp_tun_flow_state {
 	BNXT_ULP_FLOW_STATE_NORMAL = 0,
 	BNXT_ULP_FLOW_STATE_TUN_O_OFFLD,
-	BNXT_ULP_FLOW_STATE_TUN_I_CACHED
 };

 struct ulp_per_port_flow_info {
-	enum bnxt_ulp_tun_flow_state	state;
-	uint32_t			first_tun_i_fid;
-	struct ulp_rte_parser_params	first_inner_tun_params;
+	enum bnxt_ulp_tun_flow_state		state;
+	uint32_t				tun_i_cnt;
+	STAILQ_HEAD(, ulp_rte_parser_params)	tun_i_prms_list;
 };

 struct bnxt_tun_cache_entry {
-	bool				valid;
 	bool				t_dst_ip_valid;
 	uint8_t				t_dmac[RTE_ETHER_ADDR_LEN];
 	union {
@@ -86,10 +78,12 @@ struct bnxt_tun_cache_entry {
 	};
 	uint32_t			outer_tun_flow_id;
 	uint16_t			outer_tun_rej_cnt;
-	uint16_t			inner_tun_rej_cnt;
 	struct ulp_per_port_flow_info	tun_flow_info[RTE_MAX_ETHPORTS];
 };

+void
+ulp_tun_tbl_init(struct bnxt_tun_cache_entry *tun_tbl);
+
 void
 ulp_clear_tun_entry(struct bnxt_tun_cache_entry *tun_tbl, uint8_t tun_idx);
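
Post-diff note: with BNXT_ULP_FLOW_STATE_TUN_I_CACHED removed, the per-port
state machine described in the header comment above reduces to two states.
A compact illustration of the intended transitions (a sketch with made-up
names, not driver code from this patch):

    #include <stdbool.h>

    /* Illustrative mirror of enum bnxt_ulp_tun_flow_state after this patch. */
    enum tun_state { TUN_STATE_NORMAL = 0, TUN_STATE_O_OFFLD };

    /* Outer flow request: rejected once the outer flow is already offloaded;
     * otherwise it is installed, any cached inner flows are flushed, and the
     * port moves to TUN_STATE_O_OFFLD.
     */
    static bool outer_flow_request(enum tun_state *state)
    {
            if (*state == TUN_STATE_O_OFFLD)
                    return false;   /* reject: outer flow already offloaded */
            *state = TUN_STATE_O_OFFLD;
            return true;
    }

    /* Inner flow request: installed immediately in TUN_STATE_O_OFFLD;
     * cached (no longer ever rejected) while still in TUN_STATE_NORMAL.
     */
    static bool inner_flow_installable(enum tun_state state)
    {
            return state == TUN_STATE_O_OFFLD;
    }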