From patchwork Thu Oct 22 22:05:43 2020
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 81847
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Venkat Duvvuru, Kishore Padmanabha
Date: Thu, 22 Oct 2020 15:05:43 -0700
Message-Id: <20201022220542.84166-12-ajit.khaparde@broadcom.com>
In-Reply-To: <20201020215538.59242-1-ajit.khaparde@broadcom.com>
References: <20201020215538.59242-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v3 11/11] net/bnxt: add VXLAN decap offload support

From: Venkat Duvvuru

VXLAN decap offload can happen in stages. The offload request may not
come as a single flow request; rather, it may come as two flow offload
requests, F1 & F2.

This patch adds support for this two-stage offload design. The match
criteria for F1 are O_DMAC, O_SMAC, O_DST_IP and O_UDP_DPORT, and the
actions are COUNT, MARK and JUMP. The match criteria for F2 are
O_SRC_IP, O_DST_IP, VNI and the inner header fields.

F1 and F2 flow offload requests can come in any order. If the F2
request comes first, F2 cannot be offloaded, as there is no O_DMAC
information in F2. In this case, F2 is deferred until the F1 request
arrives. The F1 request carries the O_DMAC information; using F1's
O_DMAC, the driver creates an L2 context entry in the hardware as part
of offloading F1. F2 then uses F1's O_DMAC to get the L2 context id
associated with this O_DMAC, together with the other flow fields that
were cached when F2 was deferred. F2 requests that arrive after F1 is
offloaded are programmed directly and are not cached.
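For illustration, the two requests could be expressed through testpmd
roughly as below. The MAC/IP addresses, VNI, mark id and jump group are
placeholder values for this sketch, not values mandated by the PMD:

  flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 src is 00:aa:bb:cc:dd:ee / ipv4 dst is 10.1.1.1 / udp dst is 4789 / end actions count / mark id 1 / jump group 1 / end
  flow create 0 group 1 ingress pattern ipv4 src is 10.1.1.2 dst is 10.1.1.1 / udp / vxlan vni is 100 / eth / ipv4 / end actions vxlan_decap / queue index 0 / end

The same two commands may also arrive in the reverse order; in that
case the second request is cached by the driver and programmed once the
first one arrives.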
Signed-off-by: Venkat Duvvuru
Reviewed-by: Kishore Padmanabha
Reviewed-by: Ajit Khaparde
---
 doc/guides/nics/bnxt.rst                      |  18 +
 doc/guides/rel_notes/release_20_11.rst        |   1 +
 drivers/net/bnxt/meson.build                  |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   4 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  10 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |  12 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c       |  84 ++---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 149 +++++++--
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |   2 +
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          |   1 +
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |   2 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      |  75 ++++-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      |   4 +-
 .../net/bnxt/tf_ulp/ulp_template_db_enum.h    |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |   7 +
 drivers/net/bnxt/tf_ulp/ulp_tun.c             | 310 ++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_tun.h             |  92 ++++++
 17 files changed, 694 insertions(+), 82 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_tun.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_tun.h

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 28973fc3e2..eee10a3cd6 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -706,6 +706,24 @@ Notes
   flows to be directed to one or more queues associated with the VNIC id.
   This implementation is supported only when TRUFLOW functionality is disabled.
 
+- An application can issue a VXLAN decap offload request using the rte_flow
+  API either as a single rte_flow request or as a combination of two stages.
+  The PMD currently supports the two-stage offload design.
+  In this approach the offload request may come as two flow offload requests,
+  Flow1 & Flow2. The match criteria for Flow1 are O_DMAC, O_SMAC, O_DST_IP
+  and O_UDP_DPORT, and the actions are COUNT, MARK and JUMP. The match
+  criteria for Flow2 are O_SRC_IP, O_DST_IP, VNI and the inner header fields.
+  Flow1 and Flow2 flow offload requests can come in any order. If the Flow2
+  offload request comes first, Flow2 cannot be offloaded as there is
+  no O_DMAC information in Flow2. In this case, Flow2 will be deferred until
+  the Flow1 flow offload request arrives. The Flow1 request carries the
+  O_DMAC information. Using Flow1's O_DMAC, the driver
+  creates an L2 context entry in the hardware as part of offloading Flow1.
+  Flow2 will then use Flow1's O_DMAC to get the L2 context id associated with
+  this O_DMAC, together with the other flow fields that were cached at the
+  time of deferring Flow2. Flow2 requests that arrive after Flow1 is offloaded
+  will be directly programmed and not cached.
+
 Note: A VNIC represents a virtual interface in the hardware.
 It is a resource in the RX path of the chip and is used to setup various
 target actions such as RSS, MAC filtering etc. for the physical function in use.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4c1961ced7..06b74e56c1 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -148,6 +148,7 @@ New Features
   * Updated HWRM structures to 1.10.1.70 version.
   * Added TRUFLOW support for Stingray devices.
   * Added support for representors on MAIA cores of SR.
+  * Added support for VXLAN decap offload using rte_flow.
 
 * **Updated Cisco enic driver.**
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 39521080f8..bc74f88c69 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -64,6 +64,7 @@ sources = files('bnxt_cpr.c',
 	'tf_ulp/ulp_port_db.c',
 	'tf_ulp/ulp_def_rules.c',
 	'tf_ulp/ulp_fc_mgr.c',
+	'tf_ulp/ulp_tun.c',
 	'tf_ulp/ulp_template_db_wh_plus_act.c',
 	'tf_ulp/ulp_template_db_wh_plus_class.c',
 	'tf_ulp/ulp_template_db_stingray_act.c',
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index f0633f009c..b2629e47b6 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -33,7 +33,9 @@ enum bnxt_tf_rc {
 	BNXT_TF_RC_PARSE_ERR	= -2,
 	BNXT_TF_RC_ERROR	= -1,
-	BNXT_TF_RC_SUCCESS	= 0
+	BNXT_TF_RC_SUCCESS	= 0,
+	BNXT_TF_RC_NORMAL	= 1,
+	BNXT_TF_RC_FID		= 2,
 };
 
 /* eth IPv4 Type */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index d753b5af9f..26fd3009f2 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -1321,6 +1321,16 @@ bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context *ulp_ctx)
 	return ulp_ctx->cfg_data->flow_db;
 }
 
+/* Function to get the tunnel cache table info from the ulp context. */
+struct bnxt_tun_cache_entry *
+bnxt_ulp_cntxt_ptr2_tun_tbl_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->tun_tbl;
+}
+
 /* Function to get the ulp context from eth device. */
 struct bnxt_ulp_context *
 bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev *dev)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index c2c5bcb1d2..db1ee50c05 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -13,6 +13,8 @@
 #include "rte_ethdev.h"
 
 #include "ulp_template_db_enum.h"
+#include "ulp_tun.h"
+#include "bnxt_tf_common.h"
 
 /* NAT defines to reuse existing inner L2 SMAC and DMAC */
 #define BNXT_ULP_NAT_INNER_L2_HEADER_SMAC	0x2000
@@ -55,6 +57,9 @@ struct bnxt_ulp_data {
 	struct bnxt_ulp_df_rule_info	df_rule_info[RTE_MAX_ETHPORTS];
 	struct bnxt_ulp_vfr_rule_info	vfr_rule_info[RTE_MAX_ETHPORTS];
 	enum bnxt_ulp_flow_mem_type	mem_type;
+#define	BNXT_ULP_TUN_ENTRY_INVALID	-1
+#define	BNXT_ULP_MAX_TUN_CACHE_ENTRIES	16
+	struct bnxt_tun_cache_entry	tun_tbl[BNXT_ULP_MAX_TUN_CACHE_ENTRIES];
 };
 
 struct bnxt_ulp_context {
@@ -151,6 +156,10 @@ bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context *ulp_ctx,
 struct bnxt_ulp_flow_db *
 bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context *ulp_ctx);
 
+/* Function to get the tunnel cache table info from the ulp context. */
+struct bnxt_tun_cache_entry *
+bnxt_ulp_cntxt_ptr2_tun_tbl_get(struct bnxt_ulp_context *ulp_ctx);
+
 /* Function to get the ulp context from eth device. */
 struct bnxt_ulp_context *
 bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev *dev);
@@ -214,4 +223,7 @@ bnxt_ulp_cntxt_acquire_fdb_lock(struct bnxt_ulp_context *ulp_ctx);
 void
 bnxt_ulp_cntxt_release_fdb_lock(struct bnxt_ulp_context *ulp_ctx);
 
+int32_t
+ulp_post_process_tun_flow(struct ulp_rte_parser_params *params);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 47fbaba03c..75a7dbe623 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -77,24 +77,22 @@ bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
 void
 bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
 			    struct ulp_rte_parser_params *params,
-			    uint32_t priority, uint32_t class_id,
-			    uint32_t act_tmpl, uint16_t func_id,
-			    uint32_t fid,
 			    enum bnxt_ulp_fdb_type flow_type)
 {
-	mapper_cparms->app_priority = priority;
-	mapper_cparms->dir_attr = params->dir_attr;
-
-	mapper_cparms->class_tid = class_id;
-	mapper_cparms->act_tid = act_tmpl;
-	mapper_cparms->func_id = func_id;
-	mapper_cparms->hdr_bitmap = &params->hdr_bitmap;
-	mapper_cparms->hdr_field = params->hdr_field;
-	mapper_cparms->comp_fld = params->comp_fld;
-	mapper_cparms->act = &params->act_bitmap;
-	mapper_cparms->act_prop = &params->act_prop;
-	mapper_cparms->flow_type = flow_type;
-	mapper_cparms->flow_id = fid;
+	mapper_cparms->flow_type = flow_type;
+	mapper_cparms->app_priority = params->priority;
+	mapper_cparms->dir_attr = params->dir_attr;
+	mapper_cparms->class_tid = params->class_id;
+	mapper_cparms->act_tid = params->act_tmpl;
+	mapper_cparms->func_id = params->func_id;
+	mapper_cparms->hdr_bitmap = &params->hdr_bitmap;
+	mapper_cparms->hdr_field = params->hdr_field;
+	mapper_cparms->comp_fld = params->comp_fld;
+	mapper_cparms->act = &params->act_bitmap;
+	mapper_cparms->act_prop = &params->act_prop;
+	mapper_cparms->flow_id = params->fid;
+	mapper_cparms->parent_flow = params->parent_flow;
+	mapper_cparms->parent_fid = params->parent_fid;
 }
 
 /* Function to create the rte flow. */
@@ -109,7 +107,6 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	struct ulp_rte_parser_params params;
 	struct bnxt_ulp_context *ulp_ctx;
 	int rc, ret = BNXT_TF_RC_ERROR;
-	uint32_t class_id, act_tmpl;
 	struct rte_flow *flow_id;
 	uint16_t func_id;
 	uint32_t fid;
@@ -118,13 +115,13 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 					    pattern, actions,
 					    error) == BNXT_TF_RC_ERROR) {
 		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
-		goto parse_err1;
+		goto flow_error;
 	}
 
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		goto parse_err1;
+		goto flow_error;
 	}
 
 	/* Initialize the parser params */
@@ -145,13 +142,13 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 					     dev->data->port_id,
 					     &func_id)) {
 		BNXT_TF_DBG(ERR, "conversion of port to func id failed\n");
-		goto parse_err1;
+		goto flow_error;
 	}
 
 	/* Protect flow creation */
 	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
 		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
-		goto parse_err1;
+		goto flow_error;
 	}
 
 	/* Allocate a Flow ID for attaching all resources for the flow to.
@@ -162,50 +159,55 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 					 func_id, &fid);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
-		goto parse_err2;
+		goto release_lock;
 	}
 
 	/* Parse the rte flow pattern */
 	ret = bnxt_ulp_rte_parser_hdr_parse(pattern, &params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_err3;
+		goto free_fid;
 
 	/* Parse the rte flow action */
 	ret = bnxt_ulp_rte_parser_act_parse(actions, &params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_err3;
+		goto free_fid;
 
+	params.fid = fid;
+	params.func_id = func_id;
+	params.priority = attr->priority;
 	/* Perform the rte flow post process */
 	ret = bnxt_ulp_rte_parser_post_process(&params);
-	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_err3;
+	if (ret == BNXT_TF_RC_ERROR)
+		goto free_fid;
+	else if (ret == BNXT_TF_RC_FID)
+		goto return_fid;
 
-	ret = ulp_matcher_pattern_match(&params, &class_id);
+	ret = ulp_matcher_pattern_match(&params, &params.class_id);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_err3;
+		goto free_fid;
 
-	ret = ulp_matcher_action_match(&params, &act_tmpl);
+	ret = ulp_matcher_action_match(&params, &params.act_tmpl);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_err3;
+		goto free_fid;
 
-	bnxt_ulp_init_mapper_params(&mapper_cparms, &params, attr->priority,
-				    class_id, act_tmpl, func_id, fid,
+	bnxt_ulp_init_mapper_params(&mapper_cparms, &params,
 				    BNXT_ULP_FDB_TYPE_REGULAR);
 
 	/* Call the ulp mapper to create the flow in the hardware. */
 	ret = ulp_mapper_flow_create(ulp_ctx, &mapper_cparms);
 	if (ret)
-		goto parse_err3;
+		goto free_fid;
 
+return_fid:
 	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 
 	flow_id = (struct rte_flow *)((uintptr_t)fid);
 	return flow_id;
 
-parse_err3:
+free_fid:
 	ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR, fid);
-parse_err2:
+release_lock:
 	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
-parse_err1:
+flow_error:
 	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
 			   "Failed to create flow.");
 	return NULL;
@@ -219,10 +221,10 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 		       const struct rte_flow_action actions[],
 		       struct rte_flow_error *error)
 {
-	struct ulp_rte_parser_params params;
+	struct ulp_rte_parser_params params;
+	struct bnxt_ulp_context *ulp_ctx;
 	uint32_t class_id, act_tmpl;
 	int ret = BNXT_TF_RC_ERROR;
-	struct bnxt_ulp_context *ulp_ctx;
 
 	if (bnxt_ulp_flow_validate_args(attr,
 					pattern, actions,
@@ -256,8 +258,10 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 
 	/* Perform the rte flow post process */
 	ret = bnxt_ulp_rte_parser_post_process(&params);
-	if (ret != BNXT_TF_RC_SUCCESS)
+	if (ret == BNXT_TF_RC_ERROR)
 		goto parse_error;
+	else if (ret == BNXT_TF_RC_FID)
+		return 0;
 
 	ret = ulp_matcher_pattern_match(&params, &class_id);
 
@@ -283,10 +287,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		      struct rte_flow *flow,
 		      struct rte_flow_error *error)
 {
-	int ret = 0;
 	struct bnxt_ulp_context *ulp_ctx;
 	uint32_t flow_id;
 	uint16_t func_id;
+	int ret;
 
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 8780c01cc7..5e7c8ab2e1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -11,6 +11,7 @@
 #include "ulp_mapper.h"
 #include "ulp_flow_db.h"
 #include "ulp_fc_mgr.h"
+#include "ulp_tun.h"
 
 #define ULP_FLOW_DB_RES_DIR_BIT		31
 #define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
@@ -375,6 +376,101 @@ ulp_flow_db_parent_tbl_deinit(struct bnxt_ulp_flow_db *flow_db)
 	}
 }
 
+/* internal validation function for parent flow tbl */
+static struct bnxt_ulp_flow_db *
+ulp_flow_db_parent_arg_validation(struct bnxt_ulp_context *ulp_ctxt,
+				  uint32_t fid)
+{
+	struct bnxt_ulp_flow_db *flow_db;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return NULL;
+	}
+
+	/* check for max flows */
+	if (fid >= flow_db->flow_tbl.num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return NULL;
+	}
+
+	/* No support for parent child db then just exit */
+	if (!flow_db->parent_child_db.entries_count) {
+		BNXT_TF_DBG(ERR, "parent child db not supported\n");
+		return NULL;
+	}
+
+	return flow_db;
+}
+
+/*
+ * Set the tunnel index in the parent flow
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * parent_idx [in] The parent index of the parent flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+static int32_t
+ulp_flow_db_parent_tun_idx_set(struct bnxt_ulp_context *ulp_ctxt,
+			       uint32_t parent_idx, uint8_t tun_idx)
+{
+	struct bnxt_ulp_flow_db *flow_db;
+	struct ulp_fdb_parent_child_db *p_pdb;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* check for parent idx validity */
+	p_pdb = &flow_db->parent_child_db;
+	if (parent_idx >= p_pdb->entries_count ||
+	    !p_pdb->parent_flow_tbl[parent_idx].parent_fid) {
+		BNXT_TF_DBG(ERR, "Invalid parent flow index %x\n", parent_idx);
+		return -EINVAL;
+	}
+
+	p_pdb->parent_flow_tbl[parent_idx].tun_idx = tun_idx;
+	return 0;
+}
+
+/*
+ * Get the tunnel index from the parent flow
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * parent_fid [in] The flow id of the parent flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+static int32_t
+ulp_flow_db_parent_tun_idx_get(struct bnxt_ulp_context *ulp_ctxt,
+			       uint32_t parent_fid, uint8_t *tun_idx)
+{
+	struct bnxt_ulp_flow_db *flow_db;
+	struct ulp_fdb_parent_child_db *p_pdb;
+	uint32_t idx;
+
+	/* validate the arguments */
+	flow_db = ulp_flow_db_parent_arg_validation(ulp_ctxt, parent_fid);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "parent child db validation failed\n");
+		return -EINVAL;
+	}
+
+	p_pdb = &flow_db->parent_child_db;
+	for (idx = 0; idx < p_pdb->entries_count; idx++) {
+		if (p_pdb->parent_flow_tbl[idx].parent_fid == parent_fid) {
+			*tun_idx = p_pdb->parent_flow_tbl[idx].tun_idx;
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
 /*
  * Initialize the flow database. Memory is allocated in this
  * call and assigned to the flow database.
@@ -663,6 +759,9 @@ ulp_flow_db_resource_del(struct bnxt_ulp_context *ulp_ctxt,
 	struct bnxt_ulp_flow_tbl *flow_tbl;
 	struct ulp_fdb_resource_info *nxt_resource, *fid_resource;
 	uint32_t nxt_idx = 0;
+	struct bnxt_tun_cache_entry *tun_tbl;
+	uint8_t tun_idx = 0;
+	int rc;
 
 	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
 	if (!flow_db) {
@@ -739,6 +838,18 @@ ulp_flow_db_resource_del(struct bnxt_ulp_context *ulp_ctxt,
 					     params->resource_hndl);
 	}
 
+	if (params->resource_func == BNXT_ULP_RESOURCE_FUNC_PARENT_FLOW) {
+		tun_tbl = bnxt_ulp_cntxt_ptr2_tun_tbl_get(ulp_ctxt);
+		if (!tun_tbl)
+			return -EINVAL;
+
+		rc = ulp_flow_db_parent_tun_idx_get(ulp_ctxt, fid, &tun_idx);
+		if (rc)
+			return rc;
+
+		ulp_clear_tun_entry(tun_tbl, tun_idx);
+	}
+
 	/* all good, return success */
 	return 0;
 }
@@ -1159,34 +1270,6 @@ ulp_default_flow_db_cfa_action_get(struct bnxt_ulp_context *ulp_ctx,
 	return 0;
 }
 
-/* internal validation function for parent flow tbl */
-static struct bnxt_ulp_flow_db *
-ulp_flow_db_parent_arg_validation(struct bnxt_ulp_context *ulp_ctxt,
-				  uint32_t fid)
-{
-	struct bnxt_ulp_flow_db *flow_db;
-
-	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
-	if (!flow_db) {
-		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
-		return NULL;
-	}
-
-	/* check for max flows */
-	if (fid >= flow_db->flow_tbl.num_flows || !fid) {
-		BNXT_TF_DBG(ERR, "Invalid flow index\n");
-		return NULL;
-	}
-
-	/* No support for parent child db then just exit */
-	if (!flow_db->parent_child_db.entries_count) {
-		BNXT_TF_DBG(ERR, "parent child db not supported\n");
-		return NULL;
-	}
-
-	return flow_db;
-}
-
 /*
  * Allocate the entry in the parent-child database
  *
@@ -1559,7 +1642,7 @@ ulp_flow_db_parent_flow_create(struct bnxt_ulp_mapper_parms *parms)
 	struct ulp_flow_db_res_params fid_parms;
 	uint32_t sub_type = BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT_ACC;
 	struct ulp_flow_db_res_params res_params;
-	int32_t fid_idx;
+	int32_t fid_idx, rc;
 
 	/* create the child flow entry in parent flow table */
 	fid_idx = ulp_flow_db_parent_flow_alloc(parms->ulp_ctx, parms->fid);
@@ -1596,6 +1679,14 @@ ulp_flow_db_parent_flow_create(struct bnxt_ulp_mapper_parms *parms)
 			return -1;
 		}
 	}
+
+	rc = ulp_flow_db_parent_tun_idx_set(parms->ulp_ctx, fid_idx,
+					    parms->tun_idx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Error setting tun_idx in the parent flow\n");
+		return rc;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index 10e69bae45..f7dfd67bed 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -60,6 +60,8 @@ struct ulp_fdb_parent_info {
 	uint64_t	pkt_count;
 	uint64_t	byte_count;
 	uint64_t	*child_fid_bitset;
+	uint32_t	f2_cnt;
+	uint8_t		tun_idx;
 };
 
 /* Structure to maintain parent-child flow relationships */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index d5c129b3a6..29643232d8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2815,6 +2815,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	parms.parent_flow = cparms->parent_flow;
 	parms.parent_fid = cparms->parent_fid;
 	parms.fid = cparms->flow_id;
+	parms.tun_idx = cparms->tun_idx;
 
 	/* Get the device id from the ulp context */
 	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &parms.dev_id)) {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 0595d1555d..9bd94f5c29 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -78,6 +78,7 @@ struct bnxt_ulp_mapper_parms {
 	struct bnxt_ulp_device_params	*device_params;
 	uint32_t			parent_fid;
 	uint32_t			parent_flow;
+	uint8_t				tun_idx;
 };
 
 struct bnxt_ulp_mapper_create_parms {
@@ -98,6 +99,7 @@ struct bnxt_ulp_mapper_create_parms {
 	uint32_t		parent_fid;
 	/* if set then create a parent flow */
 	uint32_t		parent_flow;
+	uint8_t			tun_idx;
 };
 
 /* Function to initialize any dynamic mapper data. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 42021ae8d5..df38b83700 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -6,11 +6,16 @@
 #include "bnxt.h"
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
+#include "bnxt_ulp.h"
 #include "bnxt_tf_common.h"
 #include "ulp_rte_parser.h"
+#include "ulp_matcher.h"
 #include "ulp_utils.h"
 #include "tfp.h"
 #include "ulp_port_db.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+#include "ulp_tun.h"
 
 /* Local defines for the parsing functions */
 #define ULP_VLAN_PRIORITY_SHIFT		13 /* First 3 bits */
@@ -243,14 +248,11 @@ bnxt_ulp_comp_fld_intf_update(struct ulp_rte_parser_params *params)
 	}
 }
 
-/*
- * Function to handle the post processing of the parsing details
- */
-int32_t
-bnxt_ulp_rte_parser_post_process(struct ulp_rte_parser_params *params)
+static int32_t
+ulp_post_process_normal_flow(struct ulp_rte_parser_params *params)
 {
-	enum bnxt_ulp_direction_type dir;
 	enum bnxt_ulp_intf_type match_port_type, act_port_type;
+	enum bnxt_ulp_direction_type dir;
 	uint32_t act_port_set;
 
 	/* Get the computed details */
@@ -305,6 +307,16 @@ bnxt_ulp_rte_parser_post_process(struct ulp_rte_parser_params *params)
 	return 0;
 }
 
+/*
+ * Function to handle the post processing of the parsing details
+ */
+int32_t
+bnxt_ulp_rte_parser_post_process(struct ulp_rte_parser_params *params)
+{
+	ulp_post_process_normal_flow(params);
+	return ulp_post_process_tun_flow(params);
+}
+
 /*
  * Function to compute the flow direction based on the match port details
  */
@@ -679,7 +691,16 @@ ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
 	params->field_idx += BNXT_ULP_PROTO_HDR_VLAN_NUM;
 
 	/* Update the protocol hdr bitmap */
-	if (ULP_BITMAP_ISSET(params->hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_ETH)) {
+	if (ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			     BNXT_ULP_HDR_BIT_O_ETH) ||
+	    ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			     BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			     BNXT_ULP_HDR_BIT_O_IPV6) ||
+	    ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			     BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			     BNXT_ULP_HDR_BIT_O_TCP)) {
 		ULP_BITMAP_SET(params->hdr_bitmap.bits, BNXT_ULP_HDR_BIT_I_ETH);
 		inner_flag = 1;
 	} else {
@@ -875,6 +896,22 @@ ulp_rte_ipv4_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
+	if (!ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			      BNXT_ULP_HDR_BIT_O_ETH) &&
+	    !ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			      BNXT_ULP_HDR_BIT_I_ETH)) {
+		/* Since F2 flow does not include eth item, when parser detects
+		 * IPv4/IPv6 item list and it belongs to the outer header; i.e.,
+		 * o_ipv4/o_ipv6, check if O_ETH and I_ETH is set. If not set,
+		 * then add offset sizeof(o_eth/oo_vlan/oi_vlan) to the index.
+		 * This will allow the parser post processor to update the
+		 * t_dmac in hdr_field[o_eth.dmac]
+		 */
+		idx += (BNXT_ULP_PROTO_HDR_ETH_NUM +
+			BNXT_ULP_PROTO_HDR_VLAN_NUM);
+		params->field_idx = idx;
+	}
+
 	/*
 	 * Copy the rte_flow_item for ipv4 into hdr_field using ipv4
 	 * header fields
@@ -1004,6 +1041,22 @@ ulp_rte_ipv6_hdr_handler(const struct rte_flow_item *item,
 		return BNXT_TF_RC_ERROR;
 	}
 
+	if (!ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			      BNXT_ULP_HDR_BIT_O_ETH) &&
+	    !ULP_BITMAP_ISSET(params->hdr_bitmap.bits,
+			      BNXT_ULP_HDR_BIT_I_ETH)) {
+		/* Since F2 flow does not include eth item, when parser detects
+		 * IPv4/IPv6 item list and it belongs to the outer header; i.e.,
+		 * o_ipv4/o_ipv6, check if O_ETH and I_ETH is set. If not set,
+		 * then add offset sizeof(o_eth/oo_vlan/oi_vlan) to the index.
+		 * This will allow the parser post processor to update the
+		 * t_dmac in hdr_field[o_eth.dmac]
+		 */
+		idx += (BNXT_ULP_PROTO_HDR_ETH_NUM +
+			BNXT_ULP_PROTO_HDR_VLAN_NUM);
+		params->field_idx = idx;
+	}
+
 	/*
	 * Copy the rte_flow_item for ipv6 into hdr_field using ipv6
 	 * header fields
@@ -1109,9 +1162,11 @@ static void
 ulp_rte_l4_proto_type_update(struct ulp_rte_parser_params *param,
 			     uint16_t dst_port)
 {
-	if (dst_port == tfp_cpu_to_be_16(ULP_UDP_PORT_VXLAN))
+	if (dst_port == tfp_cpu_to_be_16(ULP_UDP_PORT_VXLAN)) {
 		ULP_BITMAP_SET(param->hdr_fp_bit.bits,
 			       BNXT_ULP_HDR_BIT_T_VXLAN);
+		ULP_COMP_FLD_IDX_WR(param, BNXT_ULP_CF_IDX_L3_TUN, 1);
+	}
 }
 
 /* Function to handle the parsing of RTE Flow item UDP Header. */
@@ -1143,6 +1198,7 @@ ulp_rte_udp_hdr_handler(const struct rte_flow_item *item,
 		field = ulp_rte_parser_fld_copy(&params->hdr_field[idx],
 						&udp_spec->hdr.src_port,
 						size);
+		size = sizeof(udp_spec->hdr.dst_port);
 		field = ulp_rte_parser_fld_copy(field,
 						&udp_spec->hdr.dst_port,
 						size);
@@ -1689,6 +1745,9 @@ ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action *action_item
 	/* update the hdr_bitmap with vxlan */
 	ULP_BITMAP_SET(params->act_bitmap.bits,
 		       BNXT_ULP_ACTION_BIT_VXLAN_DECAP);
+	/* Update computational field with tunnel decap info */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_L3_TUN_DECAP, 1);
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_L3_TUN, 1);
 	return BNXT_TF_RC_SUCCESS;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index a71aabe5f0..7996317903 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -12,6 +12,7 @@
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
 #include "ulp_mapper.h"
+#include "bnxt_tf_common.h"
 
 /* defines to be used in the tunnel header parsing */
 #define BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS	2
@@ -38,9 +39,6 @@
 void
 bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
 			    struct ulp_rte_parser_params *params,
-			    uint32_t priority, uint32_t class_id,
-			    uint32_t act_tmpl, uint16_t func_id,
-			    uint32_t flow_id,
 			    enum bnxt_ulp_fdb_type flow_type);
 
 /* Function to handle the parsing of the RTE port id. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 10838f5cc2..6802debbb7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -135,7 +135,9 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_L4_HDR_CNT = 41,
 	BNXT_ULP_CF_IDX_VFR_MODE = 42,
 	BNXT_ULP_CF_IDX_LOOPBACK_PARIF = 43,
-	BNXT_ULP_CF_IDX_LAST = 44
+	BNXT_ULP_CF_IDX_L3_TUN = 44,
+	BNXT_ULP_CF_IDX_L3_TUN_DECAP = 45,
+	BNXT_ULP_CF_IDX_LAST = 46
 };
 
 enum bnxt_ulp_cond_opcode {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 69bb61e110..9d690a9378 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -72,6 +72,13 @@ struct ulp_rte_parser_params {
 	struct ulp_rte_act_bitmap	act_bitmap;
 	struct ulp_rte_act_prop		act_prop;
 	uint32_t			dir_attr;
+	uint32_t			priority;
+	uint32_t			fid;
+	uint32_t			parent_flow;
+	uint32_t			parent_fid;
+	uint16_t			func_id;
+	uint32_t			class_id;
+	uint32_t			act_tmpl;
 	struct bnxt_ulp_context		*ulp_ctx;
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_tun.c b/drivers/net/bnxt/tf_ulp/ulp_tun.c
new file mode 100644
index 0000000000..e8d2861880
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_tun.c
@@ -0,0 +1,310 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include
+
+#include "ulp_tun.h"
+#include "ulp_rte_parser.h"
+#include "ulp_template_db_enum.h"
+#include "ulp_template_struct.h"
+#include "ulp_matcher.h"
+#include "ulp_mapper.h"
+#include "ulp_flow_db.h"
+
+/* This function programs the outer tunnel flow in the hardware. */
+static int32_t
+ulp_install_outer_tun_flow(struct ulp_rte_parser_params *params,
+			   struct bnxt_tun_cache_entry *tun_entry,
+			   uint16_t tun_idx)
+{
+	struct bnxt_ulp_mapper_create_parms mparms = { 0 };
+	int ret;
+
+	/* Reset the JUMP action bit in the action bitmap as we don't
+	 * offload this action.
+	 */
+	ULP_BITMAP_RESET(params->act_bitmap.bits, BNXT_ULP_ACTION_BIT_JUMP);
+
+	ULP_BITMAP_SET(params->hdr_bitmap.bits, BNXT_ULP_HDR_BIT_F1);
+
+	ret = ulp_matcher_pattern_match(params, &params->class_id);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto err;
+
+	ret = ulp_matcher_action_match(params, &params->act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto err;
+
+	params->parent_flow = true;
+	bnxt_ulp_init_mapper_params(&mparms, params,
+				    BNXT_ULP_FDB_TYPE_REGULAR);
+	mparms.tun_idx = tun_idx;
+
+	/* Call the ulp mapper to create the flow in the hardware. */
+	ret = ulp_mapper_flow_create(params->ulp_ctx, &mparms);
+	if (ret)
+		goto err;
+
+	/* Store the tunnel dmac in the tunnel cache table and use it while
+	 * programming tunnel flow F2.
+	 */
+	memcpy(tun_entry->t_dmac,
+	       &params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX].spec,
+	       RTE_ETHER_ADDR_LEN);
+
+	tun_entry->valid = true;
+	tun_entry->state = BNXT_ULP_FLOW_STATE_TUN_O_OFFLD;
+	tun_entry->outer_tun_flow_id = params->fid;
+
+	/* F1 and its related F2s are correlated based on
+	 * Tunnel Destination IP Address.
+	 */
+	if (tun_entry->t_dst_ip_valid)
+		goto done;
+	if (ULP_BITMAP_ISSET(params->hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_IPV4))
+		memcpy(&tun_entry->t_dst_ip,
+		       &params->hdr_field[ULP_TUN_O_IPV4_DIP_INDEX].spec,
+		       sizeof(rte_be32_t));
+	else
+		memcpy(tun_entry->t_dst_ip6,
+		       &params->hdr_field[ULP_TUN_O_IPV6_DIP_INDEX].spec,
+		       sizeof(tun_entry->t_dst_ip6));
+	tun_entry->t_dst_ip_valid = true;
+
+done:
+	return BNXT_TF_RC_FID;
+
+err:
+	memset(tun_entry, 0, sizeof(struct bnxt_tun_cache_entry));
+	return BNXT_TF_RC_ERROR;
+}
+
+/* This function programs the inner tunnel flow in the hardware. */
+static void
+ulp_install_inner_tun_flow(struct bnxt_tun_cache_entry *tun_entry)
+{
+	struct bnxt_ulp_mapper_create_parms mparms = { 0 };
+	struct ulp_rte_parser_params *params;
+	int ret;
+
+	/* F2 doesn't have the tunnel dmac, so use the tunnel dmac that was
+	 * stored during F1 programming.
+	 */
+	params = &tun_entry->first_inner_tun_params;
+	memcpy(&params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX],
+	       tun_entry->t_dmac, RTE_ETHER_ADDR_LEN);
+	params->parent_fid = tun_entry->outer_tun_flow_id;
+	params->fid = tun_entry->first_inner_tun_flow_id;
+
+	bnxt_ulp_init_mapper_params(&mparms, params,
+				    BNXT_ULP_FDB_TYPE_REGULAR);
+
+	ret = ulp_mapper_flow_create(params->ulp_ctx, &mparms);
+	if (ret)
+		PMD_DRV_LOG(ERR, "Failed to create F2 flow.");
+}
+
+/* This function installs either both the outer & inner tunnel flows
+ * or just the outer tunnel flow, based on the flow state.
+ */
+static int32_t
+ulp_post_process_outer_tun_flow(struct ulp_rte_parser_params *params,
+				struct bnxt_tun_cache_entry *tun_entry,
+				uint16_t tun_idx)
+{
+	enum bnxt_ulp_tun_flow_state flow_state;
+	int ret;
+
+	flow_state = tun_entry->state;
+	ret = ulp_install_outer_tun_flow(params, tun_entry, tun_idx);
+	if (ret == BNXT_TF_RC_ERROR)
+		return ret;
+
+	/* If flow_state == BNXT_ULP_FLOW_STATE_NORMAL before installing
+	 * F1, that means F2 is not deferred. Hence, no need to install F2.
+	 */
+	if (flow_state != BNXT_ULP_FLOW_STATE_NORMAL)
+		ulp_install_inner_tun_flow(tun_entry);
+
+	return BNXT_TF_RC_FID;
+}
+
+/* This function will be called if the inner tunnel flow request comes
+ * before the outer tunnel flow request.
+ */
+static int32_t
+ulp_post_process_first_inner_tun_flow(struct ulp_rte_parser_params *params,
+				      struct bnxt_tun_cache_entry *tun_entry)
+{
+	int ret;
+
+	ret = ulp_matcher_pattern_match(params, &params->class_id);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		return BNXT_TF_RC_ERROR;
+
+	ret = ulp_matcher_action_match(params, &params->act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		return BNXT_TF_RC_ERROR;
+
+	/* If the tunnel F2 flow comes first then we can't install it in the
+	 * hardware, because F2 flow will not have the L2 context information.
+	 * So, just cache the F2 information and program it in the context
+	 * of F1 flow installation.
+	 */
+	memcpy(&tun_entry->first_inner_tun_params, params,
+	       sizeof(struct ulp_rte_parser_params));
+
+	tun_entry->first_inner_tun_flow_id = params->fid;
+	tun_entry->state = BNXT_ULP_FLOW_STATE_TUN_I_CACHED;
+
+	/* F1 and its related F2s are correlated based on
+	 * Tunnel Destination IP Address. It could be already set, if
+	 * the inner flow got offloaded first.
+	 */
+	if (tun_entry->t_dst_ip_valid)
+		goto done;
+	if (ULP_BITMAP_ISSET(params->hdr_bitmap.bits, BNXT_ULP_HDR_BIT_O_IPV4))
+		memcpy(&tun_entry->t_dst_ip,
+		       &params->hdr_field[ULP_TUN_O_IPV4_DIP_INDEX].spec,
+		       sizeof(rte_be32_t));
+	else
+		memcpy(tun_entry->t_dst_ip6,
+		       &params->hdr_field[ULP_TUN_O_IPV6_DIP_INDEX].spec,
+		       sizeof(tun_entry->t_dst_ip6));
+	tun_entry->t_dst_ip_valid = true;
+
+done:
+	return BNXT_TF_RC_FID;
+}
+
+/* This function will be called if the inner tunnel flow request comes
+ * after the outer tunnel flow request.
+ */
+static int32_t
+ulp_post_process_inner_tun_flow(struct ulp_rte_parser_params *params,
+				struct bnxt_tun_cache_entry *tun_entry)
+{
+	memcpy(&params->hdr_field[ULP_TUN_O_DMAC_HDR_FIELD_INDEX],
+	       tun_entry->t_dmac, RTE_ETHER_ADDR_LEN);
+
+	params->parent_fid = tun_entry->outer_tun_flow_id;
+
+	return BNXT_TF_RC_NORMAL;
+}
+
+static int32_t
+ulp_get_tun_entry(struct ulp_rte_parser_params *params,
+		  struct bnxt_tun_cache_entry **tun_entry,
+		  uint16_t *tun_idx)
+{
+	int i, first_free_entry = BNXT_ULP_TUN_ENTRY_INVALID;
+	struct bnxt_tun_cache_entry *tun_tbl;
+	bool tun_entry_found = false, free_entry_found = false;
+
+	tun_tbl = bnxt_ulp_cntxt_ptr2_tun_tbl_get(params->ulp_ctx);
+	if (!tun_tbl)
+		return BNXT_TF_RC_ERROR;
+
+	for (i = 0; i < BNXT_ULP_MAX_TUN_CACHE_ENTRIES; i++) {
+		if (!memcmp(&tun_tbl[i].t_dst_ip,
+			    &params->hdr_field[ULP_TUN_O_IPV4_DIP_INDEX].spec,
+			    sizeof(rte_be32_t)) ||
+		    !memcmp(&tun_tbl[i].t_dst_ip6,
+			    &params->hdr_field[ULP_TUN_O_IPV6_DIP_INDEX].spec,
+			    16)) {
+			tun_entry_found = true;
+			break;
+		}
+
+		if (!tun_tbl[i].t_dst_ip_valid && !free_entry_found) {
+			first_free_entry = i;
+			free_entry_found = true;
+		}
+	}
+
+	if (tun_entry_found) {
+		*tun_entry = &tun_tbl[i];
+		*tun_idx = i;
+	} else {
+		if (first_free_entry == BNXT_ULP_TUN_ENTRY_INVALID)
+			return BNXT_TF_RC_ERROR;
+		*tun_entry = &tun_tbl[first_free_entry];
+		*tun_idx = first_free_entry;
+	}
+
+	return 0;
+}
+
+int32_t
+ulp_post_process_tun_flow(struct ulp_rte_parser_params *params)
+{
+	bool outer_tun_sig, inner_tun_sig, first_inner_tun_flow;
+	bool outer_tun_reject, inner_tun_reject, outer_tun_flow, inner_tun_flow;
+	enum bnxt_ulp_tun_flow_state flow_state;
+	struct bnxt_tun_cache_entry *tun_entry;
+	uint32_t l3_tun, l3_tun_decap;
+	uint16_t tun_idx;
+	int rc;
+
+	/* Computational fields that indicate it's a TUNNEL DECAP flow */
+	l3_tun = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_L3_TUN);
+	l3_tun_decap = ULP_COMP_FLD_IDX_RD(params,
+					   BNXT_ULP_CF_IDX_L3_TUN_DECAP);
+	if (!l3_tun)
+		return BNXT_TF_RC_NORMAL;
+
+	rc = ulp_get_tun_entry(params, &tun_entry, &tun_idx);
+	if (rc == BNXT_TF_RC_ERROR)
+		return rc;
+
+	flow_state = tun_entry->state;
+	/* Outer tunnel flow validation */
+	outer_tun_sig = BNXT_OUTER_TUN_SIGNATURE(l3_tun, params);
+	outer_tun_flow = BNXT_OUTER_TUN_FLOW(outer_tun_sig);
+	outer_tun_reject = BNXT_REJECT_OUTER_TUN_FLOW(flow_state,
+						      outer_tun_sig);
+
+	/* Inner tunnel flow validation */
+	inner_tun_sig = BNXT_INNER_TUN_SIGNATURE(l3_tun, l3_tun_decap, params);
+	first_inner_tun_flow = BNXT_FIRST_INNER_TUN_FLOW(flow_state,
+							 inner_tun_sig);
+	inner_tun_flow = BNXT_INNER_TUN_FLOW(flow_state, inner_tun_sig);
+	inner_tun_reject = BNXT_REJECT_INNER_TUN_FLOW(flow_state,
+						      inner_tun_sig);
+
+	if (outer_tun_reject) {
+		tun_entry->outer_tun_rej_cnt++;
+		BNXT_TF_DBG(ERR,
+			    "Tunnel F1 flow rejected, COUNT: %d\n",
+			    tun_entry->outer_tun_rej_cnt);
+	/* Inner tunnel flow is rejected if it comes between the first inner
+	 * tunnel flow and the outer flow requests.
+	 */
+	} else if (inner_tun_reject) {
+		tun_entry->inner_tun_rej_cnt++;
+		BNXT_TF_DBG(ERR,
+			    "Tunnel F2 flow rejected, COUNT: %d\n",
+			    tun_entry->inner_tun_rej_cnt);
+	}
+
+	if (outer_tun_reject || inner_tun_reject)
+		return BNXT_TF_RC_ERROR;
+	else if (first_inner_tun_flow)
+		return ulp_post_process_first_inner_tun_flow(params, tun_entry);
+	else if (outer_tun_flow)
+		return ulp_post_process_outer_tun_flow(params, tun_entry,
+						       tun_idx);
+	else if (inner_tun_flow)
+		return ulp_post_process_inner_tun_flow(params, tun_entry);
+	else
+		return BNXT_TF_RC_NORMAL;
+}
+
+void
+ulp_clear_tun_entry(struct bnxt_tun_cache_entry *tun_tbl, uint8_t tun_idx)
+{
+	memset(&tun_tbl[tun_idx], 0,
+	       sizeof(struct bnxt_tun_cache_entry));
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_tun.h b/drivers/net/bnxt/tf_ulp/ulp_tun.h
new file mode 100644
index 0000000000..ad70ae6164
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_tun.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_TUN_H_
+#define _BNXT_TUN_H_
+
+#include
+#include
+#include
+
+#include "rte_ethdev.h"
+
+#include "ulp_template_db_enum.h"
+#include "ulp_template_struct.h"
+
+#define	BNXT_OUTER_TUN_SIGNATURE(l3_tun, params)		\
+	((l3_tun) &&						\
+	 ULP_BITMAP_ISSET((params)->act_bitmap.bits,		\
+			  BNXT_ULP_ACTION_BIT_JUMP))
+#define	BNXT_INNER_TUN_SIGNATURE(l3_tun, l3_tun_decap, params)	\
+	((l3_tun) && (l3_tun_decap) &&				\
+	 !ULP_BITMAP_ISSET((params)->hdr_bitmap.bits,		\
+			   BNXT_ULP_HDR_BIT_O_ETH))
+
+#define	BNXT_FIRST_INNER_TUN_FLOW(state, inner_tun_sig)	\
+	((state) == BNXT_ULP_FLOW_STATE_NORMAL && (inner_tun_sig))
+#define	BNXT_INNER_TUN_FLOW(state, inner_tun_sig)		\
+	((state) == BNXT_ULP_FLOW_STATE_TUN_O_OFFLD && (inner_tun_sig))
+#define	BNXT_OUTER_TUN_FLOW(outer_tun_sig)		((outer_tun_sig))
+
+/* It is invalid to get another outer flow offload request
+ * for the same tunnel, while the outer flow is already offloaded.
+ */
+#define	BNXT_REJECT_OUTER_TUN_FLOW(state, outer_tun_sig)	\
+	((state) == BNXT_ULP_FLOW_STATE_TUN_O_OFFLD && (outer_tun_sig))
+/* It is invalid to get another inner flow offload request
+ * for the same tunnel, while the outer flow is not yet offloaded.
+ */
+#define	BNXT_REJECT_INNER_TUN_FLOW(state, inner_tun_sig)	\
+	((state) == BNXT_ULP_FLOW_STATE_TUN_I_CACHED && (inner_tun_sig))
+
+#define	ULP_TUN_O_DMAC_HDR_FIELD_INDEX	1
+#define	ULP_TUN_O_IPV4_DIP_INDEX	19
+#define	ULP_TUN_O_IPV6_DIP_INDEX	17
+
+/* When a flow offload request comes in, the following state transitions
+ * happen based on the order in which the outer & inner flow offload
+ * requests arrive.
+ *
+ * If the inner tunnel flow offload request arrives first, then the flow
+ * state will change from BNXT_ULP_FLOW_STATE_NORMAL to
+ * BNXT_ULP_FLOW_STATE_TUN_I_CACHED, and the following outer tunnel
+ * flow offload request will change the state of the flow to
+ * BNXT_ULP_FLOW_STATE_TUN_O_OFFLD from BNXT_ULP_FLOW_STATE_TUN_I_CACHED.
+ *
+ * If the outer tunnel flow offload request arrives first, then the flow
+ * state will change from BNXT_ULP_FLOW_STATE_NORMAL to
+ * BNXT_ULP_FLOW_STATE_TUN_O_OFFLD.
+ *
+ * Once the flow state is in BNXT_ULP_FLOW_STATE_TUN_O_OFFLD, any inner
+ * tunnel flow offload requests after that point will be treated as
+ * normal flows, and the tunnel flow state remains in
+ * BNXT_ULP_FLOW_STATE_TUN_O_OFFLD.
+ */
+enum bnxt_ulp_tun_flow_state {
+	BNXT_ULP_FLOW_STATE_NORMAL = 0,
+	BNXT_ULP_FLOW_STATE_TUN_O_OFFLD,
+	BNXT_ULP_FLOW_STATE_TUN_I_CACHED
+};
+
+struct bnxt_tun_cache_entry {
+	enum bnxt_ulp_tun_flow_state	state;
+	bool				valid;
+	bool				t_dst_ip_valid;
+	uint8_t				t_dmac[RTE_ETHER_ADDR_LEN];
+	union {
+		rte_be32_t	t_dst_ip;
+		uint8_t		t_dst_ip6[16];
+	};
+	uint32_t			outer_tun_flow_id;
+	uint32_t			first_inner_tun_flow_id;
+	uint16_t			outer_tun_rej_cnt;
+	uint16_t			inner_tun_rej_cnt;
+	struct ulp_rte_parser_params	first_inner_tun_params;
+};
+
+void
+ulp_clear_tun_entry(struct bnxt_tun_cache_entry *tun_tbl, uint8_t tun_idx);
+
+#endif