From patchwork Sat Oct 17 06:28:07 2020
X-Patchwork-Submitter: Venkat Duvvuru
X-Patchwork-Id: 81194
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Venkat Duvvuru
To: dev@dpdk.org
Cc: Venkat Duvvuru
Date: Sat, 17 Oct 2020 11:58:07 +0530
Message-Id: <1602916089-18576-13-git-send-email-venkatkumar.duvvuru@broadcom.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602916089-18576-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
References: <1602916089-18576-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
Subject: [dpdk-dev] [PATCH 12/14] net/bnxt: refactor flow id allocation
List-Id: DPDK patches and discussions

Currently, the flow id is allocated inside ulp_mapper_flow_create. However, with the VXLAN decap feature, if the F2 flow arrives before the F1 flow, F2 is only cached and not actually installed in the hardware, so the code returns without calling ulp_mapper_flow_create. ULP still has to return a valid flow id to the stack; hence, move the flow id allocation outside ulp_mapper_flow_create.
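The change can be read as the following ordering on the create path. This is a condensed, illustrative sketch only: the wrapper name create_flow_sketch and the trimmed parameter lists are not part of the patch, and the driver-internal ULP headers are assumed for the types used; the real entry point is bnxt_ulp_flow_create in the diff below.

/* Sketch: the caller now owns flow id allocation; the mapper only
 * programs resources against the id handed to it via cparms->flow_id.
 */
static struct rte_flow *
create_flow_sketch(struct bnxt_ulp_context *ulp_ctx, uint16_t func_id)
{
	struct bnxt_ulp_mapper_create_parms mapper_cparms = { 0 };
	uint32_t fid;

	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx))
		return NULL;

	/* 1. Allocate the flow id first, so a valid id exists even when the
	 *    mapper defers hardware programming (e.g. a cached VXLAN F2 flow).
	 */
	if (ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
				  func_id, &fid))
		goto unlock;

	/* 2. Pass the pre-allocated id to the mapper via the create parms. */
	mapper_cparms.flow_id = fid;

	/* 3. The mapper no longer allocates or returns the id. */
	if (ulp_mapper_flow_create(ulp_ctx, &mapper_cparms))
		goto free_fid;

	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
	return (struct rte_flow *)(uintptr_t)fid;

free_fid:
	ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR, fid);
unlock:
	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
	return NULL;
}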
Signed-off-by: Venkat Duvvuru
Reviewed-by: Somnath Kotur
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  | 109 ++++++++++++++++++++++---------
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c  |  48 ++++++++++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.c     |  35 +---------
 drivers/net/bnxt/tf_ulp/ulp_mapper.h     |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h |   9 +++
 5 files changed, 132 insertions(+), 73 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index c7b2982..47fbaba 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -74,6 +74,29 @@ bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
 		params->dir_attr |= BNXT_ULP_FLOW_ATTR_TRANSFER;
 }
 
+void
+bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
+			    struct ulp_rte_parser_params *params,
+			    uint32_t priority, uint32_t class_id,
+			    uint32_t act_tmpl, uint16_t func_id,
+			    uint32_t fid,
+			    enum bnxt_ulp_fdb_type flow_type)
+{
+	mapper_cparms->app_priority = priority;
+	mapper_cparms->dir_attr = params->dir_attr;
+
+	mapper_cparms->class_tid = class_id;
+	mapper_cparms->act_tid = act_tmpl;
+	mapper_cparms->func_id = func_id;
+	mapper_cparms->hdr_bitmap = &params->hdr_bitmap;
+	mapper_cparms->hdr_field = params->hdr_field;
+	mapper_cparms->comp_fld = params->comp_fld;
+	mapper_cparms->act = &params->act_bitmap;
+	mapper_cparms->act_prop = &params->act_prop;
+	mapper_cparms->flow_type = flow_type;
+	mapper_cparms->flow_id = fid;
+}
+
 /* Function to create the rte flow. */
 static struct rte_flow *
 bnxt_ulp_flow_create(struct rte_eth_dev *dev,
@@ -85,22 +108,23 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	struct bnxt_ulp_mapper_create_parms mapper_cparms = { 0 };
 	struct ulp_rte_parser_params params;
 	struct bnxt_ulp_context *ulp_ctx;
+	int rc, ret = BNXT_TF_RC_ERROR;
 	uint32_t class_id, act_tmpl;
 	struct rte_flow *flow_id;
+	uint16_t func_id;
 	uint32_t fid;
-	int ret = BNXT_TF_RC_ERROR;
 
 	if (bnxt_ulp_flow_validate_args(attr,
 					pattern, actions,
 					error) == BNXT_TF_RC_ERROR) {
 		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
-		goto parse_error;
+		goto parse_err1;
 	}
 
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		goto parse_error;
+		goto parse_err1;
 	}
 
 	/* Initialize the parser params */
@@ -116,56 +140,72 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	ULP_COMP_FLD_IDX_WR(&params, BNXT_ULP_CF_IDX_SVIF_FLAG,
 			    BNXT_ULP_INVALID_SVIF_VAL);
 
+	/* Get the function id */
+	if (ulp_port_db_port_func_id_get(ulp_ctx,
+					 dev->data->port_id,
+					 &func_id)) {
+		BNXT_TF_DBG(ERR, "conversion of port to func id failed\n");
+		goto parse_err1;
+	}
+
+	/* Protect flow creation */
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		goto parse_err1;
+	}
+
+	/* Allocate a Flow ID for attaching all resources for the flow to.
+	 * Once allocated, all errors have to walk the list of resources and
+	 * free each of them.
+	 */
+	rc = ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
+				   func_id, &fid);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		goto parse_err2;
+	}
+
 	/* Parse the rte flow pattern */
 	ret = bnxt_ulp_rte_parser_hdr_parse(pattern, &params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	/* Parse the rte flow action */
 	ret = bnxt_ulp_rte_parser_act_parse(actions, &params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	/* Perform the rte flow post process */
 	ret = bnxt_ulp_rte_parser_post_process(&params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	ret = ulp_matcher_pattern_match(&params, &class_id);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	ret = ulp_matcher_action_match(&params, &act_tmpl);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
-	mapper_cparms.app_priority = attr->priority;
-	mapper_cparms.hdr_bitmap = &params.hdr_bitmap;
-	mapper_cparms.hdr_field = params.hdr_field;
-	mapper_cparms.comp_fld = params.comp_fld;
-	mapper_cparms.act = &params.act_bitmap;
-	mapper_cparms.act_prop = &params.act_prop;
-	mapper_cparms.class_tid = class_id;
-	mapper_cparms.act_tid = act_tmpl;
-	mapper_cparms.flow_type = BNXT_ULP_FDB_TYPE_REGULAR;
+	bnxt_ulp_init_mapper_params(&mapper_cparms, &params, attr->priority,
+				    class_id, act_tmpl, func_id, fid,
+				    BNXT_ULP_FDB_TYPE_REGULAR);
+
+	/* Call the ulp mapper to create the flow in the hardware. */
+	ret = ulp_mapper_flow_create(ulp_ctx, &mapper_cparms);
+	if (ret)
+		goto parse_err3;
 
-	/* Get the function id */
-	if (ulp_port_db_port_func_id_get(ulp_ctx,
-					 dev->data->port_id,
-					 &mapper_cparms.func_id)) {
-		BNXT_TF_DBG(ERR, "conversion of port to func id failed\n");
-		goto parse_error;
-	}
-	mapper_cparms.dir_attr = params.dir_attr;
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 
-	/* Call the ulp mapper to create the flow in the hardware. */
-	ret = ulp_mapper_flow_create(ulp_ctx, &mapper_cparms, &fid);
-	if (!ret) {
-		flow_id = (struct rte_flow *)((uintptr_t)fid);
-		return flow_id;
-	}
+	flow_id = (struct rte_flow *)((uintptr_t)fid);
+	return flow_id;
 
-parse_error:
+parse_err3:
+	ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR, fid);
+parse_err2:
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+parse_err1:
 	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
 			   "Failed to create flow.");
 	return NULL;
@@ -281,6 +321,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		return -EINVAL;
+	}
 	ret = ulp_mapper_flow_destroy(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
 				      flow_id);
 	if (ret) {
@@ -290,6 +334,7 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
 				   "Failed to destroy flow.");
 	}
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 
 	return ret;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
index c36d4d4..ec504fc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -304,8 +304,8 @@ ulp_default_flow_create(struct rte_eth_dev *eth_dev,
 	struct ulp_rte_act_prop act_prop;
 	struct ulp_rte_act_bitmap act = { 0 };
 	struct bnxt_ulp_context *ulp_ctx;
-	uint32_t type, ulp_flags = 0;
-	int rc;
+	uint32_t type, ulp_flags = 0, fid;
+	int rc = 0;
 
 	memset(&mapper_params, 0, sizeof(mapper_params));
 	memset(hdr_field, 0, sizeof(hdr_field));
@@ -316,6 +316,8 @@ ulp_default_flow_create(struct rte_eth_dev *eth_dev,
 	mapper_params.act = &act;
 	mapper_params.act_prop = &act_prop;
 	mapper_params.comp_fld = comp_fld;
+	mapper_params.class_tid = ulp_class_tid;
+	mapper_params.flow_type = BNXT_ULP_FDB_TYPE_DEFAULT;
 
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
 	if (!ulp_ctx) {
@@ -350,16 +352,43 @@ ulp_default_flow_create(struct rte_eth_dev *eth_dev,
 		type = param_list->type;
 	}
 
-	mapper_params.class_tid = ulp_class_tid;
-	mapper_params.flow_type = BNXT_ULP_FDB_TYPE_DEFAULT;
+	/* Get the function id */
+	if (ulp_port_db_port_func_id_get(ulp_ctx,
+					 eth_dev->data->port_id,
+					 &mapper_params.func_id)) {
+		BNXT_TF_DBG(ERR, "conversion of port to func id failed\n");
+		goto err1;
+	}
+
+	/* Protect flow creation */
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		goto err1;
+	}
 
-	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params, flow_id);
+	rc = ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT,
+				   mapper_params.func_id, &fid);
 	if (rc) {
-		BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
-		return rc;
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		goto err2;
 	}
 
+	mapper_params.flow_id = fid;
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params);
+	if (rc)
+		goto err3;
+
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+	*flow_id = fid;
 	return 0;
+
+err3:
+	ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT, fid);
+err2:
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+err1:
+	BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
+	return rc;
 }
 
 /*
@@ -391,10 +420,15 @@ ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
 		return rc;
 	}
 
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		return -EINVAL;
+	}
 	rc = ulp_mapper_flow_destroy(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT,
 				     flow_id);
 	if (rc)
 		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
 
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 27b4780..d5c129b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2723,15 +2723,9 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context *ulp_ctx,
 		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
 		return -EINVAL;
 	}
-	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
-		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
-		return -EINVAL;
-	}
 
 	rc = ulp_mapper_resources_free(ulp_ctx, flow_type, fid);
-	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 	return rc;
-
 }
 
 /* Function to handle the default global templates that are allocated during
@@ -2795,8 +2789,7 @@ ulp_mapper_glb_template_table_init(struct bnxt_ulp_context *ulp_ctx)
  */
 int32_t
 ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
-		       struct bnxt_ulp_mapper_create_parms *cparms,
-		       uint32_t *flowid)
+		       struct bnxt_ulp_mapper_create_parms *cparms)
 {
 	struct bnxt_ulp_mapper_parms parms;
 	struct ulp_regfile regfile;
@@ -2821,6 +2814,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	parms.flow_type = cparms->flow_type;
 	parms.parent_flow = cparms->parent_flow;
 	parms.parent_fid = cparms->parent_fid;
+	parms.fid = cparms->flow_id;
 
 	/* Get the device id from the ulp context */
 	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &parms.dev_id)) {
@@ -2861,26 +2855,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 		return -EINVAL;
 	}
 
-	/* Protect flow creation */
-	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
-		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
-		return -EINVAL;
-	}
-
-	/* Allocate a Flow ID for attaching all resources for the flow to.
-	 * Once allocated, all errors have to walk the list of resources and
-	 * free each of them.
-	 */
-	rc = ulp_flow_db_fid_alloc(ulp_ctx,
-				   parms.flow_type,
-				   cparms->func_id,
-				   &parms.fid);
-	if (rc) {
-		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
-		bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
-		return rc;
-	}
-
+	/* Process the action template list from the selected action table*/
 	if (parms.act_tid) {
 		parms.tmpl_type = BNXT_ULP_TEMPLATE_TYPE_ACTION;
 		/* Process the action template tables */
@@ -2911,13 +2886,9 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 		goto flow_error;
 	}
 
-	*flowid = parms.fid;
-	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
-
 	return rc;
 
 flow_error:
-	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 	/* Free all resources that were allocated during flow creation */
 	trc = ulp_mapper_flow_destroy(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
 				      parms.fid);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 542e41e..0595d15 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -93,6 +93,7 @@ struct bnxt_ulp_mapper_create_parms {
 	uint32_t dir_attr;
 	enum bnxt_ulp_fdb_type flow_type;
+	uint32_t flow_id;
 
 	/* if set then create it as a child flow with parent as parent_fid */
 	uint32_t parent_fid;
 	/* if set then create a parent flow */
@@ -113,8 +114,7 @@ ulp_mapper_deinit(struct bnxt_ulp_context *ulp_ctx);
 */
 int32_t
 ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
-		       struct bnxt_ulp_mapper_create_parms *parms,
-		       uint32_t *flowid);
+		       struct bnxt_ulp_mapper_create_parms *parms);
 
 /* Function that frees all resources associated with the flow.
  */
 int32_t
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 41f3df9..bb5a8a4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -11,6 +11,7 @@
 #include 
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
+#include "ulp_mapper.h"
 
 /* defines to be used in the tunnel header parsing */
 #define BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS	2
@@ -34,6 +35,14 @@
 #define BNXT_ULP_PARSER_IPV6_TC			0x0ff00000
 #define BNXT_ULP_PARSER_IPV6_FLOW_LABEL		0x000fffff
 
+void
+bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
+			    struct ulp_rte_parser_params *params,
+			    uint32_t priority, uint32_t class_id,
+			    uint32_t act_tmpl, uint16_t func_id,
+			    uint32_t flow_id,
+			    enum bnxt_ulp_fdb_type flow_type);
+
 /* Function to handle the parsing of the RTE port id. */
 int32_t
 ulp_rte_parser_implicit_match_port_process(struct ulp_rte_parser_params *param);
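For reference, the destroy path follows the same ownership split after this patch: the callers (bnxt_ulp_flow_destroy and ulp_default_flow_destroy) take the flow-db lock, and ulp_mapper_flow_destroy only frees the flow's resources. A condensed sketch, again with a hypothetical wrapper name and the driver-internal types assumed:

/* Sketch: lock ownership on the destroy path after the refactor. */
static int
destroy_flow_sketch(struct bnxt_ulp_context *ulp_ctx, uint32_t fid)
{
	int rc;

	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx))
		return -EINVAL;

	/* The mapper no longer acquires the flow-db lock internally. */
	rc = ulp_mapper_flow_destroy(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR, fid);

	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
	return rc;
}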