From patchwork Tue Oct 20 21:55:36 2020
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 81628
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Venkat Duvvuru, Somnath Kotur
Date: Tue, 20 Oct 2020 14:55:36 -0700
Message-Id: <20201020215538.59242-10-ajit.khaparde@broadcom.com>
In-Reply-To: <20201020215538.59242-1-ajit.khaparde@broadcom.com>
References: <1602916089-18576-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
 <20201020215538.59242-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 09/11] net/bnxt: refactor flow id allocation

From: Venkat Duvvuru

Currently, the flow id is allocated inside ulp_mapper_flow_create.
However, with the VXLAN decap feature, if the F2 flow arrives before the
F1 flow, F2 is cached and not actually installed in the hardware, which
means the code returns without calling ulp_mapper_flow_create. But ULP
still has to return a valid flow id to the stack. Hence, move the flow id
allocation outside ulp_mapper_flow_create.

Signed-off-by: Venkat Duvvuru
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c  | 109 ++++++++++++++++-------
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c  |  48 ++++++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.c     |  35 +-------
 drivers/net/bnxt/tf_ulp/ulp_mapper.h     |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h |   9 ++
 5 files changed, 132 insertions(+), 73 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index c7b29824e..47fbaba03 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -74,6 +74,29 @@ bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
 		params->dir_attr |= BNXT_ULP_FLOW_ATTR_TRANSFER;
 }
 
+void
+bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
+			    struct ulp_rte_parser_params *params,
+			    uint32_t priority, uint32_t class_id,
+			    uint32_t act_tmpl, uint16_t func_id,
+			    uint32_t fid,
+			    enum bnxt_ulp_fdb_type flow_type)
+{
+	mapper_cparms->app_priority = priority;
+	mapper_cparms->dir_attr = params->dir_attr;
+
+	mapper_cparms->class_tid = class_id;
+	mapper_cparms->act_tid = act_tmpl;
+	mapper_cparms->func_id = func_id;
+	mapper_cparms->hdr_bitmap = &params->hdr_bitmap;
+	mapper_cparms->hdr_field = params->hdr_field;
+	mapper_cparms->comp_fld = params->comp_fld;
+	mapper_cparms->act = &params->act_bitmap;
+	mapper_cparms->act_prop = &params->act_prop;
+	mapper_cparms->flow_type = flow_type;
+	mapper_cparms->flow_id = fid;
+}
+
 /* Function to create the rte flow. */
 static struct rte_flow *
 bnxt_ulp_flow_create(struct rte_eth_dev *dev,
@@ -85,22 +108,23 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	struct bnxt_ulp_mapper_create_parms mapper_cparms = { 0 };
 	struct ulp_rte_parser_params params;
 	struct bnxt_ulp_context *ulp_ctx;
+	int rc, ret = BNXT_TF_RC_ERROR;
 	uint32_t class_id, act_tmpl;
 	struct rte_flow *flow_id;
+	uint16_t func_id;
 	uint32_t fid;
-	int ret = BNXT_TF_RC_ERROR;
 
 	if (bnxt_ulp_flow_validate_args(attr,
 					pattern, actions,
 					error) == BNXT_TF_RC_ERROR) {
 		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
-		goto parse_error;
+		goto parse_err1;
 	}
 
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
 	if (!ulp_ctx) {
 		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
-		goto parse_error;
+		goto parse_err1;
 	}
 
 	/* Initialize the parser params */
@@ -116,56 +140,72 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	ULP_COMP_FLD_IDX_WR(&params, BNXT_ULP_CF_IDX_SVIF_FLAG,
 			    BNXT_ULP_INVALID_SVIF_VAL);
 
+	/* Get the function id */
+	if (ulp_port_db_port_func_id_get(ulp_ctx,
+					 dev->data->port_id,
+					 &func_id)) {
+		BNXT_TF_DBG(ERR, "conversion of port to func id failed\n");
+		goto parse_err1;
+	}
+
+	/* Protect flow creation */
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		goto parse_err1;
+	}
+
+	/* Allocate a Flow ID for attaching all resources for the flow to.
+	 * Once allocated, all errors have to walk the list of resources and
+	 * free each of them.
+	 */
+	rc = ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
+				   func_id, &fid);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		goto parse_err2;
+	}
+
 	/* Parse the rte flow pattern */
 	ret = bnxt_ulp_rte_parser_hdr_parse(pattern, &params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	/* Parse the rte flow action */
 	ret = bnxt_ulp_rte_parser_act_parse(actions, &params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	/* Perform the rte flow post process */
 	ret = bnxt_ulp_rte_parser_post_process(&params);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	ret = ulp_matcher_pattern_match(&params, &class_id);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
 	ret = ulp_matcher_action_match(&params, &act_tmpl);
 	if (ret != BNXT_TF_RC_SUCCESS)
-		goto parse_error;
+		goto parse_err3;
 
-	mapper_cparms.app_priority = attr->priority;
-	mapper_cparms.hdr_bitmap = &params.hdr_bitmap;
-	mapper_cparms.hdr_field = params.hdr_field;
-	mapper_cparms.comp_fld = params.comp_fld;
-	mapper_cparms.act = &params.act_bitmap;
-	mapper_cparms.act_prop = &params.act_prop;
-	mapper_cparms.class_tid = class_id;
-	mapper_cparms.act_tid = act_tmpl;
-	mapper_cparms.flow_type = BNXT_ULP_FDB_TYPE_REGULAR;
+	bnxt_ulp_init_mapper_params(&mapper_cparms, &params, attr->priority,
+				    class_id, act_tmpl, func_id, fid,
+				    BNXT_ULP_FDB_TYPE_REGULAR);
+	/* Call the ulp mapper to create the flow in the hardware. */
+	ret = ulp_mapper_flow_create(ulp_ctx, &mapper_cparms);
+	if (ret)
+		goto parse_err3;
 
-	/* Get the function id */
-	if (ulp_port_db_port_func_id_get(ulp_ctx,
-					 dev->data->port_id,
-					 &mapper_cparms.func_id)) {
-		BNXT_TF_DBG(ERR, "conversion of port to func id failed\n");
-		goto parse_error;
-	}
-	mapper_cparms.dir_attr = params.dir_attr;
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 
-	/* Call the ulp mapper to create the flow in the hardware. */
-	ret = ulp_mapper_flow_create(ulp_ctx, &mapper_cparms, &fid);
-	if (!ret) {
-		flow_id = (struct rte_flow *)((uintptr_t)fid);
-		return flow_id;
-	}
+	flow_id = (struct rte_flow *)((uintptr_t)fid);
+	return flow_id;
 
-parse_error:
+parse_err3:
+	ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR, fid);
+parse_err2:
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+parse_err1:
 	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
 			   "Failed to create flow.");
 	return NULL;
@@ -281,6 +321,10 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		return -EINVAL;
+	}
 	ret = ulp_mapper_flow_destroy(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
 				      flow_id);
 	if (ret) {
@@ -290,6 +334,7 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
 				   "Failed to destroy flow.");
 	}
 
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 	return ret;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
index c36d4d4c4..ec504fcf2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -304,8 +304,8 @@ ulp_default_flow_create(struct rte_eth_dev *eth_dev,
 	struct ulp_rte_act_prop act_prop;
 	struct ulp_rte_act_bitmap act = { 0 };
 	struct bnxt_ulp_context *ulp_ctx;
-	uint32_t type, ulp_flags = 0;
-	int rc;
+	uint32_t type, ulp_flags = 0, fid;
+	int rc = 0;
 
 	memset(&mapper_params, 0, sizeof(mapper_params));
 	memset(hdr_field, 0, sizeof(hdr_field));
@@ -316,6 +316,8 @@ ulp_default_flow_create(struct rte_eth_dev *eth_dev,
 	mapper_params.act = &act;
 	mapper_params.act_prop = &act_prop;
 	mapper_params.comp_fld = comp_fld;
+	mapper_params.class_tid = ulp_class_tid;
+	mapper_params.flow_type = BNXT_ULP_FDB_TYPE_DEFAULT;
 
 	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
 	if (!ulp_ctx) {
@@ -350,16 +352,43 @@ ulp_default_flow_create(struct rte_eth_dev *eth_dev,
 		type = param_list->type;
 	}
 
-	mapper_params.class_tid = ulp_class_tid;
-	mapper_params.flow_type = BNXT_ULP_FDB_TYPE_DEFAULT;
+	/* Get the function id */
+	if (ulp_port_db_port_func_id_get(ulp_ctx,
+					 eth_dev->data->port_id,
+					 &mapper_params.func_id)) {
+		BNXT_TF_DBG(ERR, "conversion of port to func id failed\n");
+		goto err1;
+	}
+
+	/* Protect flow creation */
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		goto err1;
+	}
 
-	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params, flow_id);
+	rc = ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT,
+				   mapper_params.func_id, &fid);
 	if (rc) {
-		BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
-		return rc;
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		goto err2;
 	}
 
+	mapper_params.flow_id = fid;
+	rc = ulp_mapper_flow_create(ulp_ctx, &mapper_params);
+	if (rc)
+		goto err3;
+
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+	*flow_id = fid;
 	return 0;
+
+err3:
+	ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT, fid);
+err2:
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+err1:
+	BNXT_TF_DBG(ERR, "Failed to create default flow.\n");
+	return rc;
 }
 
 /*
@@ -391,10 +420,15 @@ ulp_default_flow_destroy(struct rte_eth_dev *eth_dev, uint32_t flow_id)
 		return rc;
 	}
 
+	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
+		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
+		return -EINVAL;
+	}
 	rc = ulp_mapper_flow_destroy(ulp_ctx, BNXT_ULP_FDB_TYPE_DEFAULT,
 				     flow_id);
 	if (rc)
 		BNXT_TF_DBG(ERR, "Failed to destroy flow.\n");
 
+	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 	return rc;
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 27b478099..d5c129b3a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2723,15 +2723,9 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context *ulp_ctx,
 		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
 		return -EINVAL;
 	}
-	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
-		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
-		return -EINVAL;
-	}
 
 	rc = ulp_mapper_resources_free(ulp_ctx, flow_type, fid);
-	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 	return rc;
-
 }
 
 /* Function to handle the default global templates that are allocated during
@@ -2795,8 +2789,7 @@ ulp_mapper_glb_template_table_init(struct bnxt_ulp_context *ulp_ctx)
  */
 int32_t
 ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
-		       struct bnxt_ulp_mapper_create_parms *cparms,
-		       uint32_t *flowid)
+		       struct bnxt_ulp_mapper_create_parms *cparms)
 {
 	struct bnxt_ulp_mapper_parms parms;
 	struct ulp_regfile regfile;
@@ -2821,6 +2814,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 	parms.flow_type = cparms->flow_type;
 	parms.parent_flow = cparms->parent_flow;
 	parms.parent_fid = cparms->parent_fid;
+	parms.fid = cparms->flow_id;
 
 	/* Get the device id from the ulp context */
 	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &parms.dev_id)) {
@@ -2861,26 +2855,7 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 		return -EINVAL;
 	}
 
-	/* Protect flow creation */
-	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx)) {
-		BNXT_TF_DBG(ERR, "Flow db lock acquire failed\n");
-		return -EINVAL;
-	}
-
-	/* Allocate a Flow ID for attaching all resources for the flow to.
-	 * Once allocated, all errors have to walk the list of resources and
-	 * free each of them.
-	 */
-	rc = ulp_flow_db_fid_alloc(ulp_ctx,
-				   parms.flow_type,
-				   cparms->func_id,
-				   &parms.fid);
-	if (rc) {
-		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
-		bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
-		return rc;
-	}
-
+	/* Process the action template list from the selected action table*/
 	if (parms.act_tid) {
 		parms.tmpl_type = BNXT_ULP_TEMPLATE_TYPE_ACTION;
 		/* Process the action template tables */
@@ -2911,13 +2886,9 @@ ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
 		goto flow_error;
 	}
 
-	*flowid = parms.fid;
-	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
-
 	return rc;
 
 flow_error:
-	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
 	/* Free all resources that were allocated during flow creation */
 	trc = ulp_mapper_flow_destroy(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
 				      parms.fid);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 542e41e5a..0595d1555 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -93,6 +93,7 @@ struct bnxt_ulp_mapper_create_parms {
 	uint32_t dir_attr;
 	enum bnxt_ulp_fdb_type flow_type;
+	uint32_t flow_id;
 	/* if set then create it as a child flow with parent as parent_fid */
 	uint32_t parent_fid;
 	/* if set then create a parent flow */
@@ -113,8 +114,7 @@ ulp_mapper_deinit(struct bnxt_ulp_context *ulp_ctx);
  */
 int32_t
 ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
-		       struct bnxt_ulp_mapper_create_parms *parms,
-		       uint32_t *flowid);
+		       struct bnxt_ulp_mapper_create_parms *parms);
 
 /* Function that frees all resources associated with the flow.
  */
 int32_t
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 41f3df998..bb5a8a477 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -11,6 +11,7 @@
 #include
 #include "ulp_template_db_enum.h"
 #include "ulp_template_struct.h"
+#include "ulp_mapper.h"
 
 /* defines to be used in the tunnel header parsing */
 #define BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS	2
@@ -34,6 +35,14 @@
 #define BNXT_ULP_PARSER_IPV6_TC			0x0ff00000
 #define BNXT_ULP_PARSER_IPV6_FLOW_LABEL		0x000fffff
 
+void
+bnxt_ulp_init_mapper_params(struct bnxt_ulp_mapper_create_parms *mapper_cparms,
+			    struct ulp_rte_parser_params *params,
+			    uint32_t priority, uint32_t class_id,
+			    uint32_t act_tmpl, uint16_t func_id,
+			    uint32_t flow_id,
+			    enum bnxt_ulp_fdb_type flow_type);
+
 /* Function to handle the parsing of the RTE port id. */
 int32_t
 ulp_rte_parser_implicit_match_port_process(struct ulp_rte_parser_params *param);
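
For reviewers, the caller-side ordering that this change establishes can be
summarized by the condensed sketch below. It is not part of the patch: the
wrapper name is illustrative, the parsing/matching steps and rte_flow_error
reporting from bnxt_ulp_flow_create() are omitted, and only functions that
appear in this series are called.

/* Sketch: how a caller drives flow creation after this refactor. */
static int32_t
ulp_flow_create_sketch(struct bnxt_ulp_context *ulp_ctx, uint16_t func_id,
		       uint32_t priority, uint32_t class_id, uint32_t act_tmpl,
		       struct ulp_rte_parser_params *params, uint32_t *fid)
{
	struct bnxt_ulp_mapper_create_parms cparms = { 0 };
	int32_t rc;

	/* Serialize against other flow db users. */
	if (bnxt_ulp_cntxt_acquire_fdb_lock(ulp_ctx))
		return -EINVAL;

	/* The caller, not the mapper, now owns the flow id. */
	rc = ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR,
				   func_id, fid);
	if (rc)
		goto unlock;

	/* Pass the pre-allocated id down through the create parms. */
	bnxt_ulp_init_mapper_params(&cparms, params, priority, class_id,
				    act_tmpl, func_id, *fid,
				    BNXT_ULP_FDB_TYPE_REGULAR);

	/* ulp_mapper_flow_create() no longer allocates or returns a fid. */
	rc = ulp_mapper_flow_create(ulp_ctx, &cparms);
	if (rc)
		ulp_flow_db_fid_free(ulp_ctx, BNXT_ULP_FDB_TYPE_REGULAR, *fid);
unlock:
	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
	return rc;
}

This mirrors the new bnxt_ulp_flow_create() and shows why the move matters for
VXLAN decap: a cached F2 flow can be handed a valid flow id even when
ulp_mapper_flow_create() is not invoked to program the hardware.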