From patchwork Tue Mar 17 15:38:27 2020
X-Patchwork-Submitter: Venkat Duvvuru
X-Patchwork-Id: 66813
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Venkat Duvvuru
To: dev@dpdk.org
Cc: Kishore Padmanabha, Venkat Duvvuru
Date: Tue, 17 Mar 2020 21:08:27 +0530
Message-Id: <1584459511-5353-30-git-send-email-venkatkumar.duvvuru@broadcom.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1584459511-5353-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
References: <1584459511-5353-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
Subject: [dpdk-dev] [PATCH 29/33] net/bnxt: add support for rte flow flush driver hook

From: Kishore Padmanabha

This patch does the following; a condensed sketch of the resulting
call sequence follows the list:

1. Gets the ulp session information from eth_dev.
2. Fetches the rte_flow table associated with this session.
3. Iterates through all the flows in the flow table.
4. Calls ulp_mapper_resources_free, which releases the key and action
   tables associated with each flow.
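
The sketch below condenses those four steps using the functions this
patch adds; it is non-authoritative. The name flush_sketch is
hypothetical, error handling from the real bnxt_ulp_flow_flush is
omitted, and ulp_flow_db_next_entry_get is static to ulp_flow_db.c,
so this would not compile outside that file as written:

    static int32_t flush_sketch(struct rte_eth_dev *eth_dev)
    {
        /* 1. Get the ulp session/context from eth_dev. */
        struct bnxt_ulp_context *ulp_ctx =
            bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
        /* 2. Fetch the flow table associated with this session. */
        struct bnxt_ulp_flow_db *flow_db =
            bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx);
        struct bnxt_ulp_flow_tbl *tbl =
            &flow_db->flow_tbl[BNXT_ULP_REGULAR_FLOW_TABLE];
        uint32_t fid = 0;

        /* 3. Iterate all active flows in the table... */
        while (!ulp_flow_db_next_entry_get(tbl, &fid))
            /* 4. ...releasing key/action resources per flow. */
            (void)ulp_mapper_resources_free(ulp_ctx, fid,
                                            BNXT_ULP_REGULAR_FLOW_TABLE);
        return 0;
    }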
Signed-off-by: Kishore Padmanabha
Signed-off-by: Venkat Duvvuru
Reviewed-by: Lance Richardson
Reviewed-by: Ajit Kumar Khaparde
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c      |  3 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 33 +++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c   | 69 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h   | 11 ++++++
 4 files changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 3795c6d..56e08f2 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -517,6 +517,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	if (!session)
 		return;
 
+	/* clean up regular flows */
+	ulp_flow_db_flush_flows(&bp->ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+
 	/* cleanup the eem table scope */
 	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 35099a3..4958895 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -262,11 +262,42 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Function to destroy the rte flows. */
+static int32_t
+bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
+		    struct rte_flow_error *error)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int32_t ret;
+	struct bnxt *bp;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+		return -EINVAL;
+	}
+	bp = eth_dev->data->dev_private;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return 0;
+
+	ret = ulp_flow_db_flush_flows(ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
-	.flush = NULL,
+	.flush = bnxt_ulp_flow_flush,
 	.query = NULL,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 76ec856..68ba6d4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -555,3 +555,72 @@ int32_t ulp_flow_db_fid_free(struct bnxt_ulp_context *ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/** Get the flow database entry iteratively
+ *
+ * flow_tbl [in] Ptr to flow table
+ * fid [in/out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+static int32_t
+ulp_flow_db_next_entry_get(struct bnxt_ulp_flow_tbl *flowtbl,
+			   uint32_t *fid)
+{
+	uint32_t lfid = *fid;
+	uint32_t idx;
+	uint64_t bs;
+
+	do {
+		lfid++;
+		if (lfid >= flowtbl->num_flows)
+			return -ENOENT;
+		idx = lfid / ULP_INDEX_BITMAP_SIZE;
+		while (!(bs = flowtbl->active_flow_tbl[idx])) {
+			idx++;
+			if ((idx * ULP_INDEX_BITMAP_SIZE) >= flowtbl->num_flows)
+				return -ENOENT;
+		}
+		lfid = (idx * ULP_INDEX_BITMAP_SIZE) + __builtin_clzl(bs);
+		if (*fid >= lfid) {
+			BNXT_TF_DBG(ERR, "Flow Database is corrupt\n");
+			return -ENOENT;
+		}
+	} while (!ulp_flow_db_active_flow_is_set(flowtbl, lfid));
+
+	/* all good, return success */
+	*fid = lfid;
+	return 0;
+}
+
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t idx)
+{
+	uint32_t fid = 0;
+	struct bnxt_ulp_flow_db *flow_db;
+	struct bnxt_ulp_flow_tbl *flow_tbl;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid Argument\n");
+		return -EINVAL;
+	}
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Flow database not found\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[idx];
+	while (!ulp_flow_db_next_entry_get(flow_tbl, &fid))
+		(void)ulp_mapper_resources_free(ulp_ctx, fid, idx);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index eb5effa..5435415 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -142,4 +142,15 @@ int32_t ulp_flow_db_fid_free(struct bnxt_ulp_context *ulp_ctxt,
 				enum bnxt_ulp_flow_db_tables tbl_idx,
 				uint32_t fid);
 
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t idx);
+
 #endif /* _ULP_FLOW_DB_H_ */
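
A note on the iterator: the use of __builtin_clzl in
ulp_flow_db_next_entry_get implies the active-flow bitmap stores the
lowest flow id in the most significant bit of each 64-bit word, so
count-leading-zeros yields the smallest active fid in a non-zero
word. The standalone sketch below demonstrates that convention; the
helper names and word layout are assumptions inferred from the patch,
and it uses the ll builtin so the word width does not depend on
sizeof(long):

    #include <stdint.h>
    #include <stdio.h>

    #define WORD_BITS 64

    /* Mark flow `fid` active: fid 0 maps to the MSB of word 0. */
    static void set_active(uint64_t *bm, uint32_t fid)
    {
        bm[fid / WORD_BITS] |=
            1ULL << ((WORD_BITS - 1) - (fid % WORD_BITS));
    }

    /* First active fid in a non-zero word, mirroring the clzl step. */
    static uint32_t first_in_word(uint64_t bs, uint32_t idx)
    {
        return idx * WORD_BITS + (uint32_t)__builtin_clzll(bs);
    }

    int main(void)
    {
        uint64_t bm[2] = { 0, 0 };

        set_active(bm, 3);
        set_active(bm, 70);
        for (uint32_t idx = 0; idx < 2; idx++)
            if (bm[idx])
                printf("%u\n", first_in_word(bm[idx], idx));
        return 0;  /* prints 3, then 70 */
    }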
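
With .flush wired up, an application can drop every offloaded flow on
a bnxt port in one call instead of destroying flows one by one. A
minimal usage sketch, assuming an already configured port_id
(drop_all_flows is a hypothetical helper name):

    #include <stdio.h>
    #include <rte_flow.h>

    static int drop_all_flows(uint16_t port_id)
    {
        struct rte_flow_error err;

        /* Dispatches to bnxt_ulp_flow_flush() through rte_flow_ops. */
        if (rte_flow_flush(port_id, &err) != 0) {
            printf("flow flush failed: %s\n",
                   err.message ? err.message : "unknown");
            return -1;
        }
        return 0;
    }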