From patchwork Sat Jun 27 10:00:39 2020
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 72332
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Shuanglin Wang, Somnath Kotur, Kishore Padmanabha
Date: Sat, 27 Jun 2020 03:00:39 -0700
Message-Id: <20200627100050.19688-15-ajit.khaparde@broadcom.com>
X-Mailer: git-send-email 2.21.1 (Apple Git-122.3)
In-Reply-To: <20200627100050.19688-1-ajit.khaparde@broadcom.com>
References: <20200612125024.15989-1-somnath.kotur@broadcom.com>
 <20200627100050.19688-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v4 14/25] net/bnxt: add a devarg to set max flow count

From: Shuanglin Wang

The user can set the maximum flow count by passing the devarg
"-w 0000:0d:00.0,max_num_kflows=64" to a DPDK application. The value
is in units of 1024 flows: it must be a power of 2 and no less than
32 (i.e. 32K flows); the default is 32K.
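As a usage illustration (the application binary and PCI address below
are examples, not mandated by this patch), the devarg goes in the EAL
portion of the command line:

    testpmd -w 0000:0d:00.0,max_num_kflows=64 -- -i

Here max_num_kflows=64 asks the driver to size the flow table for
64 * 1024 = 65536 flows; a value such as 48 would be rejected because
it is not a power of 2.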
Signed-off-by: Shuanglin Wang
Signed-off-by: Somnath Kotur
Reviewed-by: Kishore Padmanabha
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt.h            |  1 +
 drivers/net/bnxt/bnxt_ethdev.c     | 62 +++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 35 +++++++++++++++++
 3 files changed, 96 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2c3aef6b2..79e9b288d 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -725,6 +725,7 @@ struct bnxt {
 	struct bnxt_ulp_context	*ulp_ctx;
 	struct bnxt_flow_stat_info *flow_stat;
 	uint8_t			flow_xstat;
+	uint16_t		max_num_kflows;
 };

 #define BNXT_FC_TIMER	1 /* Timer freq in Sec Flow Counters */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index e8b4c058a..7022f6d52 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -129,9 +129,11 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {

 #define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
 #define BNXT_DEVARG_FLOW_XSTAT	"flow-xstat"
+#define BNXT_DEVARG_MAX_NUM_KFLOWS	"max-num-kflows"

 static const char *const bnxt_dev_args[] = {
 	BNXT_DEVARG_TRUFLOW,
 	BNXT_DEVARG_FLOW_XSTAT,
+	BNXT_DEVARG_MAX_NUM_KFLOWS,
 	NULL
 };
@@ -147,6 +149,19 @@ static const char *const bnxt_dev_args[] = {
  */
 #define BNXT_DEVARG_FLOW_XSTAT_INVALID(flow_xstat)	((flow_xstat) > 1)

+/*
+ * max_num_kflows must be >= 32
+ * and must be a power-of-2 supported value
+ * return: 1 -> invalid
+ *	   0 -> valid
+ */
+static int bnxt_devarg_max_num_kflow_invalid(uint16_t max_num_kflows)
+{
+	if (max_num_kflows < 32 || !rte_is_power_of_2(max_num_kflows))
+		return 1;
+	return 0;
+}
+
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
 static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -5390,6 +5405,42 @@ bnxt_parse_devarg_flow_xstat(__rte_unused const char *key,
 	return 0;
 }

+static int
+bnxt_parse_devarg_max_num_kflows(__rte_unused const char *key,
+				 const char *value, void *opaque_arg)
+{
+	struct bnxt *bp = opaque_arg;
+	unsigned long max_num_kflows;
+	char *end = NULL;
+
+	if (!value || !opaque_arg) {
+		PMD_DRV_LOG(ERR,
+			"Invalid parameter passed to max_num_kflows devarg.\n");
+		return -EINVAL;
+	}
+
+	max_num_kflows = strtoul(value, &end, 10);
+	if (end == NULL || *end != '\0' ||
+		(max_num_kflows == ULONG_MAX && errno == ERANGE)) {
+		PMD_DRV_LOG(ERR,
+			"Invalid parameter passed to max_num_kflows devarg.\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_devarg_max_num_kflow_invalid(max_num_kflows)) {
+		PMD_DRV_LOG(ERR,
+			"Invalid value passed to max_num_kflows devarg.\n");
+		return -EINVAL;
+	}
+
+	bp->max_num_kflows = max_num_kflows;
+	if (bp->max_num_kflows)
+		PMD_DRV_LOG(INFO, "max_num_kflows set as %ldK.\n",
+			max_num_kflows);
+
+	return 0;
+}
+
 static void
 bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
 {
@@ -5404,18 +5455,25 @@ bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)

 	/*
 	 * Handler for "truflow" devarg.
-	 * Invoked as for ex: "-w 0000:00:0d.0,host-based-truflow=1”
+	 * Invoked as for ex: "-w 0000:00:0d.0,host-based-truflow=1"
 	 */
 	rte_kvargs_process(kvlist, BNXT_DEVARG_TRUFLOW,
 			   bnxt_parse_devarg_truflow, bp);

 	/*
 	 * Handler for "flow_xstat" devarg.
-	 * Invoked as for ex: "-w 0000:00:0d.0,flow_xstat=1”
+	 * Invoked as for ex: "-w 0000:00:0d.0,flow_xstat=1"
 	 */
 	rte_kvargs_process(kvlist, BNXT_DEVARG_FLOW_XSTAT,
 			   bnxt_parse_devarg_flow_xstat, bp);

+	/*
+	 * Handler for "max_num_kflows" devarg.
+	 * Invoked as for ex: "-w 000:00:0d.0,max_num_kflows=32"
+	 */
+	rte_kvargs_process(kvlist, BNXT_DEVARG_MAX_NUM_KFLOWS,
+			   bnxt_parse_devarg_max_num_kflows, bp);
+
 	rte_kvargs_free(kvlist);
 }
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 872c1aba4..00e21fa22 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -276,6 +276,38 @@ ulp_ctx_init(struct bnxt *bp,
 	return rc;
 }

+/* The function to initialize ulp dparms with devargs */
+static int32_t
+ulp_dparms_init(struct bnxt *bp,
+		struct bnxt_ulp_context *ulp_ctx)
+{
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
+
+	if (!bp->max_num_kflows)
+		return -EINVAL;
+
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return -EINVAL;
+	}
+
+	/* num_flows = max_num_kflows * 1024 */
+	dparms->num_flows = bp->max_num_kflows * 1024;
+	/* GFID = 2 * num_flows */
+	dparms->gfid_entries = dparms->num_flows * 2;
+	BNXT_TF_DBG(DEBUG, "Set the number of flows = %"PRIu64"\n",
+		    dparms->num_flows);
+
+	return 0;
+}
+
 static int32_t
 ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
 	       struct bnxt_ulp_session_state *session)
@@ -497,6 +529,9 @@ bnxt_ulp_init(struct bnxt *bp)
 		goto jump_to_error;
 	}

+	/* Initialize ulp dparms with values devargs passed */
+	rc = ulp_dparms_init(bp, bp->ulp_ctx);
+
 	/* create the port database */
 	rc = ulp_port_db_init(bp->ulp_ctx);
 	if (rc) {
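As a quick sanity check of the arithmetic in ulp_dparms_init() above,
here is a minimal standalone C sketch. The struct and names are
illustrative stand-ins, not the driver's actual bnxt_ulp_device_params
type; only the two multiplications mirror the patch:

    #include <stdint.h>
    #include <stdio.h>
    #include <inttypes.h>

    /* Illustrative stand-in for the two fields the patch updates. */
    struct example_dparms {
    	uint64_t num_flows;
    	uint64_t gfid_entries;
    };

    int main(void)
    {
    	struct example_dparms dparms;
    	uint16_t max_num_kflows = 64;	/* e.g. devarg max_num_kflows=64 */

    	/* num_flows = max_num_kflows * 1024 */
    	dparms.num_flows = (uint64_t)max_num_kflows * 1024;
    	/* GFID entries = 2 * num_flows */
    	dparms.gfid_entries = dparms.num_flows * 2;

    	printf("num_flows=%" PRIu64 ", gfid_entries=%" PRIu64 "\n",
    	       dparms.num_flows, dparms.gfid_entries);
    	/* Prints: num_flows=65536, gfid_entries=131072 */
    	return 0;
    }

So the smallest accepted setting, max_num_kflows=32, yields 32768 flows
and 65536 GFID entries.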