From patchwork Sun Dec 20 05:24:29 2020
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 85515
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Kalesh AP
Date: Sat, 19 Dec 2020 21:24:29 -0800
Message-Id: <20201220052430.99990-6-ajit.khaparde@broadcom.com>
In-Reply-To: <20201220052430.99990-1-ajit.khaparde@broadcom.com>
References: <20201220052430.99990-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH 5/6] net/bnxt: modify context memory allocation code

From: Kalesh AP

Newer devices like SR2 may have chip backing store and do not require
host backed memory allocation.
In these cases, HWRM_FUNC_BACKING_STORE_QCAPS will return a zero entry
size to indicate contexts for which the host should not allocate backing
store.

Selectively allocate context memory based on device capabilities and only
enable backing store for the appropriate contexts.

Signed-off-by: Kalesh AP
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt_ethdev.c | 60 ++++++++++++++++++++--------------
 drivers/net/bnxt/bnxt_hwrm.c   |  3 ++
 2 files changed, 39 insertions(+), 24 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 8ca4fb151..e11751cc1 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4212,39 +4212,49 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	ctx_pg = &ctx->qp_mem;
 	ctx_pg->entries = ctx->qp_min_qp1_entries + ctx->qp_max_l2_entries;
-	mem_size = ctx->qp_entry_size * ctx_pg->entries;
-	rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "qp_mem", 0);
-	if (rc)
-		return rc;
+	if (ctx->qp_entry_size) {
+		mem_size = ctx->qp_entry_size * ctx_pg->entries;
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "qp_mem", 0);
+		if (rc)
+			return rc;
+	}
 
 	ctx_pg = &ctx->srq_mem;
 	ctx_pg->entries = ctx->srq_max_l2_entries;
-	mem_size = ctx->srq_entry_size * ctx_pg->entries;
-	rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "srq_mem", 0);
-	if (rc)
-		return rc;
+	if (ctx->srq_entry_size) {
+		mem_size = ctx->srq_entry_size * ctx_pg->entries;
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "srq_mem", 0);
+		if (rc)
+			return rc;
+	}
 
 	ctx_pg = &ctx->cq_mem;
 	ctx_pg->entries = ctx->cq_max_l2_entries;
-	mem_size = ctx->cq_entry_size * ctx_pg->entries;
-	rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "cq_mem", 0);
-	if (rc)
-		return rc;
+	if (ctx->cq_entry_size) {
+		mem_size = ctx->cq_entry_size * ctx_pg->entries;
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "cq_mem", 0);
+		if (rc)
+			return rc;
+	}
 
 	ctx_pg = &ctx->vnic_mem;
 	ctx_pg->entries = ctx->vnic_max_vnic_entries +
 		ctx->vnic_max_ring_table_entries;
-	mem_size = ctx->vnic_entry_size * ctx_pg->entries;
-	rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "vnic_mem", 0);
-	if (rc)
-		return rc;
+	if (ctx->vnic_entry_size) {
+		mem_size = ctx->vnic_entry_size * ctx_pg->entries;
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "vnic_mem", 0);
+		if (rc)
+			return rc;
+	}
 
 	ctx_pg = &ctx->stat_mem;
 	ctx_pg->entries = ctx->stat_max_entries;
-	mem_size = ctx->stat_entry_size * ctx_pg->entries;
-	rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "stat_mem", 0);
-	if (rc)
-		return rc;
+	if (ctx->stat_entry_size) {
+		mem_size = ctx->stat_entry_size * ctx_pg->entries;
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "stat_mem", 0);
+		if (rc)
+			return rc;
+	}
 
 	min = ctx->tqm_min_entries_per_ring;
@@ -4260,10 +4270,12 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	for (i = 0, ena = 0; i < ctx->tqm_fp_rings_count + 1; i++) {
 		ctx_pg = ctx->tqm_mem[i];
 		ctx_pg->entries = i ? entries : entries_sp;
-		mem_size = ctx->tqm_entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "tqm_mem", i);
-		if (rc)
-			return rc;
+		if (ctx->tqm_entry_size) {
+			mem_size = ctx->tqm_entry_size * ctx_pg->entries;
+			rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "tqm_mem", i);
+			if (rc)
+				return rc;
+		}
 		ena |= HWRM_FUNC_BACKING_STORE_CFG_INPUT_ENABLES_TQM_SP << i;
 	}
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 56e2e33a9..6d54b1656 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -64,6 +64,9 @@ static void bnxt_hwrm_set_pg_attr(struct bnxt_ring_mem_info *rmem,
 				  uint8_t *pg_attr,
 				  uint64_t *pg_dir)
 {
+	if (rmem->nr_pages == 0)
+		return;
+
 	if (rmem->nr_pages > 1) {
 		*pg_attr = 1;
 		*pg_dir = rte_cpu_to_le_64(rmem->pg_tbl_map);
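
The diff above applies one pattern throughout: the firmware capability query
(HWRM_FUNC_BACKING_STORE_QCAPS) reports an entry size per context type, a
size of zero means the chip backs that context itself, and the host then
skips both the host memory allocation and the later backing-store setup for
that context. The standalone C sketch below illustrates only that guard
pattern outside the driver; every name in it (ctx_caps, alloc_backing_store,
the sample entry counts and sizes) is hypothetical and is not part of the
bnxt code or the HWRM API.

/*
 * Minimal sketch of the "zero entry size means chip-backed" guard.
 * Not bnxt driver code; all identifiers here are illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct ctx_caps {
	const char *name;     /* context type, e.g. "qp_mem"               */
	uint32_t entries;     /* number of entries the context needs       */
	uint32_t entry_size;  /* 0 => device has on-chip backing store     */
};

/* Stand-in for a per-context allocator such as bnxt_alloc_ctx_mem_blk(). */
static int alloc_backing_store(const struct ctx_caps *c, void **mem)
{
	size_t mem_size = (size_t)c->entry_size * c->entries;

	*mem = calloc(1, mem_size);
	if (*mem == NULL)
		return -1;
	printf("%s: allocated %zu bytes of host backing store\n",
	       c->name, mem_size);
	return 0;
}

int main(void)
{
	struct ctx_caps caps[] = {
		{ "qp_mem",   1024, 256 }, /* host-backed context          */
		{ "stat_mem",  128,   0 }, /* chip-backed: entry size is 0 */
	};
	void *mem[2] = { NULL, NULL };

	for (size_t i = 0; i < 2; i++) {
		/* The guard this patch adds before each allocation: only
		 * allocate (and later enable backing store) when the
		 * firmware reported a non-zero entry size. */
		if (caps[i].entry_size == 0) {
			printf("%s: chip-backed, skipping host allocation\n",
			       caps[i].name);
			continue;
		}
		if (alloc_backing_store(&caps[i], &mem[i]) != 0)
			return EXIT_FAILURE;
	}

	free(mem[0]);
	free(mem[1]);
	return EXIT_SUCCESS;
}

The early return added to bnxt_hwrm_set_pg_attr() in bnxt_hwrm.c is the same
idea on the configuration side: when no host pages were allocated for a
context (nr_pages == 0), there are no page attributes or directory address
to report to the firmware, so the function simply leaves them untouched.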