From patchwork Wed Sep  9 15:52:59 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lance Richardson
X-Patchwork-Id: 77070
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Lance Richardson
To: Ajit Khaparde, Somnath Kotur
Cc: dev@dpdk.org
Date: Wed, 9 Sep 2020 11:52:59 -0400
Message-Id: <20200909155302.28656-7-lance.richardson@broadcom.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200909155302.28656-1-lance.richardson@broadcom.com>
References: <20200909155302.28656-1-lance.richardson@broadcom.com>
Subject: [dpdk-dev] [PATCH 06/12] net/bnxt: use smaller cq when agg ring not needed

Don't allocate extra completion queue entries for the aggregation ring
when the aggregation ring will not be used.
Reviewed-by: Ajit Kumar Khaparde
Signed-off-by: Lance Richardson
---
 drivers/net/bnxt/bnxt_ethdev.c | 11 +++++------
 drivers/net/bnxt/bnxt_rxr.c    | 21 +++++++++++++++++++--
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1ad9bfc0a6..27eba431b8 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1295,6 +1295,8 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
 	eth_dev->data->dev_started = 0;
+	eth_dev->data->scattered_rx = 0;
+
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
 	eth_dev->tx_pkt_burst = &bnxt_dummy_xmit_pkts;
@@ -2695,14 +2697,12 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 	new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN +
 		       VLAN_TAG_SIZE * BNXT_NUM_VLANS;
 
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 	/*
-	 * If vector-mode tx/rx is active, disallow any MTU change that would
-	 * require scattered receive support.
+	 * Disallow any MTU change that would require scattered receive support
+	 * if it is not already enabled.
 	 */
 	if (eth_dev->data->dev_started &&
-	    (eth_dev->rx_pkt_burst == bnxt_recv_pkts_vec ||
-	     eth_dev->tx_pkt_burst == bnxt_xmit_pkts_vec) &&
+	    !eth_dev->data->scattered_rx &&
 	    (new_pkt_size >
 	     eth_dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
 		PMD_DRV_LOG(ERR,
@@ -2710,7 +2710,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 		PMD_DRV_LOG(ERR, "Stop port before changing MTU.\n");
 		return -EINVAL;
 	}
-#endif
 
 	if (new_mtu > RTE_ETHER_MTU) {
 		bp->flags |= BNXT_FLAG_JUMBO;
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 92102e3d57..5673e2b50f 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -938,9 +938,12 @@ void bnxt_free_rx_rings(struct bnxt *bp)
 int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 {
+	struct rte_eth_dev *eth_dev = rxq->bp->eth_dev;
+	struct rte_eth_rxmode *rxmode;
 	struct bnxt_cp_ring_info *cpr;
 	struct bnxt_rx_ring_info *rxr;
 	struct bnxt_ring *ring;
+	bool use_agg_ring;
 
 	rxq->rx_buf_size = BNXT_MAX_PKT_LEN + sizeof(struct rte_mbuf);
 
@@ -978,8 +981,22 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 	if (ring == NULL)
 		return -ENOMEM;
 	cpr->cp_ring_struct = ring;
-	ring->ring_size = rte_align32pow2(rxr->rx_ring_struct->ring_size *
-					  (2 + AGG_RING_SIZE_FACTOR));
+
+	rxmode = &eth_dev->data->dev_conf.rxmode;
+	use_agg_ring = (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
+		       (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) ||
+		       (rxmode->max_rx_pkt_len >
+			(uint32_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
+				   RTE_PKTMBUF_HEADROOM));
+
+	/* Allocate two completion slots per entry in desc ring. */
+	ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
+
+	/* Allocate additional slots if aggregation ring is in use. */
+	if (use_agg_ring)
+		ring->ring_size *= AGG_RING_SIZE_FACTOR;
+
+	ring->ring_size = rte_align32pow2(ring->ring_size);
 	ring->ring_mask = ring->ring_size - 1;
 	ring->bd = (void *)cpr->cp_desc_ring;
 	ring->bd_dma = cpr->cp_desc_mapping;