From patchwork Wed Jun 16 17:55:22 2021
X-Patchwork-Submitter: Lance Richardson
X-Patchwork-Id: 94308
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Lance Richardson
To: Ajit Khaparde, Somnath Kotur
Cc: dev@dpdk.org, stable@dpdk.org
Date: Wed, 16 Jun 2021 13:55:22 -0400
Message-Id: <20210616175523.930678-4-lance.richardson@broadcom.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210616175523.930678-1-lance.richardson@broadcom.com>
References: <20210616175523.930678-1-lance.richardson@broadcom.com>
Subject: [dpdk-dev] [PATCH 3/4] net/bnxt: fix scalar Tx completion handling

Preserve the raw (unmasked) transmit completion ring consumer index.
Remove cache prefetches that have no measurable performance benefit.
Fixes: c7de4195cc4c ("net/bnxt: modify ring index logic")
Cc: stable@dpdk.org

Signed-off-by: Lance Richardson
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt_txr.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 27459960d..54eaab34a 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -444,30 +444,26 @@ static void bnxt_tx_cmp(struct bnxt_tx_queue *txq, int nr_pkts)
 
 static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 {
+	uint32_t nb_tx_pkts = 0, cons, ring_mask, opaque;
 	struct bnxt_cp_ring_info *cpr = txq->cp_ring;
 	uint32_t raw_cons = cpr->cp_raw_cons;
-	uint32_t cons;
-	uint32_t nb_tx_pkts = 0;
+	struct bnxt_ring *cp_ring_struct;
 	struct tx_cmpl *txcmp;
-	struct cmpl_base *cp_desc_ring = cpr->cp_desc_ring;
-	struct bnxt_ring *cp_ring_struct = cpr->cp_ring_struct;
-	uint32_t ring_mask = cp_ring_struct->ring_mask;
-	uint32_t opaque = 0;
 
 	if (bnxt_tx_bds_in_hw(txq) < txq->tx_free_thresh)
 		return 0;
 
+	cp_ring_struct = cpr->cp_ring_struct;
+	ring_mask = cp_ring_struct->ring_mask;
+
 	do {
 		cons = RING_CMPL(ring_mask, raw_cons);
 		txcmp = (struct tx_cmpl *)&cpr->cp_desc_ring[cons];
-		rte_prefetch_non_temporal(&cp_desc_ring[(cons + 2) &
-					  ring_mask]);
 
-		if (!CMPL_VALID(txcmp, cpr->valid))
+		if (!CMP_VALID(txcmp, raw_cons, cp_ring_struct))
 			break;
-		opaque = rte_cpu_to_le_32(txcmp->opaque);
-		NEXT_CMPL(cpr, cons, cpr->valid, 1);
-		rte_prefetch0(&cp_desc_ring[cons]);
+
+		opaque = rte_le_to_cpu_32(txcmp->opaque);
 
 		if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2)
 			nb_tx_pkts += opaque;
@@ -475,9 +471,11 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 			RTE_LOG_DP(ERR, PMD,
 					"Unhandled CMP type %02x\n",
 					CMP_TYPE(txcmp));
-		raw_cons = cons;
+		raw_cons = NEXT_RAW_CMP(raw_cons);
 	} while (nb_tx_pkts < ring_mask);
 
+	cpr->valid = !!(raw_cons & cp_ring_struct->ring_size);
+
 	if (nb_tx_pkts) {
 		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_fast(txq, nb_tx_pkts);
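
[Illustrative note, not part of the patch] The ring-index scheme the patch moves to can be sketched outside the driver. The toy program below is not bnxt code: every name in it (toy_cmpl, expected_phase, produce, consume, RING_SIZE) is invented for illustration, and it only mirrors the general technique that the raw_cons-based CMP_VALID() check and the final cpr->valid update rely on: the consumer index stays raw (unmasked), the slot is found by masking, and an entry is accepted only if its phase bit matches the phase implied by the ring-size bit of the raw index.

/*
 * Standalone sketch (assumptions, not driver code): a power-of-two
 * completion ring whose producer stamps each entry with a phase bit
 * that flips on every wrap. The consumer keeps a raw index, masks it
 * only to locate the slot, and derives the expected phase from the
 * ring-size bit, so no per-entry toggle state is needed in the loop.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8u                    /* must be a power of two */
#define RING_MASK (RING_SIZE - 1u)

struct toy_cmpl {
	uint32_t opaque;                /* payload, e.g. packets completed */
	bool valid;                     /* phase bit written by the producer */
};

/* Expected phase for a raw index: flips every time the ring wraps. */
static bool expected_phase(uint32_t raw)
{
	return !(raw & RING_SIZE);
}

/* Producer side ("hardware"): stamp each entry with the current phase. */
static uint32_t produce(struct toy_cmpl *ring, uint32_t raw_prod, uint32_t n)
{
	while (n--) {
		struct toy_cmpl *c = &ring[raw_prod & RING_MASK];

		c->opaque = raw_prod;
		c->valid = expected_phase(raw_prod);
		raw_prod++;
	}
	return raw_prod;
}

/* Consumer side: mask only for the slot lookup, keep raw_cons unmasked. */
static uint32_t consume(struct toy_cmpl *ring, uint32_t raw_cons)
{
	for (;;) {
		const struct toy_cmpl *c = &ring[raw_cons & RING_MASK];

		if (c->valid != expected_phase(raw_cons))
			break;          /* stale entry from the previous pass */
		printf("completion %u\n", c->opaque);
		raw_cons++;             /* analogous to NEXT_RAW_CMP() */
	}
	return raw_cons;
}

int main(void)
{
	struct toy_cmpl ring[RING_SIZE] = {0};
	uint32_t raw_prod = 0, raw_cons = 0;

	raw_prod = produce(ring, raw_prod, 5);
	raw_cons = consume(ring, raw_cons);     /* drains 5 entries */

	raw_prod = produce(ring, raw_prod, 5);  /* wraps past index 8 */
	raw_cons = consume(ring, raw_cons);     /* drains 5 more, then stops */

	printf("raw_cons=%u, current phase bit=%d\n",
	       raw_cons, (int)expected_phase(raw_cons));
	return 0;
}

Because validity falls out of the raw index itself, the scalar loop no longer needs the old NEXT_CMPL()-style per-entry toggling or the prefetches around it; advancing is a plain increment of the raw consumer index, matching the change above.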