From patchwork Wed Apr 12 16:26:35 2023
X-Patchwork-Submitter: Ronak Doshi
X-Patchwork-Id: 126163
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ronak Doshi
To: Jochen Behrens
Cc: dev@dpdk.org, Ronak Doshi
Subject: [PATCH next 6/7] vmxnet3: avoid updating rxprod register frequently
Date: Wed, 12 Apr 2023 09:26:35 -0700
Message-ID: <20230412162636.30843-7-doshir@vmware.com>
In-Reply-To: <20230412162636.30843-1-doshir@vmware.com>
References: <20230412162636.30843-1-doshir@vmware.com>
List-Id: DPDK patches and discussions

When UPT is enabled, the driver updates the rxprod register to let the
device know that it has processed the received packets and that new
buffers are available. However, updating it too frequently can reduce
performance. This patch limits how often the register is written: on the
receive path it is updated only once every 16 descriptors, and on the
refill path only for rings where new buffers were actually posted.

Signed-off-by: Ronak Doshi
Acked-by: Jochen Behrens
---
 drivers/net/vmxnet3/vmxnet3_rxtx.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 7bbae4177e..39ad0726cb 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1007,7 +1007,8 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			/* It's time to renew descriptors */
 			vmxnet3_renew_desc(rxq, ring_idx, newm);
-			if (unlikely(rxq->shared->ctrl.updateRxProd)) {
+			if (unlikely(rxq->shared->ctrl.updateRxProd &&
+				     (rxq->cmd_ring[ring_idx].next2fill & 0xf) == 0)) {
 				VMXNET3_WRITE_BAR0_REG(hw, hw->rx_prod_offset[ring_idx] +
 						       (rxq->queue_id * VMXNET3_REG_ALIGN),
 						       rxq->cmd_ring[ring_idx].next2fill);
 			}
@@ -1027,18 +1028,21 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (unlikely(nb_rxd == 0)) {
 		uint32_t avail;
+		uint32_t posted = 0;
 		for (ring_idx = 0; ring_idx < VMXNET3_RX_CMDRING_SIZE; ring_idx++) {
 			avail = vmxnet3_cmd_ring_desc_avail(&rxq->cmd_ring[ring_idx]);
 			if (unlikely(avail > 0)) {
 				/* try to alloc new buf and renew descriptors */
-				vmxnet3_post_rx_bufs(rxq, ring_idx);
+				if (vmxnet3_post_rx_bufs(rxq, ring_idx) > 0)
+					posted |= (1 << ring_idx);
 			}
 		}
 		if (unlikely(rxq->shared->ctrl.updateRxProd)) {
 			for (ring_idx = 0; ring_idx < VMXNET3_RX_CMDRING_SIZE; ring_idx++) {
-				VMXNET3_WRITE_BAR0_REG(hw, hw->rx_prod_offset[ring_idx] +
-						       (rxq->queue_id * VMXNET3_REG_ALIGN),
-						       rxq->cmd_ring[ring_idx].next2fill);
+				if (posted & (1 << ring_idx))
+					VMXNET3_WRITE_BAR0_REG(hw, hw->rx_prod_offset[ring_idx] +
+							       (rxq->queue_id * VMXNET3_REG_ALIGN),
+							       rxq->cmd_ring[ring_idx].next2fill);
 			}
 		}
 	}