From patchwork Mon Feb 5 00:51:35 2018
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 34933
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xiangxia.m.yue@gmail.com
To: beilei.xing@intel.com, wei.dai@intel.com, helin.zhang@intel.com, wenzhuo.lu@intel.com
Cc: dev@dpdk.org, Tonghao Zhang
Date: Sun, 4 Feb 2018 16:51:35 -0800
Message-Id: <1517791895-3061-6-git-send-email-xiangxia.m.yue@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1517791895-3061-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1517791895-3061-1-git-send-email-xiangxia.m.yue@gmail.com>
Subject: [dpdk-dev] [PATCH v3 5/5] net/ixgbe: remove the unnecessary call to rte_intr_enable
List-Id: DPDK patches and discussions

From: Tonghao Zhang

When the ixgbe PF or VF is bound to vfio and rte_eth_dev_rx_intr_enable is called frequently, the interrupt masking (msi_set_mask_bit) consumes noticeable CPU, as shown below, because rte_intr_enable issues the ioctl that maps the fd to the interrupt on every call.

perf top:
    5.45% [kernel] [k] msi_set_mask_bit

The rte_intr_enable call in ixgbe_dev_rx_queue_intr_enable is unnecessary: the fds have already been mapped to the interrupt and are not unmapped in ixgbe_dev_rx_queue_intr_disable. With this patch, msi_set_mask_bit no longer appears in the perf output.
Signed-off-by: Tonghao Zhang
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 7928144..ef01e0c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -4253,7 +4253,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
  */
 static int
 ixgbe_dev_interrupt_action(struct rte_eth_dev *dev,
-			   struct rte_intr_handle *intr_handle)
+			   __rte_unused struct rte_intr_handle *intr_handle)
 {
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
@@ -4304,7 +4304,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 	PMD_DRV_LOG(DEBUG, "enable intr immediately");
 	ixgbe_enable_intr(dev);
-	rte_intr_enable(intr_handle);
 
 	return 0;
 }
@@ -4327,8 +4326,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 ixgbe_dev_interrupt_delayed_handler(void *param)
 {
 	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
 	struct ixgbe_hw *hw =
@@ -4366,7 +4363,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 	PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
 	ixgbe_enable_intr(dev);
-	rte_intr_enable(intr_handle);
 }
 
 /**
@@ -5607,8 +5603,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 	RTE_SET_USED(queue_id);
 
 	IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, intr->mask);
-	rte_intr_enable(intr_handle);
-
 	return 0;
 }
 
@@ -5635,8 +5629,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 static int
 ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint32_t mask;
 	struct ixgbe_hw *hw =
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5656,7 +5648,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 		mask &= (1 << (queue_id - 32));
 		IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(1), mask);
 	}
-	rte_intr_enable(intr_handle);
 
 	return 0;
 }