From patchwork Sun Jan 14 10:03:42 2018
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 33702
X-Patchwork-Delegate: helin.zhang@intel.com
From: xiangxia.m.yue@gmail.com
To: wei.dai@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Tonghao Zhang
Date: Sun, 14 Jan 2018 02:03:42 -0800
Message-Id: <1515924222-19044-6-git-send-email-xiangxia.m.yue@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1515924222-19044-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1515924222-19044-1-git-send-email-xiangxia.m.yue@gmail.com>
Subject: [dpdk-dev] [PATCH v3 6/6] net/ixgbe: remove the unnecessary call to rte_intr_enable

From: Tonghao Zhang

When the ixgbe PF or VF is bound to vfio and rte_eth_dev_rx_intr_enable is
called frequently, the interrupt setup (msi_set_mask_bit) consumes a
noticeable amount of CPU, as shown below, because rte_intr_enable issues the
ioctl that maps the fd to the interrupt on every call.

perf top:
  5.45%  [kernel]  [k] msi_set_mask_bit

It is unnecessary to call rte_intr_enable in ixgbe_dev_rx_queue_intr_enable,
because the fds have already been mapped to interrupts and are not unmapped
in ixgbe_dev_rx_queue_intr_disable. With this patch, msi_set_mask_bit no
longer shows up in the perf output.

Signed-off-by: Tonghao Zhang
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a5b09ac..a02b38e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -4282,7 +4282,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
  */
 static int
 ixgbe_dev_interrupt_action(struct rte_eth_dev *dev,
-			   struct rte_intr_handle *intr_handle)
+			   __rte_unused struct rte_intr_handle *intr_handle)
 {
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
@@ -4333,7 +4333,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "enable intr immediately");
 	ixgbe_enable_intr(dev);
-	rte_intr_enable(intr_handle);
 
 	return 0;
 }
@@ -4356,8 +4355,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 ixgbe_dev_interrupt_delayed_handler(void *param)
 {
 	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
 	struct ixgbe_hw *hw =
@@ -4395,7 +4392,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
 	ixgbe_enable_intr(dev);
-	rte_intr_enable(intr_handle);
 }
 
 /**
@@ -5636,8 +5632,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 	RTE_SET_USED(queue_id);
 	IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, intr->mask);
-	rte_intr_enable(intr_handle);
-
 	return 0;
 }
@@ -5664,8 +5658,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 static int
 ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint32_t mask;
 	struct ixgbe_hw *hw =
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5685,7 +5677,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 		mask &= (1 << (queue_id - 32));
 		IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(1), mask);
 	}
-	rte_intr_enable(intr_handle);
 
 	return 0;
 }
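
For context (not part of the patch): the sketch below, modeled loosely on the
l3fwd-power example, shows how an application ends up calling
rte_eth_dev_rx_intr_enable() once per wake-up, which is why the extra
rte_intr_enable()/VFIO ioctl inside the PMD callback is so visible in perf.
The DPDK calls used (rte_eth_dev_rx_intr_ctl_q, rte_epoll_wait,
rte_eth_dev_rx_intr_enable/disable, rte_eth_rx_burst) are existing APIs; the
loop itself, the name rx_wait_loop, and the BURST_SIZE value are illustrative
assumptions only, with error checks omitted for brevity.

#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_interrupts.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32	/* illustrative burst size */

/* Hypothetical per-queue loop in the l3fwd-power style: the Rx interrupt is
 * re-armed before every sleep, so the PMD's rx_queue_intr_enable callback
 * runs on the hot path. */
static void
rx_wait_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event ev;
	struct rte_mbuf *pkts[BURST_SIZE];

	/* Register the queue's interrupt fd with the per-thread epoll set
	 * once; the fd stays mapped for the lifetime of the loop. */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	for (;;) {
		uint16_t i, nb_rx;

		/* Re-arm the queue interrupt and sleep until traffic
		 * arrives; this is the frequent call the patch makes
		 * cheaper by not re-entering rte_intr_enable(). */
		rte_eth_dev_rx_intr_enable(port_id, queue_id);
		rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);
		rte_eth_dev_rx_intr_disable(port_id, queue_id);

		/* Drain the queue in polling mode before sleeping again. */
		do {
			nb_rx = rte_eth_rx_burst(port_id, queue_id,
						 pkts, BURST_SIZE);
			for (i = 0; i < nb_rx; i++)
				rte_pktmbuf_free(pkts[i]); /* placeholder work */
		} while (nb_rx > 0);
	}
}

The fd-to-interrupt mapping set up by rte_eth_dev_rx_intr_ctl_q() is done
once, so repeating it implicitly on every enable, as the driver did before
this patch, is wasted work on the wake-up path.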