From patchwork Thu Mar 22 13:01:58 2018
X-Patchwork-Submitter: Tonghao Zhang
X-Patchwork-Id: 36433
X-Patchwork-Delegate: helin.zhang@intel.com
From: xiangxia.m.yue@gmail.com
To: wenzhuo.lu@intel.com, konstantin.ananyev@intel.com, beilei.xing@intel.com,
 wei.dai@intel.com
Cc: dev@dpdk.org, Tonghao Zhang
Date: Thu, 22 Mar 2018 06:01:58 -0700
Message-Id: <1521723718-93761-6-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1521723718-93761-1-git-send-email-xiangxia.m.yue@gmail.com>
References: <1521723718-93761-1-git-send-email-xiangxia.m.yue@gmail.com>
Subject: [dpdk-dev] [PATCH v3 5/5] net/ixgbe: remove the unnecessary call
 rte_intr_enable.

From: Tonghao Zhang

When the ixgbe PF or VF is bound to vfio and rte_eth_dev_rx_intr_enable
is called frequently, the interrupt setting (msi_set_mask_bit) takes
more CPU, as shown in the perf profile below: rte_intr_enable issues
the ioctl that maps the fd to the interrupt on every call.

perf top:
  5.45%  [kernel]  [k] msi_set_mask_bit
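For context, the call pattern that hits this path is the interrupt-driven
Rx loop used by applications such as l3fwd-power, which re-arms the queue
interrupt on every idle iteration. A minimal sketch of such a loop follows;
the port/queue ids, burst size and epoll setup are illustrative assumptions,
not part of this patch.

#include <rte_ethdev.h>
#include <rte_interrupts.h>
#include <rte_mbuf.h>

/* Illustrative sketch of an interrupt-driven Rx loop (l3fwd-power style).
 * Each idle iteration calls rte_eth_dev_rx_intr_enable(); before this
 * patch, the ixgbe handler then also called rte_intr_enable(), i.e. the
 * VFIO ioctl that re-maps the eventfd, which is why msi_set_mask_bit
 * shows up in the profile above. Port/queue ids are example values.
 */
static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	struct rte_epoll_event event;
	uint16_t i, nb;

	/* Register the Rx queue interrupt with the per-thread epoll fd. */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	for (;;) {
		nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
		if (nb == 0) {
			/* Arm the interrupt and sleep until traffic arrives. */
			rte_eth_dev_rx_intr_enable(port_id, queue_id);
			rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, -1);
			rte_eth_dev_rx_intr_disable(port_id, queue_id);
			continue;
		}
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(pkts[i]); /* stand-in for real work */
	}
}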
It is unnecessary to call rte_intr_enable in
ixgbe_dev_rx_queue_intr_enable, because the fds have already been mapped
to the interrupts and are not unmapped in
ixgbe_dev_rx_queue_intr_disable. With this patch, msi_set_mask_bit no
longer shows up in perf.

Signed-off-by: Tonghao Zhang
---
 drivers/net/ixgbe/ixgbe_ethdev.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 7928144..ef01e0c 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -4253,7 +4253,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
  */
 static int
 ixgbe_dev_interrupt_action(struct rte_eth_dev *dev,
-			   struct rte_intr_handle *intr_handle)
+			   __rte_unused struct rte_intr_handle *intr_handle)
 {
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
@@ -4304,7 +4304,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 	PMD_DRV_LOG(DEBUG, "enable intr immediately");
 	ixgbe_enable_intr(dev);
-	rte_intr_enable(intr_handle);
 
 	return 0;
 }
@@ -4327,8 +4326,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 ixgbe_dev_interrupt_delayed_handler(void *param)
 {
 	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	struct ixgbe_interrupt *intr =
 		IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
 	struct ixgbe_hw *hw =
@@ -4366,7 +4363,6 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 
 	PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
 	ixgbe_enable_intr(dev);
-	rte_intr_enable(intr_handle);
 }
 
 /**
@@ -5607,8 +5603,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 	RTE_SET_USED(queue_id);
 
 	IXGBE_WRITE_REG(hw, IXGBE_VTEIMS, intr->mask);
-	rte_intr_enable(intr_handle);
-
 	return 0;
 }
 
@@ -5635,8 +5629,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 static int
 ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint32_t mask;
 	struct ixgbe_hw *hw =
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5656,7 +5648,6 @@ static void ixgbevf_set_vfta_all(struct rte_eth_dev *dev, bool on)
 		mask &= (1 << (queue_id - 32));
 		IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(1), mask);
 	}
-	rte_intr_enable(intr_handle);
 
 	return 0;
 }
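As a side note on where the cycles went: for a VFIO-bound port,
rte_intr_enable boils down to a VFIO_DEVICE_SET_IRQS ioctl that re-binds
the eventfd(s) to the MSI-X vectors, and the kernel masks/unmasks the MSI
entries while doing so (hence msi_set_mask_bit in the profile). A rough
standalone sketch of that binding is below; it is not DPDK code, and
vfio_dev_fd/efd are assumed to be an already-open VFIO device fd and
eventfd.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Illustrative sketch: bind one eventfd to MSI-X vector 0 of a VFIO
 * device. This is the kind of work rte_intr_enable() triggers for a
 * VFIO-bound port; doing it on every rte_eth_dev_rx_intr_enable() call
 * is what made msi_set_mask_bit visible in perf. vfio_dev_fd and efd
 * are assumed to be already open.
 */
static int
map_eventfd_to_msix0(int vfio_dev_fd, int efd)
{
	char buf[sizeof(struct vfio_irq_set) + sizeof(int)];
	struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;

	memset(buf, 0, sizeof(buf));
	irq_set->argsz = sizeof(buf);
	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
			 VFIO_IRQ_SET_ACTION_TRIGGER;
	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
	irq_set->start = 0;	/* first vector */
	irq_set->count = 1;	/* one eventfd */
	memcpy(irq_set->data, &efd, sizeof(int));

	/* The kernel masks/unmasks the MSI entries while handling this. */
	return ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
}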