From patchwork Sun Jun 12 17:11:27 2016
X-Patchwork-Submitter: "Iremonger, Bernard"
X-Patchwork-Id: 13483
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Bernard Iremonger
To: dev@dpdk.org
Cc: declan.doherty@intel.com, konstantin.ananyev@intel.com, Bernard Iremonger
Date: Sun, 12 Jun 2016 18:11:27 +0100
Message-Id: <1465751489-10111-3-git-send-email-bernard.iremonger@intel.com>
In-Reply-To: <1465751489-10111-1-git-send-email-bernard.iremonger@intel.com>
References: <1464280727-25752-2-git-send-email-bernard.iremonger@intel.com>
 <1465751489-10111-1-git-send-email-bernard.iremonger@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove
When adding or removing a slave device from the bonding device,
the rx and tx queue spinlocks should be held.

Signed-off-by: Bernard Iremonger
Acked-by: Konstantin Ananyev
---
 drivers/net/bonding/rte_eth_bond_api.c | 51 +++++++++++++++++++++++++++++++---
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 53df9fe..006c901 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -437,8 +437,10 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
-
+	struct bond_tx_queue *bd_tx_q;
+	struct bond_rx_queue *bd_rx_q;
 	int retval;
+	uint16_t i;
 
 	/* Verify that port id's are valid bonded and slave ports */
 	if (valid_bonded_port_id(bonded_port_id) != 0)
@@ -448,11 +450,30 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
 	internals = bonded_eth_dev->data->dev_private;
 
 	rte_spinlock_lock(&internals->lock);
 
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_lock(&bd_rx_q->lock);
+		}
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_lock(&bd_tx_q->lock);
+		}
+	}
+
 	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
 
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_unlock(&bd_rx_q->lock);
+		}
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_unlock(&bd_tx_q->lock);
+		}
+	}
 	rte_spinlock_unlock(&internals->lock);
-
 	return retval;
 }
 
@@ -541,7 +562,10 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
+	struct bond_tx_queue *bd_tx_q;
+	struct bond_rx_queue *bd_rx_q;
 	int retval;
+	uint16_t i;
 
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
@@ -550,11 +574,32 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
 	internals = bonded_eth_dev->data->dev_private;
 
 	rte_spinlock_lock(&internals->lock);
 
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_lock(&bd_rx_q->lock);
+		}
+
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_lock(&bd_tx_q->lock);
+		}
+	}
+
 	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
 
-	rte_spinlock_unlock(&internals->lock);
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_unlock(&bd_tx_q->lock);
+		}
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_unlock(&bd_rx_q->lock);
+		}
+	}
+	rte_spinlock_unlock(&internals->lock);
 
 	return retval;
 }