From patchwork Thu Jan 10 10:22:34 2019
X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)"
X-Patchwork-Id: 49572
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Hyong Youb Kim
To: Ferruh Yigit, Declan Doherty, Chas Williams
Cc: dev@dpdk.org, Hyong Youb Kim, stable@dpdk.org
Date: Thu, 10 Jan 2019 02:22:34 -0800
Message-Id: <20190110102235.1238-2-hyonkim@cisco.com>
In-Reply-To: <20190110102235.1238-1-hyonkim@cisco.com>
References: <20190110102235.1238-1-hyonkim@cisco.com>
Subject: [dpdk-dev] [PATCH 1/2] net/bonding: do not set promisc on non-existent primary port

For active-backup, tlb, and alb modes, bond_ethdev_promiscuous_{enable,disable}
tries to set promisc mode on the primary port, even when there are no
slaves. It is harmless, as rte_eth_promiscuous_{enable,disable} does
nothing if the port number is invalid. But it does print a warning
message. Here is an example from testpmd.

testpmd> create bonded device 5 0
Created new bonded device net_bonding_testpmd_0 on (port 4).
Invalid port_id=33
testpmd> set promisc 4 off
Invalid port_id=33

33 in this case is RTE_MAX_ETHPORTS + 1, the invalid primary port
number used within the bonding driver. The warning message is harmless
but can be confusing to the user. So do not try to set promisc on the
primary port when we know it does not exist (i.e. there are no slaves).
Fixes: 2efb58cbab6e ("bond: new link bonding library")
Cc: stable@dpdk.org

Signed-off-by: Hyong Youb Kim
Acked-by: Chas Williams
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 44deaf119..daf2440cd 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -2593,6 +2593,9 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
+		/* Do not touch promisc when there cannot be primary ports */
+		if (internals->slave_count == 0)
+			break;
 		rte_eth_promiscuous_enable(internals->current_primary_port);
 	}
 }
@@ -2621,6 +2624,9 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
 	case BONDING_MODE_TLB:
 	case BONDING_MODE_ALB:
 	default:
+		/* Do not touch promisc when there cannot be primary ports */
+		if (internals->slave_count == 0)
+			break;
 		rte_eth_promiscuous_disable(internals->current_primary_port);
 	}
 }

From patchwork Thu Jan 10 10:22:35 2019
X-Patchwork-Submitter: "Hyong Youb Kim (hyonkim)"
X-Patchwork-Id: 49573
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Hyong Youb Kim
To: Ferruh Yigit, Declan Doherty, Chas Williams
Cc: dev@dpdk.org, Hyong Youb Kim, stable@dpdk.org
Date: Thu, 10 Jan 2019 02:22:35 -0800
Message-Id: <20190110102235.1238-3-hyonkim@cisco.com>
In-Reply-To: <20190110102235.1238-1-hyonkim@cisco.com>
References: <20190110102235.1238-1-hyonkim@cisco.com>
Subject: [dpdk-dev] [PATCH 2/2] net/bonding: avoid the next active slave going out of bound

For bonding modes like broadcast that use bond_ethdev_rx_burst(), it
is fairly easy to produce a crash simply by bringing a slave port's
link down. When slave links go down, the driver on one thread reduces
active_slave_count via the LSC callback and deactivate_slave(). At the
same time, bond_ethdev_rx_burst() running on a forwarding thread may
increment active_slave (the next active slave) beyond
active_slave_count. Here is a typical sequence of events.

At time 0:
  active_slave_count = 3
  active_slave = 2

At time 1: A slave link goes down.
  Thread 0 (main) reduces active_slave_count to 2.
At time 2: Thread 1 (forwarding) executes bond_ethdev_rx_burst().
  - Reads active_slave_count = 2.
  - Increments active_slave at the end to 3.

From this point on, every time bond_ethdev_rx_burst() runs,
active_slave increments by one, eventually going well out of bounds of
the active_slaves array and causing a crash.

Make the Rx burst function first check that active_slave is within
bounds. If not, reset it to 0 to avoid out-of-range array access.

Fixes: e1110e977648 ("net/bonding: fix Rx slave fairness")
Cc: stable@dpdk.org

Signed-off-by: Hyong Youb Kim
Acked-by: Chas Williams
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index daf2440cd..bc2405e54 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -68,6 +68,15 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	internals = bd_rx_q->dev_private;
 	slave_count = internals->active_slave_count;
 	active_slave = internals->active_slave;
+	/*
+	 * Reset the active slave index, in case active_slave goes out
+	 * of bound. It can happen when slave links go down, and
+	 * another thread (LSC callback) shrinks the slave count.
+	 */
+	if (active_slave >= slave_count) {
+		internals->active_slave = 0;
+		active_slave = 0;
+	}

 	for (i = 0; i < slave_count && nb_pkts; i++) {
 		uint16_t num_rx_slave;
@@ -273,6 +282,11 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 	active_slave = internals->active_slave;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
+	/* active_slave may go out of bound. See bond_ethdev_rx_burst() */
+	if (active_slave >= slave_count) {
+		internals->active_slave = 0;
+		active_slave = 0;
+	}

 	for (i = 0; i < slave_count && nb_pkts; i++) {
 		uint16_t num_rx_slave;