From patchwork Fri Aug 31 12:25:57 2018
X-Patchwork-Submitter: Andrzej Ostruszka
X-Patchwork-Id: 44098
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrzej Ostruszka
To: dev@dpdk.org
Cc: mw@semihalf.com, zr@semihalf.com, tdu@semihalf.com, nsamsono@marvell.com,
 Jia Yu, stable@dpdk.org
Date: Fri, 31 Aug 2018 14:25:57 +0200
Message-Id: <1535718368-15803-2-git-send-email-amo@semihalf.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1535718368-15803-1-git-send-email-amo@semihalf.com>
References: <1535469030-18647-1-git-send-email-amo@semihalf.com>
 <1535718368-15803-1-git-send-email-amo@semihalf.com>
Subject: [dpdk-dev] [PATCH v2 1/8] net/bonding: fix buf corruption in packets

From: Jia Yu

When bond slave devices cannot transmit all packets in the bufs array,
the tx_burst callback shall merge the un-transmitted packets back into
the bufs array. The recent merge logic introduced a bug that causes
invalid mbuf addresses to be written to the bufs array.
When the caller then frees the un-transmitted packets, the application
crashes because of these invalid addresses. The fix is to avoid shifting
mbufs and to write the un-transmitted packets directly back to the bufs
array.

Fixes: 09150784a776 ("net/bonding: burst mode hash calculation")
Cc: stable@dpdk.org

Signed-off-by: Jia Yu
Acked-by: Chas Williams
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 116 +++++++--------------------------
 1 file changed, 23 insertions(+), 93 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 4417422..b84f322 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -301,10 +301,10 @@ bond_ethdev_tx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 	/* Mapping array generated by hash function to map mbufs to slaves */
 	uint16_t bufs_slave_port_idxs[RTE_MAX_ETHPORTS] = { 0 };
 
-	uint16_t slave_tx_count, slave_tx_fail_count[RTE_MAX_ETHPORTS] = { 0 };
+	uint16_t slave_tx_count;
 	uint16_t total_tx_count = 0, total_tx_fail_count = 0;
 
-	uint16_t i, j;
+	uint16_t i;
 
 	if (unlikely(nb_bufs == 0))
 		return 0;
@@ -359,34 +359,12 @@ bond_ethdev_tx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
 
 		/* If tx burst fails move packets to end of bufs */
 		if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
-			slave_tx_fail_count[i] = slave_nb_bufs[i] -
+			int slave_tx_fail_count = slave_nb_bufs[i] -
 					slave_tx_count;
-			total_tx_fail_count += slave_tx_fail_count[i];
-
-			/*
-			 * Shift bufs to beginning of array to allow reordering
-			 * later
-			 */
-			for (j = 0; j < slave_tx_fail_count[i]; j++) {
-				slave_bufs[i][j] =
-					slave_bufs[i][(slave_tx_count - 1) + j];
-			}
-		}
-	}
-
-	/*
-	 * If there are tx burst failures we move packets to end of bufs to
-	 * preserve expected PMD behaviour of all failed transmitted being
-	 * at the end of the input mbuf array
-	 */
-	if (unlikely(total_tx_fail_count > 0)) {
-		int bufs_idx = nb_bufs - total_tx_fail_count - 1;
-
-		for (i = 0; i < slave_count; i++) {
-			if (slave_tx_fail_count[i] > 0) {
-				for (j = 0; j < slave_tx_fail_count[i]; j++)
-					bufs[bufs_idx++] = slave_bufs[i][j];
-			}
+			total_tx_fail_count += slave_tx_fail_count;
+			memcpy(&bufs[nb_bufs - total_tx_fail_count],
+				&slave_bufs[i][slave_tx_count],
+				slave_tx_fail_count * sizeof(bufs[0]));
 		}
 	}
 
@@ -716,8 +694,8 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 
 			tx_fail_total += tx_fail_slave;
 			memcpy(&bufs[nb_pkts - tx_fail_total],
-					&slave_bufs[i][num_tx_slave],
-					tx_fail_slave * sizeof(bufs[0]));
+				&slave_bufs[i][num_tx_slave],
+				tx_fail_slave * sizeof(bufs[0]));
 		}
 		num_tx_total += num_tx_slave;
 	}
@@ -1222,10 +1200,10 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	/* Mapping array generated by hash function to map mbufs to slaves */
 	uint16_t bufs_slave_port_idxs[nb_bufs];
 
-	uint16_t slave_tx_count, slave_tx_fail_count[RTE_MAX_ETHPORTS] = { 0 };
+	uint16_t slave_tx_count;
 	uint16_t total_tx_count = 0, total_tx_fail_count = 0;
 
-	uint16_t i, j;
+	uint16_t i;
 
 	if (unlikely(nb_bufs == 0))
 		return 0;
@@ -1266,34 +1244,12 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 
 		/* If tx burst fails move packets to end of bufs */
 		if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
-			slave_tx_fail_count[i] = slave_nb_bufs[i] -
+			int slave_tx_fail_count = slave_nb_bufs[i] -
 					slave_tx_count;
-			total_tx_fail_count += slave_tx_fail_count[i];
-
-			/*
-			 * Shift bufs to beginning of array to allow reordering
-			 * later
-			 */
-			for (j = 0; j < slave_tx_fail_count[i]; j++) {
-				slave_bufs[i][j] =
-					slave_bufs[i][(slave_tx_count - 1) + j];
-			}
-		}
-	}
-
-	/*
-	 * If there are tx burst failures we move packets to end of bufs to
-	 * preserve expected PMD behaviour of all failed transmitted being
-	 * at the end of the input mbuf array
-	 */
-	if (unlikely(total_tx_fail_count > 0)) {
-		int bufs_idx = nb_bufs - total_tx_fail_count - 1;
-
-		for (i = 0; i < slave_count; i++) {
-			if (slave_tx_fail_count[i] > 0) {
-				for (j = 0; j < slave_tx_fail_count[i]; j++)
-					bufs[bufs_idx++] = slave_bufs[i][j];
-			}
+			total_tx_fail_count += slave_tx_fail_count;
+			memcpy(&bufs[nb_bufs - total_tx_fail_count],
+				&slave_bufs[i][slave_tx_count],
+				slave_tx_fail_count * sizeof(bufs[0]));
 		}
 	}
 
@@ -1320,10 +1276,10 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	/* Mapping array generated by hash function to map mbufs to slaves */
 	uint16_t bufs_slave_port_idxs[RTE_MAX_ETHPORTS] = { 0 };
 
-	uint16_t slave_tx_count, slave_tx_fail_count[RTE_MAX_ETHPORTS] = { 0 };
+	uint16_t slave_tx_count;
 	uint16_t total_tx_count = 0, total_tx_fail_count = 0;
 
-	uint16_t i, j;
+	uint16_t i;
 
 	if (unlikely(nb_bufs == 0))
 		return 0;
@@ -1381,39 +1337,13 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 
 			/* If tx burst fails move packets to end of bufs */
 			if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
-				slave_tx_fail_count[i] = slave_nb_bufs[i] -
+				int slave_tx_fail_count = slave_nb_bufs[i] -
 						slave_tx_count;
-				total_tx_fail_count += slave_tx_fail_count[i];
-
-				/*
-				 * Shift bufs to beginning of array to allow
-				 * reordering later
-				 */
-				for (j = 0; j < slave_tx_fail_count[i]; j++)
-					slave_bufs[i][j] =
-						slave_bufs[i]
-							[(slave_tx_count - 1)
-							+ j];
-			}
-		}
+				total_tx_fail_count += slave_tx_fail_count;
 
-		/*
-		 * If there are tx burst failures we move packets to end of
-		 * bufs to preserve expected PMD behaviour of all failed
-		 * transmitted being at the end of the input mbuf array
-		 */
-		if (unlikely(total_tx_fail_count > 0)) {
-			int bufs_idx = nb_bufs - total_tx_fail_count - 1;
-
-			for (i = 0; i < slave_count; i++) {
-				if (slave_tx_fail_count[i] > 0) {
-					for (j = 0;
-							j < slave_tx_fail_count[i];
-							j++) {
-						bufs[bufs_idx++] =
-							slave_bufs[i][j];
-					}
-				}
+				memcpy(&bufs[nb_bufs - total_tx_fail_count],
+					&slave_bufs[i][slave_tx_count],
+					slave_tx_fail_count * sizeof(bufs[0]));
 			}
 		}
 	}
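
Editor's note (not part of the patch): the sketch below is a minimal,
standalone illustration of the merge-back scheme the patch switches to,
not DPDK code. Instead of shifting mbufs inside slave_bufs and copying
from index (slave_tx_count - 1) as the removed code did, the
un-transmitted tail starting at index slave_tx_count is memcpy'd straight
into the tail of the caller's bufs array. Plain ints stand in for
struct rte_mbuf pointers, and the slave and tx counts are made-up example
values.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define NB_BUFS   8
#define NB_SLAVES 2

/* Plain ints stand in for struct rte_mbuf pointers in this illustration. */
typedef int mbuf_t;

int main(void)
{
	/* Caller's burst of NB_BUFS packets. */
	mbuf_t bufs[NB_BUFS] = { 10, 11, 12, 13, 14, 15, 16, 17 };

	/* Packets as already distributed to the slaves by the hash. */
	mbuf_t slave_bufs[NB_SLAVES][NB_BUFS] = {
		{ 10, 12, 14, 16 },	/* slave 0 was given 4 packets */
		{ 11, 13, 15, 17 },	/* slave 1 was given 4 packets */
	};
	uint16_t slave_nb_bufs[NB_SLAVES] = { 4, 4 };

	/* Pretend each slave transmitted only part of its share. */
	uint16_t slave_tx_count[NB_SLAVES] = { 3, 2 };

	uint16_t total_tx_count = 0, total_tx_fail_count = 0;

	for (uint16_t i = 0; i < NB_SLAVES; i++) {
		total_tx_count += slave_tx_count[i];

		/* Same pattern as the fix: copy the un-transmitted tail of
		 * the slave's array, starting at slave_tx_count (not
		 * slave_tx_count - 1), to the tail of the caller's bufs. */
		if (slave_tx_count[i] < slave_nb_bufs[i]) {
			int slave_tx_fail_count =
				slave_nb_bufs[i] - slave_tx_count[i];

			total_tx_fail_count += slave_tx_fail_count;
			memcpy(&bufs[NB_BUFS - total_tx_fail_count],
				&slave_bufs[i][slave_tx_count[i]],
				slave_tx_fail_count * sizeof(bufs[0]));
		}
	}

	/* All failed packets now sit at the end of bufs, so the caller can
	 * free bufs[total_tx_count] .. bufs[NB_BUFS - 1] safely. */
	printf("sent %u, failed %u, tail of bufs:",
			(unsigned)total_tx_count,
			(unsigned)total_tx_fail_count);
	for (uint16_t k = NB_BUFS - total_tx_fail_count; k < NB_BUFS; k++)
		printf(" %d", bufs[k]);
	printf("\n");

	return 0;
}

With the example counts above it prints "sent 5, failed 3, tail of bufs:
15 17 16": every packet the slaves did not send ends up at the end of
bufs with its original (valid) address, which is the PMD contract the
caller relies on when freeing the failed packets.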