From patchwork Fri Dec 14 13:18:40 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 48872
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: gtzalik@dpdk.org, mw@dpdk.org, matua@amazon.com, rk@semihalf.com, stable@dpdk.org
Date: Fri, 14 Dec 2018 14:18:40 +0100
Message-Id: <20181214131846.22439-15-mk@semihalf.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20181214131846.22439-1-mk@semihalf.com>
References: <20181214131846.22439-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH 14/20] net/ena: fix cleanup for out of order packets

From: Rafal Kozik

When a wrong req_id is detected, some previous mbufs could already have
been used for receiving different segments of the received packets. In
such cases the chained mbufs would be returned to the pool twice. To
prevent that, the chained mbuf is now freed right after the error is
detected. To simplify the cleanup, pointers taken from the Rx ring are
set to NULL.
As the queues are not used after ena_rx_queue_release_bufs() and
ena_tx_queue_release_bufs() are called, updating the next_to_clean
pointer there is not necessary.

Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 14165561e..ce0ca40c4 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -770,17 +770,11 @@ static void ena_tx_queue_release(void *queue)
 
 static void ena_rx_queue_release_bufs(struct ena_ring *ring)
 {
-	unsigned int ring_mask = ring->ring_size - 1;
-
-	while (ring->next_to_clean != ring->next_to_use) {
-		struct rte_mbuf *m =
-			ring->rx_buffer_info[ring->next_to_clean & ring_mask];
-
-		if (m)
-			rte_mbuf_raw_free(m);
-
-		ring->next_to_clean++;
-	}
+	for (unsigned int i = 0; i < ring->ring_size; ++i)
+		if (ring->rx_buffer_info[i]) {
+			rte_mbuf_raw_free(ring->rx_buffer_info[i]);
+			ring->rx_buffer_info[i] = NULL;
+		}
 }
 
 static void ena_tx_queue_release_bufs(struct ena_ring *ring)
@@ -792,8 +786,6 @@ static void ena_tx_queue_release_bufs(struct ena_ring *ring)
 
 		if (tx_buf->mbuf)
 			rte_pktmbuf_free(tx_buf->mbuf);
-
-		ring->next_to_clean++;
 	}
 }
 
@@ -2077,10 +2069,14 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		while (segments < ena_rx_ctx.descs) {
 			req_id = ena_rx_ctx.ena_bufs[segments].req_id;
 			rc = validate_rx_req_id(rx_ring, req_id);
-			if (unlikely(rc))
+			if (unlikely(rc)) {
+				if (segments != 0)
+					rte_mbuf_raw_free(mbuf_head);
 				break;
+			}
 
 			mbuf = rx_buff_info[req_id];
+			rx_buff_info[req_id] = NULL;
 			mbuf->data_len = ena_rx_ctx.ena_bufs[segments].len;
 			mbuf->data_off = RTE_PKTMBUF_HEADROOM;
 			mbuf->refcnt = 1;
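
For readers following the rationale: the pattern the patch applies is to
clear a ring slot at the moment its buffer is taken, so that a later
blanket release pass frees only the buffers the ring still owns, and a
wrong req_id can no longer lead to a double free. Below is a minimal
standalone C sketch of that idea; the types and names (buf_ring,
ring_take, ring_release_bufs) are hypothetical illustrations, not the
actual ENA driver code.

	#include <stddef.h>
	#include <stdlib.h>

	/* Hypothetical ring: slots[i] == NULL means the ring no longer
	 * owns the buffer in slot i. */
	struct buf_ring {
		void **slots;
		size_t ring_size;
	};

	/* Take ownership of the buffer in slot i. Clearing the slot is
	 * the key step: a later release pass cannot free it again. */
	static void *ring_take(struct buf_ring *r, size_t i)
	{
		void *buf = r->slots[i];

		r->slots[i] = NULL;
		return buf;
	}

	/* Free everything the ring still owns. A full scan over all
	 * slots needs no next_to_clean bookkeeping, mirroring the
	 * reworked ena_rx_queue_release_bufs() above. */
	static void ring_release_bufs(struct buf_ring *r)
	{
		size_t i;

		for (i = 0; i < r->ring_size; ++i) {
			free(r->slots[i]); /* free(NULL) is a no-op */
			r->slots[i] = NULL;
		}
	}

In the driver itself the same two roles are played by the new
"rx_buff_info[req_id] = NULL;" after the mbuf is taken on the Rx path,
and by the full-slot scan in ena_rx_queue_release_bufs().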