From patchwork Mon Dec 17 11:03:07 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 49003
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: Rafal Kozik, stable@dpdk.org
Date: Mon, 17 Dec 2018 12:03:07 +0100
Message-Id: <20181217110307.29969-1-mk@semihalf.com>
In-Reply-To: <20181214131846.22439-1-mk@semihalf.com>
References: <20181214131846.22439-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v2 14/20] net/ena: fix cleanup for out of order packets

From: Rafal Kozik

When a wrong req_id is detected, some of the previous mbufs may already
be in use for receiving different segments of the received packets. In
such cases, the chained mbufs would be returned to the mempool twice.
To prevent this, the chained mbuf is now freed right after the error is
detected. To simplify the cleanup, the pointers taken from the Rx ring
are set to NULL.
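To make the double free concrete, here is a minimal, self-contained
sketch of the ownership rule the fix enforces. It is not the ENA code:
the ring is modeled as a bare pointer array and "freeing" is a counter,
so the outcome is printable. Only the rx_buffer_info name and the sweep
in release_bufs() mirror the driver; everything else is illustrative.

#include <stdio.h>

#define RING_SIZE 4

/* Stand-ins for the driver's ring slots and rte_mbuf_raw_free(). */
static int bufs[RING_SIZE];
static int *rx_buffer_info[RING_SIZE];
static int free_calls[RING_SIZE];

static void fake_free(int *buf)
{
	free_calls[buf - bufs]++;
}

/* Error path of the receive loop: the buffer is freed immediately.
 * Clearing the ring slot first makes the caller the only owner, which
 * is the invariant this patch introduces. */
static void recv_error_path(int req_id)
{
	int *mbuf = rx_buffer_info[req_id];

	rx_buffer_info[req_id] = NULL;	/* ownership leaves the ring */
	fake_free(mbuf);		/* freed once, here */
}

/* Mirrors the new ena_rx_queue_release_bufs(): sweep the whole ring
 * and free only the buffers the ring still owns. */
static void release_bufs(void)
{
	unsigned int i;

	for (i = 0; i < RING_SIZE; ++i)
		if (rx_buffer_info[i]) {
			fake_free(rx_buffer_info[i]);
			rx_buffer_info[i] = NULL;
		}
}

int main(void)
{
	unsigned int i;

	for (i = 0; i < RING_SIZE; ++i)
		rx_buffer_info[i] = &bufs[i];

	recv_error_path(1);	/* one buffer hit a bad req_id */
	release_bufs();		/* queue teardown */

	for (i = 0; i < RING_SIZE; ++i)
		printf("buf %u freed %d time(s)\n", i, free_calls[i]);
	/* Without the NULL assignment in recv_error_path(), buf 1 would
	 * report 2 frees -- the bug this patch fixes. */
	return 0;
}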
As the queues are not used anymore after ena_rx_queue_release_bufs()
and ena_tx_queue_release_bufs(), updating the next_to_clean pointer
there is not necessary.

Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Change-Id: I5e93cfb93c145f507fee2a8b2fb230332ce78e33
Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
v2:
* Fix the for-loop build error seen when the compiler is not using C99
  mode

 drivers/net/ena/ena_ethdev.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 14165561e..364778840 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -770,17 +770,13 @@ static void ena_tx_queue_release(void *queue)
 
 static void ena_rx_queue_release_bufs(struct ena_ring *ring)
 {
-	unsigned int ring_mask = ring->ring_size - 1;
-
-	while (ring->next_to_clean != ring->next_to_use) {
-		struct rte_mbuf *m =
-			ring->rx_buffer_info[ring->next_to_clean & ring_mask];
-
-		if (m)
-			rte_mbuf_raw_free(m);
+	unsigned int i;
 
-		ring->next_to_clean++;
-	}
+	for (i = 0; i < ring->ring_size; ++i)
+		if (ring->rx_buffer_info[i]) {
+			rte_mbuf_raw_free(ring->rx_buffer_info[i]);
+			ring->rx_buffer_info[i] = NULL;
+		}
 }
 
 static void ena_tx_queue_release_bufs(struct ena_ring *ring)
@@ -792,8 +788,6 @@ static void ena_tx_queue_release_bufs(struct ena_ring *ring)
 
 		if (tx_buf->mbuf)
 			rte_pktmbuf_free(tx_buf->mbuf);
-
-		ring->next_to_clean++;
 	}
 }
 
@@ -2077,10 +2071,14 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		while (segments < ena_rx_ctx.descs) {
 			req_id = ena_rx_ctx.ena_bufs[segments].req_id;
 			rc = validate_rx_req_id(rx_ring, req_id);
-			if (unlikely(rc))
+			if (unlikely(rc)) {
+				if (segments != 0)
+					rte_mbuf_raw_free(mbuf_head);
 				break;
+			}
 
 			mbuf = rx_buff_info[req_id];
+			rx_buff_info[req_id] = NULL;
 			mbuf->data_len = ena_rx_ctx.ena_bufs[segments].len;
 			mbuf->data_off = RTE_PKTMBUF_HEADROOM;
 			mbuf->refcnt = 1;
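Two details of the receive-path hunk are worth spelling out (a reading
note, not part of the posted patch). mbuf_head is freed only when
segments != 0 because, when the very first req_id of a packet is
invalid, no mbuf has been taken out of the ring yet: rx_buffer_info[]
still owns every buffer, and the release helper will reclaim them. Once
at least one segment has been consumed, its ring slot has just been set
to NULL, so the chain headed by mbuf_head is the sole owner and freeing
it here is both safe and required to avoid a leak. Likewise, since the
release helpers now sweep the entire ring and run only on queues that
are no longer in use, the next_to_clean accounting they used to do
carries no information, which is why both release hunks simply drop it.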