From patchwork Mon Dec 17 11:06:18 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 49004
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: Rafal Kozik, stable@dpdk.org
Date: Mon, 17 Dec 2018 12:06:18 +0100
Message-Id: <20181217110618.30204-1-mk@semihalf.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20181217110307.29969-1-mk@semihalf.com>
References: <20181217110307.29969-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 14/20] net/ena: fix cleanup for out of order packets

From: Rafal Kozik

When a wrong req_id is detected, some previously taken mbufs could still
be used for receiving different segments of the incoming packets. In such
cases the chained mbufs would be returned to the mempool twice. To prevent
that, the chained mbuf is now freed right after the error is detected.

To simplify the cleanup, pointers taken from the Rx ring are set to NULL.
As the queues are not used anymore after ena_rx_queue_release_bufs() and
ena_tx_queue_release_bufs(), updating the next_to_clean pointer there is
no longer necessary.

Fixes: c2034976673d ("net/ena: add Rx out of order completion")
Cc: stable@dpdk.org

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
v3:
* Remove Gerrit Change-Id from commit message

v2:
* Fix for loop error when compiler is not using C99 mode

 drivers/net/ena/ena_ethdev.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 14165561e..364778840 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -770,17 +770,13 @@ static void ena_tx_queue_release(void *queue)
 
 static void ena_rx_queue_release_bufs(struct ena_ring *ring)
 {
-	unsigned int ring_mask = ring->ring_size - 1;
-
-	while (ring->next_to_clean != ring->next_to_use) {
-		struct rte_mbuf *m =
-			ring->rx_buffer_info[ring->next_to_clean & ring_mask];
-
-		if (m)
-			rte_mbuf_raw_free(m);
+	unsigned int i;
 
-		ring->next_to_clean++;
-	}
+	for (i = 0; i < ring->ring_size; ++i)
+		if (ring->rx_buffer_info[i]) {
+			rte_mbuf_raw_free(ring->rx_buffer_info[i]);
+			ring->rx_buffer_info[i] = NULL;
+		}
 }
 
 static void ena_tx_queue_release_bufs(struct ena_ring *ring)
@@ -792,8 +788,6 @@ static void ena_tx_queue_release_bufs(struct ena_ring *ring)
 
 		if (tx_buf->mbuf)
 			rte_pktmbuf_free(tx_buf->mbuf);
-
-		ring->next_to_clean++;
 	}
 }
 
@@ -2077,10 +2071,14 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	while (segments < ena_rx_ctx.descs) {
 		req_id = ena_rx_ctx.ena_bufs[segments].req_id;
 		rc = validate_rx_req_id(rx_ring, req_id);
-		if (unlikely(rc))
+		if (unlikely(rc)) {
+			if (segments != 0)
+				rte_mbuf_raw_free(mbuf_head);
 			break;
+		}
 
 		mbuf = rx_buff_info[req_id];
+		rx_buff_info[req_id] = NULL;
 		mbuf->data_len = ena_rx_ctx.ena_bufs[segments].len;
 		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
 		mbuf->refcnt = 1;
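
For reviewers who want the failure mode spelled out, here is a minimal,
self-contained sketch of the double free the patch prevents and of the
NULL-out strategy it introduces. This is not the driver code: names such
as ring_entry and chain are hypothetical stand-ins for rx_buffer_info[]
and the mbuf chain.

#include <stdlib.h>

#define RING_SIZE 4

struct mbuf { struct mbuf *next; };

int main(void)
{
	/* Stand-in for rx_buffer_info[]: ring slots owning mbufs. */
	struct mbuf *ring_entry[RING_SIZE] = { NULL };
	struct mbuf *chain;
	unsigned int i;

	/* Two segments of one packet are taken from the ring and chained. */
	ring_entry[0] = calloc(1, sizeof(struct mbuf));
	ring_entry[1] = calloc(1, sizeof(struct mbuf));
	chain = ring_entry[0];
	chain->next = ring_entry[1];

	/*
	 * A bad req_id aborts the packet. Before the fix, the Rx loop only
	 * broke out: the half-built chain leaked, and the ring slots kept
	 * stale pointers that a later queue release would free a second
	 * time. After the fix, each slot is cleared as its mbuf is taken,
	 * and the chain is freed once, right where the error is detected.
	 */
	ring_entry[0] = NULL;
	ring_entry[1] = NULL;
	free(chain->next);
	free(chain);

	/* Queue release now sees only NULL slots: nothing is freed twice. */
	for (i = 0; i < RING_SIZE; ++i)
		if (ring_entry[i])
			free(ring_entry[i]);

	return 0;
}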