From patchwork Wed Mar 28 15:43:48 2018
From: Didier Pallard <didier.pallard@6wind.com>
To: dev@dpdk.org
Date: Wed, 28 Mar 2018 17:43:48 +0200
Message-Id: <20180328154349.24976-8-didier.pallard@6wind.com>
In-Reply-To: <20180328154349.24976-1-didier.pallard@6wind.com>
References: <20180328154349.24976-1-didier.pallard@6wind.com>
Subject: [dpdk-dev] [PATCH 7/8] net/vmxnet3: ignore empty segments in reception

When several TCP fragments are contained in a packet that fits in a single
mbuf segment, vmxnet3 delivers an empty segment after the first one; it
carries only offload information. In the current code, this empty segment
is propagated as-is to the upper application.

Drop such empty segments directly when receiving buffers, since they may
cause unneeded extra processing in the upper application.
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
---
 drivers/net/vmxnet3/vmxnet3_rxtx.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 1f273f88e..1d344b26e 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -925,18 +925,23 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			}
 
 			rxq->start_seg = rxm;
+			rxq->last_seg = rxm;
 			vmxnet3_rx_offload(hw, rcd, rxm, 1);
 		} else {
 			struct rte_mbuf *start = rxq->start_seg;
 			RTE_ASSERT(rxd->btype == VMXNET3_RXD_BTYPE_BODY);
 
-			start->pkt_len += rxm->data_len;
-			start->nb_segs++;
+			if (rxm->data_len) {
+				start->pkt_len += rxm->data_len;
+				start->nb_segs++;
 
-			rxq->last_seg->next = rxm;
+				rxq->last_seg->next = rxm;
+				rxq->last_seg = rxm;
+			} else {
+				rte_pktmbuf_free_seg(rxm);
+			}
 		}
-		rxq->last_seg = rxm;
 
 		if (rcd->eop) {
 			struct rte_mbuf *start = rxq->start_seg;
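
For illustration, here is the same skip-empty-segment logic in a minimal,
self-contained form. The seg and rxq_state structures and the free_seg()
helper below are simplified stand-ins, not the real rte_mbuf API; this is
only a sketch of the chaining behaviour the hunk above changes.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for rte_mbuf and the RX queue state. */
struct seg {
	size_t data_len;        /* bytes of data in this segment */
	size_t pkt_len;         /* total packet length (valid on first segment) */
	unsigned int nb_segs;   /* number of chained segments (first segment) */
	struct seg *next;
};

struct rxq_state {
	struct seg *start_seg;  /* first segment of the packet being built */
	struct seg *last_seg;   /* last segment chained so far */
};

/* Hypothetical helper, stands in for rte_pktmbuf_free_seg(). */
static void free_seg(struct seg *s)
{
	free(s);
}

/* Chain one body segment onto the packet under construction,
 * dropping zero-length segments as the patch does. */
static void chain_body_seg(struct rxq_state *rxq, struct seg *s)
{
	struct seg *start = rxq->start_seg;

	if (s->data_len) {
		start->pkt_len += s->data_len;
		start->nb_segs++;
		rxq->last_seg->next = s;
		rxq->last_seg = s;   /* only advance when the segment is kept */
	} else {
		free_seg(s);         /* empty segment: drop it immediately */
	}
}

int main(void)
{
	struct seg *first = calloc(1, sizeof(*first));
	struct seg *empty = calloc(1, sizeof(*empty));
	struct seg *body  = calloc(1, sizeof(*body));
	struct rxq_state rxq;

	first->data_len = first->pkt_len = 1400;
	first->nb_segs = 1;
	rxq.start_seg = rxq.last_seg = first;

	empty->data_len = 0;     /* the kind of segment the patch discards */
	body->data_len = 200;

	chain_body_seg(&rxq, empty);
	chain_body_seg(&rxq, body);

	printf("pkt_len=%zu nb_segs=%u\n", first->pkt_len, first->nb_segs);
	/* expected: pkt_len=1600 nb_segs=2 -- the empty segment never
	 * appears in the chain handed to the application */
	return 0;
}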