From patchwork Mon Dec 23 02:55:47 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Li, Xiaoyun" <xiaoyun.li@intel.com>
X-Patchwork-Id: 64107
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 2FC20A0352;
	Mon, 23 Dec 2019 03:58:03 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 80DAC378B;
	Mon, 23 Dec 2019 03:58:01 +0100 (CET)
Received: from mga18.intel.com (mga18.intel.com [134.134.136.126])
	by dpdk.org (Postfix) with ESMTP id 022651C01;
	Mon, 23 Dec 2019 03:57:59 +0100 (CET)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
	by orsmga106.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	22 Dec 2019 18:57:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.69,346,1571727600"; d="scan'208";a="417120420"
Received: from dpdk-xiaoyunl.sh.intel.com ([10.67.110.162])
	by fmsmga005.fm.intel.com with ESMTP; 22 Dec 2019 18:57:56 -0800
From: Xiaoyun Li <xiaoyun.li@intel.com>
To: qi.z.zhang@intel.com, beilei.xing@intel.com, xiaolong.ye@intel.com,
	dev@dpdk.org
Cc: Xiaoyun Li <xiaoyun.li@intel.com>, stable@dpdk.org
Date: Mon, 23 Dec 2019 10:55:47 +0800
Message-Id: <20191223025547.88798-1-xiaoyun.li@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH] net/i40e: fix TSO pkt exceeds allowed buf size issue
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

The hardware limits the maximum buffer size per Tx descriptor to
(16K-1)B. So when TSO is enabled, an mbuf data buffer may exceed this
limit and be treated as malicious behaviour by the NIC.

This patch fixes the issue by using multiple Tx descriptors for such
large buffers.

Fixes: 4861cde46116 ("i40e: new poll mode driver")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
 drivers/net/i40e/i40e_rxtx.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 17dc8c78f..8269dc022 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -989,6 +989,8 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
 	return ctx_desc;
 }
 
+/* HW requires that Tx buffer size ranges from 1B up to (16K-1)B. */
+#define I40E_MAX_DATA_PER_TXD (16 * 1024 - 1)
 uint16_t
 i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
@@ -1046,8 +1048,15 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * The number of descriptors that must be allocated for
 		 * a packet equals to the number of the segments of that
 		 * packet plus 1 context descriptor if needed.
+		 * Recalculate the needed tx descs when TSO enabled in case
+		 * the mbuf data size exceeds max data size that hw allows
+		 * per tx desc.
 		 */
-		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		if (ol_flags & PKT_TX_TCP_SEG)
+			nb_used = (uint16_t)(DIV_ROUND_UP(tx_pkt->data_len,
+					I40E_MAX_DATA_PER_TXD) + nb_ctx);
+		else
+			nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
 		tx_last = (uint16_t)(tx_id + nb_used - 1);
 
 		/* Circular ring */
@@ -1160,6 +1169,24 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			slen = m_seg->data_len;
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
+			while ((ol_flags & PKT_TX_TCP_SEG) &&
+				unlikely(slen > I40E_MAX_DATA_PER_TXD)) {
+				txd->buffer_addr =
+					rte_cpu_to_le_64(buf_dma_addr);
+				txd->cmd_type_offset_bsz =
+					i40e_build_ctob(td_cmd,
+					td_offset, I40E_MAX_DATA_PER_TXD,
+					td_tag);
+
+				buf_dma_addr += I40E_MAX_DATA_PER_TXD;
+				slen -= I40E_MAX_DATA_PER_TXD;
+
+				txe->last_id = tx_last;
+				tx_id = txe->next_id;
+				txe = txn;
+				txd = &txr[tx_id];
+				txn = &sw_ring[txe->next_id];
+			}
 			PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n"
 				   "buf_dma_addr: %#"PRIx64";\n"
 				   "td_cmd: %#x;\n"
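
For reference, below is a minimal standalone sketch (not driver code and
not part of the patch) of the descriptor accounting the diff relies on:
a TSO mbuf segment of data_len bytes needs
DIV_ROUND_UP(data_len, I40E_MAX_DATA_PER_TXD) data descriptors, each
carrying at most (16K-1)B. I40E_MAX_DATA_PER_TXD and DIV_ROUND_UP mirror
the patch; main() and the 64KB data_len are illustrative assumptions.

/*
 * Standalone sketch: models how one oversized TSO buffer is split
 * across several Tx descriptors, as the while loop in the patch does.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define I40E_MAX_DATA_PER_TXD (16 * 1024 - 1)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	uint32_t data_len = 64 * 1024;	/* e.g. one 64KB TSO mbuf segment */
	uint16_t nb_used = (uint16_t)DIV_ROUND_UP(data_len,
						  I40E_MAX_DATA_PER_TXD);
	uint32_t slen = data_len;

	printf("%" PRIu32 " bytes -> %u data descriptors\n",
	       data_len, (unsigned int)nb_used);

	/* Same chunking as the patch: full-size chunks, then the tail. */
	while (slen > I40E_MAX_DATA_PER_TXD) {
		printf("desc: %u bytes\n", (unsigned int)I40E_MAX_DATA_PER_TXD);
		slen -= I40E_MAX_DATA_PER_TXD;
	}
	printf("desc: %" PRIu32 " bytes (tail)\n", slen);
	return 0;
}

With a 64KB payload this prints 5 descriptors: four full (16K-1)B chunks
plus a 4B tail, matching the descriptors the while loop above would
write before the final one for the remaining data.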