From patchwork Tue Oct 13 13:45:38 2020
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 80526
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
CC: Ivan Malov
Date: Tue, 13 Oct 2020 14:45:38 +0100
Message-ID: <1602596753-32282-22-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1602596753-32282-1-git-send-email-arybchenko@solarflare.com>
References: <1602596753-32282-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 21/36] net/sfc: add header segments check for EF100 Tx datapath

From: Ivan Malov

The EF100 native Tx datapath demands that the packet header be
contiguous when partial checksum offloads are used, since a helper
function is used to calculate the pseudo-header checksum (and that
function requires a contiguous header).

Add an explicit check for this assumption and restructure the code to
avoid the TSO header linearisation check, since TSO header
linearisation is not done on the EF100 native Tx datapath.

Signed-off-by: Ivan Malov
Signed-off-by: Andrew Rybchenko
---
A standalone sketch of the header segment walk is appended after the
diff for reference.

 drivers/net/sfc/sfc_dp_tx.h    | 85 +++++++++++++++++++++++++++-------
 drivers/net/sfc/sfc_ef100_tx.c |  4 +-
 drivers/net/sfc/sfc_ef10_tx.c  |  2 +-
 drivers/net/sfc/sfc_tx.c       |  2 +-
 4 files changed, 73 insertions(+), 20 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 67aa398b7f..bed8ce84aa 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -206,14 +206,38 @@ sfc_dp_tx_offload_capa(const struct sfc_dp_tx *dp_tx)
 	return dp_tx->dev_offload_capa | dp_tx->queue_offload_capa;
 }
 
+static inline unsigned int
+sfc_dp_tx_pkt_extra_hdr_segs(struct rte_mbuf **m_seg,
+			     unsigned int *header_len_remaining)
+{
+	unsigned int nb_extra_header_segs = 0;
+
+	while (rte_pktmbuf_data_len(*m_seg) < *header_len_remaining) {
+		*header_len_remaining -= rte_pktmbuf_data_len(*m_seg);
+		*m_seg = (*m_seg)->next;
+		++nb_extra_header_segs;
+	}
+
+	return nb_extra_header_segs;
+}
+
 static inline int
 sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
+		      unsigned int max_nb_header_segs,
+		      unsigned int tso_bounce_buffer_len,
 		      uint32_t tso_tcp_header_offset_limit,
 		      unsigned int max_fill_level,
 		      unsigned int nb_tso_descs,
 		      unsigned int nb_vlan_descs)
 {
 	unsigned int descs_required = m->nb_segs;
+	unsigned int tcph_off = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+				 m->outer_l2_len + m->outer_l3_len : 0) +
+				m->l2_len + m->l3_len;
+	unsigned int header_len = tcph_off + m->l4_len;
+	unsigned int header_len_remaining = header_len;
+	unsigned int nb_header_segs = 1;
+	struct rte_mbuf *m_seg = m;
 
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
 	int ret;
@@ -229,10 +253,29 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 	}
 #endif
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
-		unsigned int tcph_off = m->l2_len + m->l3_len;
-		unsigned int header_len;
+	if (max_nb_header_segs != 0) {
+		/* There is a limit on the number of header segments. */
+		nb_header_segs +=
+			sfc_dp_tx_pkt_extra_hdr_segs(&m_seg,
+						     &header_len_remaining);
+
+		if (unlikely(nb_header_segs > max_nb_header_segs)) {
+			/*
+			 * The number of header segments is too large.
+			 *
+			 * If TSO is requested and if the datapath supports
+			 * linearisation of TSO headers, allow the packet
+			 * to proceed with additional checks below.
+			 * Otherwise, throw an error.
+			 */
+			if ((m->ol_flags & PKT_TX_TCP_SEG) == 0 ||
+			    tso_bounce_buffer_len == 0)
+				return EINVAL;
+		}
+	}
+
+	if (m->ol_flags & PKT_TX_TCP_SEG) {
 		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
 		case 0:
 			break;
@@ -242,30 +285,38 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			if (!(m->ol_flags &
 			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
 				return EINVAL;
-
-			tcph_off += m->outer_l2_len + m->outer_l3_len;
 		}
 
-		header_len = tcph_off + m->l4_len;
-
 		if (unlikely(tcph_off > tso_tcp_header_offset_limit))
 			return EINVAL;
 
 		descs_required += nb_tso_descs;
 
 		/*
-		 * Extra descriptor that is required when a packet header
-		 * is separated from remaining content of the first segment.
+		 * If header segments were already counted above, nothing
+		 * is done here since the remaining length is smaller
+		 * than the current segment size.
+		 */
+		nb_header_segs +=
+			sfc_dp_tx_pkt_extra_hdr_segs(&m_seg,
+						     &header_len_remaining);
+
+		/*
+		 * Extra descriptor which is required when (a part of) payload
+		 * shares the same segment with (a part of) the header.
 		 */
-		if (rte_pktmbuf_data_len(m) > header_len) {
+		if (rte_pktmbuf_data_len(m_seg) > header_len_remaining)
 			descs_required++;
-		} else if (rte_pktmbuf_data_len(m) < header_len &&
-			   unlikely(header_len > SFC_TSOH_STD_LEN)) {
-			/*
-			 * Header linearization is required and
-			 * the header is too big to be linearized
-			 */
-			return EINVAL;
+
+		if (tso_bounce_buffer_len != 0) {
+			if (nb_header_segs > 1 &&
+			    unlikely(header_len > tso_bounce_buffer_len)) {
+				/*
+				 * Header linearization is required and
+				 * the header is too big to be linearized
+				 */
+				return EINVAL;
+			}
 		}
 	}
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 41b1554f12..0dba5c8eee 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -95,9 +95,11 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	for (i = 0; i < nb_pkts; i++) {
 		struct rte_mbuf *m = tx_pkts[i];
+		unsigned int max_nb_header_segs = 0;
 		int ret;
 
-		ret = sfc_dp_tx_prepare_pkt(m, 0, txq->max_fill_level, 0, 0);
+		ret = sfc_dp_tx_prepare_pkt(m, max_nb_header_segs, 0,
+					    0, txq->max_fill_level, 0, 0);
 		if (unlikely(ret != 0)) {
 			rte_errno = ret;
 			break;
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 6fb4ac88a8..961689dc34 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -352,7 +352,7 @@ sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			}
 		}
 #endif
-		ret = sfc_dp_tx_prepare_pkt(m,
+		ret = sfc_dp_tx_prepare_pkt(m, 0, SFC_TSOH_STD_LEN,
 				txq->tso_tcp_header_offset_limit,
 				txq->max_fill_level,
 				SFC_EF10_TSO_OPT_DESCS_NUM, 0);
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 4ea614816a..d50d49ca56 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -718,7 +718,7 @@ sfc_efx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * insertion offload is requested regardless the offload
 		 * requested/supported.
 		 */
-		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i],
+		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i], 0, SFC_TSOH_STD_LEN,
 				encp->enc_tx_tso_tcp_header_offset_limit,
 				txq->max_fill_level, EFX_TX_FATSOV2_OPT_NDESCS,
 				1);
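
---
For reference, here is a minimal standalone sketch (plain C, no DPDK
dependencies) of the header segment walk that
sfc_dp_tx_pkt_extra_hdr_segs() performs. The 'seg' struct and the
example lengths below are hypothetical stand-ins for rte_mbuf and a
real packet, not part of the patch; like the real helper, the loop
assumes the segment chain is long enough to cover the whole header.

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for rte_mbuf: segment data length and chain link. */
struct seg {
	unsigned int data_len;
	struct seg *next;
};

/*
 * Count the segments beyond the current one that the header spills
 * into. On return, *s points at the last header segment and
 * *header_len_remaining holds the header bytes left in that segment.
 * Mirrors the loop in sfc_dp_tx_pkt_extra_hdr_segs().
 */
static unsigned int
extra_hdr_segs(struct seg **s, unsigned int *header_len_remaining)
{
	unsigned int nb_extra = 0;

	while ((*s)->data_len < *header_len_remaining) {
		*header_len_remaining -= (*s)->data_len;
		*s = (*s)->next;	/* assumes the chain covers the header */
		++nb_extra;
	}

	return nb_extra;
}

int
main(void)
{
	/* Hypothetical 3-segment packet: 16 + 24 + 1000 bytes of data. */
	struct seg s3 = { 1000, NULL };
	struct seg s2 = { 24, &s3 };
	struct seg s1 = { 16, &s2 };
	struct seg *m_seg = &s1;
	unsigned int remaining = 54;	/* e.g. 14 (l2) + 20 (l3) + 20 (l4) */
	unsigned int nb_header_segs = 1 + extra_hdr_segs(&m_seg, &remaining);

	/* Prints 3 and 14: the header spans segments 1 and 2 entirely
	 * plus 14 bytes of segment 3, so with max_nb_header_segs == 2
	 * such a packet would be rejected unless TSO linearisation via
	 * the bounce buffer can take over. */
	printf("%u header segs, %u header bytes in the last one\n",
	       nb_header_segs, remaining);
	return 0;
}

After the walk, m_seg and the remaining length are exactly the state
sfc_dp_tx_prepare_pkt() uses next: if the last header segment holds
more data than the remaining header bytes, payload shares it and one
extra descriptor is budgeted; if the header spans several segments,
its total length is checked against the TSO bounce buffer size.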