Patch Detail
GET: Show a patch.
PATCH: Update a patch.
PUT: Update a patch.
GET /api/patches/94545/?format=api
{ "id": 94545, "url": "
https://patches.dpdk.org/api/patches/94545/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/patch/20210619110154.10301-5-pbhagavatula@marvell.com/", "project": { "id": 1, "url": "https://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20210619110154.10301-5-pbhagavatula@marvell.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20210619110154.10301-5-pbhagavatula@marvell.com", "date": "2021-06-19T11:01:45", "name": "[v2,05/13] net/cnxk: enable TSO processing in vector Tx", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "4a66e8f5a175d202c2a9be5d8bfcf952775a8a5f", "submitter": { "id": 1183, "url": "https://patches.dpdk.org/api/people/1183/?format=api", "name": "Pavan Nikhilesh Bhagavatula", "email": "pbhagavatula@marvell.com" }, "delegate": null, "mbox": "https://patches.dpdk.org/project/dpdk/patch/20210619110154.10301-5-pbhagavatula@marvell.com/mbox/", "series": [ { "id": 17405, "url": "https://patches.dpdk.org/api/series/17405/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=17405", "date": "2021-06-19T11:01:41", "name": "[v2,01/13] net/cnxk: add multi seg Rx vector routine", "version": 2, "mbox": "https://patches.dpdk.org/series/17405/mbox/" } ], "comments": "https://patches.dpdk.org/api/patches/94545/comments/", "check": "success", "checks": "https://patches.dpdk.org/api/patches/94545/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from mails.dpdk.org (mails.dpdk.org 
[217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 8BE5FA0A0C;\n\tSat, 19 Jun 2021 13:02:50 +0200 (CEST)", "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 52BE141150;\n\tSat, 19 Jun 2021 13:02:26 +0200 (CEST)", "from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com\n [67.231.156.173])\n by mails.dpdk.org (Postfix) with ESMTP id D8FF441109\n for <dev@dpdk.org>; Sat, 19 Jun 2021 13:02:24 +0200 (CEST)", "from pps.filterd (m0045851.ppops.net [127.0.0.1])\n by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id\n 15JB2K5s017014 for <dev@dpdk.org>; Sat, 19 Jun 2021 04:02:24 -0700", "from dc5-exch01.marvell.com ([199.233.59.181])\n by mx0b-0016f401.pphosted.com with ESMTP id 398tu0v61c-1\n (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT)\n for <dev@dpdk.org>; Sat, 19 Jun 2021 04:02:24 -0700", "from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com\n (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.18;\n Sat, 19 Jun 2021 04:02:22 -0700", "from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com\n (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend\n Transport; Sat, 19 Jun 2021 04:02:21 -0700", "from BG-LT7430.marvell.com (BG-LT7430.marvell.com [10.28.177.176])\n by maili.marvell.com (Postfix) with ESMTP id 8512F5B6965;\n Sat, 19 Jun 2021 04:02:19 -0700 (PDT)" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;\n h=from : to : cc :\n subject : date : message-id : in-reply-to : references : mime-version :\n content-transfer-encoding : content-type; s=pfpt0220;\n bh=GXk32ln9wkaHXCSM9xZdXDOB/cEDGT6EUcVVy56oZdo=;\n b=SPexT6no4d7Y1pidea8DyXJba+mYAV8pjNMM+UvlHJi78o3AycndiL+gEl4agx8hwQwb\n thx+pdZ7Qp0CBCKKi9yksiD9iCeOAu/Wk2U/XBiFxmBjCkNgCfXXjOUS/oUTjBZPfX1Y\n BIMXhgUKSZE3o1OmUwZGlZrHZQmt4pvyRAesIfCmkiA74v9QxsK/Pr/qT5x4zpL6btKK\n SbUr7LzWJwnIcXcduVyXy6dQ9V5ynkLi+K0rOxO9Kp3nbVUe39ksW+7Xfqy6k+mEa3S2\n 
/86jaOwh0owKxg2MIsXb4QZbqqlhPLbKxYFTXEzDtWNtm+r0Q55U7x5ntwrYskPFBGuc HQ==", "From": "<pbhagavatula@marvell.com>", "To": "<jerinj@marvell.com>, Nithin Dabilpuram <ndabilpuram@marvell.com>, \"Kiran\n Kumar K\" <kirankumark@marvell.com>, Sunil Kumar Kori <skori@marvell.com>,\n Satha Rao <skoteshwar@marvell.com>", "CC": "<dev@dpdk.org>, Pavan Nikhilesh <pbhagavatula@marvell.com>", "Date": "Sat, 19 Jun 2021 16:31:45 +0530", "Message-ID": "<20210619110154.10301-5-pbhagavatula@marvell.com>", "X-Mailer": "git-send-email 2.17.1", "In-Reply-To": "<20210619110154.10301-1-pbhagavatula@marvell.com>", "References": "<20210524122303.1116-1-pbhagavatula@marvell.com>\n <20210619110154.10301-1-pbhagavatula@marvell.com>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "Content-Type": "text/plain", "X-Proofpoint-GUID": "PIDguo7_P01jgJj8bNv9-wxwaYnLmILu", "X-Proofpoint-ORIG-GUID": "PIDguo7_P01jgJj8bNv9-wxwaYnLmILu", "X-Proofpoint-Virus-Version": "vendor=fsecure engine=2.50.10434:6.0.391, 18.0.790\n definitions=2021-06-19_09:2021-06-18,\n 2021-06-19 signatures=0", "Subject": "[dpdk-dev] [PATCH v2 05/13] net/cnxk: enable TSO processing in\n vector Tx", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.29", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "From: Pavan Nikhilesh <pbhagavatula@marvell.com>\n\nEnable TSO offload in vector Tx burst function.\n\nSigned-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>\n---\n drivers/net/cnxk/cn10k_tx.c | 2 +-\n 
drivers/net/cnxk/cn10k_tx.h | 97 +++++++++++++++++++++++++++++++++\n drivers/net/cnxk/cn10k_tx_vec.c | 5 +-\n drivers/net/cnxk/cn9k_tx.c | 2 +-\n drivers/net/cnxk/cn9k_tx.h | 94 ++++++++++++++++++++++++++++++++\n drivers/net/cnxk/cn9k_tx_vec.c | 5 +-\n 6 files changed, 199 insertions(+), 6 deletions(-)", "diff": "diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c\nindex c4c3e65704..d06879163f 100644\n--- a/drivers/net/cnxk/cn10k_tx.c\n+++ b/drivers/net/cnxk/cn10k_tx.c\n@@ -67,7 +67,7 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)\n #undef T\n \t};\n \n-\tif (dev->scalar_ena || (dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F))\n+\tif (dev->scalar_ena)\n \t\tpick_tx_func(eth_dev, nix_eth_tx_burst);\n \telse\n \t\tpick_tx_func(eth_dev, nix_eth_tx_vec_burst);\ndiff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h\nindex d5812c5c28..cea7c6cd34 100644\n--- a/drivers/net/cnxk/cn10k_tx.h\n+++ b/drivers/net/cnxk/cn10k_tx.h\n@@ -689,6 +689,46 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,\n \n #if defined(RTE_ARCH_ARM64)\n \n+static __rte_always_inline void\n+cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,\n+\t\t union nix_send_ext_w0_u *w0, uint64_t ol_flags,\n+\t\t const uint64_t flags, const uint64_t lso_tun_fmt)\n+{\n+\tuint16_t lso_sb;\n+\tuint64_t mask;\n+\n+\tif (!(ol_flags & PKT_TX_TCP_SEG))\n+\t\treturn;\n+\n+\tmask = -(!w1->il3type);\n+\tlso_sb = (mask & w1->ol4ptr) + (~mask & w1->il4ptr) + m->l4_len;\n+\n+\tw0->u |= BIT(14);\n+\tw0->lso_sb = lso_sb;\n+\tw0->lso_mps = m->tso_segsz;\n+\tw0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);\n+\tw1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;\n+\n+\t/* Handle tunnel tso */\n+\tif ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&\n+\t (ol_flags & PKT_TX_TUNNEL_MASK)) {\n+\t\tconst uint8_t is_udp_tun =\n+\t\t\t(CNXK_NIX_UDP_TUN_BITMASK >>\n+\t\t\t ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &\n+\t\t\t0x1;\n+\t\tuint8_t shift = 
is_udp_tun ? 32 : 0;\n+\n+\t\tshift += (!!(ol_flags & PKT_TX_OUTER_IPV6) << 4);\n+\t\tshift += (!!(ol_flags & PKT_TX_IPV6) << 3);\n+\n+\t\tw1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;\n+\t\tw1->ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;\n+\t\t/* Update format for UDP tunneled packet */\n+\n+\t\tw0->lso_format = (lso_tun_fmt >> shift);\n+\t}\n+}\n+\n #define NIX_DESCS_PER_LOOP 4\n static __rte_always_inline uint16_t\n cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n@@ -723,6 +763,11 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \n \t/* Reduce the cached count */\n \ttxq->fc_cache_pkts -= pkts;\n+\t/* Perform header writes before barrier for TSO */\n+\tif (flags & NIX_TX_OFFLOAD_TSO_F) {\n+\t\tfor (i = 0; i < pkts; i++)\n+\t\t\tcn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);\n+\t}\n \n \tsenddesc01_w0 = vld1q_dup_u64(&txq->send_hdr_w0);\n \tsenddesc23_w0 = senddesc01_w0;\n@@ -781,6 +826,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\tsendmem23_w1 = sendmem01_w1;\n \t\t}\n \n+\t\tif (flags & NIX_TX_OFFLOAD_TSO_F) {\n+\t\t\t/* Clear the LSO enable bit. */\n+\t\t\tsendext01_w0 = vbicq_u64(sendext01_w0,\n+\t\t\t\t\t\t vdupq_n_u64(BIT_ULL(14)));\n+\t\t\tsendext23_w0 = sendext01_w0;\n+\t\t}\n+\n \t\t/* Move mbufs to iova */\n \t\tmbuf0 = (uint64_t *)tx_pkts[0];\n \t\tmbuf1 = (uint64_t *)tx_pkts[1];\n@@ -1430,6 +1482,51 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\tcmd3[3] = vzip2q_u64(sendmem23_w0, sendmem23_w1);\n \t\t}\n \n+\t\tif (flags & NIX_TX_OFFLOAD_TSO_F) {\n+\t\t\tconst uint64_t lso_fmt = txq->lso_tun_fmt;\n+\t\t\tuint64_t sx_w0[NIX_DESCS_PER_LOOP];\n+\t\t\tuint64_t sd_w1[NIX_DESCS_PER_LOOP];\n+\n+\t\t\t/* Extract SD W1 as we need to set L4 types. */\n+\t\t\tvst1q_u64(sd_w1, senddesc01_w1);\n+\t\t\tvst1q_u64(sd_w1 + 2, senddesc23_w1);\n+\n+\t\t\t/* Extract SX W0 as we need to set LSO fields. 
*/\n+\t\t\tvst1q_u64(sx_w0, sendext01_w0);\n+\t\t\tvst1q_u64(sx_w0 + 2, sendext23_w0);\n+\n+\t\t\t/* Extract ol_flags. */\n+\t\t\txtmp128 = vzip1q_u64(len_olflags0, len_olflags1);\n+\t\t\tytmp128 = vzip1q_u64(len_olflags2, len_olflags3);\n+\n+\t\t\t/* Prepare individual mbufs. */\n+\t\t\tcn10k_nix_prepare_tso(tx_pkts[0],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[0],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[0],\n+\t\t\t\tvgetq_lane_u64(xtmp128, 0), flags, lso_fmt);\n+\n+\t\t\tcn10k_nix_prepare_tso(tx_pkts[1],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[1],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[1],\n+\t\t\t\tvgetq_lane_u64(xtmp128, 1), flags, lso_fmt);\n+\n+\t\t\tcn10k_nix_prepare_tso(tx_pkts[2],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[2],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[2],\n+\t\t\t\tvgetq_lane_u64(ytmp128, 0), flags, lso_fmt);\n+\n+\t\t\tcn10k_nix_prepare_tso(tx_pkts[3],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[3],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[3],\n+\t\t\t\tvgetq_lane_u64(ytmp128, 1), flags, lso_fmt);\n+\n+\t\t\tsenddesc01_w1 = vld1q_u64(sd_w1);\n+\t\t\tsenddesc23_w1 = vld1q_u64(sd_w1 + 2);\n+\n+\t\t\tsendext01_w0 = vld1q_u64(sx_w0);\n+\t\t\tsendext23_w0 = vld1q_u64(sx_w0 + 2);\n+\t\t}\n+\n \t\tif (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {\n \t\t\t/* Set don't free bit if reference count > 1 */\n \t\t\txmask01 = vdupq_n_u64(0);\ndiff --git a/drivers/net/cnxk/cn10k_tx_vec.c b/drivers/net/cnxk/cn10k_tx_vec.c\nindex 0b4a4c7bae..34e3737501 100644\n--- a/drivers/net/cnxk/cn10k_tx_vec.c\n+++ b/drivers/net/cnxk/cn10k_tx_vec.c\n@@ -13,8 +13,9 @@\n \t{ \\\n \t\tuint64_t cmd[sz]; \\\n \t\t\t\t\t\t\t\t\t \\\n-\t\t/* TSO is not supported by vec */ \\\n-\t\tif ((flags) & NIX_TX_OFFLOAD_TSO_F)\t\t\t \\\n+\t\t/* For TSO inner checksum is a must */ \\\n+\t\tif (((flags) & NIX_TX_OFFLOAD_TSO_F) &&\t\t\t \\\n+\t\t !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F))\t\t \\\n \t\t\treturn 0; \\\n \t\treturn cn10k_nix_xmit_pkts_vector(tx_queue, tx_pkts, 
pkts, cmd,\\\n \t\t\t\t\t\t (flags)); \\\ndiff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c\nindex c32681ed44..735e21cc60 100644\n--- a/drivers/net/cnxk/cn9k_tx.c\n+++ b/drivers/net/cnxk/cn9k_tx.c\n@@ -66,7 +66,7 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)\n #undef T\n \t};\n \n-\tif (dev->scalar_ena || (dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F))\n+\tif (dev->scalar_ena)\n \t\tpick_tx_func(eth_dev, nix_eth_tx_burst);\n \telse\n \t\tpick_tx_func(eth_dev, nix_eth_tx_vec_burst);\ndiff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h\nindex bfb34abb23..2adff45705 100644\n--- a/drivers/net/cnxk/cn9k_tx.h\n+++ b/drivers/net/cnxk/cn9k_tx.h\n@@ -545,6 +545,43 @@ cn9k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,\n \n #if defined(RTE_ARCH_ARM64)\n \n+static __rte_always_inline void\n+cn9k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,\n+\t\t union nix_send_ext_w0_u *w0, uint64_t ol_flags,\n+\t\t uint64_t flags)\n+{\n+\tuint16_t lso_sb;\n+\tuint64_t mask;\n+\n+\tif (!(ol_flags & PKT_TX_TCP_SEG))\n+\t\treturn;\n+\n+\tmask = -(!w1->il3type);\n+\tlso_sb = (mask & w1->ol4ptr) + (~mask & w1->il4ptr) + m->l4_len;\n+\n+\tw0->u |= BIT(14);\n+\tw0->lso_sb = lso_sb;\n+\tw0->lso_mps = m->tso_segsz;\n+\tw0->lso_format = NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & PKT_TX_IPV6);\n+\tw1->ol4type = NIX_SENDL4TYPE_TCP_CKSUM;\n+\n+\t/* Handle tunnel tso */\n+\tif ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&\n+\t (ol_flags & PKT_TX_TUNNEL_MASK)) {\n+\t\tconst uint8_t is_udp_tun =\n+\t\t\t(CNXK_NIX_UDP_TUN_BITMASK >>\n+\t\t\t ((ol_flags & PKT_TX_TUNNEL_MASK) >> 45)) &\n+\t\t\t0x1;\n+\n+\t\tw1->il4type = NIX_SENDL4TYPE_TCP_CKSUM;\n+\t\tw1->ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;\n+\t\t/* Update format for UDP tunneled packet */\n+\t\tw0->lso_format += is_udp_tun ? 
2 : 6;\n+\n+\t\tw0->lso_format += !!(ol_flags & PKT_TX_OUTER_IPV6) << 1;\n+\t}\n+}\n+\n #define NIX_DESCS_PER_LOOP 4\n static __rte_always_inline uint16_t\n cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n@@ -580,6 +617,12 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t/* Reduce the cached count */\n \ttxq->fc_cache_pkts -= pkts;\n \n+\t/* Perform header writes before barrier for TSO */\n+\tif (flags & NIX_TX_OFFLOAD_TSO_F) {\n+\t\tfor (i = 0; i < pkts; i++)\n+\t\t\tcn9k_nix_xmit_prepare_tso(tx_pkts[i], flags);\n+\t}\n+\n \t/* Lets commit any changes in the packet here as no further changes\n \t * to the packet will be done unless no fast free is enabled.\n \t */\n@@ -637,6 +680,13 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\tsendmem23_w1 = sendmem01_w1;\n \t\t}\n \n+\t\tif (flags & NIX_TX_OFFLOAD_TSO_F) {\n+\t\t\t/* Clear the LSO enable bit. */\n+\t\t\tsendext01_w0 = vbicq_u64(sendext01_w0,\n+\t\t\t\t\t\t vdupq_n_u64(BIT_ULL(14)));\n+\t\t\tsendext23_w0 = sendext01_w0;\n+\t\t}\n+\n \t\t/* Move mbufs to iova */\n \t\tmbuf0 = (uint64_t *)tx_pkts[0];\n \t\tmbuf1 = (uint64_t *)tx_pkts[1];\n@@ -1286,6 +1336,50 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\tcmd3[3] = vzip2q_u64(sendmem23_w0, sendmem23_w1);\n \t\t}\n \n+\t\tif (flags & NIX_TX_OFFLOAD_TSO_F) {\n+\t\t\tuint64_t sx_w0[NIX_DESCS_PER_LOOP];\n+\t\t\tuint64_t sd_w1[NIX_DESCS_PER_LOOP];\n+\n+\t\t\t/* Extract SD W1 as we need to set L4 types. */\n+\t\t\tvst1q_u64(sd_w1, senddesc01_w1);\n+\t\t\tvst1q_u64(sd_w1 + 2, senddesc23_w1);\n+\n+\t\t\t/* Extract SX W0 as we need to set LSO fields. */\n+\t\t\tvst1q_u64(sx_w0, sendext01_w0);\n+\t\t\tvst1q_u64(sx_w0 + 2, sendext23_w0);\n+\n+\t\t\t/* Extract ol_flags. */\n+\t\t\txtmp128 = vzip1q_u64(len_olflags0, len_olflags1);\n+\t\t\tytmp128 = vzip1q_u64(len_olflags2, len_olflags3);\n+\n+\t\t\t/* Prepare individual mbufs. 
*/\n+\t\t\tcn9k_nix_prepare_tso(tx_pkts[0],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[0],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[0],\n+\t\t\t\tvgetq_lane_u64(xtmp128, 0), flags);\n+\n+\t\t\tcn9k_nix_prepare_tso(tx_pkts[1],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[1],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[1],\n+\t\t\t\tvgetq_lane_u64(xtmp128, 1), flags);\n+\n+\t\t\tcn9k_nix_prepare_tso(tx_pkts[2],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[2],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[2],\n+\t\t\t\tvgetq_lane_u64(ytmp128, 0), flags);\n+\n+\t\t\tcn9k_nix_prepare_tso(tx_pkts[3],\n+\t\t\t\t(union nix_send_hdr_w1_u *)&sd_w1[3],\n+\t\t\t\t(union nix_send_ext_w0_u *)&sx_w0[3],\n+\t\t\t\tvgetq_lane_u64(ytmp128, 1), flags);\n+\n+\t\t\tsenddesc01_w1 = vld1q_u64(sd_w1);\n+\t\t\tsenddesc23_w1 = vld1q_u64(sd_w1 + 2);\n+\n+\t\t\tsendext01_w0 = vld1q_u64(sx_w0);\n+\t\t\tsendext23_w0 = vld1q_u64(sx_w0 + 2);\n+\t\t}\n+\n \t\tif (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {\n \t\t\t/* Set don't free bit if reference count > 1 */\n \t\t\txmask01 = vdupq_n_u64(0);\ndiff --git a/drivers/net/cnxk/cn9k_tx_vec.c b/drivers/net/cnxk/cn9k_tx_vec.c\nindex 9ade66db2b..56a3e2514a 100644\n--- a/drivers/net/cnxk/cn9k_tx_vec.c\n+++ b/drivers/net/cnxk/cn9k_tx_vec.c\n@@ -13,8 +13,9 @@\n \t{ \\\n \t\tuint64_t cmd[sz]; \\\n \t\t\t\t\t\t\t\t\t \\\n-\t\t/* TSO is not supported by vec */ \\\n-\t\tif ((flags) & NIX_TX_OFFLOAD_TSO_F)\t\t\t \\\n+\t\t/* For TSO inner checksum is a must */ \\\n+\t\tif (((flags) & NIX_TX_OFFLOAD_TSO_F) && \\\n+\t\t !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \\\n \t\t\treturn 0; \\\n \t\treturn cn9k_nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, cmd, \\\n \t\t\t\t\t\t (flags));\t\t \\\n", "prefixes": [ "v2", "05/13" ] }