From patchwork Wed Dec 6 14:48:09 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matan Azrad
X-Patchwork-Id: 31949
From: Matan Azrad
To: Adrien Mazarguil
Cc: dev@dpdk.org
Date: Wed, 6 Dec 2017 14:48:09 +0000
Message-Id: <1512571693-15338-5-git-send-email-matan@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1512571693-15338-1-git-send-email-matan@mellanox.com>
References: <1511871570-16826-1-git-send-email-matan@mellanox.com> <1512571693-15338-1-git-send-email-matan@mellanox.com>
Subject: [dpdk-dev] [PATCH v2 4/8] net/mlx4: optimize Tx multi-segment case
List-Id: DPDK patches and discussions

An mlx4 Tx block (TXBB) can hold up to 4 data segments, or a control
segment plus up to 3 data segments. The first data segment of every
Tx block except the first must check for Tx queue wraparound and must
use an IO memory barrier before writing the byte count.

The previous multi-segment code used a "for" loop to iterate over all
packet segments and separated the first Tx block data case with "if"
statements, so these checks were performed for every data segment.

Using a switch statement and unconditional branches instead of the
"for" loop avoids the unnecessary checks for each data segment and
hints the compiler to create an optimized jump table.

Optimize this case by using a switch statement and unconditional
branches.

Signed-off-by: Matan Azrad
Acked-by: Adrien Mazarguil
---
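Note: the control flow is easier to see outside the driver. The
stand-alone sketch below is an illustration only, not part of the
patch: "struct seg", fill_head_seg() and fill_tail_seg() are made-up
stand-ins for the WQE data segment writes done by
mlx4_fill_tx_data_seg(). The first TXBB of a WQE holds the control
segment plus at most three data segments, so processing starts at the
tail label; every following TXBB starts with a head segment whose
byte count write is deferred.

/*
 * Illustration only -- not driver code. Placeholder types and helpers,
 * only the goto/switch dispatch mirrors the patch.
 */
#include <stdio.h>

struct seg {
        int len;
        struct seg *next;
};

static void fill_head_seg(const struct seg *s) { printf("head seg, len %d\n", s->len); }
static void fill_tail_seg(const struct seg *s) { printf("tail seg, len %d\n", s->len); }

static void
fill_segs(const struct seg *s, int nb_segs)
{
        /* The first block starts with the control segment, so begin with tails. */
        goto tail_segs;
head_seg:
        /* First data segment of a new block: its byte count would be deferred here. */
        fill_head_seg(s);
        s = s->next;
        nb_segs--;
tail_segs:
        /* At most three tail segments fit in the current block. */
        switch (nb_segs) {
        default:
                fill_tail_seg(s);
                s = s->next;
                nb_segs--;
                /* fallthrough */
        case 2:
                fill_tail_seg(s);
                s = s->next;
                nb_segs--;
                /* fallthrough */
        case 1:
                fill_tail_seg(s);
                nb_segs--;
                if (nb_segs) {
                        s = s->next;
                        goto head_seg;
                }
                /* fallthrough */
        case 0:
                break;
        }
}

int main(void)
{
        struct seg s4 = { 40, NULL };
        struct seg s3 = { 30, &s4 };
        struct seg s2 = { 20, &s3 };
        struct seg s1 = { 10, &s2 };

        fill_segs(&s1, 4); /* prints three tail segments, then one head segment */
        return 0;
}

Because the switch has a small, dense set of cases, the compiler can
turn it into a jump table, so the per-segment TXBB-alignment test
performed by the removed "for" loop is no longer executed.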
 drivers/net/mlx4/mlx4_rxtx.c | 198 ++++++++++++++++++++++++++++---------------
 1 file changed, 128 insertions(+), 70 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
index 1d8240a..adf02c0 100644
--- a/drivers/net/mlx4/mlx4_rxtx.c
+++ b/drivers/net/mlx4/mlx4_rxtx.c
@@ -421,6 +421,39 @@ struct pv {
         return buf->pool;
 }
 
+/**
+ * Write Tx data segment to the SQ.
+ *
+ * @param dseg
+ *   Pointer to data segment in SQ.
+ * @param lkey
+ *   Memory region lkey.
+ * @param addr
+ *   Data address.
+ * @param byte_count
+ *   Big endian bytes count of the data to send.
+ */
+static inline void
+mlx4_fill_tx_data_seg(volatile struct mlx4_wqe_data_seg *dseg,
+                      uint32_t lkey, uintptr_t addr, rte_be32_t byte_count)
+{
+        dseg->addr = rte_cpu_to_be_64(addr);
+        dseg->lkey = rte_cpu_to_be_32(lkey);
+#if RTE_CACHE_LINE_SIZE < 64
+        /*
+         * Need a barrier here before writing the byte_count
+         * fields to make sure that all the data is visible
+         * before the byte_count field is set.
+         * Otherwise, if the segment begins a new cacheline,
+         * the HCA prefetcher could grab the 64-byte chunk and
+         * get a valid (!= 0xffffffff) byte count but stale
+         * data, and end up sending the wrong data.
+         */
+        rte_io_wmb();
+#endif /* RTE_CACHE_LINE_SIZE */
+        dseg->byte_count = byte_count;
+}
+
 static int
 mlx4_tx_burst_segs(struct rte_mbuf *buf, struct txq *txq,
                    volatile struct mlx4_wqe_ctrl_seg **pctrl)
@@ -432,15 +465,14 @@ struct pv {
         uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
         volatile struct mlx4_wqe_ctrl_seg *ctrl;
         volatile struct mlx4_wqe_data_seg *dseg;
-        struct rte_mbuf *sbuf;
+        struct rte_mbuf *sbuf = buf;
         uint32_t lkey;
-        uintptr_t addr;
-        uint32_t byte_count;
         int pv_counter = 0;
+        int nb_segs = buf->nb_segs;
 
         /* Calculate the needed work queue entry size for this packet. */
         wqe_real_size = sizeof(volatile struct mlx4_wqe_ctrl_seg) +
-                buf->nb_segs * sizeof(volatile struct mlx4_wqe_data_seg);
+                nb_segs * sizeof(volatile struct mlx4_wqe_data_seg);
         nr_txbbs = MLX4_SIZE_TO_TXBBS(wqe_real_size);
         /*
          * Check that there is room for this WQE in the send queue and that
@@ -457,67 +489,99 @@ struct pv {
         dseg = (volatile struct mlx4_wqe_data_seg *)
                         ((uintptr_t)ctrl + sizeof(struct mlx4_wqe_ctrl_seg));
         *pctrl = ctrl;
-        /* Fill the data segments with buffer information. */
-        for (sbuf = buf; sbuf != NULL; sbuf = sbuf->next, dseg++) {
-                addr = rte_pktmbuf_mtod(sbuf, uintptr_t);
-                rte_prefetch0((volatile void *)addr);
-                /* Memory region key (big endian) for this memory pool. */
+        /*
+         * Fill the data segments with buffer information.
+         * First WQE TXBB head segment is always control segment,
+         * so jump to tail TXBB data segments code for the first
+         * WQE data segments filling.
+         */
+        goto txbb_tail_segs;
+txbb_head_seg:
+        /* Memory region key (big endian) for this memory pool. */
+        lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(sbuf));
+        if (unlikely(lkey == (uint32_t)-1)) {
+                DEBUG("%p: unable to get MP <-> MR association",
+                      (void *)txq);
+                return -1;
+        }
+        /* Handle WQE wraparound. */
+        if (dseg >=
+                (volatile struct mlx4_wqe_data_seg *)sq->eob)
+                dseg = (volatile struct mlx4_wqe_data_seg *)
+                        sq->buf;
+        dseg->addr = rte_cpu_to_be_64(rte_pktmbuf_mtod(sbuf, uintptr_t));
+        dseg->lkey = rte_cpu_to_be_32(lkey);
+        /*
+         * This data segment starts at the beginning of a new
+         * TXBB, so we need to postpone its byte_count writing
+         * for later.
+         */
+        pv[pv_counter].dseg = dseg;
+        /*
+         * Zero length segment is treated as inline segment
+         * with zero data.
+         */
+        pv[pv_counter++].val = rte_cpu_to_be_32(sbuf->data_len ?
+                                                sbuf->data_len : 0x80000000);
+        sbuf = sbuf->next;
+        dseg++;
+        nb_segs--;
+txbb_tail_segs:
+        /* Jump to default if there are more than two segments remaining. */
+        switch (nb_segs) {
+        default:
                 lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(sbuf));
-                dseg->lkey = rte_cpu_to_be_32(lkey);
-                /* Calculate the needed work queue entry size for this packet */
-                if (unlikely(lkey == rte_cpu_to_be_32((uint32_t)-1))) {
-                        /* MR does not exist. */
+                if (unlikely(lkey == (uint32_t)-1)) {
                         DEBUG("%p: unable to get MP <-> MR association",
                               (void *)txq);
                         return -1;
                 }
-                if (likely(sbuf->data_len)) {
-                        byte_count = rte_cpu_to_be_32(sbuf->data_len);
-                } else {
-                        /*
-                         * Zero length segment is treated as inline segment
-                         * with zero data.
-                         */
-                        byte_count = RTE_BE32(0x80000000);
+                mlx4_fill_tx_data_seg(dseg, lkey,
+                                      rte_pktmbuf_mtod(sbuf, uintptr_t),
+                                      rte_cpu_to_be_32(sbuf->data_len ?
+                                                       sbuf->data_len :
+                                                       0x80000000));
+                sbuf = sbuf->next;
+                dseg++;
+                nb_segs--;
+                /* fallthrough */
+        case 2:
+                lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(sbuf));
+                if (unlikely(lkey == (uint32_t)-1)) {
+                        DEBUG("%p: unable to get MP <-> MR association",
+                              (void *)txq);
+                        return -1;
                 }
-                /*
-                 * If the data segment is not at the beginning of a
-                 * Tx basic block (TXBB) then write the byte count,
-                 * else postpone the writing to just before updating the
-                 * control segment.
-                 */
-                if ((uintptr_t)dseg & (uintptr_t)(MLX4_TXBB_SIZE - 1)) {
-                        dseg->addr = rte_cpu_to_be_64(addr);
-                        dseg->lkey = rte_cpu_to_be_32(lkey);
-#if RTE_CACHE_LINE_SIZE < 64
-                        /*
-                         * Need a barrier here before writing the byte_count
-                         * fields to make sure that all the data is visible
-                         * before the byte_count field is set.
-                         * Otherwise, if the segment begins a new cacheline,
-                         * the HCA prefetcher could grab the 64-byte chunk and
-                         * get a valid (!= 0xffffffff) byte count but stale
-                         * data, and end up sending the wrong data.
-                         */
-                        rte_io_wmb();
-#endif /* RTE_CACHE_LINE_SIZE */
-                        dseg->byte_count = byte_count;
-                } else {
-                        /*
-                         * This data segment starts at the beginning of a new
-                         * TXBB, so we need to postpone its byte_count writing
-                         * for later.
-                         */
-                        /* Handle WQE wraparound. */
-                        if (dseg >=
-                                (volatile struct mlx4_wqe_data_seg *)sq->eob)
-                                dseg = (volatile struct mlx4_wqe_data_seg *)
-                                        sq->buf;
-                        dseg->addr = rte_cpu_to_be_64(addr);
-                        dseg->lkey = rte_cpu_to_be_32(lkey);
-                        pv[pv_counter].dseg = dseg;
-                        pv[pv_counter++].val = byte_count;
+                mlx4_fill_tx_data_seg(dseg, lkey,
+                                      rte_pktmbuf_mtod(sbuf, uintptr_t),
+                                      rte_cpu_to_be_32(sbuf->data_len ?
+                                                       sbuf->data_len :
+                                                       0x80000000));
+                sbuf = sbuf->next;
+                dseg++;
+                nb_segs--;
+                /* fallthrough */
+        case 1:
+                lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(sbuf));
+                if (unlikely(lkey == (uint32_t)-1)) {
+                        DEBUG("%p: unable to get MP <-> MR association",
+                              (void *)txq);
+                        return -1;
+                }
+                mlx4_fill_tx_data_seg(dseg, lkey,
+                                      rte_pktmbuf_mtod(sbuf, uintptr_t),
+                                      rte_cpu_to_be_32(sbuf->data_len ?
+                                                       sbuf->data_len :
+                                                       0x80000000));
+                nb_segs--;
+                if (nb_segs) {
+                        sbuf = sbuf->next;
+                        dseg++;
+                        goto txbb_head_seg;
                 }
+                /* fallthrough */
+        case 0:
+                break;
         }
         /* Write the first DWORD of each TXBB save earlier. */
         if (pv_counter) {
@@ -583,7 +647,6 @@ struct pv {
                 } srcrb;
                 uint32_t head_idx = sq->head & sq->txbb_cnt_mask;
                 uint32_t lkey;
-                uintptr_t addr;
 
                 /* Clean up old buffer. */
                 if (likely(elt->buf != NULL)) {
@@ -618,24 +681,19 @@ struct pv {
                         dseg = (volatile struct mlx4_wqe_data_seg *)
                                         ((uintptr_t)ctrl +
                                         sizeof(struct mlx4_wqe_ctrl_seg));
-                        addr = rte_pktmbuf_mtod(buf, uintptr_t);
-                        rte_prefetch0((volatile void *)addr);
-                        dseg->addr = rte_cpu_to_be_64(addr);
-                        /* Memory region key (big endian). */
+
+                        ctrl->fence_size = (WQE_ONE_DATA_SEG_SIZE >> 4) & 0x3f;
                         lkey = mlx4_txq_mp2mr(txq, mlx4_txq_mb2mp(buf));
-                        dseg->lkey = rte_cpu_to_be_32(lkey);
-                        if (unlikely(dseg->lkey ==
-                                rte_cpu_to_be_32((uint32_t)-1))) {
+                        if (unlikely(lkey == (uint32_t)-1)) {
                                 /* MR does not exist. */
                                 DEBUG("%p: unable to get MP <-> MR association",
                                       (void *)txq);
                                 elt->buf = NULL;
                                 break;
                         }
-                        /* Never be TXBB aligned, no need compiler barrier. */
-                        dseg->byte_count = rte_cpu_to_be_32(buf->data_len);
-                        /* Fill the control parameters for this packet. */
-                        ctrl->fence_size = (WQE_ONE_DATA_SEG_SIZE >> 4) & 0x3f;
+                        mlx4_fill_tx_data_seg(dseg, lkey,
+                                              rte_pktmbuf_mtod(buf, uintptr_t),
+                                              rte_cpu_to_be_32(buf->data_len));
                         nr_txbbs = 1;
                 } else {
                         nr_txbbs = mlx4_tx_burst_segs(buf, txq, &ctrl);