From patchwork Tue Apr 30 19:04:26 2019
X-Patchwork-Submitter: Yongseok Koh <yskoh@mellanox.com>
X-Patchwork-Id: 53173
X-Patchwork-Delegate: shahafs@mellanox.com
From: Yongseok Koh <yskoh@mellanox.com>
To: shahafs@mellanox.com
Cc: dev@dpdk.org
Date: Tue, 30 Apr 2019 12:04:26 -0700
Message-Id: <20190430190426.44018-1-yskoh@mellanox.com>
X-Mailer: git-send-email 2.11.0
Subject: [dpdk-dev] [PATCH] net/mlx5: check Tx queue size overflow

If Tx packet inlining is enabled, the rdma-core library should allocate
a Tx WQ large enough to support it. It is better for the PMD to
calculate the WQ size from the configured parameters and to return an
error with an appropriate message if that size exceeds the device
capability.

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
 drivers/net/mlx5/mlx5_txq.c | 35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 4d55fd413c..dfc9afbe75 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -679,6 +679,27 @@ mlx5_txq_ibv_verify(struct rte_eth_dev *dev)
 }
 
 /**
+ * Calculate the total number of WQEBBs for a Tx queue.
+ *
+ * Simplified version of calc_sq_size() in rdma-core.
+ *
+ * @param txq_ctrl
+ *   Pointer to Tx queue control structure.
+ *
+ * @return
+ *   The number of WQEBBs.
+ */
+static int
+txq_calc_wqebb_cnt(struct mlx5_txq_ctrl *txq_ctrl)
+{
+	unsigned int wqe_size;
+	const unsigned int desc = 1 << txq_ctrl->txq.elts_n;
+
+	wqe_size = MLX5_WQE_SIZE + txq_ctrl->max_inline_data;
+	return rte_align32pow2(wqe_size * desc) / MLX5_WQE_SIZE;
+}
+
+/**
  * Set Tx queue parameters from device configuration.
  *
  * @param txq_ctrl
@@ -824,10 +845,16 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	txq_set_params(tmpl);
-	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
-	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
+	if (txq_calc_wqebb_cnt(tmpl) >
+	    priv->sh->device_attr.orig_attr.max_qp_wr) {
+		DRV_LOG(DEBUG,
+			"port %u Tx WQEBB count exceeds the limit (%d),"
+			" try smaller queue size again",
+			dev->data->port_id,
+			priv->sh->device_attr.orig_attr.max_qp_wr);
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	tmpl->txq.elts =
 		(struct rte_mbuf *(*)[1 << tmpl->txq.elts_n])(tmpl + 1);
 	rte_atomic32_inc(&tmpl->refcnt);
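
For reference, a minimal standalone sketch of the WQEBB arithmetic the
new txq_calc_wqebb_cnt() helper performs. MLX5_WQE_SIZE (64 bytes, one
WQE basic block) matches the PMD definition; the queue depth, inline
size, and max_qp_wr values below are made-up examples rather than
values read from a real device, and align32pow2() stands in for
rte_align32pow2():

/*
 * Standalone sketch of the WQEBB math in txq_calc_wqebb_cnt().
 * desc, max_inline_data and max_qp_wr are example values only.
 */
#include <stdio.h>

#define MLX5_WQE_SIZE 64u	/* one WQE basic block (WQEBB) */

/* Round up to the next power of two, like rte_align32pow2(). */
static unsigned int
align32pow2(unsigned int v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v + 1;
}

int
main(void)
{
	const unsigned int desc = 4096;		  /* 1 << txq.elts_n */
	const unsigned int max_inline_data = 512; /* inline bytes per WQE */
	const unsigned int max_qp_wr = 16384;	  /* device_attr limit */
	unsigned int wqe_size = MLX5_WQE_SIZE + max_inline_data;
	unsigned int wqebb_cnt = align32pow2(wqe_size * desc) / MLX5_WQE_SIZE;

	/* 576 * 4096 = 2359296, rounded up to 4194304 = 65536 WQEBBs */
	printf("WQEBB count %u, device limit %u\n", wqebb_cnt, max_qp_wr);
	if (wqebb_cnt > max_qp_wr)
		printf("queue size overflows device capability\n");
	return 0;
}

The power-of-two rounding mirrors calc_sq_size() in rdma-core, so the
per-packet inline overhead can push the WQEBB count well past
max_qp_wr even when desc itself is modest; that is the overflow the
new check in mlx5_txq_new() reports before queue creation fails deeper
in rdma-core.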