From patchwork Thu Jul 12 09:31:01 2018
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 42962
X-Patchwork-Delegate: shahafs@mellanox.com
From: Nelio Laranjeiro
To: dev@dpdk.org, Yongseok Koh
Cc: Adrien Mazarguil
Date: Thu, 12 Jul 2018 11:31:01 +0200
Message-Id: <7fd6fb9da56709dc3fa03961f78ff00f98da0fdd.1531387413.git.nelio.laranjeiro@6wind.com>
X-Mailer: git-send-email 2.18.0
Subject: [dpdk-dev] [PATCH v4 15/21] net/mlx5: remove useless arguments in hrxq API

The rss_level argument is only needed to add a bit to hash_fields, which
the caller already provides through this API. The tunnel argument is only
needed to request that such queues compute checksums on the inner-most
headers, an offload that should always be enabled. Both arguments can
therefore be removed.
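For illustration, after this change a caller selects tunnel-level (inner)
RSS purely through hash_fields, e.g. by ORing in IBV_RX_HASH_INNER, instead
of passing the removed tunnel/rss_level arguments; mlx5_hrxq_new() derives
tunnel offloading from that bit itself. A minimal caller-side sketch, not
part of this patch (the helper name and the chosen hash fields are
hypothetical; the real callers are in mlx5_flow.c):

#include <infiniband/verbs.h>

#include "mlx5.h"
#include "mlx5_rxtx.h"

/* Hypothetical helper, for illustration only: inner RSS is requested by
 * setting IBV_RX_HASH_INNER in hash_fields; no extra arguments are needed.
 */
static struct mlx5_hrxq *
example_inner_rss_hrxq(struct rte_eth_dev *dev,
		       const uint8_t *rss_key, uint32_t rss_key_len,
		       const uint16_t *queues, uint32_t queues_n)
{
	uint64_t hash_fields = IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4 |
			       IBV_RX_HASH_INNER;
	struct mlx5_hrxq *hrxq;

	/* Reuse an existing hash Rx queue when one matches... */
	hrxq = mlx5_hrxq_get(dev, rss_key, rss_key_len, hash_fields,
			     queues, queues_n);
	/* ...otherwise create it; tunnel offloads are enabled internally
	 * whenever the IBV_RX_HASH_INNER bit is present.
	 */
	if (!hrxq)
		hrxq = mlx5_hrxq_new(dev, rss_key, rss_key_len, hash_fields,
				     queues, queues_n);
	return hrxq;
}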
Signed-off-by: Nelio Laranjeiro
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_flow.c |  4 ++--
 drivers/net/mlx5/mlx5_rxq.c  | 39 +++++++++---------------------------
 drivers/net/mlx5/mlx5_rxtx.h |  8 ++------
 3 files changed, 13 insertions(+), 38 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 758c611a6..730360b22 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1875,13 +1875,13 @@ mlx5_flow_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 				     MLX5_RSS_HASH_KEY_LEN,
 				     verbs->hash_fields,
 				     (*flow->queue),
-				     flow->rss.queue_num, 0, 0);
+				     flow->rss.queue_num);
 		if (!hrxq)
 			hrxq = mlx5_hrxq_new(dev, flow->key,
 					     MLX5_RSS_HASH_KEY_LEN,
 					     verbs->hash_fields,
 					     (*flow->queue),
-					     flow->rss.queue_num, 0, 0);
+					     flow->rss.queue_num);
 		if (!hrxq) {
 			rte_flow_error_set
 				(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d50b82c69..071740b6d 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1740,10 +1740,6 @@ mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev)
  *   first queue index will be taken for the indirection table.
  * @param queues_n
  *   Number of queues.
- * @param tunnel
- *   Tunnel type, implies tunnel offloading like inner checksum if available.
- * @param rss_level
- *   RSS hash on tunnel level.
  *
  * @return
  *   The Verbs object initialised, NULL otherwise and rte_errno is set.
@@ -1752,17 +1748,13 @@ struct mlx5_hrxq *
 mlx5_hrxq_new(struct rte_eth_dev *dev,
 	      const uint8_t *rss_key, uint32_t rss_key_len,
 	      uint64_t hash_fields,
-	      const uint16_t *queues, uint32_t queues_n,
-	      uint32_t tunnel, uint32_t rss_level)
+	      const uint16_t *queues, uint32_t queues_n)
 {
 	struct priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
 	struct mlx5_ind_table_ibv *ind_tbl;
 	struct ibv_qp *qp;
 	int err;
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	struct mlx5dv_qp_init_attr qp_init_attr = {0};
-#endif
 
 	queues_n = hash_fields ? queues_n : 1;
 	ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n);
@@ -1777,11 +1769,6 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
 		rss_key = rss_hash_default_key;
 	}
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	if (tunnel) {
-		qp_init_attr.comp_mask =
-				MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
-		qp_init_attr.create_flags = MLX5DV_QP_CREATE_TUNNEL_OFFLOADS;
-	}
 	qp = mlx5_glue->dv_create_qp
 		(priv->ctx,
 		 &(struct ibv_qp_init_attr_ex){
@@ -1797,14 +1784,17 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
 				.rx_hash_key = rss_key ?
 					       (void *)(uintptr_t)rss_key :
 					       rss_hash_default_key,
-				.rx_hash_fields_mask = hash_fields |
-					(tunnel && rss_level > 1 ?
-					(uint32_t)IBV_RX_HASH_INNER : 0),
+				.rx_hash_fields_mask = hash_fields,
 			},
 			.rwq_ind_tbl = ind_tbl->ind_table,
 			.pd = priv->pd,
 		 },
-		 &qp_init_attr);
+		 &(struct mlx5dv_qp_init_attr){
+			.comp_mask = (hash_fields & IBV_RX_HASH_INNER) ?
+				MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS :
+				0,
+			.create_flags = MLX5DV_QP_CREATE_TUNNEL_OFFLOADS,
+		 });
 #else
 	qp = mlx5_glue->create_qp_ex
 		(priv->ctx,
@@ -1838,8 +1828,6 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
 	hrxq->qp = qp;
 	hrxq->rss_key_len = rss_key_len;
 	hrxq->hash_fields = hash_fields;
-	hrxq->tunnel = tunnel;
-	hrxq->rss_level = rss_level;
 	memcpy(hrxq->rss_key, rss_key, rss_key_len);
 	rte_atomic32_inc(&hrxq->refcnt);
 	LIST_INSERT_HEAD(&priv->hrxqs, hrxq, next);
@@ -1865,10 +1853,6 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
  *   first queue index will be taken for the indirection table.
  * @param queues_n
  *   Number of queues.
- * @param tunnel
- *   Tunnel type, implies tunnel offloading like inner checksum if available.
- * @param rss_level
- *   RSS hash on tunnel level
  *
  * @return
  *   An hash Rx queue on success.
@@ -1877,8 +1861,7 @@ struct mlx5_hrxq *
 mlx5_hrxq_get(struct rte_eth_dev *dev,
 	      const uint8_t *rss_key, uint32_t rss_key_len,
 	      uint64_t hash_fields,
-	      const uint16_t *queues, uint32_t queues_n,
-	      uint32_t tunnel, uint32_t rss_level)
+	      const uint16_t *queues, uint32_t queues_n)
 {
 	struct priv *priv = dev->data->dev_private;
 	struct mlx5_hrxq *hrxq;
@@ -1893,10 +1876,6 @@ mlx5_hrxq_get(struct rte_eth_dev *dev,
 			continue;
 		if (hrxq->hash_fields != hash_fields)
 			continue;
-		if (hrxq->tunnel != tunnel)
-			continue;
-		if (hrxq->rss_level != rss_level)
-			continue;
 		ind_tbl = mlx5_ind_table_ibv_get(dev, queues, queues_n);
 		if (!ind_tbl)
 			continue;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 59e374d8d..808118e50 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -157,8 +157,6 @@ struct mlx5_hrxq {
 	struct mlx5_ind_table_ibv *ind_table; /* Indirection table. */
 	struct ibv_qp *qp; /* Verbs queue pair. */
 	uint64_t hash_fields; /* Verbs Hash fields. */
-	uint32_t tunnel; /* Tunnel type. */
-	uint32_t rss_level; /* RSS on tunnel level. */
 	uint32_t rss_key_len; /* Hash key length in bytes. */
 	uint8_t rss_key[]; /* Hash key. */
 };
@@ -271,13 +269,11 @@ void mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
-				const uint16_t *queues, uint32_t queues_n,
-				uint32_t tunnel, uint32_t rss_level);
+				const uint16_t *queues, uint32_t queues_n);
 struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
-				const uint16_t *queues, uint32_t queues_n,
-				uint32_t tunnel, uint32_t rss_level);
+				const uint16_t *queues, uint32_t queues_n);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hxrq);
 int mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);