From patchwork Wed Sep 7 07:02:22 2016
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Yaacov Hazan, Adrien Mazarguil
Date: Wed, 7 Sep 2016 09:02:22 +0200
Subject: [dpdk-dev] [PATCH 4/8] net/mlx5: refactor allocation of flow director queues

From: Yaacov Hazan

This is done to prepare support for drop queues, which are not related
to existing RX queues and need to be managed separately.
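A minimal caller sketch of the resulting API (illustration only, not
part of the patch: the wrapper function and its error handling are
hypothetical; priv_fdir_queue_create() and priv_fdir_queue_destroy()
are the helpers introduced below):

	/* Hypothetical round trip, assuming a valid "priv" and an
	 * initialized "rxq_ctrl"; both helpers live in mlx5_fdir.c. */
	static int
	fdir_queue_round_trip(struct priv *priv, struct rxq_ctrl *rxq_ctrl)
	{
		struct fdir_queue *fq;

		/* Reuse the RX queue's WQ, as priv_get_fdir_queue() does.
		 * A drop queue would instead pass a NULL WQ so that
		 * priv_fdir_queue_create() allocates its own CQ/WQ pair. */
		fq = priv_fdir_queue_create(priv, rxq_ctrl->wq,
					    rxq_ctrl->socket);
		if (fq == NULL)
			return ENOMEM; /* Failure already logged by helper. */
		/* ... filter flows targeting fq->qp are created here ... */
		/* Destruction also detaches filter flows still applied. */
		priv_fdir_queue_destroy(priv, fq);
		return 0;
	}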
Signed-off-by: Yaacov Hazan
Signed-off-by: Adrien Mazarguil
---
 drivers/net/mlx5/mlx5.h      |   1 +
 drivers/net/mlx5/mlx5_fdir.c | 229 ++++++++++++++++++++++++++++---------------
 drivers/net/mlx5/mlx5_rxq.c  |   2 +
 drivers/net/mlx5/mlx5_rxtx.h |   4 +-
 4 files changed, 156 insertions(+), 80 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3a86609..fa78623 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -257,6 +257,7 @@ void mlx5_dev_stop(struct rte_eth_dev *);
 
 /* mlx5_fdir.c */
 
+void priv_fdir_queue_destroy(struct priv *, struct fdir_queue *);
 int fdir_init_filters_list(struct priv *);
 void priv_fdir_delete_filters_list(struct priv *);
 void priv_fdir_disable(struct priv *);
diff --git a/drivers/net/mlx5/mlx5_fdir.c b/drivers/net/mlx5/mlx5_fdir.c
index 8207573..802669b 100644
--- a/drivers/net/mlx5/mlx5_fdir.c
+++ b/drivers/net/mlx5/mlx5_fdir.c
@@ -400,6 +400,145 @@ create_flow:
 }
 
 /**
+ * Destroy a flow director queue.
+ *
+ * @param fdir_queue
+ *   Flow director queue to be destroyed.
+ */
+void
+priv_fdir_queue_destroy(struct priv *priv, struct fdir_queue *fdir_queue)
+{
+	struct mlx5_fdir_filter *fdir_filter;
+
+	/* Disable filter flows still applying to this queue. */
+	LIST_FOREACH(fdir_filter, priv->fdir_filter_list, next) {
+		unsigned int idx = fdir_filter->queue;
+		struct rxq_ctrl *rxq_ctrl =
+			container_of((*priv->rxqs)[idx], struct rxq_ctrl, rxq);
+
+		assert(idx < priv->rxqs_n);
+		if (fdir_queue == rxq_ctrl->fdir_queue &&
+		    fdir_filter->flow != NULL) {
+			claim_zero(ibv_exp_destroy_flow(fdir_filter->flow));
+			fdir_filter->flow = NULL;
+		}
+	}
+	assert(fdir_queue->qp);
+	claim_zero(ibv_destroy_qp(fdir_queue->qp));
+	assert(fdir_queue->ind_table);
+	claim_zero(ibv_exp_destroy_rwq_ind_table(fdir_queue->ind_table));
+	if (fdir_queue->wq)
+		claim_zero(ibv_exp_destroy_wq(fdir_queue->wq));
+	if (fdir_queue->cq)
+		claim_zero(ibv_destroy_cq(fdir_queue->cq));
+#ifndef NDEBUG
+	memset(fdir_queue, 0x2a, sizeof(*fdir_queue));
+#endif
+	rte_free(fdir_queue);
+}
+
+/**
+ * Create a flow director queue.
+ *
+ * @param priv
+ *   Private structure.
+ * @param wq
+ *   Work queue to route matched packets to, NULL if one needs to
+ *   be created.
+ *
+ * @return
+ *   Related flow director queue on success, NULL otherwise.
+ */
+static struct fdir_queue *
+priv_fdir_queue_create(struct priv *priv, struct ibv_exp_wq *wq,
+		       unsigned int socket)
+{
+	struct fdir_queue *fdir_queue;
+
+	fdir_queue = rte_calloc_socket(__func__, 1, sizeof(*fdir_queue),
+				       0, socket);
+	if (!fdir_queue) {
+		ERROR("cannot allocate flow director queue");
+		return NULL;
+	}
+	assert(priv->pd);
+	assert(priv->ctx);
+	if (!wq) {
+		fdir_queue->cq = ibv_exp_create_cq(
+			priv->ctx, 1, NULL, NULL, 0,
+			&(struct ibv_exp_cq_init_attr){
+				.comp_mask = 0,
+			});
+		if (!fdir_queue->cq) {
+			ERROR("cannot create flow director CQ");
+			goto error;
+		}
+		fdir_queue->wq = ibv_exp_create_wq(
+			priv->ctx,
+			&(struct ibv_exp_wq_init_attr){
+				.wq_type = IBV_EXP_WQT_RQ,
+				.max_recv_wr = 1,
+				.max_recv_sge = 1,
+				.pd = priv->pd,
+				.cq = fdir_queue->cq,
+			});
+		if (!fdir_queue->wq) {
+			ERROR("cannot create flow director WQ");
+			goto error;
+		}
+		wq = fdir_queue->wq;
+	}
+	fdir_queue->ind_table = ibv_exp_create_rwq_ind_table(
+		priv->ctx,
+		&(struct ibv_exp_rwq_ind_table_init_attr){
+			.pd = priv->pd,
+			.log_ind_tbl_size = 0,
+			.ind_tbl = &wq,
+			.comp_mask = 0,
+		});
+	if (!fdir_queue->ind_table) {
+		ERROR("cannot create flow director indirection table");
+		goto error;
+	}
+	fdir_queue->qp = ibv_exp_create_qp(
+		priv->ctx,
+		&(struct ibv_exp_qp_init_attr){
+			.qp_type = IBV_QPT_RAW_PACKET,
+			.comp_mask =
+				IBV_EXP_QP_INIT_ATTR_PD |
+				IBV_EXP_QP_INIT_ATTR_PORT |
+				IBV_EXP_QP_INIT_ATTR_RX_HASH,
+			.pd = priv->pd,
+			.rx_hash_conf = &(struct ibv_exp_rx_hash_conf){
+				.rx_hash_function =
+					IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = rss_hash_default_key_len,
+				.rx_hash_key = rss_hash_default_key,
+				.rx_hash_fields_mask = 0,
+				.rwq_ind_tbl = fdir_queue->ind_table,
+			},
+			.port_num = priv->port,
+		});
+	if (!fdir_queue->qp) {
+		ERROR("cannot create flow director hash RX QP");
+		goto error;
+	}
+	return fdir_queue;
+error:
+	assert(fdir_queue);
+	assert(!fdir_queue->qp);
+	if (fdir_queue->ind_table)
+		claim_zero(ibv_exp_destroy_rwq_ind_table
+			   (fdir_queue->ind_table));
+	if (fdir_queue->wq)
+		claim_zero(ibv_exp_destroy_wq(fdir_queue->wq));
+	if (fdir_queue->cq)
+		claim_zero(ibv_destroy_cq(fdir_queue->cq));
+	rte_free(fdir_queue);
+	return NULL;
+}
+
+/**
  * Get flow director queue for a specific RX queue, create it in case
  * it does not exist.
  *
@@ -416,74 +555,15 @@ priv_get_fdir_queue(struct priv *priv, uint16_t idx)
 {
 	struct rxq_ctrl *rxq_ctrl =
 		container_of((*priv->rxqs)[idx], struct rxq_ctrl, rxq);
-	struct fdir_queue *fdir_queue = &rxq_ctrl->fdir_queue;
-	struct ibv_exp_rwq_ind_table *ind_table = NULL;
-	struct ibv_qp *qp = NULL;
-	struct ibv_exp_rwq_ind_table_init_attr ind_init_attr;
-	struct ibv_exp_rx_hash_conf hash_conf;
-	struct ibv_exp_qp_init_attr qp_init_attr;
-	int err = 0;
-
-	/* Return immediately if it has already been created. */
-	if (fdir_queue->qp != NULL)
-		return fdir_queue;
-
-	ind_init_attr = (struct ibv_exp_rwq_ind_table_init_attr){
-		.pd = priv->pd,
-		.log_ind_tbl_size = 0,
-		.ind_tbl = &rxq_ctrl->wq,
-		.comp_mask = 0,
-	};
+	struct fdir_queue *fdir_queue = rxq_ctrl->fdir_queue;
 
-	errno = 0;
-	ind_table = ibv_exp_create_rwq_ind_table(priv->ctx,
-						 &ind_init_attr);
-	if (ind_table == NULL) {
-		/* Not clear whether errno is set. */
-		err = (errno ? errno : EINVAL);
-		ERROR("RX indirection table creation failed with error %d: %s",
-		      err, strerror(err));
-		goto error;
-	}
-
-	/* Create fdir_queue qp. */
-	hash_conf = (struct ibv_exp_rx_hash_conf){
-		.rx_hash_function = IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
-		.rx_hash_key_len = rss_hash_default_key_len,
-		.rx_hash_key = rss_hash_default_key,
-		.rx_hash_fields_mask = 0,
-		.rwq_ind_tbl = ind_table,
-	};
-	qp_init_attr = (struct ibv_exp_qp_init_attr){
-		.max_inl_recv = 0, /* Currently not supported. */
-		.qp_type = IBV_QPT_RAW_PACKET,
-		.comp_mask = (IBV_EXP_QP_INIT_ATTR_PD |
-			      IBV_EXP_QP_INIT_ATTR_RX_HASH),
-		.pd = priv->pd,
-		.rx_hash_conf = &hash_conf,
-		.port_num = priv->port,
-	};
-
-	qp = ibv_exp_create_qp(priv->ctx, &qp_init_attr);
-	if (qp == NULL) {
-		err = (errno ? errno : EINVAL);
-		ERROR("hash RX QP creation failure: %s", strerror(err));
-		goto error;
+	assert(rxq_ctrl->wq);
+	if (fdir_queue == NULL) {
+		fdir_queue = priv_fdir_queue_create(priv, rxq_ctrl->wq,
+						    rxq_ctrl->socket);
+		rxq_ctrl->fdir_queue = fdir_queue;
 	}
-
-	fdir_queue->ind_table = ind_table;
-	fdir_queue->qp = qp;
-
 	return fdir_queue;
-
-error:
-	if (qp != NULL)
-		claim_zero(ibv_destroy_qp(qp));
-
-	if (ind_table != NULL)
-		claim_zero(ibv_exp_destroy_rwq_ind_table(ind_table));
-
-	return NULL;
 }
 
 /**
@@ -601,7 +681,6 @@ priv_fdir_disable(struct priv *priv)
 {
 	unsigned int i;
 	struct mlx5_fdir_filter *mlx5_fdir_filter;
-	struct fdir_queue *fdir_queue;
 
 	/* Run on every flow director filter and destroy flow handle. */
 	LIST_FOREACH(mlx5_fdir_filter, priv->fdir_filter_list, next) {
@@ -618,23 +697,15 @@ priv_fdir_disable(struct priv *priv)
 		}
 	}
 
-	/* Run on every RX queue to destroy related flow director QP and
-	 * indirection table. */
+	/* Destroy flow director context in each RX queue. */
 	for (i = 0; (i != priv->rxqs_n); i++) {
 		struct rxq_ctrl *rxq_ctrl =
 			container_of((*priv->rxqs)[i], struct rxq_ctrl, rxq);
 
-		fdir_queue = &rxq_ctrl->fdir_queue;
-		if (fdir_queue->qp != NULL) {
-			claim_zero(ibv_destroy_qp(fdir_queue->qp));
-			fdir_queue->qp = NULL;
-		}
-
-		if (fdir_queue->ind_table != NULL) {
-			claim_zero(ibv_exp_destroy_rwq_ind_table
-				   (fdir_queue->ind_table));
-			fdir_queue->ind_table = NULL;
-		}
+		if (!rxq_ctrl->fdir_queue)
+			continue;
+		priv_fdir_queue_destroy(priv, rxq_ctrl->fdir_queue);
+		rxq_ctrl->fdir_queue = NULL;
 	}
 }
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 29c137c..44889d1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -745,6 +745,8 @@ rxq_cleanup(struct rxq_ctrl *rxq_ctrl)
 
 	DEBUG("cleaning up %p", (void *)rxq_ctrl);
 	rxq_free_elts(rxq_ctrl);
+	if (rxq_ctrl->fdir_queue != NULL)
+		priv_fdir_queue_destroy(rxq_ctrl->priv, rxq_ctrl->fdir_queue);
 	if (rxq_ctrl->if_wq != NULL) {
 		assert(rxq_ctrl->priv != NULL);
 		assert(rxq_ctrl->priv->ctx != NULL);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f6e2cba..c8a93c0 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -87,6 +87,8 @@ struct mlx5_txq_stats {
 struct fdir_queue {
 	struct ibv_qp *qp; /* Associated RX QP. */
 	struct ibv_exp_rwq_ind_table *ind_table; /* Indirection table. */
+	struct ibv_exp_wq *wq; /* Work queue. */
+	struct ibv_cq *cq; /* Completion queue. */
 };
 
 struct priv;
@@ -128,7 +130,7 @@ struct rxq_ctrl {
 	struct ibv_cq *cq; /* Completion Queue. */
 	struct ibv_exp_wq *wq; /* Work Queue. */
 	struct ibv_exp_res_domain *rd; /* Resource Domain. */
-	struct fdir_queue fdir_queue; /* Flow director queue. */
+	struct fdir_queue *fdir_queue; /* Flow director queue. */
 	struct ibv_mr *mr; /* Memory Region (for mp). */
 	struct ibv_exp_wq_family *if_wq; /* WQ burst interface. */
 	struct ibv_exp_cq_family_v1 *if_cq; /* CQ interface. */