From patchwork Wed Jul 11 07:22:35 2018
X-Patchwork-Id: 42794
From: Nelio Laranjeiro
To: dev@dpdk.org, Yongseok Koh
Cc: Adrien Mazarguil
Date: Wed, 11 Jul 2018 09:22:35 +0200
Subject: [dpdk-dev] [PATCH v3 02/21] net/mlx5: handle drop queues as regular queues

Drop queues only exist because of the Verbs API: whether a flow's fate is to
drop is already carried by the flow itself. Thanks to this, drop queues can
be fully mapped onto regular Rx queue objects.
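To illustrate the idea (a sketch only, not part of the diff below;
fate_is_drop is a hypothetical flag standing in for the fate information
the flow already carries):

    /* Both fates resolve to the same object type, struct mlx5_hrxq: a
     * drop flow targets the shared, refcounted drop hash Rx queue added
     * by this patch.  Nothing ever polls its WQ, so matched packets are
     * discarded by the device. */
    static struct mlx5_hrxq *
    flow_fate_hrxq(struct rte_eth_dev *dev, int fate_is_drop)
    {
            if (fate_is_drop)
                    return mlx5_hrxq_drop_new(dev);
            /* Regular fate: resolved through the existing
             * mlx5_hrxq_get()/mlx5_hrxq_new() path. */
            return NULL;
    }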
Signed-off-by: Nelio Laranjeiro
---
 drivers/net/mlx5/mlx5.c      |  24 ++--
 drivers/net/mlx5/mlx5.h      |  14 ++-
 drivers/net/mlx5/mlx5_flow.c |  94 +++++++-------
 drivers/net/mlx5/mlx5_rxq.c  | 229 +++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxtx.h |   6 +
 5 files changed, 305 insertions(+), 62 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index df7f39844..e9780ac8f 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -261,7 +261,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		priv->txqs_n = 0;
 		priv->txqs = NULL;
 	}
-	mlx5_flow_delete_drop_queue(dev);
 	mlx5_mprq_free_mp(dev);
 	mlx5_mr_release(dev);
 	if (priv->pd != NULL) {
@@ -1139,22 +1138,15 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	mlx5_link_update(eth_dev, 0);
 	/* Store device configuration on private structure. */
 	priv->config = config;
-	/* Create drop queue. */
-	err = mlx5_flow_create_drop_queue(eth_dev);
-	if (err) {
-		DRV_LOG(ERR, "port %u drop queue allocation failed: %s",
-			eth_dev->data->port_id, strerror(rte_errno));
-		err = rte_errno;
-		goto error;
-	}
 	/* Supported Verbs flow priority number detection. */
-	if (verb_priorities == 0)
-		verb_priorities = mlx5_get_max_verbs_prio(eth_dev);
-	if (verb_priorities < MLX5_VERBS_FLOW_PRIO_8) {
-		DRV_LOG(ERR, "port %u wrong Verbs flow priorities: %u",
-			eth_dev->data->port_id, verb_priorities);
-		err = ENOTSUP;
-		goto error;
+	if (verb_priorities == 0) {
+		err = mlx5_verbs_max_prio(eth_dev);
+		if (err < 0) {
+			DRV_LOG(ERR, "port %u wrong Verbs flow priorities",
+				eth_dev->data->port_id);
+			goto error;
+		}
+		verb_priorities = err;
 	}
 	priv->config.max_verbs_prio = verb_priorities;
 	/*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index cc01310e0..227429848 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -139,9 +139,6 @@ enum mlx5_verbs_alloc_type {
 	MLX5_VERBS_ALLOC_TYPE_RX_QUEUE,
 };
 
-/* 8 Verbs priorities. */
-#define MLX5_VERBS_FLOW_PRIO_8 8
-
 /**
  * Verbs allocator needs a context to know in the callback which kind of
  * resources it is allocating.
@@ -153,6 +150,12 @@ struct mlx5_verbs_alloc_ctx {
 
 LIST_HEAD(mlx5_mr_list, mlx5_mr);
 
+/* Flow drop context necessary due to Verbs API. */
+struct mlx5_drop {
+	struct mlx5_hrxq *hrxq; /* Hash Rx queue. */
+	struct mlx5_rxq_ibv *rxq; /* Verbs Rx queue. */
+};
+
 struct priv {
 	LIST_ENTRY(priv) mem_event_cb; /* Called by memory event callback. */
 	struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
@@ -182,7 +185,7 @@ struct priv {
 	struct rte_intr_handle intr_handle; /* Interrupt handler. */
 	unsigned int (*reta_idx)[]; /* RETA index table. */
 	unsigned int reta_idx_n; /* RETA index size. */
-	struct mlx5_hrxq_drop *flow_drop_queue; /* Flow drop queue. */
+	struct mlx5_drop drop_queue; /* Flow drop queues. */
 	struct mlx5_flows flows; /* RTE Flow rules. */
 	struct mlx5_flows ctrl_flows; /* Control flow rules. */
 	struct {
@@ -314,7 +317,8 @@ int mlx5_traffic_restart(struct rte_eth_dev *dev);
 
 /* mlx5_flow.c */
 
-unsigned int mlx5_get_max_verbs_prio(struct rte_eth_dev *dev);
+int mlx5_verbs_max_prio(struct rte_eth_dev *dev);
+void mlx5_flow_print(struct rte_flow *flow);
 int mlx5_flow_validate(struct rte_eth_dev *dev,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item items[],
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a45cb06e1..5e325be37 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -75,6 +75,58 @@ struct ibv_spec_header {
 	uint16_t size;
 };
 
+/**
+ * Get the maximum number of priorities available.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   Number of supported Verbs flow priorities on success, a negative
+ *   errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_verbs_max_prio(struct rte_eth_dev *dev)
+{
+	struct {
+		struct ibv_flow_attr attr;
+		struct ibv_flow_spec_eth eth;
+		struct ibv_flow_spec_action_drop drop;
+	} flow_attr = {
+		.attr = {
+			.num_of_specs = 2,
+		},
+		.eth = {
+			.type = IBV_FLOW_SPEC_ETH,
+			.size = sizeof(struct ibv_flow_spec_eth),
+		},
+		.drop = {
+			.size = sizeof(struct ibv_flow_spec_action_drop),
+			.type = IBV_FLOW_SPEC_ACTION_DROP,
+		},
+	};
+	struct ibv_flow *flow;
+	uint32_t verb_priorities;
+	struct mlx5_hrxq *drop = mlx5_hrxq_drop_new(dev);
+
+	if (!drop) {
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	for (verb_priorities = 0; 1; verb_priorities++) {
+		flow_attr.attr.priority = verb_priorities;
+		flow = mlx5_glue->create_flow(drop->qp,
+					      &flow_attr.attr);
+		if (!flow)
+			break;
+		claim_zero(mlx5_glue->destroy_flow(flow));
+	}
+	mlx5_hrxq_drop_release(dev);
+	DRV_LOG(INFO, "port %u flow maximum priority: %u",
+		dev->data->port_id, verb_priorities);
+	return verb_priorities;
+}
+
 /**
  * Convert a flow.
  *
@@ -184,32 +236,6 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, struct mlx5_flows *list)
 	}
 }
 
-/**
- * Create drop queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-int
-mlx5_flow_create_drop_queue(struct rte_eth_dev *dev __rte_unused)
-{
-	return 0;
-}
-
-/**
- * Delete drop queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- */
-void
-mlx5_flow_delete_drop_queue(struct rte_eth_dev *dev __rte_unused)
-{
-}
-
 /**
  * Remove all flows.
  *
@@ -292,6 +318,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	struct priv *priv = dev->data->dev_private;
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
+		.priority = priv->config.max_verbs_prio - 1,
 	};
 	struct rte_flow_item items[] = {
 		{
@@ -830,18 +857,3 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev,
 	}
 	return 0;
 }
-
-/**
- * Detect number of Verbs flow priorities supported.
- *
- * @param dev
- *   Pointer to Ethernet device.
- *
- * @return
- *   number of supported Verbs flow priority.
- */
-unsigned int
-mlx5_get_max_verbs_prio(struct rte_eth_dev *dev __rte_unused)
-{
-	return 8;
-}
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index fd0df177e..d960daa43 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1957,3 +1957,232 @@ mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev)
 	}
 	return ret;
 }
+
+/**
+ * Create a drop Rx queue Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_rxq_ibv *
+mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct ibv_cq *cq;
+	struct ibv_wq *wq = NULL;
+	struct mlx5_rxq_ibv *rxq;
+
+	if (priv->drop_queue.rxq)
+		return priv->drop_queue.rxq;
+	cq = mlx5_glue->create_cq(priv->ctx, 1, NULL, NULL, 0);
+	if (!cq) {
+		DEBUG("port %u cannot allocate CQ for drop queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	wq = mlx5_glue->create_wq(priv->ctx,
+		 &(struct ibv_wq_init_attr){
+			.wq_type = IBV_WQT_RQ,
+			.max_wr = 1,
+			.max_sge = 1,
+			.pd = priv->pd,
+			.cq = cq,
+		 });
+	if (!wq) {
+		DEBUG("port %u cannot allocate WQ for drop queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	rxq = rte_calloc(__func__, 1, sizeof(*rxq), 0);
+	if (!rxq) {
+		DEBUG("port %u cannot allocate drop Rx queue memory",
+		      dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rxq->cq = cq;
+	rxq->wq = wq;
+	priv->drop_queue.rxq = rxq;
+	return rxq;
+error:
+	if (wq)
+		claim_zero(mlx5_glue->destroy_wq(wq));
+	if (cq)
+		claim_zero(mlx5_glue->destroy_cq(cq));
+	return NULL;
+}
+
+/**
+ * Release a drop Rx queue Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+void
+mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_ibv *rxq = priv->drop_queue.rxq;
+
+	if (rxq->wq)
+		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
+	if (rxq->cq)
+		claim_zero(mlx5_glue->destroy_cq(rxq->cq));
+	rte_free(rxq);
+	priv->drop_queue.rxq = NULL;
+}
+
+/**
+ * Create a drop indirection table.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_ind_table_ibv *
+mlx5_ind_table_ibv_drop_new(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_rxq_ibv *rxq;
+	struct mlx5_ind_table_ibv tmpl;
+
+	rxq = mlx5_rxq_ibv_drop_new(dev);
+	if (!rxq)
+		return NULL;
+	tmpl.ind_table = mlx5_glue->create_rwq_ind_table
+		(priv->ctx,
+		 &(struct ibv_rwq_ind_table_init_attr){
+			.log_ind_tbl_size = 0,
+			.ind_tbl = &rxq->wq,
+			.comp_mask = 0,
+		 });
+	if (!tmpl.ind_table) {
+		DEBUG("port %u cannot allocate indirection table for drop"
+		      " queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl), 0);
+	if (!ind_tbl) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	ind_tbl->ind_table = tmpl.ind_table;
+	return ind_tbl;
+error:
+	mlx5_rxq_ibv_drop_release(dev);
+	return NULL;
+}
+
+/**
+ * Release a drop indirection table.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+void
+mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_ind_table_ibv *ind_tbl = priv->drop_queue.hrxq->ind_table;
+
+	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
+	mlx5_rxq_ibv_drop_release(dev);
+	rte_free(ind_tbl);
+	priv->drop_queue.hrxq->ind_table = NULL;
+}
+
+/**
+ * Create a drop Rx Hash queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_hrxq *
+mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_ind_table_ibv *ind_tbl;
+	struct ibv_qp *qp;
+	struct mlx5_hrxq *hrxq;
+
+	if (priv->drop_queue.hrxq) {
+		rte_atomic32_inc(&priv->drop_queue.hrxq->refcnt);
+		return priv->drop_queue.hrxq;
+	}
+	ind_tbl = mlx5_ind_table_ibv_drop_new(dev);
+	if (!ind_tbl)
+		return NULL;
+	qp = mlx5_glue->create_qp_ex(priv->ctx,
+		 &(struct ibv_qp_init_attr_ex){
+			.qp_type = IBV_QPT_RAW_PACKET,
+			.comp_mask =
+				IBV_QP_INIT_ATTR_PD |
+				IBV_QP_INIT_ATTR_IND_TABLE |
+				IBV_QP_INIT_ATTR_RX_HASH,
+			.rx_hash_conf = (struct ibv_rx_hash_conf){
+				.rx_hash_function =
+					IBV_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = rss_hash_default_key_len,
+				.rx_hash_key = rss_hash_default_key,
+				.rx_hash_fields_mask = 0,
+			},
+			.rwq_ind_tbl = ind_tbl->ind_table,
+			.pd = priv->pd
+		 });
+	if (!qp) {
+		DEBUG("port %u cannot allocate QP for drop queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq), 0);
+	if (!hrxq) {
+		DRV_LOG(WARNING,
+			"port %u cannot allocate memory for drop queue",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	hrxq->ind_table = ind_tbl;
+	hrxq->qp = qp;
+	priv->drop_queue.hrxq = hrxq;
+	rte_atomic32_set(&hrxq->refcnt, 1);
+	return hrxq;
+error:
+	if (ind_tbl)
+		mlx5_ind_table_ibv_drop_release(dev);
+	return NULL;
+}
+
+/**
+ * Release a drop hash Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+void
+mlx5_hrxq_drop_release(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_hrxq *hrxq = priv->drop_queue.hrxq;
+
+	if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
+		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+		mlx5_ind_table_ibv_drop_release(dev);
+		rte_free(hrxq);
+		priv->drop_queue.hrxq = NULL;
+	}
+}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 97b4d9eb6..708fdd4fa 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -244,6 +244,8 @@ struct mlx5_rxq_ibv *mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_ibv *mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_ibv_release(struct mlx5_rxq_ibv *rxq_ibv);
 int mlx5_rxq_ibv_releasable(struct mlx5_rxq_ibv *rxq_ibv);
+struct mlx5_rxq_ibv *mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev);
+void mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev);
 int mlx5_rxq_ibv_verify(struct rte_eth_dev *dev);
 struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
@@ -264,6 +266,8 @@ struct mlx5_ind_table_ibv *mlx5_ind_table_ibv_get(struct rte_eth_dev *dev,
 int mlx5_ind_table_ibv_release(struct rte_eth_dev *dev,
 			       struct mlx5_ind_table_ibv *ind_tbl);
 int mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev);
+struct mlx5_ind_table_ibv *mlx5_ind_table_ibv_drop_new(struct rte_eth_dev *dev);
+void mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev);
 struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
@@ -276,6 +280,8 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 				uint32_t tunnel, uint32_t rss_level);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hxrq);
 int mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev);
+struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
+void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_port_offloads(void);
 uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
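
A note on the new probing helper: mlx5_verbs_max_prio() returns the number
of usable Verbs flow priorities as a positive int, or a negative errno value
on failure with rte_errno set. A minimal sketch of the calling convention,
mirroring the mlx5_dev_spawn() hunk above:

    int ret = mlx5_verbs_max_prio(eth_dev);

    if (ret < 0)
            return ret;     /* rte_errno is already set */
    priv->config.max_verbs_prio = ret;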
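The drop hash Rx queue itself is a shared singleton protected by a reference
count. A sketch of the expected life cycle (assuming a configured port dev;
assert() from <assert.h> is used only to state the invariant):

    struct mlx5_hrxq *a = mlx5_hrxq_drop_new(dev); /* CQ, WQ, indirection
                                                    * table and QP created,
                                                    * refcnt = 1 */
    struct mlx5_hrxq *b = mlx5_hrxq_drop_new(dev); /* same object, refcnt = 2 */

    assert(a == b);
    mlx5_hrxq_drop_release(dev); /* refcnt = 1, nothing destroyed */
    mlx5_hrxq_drop_release(dev); /* refcnt = 0, all Verbs objects destroyed */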
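Finally, on the mlx5_ctrl_flow_vlan() hunk: Verbs gives priority 0 the
highest precedence, so installing control flows at max_verbs_prio - 1 keeps
them behind any user rule. A sketch of the two attribute setups (priority 0
for user flows is an assumed default, not part of this patch):

    const struct rte_flow_attr ctrl_attr = {
            .ingress = 1,
            .priority = priv->config.max_verbs_prio - 1, /* matched last */
    };
    const struct rte_flow_attr user_attr = {
            .ingress = 1,
            .priority = 0,                               /* matched first */
    };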