From patchwork Wed Jun 27 15:07:34 2018
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 41675
X-Patchwork-Delegate: shahafs@mellanox.com
From: Nelio Laranjeiro
To: dev@dpdk.org, Adrien Mazarguil, Yongseok Koh
Date: Wed, 27 Jun 2018 17:07:34 +0200
Message-Id: <1368a8720f3ec3c40e47a6b1d9ef1edf0f38a646.1530111623.git.nelio.laranjeiro@6wind.com>
X-Mailer: git-send-email 2.18.0
Subject: [dpdk-dev] [PATCH v2 02/20] net/mlx5: handle drop queues as regular queues

Drop queues only exist because of the Verbs API: whether a flow's fate is
to drop is already encoded in the flow itself. Drop queues can therefore
be fully mapped onto regular Rx queues.
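For context, this is the kind of rte_flow rule whose fate the patch re-maps;
a minimal sketch using only the public rte_flow API (illustrative, not part
of this change; error handling elided). With this patch, such a rule lands on
a regular hash Rx queue object shared by all drop flows (see
mlx5_hrxq_drop_new() below) instead of a dedicated drop queue structure:

    /* An ingress rule whose fate is DROP: the fate is carried by the
     * action list itself, which is why the PMD can back such flows
     * with a regular hash Rx queue. */
    #include <rte_flow.h>

    static struct rte_flow *
    create_drop_flow(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }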
Signed-off-by: Nelio Laranjeiro
---
 drivers/net/mlx5/mlx5.c      |  24 ++--
 drivers/net/mlx5/mlx5.h      |  14 ++-
 drivers/net/mlx5/mlx5_flow.c |  94 ++++++-------
 drivers/net/mlx5/mlx5_rxq.c  | 229 ++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rxtx.h |   8 ++
 5 files changed, 307 insertions(+), 62 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index d01c0f6cf..aba6c1f9f 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -260,7 +260,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 		priv->txqs_n = 0;
 		priv->txqs = NULL;
 	}
-	mlx5_flow_delete_drop_queue(dev);
 	mlx5_mprq_free_mp(dev);
 	mlx5_mr_release(dev);
 	if (priv->pd != NULL) {
@@ -1100,22 +1099,15 @@ mlx5_dev_spawn_one(struct rte_device *dpdk_dev,
 	mlx5_link_update(eth_dev, 0);
 	/* Store device configuration on private structure. */
 	priv->config = config;
-	/* Create drop queue. */
-	err = mlx5_flow_create_drop_queue(eth_dev);
-	if (err) {
-		DRV_LOG(ERR, "port %u drop queue allocation failed: %s",
-			eth_dev->data->port_id, strerror(rte_errno));
-		err = rte_errno;
-		goto error;
-	}
 	/* Supported Verbs flow priority number detection. */
-	if (verb_priorities == 0)
-		verb_priorities = mlx5_get_max_verbs_prio(eth_dev);
-	if (verb_priorities < MLX5_VERBS_FLOW_PRIO_8) {
-		DRV_LOG(ERR, "port %u wrong Verbs flow priorities: %u",
-			eth_dev->data->port_id, verb_priorities);
-		err = ENOTSUP;
-		goto error;
+	if (verb_priorities == 0) {
+		err = mlx5_verbs_max_prio(eth_dev);
+		if (err < 0) {
+			DRV_LOG(ERR, "port %u wrong Verbs flow priorities",
+				eth_dev->data->port_id);
+			goto error;
+		}
+		verb_priorities = err;
 	}
 	priv->config.max_verbs_prio = verb_priorities;
 	/*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6674a77d4..11aa0932f 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -131,9 +131,6 @@ enum mlx5_verbs_alloc_type {
 	MLX5_VERBS_ALLOC_TYPE_RX_QUEUE,
 };
 
-/* 8 Verbs priorities. */
-#define MLX5_VERBS_FLOW_PRIO_8 8
-
 /**
  * Verbs allocator needs a context to know in the callback which kind of
  * resources it is allocating.
@@ -145,6 +142,12 @@ struct mlx5_verbs_alloc_ctx {
 
 LIST_HEAD(mlx5_mr_list, mlx5_mr);
 
+/* Flow drop context necessary due to Verbs API. */
+struct mlx5_drop {
+	struct mlx5_hrxq *hrxq; /* Hash Rx queue. */
+	struct mlx5_rxq_ibv *rxq; /* Verbs Rx queue. */
+};
+
 struct priv {
 	LIST_ENTRY(priv) mem_event_cb; /* Called by memory event callback. */
 	struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
@@ -175,7 +178,7 @@ struct priv {
 	struct rte_intr_handle intr_handle; /* Interrupt handler. */
 	unsigned int (*reta_idx)[]; /* RETA index table. */
 	unsigned int reta_idx_n; /* RETA index size. */
-	struct mlx5_hrxq_drop *flow_drop_queue; /* Flow drop queue. */
+	struct mlx5_drop drop; /* Flow drop queues. */
 	struct mlx5_flows flows; /* RTE Flow rules. */
 	struct mlx5_flows ctrl_flows; /* Control flow rules. */
 	struct {
@@ -306,7 +309,8 @@ int mlx5_traffic_restart(struct rte_eth_dev *dev);
 
 /* mlx5_flow.c */
 
-unsigned int mlx5_get_max_verbs_prio(struct rte_eth_dev *dev);
+int mlx5_verbs_max_prio(struct rte_eth_dev *dev);
+void mlx5_flow_print(struct rte_flow *flow);
 int mlx5_flow_validate(struct rte_eth_dev *dev,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item items[],
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a45cb06e1..8b5b695d4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -75,6 +75,58 @@ struct ibv_spec_header {
 	uint16_t size;
 };
 
+/**
+ * Get the maximum number of priorities available.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   Number of supported Verbs flow priorities on success, a negative errno
+ *   value otherwise and rte_errno is set.
+ */
+int
+mlx5_verbs_max_prio(struct rte_eth_dev *dev)
+{
+	struct {
+		struct ibv_flow_attr attr;
+		struct ibv_flow_spec_eth eth;
+		struct ibv_flow_spec_action_drop drop;
+	} flow_attr = {
+		.attr = {
+			.num_of_specs = 2,
+		},
+		.eth = {
+			.type = IBV_FLOW_SPEC_ETH,
+			.size = sizeof(struct ibv_flow_spec_eth),
+		},
+		.drop = {
+			.size = sizeof(struct ibv_flow_spec_action_drop),
+			.type = IBV_FLOW_SPEC_ACTION_DROP,
+		},
+	};
+	struct ibv_flow *flow;
+	uint32_t verb_priorities;
+	struct mlx5_hrxq *drop = mlx5_hrxq_drop_new(dev);
+
+	if (!drop) {
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+	for (verb_priorities = 0; 1; verb_priorities++) {
+		flow_attr.attr.priority = verb_priorities;
+		flow = mlx5_glue->create_flow(drop->qp,
+					      &flow_attr.attr);
+		if (!flow)
+			break;
+		claim_zero(mlx5_glue->destroy_flow(flow));
+	}
+	mlx5_hrxq_drop_release(dev, drop);
+	DRV_LOG(INFO, "port %u flow maximum priority: %d",
+		dev->data->port_id, verb_priorities);
+	return verb_priorities;
+}
+
 /**
  * Convert a flow.
  *
@@ -184,32 +236,6 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, struct mlx5_flows *list)
 	}
 }
 
-/**
- * Create drop queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- *
- * @return
- *   0 on success, a negative errno value otherwise and rte_errno is set.
- */
-int
-mlx5_flow_create_drop_queue(struct rte_eth_dev *dev __rte_unused)
-{
-	return 0;
-}
-
-/**
- * Delete drop queue.
- *
- * @param dev
- *   Pointer to Ethernet device.
- */
-void
-mlx5_flow_delete_drop_queue(struct rte_eth_dev *dev __rte_unused)
-{
-}
-
 /**
  * Remove all flows.
  *
@@ -292,6 +318,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	struct priv *priv = dev->data->dev_private;
 	const struct rte_flow_attr attr = {
 		.ingress = 1,
+		.priority = priv->config.max_verbs_prio - 1,
 	};
 	struct rte_flow_item items[] = {
 		{
@@ -830,18 +857,3 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev,
 	}
 	return 0;
 }
-
-/**
- * Detect number of Verbs flow priorities supported.
- *
- * @param dev
- *   Pointer to Ethernet device.
- *
- * @return
- *   number of supported Verbs flow priority.
- */
-unsigned int
-mlx5_get_max_verbs_prio(struct rte_eth_dev *dev __rte_unused)
-{
-	return 8;
-}
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 17db7c160..5f17dce50 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1949,3 +1949,232 @@ mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev)
 	}
 	return ret;
 }
+
+/**
+ * Create a drop Rx queue Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_rxq_ibv *
+mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct ibv_cq *cq;
+	struct ibv_wq *wq = NULL;
+	struct mlx5_rxq_ibv *rxq;
+
+	if (priv->drop.rxq)
+		return priv->drop.rxq;
+	cq = mlx5_glue->create_cq(priv->ctx, 1, NULL, NULL, 0);
+	if (!cq) {
+		DEBUG("port %u cannot allocate CQ for drop queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	wq = mlx5_glue->create_wq(priv->ctx,
+		 &(struct ibv_wq_init_attr){
+			.wq_type = IBV_WQT_RQ,
+			.max_wr = 1,
+			.max_sge = 1,
+			.pd = priv->pd,
+			.cq = cq,
+		 });
+	if (!wq) {
+		DEBUG("port %u cannot allocate WQ for drop queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	rxq = rte_calloc(__func__, 1, sizeof(*rxq), 0);
+	if (!rxq) {
+		DEBUG("port %u cannot allocate drop Rx queue memory",
+		      dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	rxq->cq = cq;
+	rxq->wq = wq;
+	priv->drop.rxq = rxq;
+	return rxq;
+error:
+	if (wq)
+		claim_zero(mlx5_glue->destroy_wq(wq));
+	if (cq)
+		claim_zero(mlx5_glue->destroy_cq(cq));
+	return NULL;
+}
+
+/**
+ * Release a drop Rx queue Verbs object.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param rxq
+ *   Pointer to the drop Verbs Rx queue.
+ */
+void
+mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev, struct mlx5_rxq_ibv *rxq)
+{
+	if (rxq->wq)
+		claim_zero(mlx5_glue->destroy_wq(rxq->wq));
+	if (rxq->cq)
+		claim_zero(mlx5_glue->destroy_cq(rxq->cq));
+	rte_free(rxq);
+	((struct priv *)dev->data->dev_private)->drop.rxq = NULL;
+}
+
+/**
+ * Create a drop indirection table.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_ind_table_ibv *
+mlx5_ind_table_ibv_drop_new(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_ind_table_ibv *ind_tbl;
+	struct mlx5_rxq_ibv *rxq;
+	struct mlx5_ind_table_ibv tmpl;
+
+	rxq = mlx5_rxq_ibv_drop_new(dev);
+	if (!rxq)
+		return NULL;
+	tmpl.ind_table = mlx5_glue->create_rwq_ind_table
+		(priv->ctx,
+		 &(struct ibv_rwq_ind_table_init_attr){
+			.log_ind_tbl_size = 0,
+			.ind_tbl = &rxq->wq,
+			.comp_mask = 0,
+		 });
+	if (!tmpl.ind_table) {
+		DEBUG("port %u cannot allocate indirection table for drop"
+		      " queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	ind_tbl = rte_calloc(__func__, 1, sizeof(*ind_tbl), 0);
+	if (!ind_tbl) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	ind_tbl->ind_table = tmpl.ind_table;
+	return ind_tbl;
+error:
+	mlx5_rxq_ibv_drop_release(dev, rxq);
+	return NULL;
+}
+
+/**
+ * Release a drop indirection table.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param ind_tbl
+ *   Indirection table to release.
+ */
+void
+mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev,
+				struct mlx5_ind_table_ibv *ind_tbl)
+{
+	claim_zero(mlx5_glue->destroy_rwq_ind_table(ind_tbl->ind_table));
+	mlx5_rxq_ibv_drop_release
+		(dev, ((struct priv *)dev->data->dev_private)->drop.rxq);
+	rte_free(ind_tbl);
+}
+
+/**
+ * Create a drop Rx Hash queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   The Verbs object initialised, NULL otherwise and rte_errno is set.
+ */
+struct mlx5_hrxq *
+mlx5_hrxq_drop_new(struct rte_eth_dev *dev)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_ind_table_ibv *ind_tbl;
+	struct ibv_qp *qp;
+	struct mlx5_hrxq *hrxq;
+
+	if (priv->drop.hrxq) {
+		rte_atomic32_inc(&priv->drop.hrxq->refcnt);
+		return priv->drop.hrxq;
+	}
+	ind_tbl = mlx5_ind_table_ibv_drop_new(dev);
+	if (!ind_tbl)
+		return NULL;
+	qp = mlx5_glue->create_qp_ex(priv->ctx,
+		 &(struct ibv_qp_init_attr_ex){
+			.qp_type = IBV_QPT_RAW_PACKET,
+			.comp_mask =
+				IBV_QP_INIT_ATTR_PD |
+				IBV_QP_INIT_ATTR_IND_TABLE |
+				IBV_QP_INIT_ATTR_RX_HASH,
+			.rx_hash_conf = (struct ibv_rx_hash_conf){
+				.rx_hash_function =
+					IBV_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = rss_hash_default_key_len,
+				.rx_hash_key = rss_hash_default_key,
+				.rx_hash_fields_mask = 0,
+			},
+			.rwq_ind_tbl = ind_tbl->ind_table,
+			.pd = priv->pd
+		 });
+	if (!qp) {
+		DEBUG("port %u cannot allocate QP for drop queue",
+		      dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	hrxq = rte_calloc(__func__, 1, sizeof(*hrxq), 0);
+	if (!hrxq) {
+		DRV_LOG(WARNING,
+			"port %u cannot allocate memory for drop queue",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	hrxq->ind_table = ind_tbl;
+	hrxq->qp = qp;
+	priv->drop.hrxq = hrxq;
+	rte_atomic32_set(&hrxq->refcnt, 1);
+	return hrxq;
+error:
+	if (ind_tbl)
+		mlx5_ind_table_ibv_drop_release(dev, ind_tbl);
+	return NULL;
+}
+
+/**
+ * Release a drop hash Rx queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param hrxq
+ *   Pointer to Hash Rx queue to release.
+ */
+void
+mlx5_hrxq_drop_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
+{
+	struct priv *priv = dev->data->dev_private;
+
+	if (rte_atomic32_dec_and_test(&hrxq->refcnt)) {
+		claim_zero(mlx5_glue->destroy_qp(hrxq->qp));
+		mlx5_ind_table_ibv_drop_release(dev, hrxq->ind_table);
+		rte_free(hrxq);
+		priv->drop.hrxq = NULL;
+	}
+}
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index f53bb43c3..51f7f678b 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -245,6 +245,9 @@ struct mlx5_rxq_ibv *mlx5_rxq_ibv_new(struct rte_eth_dev *dev, uint16_t idx);
 struct mlx5_rxq_ibv *mlx5_rxq_ibv_get(struct rte_eth_dev *dev, uint16_t idx);
 int mlx5_rxq_ibv_release(struct mlx5_rxq_ibv *rxq_ibv);
 int mlx5_rxq_ibv_releasable(struct mlx5_rxq_ibv *rxq_ibv);
+struct mlx5_rxq_ibv *mlx5_rxq_ibv_drop_new(struct rte_eth_dev *dev);
+void mlx5_rxq_ibv_drop_release(struct rte_eth_dev *dev,
+			       struct mlx5_rxq_ibv *rxq);
 int mlx5_rxq_ibv_verify(struct rte_eth_dev *dev);
 struct mlx5_rxq_ctrl *mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx,
 				   uint16_t desc, unsigned int socket,
@@ -265,6 +268,9 @@ struct mlx5_ind_table_ibv *mlx5_ind_table_ibv_get(struct rte_eth_dev *dev,
 int mlx5_ind_table_ibv_release(struct rte_eth_dev *dev,
 			       struct mlx5_ind_table_ibv *ind_tbl);
 int mlx5_ind_table_ibv_verify(struct rte_eth_dev *dev);
+struct mlx5_ind_table_ibv *mlx5_ind_table_ibv_drop_new(struct rte_eth_dev *dev);
+void mlx5_ind_table_ibv_drop_release(struct rte_eth_dev *dev,
+				     struct mlx5_ind_table_ibv *ind_tbl);
 struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
 				const uint8_t *rss_key, uint32_t rss_key_len,
 				uint64_t hash_fields,
@@ -277,6 +283,8 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
 				uint32_t tunnel, uint32_t rss_level);
 int mlx5_hrxq_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hxrq);
 int mlx5_hrxq_ibv_verify(struct rte_eth_dev *dev);
+struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
+void mlx5_hrxq_drop_release(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq);
 uint64_t mlx5_get_rx_port_offloads(void);
 uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
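
For reviewers, a rough sketch of how the flow engine is expected to consume
these helpers when translating a flow whose fate is drop. This is
hypothetical glue code: only mlx5_hrxq_drop_new(), mlx5_hrxq_drop_release()
and mlx5_glue->create_flow() come from this patch/driver; the rte_flow field
names (ibv_flow, hrxq) are illustrative.

    /* Hypothetical caller: attach a Verbs drop flow to the shared,
     * reference-counted drop hash Rx queue added by this patch. */
    static int
    flow_drop_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
                    struct ibv_flow_attr *attr)
    {
        struct mlx5_hrxq *hrxq = mlx5_hrxq_drop_new(dev);

        if (!hrxq)
            return -rte_errno;
        flow->ibv_flow = mlx5_glue->create_flow(hrxq->qp, attr);
        if (!flow->ibv_flow) {
            rte_errno = errno;
            mlx5_hrxq_drop_release(dev, hrxq);
            return -rte_errno;
        }
        flow->hrxq = hrxq; /* released when the flow is destroyed */
        return 0;
    }

Because mlx5_hrxq_drop_new() is reference counted and cached in priv->drop,
all drop flows share a single QP/WQ/CQ set, which also keeps the Verbs
priority probe in mlx5_verbs_max_prio() cheap.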