From patchwork Wed Jun 27 15:07:52 2018
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 41694
X-Patchwork-Delegate: shahafs@mellanox.com
From: Nelio Laranjeiro
To: dev@dpdk.org, Adrien Mazarguil, Yongseok Koh
Date: Wed, 27 Jun 2018 17:07:52 +0200
X-Mailer: git-send-email 2.18.0
Subject: [dpdk-dev] [PATCH v2 20/20] net/mlx5: add count flow action

The count action is only supported by Mellanox OFED.

Signed-off-by: Nelio Laranjeiro
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5.h      |   2 +
 drivers/net/mlx5/mlx5_flow.c | 238 +++++++++++++++++++++++++++++++++++
 2 files changed, 240 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ed8c1c9a2..1d8e156c8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -181,6 +181,8 @@ struct priv {
 	struct mlx5_drop drop; /* Flow drop queues. */
 	struct mlx5_flows flows; /* RTE Flow rules. */
 	struct mlx5_flows ctrl_flows; /* Control flow rules. */
+	LIST_HEAD(counters, mlx5_flow_counter) flow_counters;
+	/* Flow counters. */
 	struct {
 		uint32_t dev_gen; /* Generation number to flush local caches. */
 		rte_rwlock_t rwlock; /* MR Lock. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7aa4e6ed5..9241855be 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -88,6 +88,7 @@ extern const struct eth_dev_ops mlx5_dev_ops_isolate;
 /* Modify a packet. */
 #define MLX5_FLOW_MOD_FLAG (1u << 0)
 #define MLX5_FLOW_MOD_MARK (1u << 1)
+#define MLX5_FLOW_MOD_COUNT (1u << 2)
 
 /* Priority reserved for default flows. */
 #define MLX5_FLOW_PRIO_RSVD ((uint32_t)-1)
@@ -239,6 +240,17 @@ struct mlx5_flow_verbs {
 	uint64_t hash_fields; /**< Verbs hash Rx queue hash fields. */
 };
 
+/* Counters information. */
+struct mlx5_flow_counter {
+	LIST_ENTRY(mlx5_flow_counter) next; /**< Pointer to the next counter. */
+	uint32_t shared:1; /**< Share counter ID with other flow rules. */
+	uint32_t ref_cnt:31; /**< Reference counter. */
+	uint32_t id; /**< Counter ID. */
+	struct ibv_counter_set *cs; /**< Holds the counters for the rule. */
+	uint64_t hits; /**< Number of packets matched by the rule. */
+	uint64_t bytes; /**< Number of bytes matched by the rule. */
+};
+
 /* Flow structure. */
 struct rte_flow {
 	TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
@@ -253,6 +265,7 @@ struct rte_flow {
 	LIST_HEAD(verbs, mlx5_flow_verbs) verbs; /**< Verbs flows list. */
 	struct mlx5_flow_verbs *cur_verbs;
 	/**< Current Verbs flow structure being filled. */
+	struct mlx5_flow_counter *counter; /**< Holds Verbs flow counter. */
 	struct rte_flow_action_rss rss;/**< RSS context. */
 	uint32_t ptype;
 	/**< Store tunnel packet type data to store in Rx queue. */
@@ -266,6 +279,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.destroy = mlx5_flow_destroy,
 	.flush = mlx5_flow_flush,
 	.isolate = mlx5_flow_isolate,
+	.query = mlx5_flow_query,
 };
 
 /* Convert FDIR request to Generic flow. */
@@ -407,6 +421,81 @@ mlx5_flow_priority(struct rte_eth_dev *dev, uint32_t priority,
 	return priority;
 }
 
+/**
+ * Get a flow counter.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param id
+ *   Counter identifier.
+ *
+ * @return
+ *   A pointer to the counter, NULL otherwise and rte_errno is set.
+ */
+static struct mlx5_flow_counter *
+mlx5_flow_counter_new(struct rte_eth_dev *dev, uint32_t shared, uint32_t id)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_flow_counter *cnt;
+
+	LIST_FOREACH(cnt, &priv->flow_counters, next) {
+		if (cnt->shared != shared)
+			continue;
+		if (cnt->id != id)
+			continue;
+		cnt->ref_cnt++;
+		return cnt;
+	}
+#ifdef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT
+
+	struct mlx5_flow_counter tmpl = {
+		.shared = shared,
+		.id = id,
+		.cs = mlx5_glue->create_counter_set
+			(priv->ctx,
+			 &(struct ibv_counter_set_init_attr){
+				 .counter_set_id = id,
+			 }),
+		.hits = 0,
+		.bytes = 0,
+	};
+
+	if (!tmpl.cs) {
+		rte_errno = errno;
+		return NULL;
+	}
+	cnt = rte_calloc(__func__, 1, sizeof(*cnt), 0);
+	if (!cnt) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	*cnt = tmpl;
+	LIST_INSERT_HEAD(&priv->flow_counters, cnt, next);
+	return cnt;
+#endif
+	rte_errno = ENOTSUP;
+	return NULL;
+}
+
+/**
+ * Release a flow counter.
+ *
+ * @param counter
+ *   Pointer to the flow counter to release.
+ *
+ * The counter is destroyed and removed from the counter list when its
+ * reference count drops to zero.
+ */
+static void
+mlx5_flow_counter_release(struct mlx5_flow_counter *counter)
+{
+	if (--counter->ref_cnt == 0) {
+		claim_zero(mlx5_glue->destroy_counter_set(counter->cs));
+		LIST_REMOVE(counter, next);
+		rte_free(counter);
+	}
+}
+
 /**
  * Flow debug purpose function only available when
  * CONFIG_RTE_LIBRTE_MLX5_DEBUG=y
@@ -2169,6 +2258,65 @@ mlx5_flow_action_mark(const struct rte_flow_action *actions,
 	return size;
 }
 
+/**
+ * Validate the count action provided by the user.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param actions
+ *   Pointer to flow actions array.
+ * @param flow
+ *   Pointer to the rte_flow structure.
+ * @param[in] flow_size
+ *   Size in bytes of the available space to store the flow information.
+ * @param error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   Size in bytes necessary for the conversion, a negative errno value
+ *   otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_action_count(struct rte_eth_dev *dev,
+		       const struct rte_flow_action *actions,
+		       struct rte_flow *flow,
+		       const size_t flow_size __rte_unused,
+		       struct rte_flow_error *error)
+{
+	const struct rte_flow_action_count *count = actions->conf;
+#ifdef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT
+	unsigned int size = sizeof(struct ibv_flow_spec_counter_action);
+	struct ibv_flow_spec_counter_action counter = {
+		.type = IBV_FLOW_SPEC_ACTION_COUNT,
+		.size = size,
+	};
+#endif
+
+	if (!flow->counter) {
+		flow->counter = mlx5_flow_counter_new(dev, count->shared,
+						      count->id);
+		if (!flow->counter)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ACTION,
+						  actions,
+						  "cannot get counter"
+						  " context.");
+	}
+	if (!((struct priv *)dev->data->dev_private)->config.flow_counter_en)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  actions,
+					  "flow counters are not supported.");
+	flow->modifier |= MLX5_FLOW_MOD_COUNT;
+#ifdef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT
+	counter.counter_set_handle = flow->counter->cs->handle;
+	if (size <= flow_size)
+		mlx5_flow_spec_verbs_add(flow, &counter, size);
+	return size;
+#endif
+	return 0;
+}
+
 /**
  * Validate actions provided by the user.
  *
@@ -2228,6 +2376,10 @@ mlx5_flow_actions(struct rte_eth_dev *dev,
 		case RTE_FLOW_ACTION_TYPE_RSS:
 			ret = mlx5_flow_action_rss(dev, actions, flow, error);
 			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			ret = mlx5_flow_action_count(dev, actions, flow, remain,
+						     error);
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -2417,6 +2569,10 @@ mlx5_flow_fate_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
 			verbs->hrxq = NULL;
 		}
 	}
+	if (flow->counter) {
+		mlx5_flow_counter_release(flow->counter);
+		flow->counter = NULL;
+	}
 }
 
 /**
@@ -2974,6 +3130,88 @@ mlx5_flow_isolate(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Query flow counter.
+ *
+ * @param flow
+ *   Pointer to the flow.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_query_count(struct rte_flow *flow __rte_unused,
+		      void *data __rte_unused,
+		      struct rte_flow_error *error)
+{
+#ifdef HAVE_IBV_DEVICE_COUNTERS_SET_SUPPORT
+	struct rte_flow_query_count *qc = data;
+	uint64_t counters[2] = {0, 0};
+	struct ibv_query_counter_set_attr query_cs_attr = {
+		.cs = flow->counter->cs,
+		.query_flags = IBV_COUNTER_SET_FORCE_UPDATE,
+	};
+	struct ibv_counter_set_data query_out = {
+		.out = counters,
+		.outlen = 2 * sizeof(uint64_t),
+	};
+	int err = mlx5_glue->query_counter_set(&query_cs_attr, &query_out);
+
+	if (err)
+		return rte_flow_error_set(error, err,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL,
+					  "cannot read counter");
+	qc->hits_set = 1;
+	qc->bytes_set = 1;
+	qc->hits = counters[0] - flow->counter->hits;
+	qc->bytes = counters[1] - flow->counter->bytes;
+	if (qc->reset) {
+		flow->counter->hits = counters[0];
+		flow->counter->bytes = counters[1];
+	}
+	return 0;
+#endif
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL,
+				  "counters are not available");
+}
+
+/**
+ * Query a flow.
+ *
+ * @see rte_flow_query()
+ * @see rte_flow_ops
+ */
+int
+mlx5_flow_query(struct rte_eth_dev *dev __rte_unused,
+		struct rte_flow *flow,
+		const struct rte_flow_action *actions,
+		void *data,
+		struct rte_flow_error *error)
+{
+	int ret = 0;
+
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			ret = mlx5_flow_query_count(flow, data, error);
+			break;
+		default:
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ACTION,
+						  actions,
+						  "action not supported");
+		}
+		if (ret < 0)
+			return ret;
+	}
+	return 0;
+}
+
 /**
  * Convert a flow director filter to a generic flow.
 *
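
[Editor's note, not part of the patch] A minimal usage sketch of the new action
from the application side, assuming the rte_flow API of this release (COUNT
action carrying shared/id fields, rte_flow_query() taking an action list). The
port number, queue index, catch-all Ethernet pattern, and counter identifier 0
are illustrative assumptions, not taken from the patch.

	#include <inttypes.h>
	#include <stdio.h>

	#include <rte_ethdev.h>
	#include <rte_flow.h>

	/* Create a counted catch-all ingress rule, query its counter, destroy it. */
	static int
	count_flow_example(uint16_t port_id)
	{
		struct rte_flow_error error;
		struct rte_flow_attr attr = { .ingress = 1 };
		/* Match every Ethernet frame; a real rule would be more selective. */
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		/* Non-shared counter with an arbitrary identifier (assumption). */
		struct rte_flow_action_count count_conf = { .shared = 0, .id = 0 };
		struct rte_flow_action_queue queue_conf = { .index = 0 };
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count_conf },
			{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		/* Only the COUNT action is passed to the query callback. */
		struct rte_flow_action query_actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count_conf },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		/* Ask the PMD to reset the counter after reading it. */
		struct rte_flow_query_count query = { .reset = 1 };
		struct rte_flow *flow;

		flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
		if (flow == NULL) {
			fprintf(stderr, "flow creation failed: %s\n",
				error.message ? error.message : "unknown");
			return -1;
		}
		/* ... let traffic hit the rule for a while ... */
		if (rte_flow_query(port_id, flow, query_actions, &query, &error)) {
			fprintf(stderr, "flow query failed: %s\n",
				error.message ? error.message : "unknown");
			rte_flow_destroy(port_id, flow, &error);
			return -1;
		}
		if (query.hits_set && query.bytes_set)
			printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
			       query.hits, query.bytes);
		return rte_flow_destroy(port_id, flow, &error);
	}

With this patch, the query above lands in mlx5_flow_query_count(), which reports
the delta since the last reset and, because .reset is set, stores the current
hardware values as the new baseline.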