From patchwork Wed Dec 28 10:37:15 2016
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 18605
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Adrien Mazarguil
Date: Wed, 28 Dec 2016 11:37:15 +0100
Message-Id: <6d3ec6ba808383e779583ea917a7498b4dcf5585.1482920437.git.nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH v4 2/6] net/mlx5: support basic flow items and actions

Introduce initial software support for rte_flow rules.

VLAN and VXLAN items are still not supported.
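As an illustration (not part of this patch; port_id and the queue count
are assumptions, the port must be started with at least four Rx queues),
an application can use the new items and actions to direct TCP traffic
from 192.168.0.0/24 to Rx queue 3:

	struct rte_flow_error err;
	struct rte_flow_attr attr = { .ingress = 1 };
	/* Match source addresses inside 192.168.0.0/24. */
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr = { .src_addr = rte_cpu_to_be_32(IPv4(192, 168, 0, 0)) },
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr = { .src_addr = rte_cpu_to_be_32(0xffffff00) },
	};
	/* ETH -> IPV4 -> TCP is a valid path in the item graph below. */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 3 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
						actions, &err);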
Signed-off-by: Nelio Laranjeiro
Acked-by: Adrien Mazarguil
---
 drivers/net/mlx5/mlx5.h         |   3 +
 drivers/net/mlx5/mlx5_flow.c    | 928 ++++++++++++++++++++++++++++++++++++++--
 drivers/net/mlx5/mlx5_trigger.c |   2 +
 3 files changed, 904 insertions(+), 29 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 04f4eaa..c415ce3 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -136,6 +136,7 @@ struct priv {
 	unsigned int reta_idx_n; /* RETA index size. */
 	struct fdir_filter_list *fdir_filter_list; /* Flow director rules. */
 	struct fdir_queue *fdir_drop_queue; /* Flow director drop queue. */
+	LIST_HEAD(mlx5_flows, rte_flow) flows; /* RTE Flow rules. */
 	uint32_t link_speed_capa; /* Link speed capabilities. */
 	rte_spinlock_t lock; /* Lock for control functions. */
 };
@@ -283,5 +284,7 @@ struct rte_flow *mlx5_flow_create(struct rte_eth_dev *,
 int mlx5_flow_destroy(struct rte_eth_dev *, struct rte_flow *,
 		      struct rte_flow_error *);
 int mlx5_flow_flush(struct rte_eth_dev *, struct rte_flow_error *);
+int priv_flow_start(struct priv *);
+void priv_flow_stop(struct priv *);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 4fdefa0..ebae2b5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -31,12 +31,380 @@
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <sys/queue.h>
+#include <string.h>
+
+/* Verbs header. */
+/* ISO C doesn't support unnamed structs/unions, disabling -pedantic. */
+#ifdef PEDANTIC
+#pragma GCC diagnostic ignored "-Wpedantic"
+#endif
+#include <infiniband/verbs.h>
+#ifdef PEDANTIC
+#pragma GCC diagnostic error "-Wpedantic"
+#endif
+
 #include <rte_ethdev.h>
 #include <rte_flow.h>
 #include <rte_flow_driver.h>
+#include <rte_malloc.h>
 
 #include "mlx5.h"
 
+static int
+mlx5_flow_create_eth(const struct rte_flow_item *item, void *data);
+
+static int
+mlx5_flow_create_ipv4(const struct rte_flow_item *item, void *data);
+
+static int
+mlx5_flow_create_ipv6(const struct rte_flow_item *item, void *data);
+
+static int
+mlx5_flow_create_udp(const struct rte_flow_item *item, void *data);
+
+static int
+mlx5_flow_create_tcp(const struct rte_flow_item *item, void *data);
+
+struct rte_flow {
+	LIST_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
+	struct ibv_exp_flow_attr *ibv_attr; /**< Pointer to Verbs attributes. */
+	struct ibv_exp_rwq_ind_table *ind_table; /**< Indirection table. */
+	struct ibv_qp *qp; /**< Verbs queue pair. */
+	struct ibv_exp_flow *ibv_flow; /**< Verbs flow. */
+	struct ibv_exp_wq *wq; /**< Verbs work queue. */
+	struct ibv_cq *cq; /**< Verbs completion queue. */
+	struct rxq *rxq; /**< Pointer to the queue, NULL if drop queue. */
+};
+
+/** Static initializer for items. */
+#define ITEMS(...) \
+	(const enum rte_flow_item_type []){ \
+		__VA_ARGS__, RTE_FLOW_ITEM_TYPE_END, \
+	}
+
+/** Structure to generate a simple graph of layers supported by the NIC. */
+struct mlx5_flow_items {
+	/** List of possible following items. */
+	const enum rte_flow_item_type *const items;
+	/** List of possible actions for these items. */
+	const enum rte_flow_action_type *const actions;
+	/** Bit-masks corresponding to the possibilities for the item. */
+	const void *mask;
+	/** Bit-masks size in bytes. */
+	const unsigned int mask_sz;
+	/**
+	 * Conversion function from rte_flow to NIC specific flow.
+	 *
+	 * @param item
+	 *   rte_flow item to convert.
+	 * @param data
+	 *   Internal structure to store the conversion.
+	 *
+	 * @return
+	 *   0 on success, negative value otherwise.
+	 */
+	int (*convert)(const struct rte_flow_item *item, void *data);
+	/** Size in bytes of the destination structure. */
+	const unsigned int dst_sz;
+};
+
+/** Valid action for this PMD. */
+static const enum rte_flow_action_type valid_actions[] = {
+	RTE_FLOW_ACTION_TYPE_DROP,
+	RTE_FLOW_ACTION_TYPE_QUEUE,
+	RTE_FLOW_ACTION_TYPE_END,
+};
+
+/** Graph of supported items and associated actions. */
+static const struct mlx5_flow_items mlx5_flow_items[] = {
+	[RTE_FLOW_ITEM_TYPE_VOID] = {
+		.items = ITEMS(RTE_FLOW_ITEM_TYPE_VOID,
+			       RTE_FLOW_ITEM_TYPE_ETH),
+		.actions = valid_actions,
+		.mask = &(const struct rte_flow_item_eth){
+			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		},
+		.mask_sz = sizeof(struct rte_flow_item_eth),
+		.convert = mlx5_flow_create_eth,
+		.dst_sz = sizeof(struct ibv_exp_flow_spec_eth),
+	},
+	[RTE_FLOW_ITEM_TYPE_ETH] = {
+		.items = ITEMS(RTE_FLOW_ITEM_TYPE_IPV4,
+			       RTE_FLOW_ITEM_TYPE_IPV6),
+		.actions = valid_actions,
+		.mask = &(const struct rte_flow_item_eth){
+			.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+			.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		},
+		.mask_sz = sizeof(struct rte_flow_item_eth),
+		.convert = mlx5_flow_create_eth,
+		.dst_sz = sizeof(struct ibv_exp_flow_spec_eth),
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV4] = {
+		.items = ITEMS(RTE_FLOW_ITEM_TYPE_UDP,
+			       RTE_FLOW_ITEM_TYPE_TCP),
+		.actions = valid_actions,
+		.mask = &(const struct rte_flow_item_ipv4){
+			.hdr = {
+				.src_addr = -1,
+				.dst_addr = -1,
+			},
+		},
+		.mask_sz = sizeof(struct rte_flow_item_ipv4),
+		.convert = mlx5_flow_create_ipv4,
+		.dst_sz = sizeof(struct ibv_exp_flow_spec_ipv4),
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6] = {
+		.items = ITEMS(RTE_FLOW_ITEM_TYPE_UDP,
+			       RTE_FLOW_ITEM_TYPE_TCP),
+		.actions = valid_actions,
+		.mask = &(const struct rte_flow_item_ipv6){
+			.hdr = {
+				.src_addr = {
+					0xff, 0xff, 0xff, 0xff,
+					0xff, 0xff, 0xff, 0xff,
+					0xff, 0xff, 0xff, 0xff,
+					0xff, 0xff, 0xff, 0xff,
+				},
+				.dst_addr = {
+					0xff, 0xff, 0xff, 0xff,
+					0xff, 0xff, 0xff, 0xff,
+					0xff, 0xff, 0xff, 0xff,
+					0xff, 0xff, 0xff, 0xff,
+				},
+			},
+		},
+		.mask_sz = sizeof(struct rte_flow_item_ipv6),
+		.convert = mlx5_flow_create_ipv6,
+		.dst_sz = sizeof(struct ibv_exp_flow_spec_ipv6),
+	},
+	[RTE_FLOW_ITEM_TYPE_UDP] = {
+		.actions = valid_actions,
+		.mask = &(const struct rte_flow_item_udp){
+			.hdr = {
+				.src_port = -1,
+				.dst_port = -1,
+			},
+		},
+		.mask_sz = sizeof(struct rte_flow_item_udp),
+		.convert = mlx5_flow_create_udp,
+		.dst_sz = sizeof(struct ibv_exp_flow_spec_tcp_udp),
+	},
+	[RTE_FLOW_ITEM_TYPE_TCP] = {
+		.actions = valid_actions,
+		.mask = &(const struct rte_flow_item_tcp){
+			.hdr = {
+				.src_port = -1,
+				.dst_port = -1,
+			},
+		},
+		.mask_sz = sizeof(struct rte_flow_item_tcp),
+		.convert = mlx5_flow_create_tcp,
+		.dst_sz = sizeof(struct ibv_exp_flow_spec_tcp_udp),
+	},
+};
+
+/** Structure to pass to the conversion function. */
+struct mlx5_flow {
+	struct ibv_exp_flow_attr *ibv_attr; /**< Verbs attribute. */
+	unsigned int offset; /**< Offset in bytes in the ibv_attr buffer. */
+};
+
+struct mlx5_flow_action {
+	uint32_t queue:1; /**< Target is a receive queue. */
+	uint32_t drop:1; /**< Target is a drop queue. */
+	uint32_t queue_id; /**< Identifier of the queue. */
+};
+
+/**
+ * Check support for a given item.
+ *
+ * @param item[in]
+ *   Item specification.
+ * @param mask[in]
+ *   Bit-masks covering supported fields to compare with spec, last and mask in
+ *   \item.
+ * @param size
+ *   Bit-mask size in bytes.
+ *
+ * @return
+ *   0 on success.
+ */
+static int
+mlx5_flow_item_validate(const struct rte_flow_item *item,
+			const uint8_t *mask, unsigned int size)
+{
+	int ret = 0;
+
+	if (item->spec && !item->mask) {
+		unsigned int i;
+		const uint8_t *spec = item->spec;
+
+		for (i = 0; i < size; ++i)
+			if ((spec[i] | mask[i]) != mask[i])
+				return -1;
+	}
+	if (item->last && !item->mask) {
+		unsigned int i;
+		const uint8_t *spec = item->last;
+
+		for (i = 0; i < size; ++i)
+			if ((spec[i] | mask[i]) != mask[i])
+				return -1;
+	}
+	if (item->mask) {
+		unsigned int i;
+		const uint8_t *spec = item->mask;
+
+		for (i = 0; i < size; ++i)
+			if ((spec[i] | mask[i]) != mask[i])
+				return -1;
+	}
+	if (item->spec && item->last) {
+		uint8_t spec[size];
+		uint8_t last[size];
+		const uint8_t *apply = mask;
+		unsigned int i;
+
+		if (item->mask)
+			apply = item->mask;
+		for (i = 0; i < size; ++i) {
+			spec[i] = ((const uint8_t *)item->spec)[i] & apply[i];
+			last[i] = ((const uint8_t *)item->last)[i] & apply[i];
+		}
+		ret = memcmp(spec, last, size);
+	}
+	return ret;
+}
+
+/**
+ * Validate a flow supported by the NIC.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param[in] attr
+ *   Flow rule attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ * @param[in, out] flow
+ *   Flow structure to update.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+priv_flow_validate(struct priv *priv,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item items[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error,
+		   struct mlx5_flow *flow)
+{
+	const struct mlx5_flow_items *cur_item = mlx5_flow_items;
+	struct mlx5_flow_action action = {
+		.queue = 0,
+		.drop = 0,
+	};
+
+	(void)priv;
+	if (attr->group) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				   NULL,
+				   "groups are not supported");
+		return -rte_errno;
+	}
+	if (attr->priority) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   NULL,
+				   "priorities are not supported");
+		return -rte_errno;
+	}
+	if (attr->egress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   NULL,
+				   "egress is not supported");
+		return -rte_errno;
+	}
+	if (!attr->ingress) {
+		rte_flow_error_set(error, ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   NULL,
+				   "only ingress is supported");
+		return -rte_errno;
+	}
+	for (; items->type != RTE_FLOW_ITEM_TYPE_END; ++items) {
+		const struct mlx5_flow_items *token = NULL;
+		unsigned int i;
+		int err;
+
+		if (items->type == RTE_FLOW_ITEM_TYPE_VOID)
+			continue;
+		for (i = 0;
+		     cur_item->items &&
+		     cur_item->items[i] != RTE_FLOW_ITEM_TYPE_END;
+		     ++i) {
+			if (cur_item->items[i] == items->type) {
+				token = &mlx5_flow_items[items->type];
+				break;
+			}
+		}
+		if (!token)
+			goto exit_item_not_supported;
+		cur_item = token;
+		err = mlx5_flow_item_validate(items,
+					      (const uint8_t *)cur_item->mask,
+					      cur_item->mask_sz);
+		if (err)
+			goto exit_item_not_supported;
+		if (flow->ibv_attr && cur_item->convert) {
+			err = cur_item->convert(items, flow);
+			if (err)
+				goto exit_item_not_supported;
+		}
+		flow->offset += cur_item->dst_sz;
+	}
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_VOID) {
+			continue;
+		} else if (actions->type == RTE_FLOW_ACTION_TYPE_DROP) {
+			action.drop = 1;
+		} else if (actions->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+			const struct rte_flow_action_queue *queue =
+				(const struct rte_flow_action_queue *)
+				actions->conf;
+
+			if (!queue || (queue->index > (priv->rxqs_n - 1)))
+				goto exit_action_not_supported;
+			action.queue = 1;
+		} else {
+			goto exit_action_not_supported;
+		}
+	}
+	if (!action.queue && !action.drop) {
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "no valid action");
+		return -rte_errno;
+	}
+	return 0;
+exit_item_not_supported:
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
+			   items, "item not supported");
+	return -rte_errno;
+exit_action_not_supported:
+	rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+			   actions, "action not supported");
+	return -rte_errno;
+}
+
 /**
  * Validate a flow supported by the NIC.
  *
@@ -50,15 +418,417 @@ mlx5_flow_validate(struct rte_eth_dev *dev,
 		   const struct rte_flow_action actions[],
 		   struct rte_flow_error *error)
 {
-	(void)dev;
-	(void)attr;
-	(void)items;
-	(void)actions;
-	(void)error;
-	rte_flow_error_set(error, ENOTSUP,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "not implemented yet");
-	return -rte_errno;
+	struct priv *priv = dev->data->dev_private;
+	int ret;
+	struct mlx5_flow flow = { .offset = sizeof(struct ibv_exp_flow_attr) };
+
+	priv_lock(priv);
+	ret = priv_flow_validate(priv, attr, items, actions, error, &flow);
+	priv_unlock(priv);
+	return ret;
+}
+
+/**
+ * Convert Ethernet item to Verbs specification.
+ *
+ * @param item[in]
+ *   Item specification.
+ * @param data[in, out]
+ *   User structure.
+ */
+static int
+mlx5_flow_create_eth(const struct rte_flow_item *item, void *data)
+{
+	const struct rte_flow_item_eth *spec = item->spec;
+	const struct rte_flow_item_eth *mask = item->mask;
+	struct mlx5_flow *flow = (struct mlx5_flow *)data;
+	struct ibv_exp_flow_spec_eth *eth;
+	const unsigned int eth_size = sizeof(struct ibv_exp_flow_spec_eth);
+	unsigned int i;
+
+	eth = (void *)((uintptr_t)flow->ibv_attr + flow->offset);
+	*eth = (struct ibv_exp_flow_spec_eth) {
+		.type = IBV_EXP_FLOW_SPEC_ETH,
+		.size = eth_size,
+	};
+	if (spec) {
+		memcpy(eth->val.dst_mac, spec->dst.addr_bytes, ETHER_ADDR_LEN);
+		memcpy(eth->val.src_mac, spec->src.addr_bytes, ETHER_ADDR_LEN);
+	}
+	if (mask) {
+		memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, ETHER_ADDR_LEN);
+		memcpy(eth->mask.src_mac, mask->src.addr_bytes, ETHER_ADDR_LEN);
+	}
+	/* Remove unwanted bits from values. */
+	for (i = 0; i < ETHER_ADDR_LEN; ++i) {
+		eth->val.dst_mac[i] &= eth->mask.dst_mac[i];
+		eth->val.src_mac[i] &= eth->mask.src_mac[i];
+	}
+	/* Finalise the flow. */
+	++flow->ibv_attr->num_of_specs;
+	flow->ibv_attr->priority = 2;
+	return 0;
+}
+
+/**
+ * Convert IPv4 item to Verbs specification.
+ *
+ * @param item[in]
+ *   Item specification.
+ * @param data[in, out]
+ *   User structure.
+ */
+static int
+mlx5_flow_create_ipv4(const struct rte_flow_item *item, void *data)
+{
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+	struct mlx5_flow *flow = (struct mlx5_flow *)data;
+	struct ibv_exp_flow_spec_ipv4 *ipv4;
+	unsigned int ipv4_size = sizeof(struct ibv_exp_flow_spec_ipv4);
+
+	ipv4 = (void *)((uintptr_t)flow->ibv_attr + flow->offset);
+	*ipv4 = (struct ibv_exp_flow_spec_ipv4) {
+		.type = IBV_EXP_FLOW_SPEC_IPV4,
+		.size = ipv4_size,
+	};
+	if (spec) {
+		ipv4->val = (struct ibv_exp_flow_ipv4_filter){
+			.src_ip = spec->hdr.src_addr,
+			.dst_ip = spec->hdr.dst_addr,
+		};
+	}
+	if (mask) {
+		ipv4->mask = (struct ibv_exp_flow_ipv4_filter){
+			.src_ip = mask->hdr.src_addr,
+			.dst_ip = mask->hdr.dst_addr,
+		};
+	}
+	/* Remove unwanted bits from values. */
+	ipv4->val.src_ip &= ipv4->mask.src_ip;
+	ipv4->val.dst_ip &= ipv4->mask.dst_ip;
+	++flow->ibv_attr->num_of_specs;
+	flow->ibv_attr->priority = 1;
+	return 0;
+}
+
+/**
+ * Convert IPv6 item to Verbs specification.
+ *
+ * @param item[in]
+ *   Item specification.
+ * @param data[in, out]
+ *   User structure.
+ */
+static int
+mlx5_flow_create_ipv6(const struct rte_flow_item *item, void *data)
+{
+	const struct rte_flow_item_ipv6 *spec = item->spec;
+	const struct rte_flow_item_ipv6 *mask = item->mask;
+	struct mlx5_flow *flow = (struct mlx5_flow *)data;
+	struct ibv_exp_flow_spec_ipv6 *ipv6;
+	unsigned int ipv6_size = sizeof(struct ibv_exp_flow_spec_ipv6);
+	unsigned int i;
+
+	ipv6 = (void *)((uintptr_t)flow->ibv_attr + flow->offset);
+	*ipv6 = (struct ibv_exp_flow_spec_ipv6) {
+		.type = IBV_EXP_FLOW_SPEC_IPV6,
+		.size = ipv6_size,
+	};
+	if (spec) {
+		memcpy(ipv6->val.src_ip, spec->hdr.src_addr,
+		       RTE_DIM(ipv6->val.src_ip));
+		memcpy(ipv6->val.dst_ip, spec->hdr.dst_addr,
+		       RTE_DIM(ipv6->val.dst_ip));
+	}
+	if (mask) {
+		memcpy(ipv6->mask.src_ip, mask->hdr.src_addr,
+		       RTE_DIM(ipv6->mask.src_ip));
+		memcpy(ipv6->mask.dst_ip, mask->hdr.dst_addr,
+		       RTE_DIM(ipv6->mask.dst_ip));
+	}
+	/* Remove unwanted bits from values. */
+	for (i = 0; i < RTE_DIM(ipv6->val.src_ip); ++i) {
+		ipv6->val.src_ip[i] &= ipv6->mask.src_ip[i];
+		ipv6->val.dst_ip[i] &= ipv6->mask.dst_ip[i];
+	}
+	++flow->ibv_attr->num_of_specs;
+	flow->ibv_attr->priority = 1;
+	return 0;
+}
+
+/**
+ * Convert UDP item to Verbs specification.
+ *
+ * @param item[in]
+ *   Item specification.
+ * @param data[in, out]
+ *   User structure.
+ */
+static int
+mlx5_flow_create_udp(const struct rte_flow_item *item, void *data)
+{
+	const struct rte_flow_item_udp *spec = item->spec;
+	const struct rte_flow_item_udp *mask = item->mask;
+	struct mlx5_flow *flow = (struct mlx5_flow *)data;
+	struct ibv_exp_flow_spec_tcp_udp *udp;
+	unsigned int udp_size = sizeof(struct ibv_exp_flow_spec_tcp_udp);
+
+	udp = (void *)((uintptr_t)flow->ibv_attr + flow->offset);
+	*udp = (struct ibv_exp_flow_spec_tcp_udp) {
+		.type = IBV_EXP_FLOW_SPEC_UDP,
+		.size = udp_size,
+	};
+	if (spec) {
+		udp->val.dst_port = spec->hdr.dst_port;
+		udp->val.src_port = spec->hdr.src_port;
+	}
+	if (mask) {
+		udp->mask.dst_port = mask->hdr.dst_port;
+		udp->mask.src_port = mask->hdr.src_port;
+	}
+	/* Remove unwanted bits from values. */
+	udp->val.src_port &= udp->mask.src_port;
+	udp->val.dst_port &= udp->mask.dst_port;
+	++flow->ibv_attr->num_of_specs;
+	flow->ibv_attr->priority = 0;
+	return 0;
+}
+
+/**
+ * Convert TCP item to Verbs specification.
+ *
+ * @param item[in]
+ *   Item specification.
+ * @param data[in, out]
+ *   User structure.
+ */
+static int
+mlx5_flow_create_tcp(const struct rte_flow_item *item, void *data)
+{
+	const struct rte_flow_item_tcp *spec = item->spec;
+	const struct rte_flow_item_tcp *mask = item->mask;
+	struct mlx5_flow *flow = (struct mlx5_flow *)data;
+	struct ibv_exp_flow_spec_tcp_udp *tcp;
+	unsigned int tcp_size = sizeof(struct ibv_exp_flow_spec_tcp_udp);
+
+	tcp = (void *)((uintptr_t)flow->ibv_attr + flow->offset);
+	*tcp = (struct ibv_exp_flow_spec_tcp_udp) {
+		.type = IBV_EXP_FLOW_SPEC_TCP,
+		.size = tcp_size,
+	};
+	if (spec) {
+		tcp->val.dst_port = spec->hdr.dst_port;
+		tcp->val.src_port = spec->hdr.src_port;
+	}
+	if (mask) {
+		tcp->mask.dst_port = mask->hdr.dst_port;
+		tcp->mask.src_port = mask->hdr.src_port;
+	}
+	/* Remove unwanted bits from values. */
+	tcp->val.src_port &= tcp->mask.src_port;
+	tcp->val.dst_port &= tcp->mask.dst_port;
+	++flow->ibv_attr->num_of_specs;
+	flow->ibv_attr->priority = 0;
+	return 0;
+}
+
+/**
+ * Complete flow rule creation.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param ibv_attr
+ *   Verbs flow attributes.
+ * @param action
+ *   Target action structure.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *
+ * @return
+ *   A flow if the rule could be created.
+ */
+static struct rte_flow *
+priv_flow_create_action_queue(struct priv *priv,
+			      struct ibv_exp_flow_attr *ibv_attr,
+			      struct mlx5_flow_action *action,
+			      struct rte_flow_error *error)
+{
+	struct rxq_ctrl *rxq;
+	struct rte_flow *rte_flow;
+
+	assert(priv->pd);
+	assert(priv->ctx);
+	rte_flow = rte_calloc(__func__, 1, sizeof(*rte_flow), 0);
+	if (!rte_flow) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "cannot allocate flow memory");
+		return NULL;
+	}
+	if (action->drop) {
+		rte_flow->cq =
+			ibv_exp_create_cq(priv->ctx, 1, NULL, NULL, 0,
+					  &(struct ibv_exp_cq_init_attr){
+						  .comp_mask = 0,
+					  });
+		if (!rte_flow->cq) {
+			rte_flow_error_set(error, ENOMEM,
+					   RTE_FLOW_ERROR_TYPE_HANDLE,
+					   NULL, "cannot allocate CQ");
+			goto error;
+		}
+		rte_flow->wq = ibv_exp_create_wq(priv->ctx,
+						 &(struct ibv_exp_wq_init_attr){
+						 .wq_type = IBV_EXP_WQT_RQ,
+						 .max_recv_wr = 1,
+						 .max_recv_sge = 1,
+						 .pd = priv->pd,
+						 .cq = rte_flow->cq,
+						 });
+	} else {
+		rxq = container_of((*priv->rxqs)[action->queue_id],
+				   struct rxq_ctrl, rxq);
+		rte_flow->rxq = &rxq->rxq;
+		rte_flow->wq = rxq->wq;
+	}
+	rte_flow->ibv_attr = ibv_attr;
+	rte_flow->ind_table = ibv_exp_create_rwq_ind_table(
+		priv->ctx,
+		&(struct ibv_exp_rwq_ind_table_init_attr){
+			.pd = priv->pd,
+			.log_ind_tbl_size = 0,
+			.ind_tbl = &rte_flow->wq,
+			.comp_mask = 0,
+		});
+	if (!rte_flow->ind_table) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "cannot allocate indirection table");
+		goto error;
+	}
+	rte_flow->qp = ibv_exp_create_qp(
+		priv->ctx,
+		&(struct ibv_exp_qp_init_attr){
+			.qp_type = IBV_QPT_RAW_PACKET,
+			.comp_mask =
+				IBV_EXP_QP_INIT_ATTR_PD |
+				IBV_EXP_QP_INIT_ATTR_PORT |
+				IBV_EXP_QP_INIT_ATTR_RX_HASH,
+			.pd = priv->pd,
+			.rx_hash_conf = &(struct ibv_exp_rx_hash_conf){
+				.rx_hash_function =
+					IBV_EXP_RX_HASH_FUNC_TOEPLITZ,
+				.rx_hash_key_len = rss_hash_default_key_len,
+				.rx_hash_key = rss_hash_default_key,
+				.rx_hash_fields_mask = 0,
+				.rwq_ind_tbl = rte_flow->ind_table,
+			},
+			.port_num = priv->port,
+		});
+	if (!rte_flow->qp) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "cannot allocate QP");
+		goto error;
+	}
+	rte_flow->ibv_flow = ibv_exp_create_flow(rte_flow->qp,
+						 rte_flow->ibv_attr);
+	if (!rte_flow->ibv_flow) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "flow rule creation failure");
+		goto error;
+	}
+	return rte_flow;
+error:
+	assert(rte_flow);
+	if (rte_flow->qp)
+		ibv_destroy_qp(rte_flow->qp);
+	if (rte_flow->ind_table)
+		ibv_exp_destroy_rwq_ind_table(rte_flow->ind_table);
+	if (!rte_flow->rxq && rte_flow->wq)
+		ibv_exp_destroy_wq(rte_flow->wq);
+	if (!rte_flow->rxq && rte_flow->cq)
+		ibv_destroy_cq(rte_flow->cq);
+	rte_free(rte_flow->ibv_attr);
+	rte_free(rte_flow);
+	return NULL;
+}
+
+/**
+ * Convert a flow.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param[in] attr
+ *   Flow rule attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *
+ * @return
+ *   A flow on success, NULL otherwise.
+ */
+static struct rte_flow *
+priv_flow_create(struct priv *priv,
+		 const struct rte_flow_attr *attr,
+		 const struct rte_flow_item items[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct rte_flow *rte_flow;
+	struct mlx5_flow_action action;
+	struct mlx5_flow flow = { .offset = sizeof(struct ibv_exp_flow_attr), };
+	int err;
+
+	err = priv_flow_validate(priv, attr, items, actions, error, &flow);
+	if (err)
+		goto exit;
+	flow.ibv_attr = rte_malloc(__func__, flow.offset, 0);
+	flow.offset = sizeof(struct ibv_exp_flow_attr);
+	if (!flow.ibv_attr) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "cannot allocate ibv_attr memory");
+		goto exit;
+	}
+	*flow.ibv_attr = (struct ibv_exp_flow_attr){
+		.type = IBV_EXP_FLOW_ATTR_NORMAL,
+		.size = sizeof(struct ibv_exp_flow_attr),
+		.priority = attr->priority,
+		.num_of_specs = 0,
+		.port = 0,
+		.flags = 0,
+		.reserved = 0,
+	};
+	claim_zero(priv_flow_validate(priv, attr, items, actions,
+				      error, &flow));
+	action = (struct mlx5_flow_action){
+		.queue = 0,
+		.drop = 0,
+	};
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions) {
+		if (actions->type == RTE_FLOW_ACTION_TYPE_VOID) {
+			continue;
+		} else if (actions->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+			action.queue = 1;
+			action.queue_id =
+				((const struct rte_flow_action_queue *)
+				 actions->conf)->index;
+		} else if (actions->type == RTE_FLOW_ACTION_TYPE_DROP) {
+			action.drop = 1;
+		} else {
+			rte_flow_error_set(error, ENOTSUP,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   actions, "unsupported action");
+			goto exit;
+		}
+	}
+	rte_flow = priv_flow_create_action_queue(priv, flow.ibv_attr,
+						 &action, error);
+	return rte_flow;
+exit:
+	rte_free(flow.ibv_attr);
+	return NULL;
 }
 
 /**
@@ -74,15 +844,46 @@ mlx5_flow_create(struct rte_eth_dev *dev,
 		 const struct rte_flow_action actions[],
 		 struct rte_flow_error *error)
 {
-	(void)dev;
-	(void)attr;
-	(void)items;
-	(void)actions;
-	(void)error;
-	rte_flow_error_set(error, ENOTSUP,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "not implemented yet");
-	return NULL;
+	struct priv *priv = dev->data->dev_private;
+	struct rte_flow *flow;
+
+	priv_lock(priv);
+	flow = priv_flow_create(priv, attr, items, actions, error);
+	if (flow) {
+		LIST_INSERT_HEAD(&priv->flows, flow, next);
+		DEBUG("Flow created %p", (void *)flow);
+	}
+	priv_unlock(priv);
+	return flow;
+}
+
+/**
+ * Destroy a flow.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ * @param[in] flow
+ *   Flow to destroy.
+ */
+static void
+priv_flow_destroy(struct priv *priv,
+		  struct rte_flow *flow)
+{
+	(void)priv;
+	LIST_REMOVE(flow, next);
+	if (flow->ibv_flow)
+		claim_zero(ibv_exp_destroy_flow(flow->ibv_flow));
+	if (flow->qp)
+		claim_zero(ibv_destroy_qp(flow->qp));
+	if (flow->ind_table)
+		claim_zero(ibv_exp_destroy_rwq_ind_table(flow->ind_table));
+	if (!flow->rxq && flow->wq)
+		claim_zero(ibv_exp_destroy_wq(flow->wq));
+	if (!flow->rxq && flow->cq)
+		claim_zero(ibv_destroy_cq(flow->cq));
+	rte_free(flow->ibv_attr);
+	DEBUG("Flow destroyed %p", (void *)flow);
+	rte_free(flow);
 }
 
 /**
@@ -96,13 +897,30 @@ mlx5_flow_destroy(struct rte_eth_dev *dev,
 		  struct rte_flow *flow,
 		  struct rte_flow_error *error)
 {
-	(void)dev;
-	(void)flow;
+	struct priv *priv = dev->data->dev_private;
+
 	(void)error;
-	rte_flow_error_set(error, ENOTSUP,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "not implemented yet");
-	return -rte_errno;
+	priv_lock(priv);
+	priv_flow_destroy(priv, flow);
+	priv_unlock(priv);
+	return 0;
+}
+
+/**
+ * Destroy all flows.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ */
+static void
+priv_flow_flush(struct priv *priv)
+{
+	while (!LIST_EMPTY(&priv->flows)) {
+		struct rte_flow *flow;
+
+		flow = LIST_FIRST(&priv->flows);
+		priv_flow_destroy(priv, flow);
+	}
 }
 
 /**
@@ -115,10 +933,62 @@ int
 mlx5_flow_flush(struct rte_eth_dev *dev,
 		struct rte_flow_error *error)
 {
-	(void)dev;
+	struct priv *priv = dev->data->dev_private;
+
 	(void)error;
-	rte_flow_error_set(error, ENOTSUP,
-			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-			   NULL, "not implemented yet");
-	return -rte_errno;
+	priv_lock(priv);
+	priv_flow_flush(priv);
+	priv_unlock(priv);
+	return 0;
+}
+
+/**
+ * Remove all flows.
+ *
+ * Called by dev_stop() to remove all flows.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ */
+void
+priv_flow_stop(struct priv *priv)
+{
+	struct rte_flow *flow;
+
+	for (flow = LIST_FIRST(&priv->flows);
+	     flow;
+	     flow = LIST_NEXT(flow, next)) {
+		claim_zero(ibv_exp_destroy_flow(flow->ibv_flow));
+		flow->ibv_flow = NULL;
+		DEBUG("Flow %p removed", (void *)flow);
+	}
+}
+
+/**
+ * Add all flows.
+ *
+ * @param priv
+ *   Pointer to private structure.
+ *
+ * @return
+ *   0 on success, an errno value otherwise and rte_errno is set.
+ */
+int
+priv_flow_start(struct priv *priv)
+{
+	struct rte_flow *flow;
+
+	for (flow = LIST_FIRST(&priv->flows);
+	     flow;
+	     flow = LIST_NEXT(flow, next)) {
+		flow->ibv_flow = ibv_exp_create_flow(flow->qp,
+						     flow->ibv_attr);
+		if (!flow->ibv_flow) {
+			DEBUG("Flow %p cannot be applied", (void *)flow);
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
+		DEBUG("Flow %p applied", (void *)flow);
+	}
+	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index d4dccd8..2399243 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -90,6 +90,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.fdir_conf.mode != RTE_FDIR_MODE_NONE)
 		priv_fdir_enable(priv);
 	priv_dev_interrupt_handler_install(priv, dev);
+	err = priv_flow_start(priv);
 	priv_unlock(priv);
 	return -err;
 }
@@ -120,6 +121,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 	priv_mac_addrs_disable(priv);
 	priv_destroy_hash_rxqs(priv);
 	priv_fdir_disable(priv);
+	priv_flow_stop(priv);
 	priv_dev_interrupt_handler_uninstall(priv, dev);
 	priv->started = 0;
 	priv_unlock(priv);
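
Usage sketch (illustrative, not part of the patch; assumes port 0 is
configured and started): a rule dropping all UDPv4 traffic can be
installed at runtime and survives a stop/start cycle, since
priv_flow_stop() only detaches the Verbs flow while priv_flow_start()
re-creates it from the stored ibv_attr:

	struct rte_flow_error err;
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow *flow = rte_flow_create(0, &attr, pattern,
						actions, &err);

	rte_eth_dev_stop(0);     /* rules detached by priv_flow_stop() */
	rte_eth_dev_start(0);    /* rules re-applied by priv_flow_start() */
	rte_flow_flush(0, &err); /* destroy every rule on the port */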