From patchwork Fri Nov 25 18:14:23 2016
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 17264
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Adrien Mazarguil
Date: Fri, 25 Nov 2016 19:14:23 +0100
Subject: [dpdk-dev] [PATCH 3/3] net/mlx5: add rte_flow rule creation

Convert Ethernet, IPv4, IPv6, TCP and UDP layers into ibv_flow specifications
and create those rules after validation (i.e. once the NIC has confirmed it
supports the rule).

VLAN is still not supported in this commit.

Signed-off-by: Nelio Laranjeiro
---
 drivers/net/mlx5/mlx5_flow.c | 645 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 631 insertions(+), 14 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 54807ad..e948000 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -31,6 +31,17 @@
  */
 #include
+#include
+
+/* Verbs header. */
+/* ISO C doesn't support unnamed structs/unions, disabling -pedantic.
*/ +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +#include +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif #include #include @@ -39,11 +50,82 @@ #include "mlx5.h" +/** Define a value to use as index for the drop queue. */ +#define MLX5_FLOW_DROP_QUEUE ((uint32_t)-1) + struct rte_flow { LIST_ENTRY(rte_flow) next; + struct ibv_exp_flow_attr *ibv_attr; + struct ibv_exp_rwq_ind_table *ind_table; + struct ibv_qp *qp; + struct ibv_exp_flow *ibv_flow; + struct ibv_exp_wq *wq; + struct ibv_cq *cq; + uint8_t drop; }; /** + * Check support for a given item. + * + * @param item[in] + * Item specification. + * @param mask[in] + * Bit-mask covering supported fields to compare with spec, last and mask in + * \item. + * @param size + * Bit-Mask size in bytes. + * + * @return + * 0 on success. + */ +static int +mlx5_flow_item_validate(const struct rte_flow_item *item, + const uint8_t *mask, unsigned int size) +{ + int ret = 0; + + if (item->spec && !item->mask) { + unsigned int i; + const uint8_t *spec = item->spec; + + for (i = 0; i < size; ++i) + if ((spec[i] | mask[i]) != mask[i]) + return -1; + } + if (item->last && !item->mask) { + unsigned int i; + const uint8_t *spec = item->last; + + for (i = 0; i < size; ++i) + if ((spec[i] | mask[i]) != mask[i]) + return -1; + } + if (item->mask) { + unsigned int i; + const uint8_t *spec = item->mask; + + for (i = 0; i < size; ++i) + if ((spec[i] | mask[i]) != mask[i]) + return -1; + } + if (item->spec && item->last) { + uint8_t spec[size]; + uint8_t last[size]; + const uint8_t *apply = mask; + unsigned int i; + + if (item->mask) + apply = item->mask; + for (i = 0; i < size; ++i) { + spec[i] = ((const uint8_t *)item->spec)[i] & apply[i]; + last[i] = ((const uint8_t *)item->last)[i] & apply[i]; + } + ret = memcmp(spec, last, size); + } + return ret; +} + +/** * Validate a flow supported by the NIC. * * @param priv @@ -67,9 +149,43 @@ priv_flow_validate(struct priv *priv, const struct rte_flow_action actions[], struct rte_flow_error *error) { - (void)priv; const struct rte_flow_item *ilast = NULL; const struct rte_flow_action *alast = NULL; + /* Supported mask. 
*/ + const struct rte_flow_item_eth eth_mask = { + .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + .src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, + }; + const struct rte_flow_item_ipv4 ipv4_mask = { + .hdr = { + .src_addr = -1, + .dst_addr = -1, + }, + }; + const struct rte_flow_item_ipv6 ipv6_mask = { + .hdr = { + .src_addr = { + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + }, + .dst_addr = { + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, + }, + }, + }; + const struct rte_flow_item_udp udp_mask = { + .hdr = { + .src_port = -1, + .dst_port = -1, + }, + }; + const struct rte_flow_item_tcp tcp_mask = { + .hdr = { + .src_port = -1, + .dst_port = -1, + }, + }; if (attr->group) { rte_flow_error_set(error, ENOTSUP, @@ -100,27 +216,70 @@ priv_flow_validate(struct priv *priv, return -rte_errno; } for (; items->type != RTE_FLOW_ITEM_TYPE_END; ++items) { + int err = 0; + if (items->type == RTE_FLOW_ITEM_TYPE_VOID) { continue; } else if (items->type == RTE_FLOW_ITEM_TYPE_ETH) { if (ilast) goto exit_item_not_supported; ilast = items; - } else if ((items->type == RTE_FLOW_ITEM_TYPE_IPV4) || - (items->type == RTE_FLOW_ITEM_TYPE_IPV6)) { + err = mlx5_flow_item_validate( + items, + (const uint8_t *)ð_mask, + sizeof(eth_mask)); + if (err) + goto exit_item_not_supported; + } else if (items->type == RTE_FLOW_ITEM_TYPE_IPV4) { if (!ilast) goto exit_item_not_supported; else if (ilast->type != RTE_FLOW_ITEM_TYPE_ETH) goto exit_item_not_supported; ilast = items; - } else if ((items->type == RTE_FLOW_ITEM_TYPE_UDP) || - (items->type == RTE_FLOW_ITEM_TYPE_TCP)) { + err = mlx5_flow_item_validate( + items, + (const uint8_t *)&ipv4_mask, + sizeof(ipv4_mask)); + if (err) + goto exit_item_not_supported; + } else if (items->type == RTE_FLOW_ITEM_TYPE_IPV6) { + if (!ilast) + goto exit_item_not_supported; + else if (ilast->type != RTE_FLOW_ITEM_TYPE_ETH) + goto exit_item_not_supported; + ilast = items; + err = mlx5_flow_item_validate( + items, + (const uint8_t *)&ipv6_mask, + sizeof(ipv6_mask)); + if (err) + goto exit_item_not_supported; + } else if (items->type == RTE_FLOW_ITEM_TYPE_UDP) { if (!ilast) goto exit_item_not_supported; else if ((ilast->type != RTE_FLOW_ITEM_TYPE_IPV4) && (ilast->type != RTE_FLOW_ITEM_TYPE_IPV6)) goto exit_item_not_supported; ilast = items; + err = mlx5_flow_item_validate( + items, + (const uint8_t *)&udp_mask, + sizeof(udp_mask)); + if (err) + goto exit_item_not_supported; + } else if (items->type == RTE_FLOW_ITEM_TYPE_TCP) { + if (!ilast) + goto exit_item_not_supported; + else if ((ilast->type != RTE_FLOW_ITEM_TYPE_IPV4) && + (ilast->type != RTE_FLOW_ITEM_TYPE_IPV6)) + goto exit_item_not_supported; + ilast = items; + err = mlx5_flow_item_validate( + items, + (const uint8_t *)&tcp_mask, + sizeof(tcp_mask)); + if (err) + goto exit_item_not_supported; } else { goto exit_item_not_supported; } @@ -128,8 +287,23 @@ priv_flow_validate(struct priv *priv, for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions) { if (actions->type == RTE_FLOW_ACTION_TYPE_VOID) { continue; - } else if ((actions->type == RTE_FLOW_ACTION_TYPE_QUEUE) || - (actions->type == RTE_FLOW_ACTION_TYPE_DROP)) { + } else if (actions->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *queue = + (const struct rte_flow_action_queue *) + actions->conf; + + if (alast && + alast->type != actions->type) + goto exit_action_not_supported; + if (queue->index > (priv->rxqs_n - 1)) { + 
rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "queue index error"); + goto exit; + } + alast = actions; + } else if (actions->type == RTE_FLOW_ACTION_TYPE_DROP) { if (alast && alast->type != actions->type) goto exit_action_not_supported; @@ -146,6 +320,7 @@ priv_flow_validate(struct priv *priv, exit_action_not_supported: rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, actions, "action not supported"); +exit: return -rte_errno; } @@ -172,6 +347,310 @@ mlx5_flow_validate(struct rte_eth_dev *dev, } /** + * Convert Ethernet item to Verbs specification. + * + * @param item[in] + * Item specification. + * @param eth[in, out] + * Verbs Ethernet specification structure. + */ +static void +mlx5_flow_create_eth(const struct rte_flow_item *item, + struct ibv_exp_flow_spec_eth *eth) +{ + const struct rte_flow_item_eth *spec = item->spec; + const struct rte_flow_item_eth *mask = item->mask; + unsigned int i; + + memset(eth, 0, sizeof(struct ibv_exp_flow_spec_eth)); + *eth = (struct ibv_exp_flow_spec_eth) { + .type = IBV_EXP_FLOW_SPEC_ETH, + .size = sizeof(struct ibv_exp_flow_spec_eth), + }; + if (spec) { + memcpy(eth->val.dst_mac, spec->dst.addr_bytes, ETHER_ADDR_LEN); + memcpy(eth->val.src_mac, spec->src.addr_bytes, ETHER_ADDR_LEN); + } + if (mask) { + memcpy(eth->mask.dst_mac, mask->dst.addr_bytes, ETHER_ADDR_LEN); + memcpy(eth->mask.src_mac, mask->src.addr_bytes, ETHER_ADDR_LEN); + } + /* Remove unwanted bits from values. */ + for (i = 0; i < ETHER_ADDR_LEN; ++i) { + eth->val.dst_mac[i] &= eth->mask.dst_mac[i]; + eth->val.src_mac[i] &= eth->mask.src_mac[i]; + } + eth->val.ether_type &= eth->mask.ether_type; + eth->val.vlan_tag &= eth->mask.vlan_tag; +} + +/** + * Convert IPv4 item to Verbs specification. + * + * @param item[in] + * Item specification. + * @param ipv4[in, out] + * Verbs IPv4 specification structure. + */ +static void +mlx5_flow_create_ipv4(const struct rte_flow_item *item, + struct ibv_exp_flow_spec_ipv4 *ipv4) +{ + const struct rte_flow_item_ipv4 *spec = item->spec; + const struct rte_flow_item_ipv4 *mask = item->mask; + + memset(ipv4, 0, sizeof(struct ibv_exp_flow_spec_ipv4)); + *ipv4 = (struct ibv_exp_flow_spec_ipv4) { + .type = IBV_EXP_FLOW_SPEC_IPV4, + .size = sizeof(struct ibv_exp_flow_spec_ipv4), + }; + if (spec) { + ipv4->val = (struct ibv_exp_flow_ipv4_filter){ + .src_ip = spec->hdr.src_addr, + .dst_ip = spec->hdr.dst_addr, + }; + } + if (mask) { + ipv4->mask = (struct ibv_exp_flow_ipv4_filter){ + .src_ip = mask->hdr.src_addr, + .dst_ip = mask->hdr.dst_addr, + }; + } + /* Remove unwanted bits from values. */ + ipv4->val.src_ip &= ipv4->mask.src_ip; + ipv4->val.dst_ip &= ipv4->mask.dst_ip; +} + +/** + * Convert IPv6 item to Verbs specification. + * + * @param item[in] + * Item specification. + * @param ipv6[in, out] + * Verbs IPv6 specification structure. 
+ */ +static void +mlx5_flow_create_ipv6(const struct rte_flow_item *item, + struct ibv_exp_flow_spec_ipv6 *ipv6) +{ + const struct rte_flow_item_ipv6 *spec = item->spec; + const struct rte_flow_item_ipv6 *mask = item->mask; + unsigned int i; + + memset(ipv6, 0, sizeof(struct ibv_exp_flow_spec_ipv6)); + ipv6->type = IBV_EXP_FLOW_SPEC_IPV6; + ipv6->size = sizeof(struct ibv_exp_flow_spec_ipv6); + if (spec) { + memcpy(ipv6->val.src_ip, spec->hdr.src_addr, + RTE_DIM(ipv6->val.src_ip)); + memcpy(ipv6->val.dst_ip, spec->hdr.dst_addr, + RTE_DIM(ipv6->val.dst_ip)); + } + if (mask) { + memcpy(ipv6->mask.src_ip, mask->hdr.src_addr, + RTE_DIM(ipv6->mask.src_ip)); + memcpy(ipv6->mask.dst_ip, mask->hdr.dst_addr, + RTE_DIM(ipv6->mask.dst_ip)); + } + /* Remove unwanted bits from values. */ + for (i = 0; i < RTE_DIM(ipv6->val.src_ip); ++i) { + ipv6->val.src_ip[i] &= ipv6->mask.src_ip[i]; + ipv6->val.dst_ip[i] &= ipv6->mask.dst_ip[i]; + } +} + +/** + * Convert UDP item to Verbs specification. + * + * @param item[in] + * Item specification. + * @param udp[in, out] + * Verbs UDP specification structure. + */ +static void +mlx5_flow_create_udp(const struct rte_flow_item *item, + struct ibv_exp_flow_spec_tcp_udp *udp) +{ + const struct rte_flow_item_udp *spec = item->spec; + const struct rte_flow_item_udp *mask = item->mask; + + memset(udp, 0, sizeof(struct ibv_exp_flow_spec_tcp_udp)); + *udp = (struct ibv_exp_flow_spec_tcp_udp) { + .type = IBV_EXP_FLOW_SPEC_UDP, + .size = sizeof(struct ibv_exp_flow_spec_tcp_udp), + }; + udp->type = IBV_EXP_FLOW_SPEC_UDP; + if (spec) { + udp->val.dst_port = spec->hdr.dst_port; + udp->val.src_port = spec->hdr.src_port; + } + if (mask) { + udp->mask.dst_port = mask->hdr.dst_port; + udp->mask.src_port = mask->hdr.src_port; + } + /* Remove unwanted bits from values. */ + udp->val.src_port &= udp->mask.src_port; + udp->val.dst_port &= udp->mask.dst_port; +} + +/** + * Convert TCP item to Verbs specification. + * + * @param item[in] + * Item specification. + * @param tcp[in, out] + * Verbs TCP specification structure. + */ +static void +mlx5_flow_create_tcp(const struct rte_flow_item *item, + struct ibv_exp_flow_spec_tcp_udp *tcp) +{ + const struct rte_flow_item_tcp *spec = item->spec; + const struct rte_flow_item_tcp *mask = item->mask; + + memset(tcp, 0, sizeof(struct ibv_exp_flow_spec_tcp_udp)); + *tcp = (struct ibv_exp_flow_spec_tcp_udp) { + .type = IBV_EXP_FLOW_SPEC_TCP, + .size = sizeof(struct ibv_exp_flow_spec_tcp_udp), + }; + tcp->type = IBV_EXP_FLOW_SPEC_TCP; + if (spec) { + tcp->val.dst_port = spec->hdr.dst_port; + tcp->val.src_port = spec->hdr.src_port; + } + if (mask) { + tcp->mask.dst_port = mask->hdr.dst_port; + tcp->mask.src_port = mask->hdr.src_port; + } + /* Remove unwanted bits from values. */ + tcp->val.src_port &= tcp->mask.src_port; + tcp->val.dst_port &= tcp->mask.dst_port; +} + +/** + * Complete flow rule creation. + * + * @param priv + * Pointer to private structure. + * @param ibv_attr + * Verbs flow attributes. + * @param queue + * Destination queue. + * @param[out] error + * Perform verbose error reporting if not NULL. + * + * @return + * A flow if the rule could be created. 
+ */ +static struct rte_flow * +priv_flow_create_action_queue(struct priv *priv, + struct ibv_exp_flow_attr *ibv_attr, + uint32_t queue, + struct rte_flow_error *error) +{ + struct rxq_ctrl *rxq; + struct rte_flow *rte_flow; + + assert(priv->pd); + assert(priv->ctx); + rte_flow = rte_calloc(__func__, 1, sizeof(*rte_flow), 0); + if (!rte_flow) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "cannot allocate flow memory"); + return NULL; + } + if (queue == MLX5_FLOW_DROP_QUEUE) { + rte_flow->drop = 1; + rte_flow->cq = + ibv_exp_create_cq(priv->ctx, 1, NULL, NULL, 0, + &(struct ibv_exp_cq_init_attr){ + .comp_mask = 0, + }); + if (!rte_flow->cq) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "cannot allocate CQ"); + goto error; + } + rte_flow->wq = ibv_exp_create_wq( + priv->ctx, + &(struct ibv_exp_wq_init_attr){ + .wq_type = IBV_EXP_WQT_RQ, + .max_recv_wr = 1, + .max_recv_sge = 1, + .pd = priv->pd, + .cq = rte_flow->cq, + }); + } else { + rxq = container_of((*priv->rxqs)[queue], struct rxq_ctrl, rxq); + rte_flow->drop = 0; + rte_flow->wq = rxq->wq; + } + rte_flow->ibv_attr = ibv_attr; + rte_flow->ind_table = ibv_exp_create_rwq_ind_table( + priv->ctx, + &(struct ibv_exp_rwq_ind_table_init_attr){ + .pd = priv->pd, + .log_ind_tbl_size = 0, + .ind_tbl = &rte_flow->wq, + .comp_mask = 0, + }); + if (!rte_flow->ind_table) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "cannot allocate indirection table"); + goto error; + } + rte_flow->qp = ibv_exp_create_qp( + priv->ctx, + &(struct ibv_exp_qp_init_attr){ + .qp_type = IBV_QPT_RAW_PACKET, + .comp_mask = + IBV_EXP_QP_INIT_ATTR_PD | + IBV_EXP_QP_INIT_ATTR_PORT | + IBV_EXP_QP_INIT_ATTR_RX_HASH, + .pd = priv->pd, + .rx_hash_conf = &(struct ibv_exp_rx_hash_conf){ + .rx_hash_function = + IBV_EXP_RX_HASH_FUNC_TOEPLITZ, + .rx_hash_key_len = rss_hash_default_key_len, + .rx_hash_key = rss_hash_default_key, + .rx_hash_fields_mask = 0, + .rwq_ind_tbl = rte_flow->ind_table, + }, + .port_num = priv->port, + }); + if (!rte_flow->qp) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "cannot allocate QP"); + goto error; + } + rte_flow->ibv_flow = ibv_exp_create_flow(rte_flow->qp, + rte_flow->ibv_attr); + if (!rte_flow->ibv_flow) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "flow rule creation failure"); + goto error; + } + if (LIST_EMPTY(&priv->flows)) + LIST_INIT(&priv->flows); + LIST_INSERT_HEAD(&priv->flows, rte_flow, next); + return rte_flow; +error: + assert(rte_flow); + if (rte_flow->qp) + ibv_destroy_qp(rte_flow->qp); + if (rte_flow->ind_table) + ibv_exp_destroy_rwq_ind_table(rte_flow->ind_table); + if (rte_flow->drop && rte_flow->wq) + ibv_exp_destroy_wq(rte_flow->wq); + if (rte_flow->drop && rte_flow->cq) + ibv_destroy_cq(rte_flow->cq); + rte_free(rte_flow->ibv_attr); + rte_free(rte_flow); + return NULL; +} + +/** * Create a flow. 
* * @see rte_flow_create() @@ -185,17 +664,143 @@ mlx5_flow_create(struct rte_eth_dev *dev, struct rte_flow_error *error) { struct priv *priv = dev->data->dev_private; - struct rte_flow *flow; + struct rte_flow *rte_flow = NULL; + struct ibv_exp_flow_attr *ibv_attr; + unsigned int flow_size = sizeof(struct ibv_exp_flow_attr); priv_lock(priv); - if (priv_flow_validate(priv, attr, items, actions, error)) { - priv_unlock(priv); - return NULL; + if (priv_flow_validate(priv, attr, items, actions, error)) + goto exit; + ibv_attr = rte_malloc(__func__, flow_size, 0); + if (!ibv_attr) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "cannot allocate ibv_attr memory"); + goto exit; + } + *ibv_attr = (struct ibv_exp_flow_attr){ + .type = IBV_EXP_FLOW_ATTR_NORMAL, + .size = sizeof(struct ibv_exp_flow_attr), + .priority = attr->priority, + .num_of_specs = 0, + .port = 0, + .flags = 0, + .reserved = 0, + }; + /* Update ibv_flow_spec. */ + for (; items->type != RTE_FLOW_ITEM_TYPE_END; ++items) { + if (items->type == RTE_FLOW_ITEM_TYPE_VOID) { + continue; + } else if (items->type == RTE_FLOW_ITEM_TYPE_ETH) { + struct ibv_exp_flow_spec_eth *eth; + unsigned int eth_size = + sizeof(struct ibv_exp_flow_spec_eth); + + ibv_attr = rte_realloc(ibv_attr, + flow_size + eth_size, 0); + if (!ibv_attr) + goto error_no_memory; + eth = (void *)((uintptr_t)ibv_attr + flow_size); + mlx5_flow_create_eth(items, eth); + flow_size += eth_size; + ++ibv_attr->num_of_specs; + ibv_attr->priority = 2; + } else if (items->type == RTE_FLOW_ITEM_TYPE_IPV4) { + struct ibv_exp_flow_spec_ipv4 *ipv4; + unsigned int ipv4_size = + sizeof(struct ibv_exp_flow_spec_ipv4); + + ibv_attr = rte_realloc(ibv_attr, + flow_size + ipv4_size, 0); + if (!ibv_attr) + goto error_no_memory; + ipv4 = (void *)((uintptr_t)ibv_attr + flow_size); + mlx5_flow_create_ipv4(items, ipv4); + flow_size += ipv4_size; + ++ibv_attr->num_of_specs; + ibv_attr->priority = 1; + } else if (items->type == RTE_FLOW_ITEM_TYPE_IPV6) { + struct ibv_exp_flow_spec_ipv6 *ipv6; + unsigned int ipv6_size = + sizeof(struct ibv_exp_flow_spec_ipv6); + + ibv_attr = rte_realloc(ibv_attr, + flow_size + ipv6_size, 0); + if (!ibv_attr) + goto error_no_memory; + ipv6 = (void *)((uintptr_t)ibv_attr + flow_size); + mlx5_flow_create_ipv6(items, ipv6); + flow_size += ipv6_size; + ++ibv_attr->num_of_specs; + ibv_attr->priority = 1; + } else if (items->type == RTE_FLOW_ITEM_TYPE_UDP) { + struct ibv_exp_flow_spec_tcp_udp *udp; + unsigned int udp_size = + sizeof(struct ibv_exp_flow_spec_tcp_udp); + + ibv_attr = rte_realloc(ibv_attr, + flow_size + udp_size, 0); + if (!ibv_attr) + goto error_no_memory; + udp = (void *)((uintptr_t)ibv_attr + flow_size); + mlx5_flow_create_udp(items, udp); + flow_size += udp_size; + ++ibv_attr->num_of_specs; + ibv_attr->priority = 0; + } else if (items->type == RTE_FLOW_ITEM_TYPE_TCP) { + struct ibv_exp_flow_spec_tcp_udp *tcp; + unsigned int tcp_size = + sizeof(struct ibv_exp_flow_spec_tcp_udp); + + ibv_attr = rte_realloc(ibv_attr, + flow_size + tcp_size, 0); + if (!ibv_attr) + goto error_no_memory; + tcp = (void *)((uintptr_t)ibv_attr + flow_size); + mlx5_flow_create_tcp(items, tcp); + flow_size += tcp_size; + ++ibv_attr->num_of_specs; + ibv_attr->priority = 0; + } else { + /* This default rule should not happen. 
*/ + rte_free(ibv_attr); + rte_flow_error_set( + error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, + items, "unsupported item"); + goto exit; + } } - flow = rte_malloc(__func__, sizeof(struct rte_flow), 0); - LIST_INSERT_HEAD(&priv->flows, flow, next); + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; ++actions) { + if (actions->type == RTE_FLOW_ACTION_TYPE_VOID) { + continue; + } else if (actions->type == RTE_FLOW_ACTION_TYPE_QUEUE) { + const struct rte_flow_action_queue *queue = + (const struct rte_flow_action_queue *) + actions->conf; + + rte_flow = priv_flow_create_action_queue( + priv, ibv_attr, + queue->index, error); + } else if (actions->type == RTE_FLOW_ACTION_TYPE_DROP) { + rte_flow = priv_flow_create_action_queue( + priv, ibv_attr, + MLX5_FLOW_DROP_QUEUE, error); + } else { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, "unsupported action"); + goto exit; + } + } + priv_unlock(priv); + return rte_flow; +error_no_memory: + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_ITEM, + items, + "cannot allocate memory"); +exit: priv_unlock(priv); - return flow; + return NULL; } /** @@ -212,6 +817,18 @@ priv_flow_destroy(struct priv *priv, { (void)priv; LIST_REMOVE(flow, next); + claim_zero(ibv_exp_destroy_flow(flow->ibv_flow)); + if (flow->qp) + claim_zero(ibv_destroy_qp(flow->qp)); + if (flow->ind_table) + claim_zero( + ibv_exp_destroy_rwq_ind_table( + flow->ind_table)); + if (flow->drop && flow->wq) + claim_zero(ibv_exp_destroy_wq(flow->wq)); + if (flow->drop && flow->cq) + claim_zero(ibv_destroy_cq(flow->cq)); + rte_free(flow->ibv_attr); rte_free(flow); }
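For context, here is a minimal sketch (not part of the patch) of how an
application could exercise the new code path through the generic rte_flow
API: an Ethernet/IPv4/UDP pattern steered to Rx queue 1, assuming the port
was configured with at least two Rx queues. The helper name, addresses,
port number and queue index are illustrative assumptions, not values taken
from this patch; only destination fields are matched, which stays within
the masks accepted by priv_flow_validate().

#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Illustrative helper: steer IPv4/UDP traffic for 192.168.0.1:1234 to Rx
 * queue 1 of the given port. Returns the flow handle or NULL on error. */
static struct rte_flow *
example_create_udp_rule(uint8_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = rte_cpu_to_be_32(0xc0a80001), /* 192.168.0.1 */
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),
	};
	struct rte_flow_item_udp udp_spec = {
		.hdr.dst_port = rte_cpu_to_be_16(1234),
	};
	struct rte_flow_item_udp udp_mask = {
		.hdr.dst_port = rte_cpu_to_be_16(0xffff),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{
			.type = RTE_FLOW_ITEM_TYPE_IPV4,
			.spec = &ip_spec,
			.mask = &ip_mask,
		},
		{
			.type = RTE_FLOW_ITEM_TYPE_UDP,
			.spec = &udp_spec,
			.mask = &udp_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* On mlx5, validation ends up in priv_flow_validate() and creation
	 * in mlx5_flow_create() / priv_flow_create_action_queue(). */
	if (rte_flow_validate(port_id, &attr, pattern, actions, error))
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}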