From patchwork Thu Oct 8 12:18:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrey Vesnovaty X-Patchwork-Id: 80049 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7D20DA04BC; Thu, 8 Oct 2020 14:19:46 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AFB111BF43; Thu, 8 Oct 2020 14:19:13 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 15A1F1BF19 for ; Thu, 8 Oct 2020 14:19:11 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from andreyv@nvidia.com) with SMTP; 8 Oct 2020 15:19:05 +0300 Received: from nvidia.com (r-arch-host11.mtr.labs.mlnx [10.213.43.60]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 098CJ3US014340; Thu, 8 Oct 2020 15:19:05 +0300 From: Andrey Vesnovaty To: dev@dpdk.org Cc: jer@marvell.com, jerinjacobk@gmail.com, thomas@monjalon.net, ferruh.yigit@intel.com, stephen@networkplumber.org, bruce.richardson@intel.com, orika@nvidia.com, viacheslavo@nvidia.com, andrey.vesnovaty@gmail.com, mdr@ashroe.eu, nhorman@tuxdriver.com, ajit.khaparde@broadcom.com, samik.gupta@broadcom.com, Andrey Vesnovaty , Matan Azrad , Shahaf Shuler Date: Thu, 8 Oct 2020 15:18:44 +0300 Message-Id: <20201008121848.15330-2-andreyv@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201008121848.15330-1-andreyv@nvidia.com> References: <20201008121848.15330-1-andreyv@nvidia.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 1/4] common/mlx5: modify advanced Rx object via DevX X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Andrey Vesnovaty Implement mlx5_devx_cmd_modify_tir() to modify TIR object using DevX API. Add related structs in mlx5_prm.h. Signed-off-by: Andrey Vesnovaty --- drivers/common/mlx5/mlx5_devx_cmds.c | 84 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 10 +++ drivers/common/mlx5/mlx5_prm.h | 29 +++++++ .../common/mlx5/rte_common_mlx5_version.map | 1 + 4 files changed, 124 insertions(+) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 7c81ae15a9..2b109c4f65 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1080,6 +1080,90 @@ mlx5_devx_cmd_create_tir(void *ctx, return tir; } +/** + * Modify TIR using DevX API. + * + * @param[in] tir + * Pointer to TIR DevX object structure. + * @param [in] modify_tir_attr + * Pointer to TIR modification attributes structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +int +mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir, + struct mlx5_devx_modify_tir_attr *modify_tir_attr) +{ + struct mlx5_devx_tir_attr *tir_attr = &modify_tir_attr->tir; + uint32_t in[MLX5_ST_SZ_DW(modify_tir_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(modify_tir_out)] = {0}; + void *tir_ctx; + int ret; + + MLX5_SET(modify_tir_in, in, opcode, MLX5_CMD_OP_MODIFY_TIR); + MLX5_SET(modify_tir_in, in, tirn, modify_tir_attr->tirn); + MLX5_SET64(modify_tir_in, in, modify_bitmask, + modify_tir_attr->modify_bitmask); + + tir_ctx = MLX5_ADDR_OF(modify_rq_in, in, ctx); + if (modify_tir_attr->modify_bitmask & + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_LRO) { + MLX5_SET(tirc, tir_ctx, lro_timeout_period_usecs, + tir_attr->lro_timeout_period_usecs); + MLX5_SET(tirc, tir_ctx, lro_enable_mask, + tir_attr->lro_enable_mask); + MLX5_SET(tirc, tir_ctx, lro_max_msg_sz, + tir_attr->lro_max_msg_sz); + } + if (modify_tir_attr->modify_bitmask & + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE) + MLX5_SET(tirc, tir_ctx, indirect_table, + tir_attr->indirect_table); + if (modify_tir_attr->modify_bitmask & + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH) { + int i; + void *outer, *inner; + MLX5_SET(tirc, tir_ctx, rx_hash_symmetric, + tir_attr->rx_hash_symmetric); + MLX5_SET(tirc, tir_ctx, rx_hash_fn, tir_attr->rx_hash_fn); + for (i = 0; i < 10; i++) { + MLX5_SET(tirc, tir_ctx, rx_hash_toeplitz_key[i], + tir_attr->rx_hash_toeplitz_key[i]); + } + outer = MLX5_ADDR_OF(tirc, tir_ctx, + rx_hash_field_selector_outer); + MLX5_SET(rx_hash_field_select, outer, l3_prot_type, + tir_attr->rx_hash_field_selector_outer.l3_prot_type); + MLX5_SET(rx_hash_field_select, outer, l4_prot_type, + tir_attr->rx_hash_field_selector_outer.l4_prot_type); + MLX5_SET + (rx_hash_field_select, outer, selected_fields, + tir_attr->rx_hash_field_selector_outer.selected_fields); + inner = MLX5_ADDR_OF(tirc, tir_ctx, + rx_hash_field_selector_inner); + MLX5_SET(rx_hash_field_select, inner, l3_prot_type, + tir_attr->rx_hash_field_selector_inner.l3_prot_type); + MLX5_SET(rx_hash_field_select, inner, l4_prot_type, + tir_attr->rx_hash_field_selector_inner.l4_prot_type); + MLX5_SET + (rx_hash_field_select, inner, selected_fields, + tir_attr->rx_hash_field_selector_inner.selected_fields); + } + if (modify_tir_attr->modify_bitmask & + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_SELF_LB_EN) { + MLX5_SET(tirc, tir_ctx, self_lb_block, tir_attr->self_lb_block); + } + ret = mlx5_glue->devx_obj_modify(tir->obj, in, sizeof(in), + out, sizeof(out)); + if (ret) { + DRV_LOG(ERR, "Failed to modify TIR using DevX"); + rte_errno = errno; + return -errno; + } + return ret; +} + /** * Create RQT using DevX API. * diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index 1c84cea851..ba6cb6ed51 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -190,6 +190,13 @@ struct mlx5_devx_tir_attr { struct mlx5_rx_hash_field_select rx_hash_field_selector_inner; }; +/* TIR attributes structure, used by TIR modify */ +struct mlx5_devx_modify_tir_attr { + uint32_t tirn:24; + uint64_t modify_bitmask; + struct mlx5_devx_tir_attr tir; +}; + /* RQT attributes structure, used by RQT operations. 
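For reference, a minimal caller sketch (not part of the patch; the helper name and new_rqt_id value are illustrative) of how the new command can retarget an existing TIR to another RQT. Per the bitmask handling above, only the TIR context fields selected by modify_bitmask are applied:

	/* Illustrative sketch only: point an existing TIR at a new RQT. */
	static int
	example_retarget_tir(struct mlx5_devx_obj *tir, uint32_t new_rqt_id)
	{
		struct mlx5_devx_modify_tir_attr attr = { 0 };

		attr.tirn = tir->id; /* TIR number to modify. */
		attr.modify_bitmask =
			MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE;
		attr.tir.indirect_table = new_rqt_id;
		return mlx5_devx_cmd_modify_tir(tir, &attr);
	}
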
*/ struct mlx5_devx_rqt_attr { uint8_t rq_type; @@ -434,6 +441,9 @@ __rte_internal int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt, struct mlx5_devx_rqt_attr *rqt_attr); __rte_internal +int mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir, + struct mlx5_devx_modify_tir_attr *tir_attr); +__rte_internal int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj, uint32_t ids[], uint32_t num); diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 20f2fccd4f..2dbae445b3 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -830,6 +830,7 @@ enum { MLX5_CMD_OP_ACCESS_REGISTER = 0x805, MLX5_CMD_OP_ALLOC_TRANSPORT_DOMAIN = 0x816, MLX5_CMD_OP_CREATE_TIR = 0x900, + MLX5_CMD_OP_MODIFY_TIR = 0x901, MLX5_CMD_OP_CREATE_SQ = 0X904, MLX5_CMD_OP_MODIFY_SQ = 0X905, MLX5_CMD_OP_CREATE_RQ = 0x908, @@ -1858,6 +1859,34 @@ struct mlx5_ifc_create_tir_in_bits { struct mlx5_ifc_tirc_bits ctx; }; +enum { + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_LRO = 1ULL << 0, + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE = 1ULL << 1, + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH = 1ULL << 2, + /* bit 3 - tunneled_offload_en modify not supported */ + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_SELF_LB_EN = 1ULL << 4, +}; + +struct mlx5_ifc_modify_tir_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_modify_tir_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + u8 reserved_at_40[0x8]; + u8 tirn[0x18]; + u8 reserved_at_60[0x20]; + u8 modify_bitmask[0x40]; + u8 reserved_at_c0[0x40]; + struct mlx5_ifc_tirc_bits ctx; +}; + enum { MLX5_INLINE_Q_TYPE_RQ = 0x0, MLX5_INLINE_Q_TYPE_VIRTQ = 0x1, diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map index c4d57c08a7..884001ca7d 100644 --- a/drivers/common/mlx5/rte_common_mlx5_version.map +++ b/drivers/common/mlx5/rte_common_mlx5_version.map @@ -30,6 +30,7 @@ INTERNAL { mlx5_devx_cmd_modify_rq; mlx5_devx_cmd_modify_rqt; mlx5_devx_cmd_modify_sq; + mlx5_devx_cmd_modify_tir; mlx5_devx_cmd_modify_virtq; mlx5_devx_cmd_qp_query_tis_td; mlx5_devx_cmd_query_hca_attr; From patchwork Thu Oct 8 12:18:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrey Vesnovaty X-Patchwork-Id: 80048 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C9F2BA04BC; Thu, 8 Oct 2020 14:19:27 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1A7751BF23; Thu, 8 Oct 2020 14:19:12 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 21E8B1BF23 for ; Thu, 8 Oct 2020 14:19:11 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from andreyv@nvidia.com) with SMTP; 8 Oct 2020 15:19:07 +0300 Received: from nvidia.com (r-arch-host11.mtr.labs.mlnx [10.213.43.60]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 098CJ3UT014340; Thu, 8 Oct 2020 15:19:07 +0300 From: Andrey Vesnovaty To: dev@dpdk.org Cc: jer@marvell.com, jerinjacobk@gmail.com, thomas@monjalon.net, ferruh.yigit@intel.com, stephen@networkplumber.org, bruce.richardson@intel.com, orika@nvidia.com, viacheslavo@nvidia.com, andrey.vesnovaty@gmail.com, 
mdr@ashroe.eu, nhorman@tuxdriver.com, ajit.khaparde@broadcom.com, samik.gupta@broadcom.com, Andrey Vesnovaty , Matan Azrad , Shahaf Shuler Date: Thu, 8 Oct 2020 15:18:45 +0300 Message-Id: <20201008121848.15330-3-andreyv@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201008121848.15330-1-andreyv@nvidia.com> References: <20201008121848.15330-1-andreyv@nvidia.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 2/4] net/mlx5: modify hash Rx queue objects X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Andrey Vesnovaty Implement mlx5_hrxq_modify() to modify hash RX queue object. This commit relays on capability to modify TIR object via DevX. Signed-off-by: Andrey Vesnovaty --- drivers/net/mlx5/mlx5.h | 4 + drivers/net/mlx5/mlx5_devx.c | 173 +++++++++++++++++++++++++++-------- drivers/net/mlx5/mlx5_rxq.c | 103 +++++++++++++++++++++ drivers/net/mlx5/mlx5_rxtx.h | 5 +- 4 files changed, 246 insertions(+), 39 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 87d3c15f07..7b85f64167 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -784,6 +784,10 @@ struct mlx5_obj_ops { void (*ind_table_destroy)(struct mlx5_ind_table_obj *ind_tbl); int (*hrxq_new)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, int tunnel __rte_unused); + int (*hrxq_modify)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, + const uint8_t *rss_key, + uint64_t hash_fields, + const struct mlx5_ind_table_obj *ind_tbl); void (*hrxq_destroy)(struct mlx5_hrxq *hrxq); int (*drop_action_create)(struct rte_eth_dev *dev); void (*drop_action_destroy)(struct rte_eth_dev *dev); diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index 11bda32557..600afd5929 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -731,33 +731,39 @@ mlx5_devx_ind_table_destroy(struct mlx5_ind_table_obj *ind_tbl) } /** - * Create an Rx Hash queue. + * Set TIR attribute struct with relevant input values. * - * @param dev + * @param[in] dev * Pointer to Ethernet device. - * @param hrxq - * Pointer to Rx Hash queue. - * @param tunnel + * @param[in] rss_key + * RSS key for the Rx hash queue. + * @param[in] hash_fields + * Verbs protocol hash field to make the RSS on. + * @param[in] ind_tbl + * Indirection table for TIR. + * @param[in] queues + * Queues entering in hash queue. In case of empty hash_fields only the + * first queue index will be taken for the indirection table. + * @param[in] queues_n + * Number of queues. + * @param[in] tunnel * Tunnel type. + * @param[out] tir_attr + * Parameters structure for TIR creation/modification. * * @return - * 0 on success, a negative errno value otherwise and rte_errno is set. + * The Verbs/DevX object initialised index, 0 otherwise and rte_errno is set. 
*/ -static int -mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, - int tunnel __rte_unused) +static void +mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key, + uint64_t hash_fields, + const struct mlx5_ind_table_obj *ind_tbl, + int tunnel, enum mlx5_rxq_type rxq_obj_type, + struct mlx5_devx_tir_attr *tir_attr) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_ind_table_obj *ind_tbl = hrxq->ind_table; - struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]]; - struct mlx5_rxq_ctrl *rxq_ctrl = - container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); - struct mlx5_devx_tir_attr tir_attr; - const uint8_t *rss_key = hrxq->rss_key; - uint64_t hash_fields = hrxq->hash_fields; bool lro = true; uint32_t i; - int err; /* Enable TIR LRO only if all the queues were configured for. */ for (i = 0; i < ind_tbl->queues_n; ++i) { @@ -766,26 +772,24 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, break; } } - memset(&tir_attr, 0, sizeof(tir_attr)); - tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT; - tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ; - tir_attr.tunneled_offload_en = !!tunnel; + memset(tir_attr, 0, sizeof(*tir_attr)); + tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT; + tir_attr->rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ; + tir_attr->tunneled_offload_en = !!tunnel; /* If needed, translate hash_fields bitmap to PRM format. */ if (hash_fields) { - struct mlx5_rx_hash_field_select *rx_hash_field_select = NULL; + struct mlx5_rx_hash_field_select *rx_hash_field_select = #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT - rx_hash_field_select = hash_fields & IBV_RX_HASH_INNER ? - &tir_attr.rx_hash_field_selector_inner : - &tir_attr.rx_hash_field_selector_outer; -#else - rx_hash_field_select = &tir_attr.rx_hash_field_selector_outer; + hash_fields & IBV_RX_HASH_INNER ? + &tir_attr->rx_hash_field_selector_inner : #endif + &tir_attr->rx_hash_field_selector_outer; /* 1 bit: 0: IPv4, 1: IPv6. */ rx_hash_field_select->l3_prot_type = !!(hash_fields & MLX5_IPV6_IBV_RX_HASH); /* 1 bit: 0: TCP, 1: UDP. */ rx_hash_field_select->l4_prot_type = - !!(hash_fields & MLX5_UDP_IBV_RX_HASH); + !!(hash_fields & MLX5_UDP_IBV_RX_HASH); /* Bitmask which sets which fields to use in RX Hash. 
*/ rx_hash_field_select->selected_fields = ((!!(hash_fields & MLX5_L3_SRC_IBV_RX_HASH)) << @@ -797,20 +801,53 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, (!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) << MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT; } - if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN) - tir_attr.transport_domain = priv->sh->td->id; + if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN) + tir_attr->transport_domain = priv->sh->td->id; else - tir_attr.transport_domain = priv->sh->tdn; - memcpy(tir_attr.rx_hash_toeplitz_key, rss_key, MLX5_RSS_HASH_KEY_LEN); - tir_attr.indirect_table = ind_tbl->rqt->id; + tir_attr->transport_domain = priv->sh->tdn; + memcpy(tir_attr->rx_hash_toeplitz_key, rss_key, MLX5_RSS_HASH_KEY_LEN); + tir_attr->indirect_table = ind_tbl->rqt->id; if (dev->data->dev_conf.lpbk_mode) - tir_attr.self_lb_block = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST; + tir_attr->self_lb_block = + MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST; if (lro) { - tir_attr.lro_timeout_period_usecs = priv->config.lro.timeout; - tir_attr.lro_max_msg_sz = priv->max_lro_msg_size; - tir_attr.lro_enable_mask = MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO | - MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO; + tir_attr->lro_timeout_period_usecs = priv->config.lro.timeout; + tir_attr->lro_max_msg_sz = priv->max_lro_msg_size; + tir_attr->lro_enable_mask = + MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO | + MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO; } +} + +/** + * Create an Rx Hash queue. + * + * @param dev + * Pointer to Ethernet device. + * @param hrxq + * Pointer to Rx Hash queue. + * @param tunnel + * Tunnel type. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, + int tunnel __rte_unused) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_ind_table_obj *ind_tbl = hrxq->ind_table; + struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]]; + struct mlx5_rxq_ctrl *rxq_ctrl = + container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + struct mlx5_devx_tir_attr tir_attr = {0}; + const uint8_t *rss_key = hrxq->rss_key; + uint64_t hash_fields = hrxq->hash_fields; + int err; + + mlx5_devx_tir_attr_set(dev, rss_key, hash_fields, ind_tbl, tunnel, + rxq_ctrl->type, &tir_attr); hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr); if (!hrxq->tir) { DRV_LOG(ERR, "Port %u cannot create DevX TIR.", @@ -847,6 +884,65 @@ mlx5_devx_tir_destroy(struct mlx5_hrxq *hrxq) claim_zero(mlx5_devx_cmd_destroy(hrxq->tir)); } +/** + * Modify an Rx Hash queue configuration. + * + * @param dev + * Pointer to Ethernet device. + * @param hrxq + * Hash Rx queue to modify. + * @param rss_key + * RSS key for the Rx hash queue. + * @param hash_fields + * Verbs protocol hash field to make the RSS on. + * @param queues + * Queues entering in hash queue. In case of empty hash_fields only the + * first queue index will be taken for the indirection table. + * @param queues_n + * Number of queues. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, + const uint8_t *rss_key, + uint64_t hash_fields, + const struct mlx5_ind_table_obj *ind_tbl) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]]; + struct mlx5_rxq_ctrl *rxq_ctrl = + container_of(rxq_data, struct mlx5_rxq_ctrl, rxq); + enum mlx5_rxq_type rxq_obj_type = rxq_ctrl->type; + struct mlx5_devx_modify_tir_attr modify_tir = {0}; + + /* + * untested for modification fields: + * - rx_hash_symmetric not set in hrxq_new(), + * - rx_hash_fn set hard-coded in hrxq_new(), + * - lro_xxx not set after rxq setup + */ + if (ind_tbl != hrxq->ind_table) + modify_tir.modify_bitmask |= + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE; + if (hash_fields != hrxq->hash_fields || + memcmp(hrxq->rss_key, rss_key, MLX5_RSS_HASH_KEY_LEN)) + modify_tir.modify_bitmask |= + MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH; + mlx5_devx_tir_attr_set(dev, rss_key, hash_fields, ind_tbl, + 0, /* N/A - tunnel modification unsupported */ + rxq_obj_type, &modify_tir.tir); + modify_tir.tirn = hrxq->tir->id; + if (mlx5_devx_cmd_modify_tir(hrxq->tir, &modify_tir)) { + DRV_LOG(ERR, "port %u cannot modify DevX TIR", + dev->data->port_id); + rte_errno = errno; + return -rte_errno; + } + return 0; +} + /** + * Create a DevX drop action for Rx Hash queue. + * @@ -1357,6 +1453,7 @@ struct mlx5_obj_ops devx_obj_ops = { .ind_table_destroy = mlx5_devx_ind_table_destroy, .hrxq_new = mlx5_devx_hrxq_new, .hrxq_destroy = mlx5_devx_tir_destroy, + .hrxq_modify = mlx5_devx_hrxq_modify, .drop_action_create = mlx5_devx_drop_action_create, .drop_action_destroy = mlx5_devx_drop_action_destroy, .txq_obj_new = mlx5_txq_devx_obj_new, diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index f1d8373079..deb07428df 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1706,6 +1706,29 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx) return MLX5_RXQ_TYPE_UNDEFINED; } +/** + * Match queues listed in arguments to queues contained in indirection table + * object. + * + * @param ind_tbl + * Pointer to indirection table to match. + * @param queues + * Queues to match to queues in the indirection table. + * @param queues_n + * Number of queues in the array. + * + * @return + * 1 if all queues in the indirection table match, 0 otherwise. + */ +static int +mlx5_ind_table_obj_match_queues(const struct mlx5_ind_table_obj *ind_tbl, + const uint16_t *queues, uint32_t queues_n) +{ + return (ind_tbl->queues_n == queues_n) && + (!memcmp(ind_tbl->queues, queues, + ind_tbl->queues_n * sizeof(ind_tbl->queues[0]))); +} + /** + * Get an indirection table. + * @@ -1902,6 +1925,86 @@ mlx5_hrxq_get(struct rte_eth_dev *dev, return 0; } +/** + * Modify an Rx Hash queue configuration. + * + * @param dev + * Pointer to Ethernet device. + * @param hrxq + * Index to Hash Rx queue to modify. + * @param rss_key + * RSS key for the Rx hash queue. + * @param rss_key_len + * RSS key length. + * @param hash_fields + * Verbs protocol hash field to make the RSS on. + * @param queues + * Queues entering in hash queue. In case of empty hash_fields only the + * first queue index will be taken for the indirection table. + * @param queues_n + * Number of queues. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
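A hypothetical caller sketch for the mlx5_hrxq_modify() entry point declared below (not part of the patch; the helper and queue values are illustrative): re-spread an existing hash Rx queue over a different queue set while keeping its key and hash fields, which exercises only the INDIRECT_TABLE part of the bitmask logic above:

	/* Illustrative sketch only: move an existing hrxq onto queues 0-3. */
	static int
	example_respread_hrxq(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
			      uint32_t hrxq_idx)
	{
		static const uint16_t queues[] = { 0, 1, 2, 3 };

		return mlx5_hrxq_modify(dev, hrxq_idx, hrxq->rss_key,
					MLX5_RSS_HASH_KEY_LEN,
					hrxq->hash_fields,
					queues, RTE_DIM(queues));
	}
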
+ */ +int +mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx, + const uint8_t *rss_key, uint32_t rss_key_len, + uint64_t hash_fields, + const uint16_t *queues, uint32_t queues_n) +{ + int err; + struct mlx5_ind_table_obj *ind_tbl = NULL; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_hrxq *hrxq = + mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx); + int ret; + + if (!hrxq) { + rte_errno = EINVAL; + return -rte_errno; + } + /* validations */ + if (hrxq->rss_key_len != rss_key_len) { + /* rss_key_len is fixed size 40 byte & not supposed to change */ + rte_errno = EINVAL; + return -rte_errno; + } + + queues_n = hash_fields ? queues_n : 1; + if (mlx5_ind_table_obj_match_queues(hrxq->ind_table, + queues, queues_n)) { + ind_tbl = hrxq->ind_table; + } else { + ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n); + if (!ind_tbl) + ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n); + } + if (!ind_tbl) { + rte_errno = ENOMEM; + return -rte_errno; + } + ret = priv->obj_ops.hrxq_modify(dev, hrxq, rss_key, hash_fields, + ind_tbl); + if (ret) { + rte_errno = errno; + goto error; + } + if (ind_tbl != hrxq->ind_table) { + mlx5_ind_table_obj_release(dev, hrxq->ind_table); + hrxq->ind_table = ind_tbl; + } + hrxq->hash_fields = hash_fields; + memcpy(hrxq->rss_key, rss_key, rss_key_len); + return 0; +error: + err = rte_errno; + if (ind_tbl != hrxq->ind_table) + mlx5_ind_table_obj_release(dev, ind_tbl); + rte_errno = err; + return -rte_errno; +} + /** * Release the hash Rx queue. * diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h index 674296ee98..09499cc730 100644 --- a/drivers/net/mlx5/mlx5_rxtx.h +++ b/drivers/net/mlx5/mlx5_rxtx.h @@ -347,7 +347,10 @@ void mlx5_drop_action_destroy(struct rte_eth_dev *dev); uint64_t mlx5_get_rx_port_offloads(void); uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev); void mlx5_rxq_timestamp_set(struct rte_eth_dev *dev); - +int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx, + const uint8_t *rss_key, uint32_t rss_key_len, + uint64_t hash_fields, + const uint16_t *queues, uint32_t queues_n); /* mlx5_txq.c */ From patchwork Thu Oct 8 12:18:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrey Vesnovaty X-Patchwork-Id: 80050 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 53965A04BC; Thu, 8 Oct 2020 14:20:08 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B9EF61BFC6; Thu, 8 Oct 2020 14:19:17 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 1819E1BFC3 for ; Thu, 8 Oct 2020 14:19:16 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from andreyv@nvidia.com) with SMTP; 8 Oct 2020 15:19:09 +0300 Received: from nvidia.com (r-arch-host11.mtr.labs.mlnx [10.213.43.60]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 098CJ3UU014340; Thu, 8 Oct 2020 15:19:09 +0300 From: Andrey Vesnovaty To: dev@dpdk.org Cc: jer@marvell.com, jerinjacobk@gmail.com, thomas@monjalon.net, ferruh.yigit@intel.com, stephen@networkplumber.org, bruce.richardson@intel.com, orika@nvidia.com, viacheslavo@nvidia.com, andrey.vesnovaty@gmail.com, mdr@ashroe.eu, nhorman@tuxdriver.com, ajit.khaparde@broadcom.com, 
samik.gupta@broadcom.com, Andrey Vesnovaty , Matan Azrad , Shahaf Shuler Date: Thu, 8 Oct 2020 15:18:46 +0300 Message-Id: <20201008121848.15330-4-andreyv@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201008121848.15330-1-andreyv@nvidia.com> References: <20201008121848.15330-1-andreyv@nvidia.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 3/4] net/mlx5: shared action PMD X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Andrey Vesnovaty Implement rte_flow shared action API for mlx5 PMD. Handle shared action on flow create/destroy. Signed-off-by: Andrey Vesnovaty --- drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5.h | 2 + drivers/net/mlx5/mlx5_defs.h | 3 + drivers/net/mlx5/mlx5_flow.c | 497 ++++++++++++++++++++++++++++++++--- drivers/net/mlx5/mlx5_flow.h | 86 ++++++ 5 files changed, 557 insertions(+), 32 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index e5ca392fed..562c4a3e33 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1384,6 +1384,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) * then this will return directly without any action. */ mlx5_flow_list_flush(dev, &priv->flows, true); + mlx5_shared_action_flush(dev); mlx5_flow_meter_flush(dev, NULL); /* Free the intermediate buffers for flow creation. */ mlx5_flow_free_intermediate(dev); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 7b85f64167..879cc9a51e 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -877,6 +877,8 @@ struct mlx5_priv { uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */ struct mlx5_mp_id mp_id; /* ID of a multi-process process */ LIST_HEAD(fdir, mlx5_fdir_flow) fdir_flows; /* fdir flows. 
*/ + LIST_HEAD(shared_action, rte_flow_shared_action) shared_actions; + /* shared actions */ }; #define PORT_ID(priv) ((priv)->dev_data->port_id) diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 0df47391ee..22e41df1eb 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -197,6 +197,9 @@ #define MLX5_HAIRPIN_QUEUE_STRIDE 6 #define MLX5_HAIRPIN_JUMBO_LOG_SIZE (14 + 2) +/* Maximum number of shared actions supported by rte_flow */ +#define MLX5_MAX_SHARED_ACTIONS 1 + /* Definition of static_assert found in /usr/include/assert.h */ #ifndef HAVE_STATIC_ASSERT #define static_assert _Static_assert diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index a94f63005c..91e9e546bc 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -220,6 +220,26 @@ static const struct rte_flow_expand_node mlx5_support_expansion[] = { }, }; +static struct rte_flow_shared_action * +mlx5_shared_action_create(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); +static int mlx5_shared_action_destroy + (struct rte_eth_dev *dev, + struct rte_flow_shared_action *shared_action, + struct rte_flow_error *error); +static int mlx5_shared_action_update + (struct rte_eth_dev *dev, + struct rte_flow_shared_action *shared_action, + const struct rte_flow_action *action, + struct rte_flow_error *error); +static int mlx5_shared_action_query + (struct rte_eth_dev *dev, + const struct rte_flow_shared_action *action, + void *data, + struct rte_flow_error *error); + static const struct rte_flow_ops mlx5_flow_ops = { .validate = mlx5_flow_validate, .create = mlx5_flow_create, @@ -229,6 +249,10 @@ static const struct rte_flow_ops mlx5_flow_ops = { .query = mlx5_flow_query, .dev_dump = mlx5_flow_dev_dump, .get_aged_flows = mlx5_flow_get_aged_flows, + .shared_action_create = mlx5_shared_action_create, + .shared_action_destroy = mlx5_shared_action_destroy, + .shared_action_update = mlx5_shared_action_update, + .shared_action_query = mlx5_shared_action_query, }; /* Convert FDIR request to Generic flow. */ @@ -995,16 +1019,10 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, /* * Validate the rss action. * - * @param[in] action - * Pointer to the queue action. - * @param[in] action_flags - * Bit-fields that holds the actions detected until now. * @param[in] dev * Pointer to the Ethernet device structure. - * @param[in] attr - * Attributes of flow that includes this action. - * @param[in] item_flags - * Items that were detected. + * @param[in] action + * Pointer to the queue action. * @param[out] error * Pointer to error structure. * @@ -1012,23 +1030,14 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action, * 0 on success, a negative errno value otherwise and rte_errno is set. 
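For context, a hedged application-level sketch of how the shared-action callbacks registered in mlx5_flow_ops above are reached; the rte_flow_shared_action_* entry points and the conf layout come from the generic ethdev series this PMD work depends on, so their exact shape is assumed here:

	/* Illustrative sketch only: create one shared RSS action on port 0. */
	uint16_t reta[] = { 0, 1, 2, 3 };
	struct rte_flow_action_rss rss_conf = {
		.types = ETH_RSS_IP,
		.queue = reta,
		.queue_num = RTE_DIM(reta),
	};
	struct rte_flow_action rss_action = {
		.type = RTE_FLOW_ACTION_TYPE_RSS,
		.conf = &rss_conf,
	};
	struct rte_flow_shared_action_conf conf = { .ingress = 1 };
	struct rte_flow_error err;
	struct rte_flow_shared_action *handle =
		rte_flow_shared_action_create(0, &conf, &rss_action, &err);
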
*/ int -mlx5_flow_validate_action_rss(const struct rte_flow_action *action, - uint64_t action_flags, - struct rte_eth_dev *dev, - const struct rte_flow_attr *attr, - uint64_t item_flags, - struct rte_flow_error *error) +mlx5_validate_action_rss(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_action_rss *rss = action->conf; - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); unsigned int i; - if (action_flags & MLX5_FLOW_FATE_ACTIONS) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "can't have 2 fate actions" - " in same flow"); if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT && rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ) return rte_flow_error_set(error, ENOTSUP, @@ -1074,15 +1083,17 @@ mlx5_flow_validate_action_rss(const struct rte_flow_action *action, if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) && !(rss->types & ETH_RSS_IP)) return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, - "L3 partial RSS requested but L3 RSS" - " type not specified"); + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + NULL, + "L3 partial RSS requested but L3 " + "RSS type not specified"); if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) && !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP))) return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, - "L4 partial RSS requested but L4 RSS" - " type not specified"); + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + NULL, + "L4 partial RSS requested but L4 " + "RSS type not specified"); if (!priv->rxqs_n) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF, @@ -1099,17 +1110,62 @@ mlx5_flow_validate_action_rss(const struct rte_flow_action *action, &rss->queue[i], "queue index out of range"); if (!(*priv->rxqs)[rss->queue[i]]) return rte_flow_error_set - (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF, + (error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, &rss->queue[i], "queue is not configured"); } + return 0; +} + +/* + * Validate the rss action. + * + * @param[in] action + * Pointer to the queue action. + * @param[in] action_flags + * Bit-fields that holds the actions detected until now. + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] attr + * Attributes of flow that includes this action. + * @param[in] item_flags + * Items that were detected. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +int +mlx5_flow_validate_action_rss(const struct rte_flow_action *action, + uint64_t action_flags, + struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + uint64_t item_flags, + struct rte_flow_error *error) +{ + const struct rte_flow_action_rss *rss = action->conf; + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int ret; + + if (action_flags & MLX5_FLOW_FATE_ACTIONS) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "can't have 2 fate actions" + " in same flow"); + ret = mlx5_validate_action_rss(dev, action, error); + if (ret) + return ret; if (attr->egress) return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + NULL, "rss action not supported for " "egress"); if (rss->level > 1 && !tunnel) return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, + RTE_FLOW_ERROR_TYPE_ACTION_CONF, + NULL, "inner RSS is not supported for " "non-tunnel flows"); if ((item_flags & MLX5_FLOW_LAYER_ECPRI) && @@ -2726,6 +2782,131 @@ flow_get_rss_action(const struct rte_flow_action actions[]) return NULL; } +/* maps shared action to translated non shared in some actions array */ +struct mlx5_translated_shared_action { + struct rte_flow_shared_action *action; /**< Shared action */ + int index; /**< Index in related array of rte_flow_action */ +}; + +/** + * Translates actions of type RTE_FLOW_ACTION_TYPE_SHARED to related + * non shared action if translation possible. + * This functionality used to run same execution path for both shared & non + * shared actions on flow create. All necessary preparations for shared + * action handling should be preformed on *shared* actions list returned by + * from this call. + * + * @param[in] actions + * List of actions to translate. + * @param[out] shared + * List to store translated shared actions. + * @param[in, out] shared_n + * Size of *shared* array. On return should be updated with number of shared + * actions retrieved from the *actions* list. + * @param[out] translated_actions + * List of actions where all shared actions were translated to non shared + * if possible. NULL if no translation took place. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
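To make the translation concrete, a hypothetical before/after pair (the handle name is illustrative, taken from the creation sketch earlier): a flow rule whose action list carries a shared RSS handle is rewritten so the rest of the flow-create path only ever sees a plain RSS action:

	/* Illustrative sketch only. Action list handed to flow create: */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SHARED, .conf = handle },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/*
	 * After flow_shared_actions_translate(), entry 0 of the translated
	 * copy becomes { RTE_FLOW_ACTION_TYPE_RSS, &handle->rss.origin },
	 * while shared[0] records the handle and its index so the flow can
	 * later be attached to the shared action.
	 */
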
+ */ +static int +flow_shared_actions_translate(const struct rte_flow_action actions[], + struct mlx5_translated_shared_action *shared, + int *shared_n, + struct rte_flow_action **translated_actions, + struct rte_flow_error *error) +{ + struct rte_flow_action *translated = NULL; + int n; + int copied_n = 0; + struct mlx5_translated_shared_action *shared_end = NULL; + + for (n = 0; actions[n].type != RTE_FLOW_ACTION_TYPE_END; n++) { + if (actions[n].type != RTE_FLOW_ACTION_TYPE_SHARED) + continue; + if (copied_n == *shared_n) { + return rte_flow_error_set + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "too many shared actions"); + } + rte_memcpy(&shared[copied_n].action, &actions[n].conf, + sizeof(actions[n].conf)); + shared[copied_n].index = n; + copied_n++; + } + n++; + *shared_n = copied_n; + if (!copied_n) + return 0; + translated = rte_calloc(__func__, n, sizeof(struct rte_flow_action), 0); + rte_memcpy(translated, actions, n * sizeof(struct rte_flow_action)); + for (shared_end = shared + copied_n; shared < shared_end; shared++) { + const struct rte_flow_shared_action *shared_action; + + shared_action = shared->action; + switch (shared_action->type) { + case MLX5_FLOW_ACTION_SHARED_RSS: + translated[shared->index].type = + RTE_FLOW_ACTION_TYPE_RSS; + translated[shared->index].conf = + &shared_action->rss.origin; + break; + default: + rte_free(translated); + return rte_flow_error_set + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "invalid shared action type"); + } + } + *translated_actions = translated; + return 0; +} + +/** + * Get Shared RSS action from the action list. + * + * @param[in] shared + * Pointer to the list of actions. + * @param[in] shared_n + * Actions list length. + * + * @return + * Pointer to the MLX5 RSS action if exist, else return NULL. 
+ */ +static struct mlx5_shared_action_rss * +flow_get_shared_rss_action(struct mlx5_translated_shared_action *shared, + int shared_n) +{ + struct mlx5_translated_shared_action *shared_end; + + for (shared_end = shared + shared_n; shared < shared_end; shared++) { + struct rte_flow_shared_action *shared_action; + + shared_action = shared->action; + switch (shared_action->type) { + case MLX5_FLOW_ACTION_SHARED_RSS: + rte_atomic32_inc(&shared_action->refcnt); + return &shared_action->rss; + default: + break; + } + } + return NULL; +} + +struct rte_flow_shared_action * +mlx5_flow_get_shared_rss(struct rte_flow *flow) +{ + if (flow->shared_rss) + return container_of(flow->shared_rss, + struct rte_flow_shared_action, rss); + else + return NULL; +} + static unsigned int find_graph_root(const struct rte_flow_item pattern[], uint32_t rss_level) { @@ -4324,13 +4505,16 @@ static uint32_t flow_list_create(struct rte_eth_dev *dev, uint32_t *list, const struct rte_flow_attr *attr, const struct rte_flow_item items[], - const struct rte_flow_action actions[], + const struct rte_flow_action original_actions[], bool external, struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow *flow = NULL; struct mlx5_flow *dev_flow; const struct rte_flow_action_rss *rss; + struct mlx5_translated_shared_action + shared_actions[MLX5_MAX_SHARED_ACTIONS]; + int shared_actions_n = MLX5_MAX_SHARED_ACTIONS; union { struct rte_flow_expand_rss buf; uint8_t buffer[2048]; @@ -4350,14 +4534,23 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, struct rte_flow_expand_rss *buf = &expand_buffer.buf; struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *) priv->rss_desc)[!!priv->flow_idx]; - const struct rte_flow_action *p_actions_rx = actions; + const struct rte_flow_action *p_actions_rx; uint32_t i; uint32_t idx = 0; int hairpin_flow; uint32_t hairpin_id = 0; struct rte_flow_attr attr_tx = { .priority = 0 }; - int ret; + const struct rte_flow_action *actions; + struct rte_flow_action *translated_actions = NULL; + int ret = flow_shared_actions_translate(original_actions, + shared_actions, + &shared_actions_n, + &translated_actions, error); + if (ret < 0) + return 0; + actions = (translated_actions) ? translated_actions : original_actions; + p_actions_rx = actions; hairpin_flow = flow_check_hairpin_split(dev, attr, actions); ret = flow_drv_validate(dev, attr, items, p_actions_rx, external, hairpin_flow, error); @@ -4409,6 +4602,8 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, buf->entries = 1; buf->entry[0].pattern = (void *)(uintptr_t)items; } + flow->shared_rss = flow_get_shared_rss_action(shared_actions, + shared_actions_n); /* * Record the start index when there is a nested call. All sub-flows * need to be translated before another calling. @@ -4480,6 +4675,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx, flow, next); flow_rxq_flags_set(dev, flow); + rte_free(translated_actions); /* Nested flow creation index recovery. */ priv->flow_idx = priv->flow_nested_idx; if (priv->flow_nested_idx) @@ -4494,6 +4690,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list, rte_errno = ret; /* Restore rte_errno. 
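Seen from the application, the flow_list_create() changes above are reached with nothing more than the handle in the action list; a hedged sketch (pattern, handle and error objects assumed from the earlier sketches):

	/* Illustrative sketch only: one flow rule reusing the shared RSS. */
	struct rte_flow_attr flow_attr = { .ingress = 1 };
	struct rte_flow_action flow_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SHARED, .conf = handle },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow *flow =
		rte_flow_create(0, &flow_attr, pattern, flow_actions, &err);
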
*/ error_before_flow: ret = rte_errno; + rte_free(translated_actions); if (hairpin_id) mlx5_flow_id_release(priv->sh->flow_id_pool, hairpin_id); @@ -6310,3 +6507,239 @@ mlx5_flow_get_aged_flows(struct rte_eth_dev *dev, void **contexts, dev->data->port_id); return -ENOTSUP; } + +/** + * Retrieve driver ops struct. + * + * @param[in] dev + * Pointer to the dev structure. + * @param[in] error_message + * Error message to set if driver ops struct not found. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * Pointer to driver ops on success, otherwise NULL and rte_errno is set. + */ +static const struct mlx5_flow_driver_ops * +flow_drv_dv_ops_get(struct rte_eth_dev *dev, + const char *error_message, + struct rte_flow_error *error) +{ + struct rte_flow_attr attr = { .transfer = 0 }; + + if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_DV) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, error_message); + DRV_LOG(ERR, "port %u %s.", dev->data->port_id, error_message); + return NULL; + } + + return flow_get_drv_ops(MLX5_FLOW_TYPE_DV); +} + +/* Wrapper for driver action_validate op callback */ +static int +flow_drv_action_validate(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev, + "action registration unsupported", error); + return (fops) ? fops->action_validate(dev, conf, action, error) + : -rte_errno; +} + +/* Wrapper for driver action_create op callback */ +static struct rte_flow_shared_action * +flow_drv_action_create(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev, + "action registration unsupported", error); + return (fops) ? fops->action_create(dev, conf, action, error) : NULL; +} + +/** + * Destroys the shared action by handle. + * + * @param dev + * Pointer to Ethernet device structure. + * @param[in] action + * Handle for the shared action to be destroyed. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + * + * @note: wrapper for driver action_create op callback. + */ +static int +mlx5_shared_action_destroy(struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev, + "action registration unsupported", error); + return (fops) ? fops->action_destroy(dev, action, error) : -rte_errno; +} + +/* Wrapper for driver action_destroy op callback */ +static int +flow_drv_action_update(struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + const void *action_conf, + struct rte_flow_error *error) +{ + const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev, + "action registration unsupported", error); + return (fops) ? fops->action_update(dev, action, + action_conf, error) + : -rte_errno; +} + +/** + * Create shared action for reuse in multiple flow rules. + * + * @param dev + * Pointer to Ethernet device structure. + * @param[in] action + * Action configuration for shared action creation. + * @param[out] error + * Perform verbose error reporting if not NULL. 
PMDs initialize this + * structure in case of error only. + * @return + * A valid handle in case of success, NULL otherwise and rte_errno is set. + */ +static struct rte_flow_shared_action * +mlx5_shared_action_create(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + if (flow_drv_action_validate(dev, conf, action, error)) + return NULL; + return flow_drv_action_create(dev, conf, action, error); +} + +/** + * Updates inplace the shared action configuration pointed by *action* handle + * with the configuration provided as *update* argument. + * The update of the shared action configuration effects all flow rules reusing + * the action via handle. + * + * @param dev + * Pointer to Ethernet device structure. + * @param[in] action + * Handle for the shared action to be updated. + * @param[in] update + * Action specification used to modify the action pointed by handle. + * *update* should be of same type with the action pointed by the *action* + * handle argument, otherwise considered as invalid. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_shared_action_update(struct rte_eth_dev *dev, + struct rte_flow_shared_action *shared_action, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + int ret; + + switch (shared_action->type) { + case MLX5_FLOW_ACTION_SHARED_RSS: + if (action->type != RTE_FLOW_ACTION_TYPE_RSS) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "update action type invalid"); + } + ret = flow_drv_action_validate(dev, NULL, action, error); + if (ret) + return ret; + return flow_drv_action_update(dev, shared_action, action->conf, + error); + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "action type not supported"); + } +} + +/** + * Query the shared action by handle. + * + * This function allows retrieving action-specific data such as counters. + * Data is gathered by special action which may be present/referenced in + * more than one flow rule definition. + * + * \see RTE_FLOW_ACTION_TYPE_COUNT + * + * @param dev + * Pointer to Ethernet device structure. + * @param[in] action + * Handle for the shared action to query. + * @param[in, out] data + * Pointer to storage for the associated query data type. + * @param[out] error + * Perform verbose error reporting if not NULL. PMDs initialize this + * structure in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +mlx5_shared_action_query(struct rte_eth_dev *dev, + const struct rte_flow_shared_action *action, + void *data, + struct rte_flow_error *error) +{ + (void)dev; + switch (action->type) { + case MLX5_FLOW_ACTION_SHARED_RSS: + *((int32_t *)data) = rte_atomic32_read(&action->refcnt); + return 0; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "action type not supported"); + } +} + +/** + * Destroy all shared actions. + * + * @param dev + * Pointer to Ethernet device. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
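A hedged sketch of the update and query paths wrapped above, again through the generic entry points assumed from the companion ethdev series; new_rss_conf and handle are taken as already existing. Updating the handle re-steers every flow that references it, and for shared RSS the query below returns the current reference count:

	/* Illustrative sketch only: retarget the shared RSS, then query it. */
	struct rte_flow_action update = {
		.type = RTE_FLOW_ACTION_TYPE_RSS,
		.conf = &new_rss_conf, /* e.g. a different queue set */
	};
	int32_t refcnt = 0;
	int ret;

	ret = rte_flow_shared_action_update(0, handle, &update, &err);
	if (!ret)
		ret = rte_flow_shared_action_query(0, handle, &refcnt, &err);
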
+ */ +int +mlx5_shared_action_flush(struct rte_eth_dev *dev) +{ + struct rte_flow_error error; + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_shared_action *action; + int ret = 0; + + while (!LIST_EMPTY(&priv->shared_actions)) { + action = LIST_FIRST(&priv->shared_actions); + ret = mlx5_shared_action_destroy(dev, action, &error); + } + return ret; +} diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 279daf21f5..c2d715a60b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -196,6 +196,7 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_SET_IPV6_DSCP (1ull << 33) #define MLX5_FLOW_ACTION_AGE (1ull << 34) #define MLX5_FLOW_ACTION_DEFAULT_MISS (1ull << 35) +#define MLX5_FLOW_ACTION_SHARED_RSS (1ull << 36) #define MLX5_FLOW_FATE_ACTIONS \ (MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \ @@ -843,6 +844,7 @@ struct mlx5_fdir_flow { /* Flow structure. */ struct rte_flow { ILIST_ENTRY(uint32_t)next; /**< Index to the next flow structure. */ + struct mlx5_shared_action_rss *shared_rss; /** < Shred RSS action. */ uint32_t dev_handles; /**< Device flow handles that are part of the flow. */ uint32_t drv_type:2; /**< Driver type. */ @@ -856,6 +858,62 @@ struct rte_flow { uint16_t meter; /**< Holds flow meter id. */ } __rte_packed; +/* + * Define list of valid combinations of RX Hash fields + * (see enum ibv_rx_hash_fields). + */ +#define MLX5_RSS_HASH_IPV4 (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4) +#define MLX5_RSS_HASH_IPV4_TCP \ + (MLX5_RSS_HASH_IPV4 | \ + IBV_RX_HASH_SRC_PORT_TCP | IBV_RX_HASH_SRC_PORT_TCP) +#define MLX5_RSS_HASH_IPV4_UDP \ + (MLX5_RSS_HASH_IPV4 | \ + IBV_RX_HASH_SRC_PORT_UDP | IBV_RX_HASH_SRC_PORT_UDP) +#define MLX5_RSS_HASH_IPV6 (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6) +#define MLX5_RSS_HASH_IPV6_TCP \ + (MLX5_RSS_HASH_IPV6 | \ + IBV_RX_HASH_SRC_PORT_TCP | IBV_RX_HASH_SRC_PORT_TCP) +#define MLX5_RSS_HASH_IPV6_UDP \ + (MLX5_RSS_HASH_IPV6 | \ + IBV_RX_HASH_SRC_PORT_UDP | IBV_RX_HASH_SRC_PORT_UDP) +#define MLX5_RSS_HASH_NONE 0ULL + +/* array of valid combinations of RX Hash fields for RSS */ +static const uint64_t mlx5_rss_hash_fields[] = { + MLX5_RSS_HASH_IPV4, + MLX5_RSS_HASH_IPV4_TCP, + MLX5_RSS_HASH_IPV4_UDP, + MLX5_RSS_HASH_IPV6, + MLX5_RSS_HASH_IPV6_TCP, + MLX5_RSS_HASH_IPV6_UDP, + MLX5_RSS_HASH_NONE, +}; + +#define MLX5_RSS_HASH_FIELDS_LEN RTE_DIM(mlx5_rss_hash_fields) + +/* Shared RSS action structure */ +struct mlx5_shared_action_rss { + struct rte_flow_action_rss origin; /**< Original rte RSS action. */ + uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */ + uint16_t *queue; /**< Queue indices to use. */ + uint32_t hrxq[MLX5_RSS_HASH_FIELDS_LEN]; + /**< Hash RX queue indexes mapped to mlx5_rss_hash_fields */ + uint32_t hrxq_tunnel[MLX5_RSS_HASH_FIELDS_LEN]; + /**< Hash RX queue indexes for tunneled RSS */ +}; + +struct rte_flow_shared_action { + LIST_ENTRY(rte_flow_shared_action) next; + /**< Pointer to the next element. */ + rte_atomic32_t refcnt; + uint64_t type; + /**< Shared action type (see MLX5_FLOW_ACTION_SHARED_*). */ + union { + struct mlx5_shared_action_rss rss; + /**< Shared RSS action. 
*/ + }; +}; + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], @@ -910,6 +968,25 @@ typedef int (*mlx5_flow_get_aged_flows_t) void **context, uint32_t nb_contexts, struct rte_flow_error *error); +typedef int (*mlx5_flow_action_validate_t) + (struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); +typedef struct rte_flow_shared_action *(*mlx5_flow_action_create_t) + (struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error); +typedef int (*mlx5_flow_action_destroy_t) + (struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + struct rte_flow_error *error); +typedef int (*mlx5_flow_action_update_t) + (struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + const void *action_conf, + struct rte_flow_error *error); struct mlx5_flow_driver_ops { mlx5_flow_validate_t validate; mlx5_flow_prepare_t prepare; @@ -926,6 +1003,10 @@ struct mlx5_flow_driver_ops { mlx5_flow_counter_free_t counter_free; mlx5_flow_counter_query_t counter_query; mlx5_flow_get_aged_flows_t get_aged_flows; + mlx5_flow_action_validate_t action_validate; + mlx5_flow_action_create_t action_create; + mlx5_flow_action_destroy_t action_destroy; + mlx5_flow_action_update_t action_update; }; /* mlx5_flow.c */ @@ -951,6 +1032,9 @@ int mlx5_flow_get_reg_id(struct rte_eth_dev *dev, const struct rte_flow_action *mlx5_flow_find_action (const struct rte_flow_action *actions, enum rte_flow_action_type action); +int mlx5_validate_action_rss(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct rte_flow_error *error); int mlx5_flow_validate_action_count(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, struct rte_flow_error *error); @@ -1069,4 +1153,6 @@ int mlx5_flow_destroy_policer_rules(struct rte_eth_dev *dev, const struct rte_flow_attr *attr); int mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error); +struct rte_flow_shared_action *mlx5_flow_get_shared_rss(struct rte_flow *flow); +int mlx5_shared_action_flush(struct rte_eth_dev *dev); #endif /* RTE_PMD_MLX5_FLOW_H_ */ From patchwork Thu Oct 8 12:18:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Andrey Vesnovaty X-Patchwork-Id: 80051 X-Patchwork-Delegate: rasland@nvidia.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E722BA04BC; Thu, 8 Oct 2020 14:20:26 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 423851BFD9; Thu, 8 Oct 2020 14:19:19 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 187BC1BFC6 for ; Thu, 8 Oct 2020 14:19:16 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from andreyv@nvidia.com) with SMTP; 8 Oct 2020 15:19:12 +0300 Received: from nvidia.com (r-arch-host11.mtr.labs.mlnx [10.213.43.60]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 098CJ3UV014340; Thu, 8 Oct 2020 15:19:12 +0300 From: Andrey Vesnovaty To: dev@dpdk.org Cc: jer@marvell.com, jerinjacobk@gmail.com, thomas@monjalon.net, ferruh.yigit@intel.com, stephen@networkplumber.org, bruce.richardson@intel.com, 
orika@nvidia.com, viacheslavo@nvidia.com, andrey.vesnovaty@gmail.com, mdr@ashroe.eu, nhorman@tuxdriver.com, ajit.khaparde@broadcom.com, samik.gupta@broadcom.com, Andrey Vesnovaty , Matan Azrad , Shahaf Shuler Date: Thu, 8 Oct 2020 15:18:47 +0300 Message-Id: <20201008121848.15330-5-andreyv@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201008121848.15330-1-andreyv@nvidia.com> References: <20201008121848.15330-1-andreyv@nvidia.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 4/4] net/mlx5: driver support for shared action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Andrey Vesnovaty Implement shared action create/destroy/update/query. Implement RSS shared action and handle shared RSS on flow apply and release. Note: currently implemented for sharede RSS action only Signed-off-by: Andrey Vesnovaty --- drivers/net/mlx5/mlx5_flow_dv.c | 684 ++++++++++++++++++++++++++++++-- 1 file changed, 661 insertions(+), 23 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 79fdf34c0e..b99db65d2d 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -8928,6 +8928,157 @@ __flow_dv_translate(struct rte_eth_dev *dev, return 0; } +/** + * Set hash RX queue by hash fields (see enum ibv_rx_hash_fields) + * and tunnel. + * + * @param[in, out] action + * Shred RSS action holding hash RX queue objects. + * @param[in] hash_fields + * Defines combination of packet fields to participate in RX hash. + * @param[in] tunnel + * Tunnel type + * @param[in] hrxq_idx + * Hash RX queue index to set. + * + * @return + * 0 on success, otherwise negative errno value. + */ +static int +__flow_dv_action_rss_hrxq_set(struct mlx5_shared_action_rss *action, + const uint64_t hash_fields, + const int tunnel, + uint32_t hrxq_idx) +{ + uint32_t *hrxqs = (tunnel) ? action->hrxq : action->hrxq_tunnel; + + switch (hash_fields & ~IBV_RX_HASH_INNER) { + case MLX5_RSS_HASH_IPV4: + hrxqs[0] = hrxq_idx; + return 0; + case MLX5_RSS_HASH_IPV4_TCP: + hrxqs[1] = hrxq_idx; + return 0; + case MLX5_RSS_HASH_IPV4_UDP: + hrxqs[2] = hrxq_idx; + return 0; + case MLX5_RSS_HASH_IPV6: + hrxqs[3] = hrxq_idx; + return 0; + case MLX5_RSS_HASH_IPV6_TCP: + hrxqs[4] = hrxq_idx; + return 0; + case MLX5_RSS_HASH_IPV6_UDP: + hrxqs[5] = hrxq_idx; + return 0; + case MLX5_RSS_HASH_NONE: + hrxqs[6] = hrxq_idx; + return 0; + default: + return -1; + } +} + +/** + * Look up for hash RX queue by hash fields (see enum ibv_rx_hash_fields) + * and tunnel. + * + * @param[in] action + * Shred RSS action holding hash RX queue objects. + * @param[in] hash_fields + * Defines combination of packet fields to participate in RX hash. + * @param[in] tunnel + * Tunnel type + * + * @return + * Valid hash RX queue index, otherwise 0. + */ +static uint32_t +__flow_dv_action_rss_hrxq_lookup(const struct mlx5_shared_action_rss *action, + const uint64_t hash_fields, + const int tunnel) +{ + const uint32_t *hrxqs = (tunnel) ? 
action->hrxq : action->hrxq_tunnel; + + switch (hash_fields & ~IBV_RX_HASH_INNER) { + case MLX5_RSS_HASH_IPV4: + return hrxqs[0]; + case MLX5_RSS_HASH_IPV4_TCP: + return hrxqs[1]; + case MLX5_RSS_HASH_IPV4_UDP: + return hrxqs[2]; + case MLX5_RSS_HASH_IPV6: + return hrxqs[3]; + case MLX5_RSS_HASH_IPV6_TCP: + return hrxqs[4]; + case MLX5_RSS_HASH_IPV6_UDP: + return hrxqs[5]; + case MLX5_RSS_HASH_NONE: + return hrxqs[6]; + default: + return 0; + } +} + +/** + * Retrieves hash RX queue suitable for the *flow*. + * If shared action configured for *flow* suitable hash RX queue will be + * retrieved from attached shared action. + * + * @param[in] flow + * Shred RSS action holding hash RX queue objects. + * @param[in] dev_flow + * Pointer to the sub flow. + * @param[out] hrxq + * Pointer to retrieved hash RX queue object. + * + * @return + * Valid hash RX queue index, otherwise 0 and rte_errno is set. + */ +static uint32_t +__flow_dv_rss_get_hrxq(struct rte_eth_dev *dev, struct rte_flow *flow, + struct mlx5_flow *dev_flow, + struct mlx5_hrxq **hrxq) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t hrxq_idx; + + if (flow->shared_rss) { + hrxq_idx = __flow_dv_action_rss_hrxq_lookup + (flow->shared_rss, dev_flow->hash_fields, + !!(dev_flow->handle->layers & + MLX5_FLOW_LAYER_TUNNEL)); + if (hrxq_idx) { + *hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], + hrxq_idx); + rte_atomic32_inc(&(*hrxq)->refcnt); + } + } else { + struct mlx5_flow_rss_desc *rss_desc = + &((struct mlx5_flow_rss_desc *)priv->rss_desc) + [!!priv->flow_nested_idx]; + + MLX5_ASSERT(rss_desc->queue_num); + hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, rss_desc->queue_num); + if (!hrxq_idx) { + hrxq_idx = mlx5_hrxq_new(dev, + rss_desc->key, + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, + rss_desc->queue_num, + !!(dev_flow->handle->layers & + MLX5_FLOW_LAYER_TUNNEL)); + } + *hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], + hrxq_idx); + } + return hrxq_idx; +} + /** * Apply the flow to the NIC, lock free, * (mutex should be acquired by caller). 
@@ -8986,30 +9137,10 @@ __flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow, dv->actions[n++] = drop_hrxq->action; } } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) { - struct mlx5_hrxq *hrxq; - uint32_t hrxq_idx; - struct mlx5_flow_rss_desc *rss_desc = - &((struct mlx5_flow_rss_desc *)priv->rss_desc) - [!!priv->flow_nested_idx]; + struct mlx5_hrxq *hrxq = NULL; + uint32_t hrxq_idx = __flow_dv_rss_get_hrxq + (dev, flow, dev_flow, &hrxq); - MLX5_ASSERT(rss_desc->queue_num); - hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num); - if (!hrxq_idx) { - hrxq_idx = mlx5_hrxq_new - (dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num, - !!(dh->layers & - MLX5_FLOW_LAYER_TUNNEL)); - } - hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], - hrxq_idx); if (!hrxq) { rte_flow_error_set (error, rte_errno, @@ -9427,12 +9558,16 @@ __flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow) static void __flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) { + struct rte_flow_shared_action *shared; struct mlx5_flow_handle *dev_handle; struct mlx5_priv *priv = dev->data->dev_private; if (!flow) return; __flow_dv_remove(dev, flow); + shared = mlx5_flow_get_shared_rss(flow); + if (shared) + rte_atomic32_dec(&shared->refcnt); if (flow->counter) { flow_dv_counter_release(dev, flow->counter); flow->counter = 0; @@ -9472,6 +9607,419 @@ __flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow) } } +/** + * Release an array of hash RX queue objects. + * Helper function. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in, out] hrxqs + * Array of hash RX queue objects. + * + * @return + * Total number of references to hash RX queue objects in *hrxqs* array + * after this operation. + */ +static int +__flow_dv_hrxqs_release(struct rte_eth_dev *dev, + uint32_t (*hrxqs)[MLX5_RSS_HASH_FIELDS_LEN]) +{ + size_t i; + int remaining = 0, ret = 0; + + for (i = 0; i < RTE_DIM(*hrxqs); i++) { + ret = mlx5_hrxq_release(dev, (*hrxqs)[i]); + if (!ret) + (*hrxqs)[i] = 0; + remaining += ret; + } + return remaining; +} + +/** + * Release all hash RX queue objects representing shared RSS action. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in, out] action + * Shared RSS action to remove hash RX queue objects from. + * + * @return + * Total number of references to hash RX queue objects stored in *action* + * after this operation. + * Expected to be 0 if no external references are held. + */ +static int +__flow_dv_action_rss_hrxqs_release(struct rte_eth_dev *dev, + struct mlx5_shared_action_rss *action) +{ + return __flow_dv_hrxqs_release(dev, &action->hrxq) + + __flow_dv_hrxqs_release(dev, &action->hrxq_tunnel); +} + +/** + * Set up the shared RSS action. + * Prepare set of hash RX queue objects sufficient to handle all valid + * hash_fields combinations (see enum ibv_rx_hash_fields). + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in, out] action + * Partially initialized shared RSS action. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * 0 on success, otherwise negative errno value.
+ */ +static int +__flow_dv_action_rss_setup(struct rte_eth_dev *dev, + struct mlx5_shared_action_rss *action, + struct rte_flow_error *error) +{ + size_t i; + int err; + + for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) { + uint32_t hrxq_idx; + uint64_t hash_fields = mlx5_rss_hash_fields[i]; + int tunnel; + + for (tunnel = 0; tunnel < 2; tunnel++) { + hrxq_idx = mlx5_hrxq_new(dev, action->origin.key, + MLX5_RSS_HASH_KEY_LEN, + hash_fields, + action->origin.queue, + action->origin.queue_num, + tunnel); + if (!hrxq_idx) { + rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot get hash queue"); + goto error_hrxq_new; + } + err = __flow_dv_action_rss_hrxq_set + (action, hash_fields, tunnel, hrxq_idx); + MLX5_ASSERT(!err); + } + } + return 0; +error_hrxq_new: + err = rte_errno; + __flow_dv_action_rss_hrxqs_release(dev, action); + rte_errno = err; + return -rte_errno; +} + +/** + * Create shared RSS action. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] conf + * Shared action configuration. + * @param[in] rss + * RSS action specification used to create shared action. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * A valid shared action handle in case of success, NULL otherwise and + * rte_errno is set. + */ +static struct rte_flow_shared_action * +__flow_dv_action_rss_create(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action_rss *rss, + struct rte_flow_error *error) +{ + struct rte_flow_shared_action *shared_action = NULL; + void *queue = NULL; + uint32_t queue_size; + struct mlx5_shared_action_rss *shared_rss; + struct rte_flow_action_rss *origin; + const uint8_t *rss_key; + + (void)conf; + queue_size = RTE_ALIGN_CEIL(rss->queue_num * sizeof(uint16_t), + sizeof(void *)); + queue = rte_calloc(__func__, 1, queue_size, 0); + shared_action = rte_calloc(__func__, 1, sizeof(*shared_action), 0); + if (!shared_action || !queue) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + "cannot allocate resource memory"); + goto error_rss_init; + } + shared_rss = &shared_action->rss; + shared_rss->queue = queue; + origin = &shared_rss->origin; + origin->func = rss->func; + origin->level = rss->level; + /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */ + origin->types = !rss->types ? ETH_RSS_IP : rss->types; + /* NULL RSS key indicates default RSS key. */ + rss_key = !rss->key ? rss_hash_default_key : rss->key; + rte_memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN); + origin->key = &shared_rss->key[0]; + origin->key_len = MLX5_RSS_HASH_KEY_LEN; + rte_memcpy(shared_rss->queue, rss->queue, queue_size); + origin->queue = shared_rss->queue; + origin->queue_num = rss->queue_num; + if (__flow_dv_action_rss_setup(dev, shared_rss, error)) + goto error_rss_init; + shared_action->type = MLX5_FLOW_ACTION_SHARED_RSS; + return shared_action; +error_rss_init: + rte_free(shared_action); + rte_free(queue); + return NULL; +} + +/** + * Destroy the shared RSS action. + * Release related hash RX queue objects. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] shared_rss + * The shared RSS action object to be removed. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * 0 on success, otherwise negative errno value. 
+ */ +static int +__flow_dv_action_rss_release(struct rte_eth_dev *dev, + struct mlx5_shared_action_rss *shared_rss, + struct rte_flow_error *error) +{ + struct rte_flow_shared_action *shared_action = NULL; + int remaining = __flow_dv_action_rss_hrxqs_release(dev, shared_rss); + + if (remaining) { + return rte_flow_error_set(error, ETOOMANYREFS, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "shared rss hrxq has references"); + } + shared_action = container_of(shared_rss, + struct rte_flow_shared_action, rss); + if (!rte_atomic32_dec_and_test(&shared_action->refcnt)) { + return rte_flow_error_set(error, ETOOMANYREFS, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "shared rss has references"); + } + rte_free(shared_rss->queue); + return 0; +} + +/** + * Create shared action, lock free, + * (mutex should be acquired by caller). + * Dispatcher for action type specific call. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] conf + * Shared action configuration. + * @param[in] action + * Action specification used to create shared action. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * A valid shared action handle in case of success, NULL otherwise and + * rte_errno is set. + */ +static struct rte_flow_shared_action * +__flow_dv_action_create(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + struct rte_flow_shared_action *shared_action = NULL; + struct mlx5_priv *priv = dev->data->dev_private; + + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + shared_action = __flow_dv_action_rss_create(dev, conf, + action->conf, + error); + break; + default: + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "action type not supported"); + break; + } + if (shared_action) { + rte_atomic32_inc(&shared_action->refcnt); + LIST_INSERT_HEAD(&priv->shared_actions, shared_action, next); + } + return shared_action; +} + +/** + * Destroy the shared action. + * Release action related resources on the NIC and the memory. + * Lock free, (mutex should be acquired by caller). + * Dispatcher for action type specific call. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] action + * The shared action object to be removed. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * 0 on success, otherwise negative errno value. + */ +static int +__flow_dv_action_destroy(struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + struct rte_flow_error *error) +{ + int ret; + + switch (action->type) { + case MLX5_FLOW_ACTION_SHARED_RSS: + ret = __flow_dv_action_rss_release(dev, &action->rss, error); + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "action type not supported"); + } + if (ret) + return ret; + LIST_REMOVE(action, next); + rte_free(action); + return 0; +} + +/** + * Updates in place shared RSS action configuration. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] shared_rss + * The shared RSS action object to be updated. + * @param[in] action_conf + * RSS action specification used to modify *shared_rss*. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * 0 on success, otherwise negative errno value. 
+ * @note Currently only update of the RSS queues is supported. + */ +static int +__flow_dv_action_rss_update(struct rte_eth_dev *dev, + struct mlx5_shared_action_rss *shared_rss, + const struct rte_flow_action_rss *action_conf, + struct rte_flow_error *error) +{ + size_t i; + int ret; + void *queue = NULL; + uint32_t queue_size; + const uint8_t *rss_key; + uint32_t rss_key_len; + + queue_size = RTE_ALIGN_CEIL(action_conf->queue_num * sizeof(uint16_t), + sizeof(void *)); + queue = rte_calloc(__func__, 1, queue_size, 0); + if (!queue) { + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate resource memory"); + } + if (action_conf->key) { + rss_key = action_conf->key; + rss_key_len = action_conf->key_len; + } else { + rss_key = rss_hash_default_key; + rss_key_len = MLX5_RSS_HASH_KEY_LEN; + } + for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) { + uint32_t hrxq_idx; + uint64_t hash_fields = mlx5_rss_hash_fields[i]; + int tunnel; + + for (tunnel = 0; tunnel < 2; tunnel++) { + hrxq_idx = __flow_dv_action_rss_hrxq_lookup + (shared_rss, hash_fields, tunnel); + MLX5_ASSERT(hrxq_idx); + ret = mlx5_hrxq_modify + (dev, hrxq_idx, + rss_key, rss_key_len, + hash_fields, + action_conf->queue, action_conf->queue_num); + if (ret) { + rte_free(queue); + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "cannot update hash queue"); + } + } + } + rte_free(shared_rss->queue); + shared_rss->queue = queue; + rte_memcpy(shared_rss->queue, action_conf->queue, queue_size); + shared_rss->origin.queue = shared_rss->queue; + shared_rss->origin.queue_num = action_conf->queue_num; + return 0; +} + +/** + * Update shared action configuration in place, lock free + * (mutex should be acquired by caller). + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] action + * The shared action object to be updated. + * @param[in] action_conf + * Action specification used to modify *action*. + * *action_conf* must match the type of *action*, + * otherwise it is considered invalid. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * 0 on success, otherwise negative errno value. + */ +static int +__flow_dv_action_update(struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + const void *action_conf, + struct rte_flow_error *error) +{ + switch (action->type) { + case MLX5_FLOW_ACTION_SHARED_RSS: + return __flow_dv_action_rss_update(dev, &action->rss, + action_conf, error); + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "action type not supported"); + } +} /** * Query a dv flow rule for its statistics via devx. * @@ -10150,6 +10698,92 @@ flow_dv_counter_free(struct rte_eth_dev *dev, uint32_t cnt) flow_dv_shared_unlock(dev); } +/** + * Validate shared action. + * Dispatcher for action type specific validation. + * + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] conf + * Shared action configuration. + * @param[in] action + * The shared action object to validate. + * @param[out] error + * Perform verbose error reporting if not NULL. Initialized in case of + * error only. + * + * @return + * 0 on success, otherwise negative errno value.
+ */ +static int +flow_dv_action_validate(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + (void)conf; + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_RSS: + return mlx5_validate_action_rss(dev, action, error); + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "action type not supported"); + } +} + +/* + * Mutex-protected thunk to lock-free __flow_dv_action_create(). + */ +static struct rte_flow_shared_action * +flow_dv_action_create(struct rte_eth_dev *dev, + const struct rte_flow_shared_action_conf *conf, + const struct rte_flow_action *action, + struct rte_flow_error *error) +{ + struct rte_flow_shared_action *shared_action = NULL; + + flow_dv_shared_lock(dev); + shared_action = __flow_dv_action_create(dev, conf, action, error); + flow_dv_shared_unlock(dev); + return shared_action; +} + +/* + * Mutex-protected thunk to lock-free __flow_dv_action_destroy(). + */ +static int +flow_dv_action_destroy(struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + struct rte_flow_error *error) +{ + int ret; + + flow_dv_shared_lock(dev); + ret = __flow_dv_action_destroy(dev, action, error); + flow_dv_shared_unlock(dev); + return ret; +} + +/* + * Mutex-protected thunk to lock-free __flow_dv_action_update(). + */ +static int +flow_dv_action_update(struct rte_eth_dev *dev, + struct rte_flow_shared_action *action, + const void *action_conf, + struct rte_flow_error *error) +{ + int ret; + + flow_dv_shared_lock(dev); + ret = __flow_dv_action_update(dev, action, action_conf, + error); + flow_dv_shared_unlock(dev); + return ret; +} + const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = { .validate = flow_dv_validate, .prepare = flow_dv_prepare, @@ -10166,6 +10800,10 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = { .counter_free = flow_dv_counter_free, .counter_query = flow_dv_counter_query, .get_aged_flows = flow_get_aged_flows, + .action_validate = flow_dv_action_validate, + .action_create = flow_dv_action_create, + .action_destroy = flow_dv_action_destroy, + .action_update = flow_dv_action_update, }; #endif /* HAVE_IBV_FLOW_DV_SUPPORT */
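For context, a minimal usage sketch of how an application would reach the new .action_validate/.action_create/.action_destroy/.action_update callbacks through the generic shared action API proposed for rte_flow in this release cycle (rte_flow_shared_action_create() and friends). The port id, queue list, pattern and error handling below are illustrative assumptions, not part of this patch:

#include <rte_common.h>
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

static int
example_shared_rss(uint16_t port_id)
{
	/* Illustrative values only: two Rx queues, IP-only RSS. */
	static const uint16_t queues[] = { 0, 1 };
	const struct rte_flow_shared_action_conf conf = { .ingress = 1 };
	const struct rte_flow_action_rss rss = {
		.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
		.types = ETH_RSS_IP,
		.queue_num = RTE_DIM(queues),
		.queue = queues,
	};
	const struct rte_flow_action rss_action = {
		.type = RTE_FLOW_ACTION_TYPE_RSS,
		.conf = &rss,
	};
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SHARED, .conf = NULL },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_shared_action *shared;
	struct rte_flow *flow;
	struct rte_flow_error error;

	/* Dispatches to flow_dv_action_create() and, for RSS,
	 * __flow_dv_action_rss_create() above.
	 */
	shared = rte_flow_shared_action_create(port_id, &conf, &rss_action,
					       &error);
	if (!shared)
		return -rte_errno;
	/* Flow rules reference the shared handle instead of a private RSS. */
	actions[0].conf = shared;
	flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
	if (!flow) {
		rte_flow_shared_action_destroy(port_id, shared, &error);
		return -rte_errno;
	}
	return 0;
}

Updating the queue set later goes through the matching update entry point (handled by flow_dv_action_update() above) and takes effect for every flow rule referencing the handle, without recreating those rules.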