From patchwork Thu Jul  2 12:05:10 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrey Vesnovaty
X-Patchwork-Id: 73018
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path: 
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 37006A0519;
	Fri, 3 Jul 2020 16:37:15 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id DDDE41DC09;
	Fri, 3 Jul 2020 16:37:14 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
	by dpdk.org (Postfix) with ESMTP id 97E7A1D98C
	for ; Thu, 2 Jul 2020 14:05:19 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from
	andreyv@mellanox.com) with SMTP; 2 Jul 2020 15:05:15 +0300
Received: from r-arch-host11.mtr.labs.mlnx. (r-arch-host11.mtr.labs.mlnx
	[10.213.43.60]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id
	062C5FYb021057; Thu, 2 Jul 2020 15:05:15 +0300
From: Andrey Vesnovaty 
To: Thomas Monjalon , Ferruh Yigit , Andrew Rybchenko , Ori Kam 
Cc: dev@dpdk.org, Andrey Vesnovaty 
Date: Thu, 2 Jul 2020 15:05:10 +0300
Message-Id: <20200702120511.16315-1-andreyv@mellanox.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 03 Jul 2020 16:37:14 +0200
Subject: [dpdk-dev] [PATCH] add flow shared action API
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev" 

From: Andrey Vesnovaty 

This commit introduces an extension of the DPDK flow action API that
enables sharing of a single rte_flow_action among multiple flows. The API
is intended for PMDs where multiple HW-offloaded flows can reuse the same
HW essence/object representing a flow action, so that a modification of
such an essence/object affects all the rules using it.

Motivation and example
===

With the current DPDK flow API, adding or removing one or more queues in
an RSS action used by multiple flow rules imposes a per-rule toll; the
scenario requires, for each flow sharing the cloned RSS action, to:
- call `rte_flow_destroy()`
- call `rte_flow_create()` with the modified RSS action

An API for sharing an action and updating it in place brings these
benefits:
- reduced overhead of reconfiguring multiple RSS flow rules
- optimized resource utilization by sharing one action across multiple
  flows

Change description
===

Shared action
===
In order to represent a flow action shared by multiple flows, a new
action type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
rte_flow_action_type`). The introduced API decouples an action from any
specific flow and enables multiple flows to share a single action via
its handle.

Shared action create/use/destroy
===
A shared action may be reused by any number of flow rules (including
none) at any given moment; in other words, a shared action resides
outside the context of any flow. A shared action represents the HW
resources/objects used to implement the action offloading. The new APIs
rte_flow_shared_action_create()/rte_flow_shared_action_destroy() are
added to allocate/release all HW resources and to perform all the
related initializations/cleanups in PMD space required for the shared
action implementation. In addition, all preparations needed to maintain
shared access to the action's resources, configuration and state should
be done in rte_flow_shared_action_create().
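Below is a minimal sketch of the create/destroy sequence described above
(illustrative only, not part of the patch; it assumes an RSS action
configuration, and the rss_conf contents and error handling are elided):

struct rte_flow_action_rss rss_conf;
struct rte_flow_action rss_action = {
	.type = RTE_FLOW_ACTION_TYPE_RSS,
	.conf = &rss_conf,
};
struct rte_flow_error error;
/* skipped: fill rss_conf (queues, key, hash types) */
struct rte_flow_shared_action *handle =
	rte_flow_shared_action_create(port_id, &rss_action, &error);
if (handle == NULL)
	/* rte_errno is set; error holds the verbose cause */;
/* ... reference the handle from one or more flow rules ... */
/* once no flow rule uses the handle any more: */
rte_flow_shared_action_destroy(port_id, handle, &error);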
In order to share a flow action, reuse the handle of type `struct
rte_flow_shared_action` returned by rte_flow_shared_action_create() as
the `conf` field of `struct rte_flow_action` (see the "example" section).
If a shared action is not used by any flow rule, all the resources
allocated for it can be released via rte_flow_shared_action_destroy()
(see the "example" section). A shared action handle passed as an argument
to the destroy API must not be used afterwards; the result of any further
usage is undefined.

Shared action re-configuration
===
Shared action behavior is defined by its configuration and can be updated
via rte_flow_shared_action_update() (see the "example" section). The
shared action update operation modifies the HW-related resources/objects
allocated by the action. The number of operations performed by an update
should not depend on the number of flows sharing the related action. Once
the shared action update API returns, the action behaves according to the
updated configuration for all flows sharing it.

Shared action query
===
A separate API is provided to query the shared action state (see
rte_flow_shared_action_query()). Taking a counter as an example: the
query returns a value aggregating the counter increments across all flow
rules sharing the counter.

PMD support
===
Support for the introduced API is purely PMD-specific; its design and
implementation are the responsibility of each PMD, per action type (see
struct rte_flow_ops).

testpmd
===
In order to utilize the introduced API, the testpmd CLI may implement the
following extensions to create/update/destroy/query shared actions
accordingly:

flow shared_action create {port_id} [index] {action}
flow shared_action update {port_id} {index} {action}
flow shared_action destroy {port_id} {index}
flow shared_action query {port_id} {index}

testpmd example
===

configure RSS to queues 1 & 2

testpmd> flow shared_action create 0 100 rss 1 2

create a flow rule utilizing the shared action

testpmd> flow create 0 ingress \
	pattern eth dst is 0c:42:a1:15:fd:ac / ipv6 / tcp / end \
	actions shared 100 end / end

add 2 more queues

testpmd> flow shared_action update 0 100 rss 1 2 3 4

example
===

struct rte_flow_action actions[2];
struct rte_flow_action action;
/* skipped: initialize action */
struct rte_flow_shared_action *handle =
	rte_flow_shared_action_create(port_id, &action, &error);
actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
actions[0].conf = handle;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
/* skipped: init attr0 & pattern0 args */
struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
					 actions, error);
/* create more rules reusing shared action */
struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
					 actions, error);
/* skipped: for flows 2 till N */
struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
					 actions, error);
/* update shared action */
struct rte_flow_action updated_action;
/*
 * skipped: initialize updated_action according to desired action
 * configuration change
 */
rte_flow_shared_action_update(port_id, handle, updated_action.conf,
			      error);
/*
 * from now on all flows 0 till N will act according to the configuration
 * of updated_action
 */
/* skipped: destroy all flows 0 till N */
rte_flow_shared_action_destroy(port_id, handle, error);

Signed-off-by: Andrey Vesnovaty 
Acked-by: Ray Kinsella 
---
This patch is based on the RFC:
https://patches.dpdk.org/patch/71820/
---
 lib/librte_ethdev/rte_ethdev_version.map |   6 +
 lib/librte_ethdev/rte_flow.c             |  81 +++++++++++
 lib/librte_ethdev/rte_flow.h             | 148 ++++++++++++++++++++++-
 lib/librte_ethdev/rte_flow_driver.h      |  22 ++++
 4 files changed, 256 insertions(+), 1 deletion(-)

diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index 3f32fdecf..e291c2bd9 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -230,4 +230,10 @@ EXPERIMENTAL {

 	# added in 20.02
 	rte_flow_dev_dump;
+
+	# added in 20.08
+	rte_flow_shared_action_create;
+	rte_flow_shared_action_destroy;
+	rte_flow_shared_action_update;
+	rte_flow_shared_action_query;
 };

diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 885a7ff9a..7728057c3 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -1231,3 +1231,84 @@ rte_flow_dev_dump(uint16_t port_id, FILE *file, struct rte_flow_error *error)
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
 }
+
+struct rte_flow_shared_action *
+rte_flow_shared_action_create(uint16_t port_id,
+			      const struct rte_flow_action *action,
+			      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	struct rte_flow_shared_action *shared_action;
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->shared_action_create)) {
+		shared_action = ops->shared_action_create(dev, action, error);
+		if (shared_action == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return shared_action;
+	}
+	rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOSYS));
+	return NULL;
+}
+
+int
+rte_flow_shared_action_destroy(uint16_t port_id,
+			       struct rte_flow_shared_action *action,
+			       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->shared_action_destroy))
+		return flow_err(port_id,
+				ops->shared_action_destroy(dev, action, error),
+				error);
+	return rte_flow_error_set(error, ENOSYS,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOSYS));
+}
+
+int
+rte_flow_shared_action_update(uint16_t port_id,
+			      struct rte_flow_shared_action *action,
+			      const void *action_conf,
+			      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->shared_action_update))
+		return flow_err(port_id, ops->shared_action_update(dev, action,
+								   action_conf, error),
+				error);
+	return rte_flow_error_set(error, ENOSYS,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOSYS));
+}
+
+int
+rte_flow_shared_action_query(uint16_t port_id,
+			     const struct rte_flow_shared_action *action,
+			     void *data,
+			     struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->shared_action_query))
+		return flow_err(port_id, ops->shared_action_query(dev, action,
+								  data, error),
+				error);
+	return rte_flow_error_set(error, ENOSYS,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOSYS));
+}

diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 5625dc491..98140ebb1 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -1643,7 +1643,8 @@ enum rte_flow_action_type {
 	/**
 	 * Enables counters for this flow rule.
 	 *
-	 * These counters can be retrieved and reset through rte_flow_query(),
+	 * These counters can be retrieved and reset through rte_flow_query() or
+	 * rte_flow_shared_action_query() if the action is provided via handle,
 	 * see struct rte_flow_query_count.
 	 *
 	 * See struct rte_flow_action_count.
@@ -2051,6 +2052,14 @@ enum rte_flow_action_type {
 	 * See struct rte_flow_action_set_dscp.
 	 */
 	RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP,
+
+	/**
+	 * Describes an action shared across multiple flow rules.
+	 *
+	 * Enables multiple rules to reference the same action by handle (see
+	 * struct rte_flow_shared_action).
+	 */
+	RTE_FLOW_ACTION_TYPE_SHARED,
 };

 /**
@@ -2593,6 +2602,20 @@ struct rte_flow_action_set_dscp {
 	uint8_t dscp;
 };

+
+/**
+ * RTE_FLOW_ACTION_TYPE_SHARED
+ *
+ * Opaque type returned after successfully creating a shared action.
+ *
+ * This handle can be used to manage and query the related action:
+ * - share it across multiple flow rules
+ * - update the action configuration
+ * - query the action data
+ * - destroy the action
+ */
+struct rte_flow_shared_action;
+
 /* Mbuf dynamic field offset for metadata. */
 extern int rte_flow_dynf_metadata_offs;

@@ -3224,6 +3247,129 @@ rte_flow_conv(enum rte_flow_conv_op op,
 	      const void *src,
 	      struct rte_flow_error *error);

+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create a shared action for reuse in multiple flow rules.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] action
+ *   Action configuration for shared action creation.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *action* is invalid.
+ *   - (ENOTSUP) if *action* is valid but unsupported.
+ */
+__rte_experimental
+struct rte_flow_shared_action *
+rte_flow_shared_action_create(uint16_t port_id,
+			      const struct rte_flow_action *action,
+			      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroys the shared action by handle.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] action
+ *   Handle of the shared action to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   - (0) if success.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by the *action* handle was not found.
+ *   - (-ETOOMANYREFS) if the action pointed to by the *action* handle is still
+ *     used by one or more rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_shared_action_destroy(uint16_t port_id,
+			       struct rte_flow_shared_action *action,
+			       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Updates in place the shared action configuration pointed to by the *action*
+ * handle with the configuration provided as the *action_conf* argument.
+ * The update of the shared action configuration affects all flow rules
+ * reusing the action via its handle.
+ *
+ * @param[in] port_id
+ *   The port identifier of the Ethernet device.
+ * @param[in] action
+ *   Handle of the shared action to be updated.
+ * @param[in] action_conf
+ *   Action specification used to modify the action pointed to by the handle.
+ *   *action_conf* should be of the same type as the action pointed to by the
+ *   *action* handle argument, otherwise the function behavior is undefined.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ * @return
+ *   - (0) if success.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-EINVAL) if *action_conf* is invalid.
+ *   - (-ENOTSUP) if *action_conf* is valid but unsupported.
+ *   - (-ENOENT) if the action pointed to by the *action* handle was not found.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_shared_action_update(uint16_t port_id,
+			      struct rte_flow_shared_action *action,
+			      const void *action_conf,
+			      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Query the shared action by handle.
+ *
+ * This function allows retrieving action-specific data such as counters.
+ * Data is gathered by a special action which may be present/referenced in
+ * more than one flow rule definition.
+ *
+ * \see RTE_FLOW_ACTION_TYPE_COUNT
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] action
+ *   Handle of the shared action to query.
+ * @param[in, out] data
+ *   Pointer to storage for the associated query data type.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_shared_action_query(uint16_t port_id,
+			     const struct rte_flow_shared_action *action,
+			     void *data,
+			     struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif

diff --git a/lib/librte_ethdev/rte_flow_driver.h b/lib/librte_ethdev/rte_flow_driver.h
index 51a9a57a0..c103d159e 100644
--- a/lib/librte_ethdev/rte_flow_driver.h
+++ b/lib/librte_ethdev/rte_flow_driver.h
@@ -101,6 +101,28 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 FILE *file,
 		 struct rte_flow_error *error);
+	/** See rte_flow_shared_action_create() */
+	struct rte_flow_shared_action *(*shared_action_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_action *action,
+		 struct rte_flow_error *error);
+	/** See rte_flow_shared_action_destroy() */
+	int (*shared_action_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_shared_action *shared_action,
+		 struct rte_flow_error *error);
+	/** See rte_flow_shared_action_update() */
+	int (*shared_action_update)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_shared_action *shared_action,
+		 const void *action_conf,
+		 struct rte_flow_error *error);
+	/** See rte_flow_shared_action_query() */
+	int (*shared_action_query)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_shared_action *shared_action,
+		 void *data,
+		 struct rte_flow_error *error);
 };

 /**

From patchwork Wed Jul  8 21:39:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrey Vesnovaty
X-Patchwork-Id: 73554
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path: 
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 756B2A0526;
	Wed, 8 Jul 2020 23:39:59 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id DC6821DFED;
	Wed, 8 Jul 2020 23:39:58 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])
	by dpdk.org (Postfix) with ESMTP id CEE1D1DFEF
	for ; Wed, 8 Jul 2020 23:39:57 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from
	andreyv@mellanox.com) with SMTP; 9 Jul 2020 00:39:54 +0300
Received: from r-arch-host11.mtr.labs.mlnx. (r-arch-host11.mtr.labs.mlnx
	[10.213.43.60]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id
	068LdrB4032740; Thu, 9 Jul 2020 00:39:54 +0300
From: Andrey Vesnovaty 
To: dev@dpdk.org
Cc: jer@marvell.com, jerinjacobk@gmail.com, thomas@monjalon.net,
	ferruh.yigit@intel.com, stephen@networkplumber.org,
	bruce.richardson@intel.com, orika@mellanox.com,
	viacheslavo@mellanox.com, andrey.vesnovaty@gmail.com,
	Matan Azrad , Shahaf Shuler 
Date: Thu, 9 Jul 2020 00:39:42 +0300
Message-Id: <20200708213946.30108-4-andreyv@mellanox.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200708213946.30108-1-andreyv@mellanox.com>
References: <20200702120511.16315-1-andreyv@mellanox.com>
	<20200708213946.30108-1-andreyv@mellanox.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v2 3/6] net/mlx5: modify hash Rx queue objects
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev" 

Implement mlx5_hrxq_modify() to modify a hash Rx queue object. This
commit relies on the capability to modify a TIR object via DevX.

Signed-off-by: Andrey Vesnovaty 
---
 drivers/net/mlx5/mlx5_rxq.c  | 300 ++++++++++++++++++++++++++++-------
 drivers/net/mlx5/mlx5_rxtx.h |   4 +
 2 files changed, 243 insertions(+), 61 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b436f06107..80c402c4b7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2274,6 +2274,29 @@ mlx5_ind_table_obj_new(struct rte_eth_dev *dev, const uint16_t *queues,
 	return NULL;
 }

+/**
+ * Match queues listed in arguments to queues contained in indirection table
+ * object.
+ *
+ * @param ind_tbl
+ *   Pointer to indirection table to match.
+ * @param queues
+ *   Queues to match to queues in the indirection table.
+ * @param queues_n
+ *   Number of queues in the array.
+ *
+ * @return
+ *   1 if all queues in the indirection table match, 0 otherwise.
+ */
+static int
+mlx5_ind_table_obj_match_queues(const struct mlx5_ind_table_obj *ind_tbl,
+				const uint16_t *queues, uint32_t queues_n)
+{
+	return (ind_tbl->queues_n == queues_n) &&
+	       (!memcmp(ind_tbl->queues, queues,
+			ind_tbl->queues_n * sizeof(ind_tbl->queues[0])));
+}
+
 /**
  * Get an indirection table.
  *
@@ -2370,6 +2393,103 @@ mlx5_ind_table_obj_verify(struct rte_eth_dev *dev)
 	return ret;
 }

+/**
+ * Set TIR attribute struct with relevant input values.
+ *
+ * @param[in] dev
+ *   Pointer to Ethernet device.
+ * @param[in] rss_key
+ *   RSS key for the Rx hash queue.
+ * @param[in] rss_key_len
+ *   RSS key length.
+ * @param[in] hash_fields
+ *   Verbs protocol hash field to make the RSS on.
+ * @param[in] queues
+ *   Queues entering in hash queue. In case of empty hash_fields only the
+ *   first queue index will be taken for the indirection table.
+ * @param[in] queues_n
+ *   Number of queues.
+ * @param[in] tunnel
+ *   Tunnel type.
+ * @param[in] rxq_obj_type
+ *   Rx queue object type.
+ * @param[in] ind_tbl_id
+ *   Identifier of the indirection table (RQT) to point the TIR to.
+ * @param[out] tir_attr
+ *   Parameters structure for TIR creation/modification, filled in place.
+ */
+static void
+mlx5_devx_tir_attr_set(struct rte_eth_dev *dev,
+		       const uint8_t *rss_key, uint32_t rss_key_len,
+		       uint64_t hash_fields,
+		       const uint16_t *queues, uint32_t queues_n,
+		       int tunnel,
+		       enum mlx5_rxq_obj_type rxq_obj_type, int ind_tbl_id,
+		       struct mlx5_devx_tir_attr *tir_attr)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t i;
+	uint32_t lro = 1;
+
+	/* Enable TIR LRO only if all the queues were configured for. */
+	for (i = 0; i < queues_n; ++i) {
+		if (!(*priv->rxqs)[queues[i]]->lro) {
+			lro = 0;
+			break;
+		}
+	}
+	memset(tir_attr, 0, sizeof(*tir_attr));
+	tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
+	tir_attr->rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
+	tir_attr->tunneled_offload_en = !!tunnel;
+	/* If needed, translate hash_fields bitmap to PRM format. */
+	if (hash_fields) {
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
+		struct mlx5_rx_hash_field_select *rx_hash_field_select =
+			hash_fields & IBV_RX_HASH_INNER ?
+			&tir_attr->rx_hash_field_selector_inner :
+			&tir_attr->rx_hash_field_selector_outer;
+#else
+		struct mlx5_rx_hash_field_select *rx_hash_field_select =
+			&tir_attr->rx_hash_field_selector_outer;
+#endif
+
+		/* 1 bit: 0: IPv4, 1: IPv6. */
+		rx_hash_field_select->l3_prot_type =
+			!!(hash_fields & MLX5_IPV6_IBV_RX_HASH);
+		/* 1 bit: 0: TCP, 1: UDP. */
+		rx_hash_field_select->l4_prot_type =
+			!!(hash_fields & MLX5_UDP_IBV_RX_HASH);
+		/* Bitmask which sets which fields to use in RX Hash. */
+		rx_hash_field_select->selected_fields =
+			((!!(hash_fields & MLX5_L3_SRC_IBV_RX_HASH)) <<
+			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_SRC_IP) |
+			(!!(hash_fields & MLX5_L3_DST_IBV_RX_HASH)) <<
+			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_DST_IP |
+			(!!(hash_fields & MLX5_L4_SRC_IBV_RX_HASH)) <<
+			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_SPORT |
+			(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
+			 MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
+	}
+	if (rxq_obj_type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN)
+		tir_attr->transport_domain = priv->sh->td->id;
+	else
+		tir_attr->transport_domain = priv->sh->tdn;
+	memcpy(tir_attr->rx_hash_toeplitz_key, rss_key, rss_key_len);
+	tir_attr->indirect_table = ind_tbl_id;
+	if (dev->data->dev_conf.lpbk_mode)
+		tir_attr->self_lb_block =
+			MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+	if (lro) {
+		tir_attr->lro_timeout_period_usecs =
+			priv->config.lro.timeout;
+		tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
+		tir_attr->lro_enable_mask =
+			MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
+			MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
+	}
+}
+
 /**
  * Create an Rx Hash queue.
  *
@@ -2493,67 +2613,11 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
 		}
 	} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
 		struct mlx5_devx_tir_attr tir_attr;
-		uint32_t i;
-		uint32_t lro = 1;
-
-		/* Enable TIR LRO only if all the queues were configured for. */
-		for (i = 0; i < queues_n; ++i) {
-			if (!(*priv->rxqs)[queues[i]]->lro) {
-				lro = 0;
-				break;
-			}
-		}
-		memset(&tir_attr, 0, sizeof(tir_attr));
-		tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
-		tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
-		tir_attr.tunneled_offload_en = !!tunnel;
-		/* If needed, translate hash_fields bitmap to PRM format. */
-		if (hash_fields) {
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-			struct mlx5_rx_hash_field_select *rx_hash_field_select =
-				hash_fields & IBV_RX_HASH_INNER ?
-				&tir_attr.rx_hash_field_selector_inner :
-				&tir_attr.rx_hash_field_selector_outer;
-#else
-			struct mlx5_rx_hash_field_select *rx_hash_field_select =
-				&tir_attr.rx_hash_field_selector_outer;
-#endif
-
-			/* 1 bit: 0: IPv4, 1: IPv6. */
-			rx_hash_field_select->l3_prot_type =
-				!!(hash_fields & MLX5_IPV6_IBV_RX_HASH);
-			/* 1 bit: 0: TCP, 1: UDP. */
-			rx_hash_field_select->l4_prot_type =
-				!!(hash_fields & MLX5_UDP_IBV_RX_HASH);
-			/* Bitmask which sets which fields to use in RX Hash. */
-			rx_hash_field_select->selected_fields =
-				((!!(hash_fields & MLX5_L3_SRC_IBV_RX_HASH)) <<
-				MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_SRC_IP) |
-				(!!(hash_fields & MLX5_L3_DST_IBV_RX_HASH)) <<
-				MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_DST_IP |
-				(!!(hash_fields & MLX5_L4_SRC_IBV_RX_HASH)) <<
-				MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_SPORT |
-				(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
-				MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
-		}
-		if (rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN)
-			tir_attr.transport_domain = priv->sh->td->id;
-		else
-			tir_attr.transport_domain = priv->sh->tdn;
-		memcpy(tir_attr.rx_hash_toeplitz_key, rss_key,
-		       MLX5_RSS_HASH_KEY_LEN);
-		tir_attr.indirect_table = ind_tbl->rqt->id;
-		if (dev->data->dev_conf.lpbk_mode)
-			tir_attr.self_lb_block =
-				MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
-		if (lro) {
-			tir_attr.lro_timeout_period_usecs =
-				priv->config.lro.timeout;
-			tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
-			tir_attr.lro_enable_mask =
-				MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
-				MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
-		}
+		mlx5_devx_tir_attr_set
+			(dev, rss_key, rss_key_len, hash_fields,
+			 queues, queues_n, tunnel,
+			 rxq_ctrl->obj->type, ind_tbl->rqt->id,
+			 &tir_attr);
 		tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
 		if (!tir) {
 			DRV_LOG(ERR, "port %u cannot create DevX TIR",
@@ -2655,6 +2719,120 @@ mlx5_hrxq_get(struct rte_eth_dev *dev,
 	return 0;
 }

+/**
+ * Modify an Rx Hash queue configuration.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ * @param hrxq_idx
+ *   Index of the Hash Rx queue to modify.
+ * @param rss_key
+ *   RSS key for the Rx hash queue.
+ * @param rss_key_len
+ *   RSS key length.
+ * @param hash_fields
+ *   Verbs protocol hash field to make the RSS on.
+ * @param queues
+ *   Queues entering in hash queue. In case of empty hash_fields only the
+ *   first queue index will be taken for the indirection table.
+ * @param queues_n
+ *   Number of queues.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
+		 const uint8_t *rss_key, uint32_t rss_key_len,
+		 uint64_t hash_fields,
+		 const uint16_t *queues, uint32_t queues_n)
+{
+	int err;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[queues[0]];
+	struct mlx5_rxq_ctrl *rxq_ctrl =
+		container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+	struct mlx5_devx_modify_tir_attr modify_tir = {0};
+	struct mlx5_ind_table_obj *ind_tbl = NULL;
+	enum mlx5_ind_tbl_type rxq_obj_type =
+		rxq_ctrl->obj->type == MLX5_RXQ_OBJ_TYPE_IBV ?
+		MLX5_IND_TBL_TYPE_IBV : MLX5_IND_TBL_TYPE_DEVX;
+	struct mlx5_hrxq *hrxq =
+		mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
+
+	if (!hrxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	/* Validations. */
+	if (hrxq->ind_table->type != MLX5_IND_TBL_TYPE_DEVX ||
+	    rxq_obj_type != MLX5_IND_TBL_TYPE_DEVX) {
+		/* Shared action is supported by the DevX interface only. */
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (hrxq->rss_key_len != rss_key_len) {
+		/* rss_key_len is fixed at 40 bytes and not supposed to change. */
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+
+	queues_n = hash_fields ? queues_n : 1;
+	if (mlx5_ind_table_obj_match_queues(hrxq->ind_table,
+					    queues, queues_n)) {
+		ind_tbl = hrxq->ind_table;
+	} else {
+		ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
+		if (!ind_tbl)
+			ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n,
+							 rxq_obj_type);
+	}
+	if (!ind_tbl) {
+		rte_errno = ENOMEM;
+		return -rte_errno;
+	}
+
+	/*
+	 * Untested for modification fields:
+	 * - rx_hash_symmetric not set in hrxq_new(),
+	 * - rx_hash_fn set hard-coded in hrxq_new(),
+	 * - lro_xxx not set after rxq setup
+	 */
+	if (ind_tbl != hrxq->ind_table)
+		modify_tir.modify_bitmask |=
+			MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE;
+	if (hash_fields != hrxq->hash_fields ||
+	    hrxq->rss_key_len != rss_key_len ||
+	    memcmp(hrxq->rss_key, rss_key, rss_key_len))
+		modify_tir.modify_bitmask |=
+			MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH;
+
+	mlx5_devx_tir_attr_set(dev, rss_key, rss_key_len, hash_fields,
+			       queues, queues_n,
+			       0, /* N/A - tunnel modification unsupported */
+			       rxq_obj_type, ind_tbl->rqt->id,
+			       &modify_tir.tir);
+	if (mlx5_devx_cmd_modify_tir(hrxq->tir, &modify_tir)) {
+		DRV_LOG(ERR, "port %u cannot modify DevX TIR",
+			dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	if (ind_tbl != hrxq->ind_table) {
+		mlx5_ind_table_obj_release(dev, hrxq->ind_table);
+		hrxq->ind_table = ind_tbl;
+	}
+	hrxq->hash_fields = hash_fields;
+	memcpy(hrxq->rss_key, rss_key, rss_key_len);
+	return 0;
+error:
+	err = rte_errno;
+	if (ind_tbl != hrxq->ind_table)
+		mlx5_ind_table_obj_release(dev, ind_tbl);
+	rte_errno = err;
+	return -rte_errno;
+}
+
 /**
  * Release the hash Rx queue.
  *

diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 26621ff193..5cff28196c 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -424,6 +424,10 @@ struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
 void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
 uint64_t mlx5_get_rx_port_offloads(void);
 uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
+int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
+		     const uint8_t *rss_key, uint32_t rss_key_len,
+		     uint64_t hash_fields,
+		     const uint16_t *queues, uint32_t queues_n);

 /* mlx5_txq.c */
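For illustration, a minimal sketch (not part of the patch) of how a
PMD-internal caller could use mlx5_hrxq_modify() to retarget an existing
hash Rx queue to a new queue set; the hrxq_idx, rss_key and hash_fields
values are hypothetical placeholders for state taken from the shared RSS
action being updated:

uint16_t new_queues[] = { 1, 2, 3, 4 };
int ret;

/* hrxq_idx, rss_key and hash_fields come from the shared RSS action. */
ret = mlx5_hrxq_modify(dev, hrxq_idx,
		       rss_key, MLX5_RSS_HASH_KEY_LEN, /* fixed 40-byte key */
		       hash_fields,
		       new_queues, RTE_DIM(new_queues));
if (ret)
	DRV_LOG(ERR, "port %u cannot update hash Rx queue",
		dev->data->port_id);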