From patchwork Wed Sep 9 06:48:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77006 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3B869A04B1; Wed, 9 Sep 2020 08:49:52 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7177F1C12F; Wed, 9 Sep 2020 08:48:48 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 92BFE1C0CA for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300
Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYFu029471; Wed, 9 Sep 2020 09:48:34 +0300
From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com
Date: Wed, 9 Sep 2020 09:48:23 +0300 Message-Id: <1599634114-148013-2-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 01/12] ethdev: introduce sample action for rte flow
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

When using full offload, all traffic is handled by the HW and directed to the requested VF or to the wire, so the control application loses visibility on the traffic. There is therefore a need for an action that gives the control application some visibility.

The solution is to introduce a new action that samples the incoming traffic and sends a duplicated copy of the traffic, with the specified ratio, to the application, while the original packet continues to the target destination. The fraction of packets sampled is '1/ratio'; if the ratio is set to 1, the packets are completely mirrored. The sampled packet can be assigned a different set of actions from the original packet.

In order to support the sampled packet in rte_flow, the new rte_flow action definition RTE_FLOW_ACTION_TYPE_SAMPLE and structure rte_flow_action_sample are introduced.

Signed-off-by: Jiawei Wang Acked-by: Ori Kam Acked-by: Jerin Jacob Acked-by: Andrew Rybchenko
--- doc/guides/prog_guide/rte_flow.rst | 25 +++++++++++++++++++++++++ doc/guides/rel_notes/release_20_11.rst | 6 ++++++ lib/librte_ethdev/rte_flow.c | 1 + lib/librte_ethdev/rte_flow.h | 30 ++++++++++++++++++++++++++++++ 4 files changed, 62 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 3e5cd1e..f8f3f51 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2653,6 +2653,31 @@ timeout passed without any matching on the flow. 
| ``context`` | user input flow context | +--------------+---------------------------------+ +Action: ``SAMPLE`` +^^^^^^^^^^^^^^^^^^ + +Adds a sample action to a matched flow. + +The matching packets will be duplicated with the specified ``ratio`` and +applied with own set of actions with a fate action, the packets sampled +equals is '1/ratio'. All the packets continue to the target destination. + +When the ``ratio`` is set to 1 then the packets will be 100% mirrored. +``actions`` represent the different set of actions for the sampled or mirrored +packets, and must have a fate action. + +.. _table_rte_flow_action_sample: + +.. table:: SAMPLE + + +--------------+---------------------------------+ + | Field | Value | + +==============+=================================+ + | ``ratio`` | 32 bits sample ratio value | + +--------------+---------------------------------+ + | ``actions`` | sub-action list for sampling | + +--------------+---------------------------------+ + Negative types ~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index df227a1..7f99563 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -55,6 +55,12 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added flow-based traffic sampling support.** + + Added new action: ``RTE_FLOW_ACTION_TYPE_SAMPLE`` to duplicate the matching + packets with specified ratio, and apply with own set of actions with a fate + action. When the ratio is set to 1 then the packets will be 100% mirrored. + Removed Items ------------- diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c index f8fdd68..035671d 100644 --- a/lib/librte_ethdev/rte_flow.c +++ b/lib/librte_ethdev/rte_flow.c @@ -174,6 +174,7 @@ struct rte_flow_desc_data { MK_FLOW_ACTION(SET_IPV4_DSCP, sizeof(struct rte_flow_action_set_dscp)), MK_FLOW_ACTION(SET_IPV6_DSCP, sizeof(struct rte_flow_action_set_dscp)), MK_FLOW_ACTION(AGE, sizeof(struct rte_flow_action_age)), + MK_FLOW_ACTION(SAMPLE, sizeof(struct rte_flow_action_sample)), }; int diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index da8bfa5..fa70d40 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -2132,6 +2132,14 @@ enum rte_flow_action_type { * see enum RTE_ETH_EVENT_FLOW_AGED */ RTE_FLOW_ACTION_TYPE_AGE, + + /** + * The matching packets will be duplicated with specified ratio and + * applied with own set of actions with a fate action. + * + * See struct rte_flow_action_sample. + */ + RTE_FLOW_ACTION_TYPE_SAMPLE, }; /** @@ -2742,6 +2750,28 @@ struct rte_flow_action { struct rte_flow; /** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_SAMPLE + * + * Adds a sample action to a matched flow. + * + * The matching packets will be duplicated with specified ratio and applied + * with own set of actions with a fate action, the sampled packet could be + * redirected to queue or port. All the packets continue processing on the + * default flow path. + * + * When the sample ratio is set to 1 then the packets will be 100% mirrored. + * Additional action list be supported to add for sampled or mirrored packets. + */ +struct rte_flow_action_sample { + uint32_t ratio; /**< packets sampled equals to '1/ratio'. */ + const struct rte_flow_action *actions; + /**< sub-action list specific for the sampling hit cases. 
*/ +}; + +/** * Verbose error types. * * Most of them provide the type of the object referenced by struct From patchwork Wed Sep 9 06:48:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77002 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id A408AA04B1; Wed, 9 Sep 2020 08:49:14 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A21471C10C; Wed, 9 Sep 2020 08:48:43 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 8AA951C0BE for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYFv029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:24 +0300 Message-Id: <1599634114-148013-3-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 02/12] common/mlx5: glue for sample action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" rdma-core introduce a new DR sample action. Add the rdma-core commands in glue to create this action. Sample action is used for creating the sample object to implement the sampling/mirroring function. 
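
As a rough sketch of how the new glue entry point is meant to be driven from the PMD (illustrative only: the values, the 'next_tbl'/'sample_acts'/'n_acts' names and the warning path are placeholders; only mlx5_glue->dr_create_flow_action_sampler() and the mlx5dv_dr_flow_sampler_attr fields below are part of this patch):

    struct mlx5dv_dr_flow_sampler_attr attr = {
        .sample_ratio = 2,               /* sample 1 of every 2 packets */
        .default_next_table = next_tbl,  /* placeholder: normal-path DR table object */
        .num_sample_actions = n_acts,    /* number of entries in sample_acts */
        .sample_actions = sample_acts,   /* placeholder: array of DR sub-actions */
        .action = 0,                     /* optional packed set-action, 0 if unused */
    };
    void *sampler = mlx5_glue->dr_create_flow_action_sampler(&attr);
    if (sampler == NULL && errno == ENOTSUP)
        /* rdma-core has no mlx5dv_dr_action_create_flow_sampler(): the glue
         * stub built without HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE returns
         * NULL and sets errno to ENOTSUP. */
        DRV_LOG(WARNING, "flow sampler action is not supported by rdma-core");
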
Signed-off-by: Jiawei Wang --- drivers/common/mlx5/linux/meson.build | 2 ++ drivers/common/mlx5/linux/mlx5_glue.c | 15 +++++++++++++++ drivers/common/mlx5/linux/mlx5_glue.h | 13 +++++++++++++ drivers/common/mlx5/mlx5_prm.h | 10 ++++++++++ 4 files changed, 40 insertions(+) diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index 48e8ad6..1aa137d 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -172,6 +172,8 @@ has_sym_args = [ 'RDMA_NLDEV_ATTR_NDEV_INDEX' ], [ 'HAVE_MLX5_DR_FLOW_DUMP', 'infiniband/mlx5dv.h', 'mlx5dv_dump_dr_domain'], + [ 'HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE', 'infiniband/mlx5dv.h', + 'mlx5dv_dr_action_create_flow_sampler'], [ 'HAVE_MLX5DV_DR_MEM_RECLAIM', 'infiniband/mlx5dv.h', 'mlx5dv_dr_domain_set_reclaim_device_memory'], [ 'HAVE_DEVLINK', 'linux/devlink.h', 'DEVLINK_GENL_NAME' ], diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index fcf03e8..771a47c 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -1063,6 +1063,19 @@ #endif } +static void * +mlx5_glue_dr_create_flow_action_sampler( + struct mlx5dv_dr_flow_sampler_attr *attr) +{ +#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE + return mlx5dv_dr_action_create_flow_sampler(attr); +#else + (void)attr; + errno = ENOTSUP; + return NULL; +#endif +} + static int mlx5_glue_devx_query_eqn(struct ibv_context *ctx, uint32_t cpus, uint32_t *eqn) @@ -1339,6 +1352,8 @@ .devx_port_query = mlx5_glue_devx_port_query, .dr_dump_domain = mlx5_glue_dr_dump_domain, .dr_reclaim_domain_memory = mlx5_glue_dr_reclaim_domain_memory, + .dr_create_flow_action_sampler = + mlx5_glue_dr_create_flow_action_sampler, .devx_query_eqn = mlx5_glue_devx_query_eqn, .devx_create_event_channel = mlx5_glue_devx_create_event_channel, .devx_destroy_event_channel = mlx5_glue_devx_destroy_event_channel, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index 734ace2..a77d239 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -77,6 +77,7 @@ #ifndef HAVE_MLX5DV_DR enum mlx5dv_dr_domain_type { unused, }; struct mlx5dv_dr_domain; +struct mlx5dv_dr_action; #endif #ifndef HAVE_MLX5DV_DR_DEVX_PORT @@ -87,6 +88,16 @@ struct mlx5dv_dr_flow_meter_attr; #endif +#ifndef HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE +struct mlx5dv_dr_flow_sampler_attr { + uint32_t sample_ratio; + void *default_next_table; + size_t num_sample_actions; + struct mlx5dv_dr_action **sample_actions; + uint64_t action; +}; +#endif + #ifndef HAVE_IBV_DEVX_EVENT struct mlx5dv_devx_event_channel { int fd; }; struct mlx5dv_devx_async_event_hdr; @@ -309,6 +320,8 @@ struct mlx5_glue { const void *pp_context, uint32_t flags); void (*dv_free_pp)(struct mlx5dv_pp *pp); + void *(*dr_create_flow_action_sampler) + (struct mlx5dv_dr_flow_sampler_attr *attr); }; extern const struct mlx5_glue *mlx5_glue; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 563e7c8..0a270a2 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1479,6 +1479,16 @@ struct mlx5_ifc_virtio_emulation_cap_bits { u8 reserved_at_0[0x8000]; }; +struct mlx5_ifc_set_action_in_bits { + u8 action_type[0x4]; + u8 field[0xc]; + u8 reserved_at_10[0x3]; + u8 offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + u8 data[0x20]; +}; + struct mlx5_ifc_query_hca_cap_out_bits { u8 status[0x8]; u8 reserved_at_8[0x18]; From patchwork Wed 
Sep 9 06:48:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77007 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B31F4A04B1; Wed, 9 Sep 2020 08:50:01 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5CBD61C136; Wed, 9 Sep 2020 08:48:49 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id B89931C0D0 for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYFw029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:25 +0300 Message-Id: <1599634114-148013-4-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 03/12] common/mlx5: query sampler object capability via DevX X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Update function mlx5_devx_cmd_query_hca_attr() to add the NIC Flow Table attributes query, then get the log_max_flow_sampler_num from flow table properties. Add the related structs definition in mlx5_prm.h. Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- drivers/common/mlx5/mlx5_devx_cmds.c | 27 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 1 + drivers/common/mlx5/mlx5_prm.h | 51 ++++++++++++++++++++++++++++++++++++ 3 files changed, 79 insertions(+) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 7c81ae1..fd4e3f2 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -751,6 +751,33 @@ struct mlx5_devx_obj * if (!attr->eth_net_offloads) return 0; + /* Query Flow Sampler Capability From FLow Table Properties Layout. 
*/ + memset(in, 0, sizeof(in)); + memset(out, 0, sizeof(out)); + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + rc = mlx5_glue->devx_general_cmd(ctx, + in, sizeof(in), + out, sizeof(out)); + if (rc) + goto error; + status = MLX5_GET(query_hca_cap_out, out, status); + syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); + if (status) { + DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, " + "status %x, syndrome = %x", + status, syndrome); + attr->log_max_ft_sampler_num = 0; + return -1; + } + hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); + attr->log_max_ft_sampler_num = + MLX5_GET(flow_table_nic_cap, + hcattr, flow_table_properties.log_max_ft_sampler_num); + /* Query HCA offloads for Ethernet protocol. */ memset(in, 0, sizeof(in)); memset(out, 0, sizeof(out)); diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index 1c84cea..cfa7a7b 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -102,6 +102,7 @@ struct mlx5_hca_attr { uint32_t scatter_fcs_w_decap_disable:1; uint32_t regex:1; uint32_t regexp_num_of_engines; + uint32_t log_max_ft_sampler_num:8; struct mlx5_hca_qos_attr qos; struct mlx5_hca_vdpa_attr vdpa; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 0a270a2..b312b3d 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1036,6 +1036,7 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE = 0x0 << 1, MLX5_GET_HCA_CAP_OP_MOD_ETHERNET_OFFLOAD_CAPS = 0x1 << 1, MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, }; @@ -1470,12 +1471,62 @@ struct mlx5_ifc_virtio_emulation_cap_bits { u8 reserved_at_1c0[0x620]; }; +struct mlx5_ifc_flow_table_prop_layout_bits { + u8 ft_support[0x1]; + u8 flow_tag[0x1]; + u8 flow_counter[0x1]; + u8 flow_modify_en[0x1]; + u8 modify_root[0x1]; + u8 identified_miss_table[0x1]; + u8 flow_table_modify[0x1]; + u8 reformat[0x1]; + u8 decap[0x1]; + u8 reset_root_to_default[0x1]; + u8 pop_vlan[0x1]; + u8 push_vlan[0x1]; + u8 fpga_vendor_acceleration[0x1]; + u8 pop_vlan_2[0x1]; + u8 push_vlan_2[0x1]; + u8 reformat_and_vlan_action[0x1]; + u8 modify_and_vlan_action[0x1]; + u8 sw_owner[0x1]; + u8 reformat_l3_tunnel_to_l2[0x1]; + u8 reformat_l2_to_l3_tunnel[0x1]; + u8 reformat_and_modify_action[0x1]; + u8 reserved_at_15[0x9]; + u8 sw_owner_v2[0x1]; + u8 reserved_at_1f[0x1]; + u8 reserved_at_20[0x2]; + u8 log_max_ft_size[0x6]; + u8 log_max_modify_header_context[0x8]; + u8 max_modify_header_actions[0x8]; + u8 max_ft_level[0x8]; + u8 reserved_at_40[0x8]; + u8 log_max_ft_sampler_num[8]; + u8 metadata_reg_b_width[0x8]; + u8 metadata_reg_a_width[0x8]; + u8 reserved_at_60[0x18]; + u8 log_max_ft_num[0x8]; + u8 reserved_at_80[0x10]; + u8 log_max_flow_counter[0x8]; + u8 log_max_destination[0x8]; + u8 reserved_at_a0[0x18]; + u8 log_max_flow[0x8]; + u8 reserved_at_c0[0x140]; +}; + +struct mlx5_ifc_flow_table_nic_cap_bits { + u8 reserved_at_0[0x200]; + struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties; +}; + union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap; struct mlx5_ifc_per_protocol_networking_offload_caps_bits per_protocol_networking_offload_caps; struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; + 
struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; u8 reserved_at_0[0x8000]; };
From patchwork Wed Sep 9 06:48:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77000 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id AF4F4A04B1; Wed, 9 Sep 2020 08:48:55 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 23ACC1C0D1; Wed, 9 Sep 2020 08:48:41 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 751A0DE0 for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300
Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYFx029471; Wed, 9 Sep 2020 09:48:34 +0300
From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com
Date: Wed, 9 Sep 2020 09:48:26 +0300 Message-Id: <1599634114-148013-5-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 04/12] net/mlx5: add the validate sample action
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

Add the sample action validate function. Sample flows are supported in the NIC-RX and FDB domains. In NIC-RX the sample flow must include a destination TIR (queue) action, and only NIC-RX supports additional optional sub-actions. FDB does not support any optional sub-action; the sampled packets always go to the e-switch manager port. 
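
For reference, a minimal sketch of an application-side NIC-RX rule that would satisfy these checks (the queue indexes and the surrounding pattern are arbitrary placeholders; the point is that the sample sub-action list carries the mandatory QUEUE fate action and the ratio is non-zero):

    struct rte_flow_action_queue sample_queue = { .index = 1 };
    struct rte_flow_action sample_sub_acts[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &sample_queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_action_sample sample_conf = {
        .ratio = 2,                  /* mirror 1 of every 2 matching packets */
        .actions = sample_sub_acts,  /* sampled copy goes to queue 1 */
    };
    struct rte_flow_action_queue dest_queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample_conf },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &dest_queue }, /* original packet */
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

An FDB (transfer) rule, by contrast, would keep the sub-action list empty apart from the END action, since the sampled packets always go to the e-switch manager port.
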
Signed-off-by: Jiawei Wang --- drivers/net/mlx5/linux/mlx5_os.c | 14 +++++ drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 1 + drivers/net/mlx5/mlx5_flow_dv.c | 130 +++++++++++++++++++++++++++++++++++++++ 4 files changed, 146 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index bf1f82b..fb6ce4a 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1016,6 +1016,20 @@ } } #endif +#if defined(HAVE_MLX5DV_DR) && defined(HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE) + if (config->hca_attr.log_max_ft_sampler_num > 0 && + config->dv_flow_en) { + priv->sampler_en = 1; + DRV_LOG(DEBUG, "The Sampler enabled!\n"); + } else { + priv->sampler_en = 0; + if (!config->hca_attr.log_max_ft_sampler_num) + DRV_LOG(WARNING, "No available register for" + " Sampler."); + else + DRV_LOG(DEBUG, "DV flow is not supported!\n"); + } +#endif } if (config->tx_pp) { DRV_LOG(DEBUG, "Timestamp counter frequency %u kHz", diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index f29a12c..8e42483 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -701,6 +701,7 @@ struct mlx5_priv { unsigned int counter_fallback:1; /* Use counter fallback management. */ unsigned int mtr_en:1; /* Whether support meter. */ unsigned int mtr_reg_share:1; /* Whether support meter REG_C share. */ + unsigned int sampler_en:1; /* Whether support sampler. */ uint16_t domain_id; /* Switch domain identifier. */ uint16_t vport_id; /* Associated VF vport index (if any). */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 92301e4..41404de 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -196,6 +196,7 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_SET_IPV6_DSCP (1ull << 33) #define MLX5_FLOW_ACTION_AGE (1ull << 34) #define MLX5_FLOW_ACTION_DEFAULT_MISS (1ull << 35) +#define MLX5_FLOW_ACTION_SAMPLE (1ull << 36) #define MLX5_FLOW_FATE_ACTIONS \ (MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 58358ce..3df875d 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3992,6 +3992,127 @@ struct field_modify_info modify_tcp[] = { } /** + * Validate the sample action. + * + * @param[in] action_flags + * Holds the actions detected until now. + * @param[in] action + * Pointer to the sample action. + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] attr + * Attributes of flow that includes this action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_validate_action_sample(uint64_t action_flags, + const struct rte_flow_action *action, + struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_config *dev_conf = &priv->config; + const struct rte_flow_action_sample *sample = action->conf; + const struct rte_flow_action *act; + uint64_t sub_action_flags = 0; + int actions_n = 0; + int ret; + + if (!priv->config.devx || !priv->sampler_en) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "sample action not supported"); + if (!sample) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "configuration cannot be null"); + if (sample->ratio == 0) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "ratio value start from 1"); + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Duplicate sample actions set"); + if (action_flags & MLX5_FLOW_ACTION_METER) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "wrong action order, meter should " + "be after sample action"); + act = sample->actions; + for (; act->type != RTE_FLOW_ACTION_TYPE_END; act++) { + if (actions_n == MLX5_DV_MAX_NUMBER_OF_ACTIONS) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "too many actions"); + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + ret = mlx5_flow_validate_action_queue(act, + sub_action_flags, + dev, + attr, error); + if (ret < 0) + return ret; + sub_action_flags |= MLX5_FLOW_ACTION_QUEUE; + ++actions_n; + break; + case RTE_FLOW_ACTION_TYPE_MARK: + ret = flow_dv_validate_action_mark(dev, act, + sub_action_flags, + attr, error); + if (ret < 0) + return ret; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) + sub_action_flags |= MLX5_FLOW_ACTION_MARK | + MLX5_FLOW_ACTION_MARK_EXT; + else + sub_action_flags |= MLX5_FLOW_ACTION_MARK; + ++actions_n; + break; + case RTE_FLOW_ACTION_TYPE_COUNT: + ret = flow_dv_validate_action_count(dev, error); + if (ret < 0) + return ret; + sub_action_flags |= MLX5_FLOW_ACTION_COUNT; + ++actions_n; + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "Doesn't support optional " + "action"); + } + } + if (attr->ingress && !attr->transfer) { + if (!(sub_action_flags & MLX5_FLOW_ACTION_QUEUE)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "Ingress must has a dest " + "QUEUE for Sample"); + } else if (attr->egress && !attr->transfer) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "Sample Only support Ingress " + "or E-Switch"); + } else if (sample->actions->type != RTE_FLOW_ACTION_TYPE_END) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "E-Switch doesn't support any " + "optional action for sampling"); + } + return 0; +} + +/** * Find existing modify-header resource or create and register a new one. 
* * @param dev[in, out] @@ -5753,6 +5874,15 @@ struct field_modify_info modify_tcp[] = { action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; rw_act_num += MLX5_ACT_NUM_SET_DSCP; break; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + ret = flow_dv_validate_action_sample(action_flags, + actions, dev, + attr, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + ++actions_n; + break; default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
From patchwork Wed Sep 9 06:48:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77004 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id DC726A04B1; Wed, 9 Sep 2020 08:49:34 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2F25A1C11D; Wed, 9 Sep 2020 08:48:46 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 7E0EB1BF8F for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300
Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG0029471; Wed, 9 Sep 2020 09:48:34 +0300
From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com
Date: Wed, 9 Sep 2020 09:48:27 +0300 Message-Id: <1599634114-148013-6-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 05/12] net/mlx5: split sample flow into two sub flows
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

Add the sampler action resource struct definitions. A flow with the sample action will be split into two sub-flows: the prefix flow with the sample action, and the suffix flow with the remaining actions. For the prefix flow, add an extra tag action that writes a unique id to a metadata register; the suffix flow adds an extra tag item to match that unique id. 
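
As a worked example of the split (purely illustrative; the register 'reg' and the value 'id' stand for whatever mlx5_flow_get_reg_id(..., MLX5_APP_TAG, ...) and flow_qrss_get_id() return at run time), a NIC-RX rule of the form

    pattern: eth / ipv4 / end
    actions: sample(ratio 2, sub-actions: queue 1 / end) / queue 0 / end

would be rewritten roughly into

    prefix flow:  pattern: eth / ipv4 / end
                  actions: set_tag(reg, id) / sample(...) / end
    suffix flow:  pattern: tag(reg, id) / end
                  actions: queue 0 / end

so the suffix flow only matches packets that already passed through the prefix flow carrying the unique tag id.
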
Signed-off-by: Jiawei Wang --- drivers/net/mlx5/mlx5.c | 11 ++ drivers/net/mlx5/mlx5.h | 3 + drivers/net/mlx5/mlx5_flow.c | 306 ++++++++++++++++++++++++++++++++++++++++--- drivers/net/mlx5/mlx5_flow.h | 37 ++++++ 4 files changed, 341 insertions(+), 16 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 4a807fb..8d5f517 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -241,6 +241,17 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .free = mlx5_free, .type = "mlx5_jump_ipool", }, + { + .size = sizeof(struct mlx5_flow_dv_sample_resource), + .trunk_size = 64, + .grow_trunk = 3, + .grow_shift = 2, + .need_lock = 0, + .release_mem_en = 1, + .malloc = mlx5_malloc, + .free = mlx5_free, + .type = "mlx5_sample_ipool", + }, #endif { .size = sizeof(struct mlx5_flow_meter), diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 8e42483..9115391 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -39,6 +39,7 @@ enum mlx5_ipool_index { MLX5_IPOOL_TAG, /* Pool for tag resource. */ MLX5_IPOOL_PORT_ID, /* Pool for port id resource. */ MLX5_IPOOL_JUMP, /* Pool for jump resource. */ + MLX5_IPOOL_SAMPLE, /* Pool for sample resource. */ #endif MLX5_IPOOL_MTR, /* Pool for meter resource. */ MLX5_IPOOL_MCP, /* Pool for metadata resource. */ @@ -512,6 +513,7 @@ struct mlx5_flow_tbl_resource { /* Tables for metering splits should be added here. */ #define MLX5_MAX_TABLES_EXTERNAL (MLX5_MAX_TABLES - 3) #define MLX5_MAX_TABLES_FDB UINT16_MAX +#define MLX5_FLOW_TABLE_FACTOR 10 /* ID generation structure. */ struct mlx5_flow_id_pool { @@ -640,6 +642,7 @@ struct mlx5_dev_ctx_shared { struct mlx5_hlist *tag_table; uint32_t port_id_action_list; /* List of port ID actions. */ uint32_t push_vlan_action_list; /* List of push VLAN actions. */ + uint32_t sample_action_list; /* List of sample actions. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ struct mlx5_flow_default_miss_resource default_miss; /* Default miss action resource structure. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 4c29898..b5eec48 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3543,6 +3543,8 @@ struct mlx5_flow_tunnel_info { * Pointer to return the created subflow, may be NULL. * @param[in] prefix_layers * Prefix subflow layers, may be 0. + * @param[in] prefix_mark + * Prefix subflow mark flag, may be 0. * @param[in] attr * Flow rule attributes. * @param[in] items @@ -3563,6 +3565,7 @@ struct mlx5_flow_tunnel_info { struct rte_flow *flow, struct mlx5_flow **sub_flow, uint64_t prefix_layers, + uint32_t prefix_mark, const struct rte_flow_attr *attr, const struct rte_flow_item items[], const struct rte_flow_action actions[], @@ -3582,10 +3585,13 @@ struct mlx5_flow_tunnel_info { dev_flow->handle, next); /* * If dev_flow is as one of the suffix flow, some actions in suffix - * flow may need some user defined item layer flags. + * flow may need some user defined item layer flags, and pass the + * Metadate rxq mark flag to suffix flow as well. */ if (prefix_layers) dev_flow->handle->layers = prefix_layers; + if (prefix_mark) + dev_flow->handle->mark = 1; if (sub_flow) *sub_flow = dev_flow; return flow_drv_translate(dev, dev_flow, attr, items, actions, error); @@ -3915,6 +3921,143 @@ struct mlx5_flow_tunnel_info { return 0; } + +/** + * Check the match action from the action list. + * + * @param[in] actions + * Pointer to the list of actions. 
+ * @param[in] action + * The action to be check if exist. + * + * @return + * > 0 the total number of actions. + * 0 if not found match action in action list. + */ +static int +flow_check_match_action(const struct rte_flow_action actions[], + enum rte_flow_action_type action) +{ + int actions_n = 0; + int flag = 0; + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + if (actions->type == action) + flag = 1; + actions_n++; + } + /* Count RTE_FLOW_ACTION_TYPE_END. */ + return flag ? actions_n + 1 : 0; +} + +/** + * Split the sample flow. + * + * As sample flow will split to two sub flow, sample flow with + * sample action, the other actions will move to new suffix flow. + * + * Also add unique tag id with tag action in the sample flow, + * the same tag id will be as match in the suffix flow. + * + * @param dev + * Pointer to Ethernet device. + * @param[in] attr + * Flow rule attributes. + * @param[out] sfx_items + * Suffix flow match items (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] actions_sfx + * Suffix flow actions. + * @param[out] actions_pre + * Prefix flow actions. + * + * @return + * 0 on success, or unique flow_id, a negative errno value + * otherwise and rte_errno is set. + */ +static int +flow_sample_split_prep(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct rte_flow_item sfx_items[], + const struct rte_flow_action actions[], + struct rte_flow_action actions_sfx[], + struct rte_flow_action actions_pre[]) +{ + struct mlx5_rte_flow_action_set_tag *set_tag; + struct mlx5_rte_flow_item_tag *tag_spec; + struct mlx5_rte_flow_item_tag *tag_mask; + struct rte_flow_item *tag_item; + struct rte_flow_action *tag_action = NULL; + bool pre_sample = true; + struct rte_flow_error error; + uint32_t tag_id = 0; + int ret; + + /* Prepare the actions for prefix and suffix flow. */ + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + struct rte_flow_action **action_cur = NULL; + + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_SAMPLE: + if (!attr->transfer) { + /* Add the extra tag action first for NIC-RX. */ + tag_action = actions_pre; + tag_action->type = (enum rte_flow_action_type) + MLX5_RTE_FLOW_ACTION_TYPE_TAG; + actions_pre++; + } + break; + case RTE_FLOW_ACTION_TYPE_JUMP: + case RTE_FLOW_ACTION_TYPE_METER: + case RTE_FLOW_ACTION_TYPE_QUEUE: + action_cur = &actions_sfx; + break; + default: + break; + } + if (pre_sample && !action_cur) + action_cur = &actions_pre; + else + action_cur = &actions_sfx; + memcpy(*action_cur, actions, sizeof(struct rte_flow_action)); + (*action_cur)++; + if (actions->type == RTE_FLOW_ACTION_TYPE_SAMPLE) + pre_sample = false; + } + /* Add end action to the actions. */ + actions_sfx->type = RTE_FLOW_ACTION_TYPE_END; + actions_pre->type = RTE_FLOW_ACTION_TYPE_END; + if (!attr->transfer) { + actions_pre++; + /* Set the tag. */ + set_tag = (void *)actions_pre; + ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, 0, &error); + if (ret < 0) + return ret; + set_tag->id = ret; + tag_id = flow_qrss_get_id(dev); + set_tag->data = tag_id; + assert(tag_action); + tag_action->conf = set_tag; + /* Prepare the suffix subflow items. 
*/ + tag_item = sfx_items++; + sfx_items->type = RTE_FLOW_ITEM_TYPE_END; + sfx_items++; + tag_spec = (struct mlx5_rte_flow_item_tag *)sfx_items; + tag_spec->data = tag_id; + tag_spec->id = set_tag->id; + tag_mask = tag_spec + 1; + tag_mask->data = UINT32_MAX; + tag_item->type = (enum rte_flow_item_type) + MLX5_RTE_FLOW_ITEM_TYPE_TAG; + tag_item->spec = tag_spec; + tag_item->last = NULL; + tag_item->mask = tag_mask; + } + return tag_id; +} + /** * The splitting for metadata feature. * @@ -3931,6 +4074,8 @@ struct mlx5_flow_tunnel_info { * Parent flow structure pointer. * @param[in] prefix_layers * Prefix flow layer flags. + * @param[in] prefix_mark + * Prefix subflow mark flag, may be 0. * @param[in] attr * Flow rule attributes. * @param[in] items @@ -3950,6 +4095,7 @@ struct mlx5_flow_tunnel_info { flow_create_split_metadata(struct rte_eth_dev *dev, struct rte_flow *flow, uint64_t prefix_layers, + uint32_t prefix_mark, const struct rte_flow_attr *attr, const struct rte_flow_item items[], const struct rte_flow_action actions[], @@ -3973,8 +4119,9 @@ struct mlx5_flow_tunnel_info { config->dv_xmeta_en == MLX5_XMETA_MODE_LEGACY || !mlx5_flow_ext_mreg_supported(dev)) return flow_create_split_inner(dev, flow, NULL, prefix_layers, - attr, items, actions, external, - flow_idx, error); + prefix_mark, attr, items, + actions, external, flow_idx, + error); actions_n = flow_parse_metadata_split_actions_info(actions, &qrss, &encap_idx); if (qrss) { @@ -4059,7 +4206,8 @@ struct mlx5_flow_tunnel_info { goto exit; } /* Add the unmodified original or prefix subflow. */ - ret = flow_create_split_inner(dev, flow, &dev_flow, prefix_layers, attr, + ret = flow_create_split_inner(dev, flow, &dev_flow, prefix_layers, + prefix_mark, attr, items, ext_actions ? ext_actions : actions, external, flow_idx, error); if (ret < 0) @@ -4122,7 +4270,7 @@ struct mlx5_flow_tunnel_info { } dev_flow = NULL; /* Add suffix subflow to execute Q/RSS. */ - ret = flow_create_split_inner(dev, flow, &dev_flow, layers, + ret = flow_create_split_inner(dev, flow, &dev_flow, layers, 0, &q_attr, mtr_sfx ? items : q_items, q_actions, external, flow_idx, error); @@ -4158,6 +4306,10 @@ struct mlx5_flow_tunnel_info { * Pointer to Ethernet device. * @param[in] flow * Parent flow structure pointer. + * @param[in] prefix_layers + * Prefix subflow layers, may be 0. + * @param[in] prefix_mark + * Prefix subflow mark flag, may be 0. * @param[in] attr * Flow rule attributes. * @param[in] items @@ -4175,12 +4327,14 @@ struct mlx5_flow_tunnel_info { */ static int flow_create_split_meter(struct rte_eth_dev *dev, - struct rte_flow *flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - bool external, uint32_t flow_idx, - struct rte_flow_error *error) + struct rte_flow *flow, + uint64_t prefix_layers, + uint32_t prefix_mark, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + bool external, uint32_t flow_idx, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; struct rte_flow_action *sfx_actions = NULL; @@ -4223,8 +4377,10 @@ struct mlx5_flow_tunnel_info { goto exit; } /* Add the prefix subflow. 
*/ - ret = flow_create_split_inner(dev, flow, &dev_flow, 0, attr, - items, pre_actions, external, + ret = flow_create_split_inner(dev, flow, &dev_flow, + prefix_layers, 0, + attr, items, + pre_actions, external, flow_idx, error); if (ret) { ret = -rte_errno; @@ -4239,8 +4395,10 @@ struct mlx5_flow_tunnel_info { /* Add the prefix subflow. */ ret = flow_create_split_metadata(dev, flow, dev_flow ? flow_get_prefix_layer_flags(dev_flow) : - 0, &sfx_attr, - sfx_items ? sfx_items : items, + prefix_layers, dev_flow ? + dev_flow->handle->mark : prefix_mark, + &sfx_attr, sfx_items ? + sfx_items : items, sfx_actions ? sfx_actions : actions, external, flow_idx, error); exit: @@ -4250,6 +4408,122 @@ struct mlx5_flow_tunnel_info { } /** + * The splitting for sample feature. + * + * The sample flow will be split to two flows as prefix and + * suffix flow. + * + * @param dev + * Pointer to Ethernet device. + * @param[in] flow + * Parent flow structure pointer. + * @param[in] attr + * Flow rule attributes. + * @param[in] items + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[in] external + * This flow rule is created by request external to PMD. + * @param[in] flow_idx + * This memory pool index to the flow. + * @param[out] error + * Perform verbose error reporting if not NULL. + * @return + * 0 on success, negative value otherwise + */ +static int +flow_create_split_sample(struct rte_eth_dev *dev, + struct rte_flow *flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + bool external, uint32_t flow_idx, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action *sfx_actions = NULL; + struct rte_flow_action *pre_actions = NULL; + struct rte_flow_item *sfx_items = NULL; + struct mlx5_flow *dev_flow = NULL; + struct rte_flow_attr sfx_attr = *attr; +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + struct mlx5_flow_dv_sample_resource *sample_res; + struct mlx5_flow_tbl_data_entry *sfx_tbl_data; + struct mlx5_flow_tbl_resource *sfx_tbl; + union mlx5_flow_tbl_key sfx_table_key; +#endif + size_t act_size; + size_t item_size; + int32_t tag_id = 0; + int actions_n = 0; + int ret = 0; + + if (priv->sampler_en) + actions_n = flow_check_match_action(actions, + RTE_FLOW_ACTION_TYPE_SAMPLE); + if (actions_n) { + /* The prefix actions must includes sample, tag, end. */ + act_size = sizeof(struct rte_flow_action) * (actions_n * 2 + 1) + + sizeof(struct mlx5_rte_flow_action_set_tag); + /* tag, end. */ +#define SAMPLE_SUFFIX_ITEM 2 + item_size = sizeof(struct rte_flow_item) * SAMPLE_SUFFIX_ITEM + + sizeof(struct mlx5_rte_flow_item_tag) * 2; + sfx_actions = rte_zmalloc(__func__, (act_size + item_size), 0); + if (!sfx_actions) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "no memory to split " + "sample flow"); + if (!attr->transfer) + sfx_items = (struct rte_flow_item *)((char *)sfx_actions + + act_size); + pre_actions = sfx_actions + actions_n; + tag_id = flow_sample_split_prep(dev, attr, sfx_items, + actions, sfx_actions, + pre_actions); + if (tag_id < 0 || (!attr->transfer && !tag_id)) { + ret = -rte_errno; + goto exit; + } + /* Add the prefix subflow. 
*/ + ret = flow_create_split_inner(dev, flow, &dev_flow, 0, 0, attr, + items, pre_actions, external, + flow_idx, error); + if (ret) { + ret = -rte_errno; + goto exit; + } + dev_flow->handle->split_flow_id = tag_id; +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + /* Set the sfx group attr. */ + sample_res = (struct mlx5_flow_dv_sample_resource *) + dev_flow->dv.sample_res; + sfx_tbl = (struct mlx5_flow_tbl_resource *) + sample_res->normal_path_tbl; + sfx_tbl_data = container_of(sfx_tbl, + struct mlx5_flow_tbl_data_entry, tbl); + sfx_table_key.v64 = sfx_tbl_data->entry.key; + sfx_attr.group = sfx_attr.transfer ? + (sfx_table_key.table_id - 1) : + sfx_table_key.table_id; +#endif + } + /* Add the suffix subflow. */ + ret = flow_create_split_meter(dev, flow, dev_flow ? + flow_get_prefix_layer_flags(dev_flow) : 0, + dev_flow ? dev_flow->handle->mark : 0, + &sfx_attr, sfx_items ? sfx_items : items, + sfx_actions ? sfx_actions : actions, + external, flow_idx, error); +exit: + if (sfx_actions) + rte_free(sfx_actions); + return ret; +} + +/** * Split the flow to subflow set. The splitters might be linked * in the chain, like this: * flow_create_split_outer() calls: @@ -4297,7 +4571,7 @@ struct mlx5_flow_tunnel_info { { int ret; - ret = flow_create_split_meter(dev, flow, attr, items, + ret = flow_create_split_sample(dev, flow, attr, items, actions, external, flow_idx, error); MLX5_ASSERT(ret <= 0); return ret; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 41404de..9c609cb 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -507,6 +507,39 @@ struct mlx5_flow_tbl_data_entry { uint32_t idx; /**< index for the indexed mempool. */ }; +/* Sub rdma-core actions list. */ +struct mlx5_flow_sub_actions_list { + uint32_t actions_num; /**< Number of sample actions. */ + uint64_t action_flags; + void *dr_queue_action; + void *dr_tag_action; + void *dr_cnt_action; +}; + +/* Sample sub-actions resource list. */ +struct mlx5_flow_sub_actions_idx { + uint32_t rix_hrxq; /**< Hash Rx queue object index. */ + uint32_t rix_tag; /**< Index to the tag action. */ + uint32_t cnt; +}; + +/* Sample action resource structure. */ +struct mlx5_flow_dv_sample_resource { + ILIST_ENTRY(uint32_t)next; /**< Pointer to next element. */ + rte_atomic32_t refcnt; /**< Reference counter. */ + void *verbs_action; /**< Verbs sample action object. */ + uint8_t ft_type; /** Flow Table Type */ + uint32_t ft_id; /** Flow Table Level */ + uint32_t ratio; /** Sample Ratio */ + uint64_t set_action; /** Restore reg_c0 value */ + void *normal_path_tbl; /** Flow Table pointer */ + void *default_miss; /** default_miss dr_action. */ + struct mlx5_flow_sub_actions_idx sample_idx; + /**< Action index resources. */ + struct mlx5_flow_sub_actions_list sample_act; + /**< Action resources. */ +}; + /* Verbs specification header. */ struct ibv_spec_header { enum ibv_flow_spec_type type; @@ -539,6 +572,8 @@ struct mlx5_flow_handle_dv { /**< Index to push VLAN action resource in cache. */ uint32_t rix_tag; /**< Index to the tag action. */ + uint32_t rix_sample; + /**< Index to sample action resource in cache. */ } __rte_packed; /** Device flow handle structure: used both for creating & destroying. */ @@ -604,6 +639,8 @@ struct mlx5_flow_dv_workspace { /**< Pointer to the jump action resource. */ struct mlx5_flow_dv_match_params value; /**< Holds the value that the packet is compared to. */ + struct mlx5_flow_dv_sample_resource *sample_res; + /**< Pointer to the sample action resource. 
*/ };
From patchwork Wed Sep 9 06:48:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77003 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 58DAAA04B1; Wed, 9 Sep 2020 08:49:22 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 070F01C116; Wed, 9 Sep 2020 08:48:45 +0200 (CEST)
Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 8E3811C0C9 for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST)
Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300
Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG1029471; Wed, 9 Sep 2020 09:48:34 +0300
From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com
Date: Wed, 9 Sep 2020 09:48:28 +0300 Message-Id: <1599634114-148013-7-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com>
Subject: [dpdk-dev] [PATCH v6 06/12] net/mlx5: update translate function for sample action
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

Translate the sample action attributes, which include the sample ratio and the sub-action list, then create the sample DR action. The metadata register value would be lost in the default path after the sampler in FDB due to a CX5 HW limitation. Since the source vport is also shared with metadata register c0, the MLX5 PMD passes the source vport set-action to rdma-core, and rdma-core restores the reg_c0 value after the sampler. 
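
In rough terms, for the FDB case the translation below ends up handing rdma-core a sampler attribute of the following shape (a sketch of what flow_dv_translate_action_sample() and flow_dv_sample_resource_register() build, not additional code; 'n' and the table pointer stand for values computed inside those functions):

    struct mlx5dv_dr_flow_sampler_attr attr = {
        .sample_ratio       = sample->ratio,        /* from the rte_flow SAMPLE action */
        .default_next_table = normal_path_tbl->obj, /* original packet continues here */
        .num_sample_actions = n,                    /* sampled-copy actions plus default-miss */
        .sample_actions     = sample_dv_actions,
        .action             = set_action,           /* SET reg_c0 = vport_meta_tag, re-applied after sampling */
    };
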
Signed-off-by: Jiawei Wang --- drivers/net/mlx5/mlx5_flow.c | 16 +- drivers/net/mlx5/mlx5_flow.h | 14 +- drivers/net/mlx5/mlx5_flow_dv.c | 507 +++++++++++++++++++++++++++++++++++++++- 3 files changed, 515 insertions(+), 22 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index b5eec48..ce03210 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -4637,10 +4637,14 @@ struct mlx5_flow_tunnel_info { int hairpin_flow; uint32_t hairpin_id = 0; struct rte_flow_attr attr_tx = { .priority = 0 }; + struct rte_flow_attr attr_factor = {0}; int ret; - hairpin_flow = flow_check_hairpin_split(dev, attr, actions); - ret = flow_drv_validate(dev, attr, items, p_actions_rx, + memcpy((void *)&attr_factor, (const void *)attr, sizeof(*attr)); + if (external) + attr_factor.group *= MLX5_FLOW_TABLE_FACTOR; + hairpin_flow = flow_check_hairpin_split(dev, &attr_factor, actions); + ret = flow_drv_validate(dev, &attr_factor, items, p_actions_rx, external, hairpin_flow, error); if (ret < 0) return 0; @@ -4659,7 +4663,7 @@ struct mlx5_flow_tunnel_info { rte_errno = ENOMEM; goto error_before_flow; } - flow->drv_type = flow_get_drv_type(dev, attr); + flow->drv_type = flow_get_drv_type(dev, &attr_factor); if (hairpin_id != 0) flow->hairpin_flow_id = hairpin_id; MLX5_ASSERT(flow->drv_type > MLX5_FLOW_TYPE_MIN && @@ -4705,7 +4709,7 @@ struct mlx5_flow_tunnel_info { * depending on configuration. In the simplest * case it just creates unmodified original flow. */ - ret = flow_create_split_outer(dev, flow, attr, + ret = flow_create_split_outer(dev, flow, &attr_factor, buf->entry[i].pattern, p_actions_rx, external, idx, error); @@ -4742,8 +4746,8 @@ struct mlx5_flow_tunnel_info { * the egress Flows belong to the different device and * copy table should be updated in peer NIC Rx domain. */ - if (attr->ingress && - (external || attr->group != MLX5_FLOW_MREG_CP_TABLE_GROUP)) { + if (attr_factor.ingress && + (external || attr_factor.group != MLX5_FLOW_MREG_CP_TABLE_GROUP)) { ret = flow_mreg_update_copy_table(dev, flow, actions, error); if (ret) goto error; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 9c609cb..50e230f 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -366,6 +366,13 @@ enum mlx5_flow_fate_type { MLX5_FLOW_FATE_MAX, }; +/* + * Max number of actions per DV flow. + * See CREATE_FLOW_MAX_FLOW_ACTIONS_SUPPORTED + * in rdma-core file providers/mlx5/verbs.c. + */ +#define MLX5_DV_MAX_NUMBER_OF_ACTIONS 8 + /* Matcher PRM representation */ struct mlx5_flow_dv_match_params { size_t size; @@ -614,13 +621,6 @@ struct mlx5_flow_handle { #define MLX5_FLOW_HANDLE_VERBS_SIZE (sizeof(struct mlx5_flow_handle)) #endif -/* - * Max number of actions per DV flow. - * See CREATE_FLOW_MAX_FLOW_ACTIONS_SUPPORTED - * in rdma-core file providers/mlx5/verbs.c. - */ -#define MLX5_DV_MAX_NUMBER_OF_ACTIONS 8 - /** Device flow structure only for DV flow creation. */ struct mlx5_flow_dv_workspace { uint32_t group; /**< The group index. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 3df875d..2241e36 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -76,6 +76,10 @@ static int flow_dv_default_miss_resource_release(struct rte_eth_dev *dev); +static int +flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, + uint32_t encap_decap_idx); + /** * Initialize flow attributes structure according to flow items' types. 
* @@ -8230,6 +8234,386 @@ struct field_modify_info modify_tcp[] = { } /** + * Create an Rx Hash queue. + * + * @param dev + * Pointer to Ethernet device. + * @param[in] dev_flow + * Pointer to the mlx5_flow. + * @param[in] rss_desc + * Pointer to the mlx5_flow_rss_desc. + * @param[out] hrxq_idx + * Hash Rx queue index. + * + * @return + * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set. + */ +static struct mlx5_hrxq * +flow_dv_handle_rx_queue(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + struct mlx5_flow_rss_desc *rss_desc, + uint32_t *hrxq_idx) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_handle *dh = dev_flow->handle; + struct mlx5_hrxq *hrxq; + + MLX5_ASSERT(rss_desc->queue_num); + *hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, + rss_desc->queue_num); + if (!*hrxq_idx) { + *hrxq_idx = mlx5_hrxq_new + (dev, rss_desc->key, + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, + rss_desc->queue_num, + !!(dh->layers & + MLX5_FLOW_LAYER_TUNNEL)); + if (!*hrxq_idx) + return NULL; + } + hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], + *hrxq_idx); + return hrxq; +} + +/** + * Find existing sample resource or create and register a new one. + * + * @param[in, out] dev + * Pointer to rte_eth_dev structure. + * @param[in] attr + * Attributes of flow that includes this item. + * @param[in] resource + * Pointer to sample resource. + * @parm[in, out] dev_flow + * Pointer to the dev_flow. + * @param[in, out] sample_dv_actions + * Pointer to sample actions list. + * @param[out] error + * pointer to error structure. + * + * @return + * 0 on success otherwise -errno and errno is set. + */ +static int +flow_dv_sample_resource_register(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_flow_dv_sample_resource *resource, + struct mlx5_flow *dev_flow, + void **sample_dv_actions, + struct rte_flow_error *error) +{ + struct mlx5_flow_dv_sample_resource *cache_resource; + struct mlx5dv_dr_flow_sampler_attr sampler_attr; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_flow_tbl_resource *tbl; + uint32_t idx = 0; + const uint32_t next_ft_step = 1; + uint32_t next_ft_id = resource->ft_id + next_ft_step; + + /* Lookup a matching resource from cache. */ + ILIST_FOREACH(sh->ipool[MLX5_IPOOL_SAMPLE], sh->sample_action_list, + idx, cache_resource, next) { + if (resource->ratio == cache_resource->ratio && + resource->ft_type == cache_resource->ft_type && + resource->ft_id == cache_resource->ft_id && + resource->set_action == cache_resource->set_action && + !memcmp((void *)&resource->sample_act, + (void *)&cache_resource->sample_act, + sizeof(struct mlx5_flow_sub_actions_list))) { + DRV_LOG(DEBUG, "sample resource %p: refcnt %d++", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + rte_atomic32_inc(&cache_resource->refcnt); + dev_flow->handle->dvh.rix_sample = idx; + dev_flow->dv.sample_res = cache_resource; + return 0; + } + } + /* Register new sample resource. 
*/ + cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_SAMPLE], + &dev_flow->handle->dvh.rix_sample); + if (!cache_resource) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate resource memory"); + *cache_resource = *resource; + /* Create normal path table level */ + tbl = flow_dv_tbl_resource_get(dev, next_ft_id, + attr->egress, attr->transfer, error); + if (!tbl) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "fail to create normal path table " + "for sample"); + goto error; + } + cache_resource->normal_path_tbl = tbl; + if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { + cache_resource->default_miss = + mlx5_glue->dr_create_flow_action_default_miss(); + if (!cache_resource->default_miss) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot create default miss " + "action"); + goto error; + } + sample_dv_actions[resource->sample_act.actions_num++] = + cache_resource->default_miss; + } + /* Create a DR sample action */ + sampler_attr.sample_ratio = cache_resource->ratio; + sampler_attr.default_next_table = tbl->obj; + sampler_attr.num_sample_actions = resource->sample_act.actions_num; + sampler_attr.sample_actions = (struct mlx5dv_dr_action **) + &sample_dv_actions[0]; + sampler_attr.action = cache_resource->set_action; + cache_resource->verbs_action = + mlx5_glue->dr_create_flow_action_sampler(&sampler_attr); + if (!cache_resource->verbs_action) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "cannot create sample action"); + goto error; + } + rte_atomic32_init(&cache_resource->refcnt); + rte_atomic32_inc(&cache_resource->refcnt); + ILIST_INSERT(sh->ipool[MLX5_IPOOL_SAMPLE], &sh->sample_action_list, + dev_flow->handle->dvh.rix_sample, cache_resource, + next); + dev_flow->dv.sample_res = cache_resource; + DRV_LOG(DEBUG, "new sample resource %p: refcnt %d++", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + return 0; +error: + if (cache_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { + if (cache_resource->default_miss) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->default_miss)); + } else { + if (cache_resource->sample_idx.rix_hrxq && + !mlx5_hrxq_release(dev, + cache_resource->sample_idx.rix_hrxq)) + cache_resource->sample_idx.rix_hrxq = 0; + if (cache_resource->sample_idx.rix_tag && + !flow_dv_tag_release(dev, + cache_resource->sample_idx.rix_tag)) + cache_resource->sample_idx.rix_tag = 0; + if (cache_resource->sample_idx.cnt) { + flow_dv_counter_release(dev, + cache_resource->sample_idx.cnt); + cache_resource->sample_idx.cnt = 0; + } + } + if (cache_resource->normal_path_tbl) + flow_dv_tbl_resource_release(dev, + cache_resource->normal_path_tbl); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_SAMPLE], + dev_flow->handle->dvh.rix_sample); + dev_flow->handle->dvh.rix_sample = 0; + return -rte_errno; +} + +/** + * Convert Sample action to DV specification. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to action structure. + * @param[in, out] dev_flow + * Pointer to the mlx5_flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in, out] sample_actions + * Pointer to sample actions list. + * @param[in, out] res + * Pointer to sample resource. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_translate_action_sample(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + void **sample_actions, + struct mlx5_flow_dv_sample_resource *res, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_action_sample *sample_action; + const struct rte_flow_action *sub_actions; + const struct rte_flow_action_queue *queue; + struct mlx5_flow_sub_actions_list *sample_act; + struct mlx5_flow_sub_actions_idx *sample_idx; + struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *) + priv->rss_desc) + [!!priv->flow_nested_idx]; + uint64_t action_flags = 0; + union { + uint32_t action_in[MLX5_ST_SZ_DW(set_action_in)]; + uint64_t set_action; + } action_ctx = { .set_action = 0 }; + + sample_act = &res->sample_act; + sample_idx = &res->sample_idx; + sample_action = (const struct rte_flow_action_sample *)action->conf; + res->ratio = sample_action->ratio; + sub_actions = sample_action->actions; + for (; sub_actions->type != RTE_FLOW_ACTION_TYPE_END; sub_actions++) { + int type = sub_actions->type; + uint32_t pre_rix = 0; + void *pre_r; + switch (type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + struct mlx5_hrxq *hrxq; + uint32_t hrxq_idx; + + queue = sub_actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + hrxq = flow_dv_handle_rx_queue(dev, dev_flow, + rss_desc, &hrxq_idx); + if (!hrxq) + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create fate queue"); + sample_act->dr_queue_action = hrxq->action; + sample_idx->rix_hrxq = hrxq_idx; + sample_actions[sample_act->actions_num++] = + hrxq->action; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + if (action_flags & MLX5_FLOW_ACTION_MARK) + dev_flow->handle->rix_hrxq = hrxq_idx; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_QUEUE; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + uint32_t tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (sub_actions->conf))->id); + dev_flow->handle->mark = 1; + pre_rix = dev_flow->handle->dvh.rix_tag; + /* Save the mark resource before sample */ + pre_r = dev_flow->dv.tag_resource; + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) + return -rte_errno; + MLX5_ASSERT(dev_flow->dv.tag_resource); + sample_act->dr_tag_action = + dev_flow->dv.tag_resource->action; + sample_idx->rix_tag = + dev_flow->handle->dvh.rix_tag; + sample_actions[sample_act->actions_num++] = + sample_act->dr_tag_action; + /* Recover the mark resource after sample */ + dev_flow->dv.tag_resource = pre_r; + dev_flow->handle->dvh.rix_tag = pre_rix; + action_flags |= MLX5_FLOW_ACTION_MARK; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + uint32_t counter; + + counter = flow_dv_translate_create_counter(dev, + dev_flow, sub_actions->conf, 0); + if (!counter) + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create counter" + " object."); + sample_idx->cnt = counter; + sample_act->dr_cnt_action = + (flow_dv_counter_get_by_idx(dev, + counter, NULL))->action; + sample_actions[sample_act->actions_num++] = + sample_act->dr_cnt_action; + action_flags |= MLX5_FLOW_ACTION_COUNT; + break; + } + default: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Not support for sampler action"); + } + } + sample_act->action_flags = action_flags; + res->ft_id = dev_flow->dv.group; + if (attr->transfer) { + 
res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + MLX5_SET(set_action_in, action_ctx.action_in, action_type, + MLX5_MODIFICATION_TYPE_SET); + MLX5_SET(set_action_in, action_ctx.action_in, field, + MLX5_MODI_META_REG_C_0); + MLX5_SET(set_action_in, action_ctx.action_in, data, + priv->vport_meta_tag); + res->set_action = action_ctx.set_action; + } else if (attr->ingress) { + res->ft_type = MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + } + return 0; +} + +/** + * Convert Sample action to DV specification. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the mlx5_flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in, out] res + * Pointer to sample resource. + * @param[in] sample_actions + * Pointer to sample path actions list. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_create_action_sample(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + struct mlx5_flow_dv_sample_resource *res, + void **sample_actions, + struct rte_flow_error *error) +{ + if (flow_dv_sample_resource_register(dev, attr, res, dev_flow, + sample_actions, error)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "can't create sample action"); + return 0; +} + +/** * Fill the flow with DV spec, lock free * (mutex should be acquired by caller). * @@ -8293,9 +8677,13 @@ struct field_modify_info modify_tcp[] = { void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + uint32_t sample_act_pos = UINT32_MAX; uint32_t table; int ret = 0; + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : MLX5DV_FLOW_TABLE_TYPE_NIC_RX; ret = mlx5_flow_group_to_table(attr, dev_flow->external, attr->group, @@ -8314,7 +8702,6 @@ struct field_modify_info modify_tcp[] = { const struct rte_flow_action_rss *rss; const struct rte_flow_action *action = actions; const uint8_t *rss_key; - const struct rte_flow_action_jump *jump_data; const struct rte_flow_action_meter *mtr; struct mlx5_flow_tbl_resource *tbl; uint32_t port_id = 0; @@ -8322,6 +8709,7 @@ struct field_modify_info modify_tcp[] = { int action_type = actions->type; const struct rte_flow_action *found_action = NULL; struct mlx5_flow_meter *fm = NULL; + uint32_t jump_group = 0; if (!mlx5_flow_os_action_supported(action_type)) return rte_flow_error_set(error, ENOTSUP, @@ -8560,9 +8948,13 @@ struct field_modify_info modify_tcp[] = { action_flags |= MLX5_FLOW_ACTION_DECAP; break; case RTE_FLOW_ACTION_TYPE_JUMP: - jump_data = action->conf; + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + if (dev_flow->external && jump_group < + MLX5_MAX_TABLES_EXTERNAL) + jump_group *= MLX5_FLOW_TABLE_FACTOR; ret = mlx5_flow_group_to_table(attr, dev_flow->external, - jump_data->group, + jump_group, !!priv->fdb_def_rule, &table, error); if (ret) @@ -8728,6 +9120,19 @@ struct field_modify_info modify_tcp[] = { return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + ret = flow_dv_translate_action_sample(dev, + actions, + dev_flow, attr, + sample_actions, + &sample_res, + error); + if (ret < 0) + return ret; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; if (mhdr_res->actions_num) { @@ -8754,6 +9159,21 @@ struct field_modify_info modify_tcp[] = { (flow_dv_counter_get_by_idx(dev, flow->counter, NULL))->action; } + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) { + ret = flow_dv_create_action_sample(dev, + dev_flow, attr, + &sample_res, + sample_actions, + error); + if (ret < 0) + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create sample action"); + dev_flow->dv.actions[sample_act_pos] = + dev_flow->dv.sample_res->verbs_action; + } break; default: break; @@ -9065,7 +9485,8 @@ struct field_modify_info modify_tcp[] = { dh->rix_hrxq = UINT32_MAX; dv->actions[n++] = drop_hrxq->action; } - } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) { + } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE && + !dv_h->rix_sample) { struct mlx5_hrxq *hrxq; uint32_t hrxq_idx; struct mlx5_flow_rss_desc *rss_desc = @@ -9197,18 +9618,18 @@ struct field_modify_info modify_tcp[] = { * * @param dev * Pointer to Ethernet device. - * @param handle - * Pointer to mlx5_flow_handle. + * @param encap_decap_idx + * Index of encap decap resource. * * @return * 1 while a reference on it exists, 0 when freed. */ static int flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, - struct mlx5_flow_handle *handle) + uint32_t encap_decap_idx) { struct mlx5_priv *priv = dev->data->dev_private; - uint32_t idx = handle->dvh.rix_encap_decap; + uint32_t idx = encap_decap_idx; struct mlx5_flow_dv_encap_decap_resource *cache_resource; cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_DECAP_ENCAP], @@ -9459,6 +9880,71 @@ struct field_modify_info modify_tcp[] = { } /** + * Release an sample resource. + * + * @param dev + * Pointer to Ethernet device. + * @param handle + * Pointer to mlx5_flow_handle. 
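+ * The handle carries the sample resource index (dvh.rix_sample) used to
+ * look up the cached resource.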
+ * + * @return + * 1 while a reference on it exists, 0 when freed. + */ +static int +flow_dv_sample_resource_release(struct rte_eth_dev *dev, + struct mlx5_flow_handle *handle) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t idx = handle->dvh.rix_sample; + struct mlx5_flow_dv_sample_resource *cache_resource; + + cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_SAMPLE], + idx); + if (!cache_resource) + return 0; + MLX5_ASSERT(cache_resource->verbs_action); + DRV_LOG(DEBUG, "sample resource %p: refcnt %d--", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { + if (cache_resource->verbs_action) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->verbs_action)); + if (cache_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { + if (cache_resource->default_miss) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->default_miss)); + } + if (cache_resource->normal_path_tbl) + flow_dv_tbl_resource_release(dev, + cache_resource->normal_path_tbl); + } + if (cache_resource->sample_idx.rix_hrxq && + !mlx5_hrxq_release(dev, + cache_resource->sample_idx.rix_hrxq)) + cache_resource->sample_idx.rix_hrxq = 0; + if (cache_resource->sample_idx.rix_tag && + !flow_dv_tag_release(dev, + cache_resource->sample_idx.rix_tag)) + cache_resource->sample_idx.rix_tag = 0; + if (cache_resource->sample_idx.cnt) { + flow_dv_counter_release(dev, + cache_resource->sample_idx.cnt); + cache_resource->sample_idx.cnt = 0; + } + if (!rte_atomic32_read(&cache_resource->refcnt)) { + ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_SAMPLE], + &priv->sh->sample_action_list, idx, + cache_resource, next); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_SAMPLE], idx); + DRV_LOG(DEBUG, "sample resource %p: removed", + (void *)cache_resource); + return 0; + } + return 1; +} + +/** * Remove the flow from the NIC but keeps it in memory. * Lock free, (mutex should be acquired by caller). 
* @@ -9537,8 +10023,11 @@ struct field_modify_info modify_tcp[] = { flow->dev_handles = dev_handle->next.next; if (dev_handle->dvh.matcher) flow_dv_matcher_release(dev, dev_handle); + if (dev_handle->dvh.rix_sample) + flow_dv_sample_resource_release(dev, dev_handle); if (dev_handle->dvh.rix_encap_decap) - flow_dv_encap_decap_resource_release(dev, dev_handle); + flow_dv_encap_decap_resource_release(dev, + dev_handle->dvh.rix_encap_decap); if (dev_handle->dvh.modify_hdr) flow_dv_modify_hdr_resource_release(dev, dev_handle); if (dev_handle->dvh.rix_push_vlan) From patchwork Wed Sep 9 06:48:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76998 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C7491A04B1; Wed, 9 Sep 2020 08:48:38 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D1CDB1BF8E; Wed, 9 Sep 2020 08:48:37 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 79ADC1BF8E for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG2029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:29 +0300 Message-Id: <1599634114-148013-8-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 07/12] app/testpmd: add testpmd command for sample action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add a new testpmd command 'set sample_actions' that supports the multiple sample actions list configuration by using the index: set sample_actions The examples for the sample flow use case and result as below: 1. set sample_actions 0 mark id 0x8 / queue index 2 / end .. pattern eth / end actions sample ratio 2 index 0 / jump group 2 ... This flow will result in all the matched ingress packets will be jumped to next flow table, and the each second packet will be marked and sent to queue 2 of the control application. 2. ...pattern eth / end actions sample ratio 2 / port_id id 2 ... The flow will result in all the matched ingress packets will be sent to port 2, and the each second packet will also be sent to e-switch manager vport. 
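For reference, a complete session with the new syntax could look like the sketch below (port, queue and group numbers are illustrative only):

    testpmd> set sample_actions 0 mark id 0x8 / queue index 2 / end
    testpmd> flow create 0 ingress pattern eth / end actions sample ratio 2 index 0 / jump group 2 / end

With this rule every second matched packet is marked and delivered to queue 2, while all matched packets continue to flow group 2.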
Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 285 ++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 276 insertions(+), 9 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 6263d30..27fa294 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -56,6 +56,8 @@ enum index { SET_RAW_ENCAP, SET_RAW_DECAP, SET_RAW_INDEX, + SET_SAMPLE_ACTIONS, + SET_SAMPLE_INDEX, /* Top-level command. */ FLOW, @@ -358,6 +360,10 @@ enum index { ACTION_SET_IPV6_DSCP_VALUE, ACTION_AGE, ACTION_AGE_TIMEOUT, + ACTION_SAMPLE, + ACTION_SAMPLE_RATIO, + ACTION_SAMPLE_INDEX, + ACTION_SAMPLE_INDEX_VALUE, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -493,6 +499,22 @@ struct action_nvgre_encap_data { struct mplsoudp_decap_conf mplsoudp_decap_conf; +#define ACTION_SAMPLE_ACTIONS_NUM 10 +#define RAW_SAMPLE_CONFS_MAX_NUM 8 +/** Storage for struct rte_flow_action_sample including external data. */ +struct action_sample_data { + struct rte_flow_action_sample conf; + uint32_t idx; +}; +/** Storage for struct rte_flow_action_sample. */ +struct raw_sample_conf { + struct rte_flow_action data[ACTION_SAMPLE_ACTIONS_NUM]; +}; +struct raw_sample_conf raw_sample_confs[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_mark sample_mark[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_queue sample_queue[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_count sample_count[RAW_SAMPLE_CONFS_MAX_NUM]; + /** Maximum number of subsequent tokens and arguments on the stack. */ #define CTX_STACK_SIZE 16 @@ -1189,6 +1211,7 @@ struct parse_action_priv { ACTION_SET_IPV4_DSCP, ACTION_SET_IPV6_DSCP, ACTION_AGE, + ACTION_SAMPLE, ZERO, }; @@ -1421,9 +1444,28 @@ struct parse_action_priv { ZERO, }; +static const enum index action_sample[] = { + ACTION_SAMPLE, + ACTION_SAMPLE_RATIO, + ACTION_SAMPLE_INDEX, + ACTION_NEXT, + ZERO, +}; + +static const enum index next_action_sample[] = { + ACTION_QUEUE, + ACTION_MARK, + ACTION_COUNT, + ACTION_NEXT, + ZERO, +}; + static int parse_set_raw_encap_decap(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_set_sample_action(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_set_init(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -1491,7 +1533,15 @@ static int parse_vc_action_raw_decap_index(struct context *, static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_sample(struct context *ctx, + const struct token *token, const char *str, + unsigned int len, void *buf, unsigned int size); +static int +parse_vc_action_sample_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); static int parse_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -1562,6 +1612,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_raw_index(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_set_sample_index(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. 
*/ static const struct token token_list[] = { @@ -3703,11 +3755,13 @@ static int comp_set_raw_index(struct context *, const struct token *, /* Top level command. */ [SET] = { .name = "set", - .help = "set raw encap/decap data", - .type = "set raw_encap|raw_decap ", + .help = "set raw encap/decap/sample data", + .type = "set raw_encap|raw_decap " + " or set sample_actions ", .next = NEXT(NEXT_ENTRY (SET_RAW_ENCAP, - SET_RAW_DECAP)), + SET_RAW_DECAP, + SET_SAMPLE_ACTIONS)), .call = parse_set_init, }, /* Sub-level commands. */ @@ -3738,6 +3792,23 @@ static int comp_set_raw_index(struct context *, const struct token *, .next = NEXT(next_item), .call = parse_port, }, + [SET_SAMPLE_INDEX] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "index of sample actions", + .next = NEXT(next_action_sample), + .call = parse_port, + }, + [SET_SAMPLE_ACTIONS] = { + .name = "sample_actions", + .help = "set sample actions list", + .next = NEXT(NEXT_ENTRY(SET_SAMPLE_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, RAW_SAMPLE_CONFS_MAX_NUM - 1)), + .call = parse_set_sample_action, + }, [ACTION_SET_TAG] = { .name = "set_tag", .help = "set tag", @@ -3841,6 +3912,37 @@ static int comp_set_raw_index(struct context *, const struct token *, .next = NEXT(action_age, NEXT_ENTRY(UNSIGNED)), .call = parse_vc_conf, }, + [ACTION_SAMPLE] = { + .name = "sample", + .help = "set a sample action", + .next = NEXT(action_sample), + .priv = PRIV_ACTION(SAMPLE, + sizeof(struct action_sample_data)), + .call = parse_vc_action_sample, + }, + [ACTION_SAMPLE_RATIO] = { + .name = "ratio", + .help = "flow sample ratio value", + .next = NEXT(action_sample, NEXT_ENTRY(UNSIGNED)), + .args = ARGS(ARGS_ENTRY_ARB + (offsetof(struct action_sample_data, conf) + + offsetof(struct rte_flow_action_sample, ratio), + sizeof(((struct rte_flow_action_sample *)0)-> + ratio))), + }, + [ACTION_SAMPLE_INDEX] = { + .name = "index", + .help = "the index of sample actions list", + .next = NEXT(NEXT_ENTRY(ACTION_SAMPLE_INDEX_VALUE)), + }, + [ACTION_SAMPLE_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_sample_index, + .comp = comp_set_sample_index, + }, }; /** Remove and return last entry from argument stack. */ @@ -5351,6 +5453,76 @@ static int comp_set_raw_index(struct context *, const struct token *, return len; } +static int +parse_vc_action_sample(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_sample_data *action_sample_data = NULL; + static struct rte_flow_action end_action = { + RTE_FLOW_ACTION_TYPE_END, 0 + }; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. 
*/ + action_sample_data = ctx->object; + action_sample_data->conf.actions = &end_action; + action->conf = &action_sample_data->conf; + return ret; +} + +static int +parse_vc_action_sample_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_sample_data *action_sample_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + if (ctx->curr != ACTION_SAMPLE_INDEX_VALUE) + return -1; + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_sample_data, idx), + sizeof(((struct action_sample_data *)0)->idx), + 0, RAW_SAMPLE_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_sample_data = ctx->object; + idx = action_sample_data->idx; + action_sample_data->conf.actions = raw_sample_confs[idx].data; + action->conf = &action_sample_data->conf; + return len; +} + /** Parse tokens for destroy command. */ static int parse_destroy(struct context *ctx, const struct token *token, @@ -6115,6 +6287,38 @@ static int comp_set_raw_index(struct context *, const struct token *, if (!out->command) return -1; out->command = ctx->curr; + /* For encap/decap we need is pattern */ + out->args.vc.pattern = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; +} + +/** Parse set command, initialize output buffer for subsequent tokens. */ +static int +parse_set_sample_action(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + /* Make sure buffer is large enough. */ + if (size < sizeof(*out)) + return -1; + ctx->objdata = 0; + ctx->objmask = NULL; + ctx->object = out; + if (!out->command) + return -1; + out->command = ctx->curr; + /* For sampler we need is actions */ + out->args.vc.actions = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); return len; } @@ -6151,11 +6355,8 @@ static int comp_set_raw_index(struct context *, const struct token *, return -1; out->command = ctx->curr; out->args.vc.data = (uint8_t *)out + size; - /* All we need is pattern */ - out->args.vc.pattern = - (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), - sizeof(double)); - ctx->object = out->args.vc.pattern; + ctx->object = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); } return len; } @@ -6306,6 +6507,24 @@ static int comp_set_raw_index(struct context *, const struct token *, return nb; } +/** Complete index number for set raw_encap/raw_decap commands. */ +static int +comp_set_sample_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + uint16_t idx = 0; + uint16_t nb = 0; + + RTE_SET_USED(ctx); + RTE_SET_USED(token); + for (idx = 0; idx < RAW_SAMPLE_CONFS_MAX_NUM; ++idx) { + if (buf && idx == ent) + return snprintf(buf, size, "%u", idx); + ++nb; + } + return nb; +} + /** Internal context. 
*/ static struct context cmd_flow_context; @@ -6751,7 +6970,53 @@ static int comp_set_raw_index(struct context *, const struct token *, return mask; } - +/** Dispatch parsed buffer to function calls. */ +static void +cmd_set_raw_parsed_sample(const struct buffer *in) +{ + uint32_t n = in->args.vc.actions_n; + uint32_t i = 0; + struct rte_flow_action *action = NULL; + struct rte_flow_action *data = NULL; + size_t size = 0; + uint16_t idx = in->port; /* We borrow port field as index */ + uint32_t max_size = sizeof(struct rte_flow_action) * + ACTION_SAMPLE_ACTIONS_NUM; + + RTE_ASSERT(in->command == SET_SAMPLE_ACTIONS); + data = (struct rte_flow_action *)&raw_sample_confs[idx].data; + memset(data, 0x00, max_size); + for (; i <= n - 1; i++) { + action = in->args.vc.actions + i; + if (action->type == RTE_FLOW_ACTION_TYPE_END) + break; + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_MARK: + size = sizeof(struct rte_flow_action_mark); + rte_memcpy(&sample_mark[idx], + (const void *)action->conf, size); + action->conf = &sample_mark[idx]; + break; + case RTE_FLOW_ACTION_TYPE_COUNT: + size = sizeof(struct rte_flow_action_count); + rte_memcpy(&sample_count[idx], + (const void *)action->conf, size); + action->conf = &sample_count[idx]; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + size = sizeof(struct rte_flow_action_queue); + rte_memcpy(&sample_queue[idx], + (const void *)action->conf, size); + action->conf = &sample_queue[idx]; + break; + default: + printf("Error - Not supported action\n"); + return; + } + rte_memcpy(data, action, sizeof(struct rte_flow_action)); + data++; + } +} /** Dispatch parsed buffer to function calls. */ static void @@ -6768,6 +7033,8 @@ static int comp_set_raw_index(struct context *, const struct token *, uint16_t proto = 0; uint16_t idx = in->port; /* We borrow port field as index */ + if (in->command == SET_SAMPLE_ACTIONS) + return cmd_set_raw_parsed_sample(in); RTE_ASSERT(in->command == SET_RAW_ENCAP || in->command == SET_RAW_DECAP); if (in->command == SET_RAW_ENCAP) { From patchwork Wed Sep 9 06:48:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77001 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id D4FA6A04B1; Wed, 9 Sep 2020 08:49:04 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 99EF31C0D9; Wed, 9 Sep 2020 08:48:42 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 85C691C0BD for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG3029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:30 +0300 Message-Id: <1599634114-148013-9-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: 
<1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 08/12] common/mlx5: add glue function for mirroring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Using extend Destination list in the FTE to implement the mirroring feature. Add two new rdma-core command for create group action and multiple destination action. Group action which is a group of sub-action will be used together for input to multiple destination action. Multiple destination action can be used for mirroring packet to different vport or Queue. Signed-off-by: Jiawei Wang --- drivers/common/mlx5/linux/meson.build | 2 ++ drivers/common/mlx5/linux/mlx5_glue.c | 37 +++++++++++++++++++++++++++++++++++ drivers/common/mlx5/linux/mlx5_glue.h | 5 +++++ 3 files changed, 44 insertions(+) diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index 1aa137d..d53b891 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -176,6 +176,8 @@ has_sym_args = [ 'mlx5dv_dr_action_create_flow_sampler'], [ 'HAVE_MLX5DV_DR_MEM_RECLAIM', 'infiniband/mlx5dv.h', 'mlx5dv_dr_domain_set_reclaim_device_memory'], + [ 'HAVE_MLX5_DR_CREATE_ACTION_MULTI_DEST', 'infiniband/mlx5dv.h', + 'mlx5dv_dr_action_create_multi_dest'], [ 'HAVE_DEVLINK', 'linux/devlink.h', 'DEVLINK_GENL_NAME' ], ] config = configuration_data() diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index 771a47c..e44d822 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -1076,6 +1076,39 @@ #endif } +static void * +mlx5_glue_dr_create_flow_action_group(size_t num_actions, void *actions[]) +{ +#ifdef HAVE_MLX5_DR_CREATE_ACTION_MULTI_DEST + return mlx5dv_dr_action_create_group(num_actions, + (struct mlx5dv_dr_action **)actions); +#else + (void)num_actions; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + +static void * +mlx5_glue_dr_create_flow_action_multi_dest(void *domain, + size_t num_actions, + void *actions[]) +{ +#ifdef HAVE_MLX5_DR_CREATE_ACTION_MULTI_DEST + return mlx5dv_dr_action_create_multi_dest + (domain, + num_actions, + (struct mlx5dv_dr_action **)actions); +#else + (void)domain; + (void)num_actions; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + static int mlx5_glue_devx_query_eqn(struct ibv_context *ctx, uint32_t cpus, uint32_t *eqn) @@ -1354,6 +1387,10 @@ .dr_reclaim_domain_memory = mlx5_glue_dr_reclaim_domain_memory, .dr_create_flow_action_sampler = mlx5_glue_dr_create_flow_action_sampler, + .dr_create_flow_action_group = + mlx5_glue_dr_create_flow_action_group, + .dr_create_flow_action_multi_dest = + mlx5_glue_dr_create_flow_action_multi_dest, .devx_query_eqn = mlx5_glue_devx_query_eqn, .devx_create_event_channel = mlx5_glue_devx_create_event_channel, .devx_destroy_event_channel = mlx5_glue_devx_destroy_event_channel, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index a77d239..d5232d1 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -322,6 +322,11 @@ struct mlx5_glue { void (*dv_free_pp)(struct mlx5dv_pp *pp); void *(*dr_create_flow_action_sampler) (struct 
mlx5dv_dr_flow_sampler_attr *attr); + void *(*dr_create_flow_action_group)(size_t num_actions, + void *actions[]); + void *(*dr_create_flow_action_multi_dest)(void *domain, + size_t num_actions, + void *actions[]); }; extern const struct mlx5_glue *mlx5_glue; From patchwork Wed Sep 9 06:48:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77005 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3D93AA04B1; Wed, 9 Sep 2020 08:49:45 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 918041C128; Wed, 9 Sep 2020 08:48:47 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 9D0CC1C0CC for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG4029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:31 +0300 Message-Id: <1599634114-148013-10-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 09/12] net/mlx5: update validation for mirroring flow X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Mirroring flow using sample action with ratio is 1, and it doesn't support jump action with the same one flow. Sample action must have destination actions like port or queue for mirroring, and don't need split function as sampling flow. 
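As an illustration only (not part of this patch), such a mirroring rule can be built through the rte_flow API with a sample ratio of 1 and a PORT_ID sub-action; the port numbers below are arbitrary and error handling is omitted:

    /* Sample sub-action list: duplicate every matched packet to port 2. */
    struct rte_flow_action_port_id mirror_port = { .id = 2 };
    struct rte_flow_action sample_sub[] = {
        { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &mirror_port },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    /* ratio == 1 means full mirroring, as required by this validation. */
    struct rte_flow_action_sample sample_conf = {
        .ratio = 1,
        .actions = sample_sub,
    };
    /* Original packets keep their own fate action (port 1 here). */
    struct rte_flow_action_port_id fwd_port = { .id = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample_conf },
        { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &fwd_port },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };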
Signed-off-by: Jiawei Wang --- drivers/net/mlx5/mlx5_flow.c | 26 +++++++++++++++-- drivers/net/mlx5/mlx5_flow_dv.c | 62 ++++++++++++++++++++++++++++++++++++++--- 2 files changed, 82 insertions(+), 6 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index ce03210..61104a8 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3936,16 +3936,38 @@ struct mlx5_flow_tunnel_info { */ static int flow_check_match_action(const struct rte_flow_action actions[], - enum rte_flow_action_type action) + const struct rte_flow_attr *attr, + enum rte_flow_action_type action) { + const struct rte_flow_action_sample *sample; int actions_n = 0; + int jump_flag = 0; int flag = 0; + uint32_t ratio = 0; + int sub_type = 0; for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { if (actions->type == action) flag = 1; + if (actions->type == RTE_FLOW_ACTION_TYPE_JUMP) + jump_flag = 1; + if (actions->type == RTE_FLOW_ACTION_TYPE_SAMPLE) { + sample = actions->conf; + ratio = sample->ratio; + sub_type = ((const struct rte_flow_action *) + (sample->actions))->type; + } actions_n++; } + if (flag && action == RTE_FLOW_ACTION_TYPE_SAMPLE && attr->transfer) { + if (ratio == 1) { + /* JUMP Action not support for Mirroring; + * Mirroring support multi-destination; + */ + if (!jump_flag && sub_type != RTE_FLOW_ACTION_TYPE_END) + flag = 0; + } + } /* Count RTE_FLOW_ACTION_TYPE_END. */ return flag ? actions_n + 1 : 0; } @@ -4460,7 +4482,7 @@ struct mlx5_flow_tunnel_info { int ret = 0; if (priv->sampler_en) - actions_n = flow_check_match_action(actions, + actions_n = flow_check_match_action(actions, attr, RTE_FLOW_ACTION_TYPE_SAMPLE); if (actions_n) { /* The prefix actions must includes sample, tag, end. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 2241e36..19ad9fc 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -4024,6 +4024,7 @@ struct field_modify_info modify_tcp[] = { const struct rte_flow_action_sample *sample = action->conf; const struct rte_flow_action *act; uint64_t sub_action_flags = 0; + uint16_t queue_index = 0xFFFF; int actions_n = 0; int ret; @@ -4063,6 +4064,8 @@ struct field_modify_info modify_tcp[] = { attr, error); if (ret < 0) return ret; + queue_index = ((const struct rte_flow_action_queue *) + (act->conf))->index; sub_action_flags |= MLX5_FLOW_ACTION_QUEUE; ++actions_n; break; @@ -4086,6 +4089,25 @@ struct field_modify_info modify_tcp[] = { sub_action_flags |= MLX5_FLOW_ACTION_COUNT; ++actions_n; break; + case RTE_FLOW_ACTION_TYPE_PORT_ID: + ret = flow_dv_validate_action_port_id(dev, + sub_action_flags, + act, + attr, + error); + if (ret) + return ret; + sub_action_flags |= MLX5_FLOW_ACTION_PORT_ID; + ++actions_n; + break; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + ret = flow_dv_validate_action_raw_encap_decap + (dev, NULL, act->conf, attr, &sub_action_flags, + &actions_n, error); + if (ret < 0) + return ret; + ++actions_n; + break; default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, @@ -4108,10 +4130,42 @@ struct field_modify_info modify_tcp[] = { "Sample Only support Ingress " "or E-Switch"); } else if (sample->actions->type != RTE_FLOW_ACTION_TYPE_END) { - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "E-Switch doesn't support any " - "optional action for sampling"); + if (sample->ratio > 1) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "E-Switch doesn't support " 
+ "any optional action " + "for sampling"); + if (sub_action_flags & MLX5_FLOW_ACTION_QUEUE) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "unsupported action QUEUE"); + if (!(sub_action_flags & MLX5_FLOW_ACTION_PORT_ID)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "E-Switch must has a dest " + "port for mirroring"); + } + /* Continue validation for Xcap actions.*/ + if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) && + (queue_index == 0xFFFF || + mlx5_rxq_get_type(dev, queue_index) != MLX5_RXQ_TYPE_HAIRPIN)) { + if ((sub_action_flags & MLX5_FLOW_XCAP_ACTIONS) == + MLX5_FLOW_XCAP_ACTIONS) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "encap and decap " + "combination aren't " + "supported"); + if (!attr->transfer && attr->ingress && (sub_action_flags & + MLX5_FLOW_ACTION_ENCAP)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "encap is not supported" + " for ingress traffic"); } return 0; } From patchwork Wed Sep 9 06:48:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77010 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id DA7B0A04B1; Wed, 9 Sep 2020 08:50:36 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B16561C1A0; Wed, 9 Sep 2020 08:48:52 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id AE7F91C0CF for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG5029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:32 +0300 Message-Id: <1599634114-148013-11-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 10/12] net/mlx5: update translate function for mirror X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Translate the attribute of sample action that include sample ratio and sub actions list. PMD will check the destination action number in current flow, if found multiple destination action, then create the new group rdma action for each destination, these group rdma actions will be used for creating the multiple destination DR action and add it into the flow. 
Signed-off-by: Jiawei Wang --- drivers/net/mlx5/mlx5.c | 11 + drivers/net/mlx5/mlx5.h | 2 + drivers/net/mlx5/mlx5_flow.h | 24 +++ drivers/net/mlx5/mlx5_flow_dv.c | 459 +++++++++++++++++++++++++++++++++++++--- 4 files changed, 467 insertions(+), 29 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 8d5f517..adb27de 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -252,6 +252,17 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .free = mlx5_free, .type = "mlx5_sample_ipool", }, + { + .size = sizeof(struct mlx5_flow_dv_multi_dest_resource), + .trunk_size = 64, + .grow_trunk = 3, + .grow_shift = 2, + .need_lock = 0, + .release_mem_en = 1, + .malloc = mlx5_malloc, + .free = mlx5_free, + .type = "mlx5_multi_dest_ipool", + }, #endif { .size = sizeof(struct mlx5_flow_meter), diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 9115391..4bd0cdb 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -40,6 +40,7 @@ enum mlx5_ipool_index { MLX5_IPOOL_PORT_ID, /* Pool for port id resource. */ MLX5_IPOOL_JUMP, /* Pool for jump resource. */ MLX5_IPOOL_SAMPLE, /* Pool for sample resource. */ + MLX5_IPOOL_MULTI_DEST, /* Pool for multi-dest resource. */ #endif MLX5_IPOOL_MTR, /* Pool for meter resource. */ MLX5_IPOOL_MCP, /* Pool for metadata resource. */ @@ -643,6 +644,7 @@ struct mlx5_dev_ctx_shared { uint32_t port_id_action_list; /* List of port ID actions. */ uint32_t push_vlan_action_list; /* List of push VLAN actions. */ uint32_t sample_action_list; /* List of sample actions. */ + uint32_t multi_dest_list; /* List of multi_dest. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ struct mlx5_flow_default_miss_resource default_miss; /* Default miss action resource structure. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 50e230f..33dd49b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -521,6 +521,8 @@ struct mlx5_flow_sub_actions_list { void *dr_queue_action; void *dr_tag_action; void *dr_cnt_action; + void *dr_port_id_action; + void *dr_encap_action; }; /* Sample sub-actions resource list. */ @@ -528,6 +530,8 @@ struct mlx5_flow_sub_actions_idx { uint32_t rix_hrxq; /**< Hash Rx queue object index. */ uint32_t rix_tag; /**< Index to the tag action. */ uint32_t cnt; + uint32_t rix_port_id_action; /**< Index to port ID action resource. */ + uint32_t rix_encap_decap; /**< Index to encap/decap resource. */ }; /* Sample action resource structure. */ @@ -547,6 +551,22 @@ struct mlx5_flow_dv_sample_resource { /**< Action resources. */ }; +#define MLX5_MAX_GROUP_NUM 2 + +/* Multi-dest action resource structure. */ +struct mlx5_flow_dv_multi_dest_resource { + ILIST_ENTRY(uint32_t)next; /**< Pointer to next element. */ + rte_atomic32_t refcnt; /**< Reference counter. */ + uint8_t ft_type; /** Flow Table Type */ + uint8_t num_of_group; /**< Number of group actions. */ + void *action; /**< Pointer to the rdma core action. */ + void *group_action[MLX5_MAX_GROUP_NUM]; + struct mlx5_flow_sub_actions_idx sample_idx[MLX5_MAX_GROUP_NUM]; + /**< Action index resources. */ + struct mlx5_flow_sub_actions_list sample_act[MLX5_MAX_GROUP_NUM]; + /**< Action resources. */ +}; + /* Verbs specification header. */ struct ibv_spec_header { enum ibv_flow_spec_type type; @@ -581,6 +601,8 @@ struct mlx5_flow_handle_dv { /**< Index to the tag action. */ uint32_t rix_sample; /**< Index to sample action resource in cache. 
*/ + uint32_t rix_multi_dest; + /**< Index to multi-dest resource in cache. */ } __rte_packed; /** Device flow handle structure: used both for creating & destroying. */ @@ -641,6 +663,8 @@ struct mlx5_flow_dv_workspace { /**< Holds the value that the packet is compared to. */ struct mlx5_flow_dv_sample_resource *sample_res; /**< Pointer to the sample action resource. */ + struct mlx5_flow_dv_multi_dest_resource *multi_dest_res; + /**< Pointer to the multi_dest resource. */ }; /* diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 19ad9fc..cf85780 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -80,6 +80,10 @@ flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, uint32_t encap_decap_idx); +static int +flow_dv_port_id_action_resource_release(struct rte_eth_dev *dev, + uint32_t port_id); + /** * Initialize flow attributes structure according to flow items' types. * @@ -8304,9 +8308,9 @@ struct field_modify_info modify_tcp[] = { */ static struct mlx5_hrxq * flow_dv_handle_rx_queue(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - struct mlx5_flow_rss_desc *rss_desc, - uint32_t *hrxq_idx) + struct mlx5_flow *dev_flow, + struct mlx5_flow_rss_desc *rss_desc, + uint32_t *hrxq_idx) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_handle *dh = dev_flow->handle; @@ -8314,19 +8318,19 @@ struct field_modify_info modify_tcp[] = { MLX5_ASSERT(rss_desc->queue_num); *hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num); + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, + rss_desc->queue_num); if (!*hrxq_idx) { *hrxq_idx = mlx5_hrxq_new (dev, rss_desc->key, - MLX5_RSS_HASH_KEY_LEN, - dev_flow->hash_fields, - rss_desc->queue, - rss_desc->queue_num, - !!(dh->layers & - MLX5_FLOW_LAYER_TUNNEL)); + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, + rss_desc->queue_num, + !!(dh->layers & + MLX5_FLOW_LAYER_TUNNEL)); if (!*hrxq_idx) return NULL; } @@ -8480,6 +8484,146 @@ struct field_modify_info modify_tcp[] = { } /** + * Find existing multi_dest resource or create and register a new one. + * + * @param[in, out] dev + * Pointer to rte_eth_dev structure. + * @param[in] attr + * Attributes of flow that includes this item. + * @param[in] resource + * Pointer to multi_dest resource. + * @param[in] sample_actions + * Pointer to sample path actions list. + * @param[in] normal_actions + * Pointer to normal path actions list. + * @parm[in, out] dev_flow + * Pointer to the dev_flow. + * @param[out] error + * pointer to error structure. + * + * @return + * 0 on success otherwise -errno and errno is set. + */ +static int +flow_dv_multi_dest_resource_register(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_flow_dv_multi_dest_resource *resource, + void **sample_actions, + void **normal_actions, + struct mlx5_flow *dev_flow, + struct rte_flow_error *error) +{ + struct mlx5_flow_dv_multi_dest_resource *cache_resource; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_flow_sub_actions_list *sample_act; + struct mlx5dv_dr_domain *domain; + uint32_t idx = 0; + + /* Lookup a matching resource from cache. 
*/ + ILIST_FOREACH(sh->ipool[MLX5_IPOOL_MULTI_DEST], + sh->multi_dest_list, + idx, cache_resource, next) { + if (resource->num_of_group == cache_resource->num_of_group && + resource->ft_type == cache_resource->ft_type && + !memcmp((void *)cache_resource->sample_act, + (void *)resource->sample_act, + (resource->num_of_group * + sizeof(struct mlx5_flow_sub_actions_list)))) { + DRV_LOG(DEBUG, "sample resource %p: refcnt %d++", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + rte_atomic32_inc(&cache_resource->refcnt); + dev_flow->handle->dvh.rix_multi_dest = idx; + dev_flow->dv.multi_dest_res = cache_resource; + return 0; + } + } + /* Register new sample resource. */ + cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_MULTI_DEST], + &dev_flow->handle->dvh.rix_multi_dest); + if (!cache_resource) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate resource memory"); + *cache_resource = *resource; + /* create the groups actions */ + for (idx = 0; idx < resource->num_of_group; idx++) { + sample_act = &resource->sample_act[idx]; + uint32_t actions_num = sample_act->actions_num; + void **dr_actions = (idx == 0) ? normal_actions : + sample_actions; + cache_resource->group_action[idx] = + mlx5_glue->dr_create_flow_action_group(actions_num, + dr_actions); + if (!cache_resource->group_action[idx]) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot create group " + "action"); + goto error; + } + } + if (attr->transfer) + domain = sh->fdb_domain; + else if (attr->ingress) + domain = sh->rx_domain; + else + domain = sh->tx_domain; + /* create a multi-dest actioin */ + cache_resource->action = + mlx5_glue->dr_create_flow_action_multi_dest + (domain, + cache_resource->num_of_group, + &cache_resource->group_action[0]); + if (!cache_resource->action) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot create multi-dest " + "action"); + goto error; + } + rte_atomic32_init(&cache_resource->refcnt); + rte_atomic32_inc(&cache_resource->refcnt); + ILIST_INSERT(sh->ipool[MLX5_IPOOL_MULTI_DEST], + &sh->multi_dest_list, + dev_flow->handle->dvh.rix_multi_dest, cache_resource, + next); + dev_flow->dv.multi_dest_res = cache_resource; + DRV_LOG(DEBUG, "new sample resource %p: refcnt %d++", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + return 0; +error: + for (idx = 0; idx < resource->num_of_group; idx++) { + struct mlx5_flow_sub_actions_idx *act_res = + &cache_resource->sample_idx[idx]; + if (cache_resource->group_action[idx]) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->group_action[idx])); + if (act_res->rix_hrxq && + !mlx5_hrxq_release(dev, + act_res->rix_hrxq)) + act_res->rix_hrxq = 0; + if (act_res->rix_encap_decap && + flow_dv_encap_decap_resource_release(dev, + act_res->rix_encap_decap)) + act_res->rix_encap_decap = 0; + if (act_res->rix_port_id_action && + !flow_dv_port_id_action_resource_release(dev, + act_res->rix_port_id_action)) + act_res->rix_port_id_action = 0; + } + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_MULTI_DEST], + dev_flow->handle->dvh.rix_multi_dest); + dev_flow->handle->dvh.rix_multi_dest = 0; + return -rte_errno; +} + +/** * Convert Sample action to DV specification. * * @param[in] dev @@ -8490,6 +8634,8 @@ struct field_modify_info modify_tcp[] = { * Pointer to the mlx5_flow. * @param[in] attr * Pointer to the flow attributes. 
+ * @param[in, out] num_of_dest + * Pointer to the num of destination. * @param[in, out] sample_actions * Pointer to sample actions list. * @param[in, out] res @@ -8505,6 +8651,7 @@ struct field_modify_info modify_tcp[] = { const struct rte_flow_action *action, struct mlx5_flow *dev_flow, const struct rte_flow_attr *attr, + uint32_t *num_of_dest, void **sample_actions, struct mlx5_flow_dv_sample_resource *res, struct rte_flow_error *error) @@ -8554,6 +8701,7 @@ struct field_modify_info modify_tcp[] = { sample_idx->rix_hrxq = hrxq_idx; sample_actions[sample_act->actions_num++] = hrxq->action; + (*num_of_dest)++; action_flags |= MLX5_FLOW_ACTION_QUEUE; if (action_flags & MLX5_FLOW_ACTION_MARK) dev_flow->handle->rix_hrxq = hrxq_idx; @@ -8566,6 +8714,7 @@ struct field_modify_info modify_tcp[] = { uint32_t tag_be = mlx5_flow_mark_set (((const struct rte_flow_action_mark *) (sub_actions->conf))->id); + dev_flow->handle->mark = 1; pre_rix = dev_flow->handle->dvh.rix_tag; /* Save the mark resource before sample */ @@ -8608,6 +8757,56 @@ struct field_modify_info modify_tcp[] = { action_flags |= MLX5_FLOW_ACTION_COUNT; break; } + case RTE_FLOW_ACTION_TYPE_PORT_ID: + { + struct mlx5_flow_dv_port_id_action_resource + port_id_resource; + uint32_t port_id = 0; + + memset(&port_id_resource, 0, sizeof(port_id_resource)); + /* Save the port id resource before sample */ + pre_rix = dev_flow->handle->rix_port_id_action; + pre_r = dev_flow->dv.port_id_action; + if (flow_dv_translate_action_port_id(dev, sub_actions, + &port_id, error)) + return -rte_errno; + port_id_resource.port_id = port_id; + if (flow_dv_port_id_action_resource_register + (dev, &port_id_resource, dev_flow, error)) + return -rte_errno; + sample_act->dr_port_id_action = + dev_flow->dv.port_id_action->action; + sample_idx->rix_port_id_action = + dev_flow->handle->rix_port_id_action; + sample_actions[sample_act->actions_num++] = + sample_act->dr_port_id_action; + /* Recover the port id resource after sample */ + dev_flow->dv.port_id_action = pre_r; + dev_flow->handle->rix_port_id_action = pre_rix; + (*num_of_dest)++; + action_flags |= MLX5_FLOW_ACTION_PORT_ID; + break; + } + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* Save the encap resource before sample */ + pre_rix = dev_flow->handle->dvh.rix_encap_decap; + pre_r = dev_flow->dv.encap_decap; + if (flow_dv_create_action_l2_encap(dev, sub_actions, + dev_flow, + attr->transfer, + error)) + return -rte_errno; + sample_act->dr_encap_action = + dev_flow->dv.encap_decap->action; + sample_idx->rix_encap_decap = + dev_flow->handle->dvh.rix_encap_decap; + sample_actions[sample_act->actions_num++] = + sample_act->dr_encap_action; + /* Recover the encap resource after sample */ + dev_flow->dv.encap_decap = pre_r; + dev_flow->handle->dvh.rix_encap_decap = pre_rix; + action_flags |= MLX5_FLOW_ACTION_ENCAP; + break; default: return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, @@ -8641,10 +8840,16 @@ struct field_modify_info modify_tcp[] = { * Pointer to the mlx5_flow. * @param[in] attr * Pointer to the flow attributes. + * @param[in] num_of_dest + * The num of destination. * @param[in, out] res * Pointer to sample resource. + * @param[in, out] mdest_res + * Pointer to multi-dest resource. * @param[in] sample_actions * Pointer to sample path actions list. + * @param[in] action_flags + * Holds the actions detected until now. * @param[out] error * Pointer to the error structure. 
* @@ -8653,17 +8858,84 @@ struct field_modify_info modify_tcp[] = { */ static int flow_dv_create_action_sample(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - struct mlx5_flow_dv_sample_resource *res, - void **sample_actions, - struct rte_flow_error *error) + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + uint32_t num_of_dest, + struct mlx5_flow_dv_sample_resource *res, + struct mlx5_flow_dv_multi_dest_resource *mdest_res, + void **sample_actions, + uint64_t action_flags, + struct rte_flow_error *error) { - if (flow_dv_sample_resource_register(dev, attr, res, dev_flow, - sample_actions, error)) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, "can't create sample action"); + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_sub_actions_list *sample_act = + &mdest_res->sample_act[0]; + void *normal_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *) + priv->rss_desc) + [!!priv->flow_nested_idx]; + uint32_t normal_idx = 0; + struct mlx5_hrxq *hrxq; + uint32_t hrxq_idx; + + if (num_of_dest > 1) { + if (sample_act->action_flags & MLX5_FLOW_ACTION_QUEUE) { + /* Handle QP action for mirroring */ + hrxq = flow_dv_handle_rx_queue(dev, dev_flow, + rss_desc, &hrxq_idx); + if (!hrxq) + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create rx queue"); + normal_actions[normal_idx++] = hrxq->action; + mdest_res->sample_idx[0].rix_hrxq = hrxq_idx; + sample_act->dr_queue_action = hrxq->action; + if (action_flags & MLX5_FLOW_ACTION_MARK) + dev_flow->handle->rix_hrxq = hrxq_idx; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + } + if (sample_act->action_flags & MLX5_FLOW_ACTION_ENCAP) { + normal_actions[normal_idx++] = + dev_flow->dv.encap_decap->action; + mdest_res->sample_idx[0].rix_encap_decap = + dev_flow->handle->dvh.rix_encap_decap; + sample_act->dr_encap_action = + dev_flow->dv.encap_decap->action; + } + if (sample_act->action_flags & MLX5_FLOW_ACTION_PORT_ID) { + normal_actions[normal_idx++] = + dev_flow->dv.port_id_action->action; + mdest_res->sample_idx[0].rix_port_id_action = + dev_flow->handle->rix_port_id_action; + sample_act->dr_port_id_action = + dev_flow->dv.port_id_action->action; + } + sample_act->actions_num = normal_idx; + /* update sample action resource into multi-dest index 1 */ + mdest_res->ft_type = res->ft_type; + memcpy(&mdest_res->sample_idx[1], &res->sample_idx, + sizeof(struct mlx5_flow_sub_actions_idx)); + memcpy(&mdest_res->sample_act[1], &res->sample_act, + sizeof(struct mlx5_flow_sub_actions_list)); + mdest_res->num_of_group = num_of_dest; + if (flow_dv_multi_dest_resource_register(dev, attr, mdest_res, + sample_actions, + normal_actions, + dev_flow, error)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "can't create sample " + "action"); + } else { + if (flow_dv_sample_resource_register(dev, attr, res, dev_flow, + sample_actions, error)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "can't create sample action"); + } return 0; } @@ -8731,15 +9003,21 @@ struct field_modify_info modify_tcp[] = { void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_multi_dest_resource mdest_res; struct mlx5_flow_dv_sample_resource sample_res; void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + struct 
mlx5_flow_sub_actions_list *sample_act; uint32_t sample_act_pos = UINT32_MAX; + uint32_t num_of_dest = 0; + int tmp_actions_n = 0; uint32_t table; int ret = 0; + memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_multi_dest_resource)); memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + sample_act = &mdest_res.sample_act[0]; ret = mlx5_flow_group_to_table(attr, dev_flow->external, attr->group, !!priv->fdb_def_rule, &table, error); if (ret) @@ -8786,6 +9064,8 @@ struct field_modify_info modify_tcp[] = { dev_flow->dv.port_id_action->action; action_flags |= MLX5_FLOW_ACTION_PORT_ID; dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; + sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; + num_of_dest++; break; case RTE_FLOW_ACTION_TYPE_FLAG: action_flags |= MLX5_FLOW_ACTION_FLAG; @@ -8871,6 +9151,8 @@ struct field_modify_info modify_tcp[] = { rss_desc->queue[0] = queue->index; action_flags |= MLX5_FLOW_ACTION_QUEUE; dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; + num_of_dest++; break; case RTE_FLOW_ACTION_TYPE_RSS: rss = actions->conf; @@ -8958,6 +9240,9 @@ struct field_modify_info modify_tcp[] = { dev_flow->dv.actions[actions_n++] = dev_flow->dv.encap_decap->action; action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: @@ -8987,6 +9272,9 @@ struct field_modify_info modify_tcp[] = { dev_flow->dv.encap_decap->action; } action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; case RTE_FLOW_ACTION_TYPE_RAW_DECAP: while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) @@ -9179,6 +9467,7 @@ struct field_modify_info modify_tcp[] = { ret = flow_dv_translate_action_sample(dev, actions, dev_flow, attr, + &num_of_dest, sample_actions, &sample_res, error); @@ -9186,6 +9475,11 @@ struct field_modify_info modify_tcp[] = { return ret; actions_n++; action_flags |= MLX5_FLOW_ACTION_SAMPLE; + /* put encap action into group if work with port id */ + if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && + (action_flags & MLX5_FLOW_ACTION_PORT_ID)) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; @@ -9209,15 +9503,19 @@ struct field_modify_info modify_tcp[] = { NULL, "cannot create counter" " object."); - dev_flow->dv.actions[actions_n++] = + dev_flow->dv.actions[actions_n] = (flow_dv_counter_get_by_idx(dev, flow->counter, NULL))->action; + actions_n++; } if (action_flags & MLX5_FLOW_ACTION_SAMPLE) { ret = flow_dv_create_action_sample(dev, dev_flow, attr, + num_of_dest, &sample_res, + &mdest_res, sample_actions, + action_flags, error); if (ret < 0) return rte_flow_error_set @@ -9225,8 +9523,13 @@ struct field_modify_info modify_tcp[] = { RTE_FLOW_ERROR_TYPE_ACTION, NULL, "cannot create sample action"); - dev_flow->dv.actions[sample_act_pos] = + if (num_of_dest > 1) { + dev_flow->dv.actions[sample_act_pos] = + dev_flow->dv.multi_dest_res->action; + } else { + dev_flow->dv.actions[sample_act_pos] = dev_flow->dv.sample_res->verbs_action; + } } break; default: @@ -9236,6 +9539,31 @@ struct field_modify_info modify_tcp[] = { modify_action_position == UINT32_MAX) modify_action_position = actions_n++; } + /* + * For 
multiple destination (sample action with ratio=1), the encap + * action and port id action will be combined into group action. + * So need remove the original these actions in the flow and only + * use the sample action instead of. + */ + if (num_of_dest > 1 && sample_act->dr_port_id_action) { + int i; + void *temp_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + + for (i = 0; i < actions_n; i++) { + if ((sample_act->dr_encap_action && + sample_act->dr_encap_action == + dev_flow->dv.actions[i]) || + (sample_act->dr_port_id_action && + sample_act->dr_port_id_action == + dev_flow->dv.actions[i])) + continue; + temp_actions[tmp_actions_n++] = dev_flow->dv.actions[i]; + } + memcpy((void *)dev_flow->dv.actions, + (void *)temp_actions, + tmp_actions_n * sizeof(void *)); + actions_n = tmp_actions_n; + } dev_flow->dv.actions_n = actions_n; dev_flow->act_flags = action_flags; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { @@ -9540,7 +9868,7 @@ struct field_modify_info modify_tcp[] = { dv->actions[n++] = drop_hrxq->action; } } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE && - !dv_h->rix_sample) { + !dv_h->rix_sample && !dv_h->rix_multi_dest) { struct mlx5_hrxq *hrxq; uint32_t hrxq_idx; struct mlx5_flow_rss_desc *rss_desc = @@ -9827,11 +10155,11 @@ struct field_modify_info modify_tcp[] = { */ static int flow_dv_port_id_action_resource_release(struct rte_eth_dev *dev, - struct mlx5_flow_handle *handle) + uint32_t port_id) { struct mlx5_priv *priv = dev->data->dev_private; struct mlx5_flow_dv_port_id_action_resource *cache_resource; - uint32_t idx = handle->rix_port_id_action; + uint32_t idx = port_id; cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_PORT_ID], idx); @@ -9921,7 +10249,8 @@ struct field_modify_info modify_tcp[] = { flow_dv_jump_tbl_resource_release(dev, handle); break; case MLX5_FLOW_FATE_PORT_ID: - flow_dv_port_id_action_resource_release(dev, handle); + flow_dv_port_id_action_resource_release(dev, + handle->rix_port_id_action); break; case MLX5_FLOW_FATE_DEFAULT_MISS: flow_dv_default_miss_resource_release(dev); @@ -9999,6 +10328,76 @@ struct field_modify_info modify_tcp[] = { } /** + * Release an multi-dest resource. + * + * @param dev + * Pointer to Ethernet device. + * @param handle + * Pointer to mlx5_flow_handle. + * + * @return + * 1 while a reference on it exists, 0 when freed. 
+ */ +static int +flow_dv_multi_dest_resource_release(struct rte_eth_dev *dev, + struct mlx5_flow_handle *handle) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_dv_multi_dest_resource *cache_resource; + struct mlx5_flow_sub_actions_idx *mdest_act_res; + uint32_t idx = handle->dvh.rix_multi_dest; + uint32_t i = 0; + + cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_MULTI_DEST], + idx); + if (!cache_resource) + return 0; + MLX5_ASSERT(cache_resource->action); + DRV_LOG(DEBUG, "multi-dest resource %p: refcnt %d--", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { + if (cache_resource->action) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->action)); + for (; i < cache_resource->num_of_group; i++) { + mdest_act_res = &cache_resource->sample_idx[i]; + if (cache_resource->group_action[i]) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->group_action[i])); + if (mdest_act_res->rix_hrxq) { + mlx5_hrxq_release(dev, + mdest_act_res->rix_hrxq); + mdest_act_res->rix_hrxq = 0; + } + if (mdest_act_res->rix_encap_decap) { + flow_dv_encap_decap_resource_release(dev, + mdest_act_res->rix_encap_decap); + mdest_act_res->rix_encap_decap = 0; + } + if (mdest_act_res->rix_port_id_action) { + flow_dv_port_id_action_resource_release(dev, + mdest_act_res->rix_port_id_action); + mdest_act_res->rix_port_id_action = 0; + } + if (mdest_act_res->rix_tag) { + flow_dv_tag_release(dev, + mdest_act_res->rix_tag); + mdest_act_res->rix_tag = 0; + } + } + ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_MULTI_DEST], + &priv->sh->multi_dest_list, idx, + cache_resource, next); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_MULTI_DEST], idx); + DRV_LOG(DEBUG, "multi-dest resource %p: removed", + (void *)cache_resource); + return 0; + } + return 1; +} + +/** * Remove the flow from the NIC but keeps it in memory. * Lock free, (mutex should be acquired by caller). 
* @@ -10079,6 +10478,8 @@ struct field_modify_info modify_tcp[] = { flow_dv_matcher_release(dev, dev_handle); if (dev_handle->dvh.rix_sample) flow_dv_sample_resource_release(dev, dev_handle); + if (dev_handle->dvh.rix_multi_dest) + flow_dv_multi_dest_resource_release(dev, dev_handle); if (dev_handle->dvh.rix_encap_decap) flow_dv_encap_decap_resource_release(dev, dev_handle->dvh.rix_encap_decap); From patchwork Wed Sep 9 06:48:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77008 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 133DFA04B1; Wed, 9 Sep 2020 08:50:14 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4147D1C18E; Wed, 9 Sep 2020 08:48:50 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id A19381C0CD for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG6029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:33 +0300 Message-Id: <1599634114-148013-12-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 11/12] app/testpmd: add port and encap support for sample action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use the sample action with ratio 1 for mirroring flows, and add support for setting a different port or encap action for the mirrored packets. Examples of testpmd commands: 1. set sample_actions 1 port_id id 1 / end flow create 0 ... pattern eth / end actions sample ratio 1 index 1 / port_id id 2... This flow sends all matched ingress packets to port 2 and also mirrors them to port 1. 2. set raw_encap 0 eth src.../ ipv4.../... set raw_encap 1 eth src.../ ipv4.../... set sample_actions 2 raw_encap index 0 / port_id id 0 / end flow create 0 ... pattern eth / end actions sample ratio 1 index 2 / raw_encap index 1 / port_id id 0... This flow encapsulates all matched egress packets and sends them to the wire, and also sends a mirrored copy to the wire with a different encapsulation.
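For reference, the flow from example 1 can also be built directly through the rte_flow C API. The snippet below is an illustrative sketch only, not part of this patch; the port ids, the transfer attribute and the function name are assumptions made for the example:

#include <rte_flow.h>

/*
 * Illustrative sketch (not part of the patch): mirror every matched
 * ingress packet to port 1 via a SAMPLE action with ratio 1, while the
 * original packet continues to port 2.
 */
static struct rte_flow *
create_sample_mirror_flow(uint16_t port, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Sub-action list applied only to the sampled (mirrored) copy. */
	struct rte_flow_action_port_id mirror_port = { .id = 1 };
	struct rte_flow_action sample_sub[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &mirror_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/* ratio = 1 duplicates 100% of the matched packets. */
	struct rte_flow_action_sample sample = {
		.ratio = 1,
		.actions = sample_sub,
	};
	/* Fate action for the original packet. */
	struct rte_flow_action_port_id orig_port = { .id = 2 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample },
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &orig_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port, &attr, pattern, actions, error);
}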
Signed-off-by: Jiawei Wang --- app/test-pmd/cmdline_flow.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 27fa294..1860657 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -514,6 +514,8 @@ struct raw_sample_conf { struct rte_flow_action_mark sample_mark[RAW_SAMPLE_CONFS_MAX_NUM]; struct rte_flow_action_queue sample_queue[RAW_SAMPLE_CONFS_MAX_NUM]; struct rte_flow_action_count sample_count[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_port_id sample_port_id[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_raw_encap sample_encap[RAW_SAMPLE_CONFS_MAX_NUM]; /** Maximum number of subsequent tokens and arguments on the stack. */ #define CTX_STACK_SIZE 16 @@ -1456,6 +1458,8 @@ struct parse_action_priv { ACTION_QUEUE, ACTION_MARK, ACTION_COUNT, + ACTION_PORT_ID, + ACTION_RAW_ENCAP, ACTION_NEXT, ZERO, }; @@ -7009,6 +7013,18 @@ static int comp_set_sample_index(struct context *, const struct token *, (const void *)action->conf, size); action->conf = &sample_queue[idx]; break; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + size = sizeof(struct rte_flow_action_raw_encap); + rte_memcpy(&sample_encap[idx], + (const void *)action->conf, size); + action->conf = &sample_encap[idx]; + break; + case RTE_FLOW_ACTION_TYPE_PORT_ID: + size = sizeof(struct rte_flow_action_port_id); + rte_memcpy(&sample_port_id[idx], + (const void *)action->conf, size); + action->conf = &sample_port_id[idx]; + break; default: printf("Error - Not supported action\n"); return; From patchwork Wed Sep 9 06:48:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 77009 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 97FCEA04B1; Wed, 9 Sep 2020 08:50:24 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5B2FF1C197; Wed, 9 Sep 2020 08:48:51 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id A6ACC1C0CE for ; Wed, 9 Sep 2020 08:48:36 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 9 Sep 2020 09:48:34 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 0896mYG7029471; Wed, 9 Sep 2020 09:48:34 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com, thomas@monjalon.net, ferruh.yigit@intel.com, marko.kovacevic@intel.com, arybchenko@solarflare.com Cc: dev@dpdk.org, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, asafp@nvidia.com Date: Wed, 9 Sep 2020 09:48:34 +0300 Message-Id: <1599634114-148013-13-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> References: <1598540492-406340-1-git-send-email-jiaweiw@nvidia.com> <1599634114-148013-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v6 12/12] net/mlx5: support the native port id actions for mirroring X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: 
"dev" This patch to support the mirroring with native port id action even without the sample action in the flow. The flow would be like: flow create 0 ingress transfer pattern eth / end actions port_id id 1 / port_id id 2 / end Ingress packet match from uplink, send to VF1 but also mirror to VF2. While PMD parse a e-switch rule and found the multiple port actions, then will create a new multiple destination DR action and add it into the flow instead of two port id action. Signed-off-by: Jiawei Wang --- drivers/net/mlx5/mlx5_flow_dv.c | 25 +++++++++++++++++++++++-- 1 file changed, 23 insertions(+), 2 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index cf85780..8c195d4 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3738,7 +3738,8 @@ struct field_modify_info modify_tcp[] = { "port id action parameters must be" " specified"); if (action_flags & (MLX5_FLOW_FATE_ACTIONS | - MLX5_FLOW_FATE_ESWITCH_ACTIONS)) + (MLX5_FLOW_FATE_ESWITCH_ACTIONS & + ~MLX5_FLOW_ACTION_PORT_ID))) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, NULL, "can have only one fate actions in" @@ -9007,6 +9008,7 @@ struct field_modify_info modify_tcp[] = { struct mlx5_flow_dv_sample_resource sample_res; void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; struct mlx5_flow_sub_actions_list *sample_act; + struct mlx5_flow_sub_actions_list *mirror_act; uint32_t sample_act_pos = UINT32_MAX; uint32_t num_of_dest = 0; int tmp_actions_n = 0; @@ -9056,7 +9058,21 @@ struct field_modify_info modify_tcp[] = { &port_id, error)) return -rte_errno; port_id_resource.port_id = port_id; - MLX5_ASSERT(!handle->rix_port_id_action); + if (handle->rix_port_id_action) { + // update for native mirror port id action + struct mlx5_flow_sub_actions_idx *mirror_idx = + &sample_res.sample_idx; + mirror_act = &sample_res.sample_act; + mirror_act->dr_port_id_action = + dev_flow->dv.port_id_action->action; + mirror_idx->rix_port_id_action = + dev_flow->handle->rix_port_id_action; + sample_actions[mirror_act->actions_num++] = + mirror_act->dr_port_id_action; + mirror_act->action_flags |= + MLX5_FLOW_ACTION_PORT_ID; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + } if (flow_dv_port_id_action_resource_register (dev, &port_id_resource, dev_flow, error)) return -rte_errno; @@ -9523,6 +9539,8 @@ struct field_modify_info modify_tcp[] = { RTE_FLOW_ERROR_TYPE_ACTION, NULL, "cannot create sample action"); + if (sample_act_pos == UINT32_MAX) + sample_act_pos = actions_n++; if (num_of_dest > 1) { dev_flow->dv.actions[sample_act_pos] = dev_flow->dv.multi_dest_res->action; @@ -9555,6 +9573,9 @@ struct field_modify_info modify_tcp[] = { dev_flow->dv.actions[i]) || (sample_act->dr_port_id_action && sample_act->dr_port_id_action == + dev_flow->dv.actions[i]) || + (mirror_act->dr_port_id_action && + mirror_act->dr_port_id_action == dev_flow->dv.actions[i])) continue; temp_actions[tmp_actions_n++] = dev_flow->dv.actions[i];