From patchwork Wed Aug 26 16:01:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76041 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 62519A04B1; Wed, 26 Aug 2020 18:02:26 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B82881C0B0; Wed, 26 Aug 2020 18:02:14 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id F14541BFE3 for ; Wed, 26 Aug 2020 18:02:10 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 26 Aug 2020 19:02:05 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 07QG253L017365; Wed, 26 Aug 2020 19:02:05 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, jiaweiw@nvidia.com Date: Wed, 26 Aug 2020 19:01:59 +0300 Message-Id: <1598457725-396788-2-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> References: <1594057868-18724-1-git-send-email-jiaweiw@mellanox.com> <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v4 1/7] ethdev: introduce sample action for rte flow X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" When using full offload, all traffic will be handled by the HW, and directed to the requested VF or wire, the control application loses visibility on the traffic. So there's a need for an action that will enable the control application some visibility. The solution is introduced a new action that will sample the incoming traffic and send a duplicated traffic with the specified ratio to the application, while the original packet will continue to the target destination. The packets sampled equals is '1/ratio', if the ratio value be set to 1, means that the packets would be completely mirrored. The sample packet can be assigned with different set of actions from the original packet. In order to support the sample packet in rte_flow, new rte_flow action definition RTE_FLOW_ACTION_TYPE_SAMPLE and structure rte_flow_action_sample will be introduced. Signed-off-by: Jiawei Wang Acked-by: Ori Kam Acked-by: Jerin Jacob Acked-by: Andrew Rybchenko --- doc/guides/prog_guide/rte_flow.rst | 25 +++++++++++++++++++++++++ lib/librte_ethdev/rte_flow.c | 1 + lib/librte_ethdev/rte_flow.h | 30 ++++++++++++++++++++++++++++++ 3 files changed, 56 insertions(+) diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst index 3e5cd1e..f8f3f51 100644 --- a/doc/guides/prog_guide/rte_flow.rst +++ b/doc/guides/prog_guide/rte_flow.rst @@ -2653,6 +2653,31 @@ timeout passed without any matching on the flow. | ``context`` | user input flow context | +--------------+---------------------------------+ +Action: ``SAMPLE`` +^^^^^^^^^^^^^^^^^^ + +Adds a sample action to a matched flow. 
+ +The matching packets will be duplicated with the specified ``ratio`` and +applied with own set of actions with a fate action, the packets sampled +equals is '1/ratio'. All the packets continue to the target destination. + +When the ``ratio`` is set to 1 then the packets will be 100% mirrored. +``actions`` represent the different set of actions for the sampled or mirrored +packets, and must have a fate action. + +.. _table_rte_flow_action_sample: + +.. table:: SAMPLE + + +--------------+---------------------------------+ + | Field | Value | + +==============+=================================+ + | ``ratio`` | 32 bits sample ratio value | + +--------------+---------------------------------+ + | ``actions`` | sub-action list for sampling | + +--------------+---------------------------------+ + Negative types ~~~~~~~~~~~~~~ diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c index f8fdd68..035671d 100644 --- a/lib/librte_ethdev/rte_flow.c +++ b/lib/librte_ethdev/rte_flow.c @@ -174,6 +174,7 @@ struct rte_flow_desc_data { MK_FLOW_ACTION(SET_IPV4_DSCP, sizeof(struct rte_flow_action_set_dscp)), MK_FLOW_ACTION(SET_IPV6_DSCP, sizeof(struct rte_flow_action_set_dscp)), MK_FLOW_ACTION(AGE, sizeof(struct rte_flow_action_age)), + MK_FLOW_ACTION(SAMPLE, sizeof(struct rte_flow_action_sample)), }; int diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h index da8bfa5..fa70d40 100644 --- a/lib/librte_ethdev/rte_flow.h +++ b/lib/librte_ethdev/rte_flow.h @@ -2132,6 +2132,14 @@ enum rte_flow_action_type { * see enum RTE_ETH_EVENT_FLOW_AGED */ RTE_FLOW_ACTION_TYPE_AGE, + + /** + * The matching packets will be duplicated with specified ratio and + * applied with own set of actions with a fate action. + * + * See struct rte_flow_action_sample. + */ + RTE_FLOW_ACTION_TYPE_SAMPLE, }; /** @@ -2742,6 +2750,28 @@ struct rte_flow_action { struct rte_flow; /** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice + * + * RTE_FLOW_ACTION_TYPE_SAMPLE + * + * Adds a sample action to a matched flow. + * + * The matching packets will be duplicated with specified ratio and applied + * with own set of actions with a fate action, the sampled packet could be + * redirected to queue or port. All the packets continue processing on the + * default flow path. + * + * When the sample ratio is set to 1 then the packets will be 100% mirrored. + * Additional action list be supported to add for sampled or mirrored packets. + */ +struct rte_flow_action_sample { + uint32_t ratio; /**< packets sampled equals to '1/ratio'. */ + const struct rte_flow_action *actions; + /**< sub-action list specific for the sampling hit cases. */ +}; + +/** * Verbose error types. 
* * Most of them provide the type of the object referenced by struct From patchwork Wed Aug 26 16:02:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76043 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 992D2A04B1; Wed, 26 Aug 2020 18:02:49 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C54031C0C3; Wed, 26 Aug 2020 18:02:17 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 054381C00D for ; Wed, 26 Aug 2020 18:02:10 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 26 Aug 2020 19:02:05 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 07QG253M017365; Wed, 26 Aug 2020 19:02:05 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, jiaweiw@nvidia.com Date: Wed, 26 Aug 2020 19:02:00 +0300 Message-Id: <1598457725-396788-3-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> References: <1594057868-18724-1-git-send-email-jiaweiw@mellanox.com> <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v4 2/7] common/mlx5: glue for sample action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" rdma-core introduces a new DR sample action. Add the rdma-core commands in the glue layer to create this action. The sample action is used to create the sample object that implements the sampling/mirroring function.
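
For context, a minimal sketch of how a PMD caller is expected to drive this glue entry point. This is illustrative only and not part of the patch: the helper name create_sampler_action is invented here, the attribute fields match the struct mlx5dv_dr_flow_sampler_attr added below, and the real call site appears later in the series in flow_dv_sample_resource_register(). It assumes the declarations from drivers/common/mlx5/linux/mlx5_glue.h are on the include path.

    /*
     * Illustrative sketch. When rdma-core lacks
     * HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE, the glue wrapper below
     * returns NULL and sets errno to ENOTSUP, so callers must check.
     */
    #include <mlx5_glue.h>

    static void *
    create_sampler_action(void *default_next_tbl, uint32_t ratio,
                          struct mlx5dv_dr_action **sub_actions,
                          size_t num_sub_actions)
    {
        struct mlx5dv_dr_flow_sampler_attr attr = {
            .sample_ratio = ratio,                   /* 1/ratio of packets sampled */
            .default_next_table = default_next_tbl,  /* path for all (original) packets */
            .num_sample_actions = num_sub_actions,
            .sample_actions = sub_actions,           /* applied to sampled copies */
        };

        return mlx5_glue->dr_create_flow_action_sampler(&attr);
    }
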
Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- drivers/common/mlx5/Makefile | 5 +++++ drivers/common/mlx5/linux/meson.build | 2 ++ drivers/common/mlx5/linux/mlx5_glue.c | 15 +++++++++++++++ drivers/common/mlx5/linux/mlx5_glue.h | 12 ++++++++++++ 4 files changed, 34 insertions(+) diff --git a/drivers/common/mlx5/Makefile b/drivers/common/mlx5/Makefile index 4edd541..24b932c 100644 --- a/drivers/common/mlx5/Makefile +++ b/drivers/common/mlx5/Makefile @@ -205,6 +205,11 @@ mlx5_autoconf.h.new: $(RTE_SDK)/buildtools/auto-config-h.sh func mlx5dv_dump_dr_domain \ $(AUTOCONF_OUTPUT) $Q sh -- '$<' '$@' \ + HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE \ + infiniband/mlx5dv.h \ + func mlx5dv_dr_action_create_flow_sampler \ + $(AUTOCONF_OUTPUT) + $Q sh -- '$<' '$@' \ HAVE_MLX5DV_MMAP_GET_NC_PAGES_CMD \ infiniband/mlx5dv.h \ enum MLX5_MMAP_GET_NC_PAGES_CMD \ diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index 48e8ad6..1aa137d 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -172,6 +172,8 @@ has_sym_args = [ 'RDMA_NLDEV_ATTR_NDEV_INDEX' ], [ 'HAVE_MLX5_DR_FLOW_DUMP', 'infiniband/mlx5dv.h', 'mlx5dv_dump_dr_domain'], + [ 'HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE', 'infiniband/mlx5dv.h', + 'mlx5dv_dr_action_create_flow_sampler'], [ 'HAVE_MLX5DV_DR_MEM_RECLAIM', 'infiniband/mlx5dv.h', 'mlx5dv_dr_domain_set_reclaim_device_memory'], [ 'HAVE_DEVLINK', 'linux/devlink.h', 'DEVLINK_GENL_NAME' ], diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index fcf03e8..771a47c 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -1063,6 +1063,19 @@ #endif } +static void * +mlx5_glue_dr_create_flow_action_sampler( + struct mlx5dv_dr_flow_sampler_attr *attr) +{ +#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE + return mlx5dv_dr_action_create_flow_sampler(attr); +#else + (void)attr; + errno = ENOTSUP; + return NULL; +#endif +} + static int mlx5_glue_devx_query_eqn(struct ibv_context *ctx, uint32_t cpus, uint32_t *eqn) @@ -1339,6 +1352,8 @@ .devx_port_query = mlx5_glue_devx_port_query, .dr_dump_domain = mlx5_glue_dr_dump_domain, .dr_reclaim_domain_memory = mlx5_glue_dr_reclaim_domain_memory, + .dr_create_flow_action_sampler = + mlx5_glue_dr_create_flow_action_sampler, .devx_query_eqn = mlx5_glue_devx_query_eqn, .devx_create_event_channel = mlx5_glue_devx_create_event_channel, .devx_destroy_event_channel = mlx5_glue_devx_destroy_event_channel, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index 734ace2..85b43b9 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -77,6 +77,7 @@ #ifndef HAVE_MLX5DV_DR enum mlx5dv_dr_domain_type { unused, }; struct mlx5dv_dr_domain; +struct mlx5dv_dr_action; #endif #ifndef HAVE_MLX5DV_DR_DEVX_PORT @@ -87,6 +88,15 @@ struct mlx5dv_dr_flow_meter_attr; #endif +#ifndef HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE +struct mlx5dv_dr_flow_sampler_attr { + uint32_t sample_ratio; + void *default_next_table; + size_t num_sample_actions; + struct mlx5dv_dr_action **sample_actions; +}; +#endif + #ifndef HAVE_IBV_DEVX_EVENT struct mlx5dv_devx_event_channel { int fd; }; struct mlx5dv_devx_async_event_hdr; @@ -309,6 +319,8 @@ struct mlx5_glue { const void *pp_context, uint32_t flags); void (*dv_free_pp)(struct mlx5dv_pp *pp); + void *(*dr_create_flow_action_sampler) + (struct mlx5dv_dr_flow_sampler_attr *attr); }; extern const struct mlx5_glue *mlx5_glue; From 
patchwork Wed Aug 26 16:02:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76044 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 32D68A04B1; Wed, 26 Aug 2020 18:03:01 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id EA7DE1C0CC; Wed, 26 Aug 2020 18:02:18 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 0CC231C0AE for ; Wed, 26 Aug 2020 18:02:10 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 26 Aug 2020 19:02:05 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 07QG253N017365; Wed, 26 Aug 2020 19:02:05 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, jiaweiw@nvidia.com Date: Wed, 26 Aug 2020 19:02:01 +0300 Message-Id: <1598457725-396788-4-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> References: <1594057868-18724-1-git-send-email-jiaweiw@mellanox.com> <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v4 3/7] common/mlx5: query sampler object capability via DevX X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Update function mlx5_devx_cmd_query_hca_attr() to add the NIC Flow Table attributes query, then get the log_max_flow_sampler_num from flow table properties. Add the related structs definition in mlx5_prm.h. Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- drivers/common/mlx5/mlx5_devx_cmds.c | 27 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 1 + drivers/common/mlx5/mlx5_prm.h | 51 ++++++++++++++++++++++++++++++++++++ 3 files changed, 79 insertions(+) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 7c81ae1..fd4e3f2 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -751,6 +751,33 @@ struct mlx5_devx_obj * if (!attr->eth_net_offloads) return 0; + /* Query Flow Sampler Capability From FLow Table Properties Layout. 
*/ + memset(in, 0, sizeof(in)); + memset(out, 0, sizeof(out)); + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + rc = mlx5_glue->devx_general_cmd(ctx, + in, sizeof(in), + out, sizeof(out)); + if (rc) + goto error; + status = MLX5_GET(query_hca_cap_out, out, status); + syndrome = MLX5_GET(query_hca_cap_out, out, syndrome); + if (status) { + DRV_LOG(DEBUG, "Failed to query devx HCA capabilities, " + "status %x, syndrome = %x", + status, syndrome); + attr->log_max_ft_sampler_num = 0; + return -1; + } + hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability); + attr->log_max_ft_sampler_num = + MLX5_GET(flow_table_nic_cap, + hcattr, flow_table_properties.log_max_ft_sampler_num); + /* Query HCA offloads for Ethernet protocol. */ memset(in, 0, sizeof(in)); memset(out, 0, sizeof(out)); diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index 1c84cea..cfa7a7b 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -102,6 +102,7 @@ struct mlx5_hca_attr { uint32_t scatter_fcs_w_decap_disable:1; uint32_t regex:1; uint32_t regexp_num_of_engines; + uint32_t log_max_ft_sampler_num:8; struct mlx5_hca_qos_attr qos; struct mlx5_hca_vdpa_attr vdpa; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index e0ebe12..4a624e1 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1036,6 +1036,7 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE = 0x0 << 1, MLX5_GET_HCA_CAP_OP_MOD_ETHERNET_OFFLOAD_CAPS = 0x1 << 1, MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, }; @@ -1470,12 +1471,62 @@ struct mlx5_ifc_virtio_emulation_cap_bits { u8 reserved_at_1c0[0x620]; }; +struct mlx5_ifc_flow_table_prop_layout_bits { + u8 ft_support[0x1]; + u8 flow_tag[0x1]; + u8 flow_counter[0x1]; + u8 flow_modify_en[0x1]; + u8 modify_root[0x1]; + u8 identified_miss_table[0x1]; + u8 flow_table_modify[0x1]; + u8 reformat[0x1]; + u8 decap[0x1]; + u8 reset_root_to_default[0x1]; + u8 pop_vlan[0x1]; + u8 push_vlan[0x1]; + u8 fpga_vendor_acceleration[0x1]; + u8 pop_vlan_2[0x1]; + u8 push_vlan_2[0x1]; + u8 reformat_and_vlan_action[0x1]; + u8 modify_and_vlan_action[0x1]; + u8 sw_owner[0x1]; + u8 reformat_l3_tunnel_to_l2[0x1]; + u8 reformat_l2_to_l3_tunnel[0x1]; + u8 reformat_and_modify_action[0x1]; + u8 reserved_at_15[0x9]; + u8 sw_owner_v2[0x1]; + u8 reserved_at_1f[0x1]; + u8 reserved_at_20[0x2]; + u8 log_max_ft_size[0x6]; + u8 log_max_modify_header_context[0x8]; + u8 max_modify_header_actions[0x8]; + u8 max_ft_level[0x8]; + u8 reserved_at_40[0x8]; + u8 log_max_ft_sampler_num[8]; + u8 metadata_reg_b_width[0x8]; + u8 metadata_reg_a_width[0x8]; + u8 reserved_at_60[0x18]; + u8 log_max_ft_num[0x8]; + u8 reserved_at_80[0x10]; + u8 log_max_flow_counter[0x8]; + u8 log_max_destination[0x8]; + u8 reserved_at_a0[0x18]; + u8 log_max_flow[0x8]; + u8 reserved_at_c0[0x140]; +}; + +struct mlx5_ifc_flow_table_nic_cap_bits { + u8 reserved_at_0[0x200]; + struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties; +}; + union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap; struct mlx5_ifc_per_protocol_networking_offload_caps_bits per_protocol_networking_offload_caps; struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; + 
struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; u8 reserved_at_0[0x8000]; }; From patchwork Wed Aug 26 16:02:02 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76045 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C9ECDA04B1; Wed, 26 Aug 2020 18:03:10 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C735E1C0D4; Wed, 26 Aug 2020 18:02:19 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 147321C0B0 for ; Wed, 26 Aug 2020 18:02:10 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 26 Aug 2020 19:02:05 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 07QG253O017365; Wed, 26 Aug 2020 19:02:05 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, jiaweiw@nvidia.com Date: Wed, 26 Aug 2020 19:02:02 +0300 Message-Id: <1598457725-396788-5-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> References: <1594057868-18724-1-git-send-email-jiaweiw@mellanox.com> <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v4 4/7] net/mlx5: add the validate sample action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add sample action validate function. For Sample flow support NIC-RX and FDB domain, must include an action of a dest TIR in NIC_RX. Only NIC_RX support with addition optional actions. FDB doesn't support any optional action, the sampled packets is always goes to e-switch manager port. Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- drivers/net/mlx5/linux/mlx5_os.c | 14 +++++ drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 1 + drivers/net/mlx5/mlx5_flow_dv.c | 133 +++++++++++++++++++++++++++++++++++++++ 4 files changed, 149 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index db955ae..5b663a1 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1015,6 +1015,20 @@ } } #endif +#if defined(HAVE_MLX5DV_DR) && defined(HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE) + if (config->hca_attr.log_max_ft_sampler_num > 0 && + config->dv_flow_en) { + priv->sampler_en = 1; + DRV_LOG(DEBUG, "The Sampler enabled!\n"); + } else { + priv->sampler_en = 0; + if (!config->hca_attr.log_max_ft_sampler_num) + DRV_LOG(WARNING, "No available register for" + " Sampler."); + else + DRV_LOG(DEBUG, "DV flow is not supported!\n"); + } +#endif } if (config->tx_pp) { DRV_LOG(DEBUG, "Timestamp counter frequency %u kHz", diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 1880a82..ae0c7cc 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -698,6 +698,7 @@ struct mlx5_priv { unsigned int counter_fallback:1; /* Use counter fallback management. 
*/ unsigned int mtr_en:1; /* Whether support meter. */ unsigned int mtr_reg_share:1; /* Whether support meter REG_C share. */ + unsigned int sampler_en:1; /* Whether support sampler. */ uint16_t domain_id; /* Switch domain identifier. */ uint16_t vport_id; /* Associated VF vport index (if any). */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 92301e4..41404de 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -196,6 +196,7 @@ enum mlx5_feature_name { #define MLX5_FLOW_ACTION_SET_IPV6_DSCP (1ull << 33) #define MLX5_FLOW_ACTION_AGE (1ull << 34) #define MLX5_FLOW_ACTION_DEFAULT_MISS (1ull << 35) +#define MLX5_FLOW_ACTION_SAMPLE (1ull << 36) #define MLX5_FLOW_FATE_ACTIONS \ (MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index dd35959..a8db8ab 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -3992,6 +3992,130 @@ struct field_modify_info modify_tcp[] = { } /** + * Validate the sample action. + * + * @param[in] action_flags + * Holds the actions detected until now. + * @param[in] action + * Pointer to the sample action. + * @param[in] dev + * Pointer to the Ethernet device structure. + * @param[in] attr + * Attributes of flow that includes this action. + * @param[out] error + * Pointer to error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_validate_action_sample(uint64_t action_flags, + const struct rte_flow_action *action, + struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_config *dev_conf = &priv->config; + const struct rte_flow_action_sample *sample = action->conf; + const struct rte_flow_action *act = sample->actions; + uint64_t sub_action_flags = 0; + int actions_n = 0; + int ret; + + if (!attr->group) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, + NULL, "root table is not supported"); + if (!priv->config.devx || !priv->sampler_en) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "sample action not supported"); + if (!(action->conf)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "configuration cannot be null"); + if (sample->ratio == 0) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "ratio value start from 1"); + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Duplicate sample actions set"); + if (action_flags & MLX5_FLOW_ACTION_METER) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, action, + "wrong action order, meter should " + "be after sample action"); + for (; act->type != RTE_FLOW_ACTION_TYPE_END; act++) { + if (actions_n == MLX5_DV_MAX_NUMBER_OF_ACTIONS) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "too many actions"); + switch (act->type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + ret = mlx5_flow_validate_action_queue(act, + sub_action_flags, + dev, + attr, error); + if (ret < 0) + return ret; + sub_action_flags |= MLX5_FLOW_ACTION_QUEUE; + ++actions_n; + break; + case RTE_FLOW_ACTION_TYPE_MARK: + ret = flow_dv_validate_action_mark(dev, act, + sub_action_flags, + 
attr, error); + if (ret < 0) + return ret; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) + sub_action_flags |= MLX5_FLOW_ACTION_MARK | + MLX5_FLOW_ACTION_MARK_EXT; + else + sub_action_flags |= MLX5_FLOW_ACTION_MARK; + ++actions_n; + break; + case RTE_FLOW_ACTION_TYPE_COUNT: + ret = flow_dv_validate_action_count(dev, error); + if (ret < 0) + return ret; + sub_action_flags |= MLX5_FLOW_ACTION_COUNT; + ++actions_n; + break; + default: + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "Doesn't support optional " + "action"); + } + } + if (attr->ingress && !attr->transfer) { + if (!(sub_action_flags & MLX5_FLOW_ACTION_QUEUE)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "Ingress must has a dest " + "QUEUE for Sample"); + } else if (attr->egress && !attr->transfer) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "Sample Only support Ingress " + "or E-Switch"); + } else if (sample->actions->type != RTE_FLOW_ACTION_TYPE_END) { + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "E-Switch doesn't support any " + "optional action for sampling"); + } + return 0; +} + +/** * Find existing modify-header resource or create and register a new one. * * @param dev[in, out] @@ -5753,6 +5877,15 @@ struct field_modify_info modify_tcp[] = { action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; rw_act_num += MLX5_ACT_NUM_SET_DSCP; break; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + ret = flow_dv_validate_action_sample(action_flags, + actions, dev, + attr, error); + if (ret < 0) + return ret; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + ++actions_n; + break; default: return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, From patchwork Wed Aug 26 16:02:03 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76046 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id D761CA04B1; Wed, 26 Aug 2020 18:03:27 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9F6881C0DC; Wed, 26 Aug 2020 18:02:20 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 0FF941C0AF for ; Wed, 26 Aug 2020 18:02:10 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 26 Aug 2020 19:02:05 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 07QG253P017365; Wed, 26 Aug 2020 19:02:05 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, jiaweiw@nvidia.com Date: Wed, 26 Aug 2020 19:02:03 +0300 Message-Id: <1598457725-396788-6-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> References: <1594057868-18724-1-git-send-email-jiaweiw@mellanox.com> <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v4 5/7] net/mlx5: split sample flow into two sub flows X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the sampler action resource structs definition. The flow with sample action will be splited into two sub flows, the prefix flow with sample action, the suffix flow with the left actions. For the prefix flow, add the extra the tag action with unique id to metadata register, and suffix flow will add the extra tag item to match that unique id. Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- drivers/net/mlx5/mlx5.c | 11 ++ drivers/net/mlx5/mlx5.h | 3 + drivers/net/mlx5/mlx5_flow.c | 258 ++++++++++++++++++++++++++++++++++++++++++- drivers/net/mlx5/mlx5_flow.h | 36 ++++++ 4 files changed, 304 insertions(+), 4 deletions(-) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 1e4c695..7b33a3e 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -244,6 +244,17 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list = .free = mlx5_free, .type = "mlx5_jump_ipool", }, + { + .size = sizeof(struct mlx5_flow_dv_sample_resource), + .trunk_size = 64, + .grow_trunk = 3, + .grow_shift = 2, + .need_lock = 0, + .release_mem_en = 1, + .malloc = mlx5_malloc, + .free = mlx5_free, + .type = "mlx5_sample_ipool", + }, #endif { .size = sizeof(struct mlx5_flow_meter), diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index ae0c7cc..a76c161 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -41,6 +41,7 @@ enum mlx5_ipool_index { MLX5_IPOOL_TAG, /* Pool for tag resource. */ MLX5_IPOOL_PORT_ID, /* Pool for port id resource. */ MLX5_IPOOL_JUMP, /* Pool for jump resource. */ + MLX5_IPOOL_SAMPLE, /* Pool for sample resource. */ #endif MLX5_IPOOL_MTR, /* Pool for meter resource. */ MLX5_IPOOL_MCP, /* Pool for metadata resource. */ @@ -514,6 +515,7 @@ struct mlx5_flow_tbl_resource { /* Tables for metering splits should be added here. */ #define MLX5_MAX_TABLES_EXTERNAL (MLX5_MAX_TABLES - 3) #define MLX5_MAX_TABLES_FDB UINT16_MAX +#define MLX5_FLOW_TABLE_FACTOR 10 /* ID generation structure. */ struct mlx5_flow_id_pool { @@ -642,6 +644,7 @@ struct mlx5_dev_ctx_shared { struct mlx5_hlist *tag_table; uint32_t port_id_action_list; /* List of port ID actions. */ uint32_t push_vlan_action_list; /* List of push VLAN actions. */ + uint32_t sample_action_list; /* List of sample actions. */ struct mlx5_flow_counter_mng cmng; /* Counters management structure. */ struct mlx5_flow_default_miss_resource default_miss; /* Default miss action resource structure. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 7150173..49f49e7 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3908,6 +3908,139 @@ struct mlx5_flow_tunnel_info { return 0; } + +/** + * Check the match action from the action list. + * + * @param[in] actions + * Pointer to the list of actions. + * @param[in] action + * The action to be check if exist. + * + * @return + * > 0 the total number of actions. + * 0 if not found match action in action list. + */ +static int +flow_check_match_action(const struct rte_flow_action actions[], + enum rte_flow_action_type action) +{ + int actions_n = 0; + int flag = 0; + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + if (actions->type == action) + flag = 1; + actions_n++; + } + /* Count RTE_FLOW_ACTION_TYPE_END. */ + return flag ? actions_n + 1 : 0; +} + +/** + * Split the sample flow. 
+ * + * As sample flow will split to two sub flow, sample flow with + * sample action, the other actions will move to new suffix flow. + * + * Also add unique tag id with tag action in the sample flow, + * the same tag id will be as match in the suffix flow. + * + * @param dev + * Pointer to Ethernet device. + * @param[in] attr + * Flow rule attributes. + * @param[out] sfx_items + * Suffix flow match items (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] actions_sfx + * Suffix flow actions. + * @param[out] actions_pre + * Prefix flow actions. + * + * @return + * 0 on success, or unique flow_id. + */ +static int +flow_sample_split_prep(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct rte_flow_item sfx_items[], + const struct rte_flow_action actions[], + struct rte_flow_action actions_sfx[], + struct rte_flow_action actions_pre[]) +{ + struct mlx5_rte_flow_action_set_tag *set_tag; + struct mlx5_rte_flow_item_tag *tag_spec; + struct mlx5_rte_flow_item_tag *tag_mask; + struct rte_flow_item *tag_item; + struct rte_flow_action *tag_action = NULL; + bool pre_sample = true; + struct rte_flow_error error; + uint32_t tag_id = 0; + + /* Prepare the actions for prefix and suffix flow. */ + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + struct rte_flow_action **action_cur = NULL; + + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_SAMPLE: + if (!attr->transfer) { + /* Add the extra tag action first for NIC-RX. */ + tag_action = actions_pre; + tag_action->type = (enum rte_flow_action_type) + MLX5_RTE_FLOW_ACTION_TYPE_TAG; + actions_pre++; + } + break; + case RTE_FLOW_ACTION_TYPE_JUMP: + case RTE_FLOW_ACTION_TYPE_METER: + action_cur = &actions_sfx; + break; + default: + break; + } + if (pre_sample && !action_cur) + action_cur = &actions_pre; + else + action_cur = &actions_sfx; + memcpy(*action_cur, actions, sizeof(struct rte_flow_action)); + (*action_cur)++; + if (actions->type == RTE_FLOW_ACTION_TYPE_SAMPLE) + pre_sample = false; + } + /* Add end action to the actions. */ + actions_sfx->type = RTE_FLOW_ACTION_TYPE_END; + actions_pre->type = RTE_FLOW_ACTION_TYPE_END; + if (!attr->transfer) { + actions_pre++; + /* Set the tag. */ + set_tag = (void *)actions_pre; + set_tag->id = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, + 0, &error); + tag_id = flow_qrss_get_id(dev); + set_tag->data = tag_id; + assert(tag_action); + tag_action->conf = set_tag; + /* Prepare the suffix subflow items. */ + tag_item = sfx_items++; + sfx_items->type = RTE_FLOW_ITEM_TYPE_END; + sfx_items++; + tag_spec = (struct mlx5_rte_flow_item_tag *)sfx_items; + tag_spec->data = tag_id; + tag_spec->id = set_tag->id; + tag_mask = tag_spec + 1; + tag_mask->data = UINT32_MAX; + tag_mask->id = UINT16_MAX; + tag_item->type = (enum rte_flow_item_type) + MLX5_RTE_FLOW_ITEM_TYPE_TAG; + tag_item->spec = tag_spec; + tag_item->last = NULL; + tag_item->mask = tag_mask; + } + return tag_id; +} + /** * The splitting for metadata feature. * @@ -4169,6 +4302,7 @@ struct mlx5_flow_tunnel_info { static int flow_create_split_meter(struct rte_eth_dev *dev, struct rte_flow *flow, + uint64_t prefix_layers, const struct rte_flow_attr *attr, const struct rte_flow_item items[], const struct rte_flow_action actions[], @@ -4216,8 +4350,9 @@ struct mlx5_flow_tunnel_info { goto exit; } /* Add the prefix subflow. 
*/ - ret = flow_create_split_inner(dev, flow, &dev_flow, 0, attr, - items, pre_actions, external, + ret = flow_create_split_inner(dev, flow, &dev_flow, + prefix_layers, attr, items, + pre_actions, external, flow_idx, error); if (ret) { ret = -rte_errno; @@ -4232,7 +4367,7 @@ struct mlx5_flow_tunnel_info { /* Add the prefix subflow. */ ret = flow_create_split_metadata(dev, flow, dev_flow ? flow_get_prefix_layer_flags(dev_flow) : - 0, &sfx_attr, + prefix_layers, &sfx_attr, sfx_items ? sfx_items : items, sfx_actions ? sfx_actions : actions, external, flow_idx, error); @@ -4243,6 +4378,121 @@ struct mlx5_flow_tunnel_info { } /** + * The splitting for sample feature. + * + * The sample flow will be split to two flows as prefix and + * suffix flow. + * + * @param dev + * Pointer to Ethernet device. + * @param[in] flow + * Parent flow structure pointer. + * @param[in] attr + * Flow rule attributes. + * @param[in] items + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[in] external + * This flow rule is created by request external to PMD. + * @param[in] flow_idx + * This memory pool index to the flow. + * @param[out] error + * Perform verbose error reporting if not NULL. + * @return + * 0 on success, negative value otherwise + */ +static int +flow_create_split_sample(struct rte_eth_dev *dev, + struct rte_flow *flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + bool external, uint32_t flow_idx, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct rte_flow_action *sfx_actions = NULL; + struct rte_flow_action *pre_actions = NULL; + struct rte_flow_item *sfx_items = NULL; + struct mlx5_flow *dev_flow = NULL; + struct rte_flow_attr sfx_attr = *attr; +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + struct mlx5_flow_dv_sample_resource *sample_res; + struct mlx5_flow_tbl_data_entry *sfx_tbl_data; + struct mlx5_flow_tbl_resource *sfx_tbl; + union mlx5_flow_tbl_key sfx_table_key; +#endif + size_t act_size; + size_t item_size; + uint32_t tag_id = 0; + int actions_n = 0; + int ret = 0; + + if (priv->sampler_en) + actions_n = flow_check_match_action(actions, + RTE_FLOW_ACTION_TYPE_SAMPLE); + if (actions_n) { + /* The prefix actions must includes sample, tag, end. */ + act_size = sizeof(struct rte_flow_action) * (actions_n * 2) + + sizeof(struct mlx5_rte_flow_action_set_tag); + /* tag, end. */ +#define SAMPLE_SUFFIX_ITEM 2 + item_size = sizeof(struct rte_flow_item) * SAMPLE_SUFFIX_ITEM + + sizeof(struct mlx5_rte_flow_item_tag) * 2; + sfx_actions = rte_zmalloc(__func__, (act_size + item_size), 0); + if (!sfx_actions) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "no memory to split " + "sample flow"); + if (!attr->transfer) + sfx_items = (struct rte_flow_item *)((char *)sfx_actions + + act_size); + pre_actions = sfx_actions + actions_n; + tag_id = flow_sample_split_prep(dev, attr, sfx_items, + actions, sfx_actions, + pre_actions); + if (!attr->transfer && !tag_id) { + ret = -rte_errno; + goto exit; + } + /* Add the prefix subflow. */ + ret = flow_create_split_inner(dev, flow, &dev_flow, 0, attr, + items, pre_actions, external, + flow_idx, error); + if (ret) { + ret = -rte_errno; + goto exit; + } + dev_flow->handle->split_flow_id = tag_id; +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + /* Set the sfx group attr. 
*/ + sample_res = (struct mlx5_flow_dv_sample_resource *) + dev_flow->dv.sample_res; + sfx_tbl = (struct mlx5_flow_tbl_resource *) + sample_res->normal_path_tbl; + sfx_tbl_data = container_of(sfx_tbl, + struct mlx5_flow_tbl_data_entry, tbl); + sfx_table_key.v64 = sfx_tbl_data->entry.key; + sfx_attr.group = sfx_attr.transfer ? + (sfx_table_key.table_id - 1) : + sfx_table_key.table_id; +#endif + } + /* Add the suffix subflow. */ + ret = flow_create_split_meter(dev, flow, dev_flow ? + flow_get_prefix_layer_flags(dev_flow) : 0, + &sfx_attr, sfx_items ? sfx_items : items, + sfx_actions ? sfx_actions : actions, + external, flow_idx, error); +exit: + if (sfx_actions) + rte_free(sfx_actions); + return ret; +} + +/** * Split the flow to subflow set. The splitters might be linked * in the chain, like this: * flow_create_split_outer() calls: @@ -4290,7 +4540,7 @@ struct mlx5_flow_tunnel_info { { int ret; - ret = flow_create_split_meter(dev, flow, attr, items, + ret = flow_create_split_sample(dev, flow, attr, items, actions, external, flow_idx, error); MLX5_ASSERT(ret <= 0); return ret; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 41404de..9d2493a 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -507,6 +507,38 @@ struct mlx5_flow_tbl_data_entry { uint32_t idx; /**< index for the indexed mempool. */ }; +/* Sub rdma-core actions list. */ +struct mlx5_flow_sub_actions_list { + uint32_t actions_num; /**< Number of sample actions. */ + uint64_t action_flags; + void *dr_queue_action; + void *dr_tag_action; + void *dr_cnt_action; +}; + +/* Sample sub-actions resource list. */ +struct mlx5_flow_sub_actions_idx { + uint32_t rix_hrxq; /**< Hash Rx queue object index. */ + uint32_t rix_tag; /**< Index to the tag action. */ + uint32_t cnt; +}; + +/* Sample action resource structure. */ +struct mlx5_flow_dv_sample_resource { + ILIST_ENTRY(uint32_t)next; /**< Pointer to next element. */ + rte_atomic32_t refcnt; /**< Reference counter. */ + void *verbs_action; /**< Verbs sample action object. */ + uint8_t ft_type; /** Flow Table Type */ + uint32_t ft_id; /** Flow Table Level */ + void *normal_path_tbl; /** Flow Table pointer */ + void *default_miss; /** default_miss dr_action. */ + uint32_t ratio; /** Sample Ratio */ + struct mlx5_flow_sub_actions_idx sample_idx; + /**< Action index resources. */ + struct mlx5_flow_sub_actions_list sample_act; + /**< Action resources. */ +}; + /* Verbs specification header. */ struct ibv_spec_header { enum ibv_flow_spec_type type; @@ -539,6 +571,8 @@ struct mlx5_flow_handle_dv { /**< Index to push VLAN action resource in cache. */ uint32_t rix_tag; /**< Index to the tag action. */ + uint32_t rix_sample; + /**< Index to sample action resource in cache. */ } __rte_packed; /** Device flow handle structure: used both for creating & destroying. */ @@ -604,6 +638,8 @@ struct mlx5_flow_dv_workspace { /**< Pointer to the jump action resource. */ struct mlx5_flow_dv_match_params value; /**< Holds the value that the packet is compared to. */ + struct mlx5_flow_dv_sample_resource *sample_res; + /**< Pointer to the sample action resource. 
*/ }; /* From patchwork Wed Aug 26 16:02:04 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76047 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8F4D4A04B1; Wed, 26 Aug 2020 18:03:44 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8FEA41C116; Wed, 26 Aug 2020 18:02:21 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id 07E711C08E for ; Wed, 26 Aug 2020 18:02:10 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 26 Aug 2020 19:02:05 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 07QG253Q017365; Wed, 26 Aug 2020 19:02:05 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, jiaweiw@nvidia.com Date: Wed, 26 Aug 2020 19:02:04 +0300 Message-Id: <1598457725-396788-7-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> References: <1594057868-18724-1-git-send-email-jiaweiw@mellanox.com> <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v4 6/7] net/mlx5: update translate function for sample action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Translate the attribute of sample action that include sample ratio and sub actions list, then create the sample DR action. Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- drivers/net/mlx5/mlx5_flow.c | 16 +- drivers/net/mlx5/mlx5_flow.h | 14 +- drivers/net/mlx5/mlx5_flow_dv.c | 494 +++++++++++++++++++++++++++++++++++++++- 3 files changed, 502 insertions(+), 22 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 49f49e7..d2b79f0 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -4606,10 +4606,14 @@ struct mlx5_flow_tunnel_info { int hairpin_flow; uint32_t hairpin_id = 0; struct rte_flow_attr attr_tx = { .priority = 0 }; + struct rte_flow_attr attr_factor = {0}; int ret; - hairpin_flow = flow_check_hairpin_split(dev, attr, actions); - ret = flow_drv_validate(dev, attr, items, p_actions_rx, + memcpy((void *)&attr_factor, (const void *)attr, sizeof(*attr)); + if (external) + attr_factor.group *= MLX5_FLOW_TABLE_FACTOR; + hairpin_flow = flow_check_hairpin_split(dev, &attr_factor, actions); + ret = flow_drv_validate(dev, &attr_factor, items, p_actions_rx, external, hairpin_flow, error); if (ret < 0) return 0; @@ -4628,7 +4632,7 @@ struct mlx5_flow_tunnel_info { rte_errno = ENOMEM; goto error_before_flow; } - flow->drv_type = flow_get_drv_type(dev, attr); + flow->drv_type = flow_get_drv_type(dev, &attr_factor); if (hairpin_id != 0) flow->hairpin_flow_id = hairpin_id; MLX5_ASSERT(flow->drv_type > MLX5_FLOW_TYPE_MIN && @@ -4674,7 +4678,7 @@ struct mlx5_flow_tunnel_info { * depending on configuration. 
In the simplest * case it just creates unmodified original flow. */ - ret = flow_create_split_outer(dev, flow, attr, + ret = flow_create_split_outer(dev, flow, &attr_factor, buf->entry[i].pattern, p_actions_rx, external, idx, error); @@ -4711,8 +4715,8 @@ struct mlx5_flow_tunnel_info { * the egress Flows belong to the different device and * copy table should be updated in peer NIC Rx domain. */ - if (attr->ingress && - (external || attr->group != MLX5_FLOW_MREG_CP_TABLE_GROUP)) { + if (attr_factor.ingress && + (external || attr_factor.group != MLX5_FLOW_MREG_CP_TABLE_GROUP)) { ret = flow_mreg_update_copy_table(dev, flow, actions, error); if (ret) goto error; diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 9d2493a..f3c0406 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -366,6 +366,13 @@ enum mlx5_flow_fate_type { MLX5_FLOW_FATE_MAX, }; +/* + * Max number of actions per DV flow. + * See CREATE_FLOW_MAX_FLOW_ACTIONS_SUPPORTED + * in rdma-core file providers/mlx5/verbs.c. + */ +#define MLX5_DV_MAX_NUMBER_OF_ACTIONS 8 + /* Matcher PRM representation */ struct mlx5_flow_dv_match_params { size_t size; @@ -613,13 +620,6 @@ struct mlx5_flow_handle { #define MLX5_FLOW_HANDLE_VERBS_SIZE (sizeof(struct mlx5_flow_handle)) #endif -/* - * Max number of actions per DV flow. - * See CREATE_FLOW_MAX_FLOW_ACTIONS_SUPPORTED - * in rdma-core file providers/mlx5/verbs.c. - */ -#define MLX5_DV_MAX_NUMBER_OF_ACTIONS 8 - /** Device flow structure only for DV flow creation. */ struct mlx5_flow_dv_workspace { uint32_t group; /**< The group index. */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index a8db8ab..3d85140 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -76,6 +76,10 @@ static int flow_dv_default_miss_resource_release(struct rte_eth_dev *dev); +static int +flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, + uint32_t encap_decap_idx); + /** * Initialize flow attributes structure according to flow items' types. * @@ -8233,6 +8237,373 @@ struct field_modify_info modify_tcp[] = { } /** + * Create an Rx Hash queue. + * + * @param dev + * Pointer to Ethernet device. + * @param[in] dev_flow + * Pointer to the mlx5_flow. + * @param[in] rss_desc + * Pointer to the mlx5_flow_rss_desc. + * @param[out] hrxq_idx + * Hash Rx queue index. + * + * @return + * The Verbs/DevX object initialised, NULL otherwise and rte_errno is set. + */ +static struct mlx5_hrxq * +flow_dv_handle_rx_queue(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + struct mlx5_flow_rss_desc *rss_desc, + uint32_t *hrxq_idx) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_flow_handle *dh = dev_flow->handle; + struct mlx5_hrxq *hrxq; + + MLX5_ASSERT(rss_desc->queue_num); + *hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, + rss_desc->queue_num); + if (!*hrxq_idx) { + *hrxq_idx = mlx5_hrxq_new + (dev, rss_desc->key, + MLX5_RSS_HASH_KEY_LEN, + dev_flow->hash_fields, + rss_desc->queue, + rss_desc->queue_num, + !!(dh->layers & + MLX5_FLOW_LAYER_TUNNEL)); + if (!*hrxq_idx) + return NULL; + } + hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], + *hrxq_idx); + return hrxq; +} + +/** + * Find existing sample resource or create and register a new one. + * + * @param[in, out] dev + * Pointer to rte_eth_dev structure. + * @param[in] attr + * Attributes of flow that includes this item. 
+ * @param[in] resource + * Pointer to sample resource. + * @parm[in, out] dev_flow + * Pointer to the dev_flow. + * @param[in, out] sample_dv_actions + * Pointer to sample actions list. + * @param[out] error + * pointer to error structure. + * + * @return + * 0 on success otherwise -errno and errno is set. + */ +static int +flow_dv_sample_resource_register(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + struct mlx5_flow_dv_sample_resource *resource, + struct mlx5_flow *dev_flow, + void **sample_dv_actions, + struct rte_flow_error *error) +{ + struct mlx5_flow_dv_sample_resource *cache_resource; + struct mlx5dv_dr_flow_sampler_attr sampler_attr; + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_dev_ctx_shared *sh = priv->sh; + struct mlx5_flow_tbl_resource *tbl; + uint32_t idx = 0; + const uint32_t next_ft_step = 1; + uint32_t next_ft_id = resource->ft_id + next_ft_step; + + /* Lookup a matching resource from cache. */ + ILIST_FOREACH(sh->ipool[MLX5_IPOOL_SAMPLE], sh->sample_action_list, + idx, cache_resource, next) { + if (resource->ratio == cache_resource->ratio && + resource->ft_type == cache_resource->ft_type && + resource->ft_id == cache_resource->ft_id && + !memcmp((void *)&resource->sample_act, + (void *)&cache_resource->sample_act, + sizeof(struct mlx5_flow_sub_actions_list))) { + DRV_LOG(DEBUG, "sample resource %p: refcnt %d++", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + rte_atomic32_inc(&cache_resource->refcnt); + dev_flow->handle->dvh.rix_sample = idx; + dev_flow->dv.sample_res = cache_resource; + return 0; + } + } + /* Register new sample resource. */ + cache_resource = mlx5_ipool_zmalloc(sh->ipool[MLX5_IPOOL_SAMPLE], + &dev_flow->handle->dvh.rix_sample); + if (!cache_resource) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot allocate resource memory"); + *cache_resource = *resource; + /* Create normal path table level */ + tbl = flow_dv_tbl_resource_get(dev, next_ft_id, + attr->egress, attr->transfer, error); + if (!tbl) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "fail to create normal path table " + "for sample"); + goto error; + } + cache_resource->normal_path_tbl = tbl; + if (resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { + cache_resource->default_miss = + mlx5_glue->dr_create_flow_action_default_miss(); + if (!cache_resource->default_miss) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "cannot create default miss " + "action"); + goto error; + } + sample_dv_actions[resource->sample_act.actions_num++] = + cache_resource->default_miss; + } + /* Create a DR sample action */ + sampler_attr.sample_ratio = cache_resource->ratio; + sampler_attr.default_next_table = tbl->obj; + sampler_attr.num_sample_actions = resource->sample_act.actions_num; + sampler_attr.sample_actions = (struct mlx5dv_dr_action **) + &sample_dv_actions[0]; + cache_resource->verbs_action = + mlx5_glue->dr_create_flow_action_sampler(&sampler_attr); + if (!cache_resource->verbs_action) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "cannot create sample action"); + goto error; + } + rte_atomic32_init(&cache_resource->refcnt); + rte_atomic32_inc(&cache_resource->refcnt); + ILIST_INSERT(sh->ipool[MLX5_IPOOL_SAMPLE], &sh->sample_action_list, + dev_flow->handle->dvh.rix_sample, cache_resource, + next); + dev_flow->dv.sample_res = cache_resource; + DRV_LOG(DEBUG, "new sample resource %p: refcnt 
%d++", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + return 0; +error: + if (cache_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { + if (cache_resource->default_miss) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->default_miss)); + } else { + if (cache_resource->sample_idx.rix_hrxq && + !mlx5_hrxq_release(dev, + cache_resource->sample_idx.rix_hrxq)) + cache_resource->sample_idx.rix_hrxq = 0; + if (cache_resource->sample_idx.rix_tag && + !flow_dv_tag_release(dev, + cache_resource->sample_idx.rix_tag)) + cache_resource->sample_idx.rix_tag = 0; + if (cache_resource->sample_idx.cnt) { + flow_dv_counter_release(dev, + cache_resource->sample_idx.cnt); + cache_resource->sample_idx.cnt = 0; + } + } + if (cache_resource->normal_path_tbl) + flow_dv_tbl_resource_release(dev, + cache_resource->normal_path_tbl); + mlx5_ipool_free(sh->ipool[MLX5_IPOOL_SAMPLE], + dev_flow->handle->dvh.rix_sample); + dev_flow->handle->dvh.rix_sample = 0; + return -rte_errno; +} + +/** + * Convert Sample action to DV specification. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] action + * Pointer to action structure. + * @param[in, out] dev_flow + * Pointer to the mlx5_flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in, out] sample_actions + * Pointer to sample actions list. + * @param[in, out] res + * Pointer to sample resource. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_translate_action_sample(struct rte_eth_dev *dev, + const struct rte_flow_action *action, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + void **sample_actions, + struct mlx5_flow_dv_sample_resource *res, + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + const struct rte_flow_action_sample *sample_action; + const struct rte_flow_action *sub_actions; + const struct rte_flow_action_queue *queue; + struct mlx5_flow_sub_actions_list *sample_act; + struct mlx5_flow_sub_actions_idx *sample_idx; + struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *) + priv->rss_desc) + [!!priv->flow_nested_idx]; + uint64_t action_flags = 0; + + sample_act = &res->sample_act; + sample_idx = &res->sample_idx; + sample_action = (const struct rte_flow_action_sample *)action->conf; + res->ratio = sample_action->ratio; + sub_actions = sample_action->actions; + for (; sub_actions->type != RTE_FLOW_ACTION_TYPE_END; sub_actions++) { + int type = sub_actions->type; + uint32_t pre_rix = 0; + void *pre_r; + switch (type) { + case RTE_FLOW_ACTION_TYPE_QUEUE: + { + struct mlx5_hrxq *hrxq; + uint32_t hrxq_idx; + + queue = sub_actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + hrxq = flow_dv_handle_rx_queue(dev, dev_flow, + rss_desc, &hrxq_idx); + if (!hrxq) + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create fate queue"); + sample_act->dr_queue_action = hrxq->action; + sample_idx->rix_hrxq = hrxq_idx; + sample_actions[sample_act->actions_num++] = + hrxq->action; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + if (action_flags & MLX5_FLOW_ACTION_MARK) + dev_flow->handle->rix_hrxq = hrxq_idx; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_QUEUE; + break; + } + case RTE_FLOW_ACTION_TYPE_MARK: + { + uint32_t tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (sub_actions->conf))->id); + 
dev_flow->handle->mark = 1; + pre_rix = dev_flow->handle->dvh.rix_tag; + /* Save the mark resource before sample */ + pre_r = dev_flow->dv.tag_resource; + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) + return -rte_errno; + MLX5_ASSERT(dev_flow->dv.tag_resource); + sample_act->dr_tag_action = + dev_flow->dv.tag_resource->action; + sample_idx->rix_tag = + dev_flow->handle->dvh.rix_tag; + sample_actions[sample_act->actions_num++] = + sample_act->dr_tag_action; + /* Recover the mark resource after sample */ + dev_flow->dv.tag_resource = pre_r; + dev_flow->handle->dvh.rix_tag = pre_rix; + action_flags |= MLX5_FLOW_ACTION_MARK; + break; + } + case RTE_FLOW_ACTION_TYPE_COUNT: + { + uint32_t counter; + + counter = flow_dv_translate_create_counter(dev, + dev_flow, sub_actions->conf, 0); + if (!counter) + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create counter" + " object."); + sample_idx->cnt = counter; + sample_act->dr_cnt_action = + (flow_dv_counter_get_by_idx(dev, + counter, NULL))->action; + sample_actions[sample_act->actions_num++] = + sample_act->dr_cnt_action; + action_flags |= MLX5_FLOW_ACTION_COUNT; + break; + } + default: + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Not support for sampler action"); + } + } + sample_act->action_flags = action_flags; + res->ft_id = dev_flow->dv.group; + if (attr->transfer) + res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + else if (attr->ingress) + res->ft_type = MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + + return 0; +} + +/** + * Convert Sample action to DV specification. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the mlx5_flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in, out] res + * Pointer to sample resource. + * @param[in] sample_actions + * Pointer to sample path actions list. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_create_action_sample(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + struct mlx5_flow_dv_sample_resource *res, + void **sample_actions, + struct rte_flow_error *error) +{ + if (flow_dv_sample_resource_register(dev, attr, res, dev_flow, + sample_actions, error)) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "can't create sample action"); + return 0; +} + +/** * Fill the flow with DV spec, lock free * (mutex should be acquired by caller). * @@ -8296,9 +8667,13 @@ struct field_modify_info modify_tcp[] = { void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + uint32_t sample_act_pos = UINT32_MAX; uint32_t table; int ret = 0; + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : MLX5DV_FLOW_TABLE_TYPE_NIC_RX; ret = mlx5_flow_group_to_table(attr, dev_flow->external, attr->group, @@ -8317,7 +8692,6 @@ struct field_modify_info modify_tcp[] = { const struct rte_flow_action_rss *rss; const struct rte_flow_action *action = actions; const uint8_t *rss_key; - const struct rte_flow_action_jump *jump_data; const struct rte_flow_action_meter *mtr; struct mlx5_flow_tbl_resource *tbl; uint32_t port_id = 0; @@ -8325,6 +8699,7 @@ struct field_modify_info modify_tcp[] = { int action_type = actions->type; const struct rte_flow_action *found_action = NULL; struct mlx5_flow_meter *fm = NULL; + uint32_t jump_group = 0; if (!mlx5_flow_os_action_supported(action_type)) return rte_flow_error_set(error, ENOTSUP, @@ -8563,9 +8938,13 @@ struct field_modify_info modify_tcp[] = { action_flags |= MLX5_FLOW_ACTION_DECAP; break; case RTE_FLOW_ACTION_TYPE_JUMP: - jump_data = action->conf; + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + if (dev_flow->external && jump_group < + MLX5_MAX_TABLES_EXTERNAL) + jump_group *= MLX5_FLOW_TABLE_FACTOR; ret = mlx5_flow_group_to_table(attr, dev_flow->external, - jump_data->group, + jump_group, !!priv->fdb_def_rule, &table, error); if (ret) @@ -8731,6 +9110,19 @@ struct field_modify_info modify_tcp[] = { return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + ret = flow_dv_translate_action_sample(dev, + actions, + dev_flow, attr, + sample_actions, + &sample_res, + error); + if (ret < 0) + return ret; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + break; case RTE_FLOW_ACTION_TYPE_END: actions_end = true; if (mhdr_res->actions_num) { @@ -8757,6 +9149,21 @@ struct field_modify_info modify_tcp[] = { (flow_dv_counter_get_by_idx(dev, flow->counter, NULL))->action; } + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) { + ret = flow_dv_create_action_sample(dev, + dev_flow, attr, + &sample_res, + sample_actions, + error); + if (ret < 0) + return rte_flow_error_set + (error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create sample action"); + dev_flow->dv.actions[sample_act_pos] = + dev_flow->dv.sample_res->verbs_action; + } break; default: break; @@ -9068,7 +9475,8 @@ struct field_modify_info modify_tcp[] = { dh->rix_hrxq = UINT32_MAX; dv->actions[n++] = drop_hrxq->action; } - } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) { + } else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE && + !dv_h->rix_sample) { struct mlx5_hrxq *hrxq; uint32_t hrxq_idx; struct mlx5_flow_rss_desc *rss_desc = @@ -9200,18 +9608,18 @@ struct field_modify_info modify_tcp[] = { * * @param dev * Pointer to Ethernet device. - * @param handle - * Pointer to mlx5_flow_handle. + * @param encap_decap_idx + * Index of encap decap resource. * * @return * 1 while a reference on it exists, 0 when freed. */ static int flow_dv_encap_decap_resource_release(struct rte_eth_dev *dev, - struct mlx5_flow_handle *handle) + uint32_t encap_decap_idx) { struct mlx5_priv *priv = dev->data->dev_private; - uint32_t idx = handle->dvh.rix_encap_decap; + uint32_t idx = encap_decap_idx; struct mlx5_flow_dv_encap_decap_resource *cache_resource; cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_DECAP_ENCAP], @@ -9462,6 +9870,71 @@ struct field_modify_info modify_tcp[] = { } /** + * Release an sample resource. + * + * @param dev + * Pointer to Ethernet device. + * @param handle + * Pointer to mlx5_flow_handle. 
+ * + * @return + * 1 while a reference on it exists, 0 when freed. + */ +static int +flow_dv_sample_resource_release(struct rte_eth_dev *dev, + struct mlx5_flow_handle *handle) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t idx = handle->dvh.rix_sample; + struct mlx5_flow_dv_sample_resource *cache_resource; + + cache_resource = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_SAMPLE], + idx); + if (!cache_resource) + return 0; + MLX5_ASSERT(cache_resource->verbs_action); + DRV_LOG(DEBUG, "sample resource %p: refcnt %d--", + (void *)cache_resource, + rte_atomic32_read(&cache_resource->refcnt)); + if (rte_atomic32_dec_and_test(&cache_resource->refcnt)) { + if (cache_resource->verbs_action) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->verbs_action)); + if (cache_resource->ft_type == MLX5DV_FLOW_TABLE_TYPE_FDB) { + if (cache_resource->default_miss) + claim_zero(mlx5_glue->destroy_flow_action + (cache_resource->default_miss)); + } + if (cache_resource->normal_path_tbl) + flow_dv_tbl_resource_release(dev, + cache_resource->normal_path_tbl); + } + if (cache_resource->sample_idx.rix_hrxq && + !mlx5_hrxq_release(dev, + cache_resource->sample_idx.rix_hrxq)) + cache_resource->sample_idx.rix_hrxq = 0; + if (cache_resource->sample_idx.rix_tag && + !flow_dv_tag_release(dev, + cache_resource->sample_idx.rix_tag)) + cache_resource->sample_idx.rix_tag = 0; + if (cache_resource->sample_idx.cnt) { + flow_dv_counter_release(dev, + cache_resource->sample_idx.cnt); + cache_resource->sample_idx.cnt = 0; + } + if (!rte_atomic32_read(&cache_resource->refcnt)) { + ILIST_REMOVE(priv->sh->ipool[MLX5_IPOOL_SAMPLE], + &priv->sh->sample_action_list, idx, + cache_resource, next); + mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_SAMPLE], idx); + DRV_LOG(DEBUG, "sample resource %p: removed", + (void *)cache_resource); + return 0; + } + return 1; +} + +/** * Remove the flow from the NIC but keeps it in memory. * Lock free, (mutex should be acquired by caller). 
* @@ -9540,8 +10013,11 @@ struct field_modify_info modify_tcp[] = { flow->dev_handles = dev_handle->next.next; if (dev_handle->dvh.matcher) flow_dv_matcher_release(dev, dev_handle); + if (dev_handle->dvh.rix_sample) + flow_dv_sample_resource_release(dev, dev_handle); if (dev_handle->dvh.rix_encap_decap) - flow_dv_encap_decap_resource_release(dev, dev_handle); + flow_dv_encap_decap_resource_release(dev, + dev_handle->dvh.rix_encap_decap); if (dev_handle->dvh.modify_hdr) flow_dv_modify_hdr_resource_release(dev, dev_handle); if (dev_handle->dvh.rix_push_vlan) From patchwork Wed Aug 26 16:02:05 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiawei Wang X-Patchwork-Id: 76040 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7BCADA04B1; Wed, 26 Aug 2020 18:02:14 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 404651BFE3; Wed, 26 Aug 2020 18:02:13 +0200 (CEST) Received: from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129]) by dpdk.org (Postfix) with ESMTP id E5B451BECF for ; Wed, 26 Aug 2020 18:02:10 +0200 (CEST) Received: from Internal Mail-Server by MTLPINE1 (envelope-from jiaweiw@nvidia.com) with SMTP; 26 Aug 2020 19:02:05 +0300 Received: from nvidia.com (gen-l-vrt-280.mtl.labs.mlnx [10.237.45.1]) by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 07QG253R017365; Wed, 26 Aug 2020 19:02:05 +0300 From: Jiawei Wang To: orika@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com Cc: dev@dpdk.org, thomas@monjalon.net, rasland@nvidia.com, ian.stokes@intel.com, fbl@redhat.com, jiaweiw@nvidia.com Date: Wed, 26 Aug 2020 19:02:05 +0300 Message-Id: <1598457725-396788-8-git-send-email-jiaweiw@nvidia.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> References: <1594057868-18724-1-git-send-email-jiaweiw@mellanox.com> <1598457725-396788-1-git-send-email-jiaweiw@nvidia.com> Subject: [dpdk-dev] [PATCH v4 7/7] app/testpmd: add testpmd command for sample action X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev"

Add a new testpmd command 'set sample_actions' that configures one of several sample action lists, selected by index:

    set sample_actions <index> <action list>

Examples of the sample flow use cases and their results:

1. set sample_actions 0 mark id 0x8 / queue index 2 / end
   .. pattern eth / end actions sample ratio 2 index 0 / jump group 2 ...

   All matched ingress packets jump to the next flow table, and every second
   packet is additionally marked and sent to queue 2 of the control
   application.

2. ...pattern eth / end actions sample ratio 2 / port_id id 2 ...

   All matched ingress packets are sent to port 2, and every second packet is
   additionally sent to the e-switch manager vport.
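For reference, here is a minimal sketch of how an application could build the same rule as example 1 directly through the rte_flow API instead of the testpmd command line. The helper name create_sample_flow, the port id, queue index, mark id and jump group are illustrative assumptions taken from the example above, not values mandated by this patch; error handling is left to the caller.

    #include <rte_flow.h>

    /*
     * Sketch: mirror 1/2 of the matched ingress packets to a mark + queue
     * sub-action list, while all packets continue to flow group 2.
     * Assumes the port is started and Rx queue 2 exists.
     */
    static struct rte_flow *
    create_sample_flow(uint16_t port_id, struct rte_flow_error *error)
    {
        static const struct rte_flow_attr attr = { .ingress = 1 };
        static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* Actions applied only to the sampled (duplicated) packets. */
        static const struct rte_flow_action_mark mark = { .id = 0x8 };
        static const struct rte_flow_action_queue queue = { .index = 2 };
        static const struct rte_flow_action sample_sub_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        /* ratio = 2: every second packet is duplicated to the sub-actions. */
        static const struct rte_flow_action_sample sample = {
            .ratio = 2,
            .actions = sample_sub_actions,
        };
        static const struct rte_flow_action_jump jump = { .group = 2 };
        static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample },
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }

As with the testpmd command, the sampled copies hit the mark/queue sub-action list while the original packets proceed to group 2.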
Signed-off-by: Jiawei Wang Acked-by: Ori Kam --- app/test-pmd/cmdline_flow.c | 285 ++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 276 insertions(+), 9 deletions(-) diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c index 6263d30..27fa294 100644 --- a/app/test-pmd/cmdline_flow.c +++ b/app/test-pmd/cmdline_flow.c @@ -56,6 +56,8 @@ enum index { SET_RAW_ENCAP, SET_RAW_DECAP, SET_RAW_INDEX, + SET_SAMPLE_ACTIONS, + SET_SAMPLE_INDEX, /* Top-level command. */ FLOW, @@ -358,6 +360,10 @@ enum index { ACTION_SET_IPV6_DSCP_VALUE, ACTION_AGE, ACTION_AGE_TIMEOUT, + ACTION_SAMPLE, + ACTION_SAMPLE_RATIO, + ACTION_SAMPLE_INDEX, + ACTION_SAMPLE_INDEX_VALUE, }; /** Maximum size for pattern in struct rte_flow_item_raw. */ @@ -493,6 +499,22 @@ struct action_nvgre_encap_data { struct mplsoudp_decap_conf mplsoudp_decap_conf; +#define ACTION_SAMPLE_ACTIONS_NUM 10 +#define RAW_SAMPLE_CONFS_MAX_NUM 8 +/** Storage for struct rte_flow_action_sample including external data. */ +struct action_sample_data { + struct rte_flow_action_sample conf; + uint32_t idx; +}; +/** Storage for struct rte_flow_action_sample. */ +struct raw_sample_conf { + struct rte_flow_action data[ACTION_SAMPLE_ACTIONS_NUM]; +}; +struct raw_sample_conf raw_sample_confs[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_mark sample_mark[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_queue sample_queue[RAW_SAMPLE_CONFS_MAX_NUM]; +struct rte_flow_action_count sample_count[RAW_SAMPLE_CONFS_MAX_NUM]; + /** Maximum number of subsequent tokens and arguments on the stack. */ #define CTX_STACK_SIZE 16 @@ -1189,6 +1211,7 @@ struct parse_action_priv { ACTION_SET_IPV4_DSCP, ACTION_SET_IPV6_DSCP, ACTION_AGE, + ACTION_SAMPLE, ZERO, }; @@ -1421,9 +1444,28 @@ struct parse_action_priv { ZERO, }; +static const enum index action_sample[] = { + ACTION_SAMPLE, + ACTION_SAMPLE_RATIO, + ACTION_SAMPLE_INDEX, + ACTION_NEXT, + ZERO, +}; + +static const enum index next_action_sample[] = { + ACTION_QUEUE, + ACTION_MARK, + ACTION_COUNT, + ACTION_NEXT, + ZERO, +}; + static int parse_set_raw_encap_decap(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_set_sample_action(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_set_init(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -1491,7 +1533,15 @@ static int parse_vc_action_raw_decap_index(struct context *, static int parse_vc_action_set_meta(struct context *ctx, const struct token *token, const char *str, unsigned int len, void *buf, + unsigned int size); +static int parse_vc_action_sample(struct context *ctx, + const struct token *token, const char *str, + unsigned int len, void *buf, unsigned int size); +static int +parse_vc_action_sample_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size); static int parse_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -1562,6 +1612,8 @@ static int comp_vc_action_rss_queue(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_set_raw_index(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_set_sample_index(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. 
*/ static const struct token token_list[] = { @@ -3703,11 +3755,13 @@ static int comp_set_raw_index(struct context *, const struct token *, /* Top level command. */ [SET] = { .name = "set", - .help = "set raw encap/decap data", - .type = "set raw_encap|raw_decap ", + .help = "set raw encap/decap/sample data", + .type = "set raw_encap|raw_decap " + " or set sample_actions ", .next = NEXT(NEXT_ENTRY (SET_RAW_ENCAP, - SET_RAW_DECAP)), + SET_RAW_DECAP, + SET_SAMPLE_ACTIONS)), .call = parse_set_init, }, /* Sub-level commands. */ @@ -3738,6 +3792,23 @@ static int comp_set_raw_index(struct context *, const struct token *, .next = NEXT(next_item), .call = parse_port, }, + [SET_SAMPLE_INDEX] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "index of sample actions", + .next = NEXT(next_action_sample), + .call = parse_port, + }, + [SET_SAMPLE_ACTIONS] = { + .name = "sample_actions", + .help = "set sample actions list", + .next = NEXT(NEXT_ENTRY(SET_SAMPLE_INDEX)), + .args = ARGS(ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct buffer, port), + sizeof(((struct buffer *)0)->port), + 0, RAW_SAMPLE_CONFS_MAX_NUM - 1)), + .call = parse_set_sample_action, + }, [ACTION_SET_TAG] = { .name = "set_tag", .help = "set tag", @@ -3841,6 +3912,37 @@ static int comp_set_raw_index(struct context *, const struct token *, .next = NEXT(action_age, NEXT_ENTRY(UNSIGNED)), .call = parse_vc_conf, }, + [ACTION_SAMPLE] = { + .name = "sample", + .help = "set a sample action", + .next = NEXT(action_sample), + .priv = PRIV_ACTION(SAMPLE, + sizeof(struct action_sample_data)), + .call = parse_vc_action_sample, + }, + [ACTION_SAMPLE_RATIO] = { + .name = "ratio", + .help = "flow sample ratio value", + .next = NEXT(action_sample, NEXT_ENTRY(UNSIGNED)), + .args = ARGS(ARGS_ENTRY_ARB + (offsetof(struct action_sample_data, conf) + + offsetof(struct rte_flow_action_sample, ratio), + sizeof(((struct rte_flow_action_sample *)0)-> + ratio))), + }, + [ACTION_SAMPLE_INDEX] = { + .name = "index", + .help = "the index of sample actions list", + .next = NEXT(NEXT_ENTRY(ACTION_SAMPLE_INDEX_VALUE)), + }, + [ACTION_SAMPLE_INDEX_VALUE] = { + .name = "{index}", + .type = "UNSIGNED", + .help = "unsigned integer value", + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)), + .call = parse_vc_action_sample_index, + .comp = comp_set_sample_index, + }, }; /** Remove and return last entry from argument stack. */ @@ -5351,6 +5453,76 @@ static int comp_set_raw_index(struct context *, const struct token *, return len; } +static int +parse_vc_action_sample(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct buffer *out = buf; + struct rte_flow_action *action; + struct action_sample_data *action_sample_data = NULL; + static struct rte_flow_action end_action = { + RTE_FLOW_ACTION_TYPE_END, 0 + }; + int ret; + + ret = parse_vc(ctx, token, str, len, buf, size); + if (ret < 0) + return ret; + /* Nothing else to do if there is no buffer. */ + if (!out) + return ret; + if (!out->args.vc.actions_n) + return -1; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + /* Point to selected object. */ + ctx->object = out->args.vc.data; + ctx->objmask = NULL; + /* Copy the headers to the buffer. 
*/ + action_sample_data = ctx->object; + action_sample_data->conf.actions = &end_action; + action->conf = &action_sample_data->conf; + return ret; +} + +static int +parse_vc_action_sample_index(struct context *ctx, const struct token *token, + const char *str, unsigned int len, void *buf, + unsigned int size) +{ + struct action_sample_data *action_sample_data; + struct rte_flow_action *action; + const struct arg *arg; + struct buffer *out = buf; + int ret; + uint16_t idx; + + RTE_SET_USED(token); + RTE_SET_USED(buf); + RTE_SET_USED(size); + if (ctx->curr != ACTION_SAMPLE_INDEX_VALUE) + return -1; + arg = ARGS_ENTRY_ARB_BOUNDED + (offsetof(struct action_sample_data, idx), + sizeof(((struct action_sample_data *)0)->idx), + 0, RAW_SAMPLE_CONFS_MAX_NUM - 1); + if (push_args(ctx, arg)) + return -1; + ret = parse_int(ctx, token, str, len, NULL, 0); + if (ret < 0) { + pop_args(ctx); + return -1; + } + if (!ctx->object) + return len; + action = &out->args.vc.actions[out->args.vc.actions_n - 1]; + action_sample_data = ctx->object; + idx = action_sample_data->idx; + action_sample_data->conf.actions = raw_sample_confs[idx].data; + action->conf = &action_sample_data->conf; + return len; +} + /** Parse tokens for destroy command. */ static int parse_destroy(struct context *ctx, const struct token *token, @@ -6115,6 +6287,38 @@ static int comp_set_raw_index(struct context *, const struct token *, if (!out->command) return -1; out->command = ctx->curr; + /* For encap/decap we need is pattern */ + out->args.vc.pattern = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; +} + +/** Parse set command, initialize output buffer for subsequent tokens. */ +static int +parse_set_sample_action(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + /* Make sure buffer is large enough. */ + if (size < sizeof(*out)) + return -1; + ctx->objdata = 0; + ctx->objmask = NULL; + ctx->object = out; + if (!out->command) + return -1; + out->command = ctx->curr; + /* For sampler we need is actions */ + out->args.vc.actions = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); return len; } @@ -6151,11 +6355,8 @@ static int comp_set_raw_index(struct context *, const struct token *, return -1; out->command = ctx->curr; out->args.vc.data = (uint8_t *)out + size; - /* All we need is pattern */ - out->args.vc.pattern = - (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), - sizeof(double)); - ctx->object = out->args.vc.pattern; + ctx->object = (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); } return len; } @@ -6306,6 +6507,24 @@ static int comp_set_raw_index(struct context *, const struct token *, return nb; } +/** Complete index number for set raw_encap/raw_decap commands. */ +static int +comp_set_sample_index(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + uint16_t idx = 0; + uint16_t nb = 0; + + RTE_SET_USED(ctx); + RTE_SET_USED(token); + for (idx = 0; idx < RAW_SAMPLE_CONFS_MAX_NUM; ++idx) { + if (buf && idx == ent) + return snprintf(buf, size, "%u", idx); + ++nb; + } + return nb; +} + /** Internal context. 
*/ static struct context cmd_flow_context; @@ -6751,7 +6970,53 @@ static int comp_set_raw_index(struct context *, const struct token *, return mask; } - +/** Dispatch parsed buffer to function calls. */ +static void +cmd_set_raw_parsed_sample(const struct buffer *in) +{ + uint32_t n = in->args.vc.actions_n; + uint32_t i = 0; + struct rte_flow_action *action = NULL; + struct rte_flow_action *data = NULL; + size_t size = 0; + uint16_t idx = in->port; /* We borrow port field as index */ + uint32_t max_size = sizeof(struct rte_flow_action) * + ACTION_SAMPLE_ACTIONS_NUM; + + RTE_ASSERT(in->command == SET_SAMPLE_ACTIONS); + data = (struct rte_flow_action *)&raw_sample_confs[idx].data; + memset(data, 0x00, max_size); + for (; i <= n - 1; i++) { + action = in->args.vc.actions + i; + if (action->type == RTE_FLOW_ACTION_TYPE_END) + break; + switch (action->type) { + case RTE_FLOW_ACTION_TYPE_MARK: + size = sizeof(struct rte_flow_action_mark); + rte_memcpy(&sample_mark[idx], + (const void *)action->conf, size); + action->conf = &sample_mark[idx]; + break; + case RTE_FLOW_ACTION_TYPE_COUNT: + size = sizeof(struct rte_flow_action_count); + rte_memcpy(&sample_count[idx], + (const void *)action->conf, size); + action->conf = &sample_count[idx]; + break; + case RTE_FLOW_ACTION_TYPE_QUEUE: + size = sizeof(struct rte_flow_action_queue); + rte_memcpy(&sample_queue[idx], + (const void *)action->conf, size); + action->conf = &sample_queue[idx]; + break; + default: + printf("Error - Not supported action\n"); + return; + } + rte_memcpy(data, action, sizeof(struct rte_flow_action)); + data++; + } +} /** Dispatch parsed buffer to function calls. */ static void @@ -6768,6 +7033,8 @@ static int comp_set_raw_index(struct context *, const struct token *, uint16_t proto = 0; uint16_t idx = in->port; /* We borrow port field as index */ + if (in->command == SET_SAMPLE_ACTIONS) + return cmd_set_raw_parsed_sample(in); RTE_ASSERT(in->command == SET_RAW_ENCAP || in->command == SET_RAW_DECAP); if (in->command == SET_RAW_ENCAP) {