From patchwork Tue Jun 1 01:40:30 2021
X-Patchwork-Submitter: "Xu, Ting" <ting.xu@intel.com>
X-Patchwork-Id: 93669
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu <ting.xu@intel.com>
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com,
 qiming.yang@intel.com
Date: Tue, 1 Jun 2021 09:40:30 +0800
Message-Id: <20210601014034.36100-2-ting.xu@intel.com>
In-Reply-To: <20210601014034.36100-1-ting.xu@intel.com>
References: <20210601014034.36100-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 1/5] common/iavf: add support for ETS-based Tx QoS

This patch adds support for configuring ETS-based Tx QoS. New virtchnl
structures and opcodes are added in three parts, to:
1. Configure VF TC bandwidth limits.
2. Let the VF query the current QoS configuration from the PF.
3. Set up the VF queue-to-TC mapping.
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/common/iavf/iavf_type.h |   2 +
 drivers/common/iavf/virtchnl.h  | 117 ++++++++++++++++++++++++++++++++
 2 files changed, 119 insertions(+)

diff --git a/drivers/common/iavf/iavf_type.h b/drivers/common/iavf/iavf_type.h
index f3815d523b..73dfb47e70 100644
--- a/drivers/common/iavf/iavf_type.h
+++ b/drivers/common/iavf/iavf_type.h
@@ -141,6 +141,8 @@ enum iavf_debug_mask {
 #define IAVF_PHY_LED_MODE_MASK		0xFFFF
 #define IAVF_PHY_LED_MODE_ORIG		0x80000000
 
+#define IAVF_MAX_TRAFFIC_CLASS		8
+
 /* Memory types */
 enum iavf_memset_type {
 	IAVF_NONDMA_MEM = 0,
diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index 3a60faff93..a00cd76118 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -130,6 +130,7 @@ enum virtchnl_ops {
 	VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
 	VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
 	/* opcodes 34, 35, 36, and 37 are reserved */
+	VIRTCHNL_OP_DCF_CONFIG_VF_TC = 37,
 	VIRTCHNL_OP_DCF_VLAN_OFFLOAD = 38,
 	VIRTCHNL_OP_DCF_CMD_DESC = 39,
 	VIRTCHNL_OP_DCF_CMD_BUFF = 40,
@@ -152,6 +153,8 @@ enum virtchnl_ops {
 	VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57,
 	VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2 = 58,
 	VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2 = 59,
+	VIRTCHNL_OP_GET_QOS_CAPS = 66,
+	VIRTCHNL_OP_CONFIG_TC_MAP = 67,
 	VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
 	VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
 	VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
@@ -398,6 +401,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
 #define VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC	BIT(26)
 #define VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF		BIT(27)
 #define VIRTCHNL_VF_OFFLOAD_FDIR_PF		BIT(28)
+#define VIRTCHNL_VF_OFFLOAD_TC			BIT(29)
 #define VIRTCHNL_VF_CAP_DCF			BIT(30)
 	/* BIT(31) is reserved */
 
@@ -1786,6 +1790,91 @@ struct virtchnl_fdir_query {
 
 VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_fdir_query);
 
+/* VIRTCHNL_OP_DCF_CONFIG_VF_TC
+ * The VF sends this message to set the configuration of each TC
+ * for a specific VF id.
+ */
+enum virtchnl_bw_limit_type {
+	VIRTCHNL_BW_SHAPER = 0,
+};
+
+struct virtchnl_shaper_bw {
+	u32 committed;
+	u32 peak;
+};
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_shaper_bw);
+
+struct virtchnl_dcf_vf_bw_cfg {
+	u8 tc_id;
+	u8 pad[3];
+	enum virtchnl_bw_limit_type type;
+	union {
+		struct virtchnl_shaper_bw shaper;
+		u8 pad2[32];
+	};
+};
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_dcf_vf_bw_cfg);
+
+struct virtchnl_dcf_vf_bw_cfg_list {
+	u16 vf_id;
+	u16 num_elem;
+	struct virtchnl_dcf_vf_bw_cfg cfg[1];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(44, virtchnl_dcf_vf_bw_cfg_list);
+
+/* VIRTCHNL_OP_GET_QOS_CAPS
+ * The VF sends this message to get its QoS caps, such as
+ * TC number, arbiter and bandwidth.
+ */
+struct virtchnl_qos_cap_elem {
+	u8 tc_id;
+	u8 prio_of_tc;
+#define VIRTCHNL_ABITER_STRICT	0
+#define VIRTCHNL_ABITER_ETS	2
+	u8 arbiter;
+	u8 weight;
+	enum virtchnl_bw_limit_type type;
+	union {
+		struct virtchnl_shaper_bw shaper;
+		u8 pad2[32];
+	};
+};
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_qos_cap_elem);
+
+struct virtchnl_qos_cap_list {
+	u16 vsi_id;
+	u16 num_elem;
+	struct virtchnl_qos_cap_elem cap[1];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(44, virtchnl_qos_cap_list);
+
+/* VIRTCHNL_OP_CONFIG_TC_MAP
+ * The VF sends a virtchnl_queue_tc_mapping message to set the
+ * queue-to-TC mapping for all the Tx and Rx queues of a specified
+ * VSI, and receives in response the bitmap of valid user priorities
+ * associated with the queues.
+ */
+struct virtchnl_queue_tc_mapping {
+	u16 vsi_id;
+	u16 num_tc;
+	u16 num_queue_pairs;
+	u8 pad[2];
+	union {
+		struct {
+			u16 start_queue_id;
+			u16 queue_count;
+		} req;
+		struct {
+#define VIRTCHNL_USER_PRIO_TYPE_UP	0
+#define VIRTCHNL_USER_PRIO_TYPE_DSCP	1
+			u16 prio_type;
+			u16 valid_prio_bitmap;
+		} resp;
+	} tc[1];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_tc_mapping);
+
 /* TX and RX queue types are valid in legacy as well as split queue models.
  * With Split Queue model, 2 additional types are introduced - TX_COMPLETION
  * and RX_BUFFER. In split queue model, RX corresponds to the queue where HW
@@ -2117,6 +2206,19 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
 	case VIRTCHNL_OP_DCF_GET_VSI_MAP:
 	case VIRTCHNL_OP_DCF_GET_PKG_INFO:
 		break;
+	case VIRTCHNL_OP_DCF_CONFIG_VF_TC:
+		valid_len = sizeof(struct virtchnl_dcf_vf_bw_cfg_list);
+		if (msglen >= valid_len) {
+			struct virtchnl_dcf_vf_bw_cfg_list *cfg_list =
+				(struct virtchnl_dcf_vf_bw_cfg_list *)msg;
+			if (cfg_list->num_elem == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += (cfg_list->num_elem - 1) *
+				     sizeof(struct virtchnl_dcf_vf_bw_cfg);
+		}
+		break;
 	case VIRTCHNL_OP_GET_SUPPORTED_RXDIDS:
 		break;
 	case VIRTCHNL_OP_ADD_RSS_CFG:
@@ -2132,6 +2234,21 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
 	case VIRTCHNL_OP_QUERY_FDIR_FILTER:
 		valid_len = sizeof(struct virtchnl_fdir_query);
 		break;
+	case VIRTCHNL_OP_GET_QOS_CAPS:
+		break;
+	case VIRTCHNL_OP_CONFIG_TC_MAP:
+		valid_len = sizeof(struct virtchnl_queue_tc_mapping);
+		if (msglen >= valid_len) {
+			struct virtchnl_queue_tc_mapping *q_tc =
+				(struct virtchnl_queue_tc_mapping *)msg;
+			if (q_tc->num_tc == 0) {
+				err_msg_format = true;
+				break;
+			}
+			valid_len += (q_tc->num_tc - 1) *
+				     sizeof(q_tc->tc[0]);
+		}
+		break;
 	case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
 		break;
 	case VIRTCHNL_OP_ADD_VLAN_V2:
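A note on the message layout used above: both list structures end in a
one-element array, so the wire size of a message is the base struct plus
(num_elem - 1) trailing elements — the same arithmetic the validation
code applies. A minimal sender-side sketch of sizing and filling such a
message follows; the struct definitions come from this patch, while the
helper itself and its send step are hypothetical stand-ins for the
driver's command path (ice_dcf_execute_virtchnl_cmd() in patch 4):

	#include <stdlib.h>

	/* Hedged sketch: build a virtchnl_dcf_vf_bw_cfg_list carrying one
	 * shaper config per enabled TC for a given VF.
	 */
	static int
	dcf_config_vf_tc_example(u16 vf_id, u16 num_tc, u32 peak_kbps)
	{
		struct virtchnl_dcf_vf_bw_cfg_list *list;
		size_t len;
		u16 i;

		if (num_tc == 0)
			return -1;
		/* the base struct already contains cfg[1] */
		len = sizeof(*list) +
		      (num_tc - 1) * sizeof(struct virtchnl_dcf_vf_bw_cfg);
		list = calloc(1, len);
		if (!list)
			return -1;
		list->vf_id = vf_id;
		list->num_elem = num_tc;
		for (i = 0; i < num_tc; i++) {
			list->cfg[i].tc_id = i;
			list->cfg[i].type = VIRTCHNL_BW_SHAPER;
			list->cfg[i].shaper.peak = peak_kbps;
		}
		/* ... send VIRTCHNL_OP_DCF_CONFIG_VF_TC with (list, len) ... */
		free(list);
		return 0;
	}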
From patchwork Tue Jun 1 01:40:31 2021
X-Patchwork-Submitter: "Xu, Ting" <ting.xu@intel.com>
X-Patchwork-Id: 93670
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu <ting.xu@intel.com>
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com,
 qiming.yang@intel.com
Date: Tue, 1 Jun 2021 09:40:31 +0800
Message-Id: <20210601014034.36100-3-ting.xu@intel.com>
In-Reply-To: <20210601014034.36100-1-ting.xu@intel.com>
References: <20210601014034.36100-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 2/5] net/ice/base: support DCF query port ETS adminq

The query port ETS adminq command needs the root node TEID. For the
DCF, however, the root node is not initialized, so dereferencing it
causes an error. Check whether the root node is available before
using it.

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 0aaa5ae8c1..08c950cd9a 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -1483,7 +1483,8 @@ ice_aq_query_port_ets(struct ice_port_info *pi,
 		return ICE_ERR_PARAM;
 	cmd = &desc.params.port_ets;
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_port_ets);
-	cmd->port_teid = pi->root->info.node_teid;
+	if (pi->root)
+		cmd->port_teid = pi->root->info.node_teid;
 
 	status = ice_aq_send_cmd(pi->hw, &desc, buf, buf_size, cd);
 	return status;
From patchwork Tue Jun 1 01:40:32 2021
X-Patchwork-Submitter: "Xu, Ting" <ting.xu@intel.com>
X-Patchwork-Id: 93671
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu <ting.xu@intel.com>
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com,
 qiming.yang@intel.com
Date: Tue, 1 Jun 2021 09:40:32 +0800
Message-Id: <20210601014034.36100-4-ting.xu@intel.com>
In-Reply-To: <20210601014034.36100-1-ting.xu@intel.com>
References: <20210601014034.36100-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 3/5] net/ice: support DCF link status event handling

When the link status changes, the DCF receives a virtchnl PF event
message. Add support to handle this event and update the link status
and link info.

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.h        |  6 ++++
 drivers/net/ice/ice_dcf_ethdev.c | 54 ++++++++++++++++++++++++++++++--
 drivers/net/ice/ice_dcf_parent.c | 51 ++++++++++++++++++++++++++++++
 3 files changed, 108 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 0cb90b5e9f..587093b909 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -60,6 +60,10 @@ struct ice_dcf_hw {
 	uint16_t nb_msix;
 	uint16_t rxq_map[16];
 	struct virtchnl_eth_stats eth_stats_offset;
+
+	/* Link status */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -77,5 +81,7 @@ int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 int ice_dcf_query_stats(struct ice_dcf_hw *hw,
 			struct virtchnl_eth_stats *pstats);
 int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
+int ice_dcf_link_update(struct rte_eth_dev *dev,
+			__rte_unused int wait_to_complete);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b937cbbb03..332ce340cf 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -880,11 +880,59 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static int
-ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
+int
+ice_dcf_link_update(struct rte_eth_dev *dev,
 		    __rte_unused int wait_to_complete)
 {
-	return 0;
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	/* Only read the status info stored in the VF; it is updated when a
+	 * LINK_CHANGE event is received from the PF over virtchnl.
+	 */
+	switch (hw->link_speed) {
+	case 10:
+		new_link.link_speed = ETH_SPEED_NUM_10M;
+		break;
+	case 100:
+		new_link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+	case 1000:
+		new_link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+	case 10000:
+		new_link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	case 20000:
+		new_link.link_speed = ETH_SPEED_NUM_20G;
+		break;
+	case 25000:
+		new_link.link_speed = ETH_SPEED_NUM_25G;
+		break;
+	case 40000:
+		new_link.link_speed = ETH_SPEED_NUM_40G;
+		break;
+	case 50000:
+		new_link.link_speed = ETH_SPEED_NUM_50G;
+		break;
+	case 100000:
+		new_link.link_speed = ETH_SPEED_NUM_100G;
+		break;
+	default:
+		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		break;
+	}
+
+	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = hw->link_up ? ETH_LINK_UP :
+					     ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(dev, &new_link);
 }
 
 /* Add UDP tunneling port */
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 1d7aa8bc87..0c0706316d 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -178,6 +178,44 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw, bool vfr, uint16_t vf_id)
 	}
 }
 
+static uint32_t
+ice_dcf_convert_link_speed(enum virtchnl_link_speed virt_link_speed)
+{
+	uint32_t speed;
+
+	switch (virt_link_speed) {
+	case VIRTCHNL_LINK_SPEED_100MB:
+		speed = 100;
+		break;
+	case VIRTCHNL_LINK_SPEED_1GB:
+		speed = 1000;
+		break;
+	case VIRTCHNL_LINK_SPEED_10GB:
+		speed = 10000;
+		break;
+	case VIRTCHNL_LINK_SPEED_40GB:
+		speed = 40000;
+		break;
+	case VIRTCHNL_LINK_SPEED_20GB:
+		speed = 20000;
+		break;
+	case VIRTCHNL_LINK_SPEED_25GB:
+		speed = 25000;
+		break;
+	case VIRTCHNL_LINK_SPEED_2_5GB:
+		speed = 2500;
+		break;
+	case VIRTCHNL_LINK_SPEED_5GB:
+		speed = 5000;
+		break;
+	default:
+		speed = 0;
+		break;
+	}
+
+	return speed;
+}
+
 void
 ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
 			    uint8_t *msg, uint16_t msglen)
@@ -196,6 +234,19 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
 		break;
 	case VIRTCHNL_EVENT_LINK_CHANGE:
 		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
+		dcf_hw->link_up = pf_msg->event_data.link_event.link_status;
+		if (dcf_hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_CAP_ADV_LINK_SPEED) {
+			dcf_hw->link_speed =
+				pf_msg->event_data.link_event_adv.link_speed;
+		} else {
+			enum virtchnl_link_speed speed;
+			speed = pf_msg->event_data.link_event.link_speed;
+			dcf_hw->link_speed = ice_dcf_convert_link_speed(speed);
+		}
+		ice_dcf_link_update(dcf_hw->eth_dev, 0);
+		rte_eth_dev_callback_process(dcf_hw->eth_dev,
+					     RTE_ETH_EVENT_INTR_LSC, NULL);
 		break;
 	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
 		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
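Since the handler above fires RTE_ETH_EVENT_INTR_LSC, an application on
top of the DCF port can consume the update through the standard ethdev
callback API. A minimal, self-contained sketch using standard DPDK calls
(not part of this patch):

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Print the new link state whenever the port reports an LSC event. */
	static int
	lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
		     void *cb_arg, void *ret_param)
	{
		struct rte_eth_link link;

		RTE_SET_USED(event);
		RTE_SET_USED(cb_arg);
		RTE_SET_USED(ret_param);

		/* non-blocking read of the state cached by link_update() */
		rte_eth_link_get_nowait(port_id, &link);
		printf("port %u: link %s, %u Mbps\n", port_id,
		       link.link_status ? "up" : "down", link.link_speed);
		return 0;
	}

	/* Registration, e.g. right after rte_eth_dev_configure(): */
	static void
	register_lsc_cb(uint16_t port_id)
	{
		rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
					      lsc_event_cb, NULL);
	}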
From patchwork Tue Jun 1 01:40:33 2021
X-Patchwork-Submitter: "Xu, Ting" <ting.xu@intel.com>
X-Patchwork-Id: 93672
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu <ting.xu@intel.com>
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com,
 qiming.yang@intel.com
Date: Tue, 1 Jun 2021 09:40:33 +0800
Message-Id: <20210601014034.36100-5-ting.xu@intel.com>
In-Reply-To: <20210601014034.36100-1-ting.xu@intel.com>
References: <20210601014034.36100-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 4/5] net/ice: support QoS config VF bandwidth in DCF

This patch supports the ETS-based QoS configuration. It enables the
DCF to configure bandwidth limits for each VF VSI of different TCs. A
hierarchical scheduler tree is built with port, TC and VSI nodes.

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        |   6 +-
 drivers/net/ice/ice_dcf.h        |  47 +++
 drivers/net/ice/ice_dcf_ethdev.c |  13 +
 drivers/net/ice/ice_dcf_ethdev.h |   3 +
 drivers/net/ice/ice_dcf_parent.c |  30 ++
 drivers/net/ice/ice_dcf_sched.c  | 604 +++++++++++++++++++++++++++++++
 drivers/net/ice/meson.build      |   3 +-
 7 files changed, 704 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ice/ice_dcf_sched.c

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index d72a6f357e..f8b4e07d86 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -235,7 +235,8 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
 	caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
 	       VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
-	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
+	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
+	       VIRTCHNL_VF_OFFLOAD_TC;
 
 	err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
 					  (uint8_t *)&caps, sizeof(caps));
@@ -668,6 +669,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 		}
 	}
 
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_TC)
+		ice_dcf_tm_conf_init(eth_dev);
+
 	hw->eth_dev = eth_dev;
 	rte_intr_callback_register(&pci_dev->intr_handle,
 				   ice_dcf_dev_interrupt_handler, hw);
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 587093b909..e74e5d7e81 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -6,6 +6,7 @@
 #define _ICE_DCF_H_
 
 #include
+#include <rte_tm.h>
 
 #include
 #include
@@ -30,6 +31,49 @@ struct dcf_virtchnl_cmd {
 	volatile int pending;
 };
 
+struct ice_dcf_tm_shaper_profile {
+	TAILQ_ENTRY(ice_dcf_tm_shaper_profile) node;
+	uint32_t shaper_profile_id;
+	uint32_t reference_count;
+	struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(ice_dcf_shaper_profile_list, ice_dcf_tm_shaper_profile);
+
+/* Struct to store Traffic Manager node configuration. */
+struct ice_dcf_tm_node {
+	TAILQ_ENTRY(ice_dcf_tm_node) node;
+	uint32_t id;
+	uint32_t tc;
+	uint32_t priority;
+	uint32_t weight;
+	uint32_t reference_count;
+	struct ice_dcf_tm_node *parent;
+	struct ice_dcf_tm_shaper_profile *shaper_profile;
+	struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(ice_dcf_tm_node_list, ice_dcf_tm_node);
+
+/* node type of Traffic Manager */
+enum ice_dcf_tm_node_type {
+	ICE_DCF_TM_NODE_TYPE_PORT,
+	ICE_DCF_TM_NODE_TYPE_TC,
+	ICE_DCF_TM_NODE_TYPE_VSI,
+	ICE_DCF_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct ice_dcf_tm_conf {
+	struct ice_dcf_shaper_profile_list shaper_profile_list;
+	struct ice_dcf_tm_node *root; /* root node - port */
+	struct ice_dcf_tm_node_list tc_list; /* node list for all the TCs */
+	struct ice_dcf_tm_node_list vsi_list; /* node list for all the queues */
+	uint32_t nb_tc_node;
+	uint32_t nb_vsi_node;
+	bool committed;
+};
+
 struct ice_dcf_hw {
 	struct iavf_hw avf;
 
@@ -45,6 +89,8 @@ struct ice_dcf_hw {
 	uint16_t *vf_vsi_map;
 	uint16_t pf_vsi_id;
 
+	struct ice_dcf_tm_conf tm_conf;
+	struct ice_aqc_port_ets_elem *ets_config;
 	struct virtchnl_version_info virtchnl_version;
 	struct virtchnl_vf_resource *vf_res; /* VF resource */
 	struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
@@ -83,5 +129,6 @@ int ice_dcf_query_stats(struct ice_dcf_hw *hw,
 int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
 int ice_dcf_link_update(struct rte_eth_dev *dev,
 			__rte_unused int wait_to_complete);
+void ice_dcf_tm_conf_init(struct rte_eth_dev *dev);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 332ce340cf..91c4486260 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -993,6 +993,18 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static int
+ice_dcf_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+		   void *arg)
+{
+	if (!arg)
+		return -EINVAL;
+
+	*(const void **)arg = &ice_dcf_tm_ops;
+
+	return 0;
+}
+
 static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.dev_start = ice_dcf_dev_start,
 	.dev_stop = ice_dcf_dev_stop,
@@ -1017,6 +1029,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.flow_ops_get = ice_dcf_dev_flow_ops_get,
 	.udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add,
 	.udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del,
+	.tm_ops_get = ice_dcf_tm_ops_get,
 };
 
 static int
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index e7c9d7fe41..8510e37119 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -7,6 +7,8 @@
 
 #include "base/ice_common.h"
 #include "base/ice_adminq_cmd.h"
+#include "base/ice_dcb.h"
+#include "base/ice_sched.h"
 
 #include "ice_ethdev.h"
 #include "ice_dcf.h"
@@ -52,6 +54,7 @@ struct ice_dcf_vf_repr {
 	struct ice_dcf_vlan outer_vlan_info; /* DCF always handles outer VLAN */
 };
 
+extern const struct rte_tm_ops ice_dcf_tm_ops;
 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
 				 uint8_t *msg, uint16_t msglen);
 int ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 0c0706316d..2403d9c259 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -264,6 +264,29 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
 	}
 }
 
+static int
+ice_dcf_query_port_ets(struct ice_hw *parent_hw, struct ice_dcf_hw *real_hw)
+{
+	int ret;
+
+	real_hw->ets_config = (struct ice_aqc_port_ets_elem *)
+			ice_malloc(real_hw, sizeof(*real_hw->ets_config));
+	if (!real_hw->ets_config)
+		return ICE_ERR_NO_MEMORY;
+
+	ret = ice_aq_query_port_ets(parent_hw->port_info,
+			real_hw->ets_config, sizeof(*real_hw->ets_config),
+			NULL);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "DCF Query Port ETS failed");
+		rte_free(real_hw->ets_config);
+		real_hw->ets_config = NULL;
+		return ret;
+	}
+
+	return ICE_SUCCESS;
+}
+
 static int
 ice_dcf_init_parent_hw(struct ice_hw *hw)
 {
@@ -487,6 +510,13 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 		return err;
 	}
 
+	err = ice_dcf_query_port_ets(parent_hw, hw);
+	if (err) {
+		PMD_INIT_LOG(ERR, "failed to query port ets with error %d",
+			     err);
+		goto uninit_hw;
+	}
+
 	err = ice_dcf_load_pkg(parent_hw);
 	if (err) {
 		PMD_INIT_LOG(ERR, "failed to load package with error %d",
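The new ice_dcf_sched.c below implements these callbacks behind the
generic rte_tm API, so an application on the DCF port drives them with
the standard calls. A hedged sketch of the intended call sequence
follows; the rte_tm functions are the standard DPDK API, while the node
IDs and the shaper rate are illustrative, chosen to satisfy the driver's
checks below (non-leaf IDs at or above 8 * num_vfs, VSI IDs below
num_tc * num_vfs), and whether the PF accepts the result depends on the
port's ETS setup:

	#include <stdint.h>
	#include <string.h>
	#include <rte_tm.h>

	/* Build a minimal port -> TC -> VSI hierarchy and commit it. */
	static int
	dcf_tm_setup(uint16_t port_id, uint32_t num_vfs)
	{
		struct rte_tm_error err;
		struct rte_tm_shaper_params sp;
		struct rte_tm_node_params np;
		uint32_t port_node = 8 * num_vfs + 100; /* above VSI id range */
		uint32_t tc0_node = 8 * num_vfs + 101;
		int ret;

		memset(&err, 0, sizeof(err));
		memset(&sp, 0, sizeof(sp));
		sp.peak.rate = 625000000; /* bytes/s == 5 Gbps, illustrative */
		ret = rte_tm_shaper_profile_add(port_id, 1, &sp, &err);
		if (ret)
			return ret;

		/* non-leaf nodes: one SP priority, no shaper profile */
		memset(&np, 0, sizeof(np));
		np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
		np.nonleaf.n_sp_priorities = 1;
		ret = rte_tm_node_add(port_id, port_node, RTE_TM_NODE_ID_NULL,
				      0, 1, 0 /* port level */, &np, &err);
		if (ret == 0)
			ret = rte_tm_node_add(port_id, tc0_node, port_node,
					      0, 1, 1 /* TC level */, &np, &err);
		if (ret)
			return ret;

		/* leaf (VSI) node for VF 0 on TC 0, shaped by profile 1 */
		memset(&np, 0, sizeof(np));
		np.shaper_profile_id = 1;
		np.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE;
		ret = rte_tm_node_add(port_id, 0, tc0_node, 0, 1,
				      2 /* VSI level */, &np, &err);
		if (ret)
			return ret;

		return rte_tm_hierarchy_commit(port_id, 1, &err);
	}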
diff --git a/drivers/net/ice/ice_dcf_sched.c b/drivers/net/ice/ice_dcf_sched.c
new file mode 100644
index 0000000000..06d835ca24
--- /dev/null
+++ b/drivers/net/ice/ice_dcf_sched.c
@@ -0,0 +1,604 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+#include <rte_tm_driver.h>
+
+#include "base/ice_sched.h"
+#include "ice_dcf_ethdev.h"
+
+static int ice_dcf_hierarchy_commit(struct rte_eth_dev *dev,
+				    __rte_unused int clear_on_fail,
+				    __rte_unused struct rte_tm_error *error);
+static int ice_dcf_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+			    uint32_t parent_node_id, uint32_t priority,
+			    uint32_t weight, uint32_t level_id,
+			    struct rte_tm_node_params *params,
+			    struct rte_tm_error *error);
+static int ice_dcf_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+			       struct rte_tm_error *error);
+static int ice_dcf_shaper_profile_add(struct rte_eth_dev *dev,
+				      uint32_t shaper_profile_id,
+				      struct rte_tm_shaper_params *profile,
+				      struct rte_tm_error *error);
+static int ice_dcf_shaper_profile_del(struct rte_eth_dev *dev,
+				      uint32_t shaper_profile_id,
+				      struct rte_tm_error *error);
+
+const struct rte_tm_ops ice_dcf_tm_ops = {
+	.shaper_profile_add = ice_dcf_shaper_profile_add,
+	.shaper_profile_delete = ice_dcf_shaper_profile_del,
+	.hierarchy_commit = ice_dcf_hierarchy_commit,
+	.node_add = ice_dcf_node_add,
+	.node_delete = ice_dcf_node_delete,
+};
+
+void
+ice_dcf_tm_conf_init(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+
+	/* initialize shaper profile list */
+	TAILQ_INIT(&hw->tm_conf.shaper_profile_list);
+
+	/* initialize node configuration */
+	hw->tm_conf.root = NULL;
+	TAILQ_INIT(&hw->tm_conf.tc_list);
+	TAILQ_INIT(&hw->tm_conf.vsi_list);
+	hw->tm_conf.nb_tc_node = 0;
+	hw->tm_conf.nb_vsi_node = 0;
+	hw->tm_conf.committed = false;
+}
+
+static inline struct ice_dcf_tm_node *
+dcf_tm_node_search(struct rte_eth_dev *dev,
+		   uint32_t node_id, enum ice_dcf_tm_node_type *node_type)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	struct ice_dcf_tm_node_list *vsi_list = &hw->tm_conf.vsi_list;
+	struct ice_dcf_tm_node_list *tc_list = &hw->tm_conf.tc_list;
+	struct ice_dcf_tm_node *tm_node;
+
+	if (hw->tm_conf.root && hw->tm_conf.root->id == node_id) {
+		*node_type = ICE_DCF_TM_NODE_TYPE_PORT;
+		return hw->tm_conf.root;
+	}
+
+	TAILQ_FOREACH(tm_node, tc_list, node) {
+		if (tm_node->id == node_id) {
+			*node_type = ICE_DCF_TM_NODE_TYPE_TC;
+			return tm_node;
+		}
+	}
+
+	TAILQ_FOREACH(tm_node, vsi_list, node) {
+		if (tm_node->id == node_id) {
+			*node_type = ICE_DCF_TM_NODE_TYPE_VSI;
+			return tm_node;
+		}
+	}
+
+	return NULL;
+}
+
+static inline struct ice_dcf_tm_shaper_profile *
+dcf_shaper_profile_search(struct rte_eth_dev *dev,
+			  uint32_t shaper_profile_id)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	struct ice_dcf_shaper_profile_list *shaper_profile_list =
+			&hw->tm_conf.shaper_profile_list;
+	struct ice_dcf_tm_shaper_profile *shaper_profile;
+
+	TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+		if (shaper_profile_id == shaper_profile->shaper_profile_id)
+			return shaper_profile;
+	}
+
+	return NULL;
+}
+
+static int
+dcf_node_param_check(struct ice_dcf_hw *hw, uint32_t node_id,
+		     uint32_t priority, uint32_t weight,
+		     struct rte_tm_node_params *params,
+		     struct rte_tm_error *error)
+{
+	/* check all the unsupported parameters */
+	if (node_id == RTE_TM_NODE_ID_NULL) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node id";
+		return -EINVAL;
+	}
+
+	if (priority) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
+		error->message = "priority should be 0";
+		return -EINVAL;
+	}
+
+	if (weight != 1) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
+		error->message = "weight must be 1";
+		return -EINVAL;
+	}
+
+	/* shared shapers not supported */
+	if (params->shared_shaper_id) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
+		error->message = "shared shaper not supported";
+		return -EINVAL;
+	}
+	if (params->n_shared_shapers) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS;
+		error->message = "shared shaper not supported";
+		return -EINVAL;
+	}
+
+	/* for non-leaf node */
+	if (node_id >= 8 * hw->num_vfs) {
+		if (params->nonleaf.wfq_weight_mode) {
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+			error->message = "WFQ not supported";
+			return -EINVAL;
+		}
+		if (params->nonleaf.n_sp_priorities != 1) {
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
+			error->message = "SP priority not supported";
+			return -EINVAL;
+		} else if (params->nonleaf.wfq_weight_mode &&
+			   !(*params->nonleaf.wfq_weight_mode)) {
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
+			error->message = "WFQ should be byte mode";
+			return -EINVAL;
+		}
+
+		return 0;
+	}
+
+	/* for leaf node */
+	if (params->leaf.cman) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN;
+		error->message = "Congestion management not supported";
+		return -EINVAL;
+	}
+	if (params->leaf.wred.wred_profile_id !=
+	    RTE_TM_WRED_PROFILE_ID_NONE) {
+		error->type =
+			RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID;
+		error->message = "WRED not supported";
+		return -EINVAL;
+	}
+	if (params->leaf.wred.shared_wred_context_id) {
+		error->type =
+			RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID;
+		error->message = "WRED not supported";
+		return -EINVAL;
+	}
+	if (params->leaf.wred.n_shared_wred_contexts) {
+		error->type =
+			RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS;
+		error->message = "WRED not supported";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+ice_dcf_node_add(struct rte_eth_dev *dev, uint32_t node_id,
+		 uint32_t parent_node_id, uint32_t priority,
+		 uint32_t weight, uint32_t level_id,
+		 struct rte_tm_node_params *params,
+		 struct rte_tm_error *error)
+{
+	enum ice_dcf_tm_node_type parent_node_type = ICE_DCF_TM_NODE_TYPE_MAX;
+	enum ice_dcf_tm_node_type node_type = ICE_DCF_TM_NODE_TYPE_MAX;
+	struct ice_dcf_tm_shaper_profile *shaper_profile = NULL;
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	struct ice_dcf_tm_node *parent_node;
+	struct ice_dcf_tm_node *tm_node;
+	uint16_t tc_nb = 1;
+	int i, ret;
+
+	if (!params || !error)
+		return -EINVAL;
+
+	/* if already committed */
+	if (hw->tm_conf.committed) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "already committed";
+		return -EINVAL;
+	}
+
+	ret = dcf_node_param_check(hw, node_id, priority, weight,
+				   params, error);
+	if (ret)
+		return ret;
+
+	for (i = 1; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+		if (hw->ets_config->tc_valid_bits & (1 << i))
+			tc_nb++;
+	}
+
+	/* check whether the node already exists */
+	if (dcf_tm_node_search(dev, node_id, &node_type)) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "node id already used";
+		return -EINVAL;
+	}
+
+	/* check the shaper profile id */
+	if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+		shaper_profile = dcf_shaper_profile_search(dev,
+			params->shaper_profile_id);
+		if (!shaper_profile) {
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+			error->message = "shaper profile does not exist";
+			return -EINVAL;
+		}
+	}
+
+	/* add the root node if there is no parent */
+	if (parent_node_id == RTE_TM_NODE_ID_NULL) {
+		/* check level */
+		if (level_id != ICE_DCF_TM_NODE_TYPE_PORT) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+			error->message = "Wrong level";
+			return -EINVAL;
+		}
+
+		/* obviously no more than one root */
+		if (hw->tm_conf.root) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+			error->message = "already have a root";
+			return -EINVAL;
+		}
+
+		/* add the root node */
+		tm_node = rte_zmalloc("ice_dcf_tm_node",
+				      sizeof(struct ice_dcf_tm_node),
+				      0);
+		if (!tm_node)
+			return -ENOMEM;
+		tm_node->id = node_id;
+		tm_node->parent = NULL;
+		tm_node->reference_count = 0;
+		rte_memcpy(&tm_node->params, params,
+			   sizeof(struct rte_tm_node_params));
+		hw->tm_conf.root = tm_node;
+
+		return 0;
+	}
+
+	/* TC or VSI node */
+	/* check the parent node */
+	parent_node = dcf_tm_node_search(dev, parent_node_id,
+					 &parent_node_type);
+	if (!parent_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+		error->message = "parent does not exist";
+		return -EINVAL;
+	}
+	if (parent_node_type != ICE_DCF_TM_NODE_TYPE_PORT &&
+	    parent_node_type != ICE_DCF_TM_NODE_TYPE_TC) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
+		error->message = "parent is not port or TC";
+		return -EINVAL;
+	}
+	/* check level */
+	if (level_id != RTE_TM_NODE_LEVEL_ID_ANY &&
+	    level_id != (uint32_t)parent_node_type + 1) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS;
+		error->message = "Wrong level";
+		return -EINVAL;
+	}
+
+	/* check the node number */
+	if (parent_node_type == ICE_DCF_TM_NODE_TYPE_PORT) {
+		/* check the TC number */
+		if (hw->tm_conf.nb_tc_node >= tc_nb) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+			error->message = "too many TCs";
+			return -EINVAL;
+		}
+	} else {
+		/* check the VSI node number */
+		if (parent_node->reference_count >= hw->num_vfs) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+			error->message = "too many VSI for one TC";
+			return -EINVAL;
+		}
+		/* check the VSI node id */
+		if (node_id > tc_nb * hw->num_vfs) {
+			error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+			error->message = "too large VSI id";
+			return -EINVAL;
+		}
+	}
+
+	/* add the TC or VSI node */
+	tm_node = rte_zmalloc("ice_dcf_tm_node",
+			      sizeof(struct ice_dcf_tm_node),
+			      0);
+	if (!tm_node)
+		return -ENOMEM;
+	tm_node->id = node_id;
+	tm_node->priority = priority;
+	tm_node->weight = weight;
+	tm_node->shaper_profile = shaper_profile;
+	tm_node->reference_count = 0;
+	tm_node->parent = parent_node;
+	rte_memcpy(&tm_node->params, params,
+		   sizeof(struct rte_tm_node_params));
+	if (parent_node_type == ICE_DCF_TM_NODE_TYPE_PORT) {
+		TAILQ_INSERT_TAIL(&hw->tm_conf.tc_list,
+				  tm_node, node);
+		tm_node->tc = hw->tm_conf.nb_tc_node;
+		hw->tm_conf.nb_tc_node++;
+	} else {
+		TAILQ_INSERT_TAIL(&hw->tm_conf.vsi_list,
+				  tm_node, node);
+		tm_node->tc = parent_node->tc;
+		hw->tm_conf.nb_vsi_node++;
+	}
+	tm_node->parent->reference_count++;
+
+	/* increase the reference counter of the shaper profile */
+	if (shaper_profile)
+		shaper_profile->reference_count++;
+
+	return 0;
+}
+
+static int
+ice_dcf_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
+		    struct rte_tm_error *error)
+{
+	enum ice_dcf_tm_node_type node_type = ICE_DCF_TM_NODE_TYPE_MAX;
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	struct ice_dcf_tm_node *tm_node;
+
+	if (!error)
+		return -EINVAL;
+
+	/* if already committed */
+	if (hw->tm_conf.committed) {
+		error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
+		error->message = "already committed";
+		return -EINVAL;
+	}
+
+	if (node_id == RTE_TM_NODE_ID_NULL) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "invalid node id";
+		return -EINVAL;
+	}
+
+	/* check if the node id exists */
+	tm_node = dcf_tm_node_search(dev, node_id, &node_type);
+	if (!tm_node) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message = "no such node";
+		return -EINVAL;
+	}
+
+	/* the node should have no child */
+	if (tm_node->reference_count) {
+		error->type = RTE_TM_ERROR_TYPE_NODE_ID;
+		error->message =
+			"cannot delete a node which has children";
+		return -EINVAL;
+	}
+
+	/* root node */
+	if (node_type == ICE_DCF_TM_NODE_TYPE_PORT) {
+		if (tm_node->shaper_profile)
+			tm_node->shaper_profile->reference_count--;
+		rte_free(tm_node);
+		hw->tm_conf.root = NULL;
+		return 0;
+	}
+
+	/* TC or VSI node */
+	if (tm_node->shaper_profile)
+		tm_node->shaper_profile->reference_count--;
+	tm_node->parent->reference_count--;
+	if (node_type == ICE_DCF_TM_NODE_TYPE_TC) {
+		TAILQ_REMOVE(&hw->tm_conf.tc_list, tm_node, node);
+		hw->tm_conf.nb_tc_node--;
+	} else {
+		TAILQ_REMOVE(&hw->tm_conf.vsi_list, tm_node, node);
+		hw->tm_conf.nb_vsi_node--;
+	}
+	rte_free(tm_node);
+
+	return 0;
+}
+
+static int
+dcf_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+			       struct rte_tm_error *error)
+{
+	/* min bucket size not supported */
+	if (profile->committed.size) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+		error->message = "committed bucket size not supported";
+		return -EINVAL;
+	}
+	/* max bucket size not supported */
+	if (profile->peak.size) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+		error->message = "peak bucket size not supported";
+		return -EINVAL;
+	}
+	/* length adjustment not supported */
+	if (profile->pkt_length_adjust) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+		error->message = "packet length adjustment not supported";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+ice_dcf_shaper_profile_add(struct rte_eth_dev *dev,
+			   uint32_t shaper_profile_id,
+			   struct rte_tm_shaper_params *profile,
+			   struct rte_tm_error *error)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	struct ice_dcf_tm_shaper_profile *shaper_profile;
+	int ret;
+
+	if (!profile || !error)
+		return -EINVAL;
+
+	ret = dcf_shaper_profile_param_check(profile, error);
+	if (ret)
+		return ret;
+
+	shaper_profile = dcf_shaper_profile_search(dev, shaper_profile_id);
+
+	if (shaper_profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "profile ID exists";
+		return -EINVAL;
+	}
+
+	shaper_profile = rte_zmalloc("ice_dcf_tm_shaper_profile",
+				     sizeof(struct ice_dcf_tm_shaper_profile),
+				     0);
+	if (!shaper_profile)
+		return -ENOMEM;
+	shaper_profile->shaper_profile_id = shaper_profile_id;
+	rte_memcpy(&shaper_profile->profile, profile,
+		   sizeof(struct rte_tm_shaper_params));
+	TAILQ_INSERT_TAIL(&hw->tm_conf.shaper_profile_list,
+			  shaper_profile, node);
+
+	return 0;
+}
+
+static int
+ice_dcf_shaper_profile_del(struct rte_eth_dev *dev,
+			   uint32_t shaper_profile_id,
+			   struct rte_tm_error *error)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	struct ice_dcf_tm_shaper_profile *shaper_profile;
+
+	if (!error)
+		return -EINVAL;
+
+	shaper_profile = dcf_shaper_profile_search(dev, shaper_profile_id);
+
+	if (!shaper_profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "profile ID does not exist";
+		return -EINVAL;
+	}
+
+	/* don't delete a profile if it's used by one or several nodes */
+	if (shaper_profile->reference_count) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+		error->message = "profile in use";
+		return -EINVAL;
+	}
+
+	TAILQ_REMOVE(&hw->tm_conf.shaper_profile_list, shaper_profile, node);
+	rte_free(shaper_profile);
+
+	return 0;
+}
+
+static int
+ice_dcf_set_vf_bw(struct ice_dcf_hw *hw,
+		  struct virtchnl_dcf_vf_bw_cfg_list *vf_bw,
+		  uint16_t len)
+{
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_DCF_CONFIG_VF_TC;
+	args.req_msg = (uint8_t *)vf_bw;
+	args.req_msglen = len;
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    "VIRTCHNL_OP_DCF_CONFIG_VF_TC");
+	return err;
+}
+
+static int ice_dcf_hierarchy_commit(struct rte_eth_dev *dev,
+				    __rte_unused int clear_on_fail,
+				    __rte_unused struct rte_tm_error *error)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	struct virtchnl_dcf_vf_bw_cfg_list *vf_bw;
+	struct ice_dcf_tm_node_list *vsi_list = &hw->tm_conf.vsi_list;
+	struct rte_tm_shaper_params *profile;
+	struct ice_dcf_tm_node *tm_node;
+	uint32_t port_bw, cir_total;
+	uint16_t size, vf_id;
+	int ret;
+	int num_elem = 0;
+
+	size = sizeof(*vf_bw) +
+	       sizeof(vf_bw->cfg[0]) * (hw->tm_conf.nb_tc_node - 1);
+	vf_bw = rte_zmalloc("vf_bw", size, 0);
+	if (!vf_bw)
+		return ICE_ERR_NO_MEMORY;
+
+	/* port bandwidth (Kbps) */
+	port_bw = hw->link_speed * 1000;
+	cir_total = 0;
+
+	for (vf_id = 0; vf_id < hw->num_vfs; vf_id++) {
+		num_elem = 0;
+		vf_bw->vf_id = vf_id;
+		TAILQ_FOREACH(tm_node, vsi_list, node) {
+			/* scan the nodes belonging to one VSI */
+			if (tm_node->id - hw->num_vfs * tm_node->tc != vf_id)
+				continue;
+			vf_bw->cfg[num_elem].tc_id = tm_node->tc;
+			vf_bw->cfg[num_elem].type = VIRTCHNL_BW_SHAPER;
+			if (tm_node->shaper_profile) {
+				/* Convert from bytes per second to Kbps */
+				profile = &tm_node->shaper_profile->profile;
+				vf_bw->cfg[num_elem].shaper.peak =
+					profile->peak.rate / 1000 * BITS_PER_BYTE;
+				vf_bw->cfg[num_elem].shaper.committed =
+					profile->committed.rate / 1000 * BITS_PER_BYTE;
+			}
+			cir_total += vf_bw->cfg[num_elem].shaper.committed;
+			num_elem++;
+		}
+
+		/* check if total CIR is larger than port bandwidth */
+		if (cir_total > port_bw) {
+			PMD_DRV_LOG(ERR, "Total CIR of all VFs is larger than port bandwidth");
+			rte_free(vf_bw);
+			return ICE_ERR_PARAM;
+		}
+		vf_bw->num_elem = num_elem;
+		ret = ice_dcf_set_vf_bw(hw, vf_bw, size);
+		if (ret) {
+			rte_free(vf_bw);
+			return ret;
+		}
+	}
+
+	/* the request buffer is no longer needed on any exit path */
+	rte_free(vf_bw);
+	hw->tm_conf.committed = true;
+	return ICE_SUCCESS;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 65750d3501..0b86d74a49 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -70,6 +70,7 @@ endif
 sources += files('ice_dcf.c',
 	'ice_dcf_vf_representor.c',
 	'ice_dcf_ethdev.c',
-	'ice_dcf_parent.c')
+	'ice_dcf_parent.c',
+	'ice_dcf_sched.c')
 
 headers = files('rte_pmd_ice.h')
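The commit step above converts between three units: the ethdev link
speed (Mbps), the virtchnl shaper rates (Kbps), and the rte_tm profile
rates (bytes per second). A small standalone sketch of the same
arithmetic, with illustrative constants:

	#include <stdint.h>
	#include <stdio.h>

	#define BITS_PER_BYTE 8

	int main(void)
	{
		uint32_t link_speed_mbps = 25000;        /* e.g. a 25G port */
		uint32_t port_bw_kbps = link_speed_mbps * 1000;

		/* rte_tm shaper rates are bytes/s; virtchnl wants Kbps, so
		 * the driver computes rate / 1000 * BITS_PER_BYTE, as above.
		 */
		uint64_t peak_bytes_per_s = 625000000;   /* a 5 Gbps profile */
		uint32_t peak_kbps = peak_bytes_per_s / 1000 * BITS_PER_BYTE;

		printf("port %u Kbps, shaper peak %u Kbps, fits: %s\n",
		       port_bw_kbps, peak_kbps,
		       peak_kbps <= port_bw_kbps ? "yes" : "no");
		return 0;
	}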
From patchwork Tue Jun 1 01:40:34 2021
X-Patchwork-Submitter: "Xu, Ting" <ting.xu@intel.com>
X-Patchwork-Id: 93673
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Ting Xu <ting.xu@intel.com>
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com,
 qiming.yang@intel.com
Date: Tue, 1 Jun 2021 09:40:34 +0800
Message-Id: <20210601014034.36100-6-ting.xu@intel.com>
In-Reply-To: <20210601014034.36100-1-ting.xu@intel.com>
References: <20210601014034.36100-1-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 5/5] net/iavf: query QoS cap and set queue TC mapping

This patch adds support for the VF to configure ETS-based Tx QoS,
including querying the current QoS configuration from the PF and
configuring the queue-to-TC mapping. PF QoS is configured in advance,
and the queried info is provided to the user for later use. VF queues
are mapped to different TCs in the PF through virtchnl.

Signed-off-by: Qiming Yang <qiming.yang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/iavf/iavf.h        |  45 +++
 drivers/net/iavf/iavf_ethdev.c |  31 ++
 drivers/net/iavf/iavf_tm.c     | 675 +++++++++++++++++++++++++++++++++
 drivers/net/iavf/iavf_vchnl.c  |  56 ++-
 drivers/net/iavf/meson.build   |   1 +
 5 files changed, 807 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/iavf/iavf_tm.c
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 4f5811ae87..77ddf15f42 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -6,6 +6,8 @@
 #define _IAVF_ETHDEV_H_
 
 #include
+#include <rte_tm.h>
+
 #include
 #include
 #include
@@ -82,6 +84,8 @@
 #define IAVF_RX_DESC_EXT_STATUS_FLEXBH_MASK	0x03
 #define IAVF_RX_DESC_EXT_STATUS_FLEXBH_FD_ID	0x01
 
+#define IAVF_BITS_PER_BYTE	8
+
 struct iavf_adapter;
 struct iavf_rx_queue;
 struct iavf_tx_queue;
@@ -129,6 +133,38 @@ enum iavf_aq_result {
 	IAVF_MSG_CMD,      /* Read async command result */
 };
 
+/* Struct to store Traffic Manager node configuration. */
+struct iavf_tm_node {
+	TAILQ_ENTRY(iavf_tm_node) node;
+	uint32_t id;
+	uint32_t tc;
+	uint32_t priority;
+	uint32_t weight;
+	uint32_t reference_count;
+	struct iavf_tm_node *parent;
+	struct rte_tm_node_params params;
+};
+
+TAILQ_HEAD(iavf_tm_node_list, iavf_tm_node);
+
+/* node type of Traffic Manager */
+enum iavf_tm_node_type {
+	IAVF_TM_NODE_TYPE_PORT,
+	IAVF_TM_NODE_TYPE_TC,
+	IAVF_TM_NODE_TYPE_QUEUE,
+	IAVF_TM_NODE_TYPE_MAX,
+};
+
+/* Struct to store all the Traffic Manager configuration. */
+struct iavf_tm_conf {
+	struct iavf_tm_node *root; /* root node - vf vsi */
+	struct iavf_tm_node_list tc_list; /* node list for all the TCs */
+	struct iavf_tm_node_list queue_list; /* node list for all the queues */
+	uint32_t nb_tc_node;
+	uint32_t nb_queue_node;
+	bool committed;
+};
+
 /* Structure to store private data specific for VF instance. */
 struct iavf_info {
 	uint16_t num_queue_pairs;
@@ -175,6 +211,9 @@ struct iavf_info {
 	struct iavf_fdir_info fdir; /* flow director info */
 	/* indicate large VF support enabled or not */
 	bool lv_enabled;
+
+	struct virtchnl_qos_cap_list *qos_cap;
+	struct iavf_tm_conf tm_conf;
 };
 
 #define IAVF_MAX_PKT_TYPE 1024
@@ -344,4 +383,10 @@ int iavf_add_del_mc_addr_list(struct iavf_adapter *adapter,
 			      uint32_t mc_addrs_num, bool add);
 int iavf_request_queues(struct iavf_adapter *adapter, uint16_t num);
 int iavf_get_max_rss_queue_region(struct iavf_adapter *adapter);
+int iavf_get_qos_cap(struct iavf_adapter *adapter);
+int iavf_set_q_tc_map(struct rte_eth_dev *dev,
+		      struct virtchnl_queue_tc_mapping *q_tc_mapping,
+		      uint16_t size);
+void iavf_tm_conf_init(struct rte_eth_dev *dev);
+extern const struct rte_tm_ops iavf_tm_ops;
 #endif /* _IAVF_ETHDEV_H_ */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index cb38fe81e1..e0a03a0bee 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -122,6 +122,7 @@ static int iavf_dev_flow_ops_get(struct rte_eth_dev *dev,
 static int iavf_set_mc_addr_list(struct rte_eth_dev *dev,
 			struct rte_ether_addr *mc_addrs,
 			uint32_t mc_addrs_num);
+static int iavf_tm_ops_get(struct rte_eth_dev *dev __rte_unused, void *arg);
 
 static const struct rte_pci_id pci_id_iavf_map[] = {
 	{ RTE_PCI_DEVICE(IAVF_INTEL_VENDOR_ID, IAVF_DEV_ID_ADAPTIVE_VF) },
@@ -200,8 +201,21 @@ static const struct eth_dev_ops iavf_eth_dev_ops = {
 	.flow_ops_get               = iavf_dev_flow_ops_get,
 	.tx_done_cleanup	    = iavf_dev_tx_done_cleanup,
 	.get_monitor_addr           = iavf_get_monitor_addr,
+	.tm_ops_get                 = iavf_tm_ops_get,
 };
 
+static int
+iavf_tm_ops_get(struct rte_eth_dev *dev __rte_unused,
+		void *arg)
+{
+	if (!arg)
+		return -EINVAL;
+
+	*(const void **)arg = &iavf_tm_ops;
+
+	return 0;
+}
+
 static int
 iavf_set_mc_addr_list(struct rte_eth_dev *dev,
 			struct rte_ether_addr *mc_addrs,
@@ -806,6 +820,11 @@ iavf_dev_start(struct rte_eth_dev *dev)
 				      dev->data->nb_tx_queues);
 	num_queue_pairs = vf->num_queue_pairs;
 
+	if (iavf_get_qos_cap(adapter)) {
+		PMD_INIT_LOG(ERR, "Failed to get qos capability");
+		return -1;
+	}
+
 	if (iavf_init_queues(dev) != 0) {
 		PMD_DRV_LOG(ERR, "failed to do Queue init");
 		return -1;
@@ -2090,6 +2109,15 @@ iavf_init_vf(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(ERR, "unable to allocate vf_res memory");
 		goto err_api;
 	}
+
+	bufsz = sizeof(struct virtchnl_qos_cap_list) +
+		IAVF_MAX_TRAFFIC_CLASS * sizeof(struct virtchnl_qos_cap_elem);
+	vf->qos_cap = rte_zmalloc("qos_cap", bufsz, 0);
+	if (!vf->qos_cap) {
+		PMD_INIT_LOG(ERR, "unable to allocate qos_cap memory");
+		goto err_api;
+	}
+
 	if (iavf_get_vf_resource(adapter) != 0) {
 		PMD_INIT_LOG(ERR, "iavf_get_vf_config failed");
 		goto err_alloc;
@@ -2131,6 +2159,7 @@ iavf_init_vf(struct rte_eth_dev *dev)
 	rte_free(vf->rss_key);
 	rte_free(vf->rss_lut);
 err_alloc:
+	rte_free(vf->qos_cap);
 	rte_free(vf->vf_res);
 	vf->vsi_res = NULL;
 err_api:
@@ -2299,6 +2328,8 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
 
 	iavf_default_rss_disable(adapter);
 
+	iavf_tm_conf_init(eth_dev);
+
 	return 0;
 }
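The qos_cap buffer above is sized for the worst case of
IAVF_MAX_TRAFFIC_CLASS elements beyond the list header; the same
flexible-tail arithmetic applies to the VIRTCHNL_OP_CONFIG_TC_MAP
request built from the TC node list. A hedged sketch of sizing such a
mapping request (struct definitions come from patch 1; the helper
itself is hypothetical, and the caller would fill tc[i].req and send
it via iavf_set_q_tc_map()):

	#include <stdlib.h>

	static struct virtchnl_queue_tc_mapping *
	alloc_q_tc_mapping(u16 vsi_id, u16 num_tc, u16 num_queue_pairs)
	{
		struct virtchnl_queue_tc_mapping *map;
		size_t len;

		if (num_tc == 0)
			return NULL;
		/* tc[1] is built into the struct: add num_tc - 1 entries */
		len = sizeof(*map) + (num_tc - 1) * sizeof(map->tc[0]);
		map = calloc(1, len);
		if (!map)
			return NULL;
		map->vsi_id = vsi_id;
		map->num_tc = num_tc;
		map->num_queue_pairs = num_queue_pairs;
		return map;
	}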
RTE_TM_NODE_ID_NULL) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "invalid node id"; + return -EINVAL; + } + + if (priority) { + error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY; + error->message = "priority should be 0"; + return -EINVAL; + } + + if (weight != 1) { + error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT; + error->message = "weight must be 1"; + return -EINVAL; + } + + /* not support shaper profile */ + if (params->shaper_profile_id) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID; + error->message = "shaper profile not supported"; + return -EINVAL; + } + + /* not support shared shaper */ + if (params->shared_shaper_id) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID; + error->message = "shared shaper not supported"; + return -EINVAL; + } + if (params->n_shared_shapers) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS; + error->message = "shared shaper not supported"; + return -EINVAL; + } + + /* for non-leaf node */ + if (node_id >= vf->num_queue_pairs) { + if (params->nonleaf.wfq_weight_mode) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE; + error->message = "WFQ not supported"; + return -EINVAL; + } + if (params->nonleaf.n_sp_priorities != 1) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES; + error->message = "SP priority not supported"; + return -EINVAL; + } else if (params->nonleaf.wfq_weight_mode && + !(*params->nonleaf.wfq_weight_mode)) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE; + error->message = "WFP should be byte mode"; + return -EINVAL; + } + + return 0; + } + + /* for leaf node */ + if (params->leaf.cman) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN; + error->message = "Congestion management not supported"; + return -EINVAL; + } + if (params->leaf.wred.wred_profile_id != + RTE_TM_WRED_PROFILE_ID_NONE) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID; + error->message = "WRED not supported"; + return -EINVAL; + } + if (params->leaf.wred.shared_wred_context_id) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID; + error->message = "WRED not supported"; + return -EINVAL; + } + if (params->leaf.wred.n_shared_wred_contexts) { + error->type = + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS; + error->message = "WRED not supported"; + return -EINVAL; + } + + return 0; +} + +static int +iavf_node_type_get(struct rte_eth_dev *dev, uint32_t node_id, + int *is_leaf, struct rte_tm_error *error) +{ + enum iavf_tm_node_type node_type = IAVF_TM_NODE_TYPE_MAX; + struct iavf_tm_node *tm_node; + + if (!is_leaf || !error) + return -EINVAL; + + if (node_id == RTE_TM_NODE_ID_NULL) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "invalid node id"; + return -EINVAL; + } + + /* check if the node id exists */ + tm_node = iavf_tm_node_search(dev, node_id, &node_type); + if (!tm_node) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "no such node"; + return -EINVAL; + } + + if (node_type == IAVF_TM_NODE_TYPE_QUEUE) + *is_leaf = true; + else + *is_leaf = false; + + return 0; +} + +static int +iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, + uint32_t parent_node_id, uint32_t priority, + uint32_t weight, uint32_t level_id, + struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + enum iavf_tm_node_type node_type = IAVF_TM_NODE_TYPE_MAX; + enum iavf_tm_node_type parent_node_type = 
+static int +iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id, + uint32_t parent_node_id, uint32_t priority, + uint32_t weight, uint32_t level_id, + struct rte_tm_node_params *params, + struct rte_tm_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + enum iavf_tm_node_type node_type = IAVF_TM_NODE_TYPE_MAX; + enum iavf_tm_node_type parent_node_type = IAVF_TM_NODE_TYPE_MAX; + struct iavf_tm_node *tm_node; + struct iavf_tm_node *parent_node; + uint16_t tc_nb = vf->qos_cap->num_elem; + int ret; + + if (!params || !error) + return -EINVAL; + + /* if already committed */ + if (vf->tm_conf.committed) { + error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; + error->message = "already committed"; + return -EINVAL; + } + + ret = iavf_node_param_check(vf, node_id, priority, weight, + params, error); + if (ret) + return ret; + + /* check if the node id is already used */ + if (iavf_tm_node_search(dev, node_id, &node_type)) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "node id already used"; + return -EINVAL; + } + + /* root node if it does not have a parent */ + if (parent_node_id == RTE_TM_NODE_ID_NULL) { + /* check level */ + if (level_id != IAVF_TM_NODE_TYPE_PORT) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; + error->message = "Wrong level"; + return -EINVAL; + } + + /* obviously no more than one root */ + if (vf->tm_conf.root) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; + error->message = "already have a root"; + return -EINVAL; + } + + /* add the root node */ + tm_node = rte_zmalloc("iavf_tm_node", + sizeof(struct iavf_tm_node), + 0); + if (!tm_node) + return -ENOMEM; + tm_node->id = node_id; + tm_node->parent = NULL; + tm_node->reference_count = 0; + rte_memcpy(&tm_node->params, params, + sizeof(struct rte_tm_node_params)); + vf->tm_conf.root = tm_node; + return 0; + } + + /* TC or queue node */ + /* check the parent node */ + parent_node = iavf_tm_node_search(dev, parent_node_id, + &parent_node_type); + if (!parent_node) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; + error->message = "parent does not exist"; + return -EINVAL; + } + if (parent_node_type != IAVF_TM_NODE_TYPE_PORT && + parent_node_type != IAVF_TM_NODE_TYPE_TC) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID; + error->message = "parent is not root or TC"; + return -EINVAL; + } + /* check level */ + if (level_id != RTE_TM_NODE_LEVEL_ID_ANY && + level_id != parent_node_type + 1) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; + error->message = "Wrong level"; + return -EINVAL; + } + + /* check the node number */ + if (parent_node_type == IAVF_TM_NODE_TYPE_PORT) { + /* check the TC number */ + if (vf->tm_conf.nb_tc_node >= tc_nb) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "too many TCs"; + return -EINVAL; + } + } else { + /* check the queue number */ + if (parent_node->reference_count >= vf->num_queue_pairs) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "too many queues"; + return -EINVAL; + } + if (node_id >= vf->num_queue_pairs) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "queue id too large"; + return -EINVAL; + } + } + + /* add the TC or queue node */ + tm_node = rte_zmalloc("iavf_tm_node", + sizeof(struct iavf_tm_node), + 0); + if (!tm_node) + return -ENOMEM; + tm_node->id = node_id; + tm_node->reference_count = 0; + tm_node->parent = parent_node; + rte_memcpy(&tm_node->params, params, + sizeof(struct rte_tm_node_params)); + if (parent_node_type == IAVF_TM_NODE_TYPE_PORT) { + TAILQ_INSERT_TAIL(&vf->tm_conf.tc_list, + tm_node, node); + tm_node->tc = vf->tm_conf.nb_tc_node; + vf->tm_conf.nb_tc_node++; + } else { + TAILQ_INSERT_TAIL(&vf->tm_conf.queue_list, + tm_node, node); + tm_node->tc = parent_node->tc; + vf->tm_conf.nb_queue_node++; + } + tm_node->parent->reference_count++; + + return 0; +} +
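+/* A node can only be deleted before the hierarchy is committed and + * only once all of its children have been deleted. + */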
+static int +iavf_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id, + struct rte_tm_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + enum iavf_tm_node_type node_type = IAVF_TM_NODE_TYPE_MAX; + struct iavf_tm_node *tm_node; + + if (!error) + return -EINVAL; + + /* if already committed */ + if (vf->tm_conf.committed) { + error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED; + error->message = "already committed"; + return -EINVAL; + } + + if (node_id == RTE_TM_NODE_ID_NULL) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "invalid node id"; + return -EINVAL; + } + + /* check if the node id exists */ + tm_node = iavf_tm_node_search(dev, node_id, &node_type); + if (!tm_node) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "no such node"; + return -EINVAL; + } + + /* the node should have no children */ + if (tm_node->reference_count) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = + "cannot delete a node which has children"; + return -EINVAL; + } + + /* root node */ + if (node_type == IAVF_TM_NODE_TYPE_PORT) { + rte_free(tm_node); + vf->tm_conf.root = NULL; + return 0; + } + + /* TC or queue node */ + tm_node->parent->reference_count--; + if (node_type == IAVF_TM_NODE_TYPE_TC) { + TAILQ_REMOVE(&vf->tm_conf.tc_list, tm_node, node); + vf->tm_conf.nb_tc_node--; + } else { + TAILQ_REMOVE(&vf->tm_conf.queue_list, tm_node, node); + vf->tm_conf.nb_queue_node--; + } + rte_free(tm_node); + + return 0; +} +
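+/* Report the port-level scheduler maxima; shaper rates are derived + * from the current link speed and are byte-based. + */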
+static int +iavf_tm_capabilities_get(struct rte_eth_dev *dev, + struct rte_tm_capabilities *cap, + struct rte_tm_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + uint16_t tc_nb = vf->qos_cap->num_elem; + + if (!cap || !error) + return -EINVAL; + + if (tc_nb > vf->vf_res->num_queue_pairs) + return -EINVAL; + + error->type = RTE_TM_ERROR_TYPE_NONE; + + /* set all the parameters to 0 first. */ + memset(cap, 0, sizeof(struct rte_tm_capabilities)); + + /** + * support port + TCs + queues; + * this shows the max capability, not the current configuration. + */ + cap->n_nodes_max = 1 + IAVF_MAX_TRAFFIC_CLASS + + vf->num_queue_pairs; + cap->n_levels_max = 3; /* port, TC, queue */ + cap->non_leaf_nodes_identical = 1; + cap->leaf_nodes_identical = 1; + cap->shaper_n_max = cap->n_nodes_max; + cap->shaper_private_n_max = cap->n_nodes_max; + cap->shaper_private_dual_rate_n_max = 0; + cap->shaper_private_rate_min = 0; + /* GBps */ + cap->shaper_private_rate_max = + vf->link_speed * 1000 / IAVF_BITS_PER_BYTE; + cap->shaper_private_packet_mode_supported = 0; + cap->shaper_private_byte_mode_supported = 1; + cap->shaper_shared_n_max = 0; + cap->shaper_shared_n_nodes_per_shaper_max = 0; + cap->shaper_shared_n_shapers_per_node_max = 0; + cap->shaper_shared_dual_rate_n_max = 0; + cap->shaper_shared_rate_min = 0; + cap->shaper_shared_rate_max = 0; + cap->shaper_shared_packet_mode_supported = 0; + cap->shaper_shared_byte_mode_supported = 0; + cap->sched_n_children_max = vf->num_queue_pairs; + cap->sched_sp_n_priorities_max = 1; + cap->sched_wfq_n_children_per_group_max = 0; + cap->sched_wfq_n_groups_max = 0; + cap->sched_wfq_weight_max = 1; + cap->sched_wfq_packet_mode_supported = 0; + cap->sched_wfq_byte_mode_supported = 0; + cap->cman_head_drop_supported = 0; + cap->dynamic_update_mask = 0; + cap->shaper_pkt_length_adjust_min = RTE_TM_ETH_FRAMING_OVERHEAD; + cap->shaper_pkt_length_adjust_max = RTE_TM_ETH_FRAMING_OVERHEAD_FCS; + cap->cman_wred_context_n_max = 0; + cap->cman_wred_context_private_n_max = 0; + cap->cman_wred_context_shared_n_max = 0; + cap->cman_wred_context_shared_n_nodes_per_context_max = 0; + cap->cman_wred_context_shared_n_contexts_per_node_max = 0; + cap->stats_mask = 0; + + return 0; +} + +static int +iavf_level_capabilities_get(struct rte_eth_dev *dev, + uint32_t level_id, + struct rte_tm_level_capabilities *cap, + struct rte_tm_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + + if (!cap || !error) + return -EINVAL; + + if (level_id >= IAVF_TM_NODE_TYPE_MAX) { + error->type = RTE_TM_ERROR_TYPE_LEVEL_ID; + error->message = "too deep level"; + return -EINVAL; + } + + /* root node */ + if (level_id == IAVF_TM_NODE_TYPE_PORT) { + cap->n_nodes_max = 1; + cap->n_nodes_nonleaf_max = 1; + cap->n_nodes_leaf_max = 0; + } else if (level_id == IAVF_TM_NODE_TYPE_TC) { + /* TC */ + cap->n_nodes_max = IAVF_MAX_TRAFFIC_CLASS; + cap->n_nodes_nonleaf_max = IAVF_MAX_TRAFFIC_CLASS; + cap->n_nodes_leaf_max = 0; + } else { + /* queue */ + cap->n_nodes_max = vf->num_queue_pairs; + cap->n_nodes_nonleaf_max = 0; + cap->n_nodes_leaf_max = vf->num_queue_pairs; + } + + cap->non_leaf_nodes_identical = true; + cap->leaf_nodes_identical = true; + + if (level_id != IAVF_TM_NODE_TYPE_QUEUE) { + cap->nonleaf.shaper_private_supported = false; + cap->nonleaf.shaper_private_dual_rate_supported = false; + cap->nonleaf.shaper_private_rate_min = 0; + /* GBps */ + cap->nonleaf.shaper_private_rate_max = + vf->link_speed * 1000 / IAVF_BITS_PER_BYTE; + cap->nonleaf.shaper_private_packet_mode_supported = 0; + cap->nonleaf.shaper_private_byte_mode_supported = 1; + cap->nonleaf.shaper_shared_n_max = 0; + cap->nonleaf.shaper_shared_packet_mode_supported = 0; + cap->nonleaf.shaper_shared_byte_mode_supported = 0; + if (level_id == IAVF_TM_NODE_TYPE_PORT) + cap->nonleaf.sched_n_children_max = + IAVF_MAX_TRAFFIC_CLASS; + else + cap->nonleaf.sched_n_children_max = + vf->num_queue_pairs; + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = 0; + cap->nonleaf.sched_wfq_n_groups_max = 0; + 
cap->nonleaf.sched_wfq_weight_max = 1; + cap->nonleaf.sched_wfq_packet_mode_supported = 0; + cap->nonleaf.sched_wfq_byte_mode_supported = 0; + cap->nonleaf.stats_mask = 0; + + return 0; + } + + /* queue node */ + cap->leaf.shaper_private_supported = false; + cap->leaf.shaper_private_dual_rate_supported = false; + cap->leaf.shaper_private_rate_min = 0; + /* GBps */ + cap->leaf.shaper_private_rate_max = + vf->link_speed * 1000 / IAVF_BITS_PER_BYTE; + cap->leaf.shaper_private_packet_mode_supported = 0; + cap->leaf.shaper_private_byte_mode_supported = 1; + cap->leaf.shaper_shared_n_max = 0; + cap->leaf.shaper_shared_packet_mode_supported = 0; + cap->leaf.shaper_shared_byte_mode_supported = 0; + cap->leaf.cman_head_drop_supported = false; + cap->leaf.cman_wred_context_private_supported = true; + cap->leaf.cman_wred_context_shared_n_max = 0; + cap->leaf.stats_mask = 0; + + return 0; +} + +static int +iavf_node_capabilities_get(struct rte_eth_dev *dev, + uint32_t node_id, + struct rte_tm_node_capabilities *cap, + struct rte_tm_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + enum iavf_tm_node_type node_type; + struct virtchnl_qos_cap_elem tc_cap; + struct iavf_tm_node *tm_node; + + if (!cap || !error) + return -EINVAL; + + if (node_id == RTE_TM_NODE_ID_NULL) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "invalid node id"; + return -EINVAL; + } + + /* check if the node id exists */ + tm_node = iavf_tm_node_search(dev, node_id, &node_type); + if (!tm_node) { + error->type = RTE_TM_ERROR_TYPE_NODE_ID; + error->message = "no such node"; + return -EINVAL; + } + + if (node_type != IAVF_TM_NODE_TYPE_TC) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; + error->message = "capability get not supported"; + return -EINVAL; + } + + tc_cap = vf->qos_cap->cap[tm_node->tc]; + if (tc_cap.tc_id != tm_node->tc) { + error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS; + error->message = "TC does not match"; + return -EINVAL; + } + + cap->shaper_private_supported = true; + cap->shaper_private_dual_rate_supported = false; + cap->shaper_private_rate_min = tc_cap.shaper.committed; + cap->shaper_private_rate_max = tc_cap.shaper.peak; + cap->shaper_shared_n_max = 0; + cap->nonleaf.sched_n_children_max = vf->num_queue_pairs; + + if (tc_cap.arbiter == VIRTCHNL_ABITER_ETS) { + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = + vf->num_queue_pairs; + cap->nonleaf.sched_wfq_n_groups_max = 1; + cap->nonleaf.sched_wfq_weight_max = tc_cap.weight; + } + + if (tc_cap.arbiter == VIRTCHNL_ABITER_STRICT) { + cap->nonleaf.sched_sp_n_priorities_max = 1; + cap->nonleaf.sched_wfq_n_children_per_group_max = 0; + cap->nonleaf.sched_wfq_n_groups_max = 0; + cap->nonleaf.sched_wfq_weight_max = 1; + } + + cap->stats_mask = 0; + + return 0; +} + +static int iavf_hierarchy_commit(struct rte_eth_dev *dev, + __rte_unused int clear_on_fail, + __rte_unused struct rte_tm_error *error) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct virtchnl_queue_tc_mapping *q_tc_mapping; + struct iavf_tm_node_list *queue_list = &vf->tm_conf.queue_list; + struct iavf_tm_node *tm_node; + uint16_t size; + int index = 0, node_committed = 0; + int ret, i; + + size = sizeof(*q_tc_mapping) + sizeof(q_tc_mapping->tc[0]) * + (vf->qos_cap->num_elem - 1); + q_tc_mapping = rte_zmalloc("q_tc", size, 0); + if (!q_tc_mapping) + return IAVF_ERR_NO_MEMORY; + q_tc_mapping->vsi_id = vf->vsi.vsi_id; + q_tc_mapping->num_tc = vf->qos_cap->num_elem; + q_tc_mapping->num_queue_pairs = vf->num_queue_pairs; + 
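/* count the committed queue nodes on each TC; every TC is then + * mapped to a contiguous range of queue ids below. + */ + 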
TAILQ_FOREACH(tm_node, queue_list, node) { + q_tc_mapping->tc[tm_node->tc].req.queue_count++; + node_committed++; + } + + for (i = 0; i < IAVF_MAX_TRAFFIC_CLASS; i++) { + q_tc_mapping->tc[i].req.start_queue_id = index; + index += q_tc_mapping->tc[i].req.queue_count; + } + if (node_committed < vf->num_queue_pairs) { + PMD_DRV_LOG(ERR, "committed queue nodes are fewer than the allocated queue pairs"); + rte_free(q_tc_mapping); + return IAVF_ERR_PARAM; + } + + ret = iavf_set_q_tc_map(dev, q_tc_mapping, size); + rte_free(q_tc_mapping); + if (ret) + return ret; + + return IAVF_SUCCESS; +} diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c index 5d57e8b541..daa1b3755c 100644 --- a/drivers/net/iavf/iavf_vchnl.c +++ b/drivers/net/iavf/iavf_vchnl.c @@ -467,7 +467,8 @@ iavf_get_vf_resource(struct iavf_adapter *adapter) VIRTCHNL_VF_OFFLOAD_REQ_QUEUES | VIRTCHNL_VF_OFFLOAD_CRC | VIRTCHNL_VF_OFFLOAD_VLAN_V2 | - VIRTCHNL_VF_LARGE_NUM_QPAIRS; + VIRTCHNL_VF_LARGE_NUM_QPAIRS | + VIRTCHNL_VF_OFFLOAD_TC; args.in_args = (uint8_t *)&caps; args.in_args_size = sizeof(caps); @@ -1550,6 +1551,59 @@ iavf_set_hena(struct iavf_adapter *adapter, uint64_t hena) return err; } +int +iavf_get_qos_cap(struct iavf_adapter *adapter) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_cmd_info args; + uint32_t len; + int err; + + args.ops = VIRTCHNL_OP_GET_QOS_CAPS; + args.in_args = NULL; + args.in_args_size = 0; + args.out_buffer = vf->aq_resp; + args.out_size = IAVF_AQ_BUF_SZ; + err = iavf_execute_vf_cmd(adapter, &args); + + if (err) { + PMD_DRV_LOG(ERR, + "Failed to execute command of VIRTCHNL_OP_GET_QOS_CAPS"); + return -1; + } + + len = sizeof(struct virtchnl_qos_cap_list) + + IAVF_MAX_TRAFFIC_CLASS * sizeof(struct virtchnl_qos_cap_elem); + + rte_memcpy(vf->qos_cap, args.out_buffer, + RTE_MIN(args.out_size, len)); + + return 0; +} + +int iavf_set_q_tc_map(struct rte_eth_dev *dev, + struct virtchnl_queue_tc_mapping *q_tc_mapping, uint16_t size) +{ + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct iavf_cmd_info args; + int err; + + memset(&args, 0, sizeof(args)); + args.ops = VIRTCHNL_OP_CONFIG_TC_MAP; + args.in_args = (uint8_t *)q_tc_mapping; + args.in_args_size = size; + args.out_buffer = vf->aq_resp; + args.out_size = IAVF_AQ_BUF_SZ; + + err = iavf_execute_vf_cmd(adapter, &args); + if (err) + PMD_DRV_LOG(ERR, "Failed to execute command of" " VIRTCHNL_OP_CONFIG_TC_MAP"); + return err; +} + int iavf_add_del_mc_addr_list(struct iavf_adapter *adapter, struct rte_ether_addr *mc_addrs, diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build index 6f222a9e87..f2010a8337 100644 --- a/drivers/net/iavf/meson.build +++ b/drivers/net/iavf/meson.build @@ -19,6 +19,7 @@ sources = files( 'iavf_generic_flow.c', 'iavf_fdir.c', 'iavf_hash.c', + 'iavf_tm.c', ) if arch_subdir == 'x86'