From patchwork Tue Nov  3 08:28:09 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steve Yang
X-Patchwork-Id: 83501
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Steve Yang
To: dev@dpdk.org
Cc: jia.guo@intel.com, haiyue.wang@intel.com, qiming.yang@intel.com,
 beilei.xing@intel.com, orika@nvidia.com, murphyx.yang@intel.com,
 Steve Yang
Date: Tue, 3 Nov 2020 08:28:09 +0000
Message-Id: <20201103082809.41149-7-stevex.yang@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201103082809.41149-1-stevex.yang@intel.com>
References: <20201014084131.72035-1-simonx.lu@intel.com>
 <20201103082809.41149-1-stevex.yang@intel.com>
Subject: [dpdk-dev] [RFC v2 6/6] net/ixgbe: use flow sample to re-realize
 mirror rule
List-Id: DPDK patches and discussions <dev.dpdk.org>
Sender: "dev" <dev-bounces@dpdk.org>

When a flow sample rule's ratio is set to one, its behavior is the same
as a mirror rule, so "flow create * pattern * actions sample *" can now
be used to replace the old "set port * mirror-rule *" command.

Examples of mapping the mirror rule commands to flow management commands
(in the commands below, port 0 is the PF and ports 1-2 are VFs):

1) ingress: pf => pf
   set port 0 mirror-rule 0 uplink-mirror dst-pool 2 on
   or
   flow create 0 ingress pattern pf / end \
     actions sample ratio 1 / port_id id 0 / end

2) egress: pf => pf
   set port 0 mirror-rule 0 downlink-mirror dst-pool 2 on
   or
   flow create 0 egress pattern pf / end \
     actions sample ratio 1 / port_id id 0 / end

3) ingress: pf => vf 2
   set port 0 mirror-rule 0 uplink-mirror dst-pool 1 on
   or
   flow create 0 ingress pattern pf / end \
     actions sample ratio 1 / port_id id 2 / end

4) egress: pf => vf 2
   set port 0 mirror-rule 0 downlink-mirror dst-pool 1 on
   or
   flow create 0 egress pattern pf / end \
     actions sample ratio 1 / port_id id 2 / end

5) ingress: vf 0,1 => pf
   set port 0 mirror-rule 0 pool-mirror-up 0x3 dst-pool 2 on
   or
   flow create 0 ingress pattern vf id is 0 / end \
     actions sample ratio 1 / port_id id 0 / end
   flow create 0 ingress pattern vf id is 1 / end \
     actions sample ratio 1 / port_id id 0 / end

6) ingress: vf 0 => vf 1
   set port 0 mirror-rule 0 pool-mirror-up 0x1 dst-pool 1 on
   or
   flow create 0 ingress pattern vf id is 0 / end \
     actions sample ratio 1 / port_id id 2 / end

7) ingress: vlan 4,6 => vf 1
   rx_vlan add 4 port 0 vf 0xf
   rx_vlan add 6 port 0 vf 0xf
   set port 0 mirror-rule 0 vlan-mirror 4,6 dst-pool 1 on
   or
   rx_vlan add 4 port 0 vf 0xf
   rx_vlan add 6 port 0 vf 0xf
   flow create 0
   ingress pattern vlan vid is 4 / end \
     actions sample ratio 1 / port_id id 2 / end
   flow create 0 ingress pattern vlan vid is 6 / end \
     actions sample ratio 1 / port_id id 2 / end

Signed-off-by: Steve Yang
---
 drivers/net/ixgbe/ixgbe_flow.c | 228 +++++++++++++++++++++++++++++++++
 1 file changed, 228 insertions(+)

diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 0ad49ca48..5635bf585 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -117,6 +117,7 @@ static struct ixgbe_syn_filter_list filter_syn_list;
 static struct ixgbe_fdir_rule_filter_list filter_fdir_list;
 static struct ixgbe_l2_tunnel_filter_list filter_l2_tunnel_list;
 static struct ixgbe_rss_filter_list filter_rss_list;
+static struct ixgbe_mirror_filter_list filter_mirror_list;
 static struct ixgbe_flow_mem_list ixgbe_flow_list;
 
 /**
@@ -3176,6 +3177,185 @@ ixgbe_parse_sample_filter(struct rte_eth_dev *dev,
 	return ixgbe_flow_parse_sample_action(dev, actions, error, conf);
 }
 
+static int
+ixgbe_config_mirror_filter_add(struct rte_eth_dev *dev,
+			       struct ixgbe_flow_mirror_conf *mirror_conf)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	uint32_t mr_ctl, vlvf;
+	uint32_t mp_lsb = 0;
+	uint32_t mv_msb = 0;
+	uint32_t mv_lsb = 0;
+	uint32_t mp_msb = 0;
+	uint8_t i = 0;
+	int reg_index = 0;
+	uint64_t vlan_mask = 0;
+
+	const uint8_t pool_mask_offset = 32;
+	const uint8_t vlan_mask_offset = 32;
+	const uint8_t dst_pool_offset = 8;
+	const uint8_t rule_mr_offset = 4;
+	const uint8_t mirror_rule_mask = 0x0F;
+
+	struct ixgbe_hw *hw =
+		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	struct ixgbe_mirror_conf_ele *it;
+	int8_t rule_id;
+	uint8_t mirror_type = 0;
+
+	if (ixgbe_vt_check(hw) < 0)
+		return -ENOTSUP;
+
+	if (IXGBE_INVALID_MIRROR_TYPE(mirror_conf->rule_type)) {
+		PMD_DRV_LOG(ERR, "unsupported mirror type 0x%x.",
+			mirror_conf->rule_type);
+		return -EINVAL;
+	}
+
+	TAILQ_FOREACH(it, &filter_mirror_list, entries) {
+		if (it->filter_info.rule_type == mirror_conf->rule_type &&
+		    it->filter_info.dst_pool == mirror_conf->dst_pool &&
+		    it->filter_info.pool_mask == mirror_conf->pool_mask &&
+		    it->filter_info.vlan_mask == mirror_conf->vlan_mask &&
+		    !memcmp(it->filter_info.vlan_id, mirror_conf->vlan_id,
+			    ETH_MIRROR_MAX_VLANS *
+			    sizeof(mirror_conf->vlan_id[0]))) {
+			PMD_DRV_LOG(ERR, "mirror rule exists.");
+			return -EEXIST;
+		}
+	}
+
+	rule_id = ixgbe_mirror_filter_insert(filter_info, mirror_conf);
+	if (rule_id < 0) {
+		PMD_DRV_LOG(ERR, "more than maximum mirror count(%d).",
+			    IXGBE_MAX_MIRROR_RULES);
+		return -EINVAL;
+	}
+
+	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+		mirror_type |= IXGBE_MRCTL_VLME;
+		/* Check if vlan id is valid and find corresponding VLAN ID
+		 * index in VLVF.
+		 */
+		for (i = 0; i < pci_dev->max_vfs; i++)
+			if (mirror_conf->vlan_mask & (1ULL << i)) {
+				/* search vlan id related pool vlan filter
+				 * index
+				 */
+				reg_index = ixgbe_find_vlvf_slot(hw,
+						mirror_conf->vlan_id[i],
+						false);
+				if (reg_index < 0)
+					return -EINVAL;
+				vlvf = IXGBE_READ_REG(hw,
+						IXGBE_VLVF(reg_index));
+				if ((vlvf & IXGBE_VLVF_VIEN) &&
+				    ((vlvf & IXGBE_VLVF_VLANID_MASK) ==
+				     mirror_conf->vlan_id[i])) {
+					vlan_mask |= (1ULL << reg_index);
+				} else {
+					ixgbe_mirror_filter_remove(filter_info,
+						mirror_conf->rule_id);
+					return -EINVAL;
+				}
+			}
+
+		mv_lsb = vlan_mask & 0xFFFFFFFF;
+		mv_msb = vlan_mask >> vlan_mask_offset;
+	}
+
+	/* If pool mirroring is enabled, write the related pool mask
+	 * register; if it is disabled, clear the PFMRVM register.
+	 */
+	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+		mirror_type |= IXGBE_MRCTL_VPME;
+		mp_lsb = mirror_conf->pool_mask & 0xFFFFFFFF;
+		mp_msb = mirror_conf->pool_mask >> pool_mask_offset;
+	}
+	if (mirror_conf->rule_type & ETH_MIRROR_UPLINK_PORT)
+		mirror_type |= IXGBE_MRCTL_UPME;
+	if (mirror_conf->rule_type &
+	    ETH_MIRROR_DOWNLINK_PORT)
+		mirror_type |= IXGBE_MRCTL_DPME;
+
+	/* read mirror control register and recalculate it */
+	mr_ctl = IXGBE_READ_REG(hw, IXGBE_MRCTL(rule_id));
+	mr_ctl |= mirror_type;
+	mr_ctl &= mirror_rule_mask;
+	mr_ctl |= mirror_conf->dst_pool << dst_pool_offset;
+
+	/* write mirror control register */
+	IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
+
+	/* write pool mirror control register */
+	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), mp_lsb);
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset),
+				mp_msb);
+	}
+	/* write VLAN mirror control register */
+	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), mv_lsb);
+		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset),
+				mv_msb);
+	}
+
+	return 0;
+}
+
+/* remove the mirror filter */
+static int
+ixgbe_config_mirror_filter_del(struct rte_eth_dev *dev,
+			       struct ixgbe_flow_mirror_conf *conf)
+{
+	struct ixgbe_hw *hw =
+		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	uint8_t rule_id = conf->rule_id;
+	int mr_ctl = 0;
+	uint32_t lsb_val = 0;
+	uint32_t msb_val = 0;
+	const uint8_t rule_mr_offset = 4;
+
+	if (ixgbe_vt_check(hw) < 0)
+		return -ENOTSUP;
+
+	if (rule_id >= IXGBE_MAX_MIRROR_RULES)
+		return -EINVAL;
+
+	/* clear PFVMCTL register */
+	IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
+
+	/* clear pool mask register */
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), lsb_val);
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset), msb_val);
+
+	/* clear vlan mask register */
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), lsb_val);
+	IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset), msb_val);
+
+	ixgbe_mirror_filter_remove(filter_info, rule_id);
+	return 0;
+}
+
+static void
+ixgbe_clear_all_mirror_filter(struct rte_eth_dev *dev)
+{
+	struct ixgbe_filter_info *filter_info =
+		IXGBE_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
+	int i;
+
+	for (i = 0; i < IXGBE_MAX_MIRROR_RULES; i++) {
+		if (filter_info->mirror_mask & (1 << i)) {
+			ixgbe_config_mirror_filter_del(dev,
+				&filter_info->mirror_filters[i]);
+		}
+	}
+}
+
 void
 ixgbe_filterlist_init(void)
 {
@@ -3185,6 +3365,7 @@ ixgbe_filterlist_init(void)
 	TAILQ_INIT(&filter_fdir_list);
 	TAILQ_INIT(&filter_l2_tunnel_list);
 	TAILQ_INIT(&filter_rss_list);
+	TAILQ_INIT(&filter_mirror_list);
 	TAILQ_INIT(&ixgbe_flow_list);
 }
 
@@ -3198,6 +3379,7 @@ ixgbe_filterlist_flush(void)
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
+	struct ixgbe_mirror_conf_ele *mirror_filter_ptr;
 
 	while ((ntuple_filter_ptr = TAILQ_FIRST(&filter_ntuple_list))) {
 		TAILQ_REMOVE(&filter_ntuple_list,
@@ -3241,6 +3423,13 @@ ixgbe_filterlist_flush(void)
 		rte_free(rss_filter_ptr);
 	}
 
+	while ((mirror_filter_ptr = TAILQ_FIRST(&filter_mirror_list))) {
+		TAILQ_REMOVE(&filter_mirror_list,
+			     mirror_filter_ptr,
+			     entries);
+		rte_free(mirror_filter_ptr);
+	}
+
 	while ((ixgbe_flow_mem_ptr = TAILQ_FIRST(&ixgbe_flow_list))) {
 		TAILQ_REMOVE(&ixgbe_flow_list,
 			     ixgbe_flow_mem_ptr,
@@ -3272,6 +3461,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_rte_flow_rss_conf rss_conf;
+	struct ixgbe_flow_mirror_conf mirror_conf;
 	struct rte_flow *flow = NULL;
 	struct ixgbe_ntuple_filter_ele *ntuple_filter_ptr;
 	struct ixgbe_ethertype_filter_ele *ethertype_filter_ptr;
@@ -3279,6 +3469,7 @@ ixgbe_flow_create(struct rte_eth_dev *dev,
 	struct ixgbe_eth_l2_tunnel_conf_ele *l2_tn_filter_ptr;
 	struct ixgbe_fdir_rule_ele *fdir_rule_ptr;
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
+	struct ixgbe_mirror_conf_ele *mirror_filter_ptr;
 	struct ixgbe_flow_mem *ixgbe_flow_mem_ptr;
 	uint8_t first_mask = FALSE;
 
@@ -3501,6 +3692,30 @@ ixgbe_flow_create(struct
 		}
 	}
 
+	memset(&mirror_conf, 0, sizeof(struct ixgbe_flow_mirror_conf));
+	ret = ixgbe_parse_sample_filter(dev, attr, pattern,
+					actions, &mirror_conf, error);
+	if (!ret) {
+		/* Just support mirror behavior */
+		ret = ixgbe_config_mirror_filter_add(dev, &mirror_conf);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "failed to add mirror filter");
+			goto out;
+		}
+
+		mirror_filter_ptr = rte_zmalloc("ixgbe_mirror_filter",
+				sizeof(struct ixgbe_mirror_conf_ele), 0);
+		if (!mirror_filter_ptr) {
+			PMD_DRV_LOG(ERR, "failed to allocate memory");
+			goto out;
+		}
+		mirror_filter_ptr->filter_info = mirror_conf;
+		TAILQ_INSERT_TAIL(&filter_mirror_list,
+				  mirror_filter_ptr, entries);
+		flow->rule = mirror_filter_ptr;
+		flow->filter_type = RTE_ETH_FILTER_SAMPLE;
+		return flow;
+	}
 out:
 	TAILQ_REMOVE(&ixgbe_flow_list, ixgbe_flow_mem_ptr, entries);
@@ -3592,6 +3807,7 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 	struct ixgbe_hw_fdir_info *fdir_info =
 		IXGBE_DEV_PRIVATE_TO_FDIR_INFO(dev->data->dev_private);
 	struct ixgbe_rss_conf_ele *rss_filter_ptr;
+	struct ixgbe_mirror_conf_ele *mirror_filter_ptr;
 
 	switch (filter_type) {
 	case RTE_ETH_FILTER_NTUPLE:
@@ -3671,6 +3887,17 @@ ixgbe_flow_destroy(struct rte_eth_dev *dev,
 			rte_free(rss_filter_ptr);
 		}
 		break;
+	case RTE_ETH_FILTER_SAMPLE:
+		mirror_filter_ptr = (struct ixgbe_mirror_conf_ele *)
+			pmd_flow->rule;
+		ret = ixgbe_config_mirror_filter_del(dev,
+				&mirror_filter_ptr->filter_info);
+		if (!ret) {
+			TAILQ_REMOVE(&filter_mirror_list,
+				     mirror_filter_ptr, entries);
+			rte_free(mirror_filter_ptr);
+		}
+		break;
 	default:
 		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
 			    filter_type);
@@ -3723,6 +3950,7 @@ ixgbe_flow_flush(struct rte_eth_dev *dev,
 	}
 
 	ixgbe_clear_rss_filter(dev);
+	ixgbe_clear_all_mirror_filter(dev);
 
 	ixgbe_filterlist_flush();