From patchwork Mon Dec 28 05:07:19 2020
X-Patchwork-Submitter: "Wang, Haiyue"
X-Patchwork-Id: 85755
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Haiyue Wang
To: dev@dpdk.org
Cc: qiming.yang@intel.com, jingjing.wu@intel.com, qi.z.zhang@intel.com, qi.fu@intel.com, Haiyue Wang, Beilei Xing
Date: Mon, 28 Dec 2020 13:07:19 +0800
Message-Id: <20201228050723.27265-2-haiyue.wang@intel.com>
In-Reply-To: <20201228050723.27265-1-haiyue.wang@intel.com>
References: <20201214071155.98764-1-haiyue.wang@intel.com> <20201228050723.27265-1-haiyue.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 1/5] common/iavf: new VLAN opcode

Add new VLAN opcode support.
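For illustration only (not part of the patch), a VF driver consuming the capability flags added below might test an ethertype against the documented AND/XOR semantics roughly like this; the helper name is invented, and the types are the ones introduced in this patch:

static bool
vf_outer_strip_tpid_supported(const struct virtchnl_vlan_offload_caps *caps,
                              u16 ethertype_flag)
{
    u16 outer = caps->outer_stripping;

    /* VIRTCHNL_VLAN_UNSUPPORTED (0) means the whole field is unusable. */
    if (outer == VIRTCHNL_VLAN_UNSUPPORTED)
        return false;

    /* With VIRTCHNL_VLAN_ETHERTYPE_AND, every advertised ethertype can be
     * active at the same time; with VIRTCHNL_VLAN_ETHERTYPE_XOR only one
     * can be active, and a new request overrides the previous setting.
     * Either way, the requested ethertype must be advertised by the PF.
     */
    return (outer & ethertype_flag) != 0;
}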
Signed-off-by: Haiyue Wang
---
 drivers/common/iavf/virtchnl.h | 259 +++++++++++++++++++++++++++++++++
 1 file changed, 259 insertions(+)

diff --git a/drivers/common/iavf/virtchnl.h b/drivers/common/iavf/virtchnl.h
index fcbaa31fa..13788e46b 100644
--- a/drivers/common/iavf/virtchnl.h
+++ b/drivers/common/iavf/virtchnl.h
@@ -129,6 +129,7 @@ enum virtchnl_ops {
     VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
     VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
     /* opcodes 34, 35, 36, 37 and 38 are reserved */
+    VIRTCHNL_OP_DCF_VLAN_OFFLOAD = 38,
     VIRTCHNL_OP_DCF_CMD_DESC = 39,
     VIRTCHNL_OP_DCF_CMD_BUFF = 40,
     VIRTCHNL_OP_DCF_DISABLE = 41,
@@ -141,6 +142,11 @@ enum virtchnl_ops {
     VIRTCHNL_OP_DEL_FDIR_FILTER = 48,
     VIRTCHNL_OP_QUERY_FDIR_FILTER = 49,
     VIRTCHNL_OP_GET_MAX_RSS_QREGION = 50,
+    VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS = 51,
+    VIRTCHNL_OP_ADD_VLAN_V2 = 52,
+    VIRTCHNL_OP_DEL_VLAN_V2 = 53,
+    VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 = 54,
+    VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55,
     VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
     VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
     VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
@@ -251,6 +257,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
 #define VIRTCHNL_VF_OFFLOAD_CRC			0x00000080
     /* 0X00000100 is reserved */
 #define VIRTCHNL_VF_LARGE_NUM_QPAIRS		0x00000200
+#define VIRTCHNL_VF_OFFLOAD_VLAN_V2		0x00008000
 #define VIRTCHNL_VF_OFFLOAD_VLAN		0x00010000
 #define VIRTCHNL_VF_OFFLOAD_RX_POLLING		0x00020000
 #define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2	0x00040000
@@ -536,6 +543,202 @@ struct virtchnl_vlan_filter_list {
 VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);

+/* This enum is used for all of the VIRTCHNL_VF_OFFLOAD_VLAN_V2_CAPS related
+ * structures and opcodes.
+ *
+ * VIRTCHNL_VLAN_UNSUPPORTED - This field is not supported and if a VF driver
+ * populates it the PF should return VIRTCHNL_STATUS_ERR_NOT_SUPPORTED.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 - This field supports 0x8100 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 - This field supports 0x88A8 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_9100 - This field supports 0x9100 ethertype.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_AND - Used when multiple ethertypes can be supported
+ * by the PF concurrently. For example, if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 AND VIRTCHNL_VLAN_ETHERTYPE_88A8 filters it
+ * would OR the following in the virtchnl_vlan_filtering_caps.outer field:
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * The VF would interpret this to mean that VLAN filtering is supported on both
+ * 0x8100 and 0x88A8 VLAN ethertypes.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR - Used when only a single ethertype can be
+ * supported by the PF concurrently. For example, if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 XOR VIRTCHNL_VLAN_ETHERTYPE_88A8 stripping
+ * offload it would OR the following in the
+ * virtchnl_vlan_offload_caps.outer_stripping field:
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * The VF would interpret this to mean that VLAN stripping is supported on
+ * either 0x8100 or 0x88a8 VLAN ethertypes. So when requesting VLAN stripping
+ * via VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 the specified ethertype will
+ * override the previously set value.
+ *
+ * VIRTCHNL_VLAN_PRIO - This field supports VLAN priority bits. This is used for
+ * VLAN filtering if the underlying PF supports it.
+ *
+ * VIRTCHNL_VLAN_TOGGLE - This field is used to say whether a
+ * certain VLAN capability can be toggled.
For example, if the underlying PF/CP
+ * allows the VF to toggle VLAN filtering, stripping, and/or insertion it should
+ * set this bit along with the supported ethertypes.
+ */
+enum virtchnl_vlan_support {
+    VIRTCHNL_VLAN_UNSUPPORTED = 0,
+    VIRTCHNL_VLAN_ETHERTYPE_8100 = 0x0001,
+    VIRTCHNL_VLAN_ETHERTYPE_88A8 = 0x0002,
+    VIRTCHNL_VLAN_ETHERTYPE_9100 = 0x0004,
+    VIRTCHNL_VLAN_PRIO = 0x0100,
+    VIRTCHNL_VLAN_FILTER_MASK = 0x1000,
+    VIRTCHNL_VLAN_ETHERTYPE_AND = 0x2000,
+    VIRTCHNL_VLAN_ETHERTYPE_XOR = 0x4000,
+    VIRTCHNL_VLAN_TOGGLE = 0x8000,
+};
+
+/* The PF populates these fields based on the supported VLAN filtering. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ADD_VLAN_V2 or VIRTCHNL_OP_DEL_VLAN_V2 messages using
+ * the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN filtering setting if the
+ * VIRTCHNL_VLAN_TOGGLE bit is set.
+ *
+ * The max_filters field tells the VF how many VLAN filters it's allowed to have
+ * at any one time. If it exceeds this amount and tries to add another filter,
+ * then the request will be rejected by the PF.
+ */
+struct virtchnl_vlan_filtering_caps {
+    u16 outer;
+    u16 inner;
+    u16 max_filters;
+    u8 pad[4];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(10, virtchnl_vlan_filtering_caps);
+
+/* This enum is used for the virtchnl_vlan_offload_caps structure to specify
+ * if the PF supports a different ethertype for stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION - The ethertype(s) specified
+ * for stripping affect the ethertype(s) specified for insertion and vice
+ * versa. If the VF tries to configure VLAN stripping via
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 with VIRTCHNL_VLAN_ETHERTYPE_8100 then
+ * that will be the ethertype for both stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED - The ethertype(s) specified for
+ * stripping do not affect the ethertype(s) specified for insertion and vice
+ * versa.
+ */
+enum virtchnl_vlan_ethertype_match {
+    VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION = 0,
+    VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED = 1,
+};
+
+/* The PF populates these fields based on the supported VLAN offloads. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 or
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 messages using the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN offload setting if the
+ * VIRTCHNL_VLAN_TOGGLE bit is set.
+ *
+ * The VF driver needs to be aware of how the tags are stripped by hardware and
+ * inserted by the VF driver based on the level of offload support.
+ *
+ * outer_stripping supported - VLAN tag stripped into L2TAG2 field by hardware
+ * outer_insertion supported - VLAN tag inserted into L2TAG2 field by VF driver
+ *
+ * inner_stripping supported - VLAN tag stripped into L2TAG1 field by hardware
+ * inner_insertion supported - VLAN tag inserted into L2TAG1 field by VF driver
+ */
+struct virtchnl_vlan_offload_caps {
+    u16 outer_stripping;
+    u16 inner_stripping;
+    u16 outer_insertion;
+    u16 inner_insertion;
+    u16 ethertype_init;
+    u8 ethertype_match;
+    u8 pad[3];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_vlan_offload_caps);
+
+/* VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
+ * VF sends this message to determine its VLAN capabilities.
+ *
+ * PF will mark which capabilities it supports based on hardware support and
+ * current configuration.
For example, if a port VLAN is configured, the PF will
+ * not allow outer VLAN filtering, stripping, or insertion to be configured so
+ * it will block these features from the VF.
+ *
+ * The VF will need to cross-reference its capabilities with the PF's
+ * capabilities in the response message from the PF to determine the VLAN
+ * support.
+ */
+struct virtchnl_vlan_caps {
+    struct virtchnl_vlan_filtering_caps filtering;
+    struct virtchnl_vlan_offload_caps offloads;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_vlan_caps);
+
+struct virtchnl_vlan {
+    u16 tci;      /* tci[15:13] = PCP and tci[11:0] = VID */
+    u16 tci_mask; /* only valid if VIRTCHNL_VLAN_FILTER_MASK set in filtering caps */
+    u16 tpid;     /* 0x8100, 0x88a8, etc. and only type(s) set in filtering caps */
+    u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_vlan);
+
+struct virtchnl_vlan_filter {
+    struct virtchnl_vlan inner;
+    struct virtchnl_vlan outer;
+    u8 pad[16];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(32, virtchnl_vlan_filter);
+
+/* VIRTCHNL_OP_ADD_VLAN_V2
+ * VIRTCHNL_OP_DEL_VLAN_V2
+ *
+ * VF sends these messages to add/del one or more VLAN tag filters for Rx
+ * traffic.
+ *
+ * The PF/CP attempts to add the filters and returns status.
+ *
+ * The VF should only ever attempt to add/del virtchnl_vlan_filter(s) using the
+ * supported fields negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS.
+ */
+struct virtchnl_vlan_filter_list_v2 {
+    u16 vport_id;
+    u16 num_elements;
+    u8 pad[4];
+    struct virtchnl_vlan_filter filters[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_filter_list_v2);
+
+/* VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2
+ *
+ * VF sends this message to enable or disable VLAN stripping. It also needs to
+ * specify an ethertype. The VF knows which virtchnl_vlan_filter fields are
+ * allowed via the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message.
+ */
+struct virtchnl_vlan_strip {
+    u16 vsi_id;
+    u16 outer_ethertype_setting; /* 0x8100, 0x88a8, etc. and only type(s) set in the offload caps */
+    u16 inner_ethertype_setting; /* 0x8100, 0x88a8, etc. and only type(s) set in the offload caps */
+    u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_vlan_strip);
+
 /* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
  * VF sends VSI id and flags.
  * PF returns status code in retval.
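Because virtchnl_vlan_filter_list_v2 ends in a one-element flexible array, the wire size of an ADD/DEL message grows with num_elements. A sketch of the sizing rule (the helper name is illustrative; the same arithmetic appears in the message validation further below):

static inline size_t
vlan_v2_filter_list_size(u16 num_elements)
{
    /* One virtchnl_vlan_filter is already part of the base structure,
     * so only (num_elements - 1) extra entries are appended. */
    return sizeof(struct virtchnl_vlan_filter_list_v2) +
           (num_elements - 1) * sizeof(struct virtchnl_vlan_filter);
}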
@@ -754,6 +957,32 @@ struct virtchnl_pkg_info {

 VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_pkg_info);

+struct virtchnl_dcf_vlan_offload {
+    u16 vf_id;
+    u16 tpid;
+    u16 vlan_flags;
+#define VIRTCHNL_DCF_VLAN_TYPE_S		0
+#define VIRTCHNL_DCF_VLAN_TYPE_M \
+    (0x1 << VIRTCHNL_DCF_VLAN_TYPE_S)
+#define VIRTCHNL_DCF_VLAN_TYPE_INNER		0x0
+#define VIRTCHNL_DCF_VLAN_TYPE_OUTER		0x1
+#define VIRTCHNL_DCF_VLAN_INSERT_MODE_S		1
+#define VIRTCHNL_DCF_VLAN_INSERT_MODE_M \
+    (0x7 << VIRTCHNL_DCF_VLAN_INSERT_MODE_S)
+#define VIRTCHNL_DCF_VLAN_INSERT_DISABLE	0x1
+#define VIRTCHNL_DCF_VLAN_INSERT_PORT_BASED	0x2
+#define VIRTCHNL_DCF_VLAN_INSERT_VIA_TX_DESC	0x3
+#define VIRTCHNL_DCF_VLAN_STRIP_MODE_S		4
+#define VIRTCHNL_DCF_VLAN_STRIP_MODE_M \
+    (0x7 << VIRTCHNL_DCF_VLAN_STRIP_MODE_S)
+#define VIRTCHNL_DCF_VLAN_STRIP_DISABLE		0x1
+#define VIRTCHNL_DCF_VLAN_STRIP_ONLY		0x2
+#define VIRTCHNL_DCF_VLAN_STRIP_INTO_RX_DESC	0x3
+    u16 vlan_id;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_dcf_vlan_offload);
+
 struct virtchnl_supported_rxdids {
     u64 supported_rxdids;
 };
@@ -1291,6 +1520,10 @@ enum virtchnl_vector_limits {
     VIRTCHNL_OP_MAP_UNMAP_QUEUE_VECTOR_MAX =
         ((u16)(~0) - sizeof(struct virtchnl_queue_vector_maps)) /
         sizeof(struct virtchnl_queue_vector),
+
+    VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX =
+        ((u16)(~0) - sizeof(struct virtchnl_vlan_filter_list_v2)) /
+        sizeof(struct virtchnl_vlan_filter),
 };

 /**
@@ -1465,6 +1698,9 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
     case VIRTCHNL_OP_DEL_CLOUD_FILTER:
         valid_len = sizeof(struct virtchnl_filter);
         break;
+    case VIRTCHNL_OP_DCF_VLAN_OFFLOAD:
+        valid_len = sizeof(struct virtchnl_dcf_vlan_offload);
+        break;
     case VIRTCHNL_OP_DCF_CMD_DESC:
     case VIRTCHNL_OP_DCF_CMD_BUFF:
         /* These two opcodes are specific to handle the AdminQ command,
@@ -1491,6 +1727,29 @@ virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
     case VIRTCHNL_OP_QUERY_FDIR_FILTER:
         valid_len = sizeof(struct virtchnl_fdir_query);
         break;
+    case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
+        break;
+    case VIRTCHNL_OP_ADD_VLAN_V2:
+    case VIRTCHNL_OP_DEL_VLAN_V2:
+        valid_len = sizeof(struct virtchnl_vlan_filter_list_v2);
+        if (msglen >= valid_len) {
+            struct virtchnl_vlan_filter_list_v2 *vfl =
+                (struct virtchnl_vlan_filter_list_v2 *)msg;
+
+            if (vfl->num_elements == 0 || vfl->num_elements >
+                VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX) {
+                err_msg_format = true;
+                break;
+            }
+
+            valid_len += (vfl->num_elements - 1) *
+                sizeof(struct virtchnl_vlan_filter);
+        }
+        break;
+    case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2:
+    case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2:
+        valid_len = sizeof(struct virtchnl_vlan_strip);
+        break;
     case VIRTCHNL_OP_ENABLE_QUEUES_V2:
     case VIRTCHNL_OP_DISABLE_QUEUES_V2:
         valid_len = sizeof(struct virtchnl_del_ena_dis_queues);
From patchwork Mon Dec 28 05:07:20 2020
X-Patchwork-Submitter: "Wang, Haiyue"
X-Patchwork-Id: 85754
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Haiyue Wang
To: dev@dpdk.org
Cc: qiming.yang@intel.com, jingjing.wu@intel.com, qi.z.zhang@intel.com, qi.fu@intel.com, Haiyue Wang, Beilei Xing
Date: Mon, 28 Dec 2020 13:07:20 +0800
Message-Id: <20201228050723.27265-3-haiyue.wang@intel.com>
In-Reply-To: <20201228050723.27265-1-haiyue.wang@intel.com>
References: <20201214071155.98764-1-haiyue.wang@intel.com> <20201228050723.27265-1-haiyue.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/5] net/iavf: support Ethernet CRC strip disable

The VF first checks the PF's CRC-strip capability, then sets the 'CRC strip disable' value in the queue configuration according to the Rx CRC offload setting.
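At the application level this capability surfaces as the standard DEV_RX_OFFLOAD_KEEP_CRC flag; a minimal usage sketch, assuming port_id and the rest of the port setup exist elsewhere:

struct rte_eth_dev_info dev_info;
struct rte_eth_conf port_conf = { 0 };

rte_eth_dev_info_get(port_id, &dev_info);
/* The flag is only advertised when the VF negotiated
 * VIRTCHNL_VF_OFFLOAD_CRC with the PF. */
if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_KEEP_CRC)
    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
rte_eth_dev_configure(port_id, 1, 1, &port_conf);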
Signed-off-by: Haiyue Wang
---
 drivers/net/iavf/iavf_ethdev.c | 3 +++
 drivers/net/iavf/iavf_rxtx.c   | 6 +++++-
 drivers/net/iavf/iavf_vchnl.c  | 3 ++-
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index d2fa16825..75361b73b 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -798,6 +798,9 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
         DEV_TX_OFFLOAD_MULTI_SEGS |
         DEV_TX_OFFLOAD_MBUF_FAST_FREE;

+    if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
+        dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+
     dev_info->default_rxconf = (struct rte_eth_rxconf) {
         .rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
         .rx_drop_en = 0,
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 21d508b3f..d53d7b984 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -550,11 +550,15 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
     rxq->rx_free_thresh = rx_free_thresh;
     rxq->queue_id = queue_idx;
     rxq->port_id = dev->data->port_id;
-    rxq->crc_len = 0; /* crc stripping by default */
     rxq->rx_deferred_start = rx_conf->rx_deferred_start;
     rxq->rx_hdr_len = 0;
     rxq->vsi = vsi;

+    if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+        rxq->crc_len = RTE_ETHER_CRC_LEN;
+    else
+        rxq->crc_len = 0;
+
     len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
     rxq->rx_buf_len = RTE_ALIGN(len, (1 << IAVF_RXQ_CTX_DBUFF_SHIFT));

diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 25d5cdaf5..c33194cdc 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -458,6 +458,7 @@ iavf_get_vf_resource(struct iavf_adapter *adapter)
         VIRTCHNL_VF_OFFLOAD_FDIR_PF |
         VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF |
         VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+        VIRTCHNL_VF_OFFLOAD_CRC |
         VIRTCHNL_VF_LARGE_NUM_QPAIRS;

     args.in_args = (uint8_t *)&caps;
@@ -853,7 +854,7 @@ iavf_configure_queues(struct iavf_adapter *adapter,
         vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
         vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
         vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
-
+        vc_qp->rxq.crc_disable = rxq[i]->crc_len != 0 ? 1 : 0;
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
         if (vf->vf_res->vf_cap_flags &
             VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&

From patchwork Mon Dec 28 05:07:21 2020
X-Patchwork-Submitter: "Wang, Haiyue"
X-Patchwork-Id: 85756
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Haiyue Wang
To: dev@dpdk.org
Cc: qiming.yang@intel.com, jingjing.wu@intel.com, qi.z.zhang@intel.com, qi.fu@intel.com, Haiyue Wang, Wei Zhao
Date: Mon, 28 Dec 2020 13:07:21 +0800
Message-Id: <20201228050723.27265-4-haiyue.wang@intel.com>
In-Reply-To: <20201228050723.27265-1-haiyue.wang@intel.com>
References: <20201214071155.98764-1-haiyue.wang@intel.com> <20201228050723.27265-1-haiyue.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 3/5] net/ice: enable QinQ filter for switch

Enable double VLAN (QinQ) support for the switch filter.
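For reference, a QinQ switch-filter rule of the kind this patch enables could be expressed with two VLAN items; a sketch only, with the attr/actions setup and error handling omitted, and with inner_type left unset since the parser now rejects ethertype matching inside VLAN items:

struct rte_flow_item_vlan ovlan_spec = { 0 }, ovlan_mask = { 0 };
struct rte_flow_item_vlan ivlan_spec = { 0 }, ivlan_mask = { 0 };

ovlan_spec.tci = rte_cpu_to_be_16(100);    /* outer VID 100 */
ovlan_mask.tci = rte_cpu_to_be_16(0x0fff);
ivlan_spec.tci = rte_cpu_to_be_16(200);    /* inner VID 200 */
ivlan_mask.tci = rte_cpu_to_be_16(0x0fff);

struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_ETH },
    { .type = RTE_FLOW_ITEM_TYPE_VLAN,  /* outer VLAN -> ICE_VLAN_EX */
      .spec = &ovlan_spec, .mask = &ovlan_mask },
    { .type = RTE_FLOW_ITEM_TYPE_VLAN,  /* inner VLAN -> ICE_VLAN_OFOS */
      .spec = &ivlan_spec, .mask = &ivlan_mask },
    { .type = RTE_FLOW_ITEM_TYPE_END },
};
/* flow = rte_flow_create(port_id, &attr, pattern, actions, &error); */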
Signed-off-by: Wei Zhao Signed-off-by: Haiyue Wang --- drivers/net/ice/ice_generic_flow.c | 8 +++ drivers/net/ice/ice_generic_flow.h | 1 + drivers/net/ice/ice_switch_filter.c | 104 +++++++++++++++++++++++++--- 3 files changed, 102 insertions(+), 11 deletions(-) diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c index 1429cbc3b..1712d3b2e 100644 --- a/drivers/net/ice/ice_generic_flow.c +++ b/drivers/net/ice/ice_generic_flow.c @@ -1455,6 +1455,14 @@ enum rte_flow_item_type pattern_eth_qinq_pppoes[] = { RTE_FLOW_ITEM_TYPE_PPPOES, RTE_FLOW_ITEM_TYPE_END, }; +enum rte_flow_item_type pattern_eth_qinq_pppoes_proto[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_PPPOES, + RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID, + RTE_FLOW_ITEM_TYPE_END, +}; enum rte_flow_item_type pattern_eth_pppoes_ipv4[] = { RTE_FLOW_ITEM_TYPE_ETH, RTE_FLOW_ITEM_TYPE_PPPOES, diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h index 434d2f425..dc45d8dc6 100644 --- a/drivers/net/ice/ice_generic_flow.h +++ b/drivers/net/ice/ice_generic_flow.h @@ -426,6 +426,7 @@ extern enum rte_flow_item_type pattern_eth_pppoes_proto[]; extern enum rte_flow_item_type pattern_eth_vlan_pppoes[]; extern enum rte_flow_item_type pattern_eth_vlan_pppoes_proto[]; extern enum rte_flow_item_type pattern_eth_qinq_pppoes[]; +extern enum rte_flow_item_type pattern_eth_qinq_pppoes_proto[]; extern enum rte_flow_item_type pattern_eth_pppoes_ipv4[]; extern enum rte_flow_item_type pattern_eth_vlan_pppoes_ipv4[]; extern enum rte_flow_item_type pattern_eth_qinq_pppoes_ipv4[]; diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 8cba6eb7b..43c755e30 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -35,11 +35,15 @@ #define ICE_SW_INSET_ETHER ( \ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) #define ICE_SW_INSET_MAC_VLAN ( \ - ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE | \ - ICE_INSET_VLAN_OUTER) + ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE | \ + ICE_INSET_VLAN_INNER) +#define ICE_SW_INSET_MAC_QINQ ( \ + ICE_SW_INSET_MAC_VLAN | ICE_INSET_VLAN_OUTER) #define ICE_SW_INSET_MAC_IPV4 ( \ ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \ ICE_INSET_IPV4_PROTO | ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS) +#define ICE_SW_INSET_MAC_QINQ_IPV4 ( \ + ICE_SW_INSET_MAC_QINQ | ICE_SW_INSET_MAC_IPV4) #define ICE_SW_INSET_MAC_IPV4_TCP ( \ ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \ ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS | \ @@ -52,6 +56,8 @@ ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \ ICE_INSET_IPV6_TC | ICE_INSET_IPV6_HOP_LIMIT | \ ICE_INSET_IPV6_NEXT_HDR) +#define ICE_SW_INSET_MAC_QINQ_IPV6 ( \ + ICE_SW_INSET_MAC_QINQ | ICE_SW_INSET_MAC_IPV6) #define ICE_SW_INSET_MAC_IPV6_TCP ( \ ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \ ICE_INSET_IPV6_HOP_LIMIT | ICE_INSET_IPV6_TC | \ @@ -182,6 +188,8 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = { ICE_SW_INSET_ETHER, ICE_INSET_NONE}, {pattern_ethertype_vlan, ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE}, + {pattern_ethertype_qinq, + ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE}, {pattern_eth_arp, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4, @@ -262,6 +270,18 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = { ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv6_pfcp, ICE_INSET_NONE, ICE_INSET_NONE}, + {pattern_eth_qinq_ipv4, + 
ICE_SW_INSET_MAC_QINQ_IPV4, ICE_INSET_NONE}, + {pattern_eth_qinq_ipv6, + ICE_SW_INSET_MAC_QINQ_IPV6, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes, + ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes_proto, + ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes_ipv4, + ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes_ipv6, + ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE}, }; static struct @@ -304,6 +324,8 @@ ice_pattern_match_item ice_switch_pattern_perm_comms[] = { ICE_SW_INSET_ETHER, ICE_INSET_NONE}, {pattern_ethertype_vlan, ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE}, + {pattern_ethertype_qinq, + ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE}, {pattern_eth_arp, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4, @@ -384,6 +406,18 @@ ice_pattern_match_item ice_switch_pattern_perm_comms[] = { ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv6_pfcp, ICE_INSET_NONE, ICE_INSET_NONE}, + {pattern_eth_qinq_ipv4, + ICE_SW_INSET_MAC_QINQ_IPV4, ICE_INSET_NONE}, + {pattern_eth_qinq_ipv6, + ICE_SW_INSET_MAC_QINQ_IPV6, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes, + ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes_proto, + ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes_ipv4, + ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE}, + {pattern_eth_qinq_pppoes_ipv6, + ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE}, }; static int @@ -516,6 +550,8 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], bool pppoe_elem_valid = 0; bool pppoe_patt_valid = 0; bool pppoe_prot_valid = 0; + bool inner_vlan_valid = 0; + bool outer_vlan_valid = 0; bool tunnel_valid = 0; bool profile_rule = 0; bool nvgre_valid = 0; @@ -1062,23 +1098,40 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], "Invalid VLAN item"); return 0; } + + if (!outer_vlan_valid && + (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ || + *tun_type == ICE_NON_TUN_QINQ)) + outer_vlan_valid = 1; + else if (!inner_vlan_valid && + (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ || + *tun_type == ICE_NON_TUN_QINQ)) + inner_vlan_valid = 1; + else if (!inner_vlan_valid) + inner_vlan_valid = 1; + if (vlan_spec && vlan_mask) { - list[t].type = ICE_VLAN_OFOS; + if (outer_vlan_valid && !inner_vlan_valid) { + list[t].type = ICE_VLAN_EX; + input_set |= ICE_INSET_VLAN_OUTER; + } else if (inner_vlan_valid) { + list[t].type = ICE_VLAN_OFOS; + input_set |= ICE_INSET_VLAN_INNER; + } + if (vlan_mask->tci) { list[t].h_u.vlan_hdr.vlan = vlan_spec->tci; list[t].m_u.vlan_hdr.vlan = vlan_mask->tci; - input_set |= ICE_INSET_VLAN_OUTER; input_set_byte += 2; } if (vlan_mask->inner_type) { - list[t].h_u.vlan_hdr.type = - vlan_spec->inner_type; - list[t].m_u.vlan_hdr.type = - vlan_mask->inner_type; - input_set |= ICE_INSET_ETHERTYPE; - input_set_byte += 2; + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid VLAN input set."); + return 0; } t++; } @@ -1380,8 +1433,27 @@ ice_switch_inset_get(const struct rte_flow_item pattern[], } } + if (*tun_type == ICE_SW_TUN_PPPOE_PAY && + inner_vlan_valid && outer_vlan_valid) + *tun_type = ICE_SW_TUN_PPPOE_PAY_QINQ; + else if (*tun_type == ICE_SW_TUN_PPPOE && + inner_vlan_valid && outer_vlan_valid) + *tun_type = ICE_SW_TUN_PPPOE_QINQ; + else if (*tun_type == ICE_NON_TUN && + inner_vlan_valid && outer_vlan_valid) + *tun_type = ICE_NON_TUN_QINQ; + else if (*tun_type == ICE_SW_TUN_AND_NON_TUN && + inner_vlan_valid && outer_vlan_valid) + *tun_type = ICE_SW_TUN_AND_NON_TUN_QINQ; + if (pppoe_patt_valid && !pppoe_prot_valid) { - if 
(ipv6_valid && udp_valid)
+        if (inner_vlan_valid && outer_vlan_valid && ipv4_valid)
+            *tun_type = ICE_SW_TUN_PPPOE_IPV4_QINQ;
+        else if (inner_vlan_valid && outer_vlan_valid && ipv6_valid)
+            *tun_type = ICE_SW_TUN_PPPOE_IPV6_QINQ;
+        else if (inner_vlan_valid && outer_vlan_valid)
+            *tun_type = ICE_SW_TUN_PPPOE_QINQ;
+        else if (ipv6_valid && udp_valid)
             *tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
         else if (ipv6_valid && tcp_valid)
             *tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
@@ -1659,6 +1731,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
     uint16_t lkups_num = 0;
     const struct rte_flow_item *item = pattern;
     uint16_t item_num = 0;
+    uint16_t vlan_num = 0;
     enum ice_sw_tunnel_type tun_type = ICE_NON_TUN;
     struct ice_pattern_match_item *pattern_match_item = NULL;

@@ -1674,6 +1747,10 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
             if (eth_mask->type == UINT16_MAX)
                 tun_type = ICE_SW_TUN_AND_NON_TUN;
         }
+
+        if (item->type == RTE_FLOW_ITEM_TYPE_VLAN)
+            vlan_num++;
+
         /* reserve one more memory slot for ETH which may
          * consume 2 lookup items.
          */
@@ -1681,6 +1758,11 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
             item_num++;
     }

+    if (vlan_num == 2 && tun_type == ICE_SW_TUN_AND_NON_TUN)
+        tun_type = ICE_SW_TUN_AND_NON_TUN_QINQ;
+    else if (vlan_num == 2)
+        tun_type = ICE_NON_TUN_QINQ;
+
     list = rte_zmalloc(NULL, item_num * sizeof(*list), 0);
     if (!list) {
         rte_flow_error_set(error, EINVAL,
From patchwork Mon Dec 28 05:07:22 2020
X-Patchwork-Submitter: "Wang, Haiyue"
X-Patchwork-Id: 85759
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Haiyue Wang
To: dev@dpdk.org
Cc: qiming.yang@intel.com, jingjing.wu@intel.com, qi.z.zhang@intel.com, qi.fu@intel.com, Haiyue Wang
Date: Mon, 28 Dec 2020 13:07:22 +0800
Message-Id: <20201228050723.27265-5-haiyue.wang@intel.com>
In-Reply-To: <20201228050723.27265-1-haiyue.wang@intel.com>
References: <20201214071155.98764-1-haiyue.wang@intel.com> <20201228050723.27265-1-haiyue.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 4/5] net/ice: add the DCF VLAN handling

Add the DCF port representor infrastructure for the VFs of the DCF-attached PF. The standard ethdev APIs, such as the VLAN ops, can then be used to configure the VFs.

Signed-off-by: Qiming Yang
Signed-off-by: Haiyue Wang
---
 drivers/net/ice/ice_dcf.c                |   1 +
 drivers/net/ice/ice_dcf_ethdev.c         |  91 +++++-
 drivers/net/ice/ice_dcf_ethdev.h         |  20 ++
 drivers/net/ice/ice_dcf_vf_representor.c | 356 +++++++++++++++++++++++
 drivers/net/ice/meson.build              |   1 +
 5 files changed, 462 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/ice/ice_dcf_vf_representor.c

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 44dbd3bb8..4a9af3292 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -234,6 +234,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
     caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
            VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
+           VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
            VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;

     err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b0b2ecb0d..a9e78064d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -970,20 +970,97 @@ ice_dcf_cap_selected(struct rte_devargs *devargs)
     return ret;
 }

-static int eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
-                 struct rte_pci_device *pci_dev)
+static int
+eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
+              struct rte_pci_device *pci_dev)
 {
+    struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+    struct ice_dcf_vf_repr_param repr_param;
+    char repr_name[RTE_ETH_NAME_MAX_LEN];
+    struct ice_dcf_adapter *dcf_adapter;
+    struct rte_eth_dev *dcf_ethdev;
+    uint16_t dcf_vsi_id;
+    int i, ret;
+
     if (!ice_dcf_cap_selected(pci_dev->device.devargs))
         return 1;

-    return rte_eth_dev_pci_generic_probe(pci_dev,
-                         sizeof(struct ice_dcf_adapter),
-                         ice_dcf_dev_init);
+    ret = rte_eth_devargs_parse(pci_dev->device.devargs->args, &eth_da);
+    if (ret)
+        return ret;
+
+    ret = rte_eth_dev_pci_generic_probe(pci_dev,
+                        sizeof(struct ice_dcf_adapter),
+                        ice_dcf_dev_init);
+    if (ret || !eth_da.nb_representor_ports)
+        return ret;
+
+    dcf_ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+    if (dcf_ethdev == NULL)
+        return -ENODEV;
+
+    dcf_adapter = dcf_ethdev->data->dev_private;
+
+    if (eth_da.nb_representor_ports > dcf_adapter->real_hw.num_vfs ||
+        eth_da.nb_representor_ports >= RTE_MAX_ETHPORTS) {
+        PMD_DRV_LOG(ERR, "the number of port representors is too large: %u",
+                eth_da.nb_representor_ports);
+        return -EINVAL;
+    }
+
+    dcf_vsi_id = dcf_adapter->real_hw.vsi_id | VIRTCHNL_DCF_VF_VSI_VALID;
+
+    repr_param.adapter = dcf_adapter;
+    repr_param.switch_domain_id = 0;
+
+    for (i = 0; i < eth_da.nb_representor_ports; i++) {
+        uint16_t vf_id = eth_da.representor_ports[i];
+
+        if (vf_id >= dcf_adapter->real_hw.num_vfs) {
+            PMD_DRV_LOG(ERR, "VF ID %u is out of range (0 ~ %u)",
+                    vf_id, dcf_adapter->real_hw.num_vfs - 1);
+            ret = -EINVAL;
+            break;
+        }
+
+        if (dcf_adapter->real_hw.vf_vsi_map[vf_id] == dcf_vsi_id) {
+            PMD_DRV_LOG(ERR, "VF ID %u is DCF's ID.\n", vf_id);
+            ret = -EINVAL;
+            break;
+        }
+
+        repr_param.vf_id = vf_id;
+        snprintf(repr_name, sizeof(repr_name), "net_%s_representor_%u",
+             pci_dev->device.name, vf_id);
+        ret = rte_eth_dev_create(&pci_dev->device, repr_name,
+                     sizeof(struct ice_dcf_vf_repr),
+                     NULL, NULL,
ice_dcf_vf_repr_init, + &repr_param); + if (ret) { + PMD_DRV_LOG(ERR, "failed to create DCF VF representor %s", + repr_name); + break; + } + } + + return ret; } -static int eth_ice_dcf_pci_remove(struct rte_pci_device *pci_dev) +static int +eth_ice_dcf_pci_remove(struct rte_pci_device *pci_dev) { - return rte_eth_dev_pci_generic_remove(pci_dev, ice_dcf_dev_uninit); + struct rte_eth_dev *eth_dev; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (!eth_dev) + return 0; + + if (eth_dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) + return rte_eth_dev_pci_generic_remove(pci_dev, + ice_dcf_vf_repr_uninit); + else + return rte_eth_dev_pci_generic_remove(pci_dev, + ice_dcf_dev_uninit); } static const struct rte_pci_id pci_id_ice_dcf_map[] = { diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index b54528bea..bd5552332 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -22,9 +22,29 @@ struct ice_dcf_adapter { struct ice_dcf_hw real_hw; }; +struct ice_dcf_vf_repr_param { + struct ice_dcf_adapter *adapter; + uint16_t switch_domain_id; + uint16_t vf_id; +}; + +struct ice_dcf_vf_repr { + struct ice_dcf_adapter *dcf_adapter; + struct rte_ether_addr mac_addr; + uint16_t switch_domain_id; + uint16_t vf_id; + + uint16_t outer_vlan_tpid; + uint16_t pvid; + uint16_t hw_vlan_insert_pvid:1; +}; + void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw, uint8_t *msg, uint16_t msglen); int ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev); void ice_dcf_uninit_parent_adapter(struct rte_eth_dev *eth_dev); +int ice_dcf_vf_repr_init(struct rte_eth_dev *ethdev, void *init_param); +int ice_dcf_vf_repr_uninit(struct rte_eth_dev *ethdev); + #endif /* _ICE_DCF_ETHDEV_H_ */ diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c new file mode 100644 index 000000000..e9806895d --- /dev/null +++ b/drivers/net/ice/ice_dcf_vf_representor.c @@ -0,0 +1,356 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Intel Corporation + */ + +#include +#include + +#include + +#include "ice_dcf_ethdev.h" +#include "ice_rxtx.h" + +static uint16_t +ice_dcf_vf_repr_rx_burst(__rte_unused void *rxq, + __rte_unused struct rte_mbuf **rx_pkts, + __rte_unused uint16_t nb_pkts) +{ + return 0; +} + +static uint16_t +ice_dcf_vf_repr_tx_burst(__rte_unused void *txq, + __rte_unused struct rte_mbuf **tx_pkts, + __rte_unused uint16_t nb_pkts) +{ + return 0; +} + +static int +ice_dcf_vf_repr_dev_configure(__rte_unused struct rte_eth_dev *dev) +{ + return 0; +} + +static int +ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev) +{ + dev->data->dev_link.link_status = ETH_LINK_UP; + + return 0; +} + +static int +ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev) +{ + dev->data->dev_link.link_status = ETH_LINK_DOWN; + + return 0; +} + +static int +ice_dcf_vf_repr_dev_close(struct rte_eth_dev *dev) +{ + return ice_dcf_vf_repr_uninit(dev); +} + +static int +ice_dcf_vf_repr_rx_queue_setup(__rte_unused struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, + __rte_unused uint16_t nb_desc, + __rte_unused unsigned int socket_id, + __rte_unused const struct rte_eth_rxconf *conf, + __rte_unused struct rte_mempool *pool) +{ + return 0; +} + +static int +ice_dcf_vf_repr_tx_queue_setup(__rte_unused struct rte_eth_dev *dev, + __rte_unused uint16_t queue_id, + __rte_unused uint16_t nb_desc, + __rte_unused unsigned int socket_id, + __rte_unused const struct rte_eth_txconf *conf) +{ + return 0; +} + +static int 
+ice_dcf_vf_repr_promiscuous_enable(__rte_unused struct rte_eth_dev *ethdev) +{ + return 0; +} + +static int +ice_dcf_vf_repr_promiscuous_disable(__rte_unused struct rte_eth_dev *ethdev) +{ + return 0; +} + +static int +ice_dcf_vf_repr_allmulticast_enable(__rte_unused struct rte_eth_dev *dev) +{ + return 0; +} + +static int +ice_dcf_vf_repr_allmulticast_disable(__rte_unused struct rte_eth_dev *dev) +{ + return 0; +} + +static int +ice_dcf_vf_repr_link_update(__rte_unused struct rte_eth_dev *ethdev, + __rte_unused int wait_to_complete) +{ + return 0; +} + +static int +ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct ice_dcf_vf_repr *repr = dev->data->dev_private; + struct ice_dcf_hw *dcf_hw = + &repr->dcf_adapter->real_hw; + + dev_info->device = dev->device; + dev_info->max_mac_addrs = 1; + dev_info->max_rx_queues = dcf_hw->vsi_res->num_queue_pairs; + dev_info->max_tx_queues = dcf_hw->vsi_res->num_queue_pairs; + dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN; + dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX; + dev_info->hash_key_size = dcf_hw->vf_res->rss_key_size; + dev_info->reta_size = dcf_hw->vf_res->rss_lut_size; + dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL; + + dev_info->rx_offload_capa = + DEV_RX_OFFLOAD_VLAN_STRIP | + DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM | + DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | + DEV_RX_OFFLOAD_SCATTER | + DEV_RX_OFFLOAD_JUMBO_FRAME | + DEV_RX_OFFLOAD_VLAN_FILTER | + DEV_RX_OFFLOAD_VLAN_EXTEND | + DEV_RX_OFFLOAD_RSS_HASH; + dev_info->tx_offload_capa = + DEV_TX_OFFLOAD_VLAN_INSERT | + DEV_TX_OFFLOAD_IPV4_CKSUM | + DEV_TX_OFFLOAD_UDP_CKSUM | + DEV_TX_OFFLOAD_TCP_CKSUM | + DEV_TX_OFFLOAD_SCTP_CKSUM | + DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | + DEV_TX_OFFLOAD_TCP_TSO | + DEV_TX_OFFLOAD_VXLAN_TNL_TSO | + DEV_TX_OFFLOAD_GRE_TNL_TSO | + DEV_TX_OFFLOAD_IPIP_TNL_TSO | + DEV_TX_OFFLOAD_GENEVE_TNL_TSO | + DEV_TX_OFFLOAD_MULTI_SEGS; + + dev_info->default_rxconf = (struct rte_eth_rxconf) { + .rx_thresh = { + .pthresh = ICE_DEFAULT_RX_PTHRESH, + .hthresh = ICE_DEFAULT_RX_HTHRESH, + .wthresh = ICE_DEFAULT_RX_WTHRESH, + }, + .rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH, + .rx_drop_en = 0, + .offloads = 0, + }; + + dev_info->default_txconf = (struct rte_eth_txconf) { + .tx_thresh = { + .pthresh = ICE_DEFAULT_TX_PTHRESH, + .hthresh = ICE_DEFAULT_TX_HTHRESH, + .wthresh = ICE_DEFAULT_TX_WTHRESH, + }, + .tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH, + .tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH, + .offloads = 0, + }; + + dev_info->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = ICE_MAX_RING_DESC, + .nb_min = ICE_MIN_RING_DESC, + .nb_align = ICE_ALIGN_RING_DESC, + }; + + dev_info->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = ICE_MAX_RING_DESC, + .nb_min = ICE_MIN_RING_DESC, + .nb_align = ICE_ALIGN_RING_DESC, + }; + + dev_info->switch_info.name = dcf_hw->eth_dev->device->name; + dev_info->switch_info.domain_id = repr->switch_domain_id; + dev_info->switch_info.port_id = repr->vf_id; + + return 0; +} + +static int +ice_dcf_vlan_offload_config(struct ice_dcf_vf_repr *repr, + struct virtchnl_dcf_vlan_offload *vlan_offload) +{ + struct dcf_virtchnl_cmd args; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_DCF_VLAN_OFFLOAD; + args.req_msg = (uint8_t *)vlan_offload; + args.req_msglen = sizeof(*vlan_offload); + + return ice_dcf_execute_virtchnl_cmd(&repr->dcf_adapter->real_hw, &args); +} + +static __rte_always_inline bool +ice_dcf_vlan_offload_ena(struct ice_dcf_vf_repr 
*repr)
+{
+    return !!(repr->dcf_adapter->real_hw.vf_res->vf_cap_flags &
+          VIRTCHNL_VF_OFFLOAD_VLAN_V2);
+}
+
+static int
+ice_dcf_vf_repr_vlan_pvid_set(struct rte_eth_dev *dev,
+                  uint16_t pvid, int on)
+{
+    struct ice_dcf_vf_repr *repr = dev->data->dev_private;
+    struct virtchnl_dcf_vlan_offload vlan_offload;
+    int err;
+
+    if (!ice_dcf_vlan_offload_ena(repr))
+        return -ENOTSUP;
+
+    memset(&vlan_offload, 0, sizeof(vlan_offload));
+
+    vlan_offload.vf_id = repr->vf_id;
+    vlan_offload.tpid = repr->outer_vlan_tpid;
+    vlan_offload.vlan_flags = (VIRTCHNL_DCF_VLAN_TYPE_OUTER <<
+                   VIRTCHNL_DCF_VLAN_TYPE_S) |
+                  (VIRTCHNL_DCF_VLAN_INSERT_PORT_BASED <<
+                   VIRTCHNL_DCF_VLAN_INSERT_MODE_S);
+    vlan_offload.vlan_id = on ? pvid : 0;
+
+    err = ice_dcf_vlan_offload_config(repr, &vlan_offload);
+    if (!err) {
+        repr->pvid = vlan_offload.vlan_id;
+        repr->hw_vlan_insert_pvid = on ? 1 : 0;
+    }
+
+    return err;
+}
+
+static int
+ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+    struct ice_dcf_vf_repr *repr = dev->data->dev_private;
+    struct rte_eth_rxmode *rxmode;
+
+    if (!ice_dcf_vlan_offload_ena(repr))
+        return -ENOTSUP;
+
+    rxmode = &dev->data->dev_conf.rxmode;
+
+    if (mask & ETH_VLAN_EXTEND_MASK) {
+        if (!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND))
+            ice_dcf_vf_repr_vlan_pvid_set(dev, 0, 0);
+    }
+
+    return 0;
+}
+
+static int
+ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
+                  enum rte_vlan_type vlan_type, uint16_t tpid)
+{
+    struct ice_dcf_vf_repr *repr = dev->data->dev_private;
+
+    if (!ice_dcf_vlan_offload_ena(repr))
+        return -ENOTSUP;
+
+    if (vlan_type != ETH_VLAN_TYPE_INNER &&
+        vlan_type != ETH_VLAN_TYPE_OUTER) {
+        PMD_DRV_LOG(ERR, "Unsupported vlan type %d", vlan_type);
+        return -EINVAL;
+    }
+
+    if (vlan_type == ETH_VLAN_TYPE_INNER) {
+        PMD_DRV_LOG(ERR,
+                "Can accelerate only outer VLAN in QinQ\n");
+        return -EINVAL;
+    }
+
+    if (!(dev->data->dev_conf.rxmode.offloads &
+          DEV_RX_OFFLOAD_VLAN_EXTEND)) {
+        PMD_DRV_LOG(ERR,
+                "QinQ not enabled.");
+        return -EINVAL;
+    }
+
+    if (tpid != RTE_ETHER_TYPE_QINQ &&
+        tpid != RTE_ETHER_TYPE_VLAN &&
+        tpid != RTE_ETHER_TYPE_QINQ1) {
+        PMD_DRV_LOG(ERR,
+                "Invalid TPID: 0x%04x\n", tpid);
+        return -EINVAL;
+    }
+
+    repr->outer_vlan_tpid = tpid;
+
+    return ice_dcf_vf_repr_vlan_pvid_set(dev,
+                         repr->pvid,
+                         repr->hw_vlan_insert_pvid);
+}
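With these ops wired up, a VF can be managed through the normal ethdev calls on its representor port. As a sketch (the PCI address is illustrative; cap=dcf and the representor list follow the existing devargs syntax), the representors are created with an EAL option such as "-a 18:01.0,cap=dcf,representor=[0-1]", and then:

/* The tpid_set handler above requires DEV_RX_OFFLOAD_VLAN_EXTEND to be
 * enabled in the representor's rxmode at configure time. */
int ret = rte_eth_dev_set_vlan_ether_type(repr_port_id,
                                          ETH_VLAN_TYPE_OUTER,
                                          RTE_ETHER_TYPE_QINQ); /* 0x88a8 */
if (ret == 0)
    /* Program port-based VLAN insertion of VID 100 for the VF. */
    ret = rte_eth_dev_set_vlan_pvid(repr_port_id, 100, 1);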
+
+static const struct eth_dev_ops ice_dcf_vf_repr_dev_ops = {
+    .dev_configure = ice_dcf_vf_repr_dev_configure,
+    .dev_start = ice_dcf_vf_repr_dev_start,
+    .dev_stop = ice_dcf_vf_repr_dev_stop,
+    .dev_close = ice_dcf_vf_repr_dev_close,
+    .dev_infos_get = ice_dcf_vf_repr_dev_info_get,
+    .rx_queue_setup = ice_dcf_vf_repr_rx_queue_setup,
+    .tx_queue_setup = ice_dcf_vf_repr_tx_queue_setup,
+    .promiscuous_enable = ice_dcf_vf_repr_promiscuous_enable,
+    .promiscuous_disable = ice_dcf_vf_repr_promiscuous_disable,
+    .allmulticast_enable = ice_dcf_vf_repr_allmulticast_enable,
+    .allmulticast_disable = ice_dcf_vf_repr_allmulticast_disable,
+    .link_update = ice_dcf_vf_repr_link_update,
+    .vlan_offload_set = ice_dcf_vf_repr_vlan_offload_set,
+    .vlan_pvid_set = ice_dcf_vf_repr_vlan_pvid_set,
+    .vlan_tpid_set = ice_dcf_vf_repr_vlan_tpid_set,
+};
+
+int
+ice_dcf_vf_repr_init(struct rte_eth_dev *ethdev, void *init_param)
+{
+    struct ice_dcf_vf_repr *repr = ethdev->data->dev_private;
+    struct ice_dcf_vf_repr_param *param = init_param;
+
+    repr->dcf_adapter = param->adapter;
+    repr->switch_domain_id = param->switch_domain_id;
+    repr->vf_id = param->vf_id;
+    repr->outer_vlan_tpid = RTE_ETHER_TYPE_VLAN;
+
+    ethdev->dev_ops = &ice_dcf_vf_repr_dev_ops;
+
+    ethdev->rx_pkt_burst = ice_dcf_vf_repr_rx_burst;
+    ethdev->tx_pkt_burst = ice_dcf_vf_repr_tx_burst;
+
+    ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+    ethdev->data->representor_id = repr->vf_id;
+
+    ethdev->data->mac_addrs = &repr->mac_addr;
+
+    rte_eth_random_addr(repr->mac_addr.addr_bytes);
+
+    return 0;
+}
+
+int
+ice_dcf_vf_repr_uninit(struct rte_eth_dev *ethdev)
+{
+    ethdev->data->mac_addrs = NULL;
+
+    return 0;
+}
diff --git a/drivers/net/ice/meson.build b/drivers/net/ice/meson.build
index 7b291269d..d58936089 100644
--- a/drivers/net/ice/meson.build
+++ b/drivers/net/ice/meson.build
@@ -61,6 +61,7 @@ if arch_subdir == 'x86'
 endif

 sources += files('ice_dcf.c',
+    'ice_dcf_vf_representor.c',
     'ice_dcf_ethdev.c',
     'ice_dcf_parent.c')

From patchwork Mon Dec 28 05:07:23 2020
X-Patchwork-Submitter: "Wang, Haiyue"
X-Patchwork-Id: 85758
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Haiyue Wang
To: dev@dpdk.org
Cc: qiming.yang@intel.com, jingjing.wu@intel.com, qi.z.zhang@intel.com, qi.fu@intel.com, Haiyue Wang, Beilei Xing
Date: Mon, 28 Dec 2020 13:07:23 +0800
Message-Id: <20201228050723.27265-6-haiyue.wang@intel.com>
In-Reply-To: <20201228050723.27265-1-haiyue.wang@intel.com>
References: <20201214071155.98764-1-haiyue.wang@intel.com> <20201228050723.27265-1-haiyue.wang@intel.com>
Subject: [dpdk-dev] [PATCH v3 5/5] net/iavf: implement new VLAN capability handling

The new VLAN virtchnl opcodes introduce new settings, such as filtering and stripping with different TPIDs.
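From the application's point of view, the V2 path is exercised through the existing ethdev VLAN API; a sketch of toggling stripping and adding a filter on an iavf port (port_id and prior configuration assumed):

int mask = rte_eth_dev_get_vlan_offload(port_id);

/* With VIRTCHNL_VF_OFFLOAD_VLAN_V2 negotiated, this is translated into
 * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 carrying the currently selected
 * TPID (inner 0x8100 by default, the outer TPID when VLAN_EXTEND/QinQ
 * is enabled). */
rte_eth_dev_set_vlan_offload(port_id, mask | ETH_VLAN_STRIP_OFFLOAD);

/* Translated into VIRTCHNL_OP_ADD_VLAN_V2 with the same TPID rules. */
rte_eth_dev_vlan_filter(port_id, 100, 1);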
Signed-off-by: Qiming Yang Signed-off-by: Haiyue Wang --- drivers/net/iavf/iavf.h | 10 +++ drivers/net/iavf/iavf_ethdev.c | 107 +++++++++++++++++++++++++ drivers/net/iavf/iavf_vchnl.c | 141 +++++++++++++++++++++++++++++++++ 3 files changed, 258 insertions(+) diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h index 9754273b2..c5d53bd9c 100644 --- a/drivers/net/iavf/iavf.h +++ b/drivers/net/iavf/iavf.h @@ -139,6 +139,7 @@ struct iavf_info { struct virtchnl_version_info virtchnl_version; struct virtchnl_vf_resource *vf_res; /* VF resource */ struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */ + struct virtchnl_vlan_caps vlan_v2_caps; uint64_t supported_rxdid; uint8_t *proto_xtr; /* proto xtr type for all queues */ volatile enum virtchnl_ops pend_cmd; /* pending command not finished */ @@ -173,6 +174,10 @@ struct iavf_info { struct iavf_fdir_info fdir; /* flow director info */ /* indicate large VF support enabled or not */ bool lv_enabled; + + /* used to set the VLAN Ethernet type for virtchnl VLAN V2 */ + uint16_t outer_vlan_tpid; + uint16_t inner_vlan_tpid; }; #define IAVF_MAX_PKT_TYPE 1024 @@ -297,6 +302,8 @@ int iavf_get_vf_resource(struct iavf_adapter *adapter); void iavf_handle_virtchnl_msg(struct rte_eth_dev *dev); int iavf_enable_vlan_strip(struct iavf_adapter *adapter); int iavf_disable_vlan_strip(struct iavf_adapter *adapter); +int iavf_config_vlan_strip_v2(struct iavf_adapter *adapter, uint16_t tpid, + bool enable); int iavf_switch_queue(struct iavf_adapter *adapter, uint16_t qid, bool rx, bool on); int iavf_switch_queue_lv(struct iavf_adapter *adapter, uint16_t qid, @@ -310,6 +317,7 @@ int iavf_configure_rss_key(struct iavf_adapter *adapter); int iavf_configure_queues(struct iavf_adapter *adapter, uint16_t num_queue_pairs, uint16_t index); int iavf_get_supported_rxdid(struct iavf_adapter *adapter); +int iavf_get_vlan_offload_caps_v2(struct iavf_adapter *adapter); int iavf_config_irq_map(struct iavf_adapter *adapter); int iavf_config_irq_map_lv(struct iavf_adapter *adapter, uint16_t num, uint16_t index); @@ -323,6 +331,8 @@ int iavf_config_promisc(struct iavf_adapter *adapter, bool enable_unicast, int iavf_add_del_eth_addr(struct iavf_adapter *adapter, struct rte_ether_addr *addr, bool add); int iavf_add_del_vlan(struct iavf_adapter *adapter, uint16_t vlanid, bool add); +int iavf_add_del_vlan_v2(struct iavf_adapter *adapter, uint16_t tpid, + uint16_t vlanid, bool add); int iavf_fdir_add(struct iavf_adapter *adapter, struct iavf_fdir_conf *filter); int iavf_fdir_del(struct iavf_adapter *adapter, struct iavf_fdir_conf *filter); int iavf_fdir_check(struct iavf_adapter *adapter, diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 75361b73b..d6771c0d9 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -100,6 +100,8 @@ static void iavf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index); static int iavf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); static int iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +static int iavf_dev_vlan_tpid_set(struct rte_eth_dev *dev, + enum rte_vlan_type vlan_type, uint16_t tpid); static int iavf_dev_rss_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); @@ -176,6 +178,7 @@ static const struct eth_dev_ops iavf_eth_dev_ops = { .mac_addr_remove = iavf_dev_del_mac_addr, .set_mc_addr_list = iavf_set_mc_addr_list, .vlan_filter_set = iavf_dev_vlan_filter_set, + .vlan_tpid_set = 
iavf_dev_vlan_tpid_set, .vlan_offload_set = iavf_dev_vlan_offload_set, .rx_queue_start = iavf_dev_rx_queue_start, .rx_queue_stop = iavf_dev_rx_queue_stop, @@ -326,6 +329,18 @@ iavf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num) return 0; } +static inline uint16_t +iavf_curr_vlan_tpid(struct rte_eth_dev *dev) +{ + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); + bool qinq = !!(dev->data->dev_conf.rxmode.offloads & + DEV_RX_OFFLOAD_VLAN_EXTEND); + + return qinq ? vf->outer_vlan_tpid : vf->inner_vlan_tpid; +} + static int iavf_dev_configure(struct rte_eth_dev *dev) { @@ -387,6 +402,12 @@ iavf_dev_configure(struct rte_eth_dev *dev) vf->max_rss_qregion = IAVF_MAX_NUM_QUEUES_DFLT; } + /* Vlan v2 stripping setting */ + if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) + iavf_config_vlan_strip_v2(ad, iavf_curr_vlan_tpid(dev), + !!(dev_conf->rxmode.offloads & + DEV_RX_OFFLOAD_VLAN_STRIP)); + /* Vlan stripping setting */ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN) { if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) @@ -782,6 +803,9 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_VLAN_FILTER | DEV_RX_OFFLOAD_RSS_HASH; + if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) + dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND; + dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT | @@ -987,6 +1011,47 @@ iavf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index) vf->mac_num--; } +static int +iavf_dev_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type, + uint16_t tpid) +{ + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); + bool qinq = !!(dev->data->dev_conf.rxmode.offloads & + DEV_RX_OFFLOAD_VLAN_EXTEND); + + if (vlan_type != ETH_VLAN_TYPE_INNER && + vlan_type != ETH_VLAN_TYPE_OUTER) { + PMD_DRV_LOG(ERR, "Unsupported vlan type"); + return -EINVAL; + } + + if (!qinq) { + PMD_DRV_LOG(ERR, "QinQ not enabled"); + return -EINVAL; + } + + if (vlan_type == ETH_VLAN_TYPE_OUTER) { + switch (tpid) { + case RTE_ETHER_TYPE_QINQ: + case RTE_ETHER_TYPE_VLAN: + case RTE_ETHER_TYPE_QINQ1: + vf->outer_vlan_tpid = tpid; + break; + default: + PMD_DRV_LOG(ERR, "Invalid TPID: %x", tpid); + return -EINVAL; + } + } else { + PMD_DRV_LOG(ERR, + "Can accelerate only outer vlan in QinQ"); + return -EINVAL; + } + + return 0; +} + static int iavf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) { @@ -995,6 +1060,15 @@ iavf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); int err; + if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) { + uint16_t tpid = iavf_curr_vlan_tpid(dev); + + err = iavf_add_del_vlan_v2(adapter, tpid, vlan_id, on); + if (err) + return -EIO; + return 0; + } + if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) return -ENOTSUP; @@ -1004,6 +1078,26 @@ iavf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) return 0; } +static int +iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask) +{ + struct iavf_adapter *adapter = + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + uint16_t tpid = iavf_curr_vlan_tpid(dev); + int err; + + if (mask & 
ETH_VLAN_STRIP_MASK) { + err = iavf_config_vlan_strip_v2(adapter, tpid, + !!(dev_conf->rxmode.offloads & + DEV_RX_OFFLOAD_VLAN_STRIP)); + if (err) + return -EIO; + } + + return 0; +} + static int iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) { @@ -1013,6 +1107,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) struct rte_eth_conf *dev_conf = &dev->data->dev_conf; int err; + if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) + return iavf_dev_vlan_offload_set_v2(dev, mask); + if (!(vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) return -ENOTSUP; @@ -1896,6 +1993,16 @@ iavf_init_vf(struct rte_eth_dev *dev) } } + if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) { + if (iavf_get_vlan_offload_caps_v2(adapter) != 0) { + PMD_INIT_LOG(ERR, "failed to do get VLAN offload v2 capabilities"); + goto err_rss; + } + + vf->outer_vlan_tpid = RTE_ETHER_TYPE_VLAN; + vf->inner_vlan_tpid = RTE_ETHER_TYPE_VLAN; + } + iavf_init_proto_xtr(dev); return 0; diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c index c33194cdc..b2b5ace31 100644 --- a/drivers/net/iavf/iavf_vchnl.c +++ b/drivers/net/iavf/iavf_vchnl.c @@ -174,6 +174,7 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args) case VIRTCHNL_OP_VERSION: case VIRTCHNL_OP_GET_VF_RESOURCES: case VIRTCHNL_OP_GET_SUPPORTED_RXDIDS: + case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS: /* for init virtchnl ops, need to poll the response */ do { result = iavf_read_msg_from_pf(adapter, args->out_size, @@ -366,6 +367,28 @@ iavf_enable_vlan_strip(struct iavf_adapter *adapter) return ret; } +static uint16_t +iavf_vc_vlan_tpid_flag(uint16_t tpid) +{ + uint16_t flag = VIRTCHNL_VLAN_UNSUPPORTED; + + switch (tpid) { + case RTE_ETHER_TYPE_VLAN: + flag = VIRTCHNL_VLAN_ETHERTYPE_8100; + break; + case RTE_ETHER_TYPE_QINQ1: + flag = VIRTCHNL_VLAN_ETHERTYPE_9100; + break; + case RTE_ETHER_TYPE_QINQ: + flag = VIRTCHNL_VLAN_ETHERTYPE_88A8; + break; + default: + break; + } + + return flag; +} + int iavf_disable_vlan_strip(struct iavf_adapter *adapter) { @@ -387,6 +410,52 @@ iavf_disable_vlan_strip(struct iavf_adapter *adapter) return ret; } +int +iavf_config_vlan_strip_v2(struct iavf_adapter *adapter, uint16_t tpid, + bool enable) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); + struct virtchnl_vlan_strip vlan_strip; + struct iavf_cmd_info args; + u16 stripping_caps; + u16 vlan_tpid_flag; + u16 *vlan_setting; + int ret; + + /* Give priority over outer if it's enabled */ + if (vf->vlan_v2_caps.offloads.outer_stripping) { + stripping_caps = vf->vlan_v2_caps.offloads.outer_stripping; + vlan_setting = &vlan_strip.outer_ethertype_setting; + } else if (vf->vlan_v2_caps.offloads.inner_stripping) { + stripping_caps = vf->vlan_v2_caps.offloads.inner_stripping; + vlan_setting = &vlan_strip.inner_ethertype_setting; + } else { + return -ENOTSUP; + } + + vlan_tpid_flag = iavf_vc_vlan_tpid_flag(tpid); + if (!(stripping_caps & vlan_tpid_flag)) + return -EINVAL; + + memset(&vlan_strip, 0, sizeof(vlan_strip)); + vlan_strip.vsi_id = vf->vsi_res->vsi_id; + *vlan_setting = vlan_tpid_flag; + + args.ops = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 : + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2; + args.in_args = (uint8_t *)&vlan_strip; + args.in_args_size = sizeof(vlan_strip); + args.out_buffer = vf->aq_resp; + args.out_size = IAVF_AQ_BUF_SZ; + ret = iavf_execute_vf_cmd(adapter, &args); + if (ret) + PMD_DRV_LOG(ERR, "fail to execute command %s", + enable ? 
"VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" : + "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2"); + + return ret; +} + #define VIRTCHNL_VERSION_MAJOR_START 1 #define VIRTCHNL_VERSION_MINOR_START 1 @@ -459,6 +528,7 @@ iavf_get_vf_resource(struct iavf_adapter *adapter) VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES | VIRTCHNL_VF_OFFLOAD_CRC | + VIRTCHNL_VF_OFFLOAD_VLAN_V2 | VIRTCHNL_VF_LARGE_NUM_QPAIRS; args.in_args = (uint8_t *)∩︀ @@ -522,6 +592,31 @@ iavf_get_supported_rxdid(struct iavf_adapter *adapter) return 0; } +int +iavf_get_vlan_offload_caps_v2(struct iavf_adapter *adapter) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); + struct iavf_cmd_info args; + int ret; + + args.ops = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS; + args.in_args = NULL; + args.in_args_size = 0; + args.out_buffer = vf->aq_resp; + args.out_size = IAVF_AQ_BUF_SZ; + + ret = iavf_execute_vf_cmd(adapter, &args); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS"); + return ret; + } + + rte_memcpy(&vf->vlan_v2_caps, vf->aq_resp, sizeof(vf->vlan_v2_caps)); + + return 0; +} + int iavf_enable_queues(struct iavf_adapter *adapter) { @@ -1167,6 +1262,52 @@ iavf_add_del_vlan(struct iavf_adapter *adapter, uint16_t vlanid, bool add) return err; } +int +iavf_add_del_vlan_v2(struct iavf_adapter *adapter, uint16_t tpid, + uint16_t vlanid, bool add) +{ + struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter); + struct virtchnl_vlan_filter_list_v2 vlan_list; + struct virtchnl_vlan *vlan_setting; + struct iavf_cmd_info args; + uint16_t filtering_caps; + uint16_t vlan_tpid_flag; + int err; + + /* Give priority over outer if it's enabled */ + if (vf->vlan_v2_caps.filtering.outer) { + filtering_caps = vf->vlan_v2_caps.filtering.outer; + vlan_setting = &vlan_list.filters[0].outer; + } else if (vf->vlan_v2_caps.filtering.inner) { + filtering_caps = vf->vlan_v2_caps.filtering.inner; + vlan_setting = &vlan_list.filters[0].inner; + } else { + return -ENOTSUP; + } + + vlan_tpid_flag = iavf_vc_vlan_tpid_flag(tpid); + if (!(filtering_caps & vlan_tpid_flag)) + return -EINVAL; + + memset(&vlan_list, 0, sizeof(vlan_list)); + vlan_list.vport_id = vf->vsi_res->vsi_id; + vlan_list.num_elements = 1; + vlan_setting->tci = vlanid; + vlan_setting->tpid = tpid; + + args.ops = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2; + args.in_args = (uint8_t *)&vlan_list; + args.in_args_size = sizeof(vlan_list); + args.out_buffer = vf->aq_resp; + args.out_size = IAVF_AQ_BUF_SZ; + err = iavf_execute_vf_cmd(adapter, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command %s", + add ? "OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2"); + + return err; +} + int iavf_fdir_add(struct iavf_adapter *adapter, struct iavf_fdir_conf *filter)