From patchwork Fri Jun 5 20:17:26 2020
X-Patchwork-Id: 70875
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:26 +0000
Message-Id: <20200605201737.33766-2-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 01/12] net/ice: init RSS and supported RXDID in DCF

From: Qi Zhang

Enable RSS parameters initialization and get the supported flexible
descriptor RXDIDs bitmap from PF during DCF init.
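For illustration only (this sketch is not part of the patch): the PF answers
VIRTCHNL_OP_GET_SUPPORTED_RXDIDS with a 64-bit bitmap in which a set bit N
means flexible descriptor format N is available, so the value stored in
hw->supported_rxdid can be tested per RXDID along these lines:

	#include <stdint.h>

	/* Hypothetical helper: RXDID values index bits of the bitmap. */
	static inline int
	rxdid_is_supported(uint64_t supported_rxdid, unsigned int rxdid)
	{
		return (supported_rxdid & (1ULL << rxdid)) != 0;
	}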
Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf.c | 54 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_dcf.h |  3 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 0cd5d1bf6..93fabd5f7 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -233,7 +233,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
 	caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
-	       VF_BASE_MODE_OFFLOADS;
+	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;

 	err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
 					  (uint8_t *)&caps, sizeof(caps));
@@ -547,6 +547,30 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	return err;
 }

+static int
+ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
+{
+	int err;
+
+	err = ice_dcf_send_cmd_req_no_irq(hw,
+					  VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  NULL, 0);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to send OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	err = ice_dcf_recv_cmd_rsp_no_irq(hw, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  (uint8_t *)&hw->supported_rxdid,
+					  sizeof(uint64_t), NULL);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to get response of OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	return 0;
+}
+
 int
 ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 {
@@ -620,6 +644,29 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 		goto err_alloc;
 	}

+	/* Allocate memory for RSS info */
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		hw->rss_key = rte_zmalloc(NULL,
+					  hw->vf_res->rss_key_size, 0);
+		if (!hw->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_alloc;
+		}
+		hw->rss_lut = rte_zmalloc("rss_lut",
+					  hw->vf_res->rss_lut_size, 0);
+		if (!hw->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+		if (ice_dcf_get_supported_rxdid(hw) != 0) {
+			PMD_INIT_LOG(ERR, "failed to do get supported rxdid");
+			goto err_rss;
+		}
+	}
+
 	hw->eth_dev = eth_dev;
 	rte_intr_callback_register(&pci_dev->intr_handle,
 				   ice_dcf_dev_interrupt_handler, hw);
@@ -628,6 +675,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)

 	return 0;

+err_rss:
+	rte_free(hw->rss_key);
+	rte_free(hw->rss_lut);
 err_alloc:
 	rte_free(hw->vf_res);
 err_api:
@@ -655,4 +705,6 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->arq_buf);
 	rte_free(hw->vf_vsi_map);
 	rte_free(hw->vf_res);
+	rte_free(hw->rss_lut);
+	rte_free(hw->rss_key);
 }
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index d2e447b48..152266e3c 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -50,6 +50,9 @@ struct ice_dcf_hw {
 	uint16_t vsi_id;

 	struct rte_eth_dev *eth_dev;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t supported_rxdid;
 };

 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,

From patchwork Fri Jun 5 20:17:27 2020
X-Patchwork-Id: 70876
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:27 +0000
Message-Id: <20200605201737.33766-3-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 02/12] net/ice: complete device info get in DCF

From: Qi Zhang

Add support to get complete device information for DCF, including
Rx/Tx offload capabilities and default configuration.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf_ethdev.c | 72 ++++++++++++++++++++++++++++++--
 1 file changed, 69 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e5ba1a61f..7f24ef81a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
 #include "ice_generic_flow.h"
 #include "ice_dcf_ethdev.h"
+#include "ice_rxtx.h"

 static uint16_t
 ice_dcf_recv_pkts(__rte_unused void *rx_queue,
@@ -66,11 +67,76 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 		     struct rte_eth_dev_info *dev_info)
 {
 	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;

 	dev_info->max_mac_addrs = 1;
-	dev_info->max_rx_pktlen = (uint32_t)-1;
-	dev_info->max_rx_queues = RTE_DIM(adapter->rxqs);
-	dev_info->max_tx_queues = RTE_DIM(adapter->txqs);
+	dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = hw->vf_res->rss_key_size;
+	dev_info->reta_size = hw->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_VLAN_FILTER |
+		DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+		DEV_TX_OFFLOAD_GRE_TNL_TSO |
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};

 	return 0;
 }

From patchwork Fri Jun 5 20:17:28 2020
X-Patchwork-Id: 70877
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:28 +0000
Message-Id: <20200605201737.33766-4-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 03/12] net/ice: complete dev configure in DCF

From: Qi Zhang

Enable device configuration function in DCF.
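A minimal usage sketch (not part of the patch; assumes a valid port_id and
error handling elided) of an application configuration that takes the RSS
branch added below:

	#include <rte_ethdev.h>

	/* Request RSS multi-queue mode so ice_dcf_dev_configure() also
	 * turns on DEV_RX_OFFLOAD_RSS_HASH; one Rx/Tx queue for brevity.
	 */
	static int
	configure_dcf_port(uint16_t port_id)
	{
		struct rte_eth_conf conf = {
			.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		};

		return rte_eth_dev_configure(port_id, 1, 1, &conf);
	}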
Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf_ethdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 7f24ef81a..e8bed1362 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -59,6 +59,15 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 static int
 ice_dcf_dev_configure(__rte_unused struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
 	return 0;
 }

From patchwork Fri Jun 5 20:17:29 2020
X-Patchwork-Id: 70878
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:29 +0000
Message-Id: <20200605201737.33766-5-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 04/12] net/ice: complete queue setup in DCF

From: Qi Zhang

Delete original DCF queue setup functions and use ice queue setup and
release functions instead.
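A hedged sketch (assumed names: port_id, mp; error handling trimmed; not
part of the patch) of the standard ethdev calls that now reach
ice_rx_queue_setup()/ice_tx_queue_setup() through the ops table below:

	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mempool.h>

	/* Set up one Rx and one Tx queue with 512 descriptors and the
	 * driver's default per-queue configuration (NULL conf).
	 */
	static int
	setup_one_queue_pair(uint16_t port_id, struct rte_mempool *mp)
	{
		int ret;

		ret = rte_eth_rx_queue_setup(port_id, 0, 512,
					     rte_socket_id(), NULL, mp);
		if (ret < 0)
			return ret;

		return rte_eth_tx_queue_setup(port_id, 0, 512,
					      rte_socket_id(), NULL);
	}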
Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf_ethdev.c | 42 +++-----------------------------
 drivers/net/ice/ice_dcf_ethdev.h |  3 ---
 drivers/net/ice/ice_dcf_parent.c |  7 ++++++
 3 files changed, 11 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e8bed1362..df906cd54 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -231,11 +231,6 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
 	ice_dcf_uninit_hw(dev, &adapter->real_hw);
 }

-static void
-ice_dcf_queue_release(__rte_unused void *q)
-{
-}
-
 static int
 ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 		    __rte_unused int wait_to_complete)
@@ -243,45 +238,16 @@ ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }

-static int
-ice_dcf_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t rx_queue_id,
-		       __rte_unused uint16_t nb_rx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_rxconf *rx_conf,
-		       __rte_unused struct rte_mempool *mb_pool)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->rx_queues[rx_queue_id] = &adapter->rxqs[rx_queue_id];
-
-	return 0;
-}
-
-static int
-ice_dcf_tx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t tx_queue_id,
-		       __rte_unused uint16_t nb_tx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_txconf *tx_conf)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->tx_queues[tx_queue_id] = &adapter->txqs[tx_queue_id];
-
-	return 0;
-}
-
 static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.dev_start = ice_dcf_dev_start,
 	.dev_stop = ice_dcf_dev_stop,
 	.dev_close = ice_dcf_dev_close,
 	.dev_configure = ice_dcf_dev_configure,
 	.dev_infos_get = ice_dcf_dev_info_get,
-	.rx_queue_setup = ice_dcf_rx_queue_setup,
-	.tx_queue_setup = ice_dcf_tx_queue_setup,
-	.rx_queue_release = ice_dcf_queue_release,
-	.tx_queue_release = ice_dcf_queue_release,
+	.rx_queue_setup = ice_rx_queue_setup,
+	.tx_queue_setup = ice_tx_queue_setup,
+	.rx_queue_release = ice_rx_queue_release,
+	.tx_queue_release = ice_tx_queue_release,
 	.link_update = ice_dcf_link_update,
 	.stats_get = ice_dcf_stats_get,
 	.stats_reset = ice_dcf_stats_reset,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index e60e808d8..b54528bea 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -19,10 +19,7 @@ struct ice_dcf_queue {

 struct ice_dcf_adapter {
 	struct ice_adapter parent; /* Must be first */
-
 	struct ice_dcf_hw real_hw;
-	struct ice_dcf_queue rxqs[ICE_DCF_MAX_RINGS];
-	struct ice_dcf_queue txqs[ICE_DCF_MAX_RINGS];
 };

 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index bdfc7d430..f9c7d9737 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -335,6 +335,13 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	parent_adapter->eth_dev = eth_dev;
 	parent_adapter->pf.adapter = parent_adapter;
 	parent_adapter->pf.dev_data = eth_dev->data;
+	/* create a dummy main_vsi */
+	parent_adapter->pf.main_vsi =
+		rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!parent_adapter->pf.main_vsi)
+		return -ENOMEM;
+	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
 	parent_hw->vendor_id = ICE_INTEL_VENDOR_ID;

From patchwork Fri Jun 5 20:17:30 2020
X-Patchwork-Id: 70879
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:30 +0000
Message-Id: <20200605201737.33766-6-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 05/12] net/ice: add stop flag for device start / stop

From: Qi Zhang

Add stop flag for DCF device start and stop.
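The guard pattern, reduced to a hedged standalone sketch (names here are
illustrative, not the driver's):

	#include <stdbool.h>

	struct dev_state {
		bool stopped;
	};

	/* A second stop is a no-op: the flag makes stop idempotent,
	 * which matters because close paths may call stop
	 * unconditionally.
	 */
	static void
	dev_stop(struct dev_state *s)
	{
		if (s->stopped)
			return;
		/* ... stop queues, mark link down ... */
		s->stopped = true;
	}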
Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf_ethdev.c | 12 ++++++++++++
 drivers/net/ice/ice_dcf_parent.c |  1 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index df906cd54..62ef71ddb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,11 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->pf.adapter_stopped = 0;
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;

 	return 0;
@@ -53,7 +58,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	if (ad->pf.adapter_stopped == 1)
+		return;
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	ad->pf.adapter_stopped = 1;
 }

 static int
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index f9c7d9737..8ad8bea1a 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -341,6 +341,7 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	if (!parent_adapter->pf.main_vsi)
 		return -ENOMEM;
 	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+	parent_adapter->pf.adapter_stopped = 1;

 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;

From patchwork Fri Jun 5 20:17:31 2020
X-Patchwork-Id: 70880
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:31 +0000
Message-Id: <20200605201737.33766-7-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 06/12] net/ice: add Rx queue init in DCF
From: Qi Zhang

Enable Rx queue initialization during device start in DCF.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c | 83 ++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 152266e3c..dcb2a0283 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -53,6 +53,7 @@ struct ice_dcf_hw {
 	uint8_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
+	uint16_t num_queue_pairs;
 };

 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 62ef71ddb..1f7474dc3 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -42,14 +42,97 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 	return 0;
 }

+static int
+ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
+{
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_eth_dev_data *dev_data = dev->data;
+	struct iavf_hw *hw = &dcf_ad->real_hw.avf;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < RTE_ETHER_MIN_LEN ||
+		    max_pkt_len > RTE_ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)RTE_ETHER_MIN_LEN,
+				    (uint32_t)RTE_ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)dev->data->rx_queues;
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = ice_dcf_init_rxq(dev, rxq[i]);
+		if (ret)
+			return ret;
+	}
+
+	ice_set_rx_function(dev);
+	ice_set_tx_function(dev);
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
 	struct ice_adapter *ad = &dcf_ad->parent;
+	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+	int ret;

 	ad->pf.adapter_stopped = 0;

+	hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	ret = ice_dcf_init_rx_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to init queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;

 	return 0;

From patchwork Fri Jun 5 20:17:32 2020
X-Patchwork-Id: 70881
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:32 +0000
Message-Id: <20200605201737.33766-8-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 07/12] net/ice: init RSS during DCF start

From: Qi Zhang

Enable RSS initialization during DCF start. Add RSS LUT and RSS key
configuration functions.
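The LUT initialization added below distributes redirection-table entries
round-robin over the configured Rx queues; as a hedged standalone sketch
(not part of the patch):

	#include <stdint.h>

	/* Fill a redirection table so entry i points at queue i % nb_q. */
	static void
	fill_rss_lut(uint8_t *lut, uint16_t lut_size, uint16_t nb_q)
	{
		uint16_t i, j;

		for (i = 0, j = 0; i < lut_size; i++, j++) {
			if (j >= nb_q)
				j = 0;
			lut[i] = (uint8_t)j;
		}
	}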
Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf.c        | 123 +++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   1 +
 drivers/net/ice/ice_dcf_ethdev.c |  14 +++-
 3 files changed, 135 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 93fabd5f7..8d078163e 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -708,3 +708,126 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->rss_lut);
 	rte_free(hw->rss_key);
 }
+
+static int
+ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_key *rss_key;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_key) + hw->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = hw->vsi_res->vsi_id;
+	rss_key->key_len = hw->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, hw->rss_key, hw->vf_res->rss_key_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_key;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_KEY");
+		return err;
+	}
+
+	rte_free(rss_key);
+	return 0;
+}
+
+static int
+ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_lut *rss_lut;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + hw->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = hw->vsi_res->vsi_id;
+	rss_lut->lut_entries = hw->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, hw->rss_lut, hw->vf_res->rss_lut_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_lut;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_LUT");
+		return err;
+	}
+
+	rte_free(rss_lut);
+	return 0;
+}
+
+int
+ice_dcf_init_rss(struct ice_dcf_hw *hw)
+{
+	struct rte_eth_dev *dev = hw->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+
+	if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < hw->vf_res->rss_lut_size; i++)
+			hw->rss_lut[i] = 0;
+		ret = ice_dcf_configure_rss_lut(hw);
+		return ret;
+	}
+
+	/* In IAVF, RSS enablement is set by PF driver. It is not supported
+	 * to set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key)
+		/* Calculate the default hash key */
+		for (i = 0; i <= hw->vf_res->rss_key_size; i++)
+			hw->rss_key[i] = (uint8_t)rte_rand();
+	else
+		rte_memcpy(hw->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   hw->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < hw->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		hw->rss_lut[i] = j;
+	}
+	/* send virtchnl ops to configure RSS */
+	ret = ice_dcf_configure_rss_lut(hw);
+	if (ret)
+		return ret;
+	ret = ice_dcf_configure_rss_key(hw);
+	if (ret)
+		return ret;
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index dcb2a0283..eea4b286b 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -63,5 +63,6 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_init_rss(struct ice_dcf_hw *hw);

 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1f7474dc3..5fbf70803 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -51,9 +51,9 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	uint16_t buf_size, max_pkt_len, len;

 	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-
-	/* Calculate the maximum packet length allowed */
-	len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
 	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);

 	/* Check if the jumbo frame and maximum packet length are set
@@ -133,6 +133,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}

+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		ret = ice_dcf_init_rss(hw);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to configure RSS");
+			return ret;
+		}
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;

 	return 0;

From patchwork Fri Jun 5 20:17:33 2020
X-Patchwork-Id: 70882
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:33 +0000
Message-Id: <20200605201737.33766-9-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 08/12] net/ice: add queue config in DCF

From: Qi Zhang

Add queue and Rx queue IRQ configuration during device start in DCF.
The setup is sent to the PF via virtchnl.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf.c        | 109 +++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   6 ++
 drivers/net/ice/ice_dcf_ethdev.c | 125 +++++++++++++++++++++++++++++++
 3 files changed, 240 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8d078163e..d864ae894 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -24,6 +24,7 @@
 #include

 #include "ice_dcf.h"
+#include "ice_rxtx.h"

 #define ICE_DCF_AQ_LEN 32
 #define ICE_DCF_AQ_BUF_SZ 4096
@@ -831,3 +832,111 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)

 	return 0;
 }
+
+#define IAVF_RXDID_LEGACY_1 1
+#define IAVF_RXDID_COMMS_GENERIC 16
+
+int
+ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
+	struct ice_tx_queue **txq =
+		(struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct dcf_virtchnl_cmd args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = hw->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = hw->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < hw->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		if (i < hw->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
+		}
+		vc_qp->rxq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = rxq[i]->max_pkt_len;
+		if (i < hw->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+
+			if (hw->vf_res->vf_cap_flags &
+			    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+			    hw->supported_rxdid &
+			    BIT(IAVF_RXDID_COMMS_GENERIC)) {
+				vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_GENERIC;
+				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+					    "Queue[%d]", vc_qp->rxq.rxdid, i);
+			} else {
+				PMD_DRV_LOG(ERR, "RXDID 16 is not supported");
+				return -EINVAL;
+			}
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.req_msg = (uint8_t *)vc_config;
+	args.req_msglen = size;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct dcf_virtchnl_cmd args;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * hw->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = hw->nb_msix;
+	for (i = 0; i < hw->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = hw->vsi_res->vsi_id;
+		vecmap->rxitr_idx = 0;
+		vecmap->vector_id = hw->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.req_msg = (u8 *)map_info;
+	args.req_msglen = len;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index eea4b286b..9470d1df7 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -54,6 +54,10 @@ struct ice_dcf_hw {
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
 	uint16_t num_queue_pairs;
+
+	uint16_t msix_base;
+	uint16_t nb_msix;
+	uint16_t rxq_map[16];
 };

 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -64,5 +68,7 @@ int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);

 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5fbf70803..9605fb8ed 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -114,10 +114,123 @@ ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
 	return 0;
 }

+#define IAVF_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
+#define IAVF_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
+
+#define IAVF_ITR_INDEX_DEFAULT 0
+#define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define IAVF_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */
+
+static inline uint16_t
+iavf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
+
+static int ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+					 struct rte_intr_handle *intr_handle)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	uint16_t interval, i;
+	int vec;
+
+	if (rte_intr_cap_multiple(intr_handle) &&
+	    dev->data->dev_conf.intr_conf.rxq) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq ||
+	    !rte_intr_dp_is_en(intr_handle)) {
+		/* Rx interrupt disabled, Map interrupt only for writeback */
+		hw->nb_msix = 1;
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR is supported, enable it */
+			hw->msix_base = IAVF_RX_VEC_START;
+			IAVF_WRITE_REG(&hw->avf,
+				       IAVF_VFINT_DYN_CTLN1(hw->msix_base - 1),
+				       IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK |
+				       IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			hw->msix_base = IAVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval =
+				iavf_calc_itr_interval(IAVF_QUEUE_ITR_INTERVAL_MAX);
+			IAVF_WRITE_REG(&hw->avf, IAVF_VFINT_DYN_CTL01,
+				       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
+				       (IAVF_ITR_INDEX_DEFAULT <<
+					IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				       (interval <<
+					IAVF_VFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		IAVF_WRITE_FLUSH(&hw->avf);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			hw->rxq_map[hw->msix_base] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			hw->nb_msix = 1;
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[hw->msix_base] |= 1 << i;
+				intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector %u are mapping to all Rx queues",
+				    hw->msix_base);
+		} else {
+			/* If Rx interrupt is required, and we can use
+			 * multi interrupts, then the vec is from 1
+			 */
+			hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			vec = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= hw->nb_msix)
+					vec = IAVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapping to %u Rx queues",
+				    hw->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (ice_dcf_config_irq_map(hw)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
 	int ret;
@@ -141,6 +254,18 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		}
 	}

+	ret = ice_dcf_configure_queues(hw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config queues");
+		return ret;
+	}
+
+	ret = ice_dcf_config_rx_queues_irqs(dev, intr_handle);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config rx queues' irqs");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;

 	return 0;

From patchwork Fri Jun 5 20:17:34 2020
X-Patchwork-Id: 70883
From: Ting Xu
To: dev@dpdk.org
Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com
Date: Fri, 5 Jun 2020 20:17:34 +0000
Message-Id: <20200605201737.33766-10-ting.xu@intel.com>
Subject: [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF

From: Qi Zhang

Add queue start and stop in DCF. Support queue enable and disable
through virtual channel. Add support for Rx queue mbuf allocation and
queue reset.

Signed-off-by: Qi Zhang
---
 drivers/net/ice/ice_dcf.c        |  57 ++++++
 drivers/net/ice/ice_dcf.h        |   3 +-
 drivers/net/ice/ice_dcf_ethdev.c | 309 +++++++++++++++++++++++++++++++
 3 files changed, 368 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index d864ae894..56b8c0d25 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -940,3 +940,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
 	rte_free(map_info);
 	return err;
 }
+
+int
+ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	memset(&args, 0, sizeof(args));
+	if (on)
+		args.v_op = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
"OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES"); + return err; +} + +int +ice_dcf_disable_queues(struct ice_dcf_hw *hw) +{ + struct virtchnl_queue_select queue_select; + struct dcf_virtchnl_cmd args; + int err; + + memset(&queue_select, 0, sizeof(queue_select)); + queue_select.vsi_id = hw->vsi_res->vsi_id; + + queue_select.rx_queues = BIT(hw->eth_dev->data->nb_rx_queues) - 1; + queue_select.tx_queues = BIT(hw->eth_dev->data->nb_tx_queues) - 1; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_DISABLE_QUEUES; + args.req_msg = (u8 *)&queue_select; + args.req_msglen = sizeof(queue_select); + + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) { + PMD_DRV_LOG(ERR, + "Failed to execute command of OP_DISABLE_QUEUES"); + return err; + } + return 0; +} diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 9470d1df7..68e1661c0 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); int ice_dcf_init_rss(struct ice_dcf_hw *hw); int ice_dcf_configure_queues(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); - +int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); +int ice_dcf_disable_queues(struct ice_dcf_hw *hw); #endif /* _ICE_DCF_H_ */ diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 9605fb8ed..59113fc4b 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -226,6 +226,259 @@ static int ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, return 0; } +static int +alloc_rxq_mbufs(struct ice_rx_queue *rxq) +{ + volatile union ice_32b_rx_flex_desc *rxd; + struct rte_mbuf *mbuf = NULL; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + mbuf = rte_mbuf_raw_alloc(rxq->mp); + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); + + rxd = &rxq->rx_ring[i]; + rxd->read.pkt_addr = dma_addr; + rxd->read.hdr_addr = 0; + rxd->read.rsvd1 = 0; + rxd->read.rsvd2 = 0; + + rxq->sw_ring[i].mbuf = (void *)mbuf; + } + + return 0; +} + +static int +ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct iavf_hw *hw = &ad->real_hw.avf; + struct ice_rx_queue *rxq; + int err = 0; + + if (rx_queue_id >= dev->data->nb_rx_queues) + return -EINVAL; + + rxq = dev->data->rx_queues[rx_queue_id]; + + err = alloc_rxq_mbufs(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf"); + return err; + } + + rte_wmb(); + + /* Init the RX tail register. 
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+static inline void
+reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + ICE_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union ice_rx_flex_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < ICE_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_tx_free = txq->nb_tx_desc - 1;
+
+	txq->tx_next_dd = txq->tx_rs_thresh - 1;
+	txq->tx_next_rs = txq->tx_rs_thresh - 1;
+}
+
+static int
+ice_dcf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, rx_queue_id, true, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq->rx_rel_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_tx_queue *txq;
+	int err = 0;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(tx_queue_id);
+	IAVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+static int
+ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, tx_queue_id, false, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->tx_rel_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_start_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (ice_dcf_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (ice_dcf_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
@@ -266,20 +519,72 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}

+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
+
+	ret = ice_dcf_start_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;

 	return 0;
 }

+static void
+ice_dcf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = ice_dcf_disable_queues(hw);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		txq->tx_rel_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rxq->rx_rel_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
+
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;

 	if (ad->pf.adapter_stopped == 1)
 		return;

+	ice_dcf_stop_queues(dev);
+
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
@@ -476,6 +781,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.tx_queue_setup = ice_tx_queue_setup,
ice_rx_queue_release, .tx_queue_release = ice_tx_queue_release, + .rx_queue_start = ice_dcf_rx_queue_start, + .tx_queue_start = ice_dcf_tx_queue_start, + .rx_queue_stop = ice_dcf_rx_queue_stop, + .tx_queue_stop = ice_dcf_tx_queue_stop, .link_update = ice_dcf_link_update, .stats_get = ice_dcf_stats_get, .stats_reset = ice_dcf_stats_reset, From patchwork Fri Jun 5 20:17:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Ting" X-Patchwork-Id: 70884 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 56F3EA0350; Fri, 5 Jun 2020 14:20:10 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 386F31D65F; Fri, 5 Jun 2020 14:19:00 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id 732BE1C1C9 for ; Fri, 5 Jun 2020 14:18:56 +0200 (CEST) IronPort-SDR: tftyj8focbJqgLcIuhPRn5BBnFsNQIuEAFEq4GsaLrWKO00ta+Rzm4vDWrEILbAbEtMvqbO4oA wEWUM8jZIuGg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2020 05:18:55 -0700 IronPort-SDR: JCMmU6U6FZKlWqc4X85A5OC27VRA9P4StMJ6clvfFnHtn8wpZiqB8oXiXrx/OyoE1nMt3y2XsM c+lFeqKBi1mQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,476,1583222400"; d="scan'208";a="294673409" Received: from dpdk-xuting-main.sh.intel.com ([10.67.117.84]) by fmsmga004.fm.intel.com with ESMTP; 05 Jun 2020 05:18:53 -0700 From: Ting Xu To: dev@dpdk.org Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com Date: Fri, 5 Jun 2020 20:17:35 +0000 Message-Id: <20200605201737.33766-11-ting.xu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200605201737.33766-1-ting.xu@intel.com> References: <20200605201737.33766-1-ting.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 10/12] net/ice: enable stats for DCF X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Qi Zhang Add support to get and reset Rx/Tx stats in DCF. Query stats from PF. 
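For context, a minimal application-side sketch (illustrative only, not part of this patch) of how these counters surface through the generic ethdev API once the DCF port implements stats_get/stats_reset; the port_id and the printed fields are assumptions:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Read, print and clear basic stats on an already-started DCF port. */
static void
dcf_dump_and_clear_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0) {
		printf("port %u: stats query failed\n", port_id);
		return;
	}
	printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
	       " imissed=%" PRIu64 " oerrors=%" PRIu64 "\n",
	       port_id, stats.ipackets, stats.opackets,
	       stats.imissed, stats.oerrors);

	/* Resetting snapshots the current PF counters as the new offset
	 * (hw->eth_stats_offset in this patch). */
	rte_eth_stats_reset(port_id);
}
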
Signed-off-by: Qi Zhang --- drivers/net/ice/ice_dcf.c | 27 ++++++++ drivers/net/ice/ice_dcf.h | 4 ++ drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++---- 3 files changed, 120 insertions(+), 13 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 56b8c0d25..2338b46cf 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -997,3 +997,30 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw) } return 0; } + +int +ice_dcf_query_stats(struct ice_dcf_hw *hw, + struct virtchnl_eth_stats *pstats) +{ + struct virtchnl_queue_select q_stats; + struct dcf_virtchnl_cmd args; + int err; + + memset(&q_stats, 0, sizeof(q_stats)); + q_stats.vsi_id = hw->vsi_res->vsi_id; + + args.v_op = VIRTCHNL_OP_GET_STATS; + args.req_msg = (uint8_t *)&q_stats; + args.req_msglen = sizeof(q_stats); + args.rsp_msglen = sizeof(*pstats); + args.rsp_msgbuf = (uint8_t *)pstats; + args.rsp_buflen = sizeof(*pstats); + + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) { + PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS"); + return err; + } + + return 0; +} diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 68e1661c0..e82bc7748 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -58,6 +58,7 @@ struct ice_dcf_hw { uint16_t msix_base; uint16_t nb_msix; uint16_t rxq_map[16]; + struct virtchnl_eth_stats eth_stats_offset; }; int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw, @@ -72,4 +73,7 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); +int ice_dcf_query_stats(struct ice_dcf_hw *hw, + struct virtchnl_eth_stats *pstats); + #endif /* _ICE_DCF_H_ */ diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 59113fc4b..869af0e45 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -683,19 +683,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev, return 0; } -static int -ice_dcf_stats_get(__rte_unused struct rte_eth_dev *dev, - __rte_unused struct rte_eth_stats *igb_stats) -{ - return 0; -} - -static int -ice_dcf_stats_reset(__rte_unused struct rte_eth_dev *dev) -{ - return 0; -} - static int ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev) { @@ -748,6 +735,95 @@ ice_dcf_dev_filter_ctrl(struct rte_eth_dev *dev, return ret; } +#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4) +#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6) +#define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t) + +static void +ice_dcf_stat_update_48(uint64_t *offset, uint64_t *stat) +{ + if (*stat >= *offset) + *stat = *stat - *offset; + else + *stat = (uint64_t)((*stat + + ((uint64_t)1 << ICE_DCF_48_BIT_WIDTH)) - *offset); + + *stat &= ICE_DCF_48_BIT_MASK; +} + +static void +ice_dcf_stat_update_32(uint64_t *offset, uint64_t *stat) +{ + if (*stat >= *offset) + *stat = (uint64_t)(*stat - *offset); + else + *stat = (uint64_t)((*stat + + ((uint64_t)1 << ICE_DCF_32_BIT_WIDTH)) - *offset); +} + +static void +ice_dcf_update_stats(struct virtchnl_eth_stats *oes, + struct virtchnl_eth_stats *nes) +{ + ice_dcf_stat_update_48(&oes->rx_bytes, &nes->rx_bytes); + ice_dcf_stat_update_48(&oes->rx_unicast, &nes->rx_unicast); + ice_dcf_stat_update_48(&oes->rx_multicast, &nes->rx_multicast); + ice_dcf_stat_update_48(&oes->rx_broadcast, &nes->rx_broadcast); + 
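/* rx_discards, tx_errors and tx_discards are 32-bit counters; the byte and packet counters use the 48-bit rollover helper */ +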
ice_dcf_stat_update_32(&oes->rx_discards, &nes->rx_discards); + ice_dcf_stat_update_48(&oes->tx_bytes, &nes->tx_bytes); + ice_dcf_stat_update_48(&oes->tx_unicast, &nes->tx_unicast); + ice_dcf_stat_update_48(&oes->tx_multicast, &nes->tx_multicast); + ice_dcf_stat_update_48(&oes->tx_broadcast, &nes->tx_broadcast); + ice_dcf_stat_update_32(&oes->tx_errors, &nes->tx_errors); + ice_dcf_stat_update_32(&oes->tx_discards, &nes->tx_discards); +} + +static int +ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *hw = &ad->real_hw; + struct virtchnl_eth_stats pstats; + int ret; + + ret = ice_dcf_query_stats(hw, &pstats); + if (ret == 0) { + ice_dcf_update_stats(&hw->eth_stats_offset, &pstats); + stats->ipackets = pstats.rx_unicast + pstats.rx_multicast + + pstats.rx_broadcast - pstats.rx_discards; + stats->opackets = pstats.tx_broadcast + pstats.tx_multicast + + pstats.tx_unicast; + stats->imissed = pstats.rx_discards; + stats->oerrors = pstats.tx_errors + pstats.tx_discards; + stats->ibytes = pstats.rx_bytes; + stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN; + stats->obytes = pstats.tx_bytes; + } else { + PMD_DRV_LOG(ERR, "Get statistics failed"); + } + return ret; +} + +static int +ice_dcf_stats_reset(struct rte_eth_dev *dev) +{ + struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *hw = &ad->real_hw; + struct virtchnl_eth_stats pstats; + int ret; + + /* read stat values to clear hardware registers */ + ret = ice_dcf_query_stats(hw, &pstats); + if (ret != 0) + return ret; + + /* set stats offset based on current values */ + hw->eth_stats_offset = pstats; + + return 0; +} + static void ice_dcf_dev_close(struct rte_eth_dev *dev) { From patchwork Fri Jun 5 20:17:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Ting" X-Patchwork-Id: 70885 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4C6E7A0350; Fri, 5 Jun 2020 14:20:19 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 590341D658; Fri, 5 Jun 2020 14:19:01 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id D2AE31D650 for ; Fri, 5 Jun 2020 14:18:57 +0200 (CEST) IronPort-SDR: IbOJV7ef7HQ0841b/0ykzS5IvaON30o4ZGUnmI65Dz7NxMib7StCauPP6DhDd739Z/3a8U+vFL EwEPf0Hee/ng== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2020 05:18:57 -0700 IronPort-SDR: CDdBIIMPRMO1/H2nebqHirn403ri0zJ6LhQYPjmsuNpcVLVZ6plqxlKP/N7ik0CCYkYibE7qcE efPajNWRES3w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,476,1583222400"; d="scan'208";a="294673413" Received: from dpdk-xuting-main.sh.intel.com ([10.67.117.84]) by fmsmga004.fm.intel.com with ESMTP; 05 Jun 2020 05:18:55 -0700 From: Ting Xu To: dev@dpdk.org Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com Date: Fri, 5 Jun 2020 20:17:36 +0000 Message-Id: <20200605201737.33766-12-ting.xu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200605201737.33766-1-ting.xu@intel.com> References: 
<20200605201737.33766-1-ting.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 11/12] net/ice: set MAC filter during dev start for DCF X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Qi Zhang Add support to add and delete MAC address filter in DCF. Signed-off-by: Qi Zhang --- drivers/net/ice/ice_dcf.c | 42 ++++++++++++++++++++++++++++++++ drivers/net/ice/ice_dcf.h | 1 + drivers/net/ice/ice_dcf_ethdev.c | 7 ++++++ 3 files changed, 50 insertions(+) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 2338b46cf..7fd70a394 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -1024,3 +1024,45 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw, return 0; } + +int +ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add) +{ + struct virtchnl_ether_addr_list *list; + struct rte_ether_addr *addr; + struct dcf_virtchnl_cmd args; + int len, err = 0; + + len = sizeof(struct virtchnl_ether_addr_list); + addr = hw->eth_dev->data->mac_addrs; + len += sizeof(struct virtchnl_ether_addr); + + list = rte_zmalloc(NULL, len, 0); + if (!list) { + PMD_DRV_LOG(ERR, "fail to allocate memory"); + return -ENOMEM; + } + + rte_memcpy(list->list[0].addr, addr->addr_bytes, + sizeof(addr->addr_bytes)); + PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x", + addr->addr_bytes[0], addr->addr_bytes[1], + addr->addr_bytes[2], addr->addr_bytes[3], + addr->addr_bytes[4], addr->addr_bytes[5]); + + list->vsi_id = hw->vsi_res->vsi_id; + list->num_elements = 1; + + memset(&args, 0, sizeof(args)); + args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR : + VIRTCHNL_OP_DEL_ETH_ADDR; + args.req_msg = (uint8_t *)list; + args.req_msglen = len; + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command %s", + add ? 
"OP_ADD_ETHER_ADDRESS" : + "OP_DEL_ETHER_ADDRESS"); + rte_free(list); + return err; +} diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index e82bc7748..a44a01e90 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -75,5 +75,6 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); int ice_dcf_query_stats(struct ice_dcf_hw *hw, struct virtchnl_eth_stats *pstats); +int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add); #endif /* _ICE_DCF_H_ */ diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 869af0e45..a1b1ffb56 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -530,6 +530,12 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) return ret; } + ret = ice_dcf_add_del_all_mac_addr(hw, true); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac addr"); + return ret; + } + dev->data->dev_link.link_status = ETH_LINK_UP; return 0; @@ -585,6 +591,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev) intr_handle->intr_vec = NULL; } + ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false); dev->data->dev_link.link_status = ETH_LINK_DOWN; ad->pf.adapter_stopped = 1; } From patchwork Fri Jun 5 20:17:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Xu, Ting" X-Patchwork-Id: 70886 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4E0D9A0350; Fri, 5 Jun 2020 14:20:28 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6CA3D1D665; Fri, 5 Jun 2020 14:19:02 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by dpdk.org (Postfix) with ESMTP id 69B6B1D656 for ; Fri, 5 Jun 2020 14:18:59 +0200 (CEST) IronPort-SDR: 5ve4Po+2D37UoO8FRUWOacAVviQl3kJOXJp9EUAIvk+QbRw0ka68ugfLWXVqXoaOv2LaViyfVh YS+oCb2QyAxQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jun 2020 05:18:59 -0700 IronPort-SDR: Kdl735Q95KoUmBuU9Z3Qy2EE0YI6hE7I2wr8uC4+aGhbo1Qyi1QSVAC/D5n5w782dzMt6A7bDY htljE/An3lxw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.73,476,1583222400"; d="scan'208";a="294673417" Received: from dpdk-xuting-main.sh.intel.com ([10.67.117.84]) by fmsmga004.fm.intel.com with ESMTP; 05 Jun 2020 05:18:57 -0700 From: Ting Xu To: dev@dpdk.org Cc: qi.z.zhang@intel.com, qiming.yang@intel.com, john.mcnamara@intel.com, marko.kovacevic@intel.com Date: Fri, 5 Jun 2020 20:17:37 +0000 Message-Id: <20200605201737.33766-13-ting.xu@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200605201737.33766-1-ting.xu@intel.com> References: <20200605201737.33766-1-ting.xu@intel.com> Subject: [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add doc for DCF datapath configuration in DPDK 20.08 release note. 
Signed-off-by: Ting Xu --- doc/guides/rel_notes/release_20_08.rst | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst index 39064afbe..3cda6111c 100644 --- a/doc/guides/rel_notes/release_20_08.rst +++ b/doc/guides/rel_notes/release_20_08.rst @@ -56,6 +56,11 @@ New Features Also, make sure to start the actual text at the margin. ========================================================= +* **Updated the Intel ice driver.** + + Updated the Intel ice driver with new features and improvements, including: + + * Added support for DCF datapath configuration. Removed Items -------------
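To illustrate what "DCF datapath configuration" covers end to end, here is a hedged bring-up sketch (illustrative only; the function name, queue sizes and error handling are assumptions, and it presumes the device was probed with the cap=dcf devarg and EAL is already initialized):

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Bring up one Rx and one Tx queue on an ice DCF port, then exercise
 * the per-queue start/stop ops added by this series. */
static int
dcf_port_bringup(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = { 0 };
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id),
				     NULL, mb_pool);
	if (ret != 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id), NULL);
	if (ret != 0)
		return ret;

	/* dev_start walks ice_dcf_start_queues(): tail registers are
	 * programmed and each non-deferred queue is switched on via
	 * virtchnl; the port MAC filter is added to the PF as well. */
	ret = rte_eth_dev_start(port_id);
	if (ret != 0)
		return ret;

	/* Queues can now also be cycled individually through the new
	 * rx/tx_queue_start/stop dev ops. */
	ret = rte_eth_dev_rx_queue_stop(port_id, 0);
	if (ret != 0)
		return ret;
	return rte_eth_dev_rx_queue_start(port_id, 0);
}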