From patchwork Tue Feb 7 09:57:00 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 123229
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu
To: dev@dpdk.org, qi.z.zhang@intel.com, jingjing.wu@intel.com,
 beilei.xing@intel.com
Cc: Mingxia Liu
Subject: [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message
Date: Tue, 7 Feb 2023 09:57:00 +0000
Message-Id: <20230207095701.2400179-6-mingxia.liu@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230207095701.2400179-1-mingxia.liu@intel.com>
References: <20230118071440.902155-1-mingxia.liu@intel.com>
 <20230207095701.2400179-1-mingxia.liu@intel.com>
List-Id: DPDK patches and discussions

Handle virtual channel message.
Refine link status update.

Signed-off-by: Mingxia Liu
Signed-off-by: Beilei Xing
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  33 ++--
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 169 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 6 files changed, 195 insertions(+), 22 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 7abc4d2a3a..364a60221a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -118,6 +118,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 10cfa33704..99d9efbb7c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -1111,3 +1092,17 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
+
+int
+idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		  struct idpf_ctlq_msg *q_msg)
+{
+	return idpf_ctlq_recv(cq, num_q_msg, q_msg);
+}
+
+int
+idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs)
+{
+	return idpf_ctlq_post_rx_buffs(hw, cq, buff_count, buffs);
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 205d1a932d..d479d93c8e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -58,4 +58,10 @@ __rte_internal
 int idpf_vc_rss_lut_get(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_rss_hash_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		      struct idpf_ctlq_msg *q_msg);
+__rte_internal
+int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			       u16 *buff_count, struct idpf_dma_mem **buffs);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e31f6ff4d9..70334a1b03 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -38,6 +38,8 @@ INTERNAL {
 	idpf_vc_api_version_check;
 	idpf_vc_caps_get;
 	idpf_vc_cmd_execute;
+	idpf_vc_ctlq_post_rx_buffs;
+	idpf_vc_ctlq_recv;
 	idpf_vc_irq_map_unmap_config;
 	idpf_vc_one_msg_read;
 	idpf_vc_ptype_info_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index bd7cf41b43..c3a9e95388 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,14 +84,51 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  RTE_ETH_LINK_SPEED_FIXED);
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+			       RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -891,6 +929,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -913,6 +1072,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -996,6 +1157,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_vport_info_init(vport, &create_vport_info);
@@ -1065,6 +1227,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */