From patchwork Wed Jan 18 07:14:39 2023
X-Patchwork-Submitter: "Liu, Mingxia" <mingxia.liu@intel.com>
X-Patchwork-Id: 122251
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Mingxia Liu <mingxia.liu@intel.com>
To: dev@dpdk.org
Cc: jingjing.wu@intel.com, beilei.xing@intel.com,
 Mingxia Liu <mingxia.liu@intel.com>
Subject: [PATCH v3 5/6] common/idpf: add alarm to support handling vchnl
 messages
Date: Wed, 18 Jan 2023 07:14:39 +0000
Message-Id: <20230118071440.902155-6-mingxia.liu@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230118071440.902155-1-mingxia.liu@intel.com>
References: <20230111071545.504706-1-mingxia.liu@intel.com>
 <20230118071440.902155-1-mingxia.liu@intel.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Add an alarm that periodically polls the mailbox and handles the
virtual channel messages it receives, including asynchronous events
such as VIRTCHNL2_EVENT_LINK_CHANGE. Refine the link status update to
report the link speed and status carried by those events.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
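Note on the mechanism: EAL alarms are one-shot, so the handler re-arms
itself on every invocation, which is what turns rte_eal_alarm_set()
into a periodic mailbox poller. A minimal standalone sketch of that
pattern (poll_mailbox() and its printf body are illustrative stand-ins
for idpf_handle_virtchnl_msg(), not driver code):

	#include <stdio.h>
	#include <rte_eal.h>
	#include <rte_alarm.h>
	#include <rte_cycles.h>

	#define POLL_INTERVAL_US 50000 /* same period as IDPF_ALARM_INTERVAL */

	static void
	poll_mailbox(void *arg)
	{
		/* Drain pending control-queue messages here. */
		printf("polling, ctx=%p\n", arg);

		/* One-shot alarm: re-arm to get periodic behavior. */
		rte_eal_alarm_set(POLL_INTERVAL_US, poll_mailbox, arg);
	}

	int
	main(int argc, char **argv)
	{
		if (rte_eal_init(argc, argv) < 0)
			return -1;

		rte_eal_alarm_set(POLL_INTERVAL_US, poll_mailbox, NULL);
		rte_delay_us_sleep(1000000); /* let it fire for ~1 s */
		rte_eal_alarm_cancel(poll_mailbox, NULL);
		return rte_eal_cleanup();
	}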
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  19 ---
 drivers/net/idpf/idpf_ethdev.c             | 165 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 4 files changed, 171 insertions(+), 20 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index f22ffde22e..2adeeff37e 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -118,6 +118,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from idpf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 5965f9ee55..f36aae8a93 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
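The hunk above shrinks the set of ops whose responses are polled inline
to the two issued during early init, before the alarm is armed. For all
other commands, the alarm drains the mailbox and notify_cmd() wakes
whichever command is pending on adapter->pend_cmd. A simplified sketch
of that handshake, using C11 atomics instead of the driver's own
idpf_adapter fields (all names below are illustrative):

	#include <stdatomic.h>
	#include <stdint.h>

	struct vc_cmd_state {
		atomic_uint pend_cmd; /* opcode awaiting a response, 0 = none */
		atomic_int retval;    /* return value carried by the response */
	};

	/* Issuer side: record the pending opcode, send, then wait. */
	static int
	send_and_wait(struct vc_cmd_state *s, uint32_t opcode)
	{
		atomic_store(&s->pend_cmd, opcode);
		/* ... post the command to the mailbox TX queue ... */
		while (atomic_load(&s->pend_cmd) != 0)
			; /* the real driver sleeps and bounds this wait */
		return atomic_load(&s->retval);
	}

	/* Alarm side: on a matching response, publish the return value
	 * and clear pend_cmd, releasing the waiter. */
	static void
	notify(struct vc_cmd_state *s, uint32_t opcode, int retval)
	{
		if (atomic_load(&s->pend_cmd) == opcode) {
			atomic_store(&s->retval, retval);
			atomic_store(&s->pend_cmd, 0);
		}
	}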
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 2ab31792ba..b86f63f94e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,12 +84,49 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case 10:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case 100:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case 1000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case 10000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case 20000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case 25000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case 40000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case 50000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case 100000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case 200000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
 
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+				RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
 				  RTE_ETH_LINK_SPEED_FIXED);
 
@@ -927,6 +965,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return NULL;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
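One observation on the idpf_dev_link_update() hunk above: the
RTE_ETH_SPEED_NUM_* macros are defined in rte_ethdev.h as the speed in
Mbps, so the switch effectively validates the event-reported value
against the set of speeds the driver recognizes. An equivalent
table-driven form, shown only as a sketch (mbps_to_eth_speed() is not
part of the patch):

	#include <stdint.h>
	#include <rte_common.h>
	#include <rte_ethdev.h>

	static uint32_t
	mbps_to_eth_speed(uint32_t mbps)
	{
		static const uint32_t known[] = {
			RTE_ETH_SPEED_NUM_10M, RTE_ETH_SPEED_NUM_100M,
			RTE_ETH_SPEED_NUM_1G, RTE_ETH_SPEED_NUM_10G,
			RTE_ETH_SPEED_NUM_20G, RTE_ETH_SPEED_NUM_25G,
			RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_50G,
			RTE_ETH_SPEED_NUM_100G, RTE_ETH_SPEED_NUM_200G,
		};
		unsigned int i;

		/* The macro values equal the Mbps figures, so a match
		 * means the raw value can be returned unchanged. */
		for (i = 0; i < RTE_DIM(known); i++)
			if (known[i] == mbps)
				return mbps;
		return RTE_ETH_SPEED_NUM_NONE;
	}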
@@ -949,6 +1108,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -1032,6 +1193,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_create_vport_info_init(vport, &create_vport_info);
@@ -1101,6 +1263,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
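With these changes in place, the refined link status surfaces through
the standard ethdev link API. A minimal application-side usage sketch
(an initialized and started port is assumed; nothing below is part of
the patch):

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	print_link(uint16_t port_id)
	{
		struct rte_eth_link link;
		char text[RTE_ETH_LINK_MAX_STR_LEN];

		/* Non-blocking read of the state the driver publishes
		 * from idpf_dev_link_update(). */
		rte_eth_link_get_nowait(port_id, &link);
		rte_eth_link_to_str(text, sizeof(text), &link);
		printf("Port %u: %s\n", port_id, text);
	}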