From patchwork Thu Dec 7 01:37:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jie Hai X-Patchwork-Id: 134891 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5860343692; Thu, 7 Dec 2023 02:41:31 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3203C42E96; Thu, 7 Dec 2023 02:41:25 +0100 (CET) Received: from szxga08-in.huawei.com (szxga08-in.huawei.com [45.249.212.255]) by mails.dpdk.org (Postfix) with ESMTP id 7008F4028B for ; Thu, 7 Dec 2023 02:41:23 +0100 (CET) Received: from kwepemd100004.china.huawei.com (unknown [172.30.72.57]) by szxga08-in.huawei.com (SkyGuard) with ESMTP id 4Slxj92NDQz1Q6CD; Thu, 7 Dec 2023 09:37:33 +0800 (CST) Received: from localhost.localdomain (10.67.165.2) by kwepemd100004.china.huawei.com (7.221.188.31) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.2.1258.28; Thu, 7 Dec 2023 09:41:20 +0800 From: Jie Hai To: , Yisen Zhuang , "Min Hu (Connor)" , Huisong Li , "Wei Hu (Xavier)" , Hao Chen , Ferruh Yigit CC: , , Subject: [PATCH v3 1/4] net/hns3: refactor VF mailbox message struct Date: Thu, 7 Dec 2023 09:37:29 +0800 Message-ID: <20231207013732.3987482-2-haijie1@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20231207013732.3987482-1-haijie1@huawei.com> References: <20231108034434.559030-1-haijie1@huawei.com> <20231207013732.3987482-1-haijie1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.2] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemd100004.china.huawei.com (7.221.188.31) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Dengdui Huang The data region in VF to PF mbx message command is used to communicate with PF driver. And this data region exists as an array. As a result, some complicated feature commands, like setting promisc mode, map/unmap ring vector and setting VLAN id, have to use magic number to set them. This isn't good for maintenance of driver. So this patch refactors these messages by extracting an hns3_vf_to_pf_msg structure. In addition, the PF link change event message is reported by the firmware and is reported in hns3_mbx_vf_to_pf_cmd format, it also needs to be modified. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang Signed-off-by: Jie Hai --- drivers/net/hns3/hns3_ethdev_vf.c | 54 +++++++++++++--------------- drivers/net/hns3/hns3_mbx.c | 24 ++++++------- drivers/net/hns3/hns3_mbx.h | 58 +++++++++++++++++++++++-------- 3 files changed, 78 insertions(+), 58 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 916cc0fb1b62..4cddf01d6f20 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -254,11 +254,12 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc, * the packets with vlan tag in promiscuous mode. */ hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = HNS3_MBX_SET_PROMISC_MODE; - req->msg[1] = en_bc_pmc ? 1 : 0; - req->msg[2] = en_uc_pmc ? 
1 : 0; - req->msg[3] = en_mc_pmc ? 1 : 0; - req->msg[4] = hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; + req->msg.code = HNS3_MBX_SET_PROMISC_MODE; + req->msg.en_bc = en_bc_pmc ? 1 : 0; + req->msg.en_uc = en_uc_pmc ? 1 : 0; + req->msg.en_mc = en_mc_pmc ? 1 : 0; + req->msg.en_limit_promisc = + hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0; ret = hns3_cmd_send(hw, &desc, 1); if (ret) @@ -347,30 +348,28 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { - struct hns3_vf_bind_vector_msg bind_msg; +#define HNS3_RING_VECTOR_DATA_SIZE 14 + struct hns3_vf_to_pf_msg req = {0}; const char *op_str; - uint16_t code; int ret; - memset(&bind_msg, 0, sizeof(bind_msg)); - code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : + req.code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR : HNS3_MBX_UNMAP_RING_TO_VECTOR; - bind_msg.vector_id = (uint8_t)vector_id; + req.vector_id = (uint8_t)vector_id; + req.ring_num = 1; if (queue_type == HNS3_RING_TYPE_RX) - bind_msg.param[0].int_gl_index = HNS3_RING_GL_RX; + req.ring_param[0].int_gl_index = HNS3_RING_GL_RX; else - bind_msg.param[0].int_gl_index = HNS3_RING_GL_TX; - - bind_msg.param[0].ring_type = queue_type; - bind_msg.ring_num = 1; - bind_msg.param[0].tqp_index = queue_id; + req.ring_param[0].int_gl_index = HNS3_RING_GL_TX; + req.ring_param[0].ring_type = queue_type; + req.ring_param[0].tqp_index = queue_id; op_str = mmap ? "Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, code, 0, (uint8_t *)&bind_msg, - sizeof(bind_msg), false, NULL, 0); + ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, + HNS3_RING_VECTOR_DATA_SIZE, false, NULL, 0); if (ret) - hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret is %d.", - op_str, queue_id, bind_msg.vector_id, ret); + hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", + op_str, queue_id, req.vector_id, ret); return ret; } @@ -965,19 +964,16 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { -#define HNS3VF_VLAN_MBX_MSG_LEN 5 + struct hns3_mbx_vlan_filter vlan_filter = {0}; struct hns3_hw *hw = &hns->hw; - uint8_t msg_data[HNS3VF_VLAN_MBX_MSG_LEN]; - uint16_t proto = htons(RTE_ETHER_TYPE_VLAN); - uint8_t is_kill = on ? 0 : 1; - msg_data[0] = is_kill; - memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id)); - memcpy(&msg_data[3], &proto, sizeof(proto)); + vlan_filter.is_kill = on ? 
0 : 1; + vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - msg_data, HNS3VF_VLAN_MBX_MSG_LEN, true, NULL, - 0); + (uint8_t *)&vlan_filter, sizeof(vlan_filter), + true, NULL, 0); } static int diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index f1743c195efa..ad5ec555b39e 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -11,8 +11,6 @@ #include "hns3_intr.h" #include "hns3_rxtx.h" -#define HNS3_CMD_CODE_OFFSET 2 - static const struct errno_respcode_map err_code_map[] = { {0, 0}, {1, -EPERM}, @@ -127,29 +125,30 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, struct hns3_mbx_vf_to_pf_cmd *req; struct hns3_cmd_desc desc; bool is_ring_vector_msg; - int offset; int ret; req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; /* first two bytes are reserved for code & subcode */ - if (msg_len > (HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET)) { + if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { hns3_err(hw, "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET); + msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); return -EINVAL; } hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg[0] = code; + req->msg.code = code; is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || (code == HNS3_MBX_GET_RING_VECTOR_MAP); if (!is_ring_vector_msg) - req->msg[1] = subcode; + req->msg.subcode = subcode; if (msg_data) { - offset = is_ring_vector_msg ? 1 : HNS3_CMD_CODE_OFFSET; - memcpy(&req->msg[offset], msg_data, msg_len); + if (is_ring_vector_msg) + memcpy(&req->msg.vector_id, msg_data, msg_len); + else + memcpy(&req->msg.data, msg_data, msg_len); } /* synchronous send */ @@ -296,11 +295,8 @@ static void hns3pf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_vf_to_pf_cmd *req) { -#define LINK_STATUS_OFFSET 1 -#define LINK_FAIL_CODE_OFFSET 2 - - if (!req->msg[LINK_STATUS_OFFSET]) - hns3_link_fail_parse(hw, req->msg[LINK_FAIL_CODE_OFFSET]); + if (!req->msg.link_status) + hns3_link_fail_parse(hw, req->msg.link_fail_code); hns3_update_linkstatus_and_event(hw, true); } diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 4a328802b920..3f623ba64ca4 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -89,7 +89,6 @@ enum hns3_mbx_link_fail_subcode { HNS3_MBX_LF_XSFP_ABSENT, }; -#define HNS3_MBX_MAX_MSG_SIZE 16 #define HNS3_MBX_MAX_RESP_DATA_SIZE 8 #define HNS3_MBX_DEF_TIME_LIMIT_MS 500 @@ -107,6 +106,48 @@ struct hns3_mbx_resp_status { uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE]; }; +struct hns3_ring_chain_param { + uint8_t ring_type; + uint8_t tqp_index; + uint8_t int_gl_index; +}; + +#pragma pack(1) +struct hns3_mbx_vlan_filter { + uint8_t is_kill; + uint16_t vlan_id; + uint16_t proto; +}; +#pragma pack() + +#define HNS3_MBX_MSG_MAX_DATA_SIZE 14 +#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 +struct hns3_vf_to_pf_msg { + uint8_t code; + union { + struct { + uint8_t subcode; + uint8_t data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; + struct { + uint8_t en_bc; + uint8_t en_uc; + uint8_t en_mc; + uint8_t en_limit_promisc; + }; + struct { + uint8_t vector_id; + uint8_t ring_num; + struct hns3_ring_chain_param + ring_param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; + }; + struct { + uint8_t link_status; + uint8_t link_fail_code; + }; + }; +}; + struct 
errno_respcode_map { uint16_t resp_code; int err_no; @@ -122,7 +163,7 @@ struct hns3_mbx_vf_to_pf_cmd { uint8_t msg_len; uint8_t rsv2; uint16_t match_id; - uint8_t msg[HNS3_MBX_MAX_MSG_SIZE]; + struct hns3_vf_to_pf_msg msg; }; struct hns3_mbx_pf_to_vf_cmd { @@ -134,19 +175,6 @@ struct hns3_mbx_pf_to_vf_cmd { uint16_t msg[8]; }; -struct hns3_ring_chain_param { - uint8_t ring_type; - uint8_t tqp_index; - uint8_t int_gl_index; -}; - -#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4 -struct hns3_vf_bind_vector_msg { - uint8_t vector_id; - uint8_t ring_num; - struct hns3_ring_chain_param param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM]; -}; - struct hns3_pf_rst_done_cmd { uint8_t pf_rst_done; uint8_t rsv[23]; From patchwork Thu Dec 7 01:37:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jie Hai X-Patchwork-Id: 134892 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C3F9F43692; Thu, 7 Dec 2023 02:41:38 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6635042E99; Thu, 7 Dec 2023 02:41:26 +0100 (CET) Received: from szxga02-in.huawei.com (szxga02-in.huawei.com [45.249.212.188]) by mails.dpdk.org (Postfix) with ESMTP id F1EED40295 for ; Thu, 7 Dec 2023 02:41:23 +0100 (CET) Received: from kwepemd100004.china.huawei.com (unknown [172.30.72.54]) by szxga02-in.huawei.com (SkyGuard) with ESMTP id 4SlxhW4cCKzShxH; Thu, 7 Dec 2023 09:36:59 +0800 (CST) Received: from localhost.localdomain (10.67.165.2) by kwepemd100004.china.huawei.com (7.221.188.31) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.2.1258.28; Thu, 7 Dec 2023 09:41:21 +0800 From: Jie Hai To: , Yisen Zhuang , Chunsong Feng , Huisong Li , Hao Chen , "Min Hu (Connor)" , Ferruh Yigit CC: , , Subject: [PATCH v3 2/4] net/hns3: refactor PF mailbox message struct Date: Thu, 7 Dec 2023 09:37:30 +0800 Message-ID: <20231207013732.3987482-3-haijie1@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20231207013732.3987482-1-haijie1@huawei.com> References: <20231108034434.559030-1-haijie1@huawei.com> <20231207013732.3987482-1-haijie1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.2] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemd100004.china.huawei.com (7.221.188.31) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Dengdui Huang The data region in PF to VF mbx message command is used to communicate with VF driver. And this data region exists as an array. As a result, some complicated feature commands, like mailbox response, link change event, close promisc mode, reset request and update pvid state, have to use magic number to set them. This isn't good for maintenance of driver. So this patch refactors these messages by extracting an hns3_pf_to_vf_msg structure. 
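To illustrate the idea behind this refactor, here is a minimal standalone sketch (not the driver code itself; demo_pf_to_vf_msg and its field names are stand-ins for the real hns3_pf_to_vf_msg, and the wire values are placeholders): a union of named per-message fields lets the field name carry the meaning that a magic array index used to hide.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Trimmed stand-in for a PF-to-VF mailbox message body: a 16-bit code
     * followed by per-message fields, with a raw view kept for comparison. */
    struct demo_pf_to_vf_msg {
            uint16_t code;
            union {
                    uint16_t reset_level;   /* meaningful for a reset request */
                    uint16_t pvid_state;    /* meaningful for a PVID update */
                    uint16_t raw[7];        /* old-style, index-based view */
            };
    };

    int main(void)
    {
            uint16_t wire[8] = { 0 /* code, placeholder */, 3 /* reset level */ };
            struct demo_pf_to_vf_msg msg;

            memcpy(&msg, wire, sizeof(msg));

            /* Old style: index 1 is a magic number whose meaning depends on code. */
            printf("old: reset level = %u\n", (unsigned int)wire[1]);

            /* New style: the field name documents what the payload means. */
            printf("new: reset level = %u\n", (unsigned int)msg.reset_level);
            return 0;
    }

The same named-field access pattern is what the hunks below apply to the reset, PVID, promiscuous and link-status handlers.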
Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang Signed-off-by: Jie Hai --- drivers/net/hns3/hns3_mbx.c | 38 ++++++++++++++++++------------------- drivers/net/hns3/hns3_mbx.h | 25 +++++++++++++++++++++++- 2 files changed, 43 insertions(+), 20 deletions(-) diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index ad5ec555b39e..c90f5d59ba21 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -192,17 +192,17 @@ static void hns3vf_handle_link_change_event(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { + struct hns3_mbx_link_status *link_info = + (struct hns3_mbx_link_status *)req->msg.msg_data; uint8_t link_status, link_duplex; - uint16_t *msg_q = req->msg; uint8_t support_push_lsc; uint32_t link_speed; - memcpy(&link_speed, &msg_q[2], sizeof(link_speed)); - link_status = rte_le_to_cpu_16(msg_q[1]); - link_duplex = (uint8_t)rte_le_to_cpu_16(msg_q[4]); - hns3vf_update_link_status(hw, link_status, link_speed, - link_duplex); - support_push_lsc = (*(uint8_t *)&msg_q[5]) & 1u; + link_status = (uint8_t)rte_le_to_cpu_16(link_info->link_status); + link_speed = rte_le_to_cpu_32(link_info->speed); + link_duplex = (uint8_t)rte_le_to_cpu_16(link_info->duplex); + hns3vf_update_link_status(hw, link_status, link_speed, link_duplex); + support_push_lsc = (link_info->flag) & 1u; hns3vf_update_push_lsc_cap(hw, support_push_lsc); } @@ -211,7 +211,6 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { enum hns3_reset_level reset_level; - uint16_t *msg_q = req->msg; /* * PF has asserted reset hence VF should go in pending @@ -219,7 +218,7 @@ hns3_handle_asserting_reset(struct hns3_hw *hw, * has been completely reset. After this stack should * eventually be re-initialized. */ - reset_level = rte_le_to_cpu_16(msg_q[1]); + reset_level = rte_le_to_cpu_16(req->msg.reset_level); hns3_atomic_set_bit(reset_level, &hw->reset.pending); hns3_warn(hw, "PF inform reset level %d", reset_level); @@ -241,8 +240,9 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * to match the request. */ if (req->match_id == resp->match_id) { - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = + hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -255,7 +255,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) * support copy request's match_id to its response. So VF follows the * original scheme to process. 
*/ - msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2]; + msg_data = (uint32_t)req->msg.vf_mbx_msg_code << + HNS3_MBX_RESP_CODE_OFFSET | req->msg.vf_mbx_msg_subcode; if (resp->req_msg_data != msg_data) { hns3_warn(hw, "received response tag (%u) is mismatched with requested tag (%u)", @@ -263,8 +264,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) return; } - resp->resp_status = hns3_resp_to_errno(req->msg[3]); - memcpy(resp->additional_info, &req->msg[4], + resp->resp_status = hns3_resp_to_errno(req->msg.resp_status); + memcpy(resp->additional_info, &req->msg.resp_data, HNS3_MBX_MAX_RESP_DATA_SIZE); rte_io_wmb(); resp->received_match_resp = true; @@ -305,8 +306,7 @@ static void hns3_update_port_base_vlan_info(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req) { -#define PVID_STATE_OFFSET 1 - uint16_t new_pvid_state = req->msg[PVID_STATE_OFFSET] ? + uint16_t new_pvid_state = req->msg.pvid_state ? HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE; /* * Currently, hardware doesn't support more than two layers VLAN offload @@ -355,7 +355,7 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) while (next_to_use != tail) { desc = &crq->desc[next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[next_to_use].flag); if (!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B)) @@ -428,7 +428,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) desc = &crq->desc[crq->next_to_use]; req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data; - opcode = req->msg[0] & 0xff; + opcode = req->msg.code & 0xff; flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { @@ -484,7 +484,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) * hns3 PF kernel driver, VF driver will receive this * mailbox message from PF driver. 
*/ - hns3_handle_promisc_info(hw, req->msg[1]); + hns3_handle_promisc_info(hw, req->msg.promisc_en); break; default: hns3_err(hw, "received unsupported(%u) mbx msg", diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 3f623ba64ca4..64f30d2923ea 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -118,6 +118,13 @@ struct hns3_mbx_vlan_filter { uint16_t vlan_id; uint16_t proto; }; + +struct hns3_mbx_link_status { + uint16_t link_status; + uint32_t speed; + uint16_t duplex; + uint8_t flag; +}; #pragma pack() #define HNS3_MBX_MSG_MAX_DATA_SIZE 14 @@ -148,6 +155,22 @@ struct hns3_vf_to_pf_msg { }; }; +struct hns3_pf_to_vf_msg { + uint16_t code; + union { + struct { + uint16_t vf_mbx_msg_code; + uint16_t vf_mbx_msg_subcode; + uint16_t resp_status; + uint8_t resp_data[HNS3_MBX_MAX_RESP_DATA_SIZE]; + }; + uint16_t promisc_en; + uint16_t reset_level; + uint16_t pvid_state; + uint8_t msg_data[HNS3_MBX_MSG_MAX_DATA_SIZE]; + }; +}; + struct errno_respcode_map { uint16_t resp_code; int err_no; @@ -172,7 +195,7 @@ struct hns3_mbx_pf_to_vf_cmd { uint8_t msg_len; uint8_t rsv1; uint16_t match_id; - uint16_t msg[8]; + struct hns3_pf_to_vf_msg msg; }; struct hns3_pf_rst_done_cmd { From patchwork Thu Dec 7 01:37:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jie Hai X-Patchwork-Id: 134894 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A09B943692; Thu, 7 Dec 2023 02:41:55 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4C41442EAD; Thu, 7 Dec 2023 02:41:29 +0100 (CET) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 8088D427DE for ; Thu, 7 Dec 2023 02:41:24 +0100 (CET) Received: from kwepemd100004.china.huawei.com (unknown [172.30.72.54]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4Slxmp53lpzYsq4; Thu, 7 Dec 2023 09:40:42 +0800 (CST) Received: from localhost.localdomain (10.67.165.2) by kwepemd100004.china.huawei.com (7.221.188.31) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.2.1258.28; Thu, 7 Dec 2023 09:41:22 +0800 From: Jie Hai To: , Yisen Zhuang , "Min Hu (Connor)" , "Wei Hu (Xavier)" , Chunsong Feng , Hao Chen , Ferruh Yigit CC: , , , Subject: [PATCH v3 3/4] net/hns3: refactor send mailbox function Date: Thu, 7 Dec 2023 09:37:31 +0800 Message-ID: <20231207013732.3987482-4-haijie1@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20231207013732.3987482-1-haijie1@huawei.com> References: <20231108034434.559030-1-haijie1@huawei.com> <20231207013732.3987482-1-haijie1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.2] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemd100004.china.huawei.com (7.221.188.31) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Dengdui Huang The 'hns3_send_mbx_msg' function has following problem: 1. the name is vague, missing caller indication. 2. too many input parameters because the filling messages are placed in commands the send command. 
Therefore, a common interface is encapsulated to fill in the mailbox message before sending it. Fixes: 463e748964f5 ("net/hns3: support mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang Signed-off-by: Jie Hai --- drivers/net/hns3/hns3_ethdev_vf.c | 141 ++++++++++++++++++------------ drivers/net/hns3/hns3_mbx.c | 50 ++++------- drivers/net/hns3/hns3_mbx.h | 8 +- drivers/net/hns3/hns3_rxtx.c | 18 ++-- 4 files changed, 116 insertions(+), 101 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index 4cddf01d6f20..b0d0c29df191 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -91,11 +91,13 @@ hns3vf_add_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_ADD, mac_addr->addr_bytes, - RTE_ETHER_ADDR_LEN, false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -110,12 +112,13 @@ hns3vf_remove_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { /* mac address was checked by upper level interface */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -134,6 +137,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *old_addr; uint8_t addr_bytes[HNS3_TWO_ETHER_ADDR_LEN]; /* for 2 MAC addresses */ char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; /* @@ -146,9 +150,10 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev, memcpy(&addr_bytes[RTE_ETHER_ADDR_LEN], old_addr->addr_bytes, RTE_ETHER_ADDR_LEN); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST, - HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes, - HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST, + HNS3_MBX_MAC_VLAN_UC_MODIFY); + memcpy(req.data, addr_bytes, HNS3_TWO_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) { /* * The hns3 VF PMD depends on the hns3 PF kernel ethdev @@ -185,12 +190,13 @@ hns3vf_add_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_ADD, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_ADD); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -206,12 +212,13 @@ hns3vf_remove_mc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr) { char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + struct hns3_vf_to_pf_msg req; int ret; - 
ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST, - HNS3_MBX_MAC_VLAN_MC_REMOVE, - mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST, + HNS3_MBX_MAC_VLAN_MC_REMOVE); + memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); @@ -348,7 +355,6 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, bool mmap, enum hns3_ring_type queue_type, uint16_t queue_id) { -#define HNS3_RING_VECTOR_DATA_SIZE 14 struct hns3_vf_to_pf_msg req = {0}; const char *op_str; int ret; @@ -365,8 +371,7 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id, req.ring_param[0].ring_type = queue_type; req.ring_param[0].tqp_index = queue_id; op_str = mmap ? "Map" : "Unmap"; - ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id, - HNS3_RING_VECTOR_DATA_SIZE, false, NULL, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.", op_str, queue_id, req.vector_id, ret); @@ -452,10 +457,12 @@ hns3vf_dev_configure(struct rte_eth_dev *dev) static int hns3vf_config_mtu(struct hns3_hw *hw, uint16_t mtu) { + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0, (const uint8_t *)&mtu, - sizeof(mtu), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_MTU, 0); + memcpy(req.data, &mtu, sizeof(mtu)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "Failed to set mtu (%u) for vf: %d", mtu, ret); @@ -646,12 +653,13 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) uint16_t val = HNS3_PF_PUSH_LSC_CAP_NOT_SUPPORTED; uint16_t exp = HNS3_PF_PUSH_LSC_CAP_UNKNOWN; struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; __atomic_store_n(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN, __ATOMIC_RELEASE); - (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + (void)hns3vf_mbx_send(hw, &req, false, NULL, 0); while (remain_ms > 0) { rte_delay_ms(HNS3_POLL_RESPONE_MS); @@ -746,12 +754,13 @@ hns3vf_check_tqp_info(struct hns3_hw *hw) static int hns3vf_get_port_base_vlan_filter_state(struct hns3_hw *hw) { + struct hns3_vf_to_pf_msg req; uint8_t resp_msg; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_GET_PORT_BASE_VLAN_STATE, NULL, 0, - true, &resp_msg, sizeof(resp_msg)); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_GET_PORT_BASE_VLAN_STATE); + ret = hns3vf_mbx_send(hw, &req, true, &resp_msg, sizeof(resp_msg)); if (ret) { if (ret == -ETIME) { /* @@ -792,10 +801,12 @@ hns3vf_get_queue_info(struct hns3_hw *hw) { #define HNS3VF_TQPS_RSS_INFO_LEN 6 uint8_t resp_msg[HNS3VF_TQPS_RSS_INFO_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QINFO, 0, NULL, 0, true, - resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_QINFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, + resp_msg, HNS3VF_TQPS_RSS_INFO_LEN); if (ret) { PMD_INIT_LOG(ERR, "Failed to get tqp info from PF: %d", ret); return ret; @@ -833,10 +844,11 @@ hns3vf_get_basic_info(struct hns3_hw *hw) { uint8_t resp_msg[HNS3_MBX_MAX_RESP_DATA_SIZE]; struct hns3_basic_info *basic_info; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_BASIC_INFO, 0, NULL, 0, - true, resp_msg, sizeof(resp_msg)); + 
hns3vf_mbx_setup(&req, HNS3_MBX_GET_BASIC_INFO, 0); + ret = hns3vf_mbx_send(hw, &req, true, resp_msg, sizeof(resp_msg)); if (ret) { hns3_err(hw, "failed to get basic info from PF, ret = %d.", ret); @@ -856,10 +868,11 @@ static int hns3vf_get_host_mac_addr(struct hns3_hw *hw) { uint8_t host_mac[RTE_ETHER_ADDR_LEN]; + struct hns3_vf_to_pf_msg req; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_MAC_ADDR, 0, NULL, 0, - true, host_mac, RTE_ETHER_ADDR_LEN); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_MAC_ADDR, 0); + ret = hns3vf_mbx_send(hw, &req, true, host_mac, RTE_ETHER_ADDR_LEN); if (ret) { hns3_err(hw, "Failed to get mac addr from PF: %d", ret); return ret; @@ -908,6 +921,7 @@ static void hns3vf_request_link_info(struct hns3_hw *hw) { struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw); + struct hns3_vf_to_pf_msg req; bool send_req; int ret; @@ -919,8 +933,8 @@ hns3vf_request_link_info(struct hns3_hw *hw) if (!send_req) return; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false, - NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) { hns3_err(hw, "failed to fetch link status, ret = %d", ret); return; @@ -964,16 +978,18 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status, static int hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on) { - struct hns3_mbx_vlan_filter vlan_filter = {0}; + struct hns3_mbx_vlan_filter *vlan_filter; + struct hns3_vf_to_pf_msg req = {0}; struct hns3_hw *hw = &hns->hw; - vlan_filter.is_kill = on ? 0 : 1; - vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); - vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id); + req.code = HNS3_MBX_SET_VLAN; + req.subcode = HNS3_MBX_VLAN_FILTER; + vlan_filter = (struct hns3_mbx_vlan_filter *)req.data; + vlan_filter->is_kill = on ? 0 : 1; + vlan_filter->proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN); + vlan_filter->vlan_id = rte_cpu_to_le_16(vlan_id); - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER, - (uint8_t *)&vlan_filter, sizeof(vlan_filter), - true, NULL, 0); + return hns3vf_mbx_send(hw, &req, true, NULL, 0); } static int @@ -1002,6 +1018,7 @@ hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) static int hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; @@ -1009,9 +1026,10 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) return 0; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, - HNS3_MBX_ENABLE_VLAN_FILTER, &msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_ENABLE_VLAN_FILTER); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "%s vlan filter failed, ret = %d.", enable ? "enable" : "disable", ret); @@ -1022,12 +1040,15 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable) static int hns3vf_en_hw_strip_rxvtag(struct hns3_hw *hw, bool enable) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; int ret; msg_data = enable ? 1 : 0; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_RX_OFF_CFG, - &msg_data, sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN, + HNS3_MBX_VLAN_RX_OFF_CFG); + memcpy(req.data, &msg_data, sizeof(msg_data)); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "vf %s strip failed, ret = %d.", enable ? 
"enable" : "disable", ret); @@ -1171,11 +1192,13 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev) static int hns3vf_set_alive(struct hns3_hw *hw, bool alive) { + struct hns3_vf_to_pf_msg req; uint8_t msg_data; msg_data = alive ? 1 : 0; - return hns3_send_mbx_msg(hw, HNS3_MBX_SET_ALIVE, 0, &msg_data, - sizeof(msg_data), false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_SET_ALIVE, 0); + memcpy(req.data, &msg_data, sizeof(msg_data)); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static void @@ -1183,11 +1206,12 @@ hns3vf_keep_alive_handler(void *param) { struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param; struct hns3_adapter *hns = eth_dev->data->dev_private; + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; - ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0, - false, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_KEEP_ALIVE, 0); + ret = hns3vf_mbx_send(hw, &req, false, NULL, 0); if (ret) hns3_err(hw, "VF sends keeping alive cmd failed(=%d)", ret); @@ -1326,9 +1350,11 @@ hns3vf_init_hardware(struct hns3_adapter *hns) static int hns3vf_clear_vport_list(struct hns3_hw *hw) { - return hns3_send_mbx_msg(hw, HNS3_MBX_HANDLE_VF_TBL, - HNS3_MBX_VPORT_LIST_CLEAR, NULL, 0, false, - NULL, 0); + struct hns3_vf_to_pf_msg req; + + hns3vf_mbx_setup(&req, HNS3_MBX_HANDLE_VF_TBL, + HNS3_MBX_VPORT_LIST_CLEAR); + return hns3vf_mbx_send(hw, &req, false, NULL, 0); } static int @@ -1797,12 +1823,13 @@ hns3vf_wait_hardware_ready(struct hns3_adapter *hns) static int hns3vf_prepare_reset(struct hns3_adapter *hns) { + struct hns3_vf_to_pf_msg req; struct hns3_hw *hw = &hns->hw; int ret; if (hw->reset.level == HNS3_VF_FUNC_RESET) { - ret = hns3_send_mbx_msg(hw, HNS3_MBX_RESET, 0, NULL, - 0, true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) return ret; } diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index c90f5d59ba21..43195ff184b1 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -24,6 +24,14 @@ static const struct errno_respcode_map err_code_map[] = { {95, -EOPNOTSUPP}, }; +void +hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode) +{ + memset(req, 0, sizeof(struct hns3_vf_to_pf_msg)); + req->code = code; + req->subcode = subcode; +} + static int hns3_resp_to_errno(uint16_t resp_code) { @@ -118,45 +126,24 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode) } int -hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len) +hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req, bool need_resp, + uint8_t *resp_data, uint16_t resp_len) { - struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_mbx_vf_to_pf_cmd *cmd; struct hns3_cmd_desc desc; - bool is_ring_vector_msg; int ret; - req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; - - /* first two bytes are reserved for code & subcode */ - if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) { - hns3_err(hw, - "VF send mbx msg fail, msg len %u exceeds max payload len %d", - msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE); - return -EINVAL; - } - hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false); - req->msg.code = code; - is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) || - (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) || - (code == HNS3_MBX_GET_RING_VECTOR_MAP); - if (!is_ring_vector_msg) - req->msg.subcode = subcode; - if (msg_data) { - 
if (is_ring_vector_msg) - memcpy(&req->msg.vector_id, msg_data, msg_len); - else - memcpy(&req->msg.data, msg_data, msg_len); - } + cmd = (struct hns3_mbx_vf_to_pf_cmd *)desc.data; + cmd->msg = *req; /* synchronous send */ if (need_resp) { - req->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; + cmd->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT; rte_spinlock_lock(&hw->mbx_resp.lock); - hns3_mbx_prepare_resp(hw, code, subcode); - req->match_id = hw->mbx_resp.match_id; + hns3_mbx_prepare_resp(hw, req->code, req->subcode); + cmd->match_id = hw->mbx_resp.match_id; ret = hns3_cmd_send(hw, &desc, 1); if (ret) { rte_spinlock_unlock(&hw->mbx_resp.lock); @@ -165,7 +152,8 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return ret; } - ret = hns3_get_mbx_resp(hw, code, subcode, resp_data, resp_len); + ret = hns3_get_mbx_resp(hw, req->code, req->subcode, + resp_data, resp_len); rte_spinlock_unlock(&hw->mbx_resp.lock); } else { /* asynchronous send */ diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 64f30d2923ea..360e91c30eb9 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -210,7 +210,9 @@ struct hns3_pf_rst_done_cmd { struct hns3_hw; void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); -int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode, - const uint8_t *msg_data, uint8_t msg_len, bool need_resp, - uint8_t *resp_data, uint16_t resp_len); +void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, + uint8_t code, uint8_t subcode); +int hns3vf_mbx_send(struct hns3_hw *hw, + struct hns3_vf_to_pf_msg *req_msg, bool need_resp, + uint8_t *resp_data, uint16_t resp_len); #endif /* HNS3_MBX_H */ diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index 09b7e90c7000..9087bcffed9b 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -686,13 +686,12 @@ hns3pf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) static int hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id) { - uint8_t msg_data[2]; + struct hns3_vf_to_pf_msg req; int ret; - memcpy(msg_data, &queue_id, sizeof(uint16_t)); - - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, NULL, 0); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + memcpy(req.data, &queue_id, sizeof(uint16_t)); + ret = hns3vf_mbx_send(hw, &req, true, NULL, 0); if (ret) hns3_err(hw, "fail to reset tqp, queue_id = %u, ret = %d.", queue_id, ret); @@ -769,15 +768,14 @@ static int hns3vf_reset_all_tqps(struct hns3_hw *hw) { #define HNS3VF_RESET_ALL_TQP_DONE 1U + struct hns3_vf_to_pf_msg req; uint8_t reset_status; - uint8_t msg_data[2]; int ret; uint16_t i; - memset(msg_data, 0, sizeof(msg_data)); - ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data, - sizeof(msg_data), true, &reset_status, - sizeof(reset_status)); + hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0); + ret = hns3vf_mbx_send(hw, &req, true, + &reset_status, sizeof(reset_status)); if (ret) { hns3_err(hw, "fail to send rcb reset mbx, ret = %d.", ret); return ret; From patchwork Thu Dec 7 01:37:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jie Hai X-Patchwork-Id: 134893 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0C50943692; Thu, 7 Dec 2023 02:41:46 +0100 (CET) Received: 
from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BF08A42E9F; Thu, 7 Dec 2023 02:41:27 +0100 (CET) Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by mails.dpdk.org (Postfix) with ESMTP id 8FB5942E05 for ; Thu, 7 Dec 2023 02:41:24 +0100 (CET) Received: from kwepemd100004.china.huawei.com (unknown [172.30.72.53]) by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4SlxjB4WNkzrVVq; Thu, 7 Dec 2023 09:37:34 +0800 (CST) Received: from localhost.localdomain (10.67.165.2) by kwepemd100004.china.huawei.com (7.221.188.31) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.2.1258.28; Thu, 7 Dec 2023 09:41:22 +0800 From: Jie Hai To: , Yisen Zhuang , Hao Chen , Ferruh Yigit , "Min Hu (Connor)" , Huisong Li , "Wei Hu (Xavier)" , Hongbo Zheng CC: , , Subject: [PATCH v3 4/4] net/hns3: refactor handle mailbox function Date: Thu, 7 Dec 2023 09:37:32 +0800 Message-ID: <20231207013732.3987482-5-haijie1@huawei.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20231207013732.3987482-1-haijie1@huawei.com> References: <20231108034434.559030-1-haijie1@huawei.com> <20231207013732.3987482-1-haijie1@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.67.165.2] X-ClientProxiedBy: dggems702-chm.china.huawei.com (10.3.19.179) To kwepemd100004.china.huawei.com (7.221.188.31) X-CFilter-Loop: Reflected X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Dengdui Huang The mailbox messages of the PF and VF are processed in the same function. The PF and VF call the same function to process the messages. This code is excessive coupling and isn't good for maintenance. Therefore, this patch separates the interfaces that handle PF mailbox message and handle VF mailbox message. 
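For illustration only, a toy standalone sketch of this design choice (assumed toy_* names, not the driver code): replacing one flag-driven handler with a dedicated entry point per role removes the per-message role branching that couples the PF and VF paths.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-in for one received mailbox descriptor. */
    struct toy_mbx_msg {
            unsigned int code;
    };

    /* Before: a single handler keyed off an is_vf flag, so every message
     * type carries both roles' logic. */
    static void toy_handle_mbx_msg(const struct toy_mbx_msg *m, bool is_vf)
    {
            if (is_vf)
                    printf("VF handler: code %u\n", m->code);
            else
                    printf("PF handler: code %u\n", m->code);
    }

    /* After: one dedicated entry point per role, no cross-role branching. */
    static void toy_pf_handle_mbx_msg(const struct toy_mbx_msg *m)
    {
            printf("PF handler: code %u\n", m->code);
    }

    static void toy_vf_handle_mbx_msg(const struct toy_mbx_msg *m)
    {
            printf("VF handler: code %u\n", m->code);
    }

    int main(void)
    {
            struct toy_mbx_msg m = { 1 };

            toy_handle_mbx_msg(&m, true);   /* old, flag-driven dispatch */
            toy_pf_handle_mbx_msg(&m);      /* new, explicit per-role entry points */
            toy_vf_handle_mbx_msg(&m);
            return 0;
    }

The hunks below follow the same split: the PF interrupt handler calls hns3pf_handle_mbx_msg() and the VF paths call hns3vf_handle_mbx_msg().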
Fixes: 463e748964f5 ("net/hns3: support mailbox") Fixes: 109e4dd1bd7a ("net/hns3: get link state change through mailbox") Cc: stable@dpdk.org Signed-off-by: Dengdui Huang Signed-off-by: Jie Hai --- drivers/net/hns3/hns3_ethdev.c | 2 +- drivers/net/hns3/hns3_ethdev.h | 2 +- drivers/net/hns3/hns3_ethdev_vf.c | 4 +- drivers/net/hns3/hns3_mbx.c | 70 ++++++++++++++++++++++++------- drivers/net/hns3/hns3_mbx.h | 3 +- 5 files changed, 60 insertions(+), 21 deletions(-) diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index ae81368f68ae..bccd9db0dd4d 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -380,7 +380,7 @@ hns3_interrupt_handler(void *param) hns3_warn(hw, "received reset interrupt"); hns3_schedule_reset(hns); } else if (event_cause == HNS3_VECTOR0_EVENT_MBX) { - hns3_dev_handle_mbx_msg(hw); + hns3pf_handle_mbx_msg(hw); } else if (event_cause != HNS3_VECTOR0_EVENT_PTP) { hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x " "ras_int_stat:0x%x cmdq_int_stat:0x%x", diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h index 12d8299def39..f6c705472d0e 100644 --- a/drivers/net/hns3/hns3_ethdev.h +++ b/drivers/net/hns3/hns3_ethdev.h @@ -403,7 +403,7 @@ struct hns3_reset_data { /* Reset flag, covering the entire reset process */ uint16_t resetting; /* Used to disable sending cmds during reset */ - uint16_t disable_cmd; + RTE_ATOMIC(uint16_t) disable_cmd; /* The reset level being processed */ enum hns3_reset_level level; /* Reset level set, each bit represents a reset level */ diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index b0d0c29df191..f5a7a2b1f46c 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -618,7 +618,7 @@ hns3vf_interrupt_handler(void *param) hns3_schedule_reset(hns); break; case HNS3VF_VECTOR0_EVENT_MBX: - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); break; default: break; @@ -670,7 +670,7 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw) * driver has to actively handle the HNS3_MBX_LINK_STAT_CHANGE * mailbox from PF driver to get this capability. 
*/ - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) != HNS3_PF_PUSH_LSC_CAP_UNKNOWN) break; diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c index 43195ff184b1..a8cbb153dc62 100644 --- a/drivers/net/hns3/hns3_mbx.c +++ b/drivers/net/hns3/hns3_mbx.c @@ -78,7 +78,7 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode, return -EIO; } - hns3_dev_handle_mbx_msg(hw); + hns3vf_handle_mbx_msg(hw); rte_delay_us(HNS3_WAIT_RESP_US); if (hw->mbx_resp.received_match_resp) @@ -372,9 +372,58 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw) } void -hns3_dev_handle_mbx_msg(struct hns3_hw *hw) +hns3pf_handle_mbx_msg(struct hns3_hw *hw) +{ + struct hns3_cmq_ring *crq = &hw->cmq.crq; + struct hns3_mbx_vf_to_pf_cmd *req; + struct hns3_cmd_desc *desc; + uint16_t flag; + + rte_spinlock_lock(&hw->cmq.crq.lock); + + while (!hns3_cmd_crq_empty(hw)) { + if (rte_atomic_load_explicit(&hw->reset.disable_cmd, + rte_memory_order_relaxed)) { + rte_spinlock_unlock(&hw->cmq.crq.lock); + return; + } + desc = &crq->desc[crq->next_to_use]; + req = (struct hns3_mbx_vf_to_pf_cmd *)desc->data; + + flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag); + if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) { + hns3_warn(hw, + "dropped invalid mailbox message, code = %u", + req->msg.code); + + /* dropping/not processing this invalid message */ + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + continue; + } + + switch (req->msg.code) { + case HNS3_MBX_PUSH_LINK_STATUS: + hns3pf_handle_link_change_event(hw, req); + break; + default: + hns3_err(hw, "received unsupported(%u) mbx msg", + req->msg.code); + break; + } + crq->desc[crq->next_to_use].flag = 0; + hns3_mbx_ring_ptr_move_crq(crq); + } + + /* Write back CMDQ_RQ header pointer, IMP need this pointer */ + hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use); + + rte_spinlock_unlock(&hw->cmq.crq.lock); +} + +void +hns3vf_handle_mbx_msg(struct hns3_hw *hw) { - struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw); struct hns3_cmq_ring *crq = &hw->cmq.crq; struct hns3_mbx_pf_to_vf_cmd *req; struct hns3_cmd_desc *desc; @@ -385,7 +434,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) rte_spinlock_lock(&hw->cmq.crq.lock); handle_out = (rte_eal_process_type() != RTE_PROC_PRIMARY || - !rte_thread_is_intr()) && hns->is_vf; + !rte_thread_is_intr()); if (handle_out) { /* * Currently, any threads in the primary and secondary processes @@ -430,8 +479,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) continue; } - handle_out = hns->is_vf && desc->opcode == 0; - if (handle_out) { + if (desc->opcode == 0) { /* Message already processed by other thread */ crq->desc[crq->next_to_use].flag = 0; hns3_mbx_ring_ptr_move_crq(crq); @@ -448,16 +496,6 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw) case HNS3_MBX_ASSERTING_RESET: hns3_handle_asserting_reset(hw, req); break; - case HNS3_MBX_PUSH_LINK_STATUS: - /* - * This message is reported by the firmware and is - * reported in 'struct hns3_mbx_vf_to_pf_cmd' format. - * Therefore, we should cast the req variable to - * 'struct hns3_mbx_vf_to_pf_cmd' and then process it. 
- */ - hns3pf_handle_link_change_event(hw, - (struct hns3_mbx_vf_to_pf_cmd *)req); - break; case HNS3_MBX_PUSH_VLAN_INFO: /* * When the PVID configuration status of VF device is diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h index 360e91c30eb9..967d9df3bcac 100644 --- a/drivers/net/hns3/hns3_mbx.h +++ b/drivers/net/hns3/hns3_mbx.h @@ -209,7 +209,8 @@ struct hns3_pf_rst_done_cmd { ((crq)->next_to_use = ((crq)->next_to_use + 1) % (crq)->desc_num) struct hns3_hw; -void hns3_dev_handle_mbx_msg(struct hns3_hw *hw); +void hns3pf_handle_mbx_msg(struct hns3_hw *hw); +void hns3vf_handle_mbx_msg(struct hns3_hw *hw); void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode); int hns3vf_mbx_send(struct hns3_hw *hw,