From patchwork Wed Sep 22 07:09:12 2021
X-Patchwork-Submitter: "humin (Q)"
X-Patchwork-Id: 99396
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: "Min Hu (Connor)"
Date: Wed, 22 Sep 2021 15:09:12 +0800
Message-ID: <20210922070913.59515-2-humin29@huawei.com>
In-Reply-To: <20210922070913.59515-1-humin29@huawei.com>
References: <20210922070913.59515-1-humin29@huawei.com>
Subject: [dpdk-dev] [PATCH 1/2] net/bonding: fix dedicated queue mode in vector burst
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"
From: Chengchang Tang

If vector burst mode is selected, the dedicated queue mode does not take effect on some PMDs, because these PMDs may have limitations in vector burst mode, such as a minimum burst size. Currently, both hns3 and Intel i40e require the burst size to be a multiple of four when receiving packets in vector mode, so they cannot receive any packets if the burst size is below four. However, in dedicated queue mode, the burst size used by the periodic packet processing is one.

This patch fixes the problem by increasing the burst size to 32. This also makes the packet processing of the dedicated queue mode more reasonable. Currently, if multiple LACP packets arrive in the hardware queue within one period, only one LACP packet is processed in that period and the remaining packets are left for the following periods. After this change, all pending LACP packets are processed at once, which is closer to the behavior of the bonding driver when the dedicated queue is not enabled.
Fixes: 112891cd27e5 ("net/bonding: add dedicated HW queues for LACP control")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang
Signed-off-by: Min Hu (Connor)
---
 drivers/net/bonding/rte_eth_bond_8023ad.c | 32 ++++++++++++++++-------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 8b5b32fcaf..fc8ebd6320 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -838,6 +838,27 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
 		rx_machine(internals, slave_id, NULL);
 }
 
+static void
+bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
+			uint16_t slave_id)
+{
+#define DEDICATED_QUEUE_BURST_SIZE 32
+	struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
+	uint16_t rx_count = rte_eth_rx_burst(slave_id,
+				internals->mode4.dedicated_queues.rx_qid,
+				lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
+
+	if (rx_count) {
+		uint16_t i;
+
+		for (i = 0; i < rx_count; i++)
+			bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+					lacp_pkt[i]);
+	} else {
+		rx_machine_update(internals, slave_id, NULL);
+	}
+}
+
 static void
 bond_mode_8023ad_periodic_cb(void *arg)
 {
@@ -926,15 +947,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
 
 			rx_machine_update(internals, slave_id, lacp_pkt);
 		} else {
-			uint16_t rx_count = rte_eth_rx_burst(slave_id,
-					internals->mode4.dedicated_queues.rx_qid,
-					&lacp_pkt, 1);
-
-			if (rx_count == 1)
-				bond_mode_8023ad_handle_slow_pkt(internals,
-						slave_id, lacp_pkt);
-			else
-				rx_machine_update(internals, slave_id, NULL);
+			bond_mode_8023ad_dedicated_rxq_process(internals,
+					slave_id);
 		}
 
 		periodic_machine(internals, slave_id);

From patchwork Wed Sep 22 07:09:13 2021
X-Patchwork-Submitter: "humin (Q)"
X-Patchwork-Id: 99397
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: "Min Hu (Connor)"
Date: Wed, 22 Sep 2021 15:09:13 +0800
Message-ID: <20210922070913.59515-3-humin29@huawei.com>
In-Reply-To: <20210922070913.59515-1-humin29@huawei.com>
References: <20210922070913.59515-1-humin29@huawei.com>
Subject: [dpdk-dev] [PATCH 2/2] net/bonding: fix RSS key length
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Chengchang Tang

The hash_key_size information is currently not set, so applications cannot get the key size from dev_info(), which causes problems.
For example, in testpmd, the hash_key_size is checked before configuring or getting the hash key:

    testpmd> show port 4 rss-hash
    dev_info did not provide a valid hash key size
    testpmd> show port 4 rss-hash key
    dev_info did not provide a valid hash key size
    testpmd> port config 4 rss-hash-key ipv4 (hash key)
    dev_info did not provide a valid hash key size

This patch also changes the meaning of rss_key_len. Previously it only indicated the length of the configured hash key, so its value depended on the user's configuration, which is unreasonable. Now it indicates the minimum hash key length required by the bonded device, which is the shortest hash key among all slave drivers.

Fixes: 734ce47f71e0 ("bonding: support RSS dynamic configuration")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang
Signed-off-by: Min Hu (Connor)
---
 drivers/net/bonding/rte_eth_bond_api.c |  6 ++++
 drivers/net/bonding/rte_eth_bond_pmd.c | 44 ++++++++++++++++----------
 2 files changed, 33 insertions(+), 17 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index eb8d15d160..5140ef14c2 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -290,6 +290,7 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
 	struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
 
 	internals->reta_size = di->reta_size;
+	internals->rss_key_len = di->hash_key_size;
 
 	/* Inherit Rx offload capabilities from the first slave device */
 	internals->rx_offload_capa = di->rx_offload_capa;
@@ -385,6 +386,11 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
 	 */
 	if (internals->reta_size > di->reta_size)
 		internals->reta_size = di->reta_size;
+	if (internals->rss_key_len > di->hash_key_size) {
+		RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+				"configuring rss may fail");
+		internals->rss_key_len = di->hash_key_size;
+	}
 
 	if (!internals->max_rx_pktlen &&
 	    di->max_rx_pktlen < internals->candidate_max_rx_pktlen)
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 54987d96b3..15e09f01ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1701,14 +1701,11 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	/* If RSS is enabled for bonding, try to enable it for slaves */
 	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		if (internals->rss_key_len != 0) {
-			slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+		/* rss_key won't be empty if RSS is configured in bonded dev */
+		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 				internals->rss_key_len;
-			slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
 				internals->rss_key;
-		} else {
-			slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		}
 
 		slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
 				bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
@@ -2251,6 +2248,7 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = internals->flow_type_rss_offloads;
 	dev_info->reta_size = internals->reta_size;
+	dev_info->hash_key_size = internals->rss_key_len;
 
 	return 0;
 }
@@ -3040,13 +3038,15 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
 	if (bond_rss_conf.rss_hf != 0)
 		dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = bond_rss_conf.rss_hf;
 
-	if (bond_rss_conf.rss_key && bond_rss_conf.rss_key_len <
-			sizeof(internals->rss_key)) {
-		if (bond_rss_conf.rss_key_len == 0)
-			bond_rss_conf.rss_key_len = 40;
-		internals->rss_key_len = bond_rss_conf.rss_key_len;
+	if (bond_rss_conf.rss_key) {
+		if (bond_rss_conf.rss_key_len < internals->rss_key_len)
+			return -EINVAL;
+		else if (bond_rss_conf.rss_key_len > internals->rss_key_len)
+			RTE_BOND_LOG(WARNING,
+					"rss_key will be truncated");
+
 		memcpy(internals->rss_key, bond_rss_conf.rss_key,
 				internals->rss_key_len);
+		bond_rss_conf.rss_key_len = internals->rss_key_len;
 	}
 
 	for (i = 0; i < internals->slave_count; i++) {
@@ -3506,14 +3506,24 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	 * Fall back to default RSS key if the key is not specified
 	 */
 	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
-		if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
-			internals->rss_key_len =
-				dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len;
-			memcpy(internals->rss_key,
-				dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key,
+		struct rte_eth_rss_conf *rss_conf =
+			&dev->data->dev_conf.rx_adv_conf.rss_conf;
+		if (rss_conf->rss_key != NULL) {
+			if (internals->rss_key_len > rss_conf->rss_key_len) {
+				RTE_BOND_LOG(ERR, "Invalid rss key length(%u)",
+						rss_conf->rss_key_len);
+				return -EINVAL;
+			}
+
+			memcpy(internals->rss_key, rss_conf->rss_key,
 				internals->rss_key_len);
 		} else {
-			internals->rss_key_len = sizeof(default_rss_key);
+			if (internals->rss_key_len > sizeof(default_rss_key)) {
+				RTE_BOND_LOG(ERR,
+					"There is no suitable default hash key");
+				return -EINVAL;
+			}
+
 			memcpy(internals->rss_key, default_rss_key,
 				internals->rss_key_len);
 		}