From patchwork Mon Jun 9 07:04:43 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154166
X-Patchwork-Delegate: stephen@networkplumber.org
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu
Subject: [PATCH v2 01/12] net/txgbe: support flow filter for VF
Date: Mon, 9 Jun 2025 15:04:43 +0800
Message-ID: <868897A5704A810E+20250609070454.223387-2-jiawenwu@trustnetic.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>
References: <00DEAE896AFE0D2D+20250606080117.183198-1-jiawenwu@trustnetic.com>
 <20250609070454.223387-1-jiawenwu@trustnetic.com>
List-Id: DPDK patches and discussions

Add a 5-tuple filter for the VF driver, which requests the PF driver to
write the hardware configuration. Add a new PF-VF mailbox API version 2.1
to implement it.
Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/base/txgbe_hw.c   | 10 ++++
 drivers/net/txgbe/base/txgbe_hw.h   |  1 +
 drivers/net/txgbe/base/txgbe_mbx.h  | 17 ++++++
 drivers/net/txgbe/base/txgbe_vf.c   | 29 +++++++++++
 drivers/net/txgbe/base/txgbe_vf.h   |  2 +
 drivers/net/txgbe/txgbe_ethdev.c    | 12 ++++-
 drivers/net/txgbe/txgbe_ethdev.h    |  5 ++
 drivers/net/txgbe/txgbe_ethdev_vf.c | 80 +++++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_flow.c      | 10 ++++
 9 files changed, 164 insertions(+), 2 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index dd5d3ea1fe..ae2ad87c83 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -2485,6 +2485,16 @@ s32 txgbe_init_shared_code(struct txgbe_hw *hw)
 	return status;
 }
 
+bool txgbe_is_pf(struct txgbe_hw *hw)
+{
+	switch (hw->mac.type) {
+	case txgbe_mac_raptor:
+		return true;
+	default:
+		return false;
+	}
+}
+
 /**
  *  txgbe_set_mac_type - Sets MAC type
  *  @hw: pointer to the HW structure
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index 1ed2892f61..7a45020824 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -85,6 +85,7 @@ void txgbe_set_mta(struct txgbe_hw *hw, u8 *mc_addr);
 s32 txgbe_negotiate_fc(struct txgbe_hw *hw, u32 adv_reg, u32 lp_reg,
 			u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
+bool txgbe_is_pf(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
 s32 txgbe_get_link_capabilities_raptor(struct txgbe_hw *hw,
diff --git a/drivers/net/txgbe/base/txgbe_mbx.h b/drivers/net/txgbe/base/txgbe_mbx.h
index 894ad6a2f7..31e2d51658 100644
--- a/drivers/net/txgbe/base/txgbe_mbx.h
+++ b/drivers/net/txgbe/base/txgbe_mbx.h
@@ -38,6 +38,7 @@ enum txgbe_pfvf_api_rev {
 	txgbe_mbox_api_12,	/* API version 1.2, linux/freebsd VF driver */
 	txgbe_mbox_api_13,	/* API version 1.3, linux/freebsd VF driver */
 	txgbe_mbox_api_20,	/* API version 2.0, solaris Phase1 VF driver */
+	txgbe_mbox_api_21,	/* API version 2.1 */
 	/* This value should always be last */
 	txgbe_mbox_api_unknown,	/* indicates that API version is not known */
 };
@@ -61,6 +62,9 @@ enum txgbe_pfvf_api_rev {
 #define TXGBE_VF_GET_RSS_KEY	0x0b /* get RSS key */
 #define TXGBE_VF_UPDATE_XCAST_MODE	0x0c
 
+/* mailbox API, version 2.1 VF requests */
+#define TXGBE_VF_SET_5TUPLE	0x20 /* VF request PF for 5-tuple filter */
+
 #define TXGBE_VF_BACKUP		0x8001 /* VF requests backup */
 
 /* mode choices for TXGBE_VF_UPDATE_XCAST_MODE */
@@ -71,6 +75,19 @@ enum txgbevf_xcast_modes {
 	TXGBEVF_XCAST_MODE_PROMISC,
 };
 
+enum txgbevf_5tuple_msg {
+	TXGBEVF_5T_REQ = 0,
+	TXGBEVF_5T_CMD,
+	TXGBEVF_5T_CTRL0,
+	TXGBEVF_5T_CTRL1,
+	TXGBEVF_5T_PORT,
+	TXGBEVF_5T_DA,
+	TXGBEVF_5T_SA,
+	TXGBEVF_5T_MAX /* must be last */
+};
+
+#define TXGBEVF_5T_ADD_SHIFT	31
+
 /* GET_QUEUES return data indices within the mailbox */
 #define TXGBE_VF_TX_QUEUES	1	/* number of Tx queues supported */
 #define TXGBE_VF_RX_QUEUES	2	/* number of Rx queues supported */
diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c
index a73502351e..8c731b4776 100644
--- a/drivers/net/txgbe/base/txgbe_vf.c
+++ b/drivers/net/txgbe/base/txgbe_vf.c
@@ -357,6 +357,7 @@ s32 txgbevf_update_xcast_mode(struct txgbe_hw *hw, int xcast_mode)
 			return TXGBE_ERR_FEATURE_NOT_SUPPORTED;
 		/* Fall through */
 	case txgbe_mbox_api_13:
+	case txgbe_mbox_api_21:
 		break;
 	default:
 		return TXGBE_ERR_FEATURE_NOT_SUPPORTED;
@@ -610,6 +611,7 @@ int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs,
 	case txgbe_mbox_api_11:
 	case txgbe_mbox_api_12:
 	case txgbe_mbox_api_13:
+	case txgbe_mbox_api_21:
 		break;
 	default:
 		return 0;
@@ -656,3 +658,30 @@ int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs,
 
 	return err;
 }
+
+int
+txgbevf_add_5tuple_filter(struct txgbe_hw *hw, u32 *msg, u16 index)
+{
+	if (hw->api_version < txgbe_mbox_api_21)
+		return TXGBE_ERR_FEATURE_NOT_SUPPORTED;
+
+	msg[TXGBEVF_5T_REQ] = TXGBE_VF_SET_5TUPLE;
+	msg[TXGBEVF_5T_CMD] = index;
+	msg[TXGBEVF_5T_CMD] |= 1 << TXGBEVF_5T_ADD_SHIFT;
+
+	return txgbevf_write_msg_read_ack(hw, msg, msg, TXGBEVF_5T_MAX);
+}
+
+int
+txgbevf_del_5tuple_filter(struct txgbe_hw *hw, u16 index)
+{
+	u32 msg[2] = {0, 0};
+
+	if (hw->api_version < txgbe_mbox_api_21)
+		return TXGBE_ERR_FEATURE_NOT_SUPPORTED;
+
+	msg[TXGBEVF_5T_REQ] = TXGBE_VF_SET_5TUPLE;
+	msg[TXGBEVF_5T_CMD] = index;
+
+	return txgbevf_write_msg_read_ack(hw, msg, msg, 2);
+}
diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h
index c3a90ab861..1fac1c7e32 100644
--- a/drivers/net/txgbe/base/txgbe_vf.h
+++ b/drivers/net/txgbe/base/txgbe_vf.h
@@ -58,5 +58,7 @@ s32 txgbevf_rlpml_set_vf(struct txgbe_hw *hw, u16 max_size);
 int txgbevf_negotiate_api_version(struct txgbe_hw *hw, int api);
 int txgbevf_get_queues(struct txgbe_hw *hw, unsigned int *num_tcs,
 		       unsigned int *default_tc);
+int txgbevf_add_5tuple_filter(struct txgbe_hw *hw, u32 *msg, u16 index);
+int txgbevf_del_5tuple_filter(struct txgbe_hw *hw, u16 index);
 
 #endif /* __TXGBE_VF_H__ */
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index ea9faba2c0..e5736bf387 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -826,7 +826,7 @@ eth_txgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
-static int txgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev)
+int txgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev)
 {
 	struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
 	struct txgbe_5tuple_filter *p_5tuple;
@@ -4236,7 +4236,10 @@ txgbe_add_5tuple_filter(struct rte_eth_dev *dev,
 		return -ENOSYS;
 	}
 
-	txgbe_inject_5tuple_filter(dev, filter);
+	if (txgbe_is_pf(TXGBE_DEV_HW(dev)))
+		txgbe_inject_5tuple_filter(dev, filter);
+	else
+		txgbevf_inject_5tuple_filter(dev, filter);
 
 	return 0;
 }
@@ -4261,6 +4264,11 @@ txgbe_remove_5tuple_filter(struct rte_eth_dev *dev,
 	TAILQ_REMOVE(&filter_info->fivetuple_list, filter, entries);
 	rte_free(filter);
 
+	if (!txgbe_is_pf(TXGBE_DEV_HW(dev))) {
+		txgbevf_remove_5tuple_filter(dev, index);
+		return;
+	}
+
 	wr32(hw, TXGBE_5TFDADDR(index), 0);
 	wr32(hw, TXGBE_5TFSADDR(index), 0);
 	wr32(hw, TXGBE_5TFPORT(index), 0);
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 302ea9f037..36d51fcbb8 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -526,6 +526,11 @@ int txgbe_add_del_ethertype_filter(struct rte_eth_dev *dev,
 int txgbe_syn_filter_set(struct rte_eth_dev *dev,
 			struct rte_eth_syn_filter *filter,
 			bool add);
+int txgbe_ntuple_filter_uninit(struct rte_eth_dev *eth_dev);
+
+int txgbevf_inject_5tuple_filter(struct rte_eth_dev *dev,
+			struct txgbe_5tuple_filter *filter);
+void txgbevf_remove_5tuple_filter(struct rte_eth_dev *dev, u16 index);
 
 /**
  * l2 tunnel configuration.
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index d075f9d232..c0d8aa15b2 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -129,6 +129,7 @@ txgbevf_negotiate_api(struct txgbe_hw *hw)
 
 	/* start with highest supported, proceed down */
 	static const int sup_ver[] = {
+		txgbe_mbox_api_21,
 		txgbe_mbox_api_13,
 		txgbe_mbox_api_12,
 		txgbe_mbox_api_11,
@@ -157,6 +158,59 @@ generate_random_mac_addr(struct rte_ether_addr *mac_addr)
 	memcpy(&mac_addr->addr_bytes[3], &random, 3);
 }
 
+int
+txgbevf_inject_5tuple_filter(struct rte_eth_dev *dev,
+			struct txgbe_5tuple_filter *filter)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t mask = TXGBE_5TFCTL0_MASK;
+	uint16_t index = filter->index;
+	uint32_t msg[TXGBEVF_5T_MAX];
+	int err;
+
+	memset(msg, 0, sizeof(msg));
+
+	/* 0 means compare */
+	mask &= ~TXGBE_5TFCTL0_MPOOL;
+	if (filter->filter_info.src_ip_mask == 0)
+		mask &= ~TXGBE_5TFCTL0_MSADDR;
+	if (filter->filter_info.dst_ip_mask == 0)
+		mask &= ~TXGBE_5TFCTL0_MDADDR;
+	if (filter->filter_info.src_port_mask == 0)
+		mask &= ~TXGBE_5TFCTL0_MSPORT;
+	if (filter->filter_info.dst_port_mask == 0)
+		mask &= ~TXGBE_5TFCTL0_MDPORT;
+	if (filter->filter_info.proto_mask == 0)
+		mask &= ~TXGBE_5TFCTL0_MPROTO;
+
+	msg[TXGBEVF_5T_CTRL0] = mask;
+	msg[TXGBEVF_5T_CTRL0] |= TXGBE_5TFCTL0_ENA;
+	msg[TXGBEVF_5T_CTRL0] |= TXGBE_5TFCTL0_PROTO(filter->filter_info.proto);
+	msg[TXGBEVF_5T_CTRL0] |= TXGBE_5TFCTL0_PRI(filter->filter_info.priority);
+	msg[TXGBEVF_5T_CTRL1] = TXGBE_5TFCTL1_QP(filter->queue);
+	msg[TXGBEVF_5T_PORT] = TXGBE_5TFPORT_DST(be_to_le16(filter->filter_info.dst_port));
+	msg[TXGBEVF_5T_PORT] |= TXGBE_5TFPORT_SRC(be_to_le16(filter->filter_info.src_port));
+	msg[TXGBEVF_5T_DA] = be_to_le32(filter->filter_info.dst_ip);
+	msg[TXGBEVF_5T_SA] = be_to_le32(filter->filter_info.src_ip);
+
+	err = txgbevf_add_5tuple_filter(hw, msg, index);
+	if (err)
+		PMD_DRV_LOG(ERR, "VF request PF to add 5tuple filters failed.");
+
+	return err;
+}
+
+void
+txgbevf_remove_5tuple_filter(struct rte_eth_dev *dev, u16 index)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	int err;
+
+	err = txgbevf_del_5tuple_filter(hw, index);
+	if (err)
+		PMD_DRV_LOG(ERR, "VF request PF to delete 5tuple filters failed.");
+}
+
 /*
  * Virtual Function device init
  */
@@ -173,6 +227,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
 	struct rte_ether_addr *perm_addr =
 		(struct rte_ether_addr *)hw->mac.perm_addr;
+	struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -308,6 +363,16 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	rte_intr_enable(intr_handle);
 	txgbevf_intr_enable(eth_dev);
 
+	/* initialize filter info */
+	memset(filter_info, 0,
+	       sizeof(struct txgbe_filter_info));
+
+	/* initialize 5tuple filter list */
+	TAILQ_INIT(&filter_info->fivetuple_list);
+
+	/* initialize flow filter lists */
+	txgbe_filterlist_init();
+
 	PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x mac.type=%s",
 		     eth_dev->data->port_id, pci_dev->id.vendor_id,
 		     pci_dev->id.device_id, "txgbe_mac_raptor_vf");
@@ -794,6 +859,12 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
 		rte_intr_callback_unregister(intr_handle,
 					     txgbevf_dev_interrupt_handler, dev);
 
+	/* Remove all ntuple filters of the device */
+	txgbe_ntuple_filter_uninit(dev);
+
+	/* clear all the filters list */
+	txgbe_filterlist_flush();
+
 	return ret;
 }
@@ -1341,6 +1412,14 @@ txgbevf_dev_interrupt_handler(void *param)
 	txgbevf_dev_interrupt_action(dev);
 }
 
+static int
+txgbevf_dev_flow_ops_get(__rte_unused struct rte_eth_dev *dev,
+			 const struct rte_flow_ops **ops)
+{
+	*ops = &txgbe_flow_ops;
+	return 0;
+}
+
 /*
  * dev_ops for virtual function, bare necessities for basic vf
  * operation have been implemented
@@ -1385,6 +1464,7 @@ static const struct eth_dev_ops txgbevf_eth_dev_ops = {
 	.rss_hash_update      = txgbe_dev_rss_hash_update,
 	.rss_hash_conf_get    = txgbe_dev_rss_hash_conf_get,
 	.tx_done_cleanup      = txgbe_dev_tx_done_cleanup,
+	.flow_ops_get         = txgbevf_dev_flow_ops_get,
 };
 
 RTE_PMD_REGISTER_PCI(net_txgbe_vf, rte_txgbevf_pmd);
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 5d2dd45368..0fc2cb1d3b 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -2768,6 +2768,11 @@ txgbe_flow_create(struct rte_eth_dev *dev,
 		goto out;
 	}
 
+	if (!txgbe_is_pf(TXGBE_DEV_HW(dev))) {
+		PMD_DRV_LOG(ERR, "Flow type not supported yet on VF.");
+		goto out;
+	}
+
 	memset(&ethertype_filter, 0, sizeof(struct rte_eth_ethertype_filter));
 	ret = txgbe_parse_ethertype_filter(dev, attr, pattern,
 				actions, &ethertype_filter, error);
@@ -3146,6 +3151,10 @@ txgbe_flow_flush(struct rte_eth_dev *dev,
 	int ret = 0;
 
 	txgbe_clear_all_ntuple_filter(dev);
+
+	if (!txgbe_is_pf(TXGBE_DEV_HW(dev)))
+		goto out;
+
 	txgbe_clear_all_ethertype_filter(dev);
 	txgbe_clear_syn_filter(dev);
 
@@ -3165,6 +3174,7 @@ txgbe_flow_flush(struct rte_eth_dev *dev,
 	txgbe_clear_rss_filter(dev);
 
+out:
 	txgbe_filterlist_flush();
 
 	return 0;

From patchwork Mon Jun 9 07:04:44 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154167
X-Patchwork-Delegate: stephen@networkplumber.org
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu
Subject: [PATCH v2 02/12] net/txgbe: refactor FDIR filter to improve functionality
Date: Mon, 9 Jun 2025 15:04:44 +0800
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>
References: <00DEAE896AFE0D2D+20250606080117.183198-1-jiawenwu@trustnetic.com>
 <20250609070454.223387-1-jiawenwu@trustnetic.com>
There were some defects in the original configuration for the flow
director filter. Make the following improvements:

1) Fix incorrect parsing to ntuple filter when the pattern is set like:
   flow create ... ipv4 / udp dst is ... / raw ... / end actions ... / end
2) Fix the flex offset base when item RAW is set with relative = 1, and
   convert the RAW pattern string to hex bytes to match the hardware
   identification.
3) Fix creating FDIR rules for VXLAN/GRE/NVGRE/GENEVE packets, so that
   they match the rules in the inner layers.
4) Support IPv6 perfect mode.
5) Add a packet type mask to match more types of packets if the pattern
   is default.
Signed-off-by: Jiawen Wu
---
 doc/guides/nics/features/txgbe.ini  |   2 +
 drivers/net/txgbe/base/txgbe_type.h |  20 +-
 drivers/net/txgbe/txgbe_ethdev.h    |   9 +-
 drivers/net/txgbe/txgbe_fdir.c      |  62 +-
 drivers/net/txgbe/txgbe_flow.c      | 847 ++++++++++++++++++++--------
 5 files changed, 671 insertions(+), 269 deletions(-)

diff --git a/doc/guides/nics/features/txgbe.ini b/doc/guides/nics/features/txgbe.ini
index be0af3dfad..20f7cb8db8 100644
--- a/doc/guides/nics/features/txgbe.ini
+++ b/doc/guides/nics/features/txgbe.ini
@@ -67,6 +67,8 @@ tcp = Y
 udp = Y
 vlan = P
 vxlan = Y
+geneve = Y
+gre = Y
 
 [rte_flow actions]
 drop = Y
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 4371876649..383438ea3c 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -88,8 +88,11 @@ enum {
 #define TXGBE_ATR_L4TYPE_UDP		0x1
 #define TXGBE_ATR_L4TYPE_TCP		0x2
 #define TXGBE_ATR_L4TYPE_SCTP		0x3
-#define TXGBE_ATR_TUNNEL_MASK		0x10
-#define TXGBE_ATR_TUNNEL_ANY		0x10
+#define TXGBE_ATR_TYPE_MASK_TUN		0x80
+#define TXGBE_ATR_TYPE_MASK_TUN_OUTIP	0x40
+#define TXGBE_ATR_TYPE_MASK_TUN_TYPE	0x20
+#define TXGBE_ATR_TYPE_MASK_L3P		0x10
+#define TXGBE_ATR_TYPE_MASK_L4P		0x08
 enum txgbe_atr_flow_type {
 	TXGBE_ATR_FLOW_TYPE_IPV4		= 0x0,
 	TXGBE_ATR_FLOW_TYPE_UDPV4		= 0x1,
@@ -99,14 +102,6 @@ enum txgbe_atr_flow_type {
 	TXGBE_ATR_FLOW_TYPE_UDPV6		= 0x5,
 	TXGBE_ATR_FLOW_TYPE_TCPV6		= 0x6,
 	TXGBE_ATR_FLOW_TYPE_SCTPV6		= 0x7,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV4	= 0x10,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV4	= 0x11,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV4	= 0x12,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV4	= 0x13,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_IPV6	= 0x14,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_UDPV6	= 0x15,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_TCPV6	= 0x16,
-	TXGBE_ATR_FLOW_TYPE_TUNNELED_SCTPV6	= 0x17,
 };
 
 /* Flow Director ATR input struct.
  */
@@ -116,11 +111,8 @@ struct txgbe_atr_input {
 	 *
 	 * vm_pool - 1 byte
 	 * flow_type - 1 byte
-	 * vlan_id - 2 bytes
+	 * pkt_type - 2 bytes
 	 * src_ip - 16 bytes
-	 * inner_mac - 6 bytes
-	 * cloud_mode - 2 bytes
-	 * tni_vni - 4 bytes
 	 * dst_ip - 16 bytes
 	 * src_port - 2 bytes
 	 * dst_port - 2 bytes
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 36d51fcbb8..c2d0950d2c 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -90,9 +90,7 @@ struct txgbe_hw_fdir_mask {
 	uint16_t src_port_mask;
 	uint16_t dst_port_mask;
 	uint16_t flex_bytes_mask;
-	uint8_t  mac_addr_byte_mask;
-	uint32_t tunnel_id_mask;
-	uint8_t  tunnel_type_mask;
+	uint8_t  pkt_type_mask; /* reversed mask for hw */
 };
 
 struct txgbe_fdir_filter {
@@ -116,11 +114,13 @@ struct txgbe_fdir_rule {
 	uint32_t soft_id; /* an unique value for this rule */
 	uint8_t queue; /* assigned rx queue */
 	uint8_t flex_bytes_offset;
+	bool flex_relative;
 };
 
 struct txgbe_hw_fdir_info {
 	struct txgbe_hw_fdir_mask mask;
 	uint8_t flex_bytes_offset;
+	bool flex_relative;
 	uint16_t collision;
 	uint16_t free;
 	uint16_t maxhash;
@@ -561,8 +561,9 @@ void txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
  */
 int txgbe_fdir_configure(struct rte_eth_dev *dev);
 int txgbe_fdir_set_input_mask(struct rte_eth_dev *dev);
+uint16_t txgbe_fdir_get_flex_base(struct txgbe_fdir_rule *rule);
 int txgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
-				uint16_t offset);
+				uint16_t offset, uint16_t flex_base);
 int txgbe_fdir_filter_program(struct rte_eth_dev *dev,
 			      struct txgbe_fdir_rule *rule,
 			      bool del, bool update);
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index f627ab681d..0efd43b59a 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -187,18 +187,12 @@ txgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	/*
-	 * Program the relevant mask registers. If src/dst_port or src/dst_addr
-	 * are zero, then assume a full mask for that field. Also assume that
-	 * a VLAN of 0 is unspecified, so mask that out as well. L4type
-	 * cannot be masked out in this implementation.
-	 */
-	if (info->mask.dst_port_mask == 0 && info->mask.src_port_mask == 0) {
-		/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
-		fdirm |= TXGBE_FDIRMSK_L4P;
-	}
+	/* use the L4 protocol mask for raw IPv4/IPv6 traffic */
+	if (info->mask.pkt_type_mask == 0 && info->mask.dst_port_mask == 0 &&
+	    info->mask.src_port_mask == 0)
+		info->mask.pkt_type_mask |= TXGBE_FDIRMSK_L4P;
 
-	/* TBD: don't support encapsulation yet */
+	fdirm |= info->mask.pkt_type_mask;
 
 	wr32(hw, TXGBE_FDIRMSK, fdirm);
 
 	/* store the TCP/UDP port masks */
@@ -216,15 +210,12 @@ txgbe_fdir_set_input_mask(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_FDIRSIP4MSK, ~info->mask.src_ipv4_mask);
 	wr32(hw, TXGBE_FDIRDIP4MSK, ~info->mask.dst_ipv4_mask);
 
-	if (mode == RTE_FDIR_MODE_SIGNATURE) {
-		/*
-		 * Store source and destination IPv6 masks (bit reversed)
-		 */
-		fdiripv6m = TXGBE_FDIRIP6MSK_DST(info->mask.dst_ipv6_mask) |
-			    TXGBE_FDIRIP6MSK_SRC(info->mask.src_ipv6_mask);
-
-		wr32(hw, TXGBE_FDIRIP6MSK, ~fdiripv6m);
-	}
+	/*
+	 * Store source and destination IPv6 masks (bit reversed)
+	 */
+	fdiripv6m = TXGBE_FDIRIP6MSK_DST(info->mask.dst_ipv6_mask) |
+		    TXGBE_FDIRIP6MSK_SRC(info->mask.src_ipv6_mask);
+	wr32(hw, TXGBE_FDIRIP6MSK, ~fdiripv6m);
 
 	return 0;
 }
@@ -258,9 +249,24 @@ txgbe_fdir_store_input_mask(struct rte_eth_dev *dev)
 	return 0;
 }
 
+uint16_t
+txgbe_fdir_get_flex_base(struct txgbe_fdir_rule *rule)
+{
+	if (!rule->flex_relative)
+		return TXGBE_FDIRFLEXCFG_BASE_MAC;
+
+	if (rule->input.flow_type & TXGBE_ATR_L4TYPE_MASK)
+		return TXGBE_FDIRFLEXCFG_BASE_PAY;
+
+	if (rule->input.flow_type & TXGBE_ATR_L3TYPE_MASK)
+		return TXGBE_FDIRFLEXCFG_BASE_L3;
+
+	return TXGBE_FDIRFLEXCFG_BASE_L2;
+}
+
 int
 txgbe_fdir_set_flexbytes_offset(struct rte_eth_dev *dev,
-				uint16_t offset)
+				uint16_t offset, uint16_t flex_base)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	int i;
 
 	for (i = 0; i < 64; i++) {
 		uint32_t flexreg, flex;
 		flexreg = rd32(hw, TXGBE_FDIRFLEXCFG(i / 4));
-		flex = TXGBE_FDIRFLEXCFG_BASE_MAC;
+		flex = flex_base;
 		flex |= TXGBE_FDIRFLEXCFG_OFST(offset / 2);
 		flexreg &= ~(TXGBE_FDIRFLEXCFG_ALL(~0UL, i % 4));
 		flexreg |= TXGBE_FDIRFLEXCFG_ALL(flex, i % 4);
@@ -633,6 +639,8 @@ fdir_write_perfect_filter(struct txgbe_hw *hw,
 	fdircmd |= TXGBE_FDIRPICMD_QP(queue);
 	fdircmd |= TXGBE_FDIRPICMD_POOL(input->vm_pool);
 
+	if (input->flow_type & TXGBE_ATR_L3TYPE_IPV6)
+		fdircmd |= TXGBE_FDIRPICMD_IP6;
 	wr32(hw, TXGBE_FDIRPICMD, fdircmd);
 
 	PMD_DRV_LOG(DEBUG, "Rx Queue=%x hash=%x", queue, fdirhash);
@@ -801,11 +809,6 @@ txgbe_fdir_filter_program(struct rte_eth_dev *dev,
 		is_perfect = TRUE;
 
 	if (is_perfect) {
-		if (rule->input.flow_type & TXGBE_ATR_L3TYPE_IPV6) {
-			PMD_DRV_LOG(ERR, "IPv6 is not supported in"
-				    " perfect mode!");
-			return -ENOTSUP;
-		}
 		fdirhash = atr_compute_perfect_hash(&rule->input,
 				TXGBE_DEV_FDIR_CONF(dev)->pballoc);
 		fdirhash |= TXGBE_FDIRPIHASH_IDX(rule->soft_id);
@@ -910,6 +913,11 @@ txgbe_fdir_flush(struct rte_eth_dev *dev)
 	info->add = 0;
 	info->remove = 0;
 
+	memset(&info->mask, 0, sizeof(struct txgbe_hw_fdir_mask));
+	info->mask_added = false;
+	info->flex_relative = false;
+	info->flex_bytes_offset = 0;
+
 	return ret;
 }
 
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index 0fc2cb1d3b..82d0599d9a 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -361,7 +361,7 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
 
 	if (item->type != RTE_FLOW_ITEM_TYPE_END &&
 	    (!item->spec && !item->mask)) {
-		goto action;
+		goto item_end;
 	}
 
 	/* get the TCP/UDP/SCTP info */
@@ -490,6 +490,7 @@ cons_parse_ntuple_filter(const struct rte_flow_attr *attr,
 		goto action;
 	}
 
+item_end:
 	/* check if the next not void item is END */
 	item = next_no_void_pattern(pattern, item);
 	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
@@ -1486,8 +1487,41 @@ static inline uint8_t signature_match(const struct rte_flow_item pattern[])
 	return 0;
 }
 
+static void
+txgbe_fdir_parse_flow_type(struct txgbe_atr_input *input, u8 ptid, bool tun)
+{
+	if (!tun)
+		ptid = TXGBE_PTID_PKT_IP;
+
+	switch (input->flow_type & TXGBE_ATR_L4TYPE_MASK) {
+	case TXGBE_ATR_L4TYPE_UDP:
+		ptid |= TXGBE_PTID_TYP_UDP;
+		break;
+	case TXGBE_ATR_L4TYPE_TCP:
+		ptid |= TXGBE_PTID_TYP_TCP;
+		break;
+	case TXGBE_ATR_L4TYPE_SCTP:
+		ptid |= TXGBE_PTID_TYP_SCTP;
+		break;
+	default:
+		break;
+	}
+
+	switch (input->flow_type & TXGBE_ATR_L3TYPE_MASK) {
+	case TXGBE_ATR_L3TYPE_IPV4:
+		break;
+	case TXGBE_ATR_L3TYPE_IPV6:
+		ptid |= TXGBE_PTID_PKT_IPV6;
+		break;
+	default:
+		break;
+	}
+
+	input->pkt_type = cpu_to_be16(ptid);
+}
+
 /**
- * Parse the rule to see if it is a IP or MAC VLAN flow director rule.
+ * Parse the rule to see if it is a IP flow director rule.
  * And get the flow director filter info BTW.
  * UDP/TCP/SCTP PATTERN:
  * The first not void item can be ETH or IPV4 or IPV6
@@ -1554,7 +1588,6 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 	const struct rte_flow_item_sctp *sctp_mask;
 	const struct rte_flow_item_raw *raw_mask;
 	const struct rte_flow_item_raw *raw_spec;
-	u32 ptype = 0;
 	uint8_t j;
 
 	if (!pattern) {
@@ -1584,6 +1617,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 	 */
 	memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 	memset(&rule->mask, 0, sizeof(struct txgbe_hw_fdir_mask));
+	rule->mask.pkt_type_mask = TXGBE_ATR_TYPE_MASK_L3P |
+				   TXGBE_ATR_TYPE_MASK_L4P;
+
 	memset(&rule->input, 0, sizeof(struct txgbe_atr_input));
 
 	/**
 	 * The first not void item should be
@@ -1686,7 +1722,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		}
 	} else {
 		if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 &&
-		    item->type != RTE_FLOW_ITEM_TYPE_VLAN) {
+		    item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
+		    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
+		    item->type != RTE_FLOW_ITEM_TYPE_RAW) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1694,6 +1732,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			return -rte_errno;
 		}
 	}
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN)
+		item = next_no_fuzzy_pattern(pattern, item);
 	}
 
 	/* Get the IPV4 info. */
@@ -1703,7 +1743,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV4;
-		ptype = txgbe_ptype_table[TXGBE_PT_IPV4];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -1715,31 +1755,26 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * Only care about src & dst addresses,
 		 * others should be masked.
 		 */
-		if (!item->mask) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-		rule->b_mask = TRUE;
-		ipv4_mask = item->mask;
-		if (ipv4_mask->hdr.version_ihl ||
-		    ipv4_mask->hdr.type_of_service ||
-		    ipv4_mask->hdr.total_length ||
-		    ipv4_mask->hdr.packet_id ||
-		    ipv4_mask->hdr.fragment_offset ||
-		    ipv4_mask->hdr.time_to_live ||
-		    ipv4_mask->hdr.next_proto_id ||
-		    ipv4_mask->hdr.hdr_checksum) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
+		if (item->mask) {
+			rule->b_mask = TRUE;
+			ipv4_mask = item->mask;
+			if (ipv4_mask->hdr.version_ihl ||
+			    ipv4_mask->hdr.type_of_service ||
+			    ipv4_mask->hdr.total_length ||
+			    ipv4_mask->hdr.packet_id ||
+			    ipv4_mask->hdr.fragment_offset ||
+			    ipv4_mask->hdr.time_to_live ||
+			    ipv4_mask->hdr.next_proto_id ||
+			    ipv4_mask->hdr.hdr_checksum) {
+				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+				rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					item, "Not supported by fdir filter");
+				return -rte_errno;
+			}
+			rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
+			rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
 		}
-		rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr;
-		rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr;
 
 		if (item->spec) {
 			rule->b_spec = TRUE;
@@ -1775,16 +1810,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV6;
-		ptype = txgbe_ptype_table[TXGBE_PT_IPV6];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P;
 
-		/**
-		 * 1. must signature match
-		 * 2. not support last
-		 * 3. mask must not null
-		 */
-		if (rule->mode != RTE_FDIR_MODE_SIGNATURE ||
-		    item->last ||
-		    !item->mask) {
+		if (item->last) {
 			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 			rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -1792,42 +1820,44 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			return -rte_errno;
 		}
 
-		rule->b_mask = TRUE;
-		ipv6_mask = item->mask;
-		if (ipv6_mask->hdr.vtc_flow ||
-		    ipv6_mask->hdr.payload_len ||
-		    ipv6_mask->hdr.proto ||
-		    ipv6_mask->hdr.hop_limits) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
-
-		/* check src addr mask */
-		for (j = 0; j < 16; j++) {
-			if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) {
-				rule->mask.src_ipv6_mask |= 1 << j;
-			} else if (ipv6_mask->hdr.src_addr.a[j] != 0) {
+		if (item->mask) {
+			rule->b_mask = TRUE;
+			ipv6_mask = item->mask;
+			if (ipv6_mask->hdr.vtc_flow ||
+			    ipv6_mask->hdr.payload_len ||
+			    ipv6_mask->hdr.proto ||
+			    ipv6_mask->hdr.hop_limits) {
 				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
 				rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_ITEM,
 					item, "Not supported by fdir filter");
 				return -rte_errno;
 			}
-		}
-		/* check dst addr mask */
-		for (j = 0; j < 16; j++) {
-			if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) {
-				rule->mask.dst_ipv6_mask |= 1 << j;
-			} else if (ipv6_mask->hdr.dst_addr.a[j] != 0) {
-				memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-				rte_flow_error_set(error, EINVAL,
-					RTE_FLOW_ERROR_TYPE_ITEM,
-					item, "Not supported by fdir filter");
-				return -rte_errno;
+			/* check src addr mask */
+			for (j = 0; j < 16; j++) {
+				if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) {
+					rule->mask.src_ipv6_mask |= 1 << j;
+				} else if (ipv6_mask->hdr.src_addr.a[j] != 0) {
+					memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
+			}
+
+			/* check dst addr mask */
+			for (j = 0; j < 16; j++) {
+				if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) {
+					rule->mask.dst_ipv6_mask |= 1 << j;
+				} else if (ipv6_mask->hdr.dst_addr.a[j] != 0) {
+					memset(rule, 0, sizeof(struct txgbe_fdir_rule));
+					rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						item, "Not supported by fdir filter");
+					return -rte_errno;
+				}
 			}
 		}
@@ -1865,10 +1895,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type |= TXGBE_ATR_L4TYPE_TCP;
-		if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV6_TCP];
-		else
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV4_TCP];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -1932,10 +1960,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type |= TXGBE_ATR_L4TYPE_UDP;
-		if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV6_UDP];
-		else
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV4_UDP];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -1994,10 +2020,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 		 * as we must have a flow type.
 		 */
 		rule->input.flow_type |= TXGBE_ATR_L4TYPE_SCTP;
-		if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6)
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV6_SCTP];
-		else
-			ptype = txgbe_ptype_table[TXGBE_PT_IPV4_SCTP];
+		rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P;
+
 		/*Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -2038,19 +2062,6 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			rule->input.dst_port = sctp_spec->hdr.dst_port;
 		}
 
-		/* others even sctp port is not supported */
-		sctp_mask = item->mask;
-		if (sctp_mask &&
-		    (sctp_mask->hdr.src_port ||
-		     sctp_mask->hdr.dst_port ||
-		     sctp_mask->hdr.tag ||
-		     sctp_mask->hdr.cksum)) {
-			memset(rule, 0, sizeof(struct txgbe_fdir_rule));
-			rte_flow_error_set(error, EINVAL,
-				RTE_FLOW_ERROR_TYPE_ITEM,
-				item, "Not supported by fdir filter");
-			return -rte_errno;
-		}
 
 		item = next_no_fuzzy_pattern(pattern, item);
 		if (item->type != RTE_FLOW_ITEM_TYPE_RAW &&
@@ -2065,6 +2076,8 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 
 	/* Get the flex byte info */
 	if (item->type == RTE_FLOW_ITEM_TYPE_RAW) {
+		uint16_t pattern = 0;
+
 		/* Not supported last point for range*/
 		if (item->last) {
 			rte_flow_error_set(error, EINVAL,
@@ -2081,6 +2094,7 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			return -rte_errno;
 		}
 
+		rule->b_mask = TRUE;
 		raw_mask = item->mask;
 
 		/* check mask */
@@ -2097,19 +2111,21 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused,
 			return -rte_errno;
 		}
 
+		rule->b_spec = TRUE;
 		raw_spec = item->spec;
 
 		/* check spec */
-		if (raw_spec->relative != 0 ||
-		    raw_spec->search != 0 ||
+		if (raw_spec->search != 0 ||
 		    raw_spec->reserved != 0 ||
 		    raw_spec->offset > TXGBE_MAX_FLX_SOURCE_OFF ||
 		    raw_spec->offset % 2 ||
 		    raw_spec->limit != 0 ||
-		    raw_spec->length != 2 ||
+		    raw_spec->length != 4 ||
 		    /* pattern can't be 0xffff */
 		    (raw_spec->pattern[0] == 0xff &&
-		     raw_spec->pattern[1] ==
0xff && + raw_spec->pattern[2] == 0xff && + raw_spec->pattern[3] == 0xff)) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2119,7 +2135,9 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused, /* check pattern mask */ if (raw_mask->pattern[0] != 0xff || - raw_mask->pattern[1] != 0xff) { + raw_mask->pattern[1] != 0xff || + raw_mask->pattern[2] != 0xff || + raw_mask->pattern[3] != 0xff) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2128,10 +2146,19 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused, } rule->mask.flex_bytes_mask = 0xffff; - rule->input.flex_bytes = - (((uint16_t)raw_spec->pattern[1]) << 8) | - raw_spec->pattern[0]; + /* Convert pattern string to hex bytes */ + if (sscanf((const char *)raw_spec->pattern, "%hx", &pattern) != 1) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Failed to parse raw pattern"); + return -rte_errno; + } + rule->input.flex_bytes = (pattern & 0x00FF) << 8; + rule->input.flex_bytes |= (pattern & 0xFF00) >> 8; + rule->flex_bytes_offset = raw_spec->offset; + rule->flex_relative = raw_spec->relative; } if (item->type != RTE_FLOW_ITEM_TYPE_END) { @@ -2146,57 +2173,35 @@ txgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev __rte_unused, } } - rule->input.pkt_type = cpu_to_be16(txgbe_encode_ptype(ptype)); - - if (rule->input.flow_type & TXGBE_ATR_FLOW_TYPE_IPV6) { - if (rule->input.flow_type & TXGBE_ATR_L4TYPE_MASK) - rule->input.pkt_type &= 0xFFFF; - else - rule->input.pkt_type &= 0xF8FF; - - rule->input.flow_type &= TXGBE_ATR_L3TYPE_MASK | - TXGBE_ATR_L4TYPE_MASK; - } + txgbe_fdir_parse_flow_type(&rule->input, 0, false); return txgbe_parse_fdir_act_attr(attr, actions, rule, error); } /** - * Parse the rule to see if it is a VxLAN or NVGRE flow director rule. 
+ * Parse the rule to see if it is a IP tunnel flow director rule. * And get the flow director filter info BTW. - * VxLAN PATTERN: - * The first not void item must be ETH. - * The second not void item must be IPV4/ IPV6. - * The third not void item must be NVGRE. - * The next not void item must be END. - * NVGRE PATTERN: - * The first not void item must be ETH. - * The second not void item must be IPV4/ IPV6. - * The third not void item must be NVGRE. + * PATTERN: + * The first not void item can be ETH or IPV4 or IPV6 or UDP or tunnel type. + * The second not void item must be IPV4 or IPV6 if the first one is ETH. + * The next not void item could be UDP or tunnel type. + * The next not void item could be a certain inner layer. * The next not void item must be END. * ACTION: - * The first not void action should be QUEUE or DROP. - * The second not void optional action should be MARK, - * mark_id is a uint32_t number. + * The first not void action should be QUEUE. * The next not void action should be END. - * VxLAN pattern example: + * pattern example: * ITEM Spec Mask * ETH NULL NULL - * IPV4/IPV6 NULL NULL + * IPV4 NULL NULL * UDP NULL NULL - * VxLAN vni{0x00, 0x32, 0x54} {0xFF, 0xFF, 0xFF} - * MAC VLAN tci 0x2016 0xEFFF - * END - * NEGRV pattern example: - * ITEM Spec Mask + * VXLAN NULL NULL * ETH NULL NULL - * IPV4/IPV6 NULL NULL - * NVGRE protocol 0x6558 0xFFFF - * tni{0x00, 0x32, 0x54} {0xFF, 0xFF, 0xFF} - * MAC VLAN tci 0x2016 0xEFFF + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * UDP/TCP/SCTP src_port 80 0xFFFF + * dst_port 80 0xFFFF * END - * other members in mask and spec should set to 0x00. - * item->last should be NULL. 
*/ static int txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, @@ -2207,6 +2212,17 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, { const struct rte_flow_item *item; const struct rte_flow_item_eth *eth_mask; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_ipv6 *ipv6_spec; + const struct rte_flow_item_ipv6 *ipv6_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + u8 ptid = 0; uint32_t j; if (!pattern) { @@ -2235,12 +2251,14 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, * value. So, we need not do anything for the not provided fields later. */ memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - memset(&rule->mask, 0xFF, sizeof(struct txgbe_hw_fdir_mask)); - rule->mask.vlan_tci_mask = 0; + memset(&rule->mask, 0, sizeof(struct txgbe_hw_fdir_mask)); + rule->mask.pkt_type_mask = TXGBE_ATR_TYPE_MASK_TUN_OUTIP | + TXGBE_ATR_TYPE_MASK_L3P | + TXGBE_ATR_TYPE_MASK_L4P; /** * The first not void item should be - * MAC or IPv4 or IPv6 or UDP or VxLAN. + * MAC or IPv4 or IPv6 or UDP or tunnel. 
*/ item = next_no_void_pattern(pattern, NULL); if (item->type != RTE_FLOW_ITEM_TYPE_ETH && @@ -2248,7 +2266,9 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, item->type != RTE_FLOW_ITEM_TYPE_IPV6 && item->type != RTE_FLOW_ITEM_TYPE_UDP && item->type != RTE_FLOW_ITEM_TYPE_VXLAN && - item->type != RTE_FLOW_ITEM_TYPE_NVGRE) { + item->type != RTE_FLOW_ITEM_TYPE_NVGRE && + item->type != RTE_FLOW_ITEM_TYPE_GRE && + item->type != RTE_FLOW_ITEM_TYPE_GENEVE) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2256,7 +2276,8 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, return -rte_errno; } - rule->mode = RTE_FDIR_MODE_PERFECT_TUNNEL; + rule->mode = RTE_FDIR_MODE_PERFECT; + ptid = TXGBE_PTID_PKT_TUN; /* Skip MAC. */ if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { @@ -2278,6 +2299,8 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* Check if the next not void item is IPv4 or IPv6. */ item = next_no_void_pattern(pattern, item); + if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) + item = next_no_fuzzy_pattern(pattern, item); if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 && item->type != RTE_FLOW_ITEM_TYPE_IPV6) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); @@ -2291,6 +2314,8 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* Skip IP. */ if (item->type == RTE_FLOW_ITEM_TYPE_IPV4 || item->type == RTE_FLOW_ITEM_TYPE_IPV6) { + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_TUN_OUTIP; + /* Only used to describe the protocol stack. */ if (item->spec || item->mask) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); @@ -2307,10 +2332,17 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, return -rte_errno; } - /* Check if the next not void item is UDP or NVGRE. 
*/ + if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) + ptid |= TXGBE_PTID_TUN_IPV6; + item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_UDP && - item->type != RTE_FLOW_ITEM_TYPE_NVGRE) { + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 && + item->type != RTE_FLOW_ITEM_TYPE_IPV6 && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_VXLAN && + item->type != RTE_FLOW_ITEM_TYPE_GRE && + item->type != RTE_FLOW_ITEM_TYPE_NVGRE && + item->type != RTE_FLOW_ITEM_TYPE_GENEVE) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2321,6 +2353,31 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, /* Skip UDP. */ if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + + /* Check if the next not void item is VxLAN or GENEVE. */ + item = next_no_void_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN && + item->type != RTE_FLOW_ITEM_TYPE_GENEVE) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + + /* Skip tunnel. */ + if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN || + item->type == RTE_FLOW_ITEM_TYPE_GRE || + item->type == RTE_FLOW_ITEM_TYPE_NVGRE || + item->type == RTE_FLOW_ITEM_TYPE_GENEVE) { /* Only used to describe the protocol stack. */ if (item->spec || item->mask) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); @@ -2337,9 +2394,15 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, return -rte_errno; } - /* Check if the next not void item is VxLAN. 
*/ + if (item->type == RTE_FLOW_ITEM_TYPE_GRE) + ptid |= TXGBE_PTID_TUN_EIG; + else + ptid |= TXGBE_PTID_TUN_EIGM; + item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) { + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4 && + item->type != RTE_FLOW_ITEM_TYPE_IPV6) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, @@ -2348,100 +2411,421 @@ txgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr, } } - /* check if the next not void item is MAC */ - item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_ETH) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; - } + /* Get the MAC info. */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /** + * Only support vlan and dst MAC address, + * others should be masked. + */ + if (item->spec && !item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } - /** - * Only support vlan and dst MAC address, - * others should be masked. - */ + if (item->mask) { + rule->b_mask = TRUE; + eth_mask = item->mask; - if (!item->mask) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; + /* Ether type should be masked. */ + if (eth_mask->hdr.ether_type) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + + /** + * src MAC address must be masked, + * and don't support dst MAC address mask. 
+ */ + for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { + if (eth_mask->hdr.src_addr.addr_bytes[j] || + eth_mask->hdr.dst_addr.addr_bytes[j] != 0xFF) { + memset(rule, 0, + sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + + /* When no VLAN, considered as full mask. */ + rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF); + } + + item = next_no_fuzzy_pattern(pattern, item); + if (rule->mask.vlan_tci_mask) { + if (item->type != RTE_FLOW_ITEM_TYPE_VLAN) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } else { + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4 && + item->type != RTE_FLOW_ITEM_TYPE_IPV6 && + item->type != RTE_FLOW_ITEM_TYPE_VLAN) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) { + ptid |= TXGBE_PTID_TUN_EIGMV; + item = next_no_fuzzy_pattern(pattern, item); + } } - /*Not supported last point for range*/ - if (item->last) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - item, "Not supported last point for range"); - return -rte_errno; + + /* Get the IPV4 info. */ + if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV4; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P; + + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + /** + * Only care about src & dst addresses, + * others should be masked. 
+ */ + if (item->spec && !item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + if (item->mask) { + rule->b_mask = TRUE; + ipv4_mask = item->mask; + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.next_proto_id || + ipv4_mask->hdr.hdr_checksum) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->mask.dst_ipv4_mask = ipv4_mask->hdr.dst_addr; + rule->mask.src_ipv4_mask = ipv4_mask->hdr.src_addr; + } + if (item->spec) { + rule->b_spec = TRUE; + ipv4_spec = item->spec; + rule->input.dst_ip[0] = + ipv4_spec->hdr.dst_addr; + rule->input.src_ip[0] = + ipv4_spec->hdr.src_addr; + } + + /** + * Check if the next not void item is + * TCP or UDP or SCTP or END. + */ + item = next_no_fuzzy_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP && + item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - rule->b_mask = TRUE; - eth_mask = item->mask; - /* Ether type should be masked. */ - if (eth_mask->hdr.ether_type) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; + /* Get the IPV6 info. */ + if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. 
+ */ + rule->input.flow_type = TXGBE_ATR_FLOW_TYPE_IPV6; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L3P; + + if (item->last) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + if (item->spec && !item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + if (item->mask) { + rule->b_mask = TRUE; + ipv6_mask = item->mask; + if (ipv6_mask->hdr.vtc_flow || + ipv6_mask->hdr.payload_len || + ipv6_mask->hdr.proto || + ipv6_mask->hdr.hop_limits) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + + /* check src addr mask */ + for (j = 0; j < 16; j++) { + if (ipv6_mask->hdr.src_addr.a[j] == UINT8_MAX) { + rule->mask.src_ipv6_mask |= 1 << j; + } else if (ipv6_mask->hdr.src_addr.a[j] != 0) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + + /* check dst addr mask */ + for (j = 0; j < 16; j++) { + if (ipv6_mask->hdr.dst_addr.a[j] == UINT8_MAX) { + rule->mask.dst_ipv6_mask |= 1 << j; + } else if (ipv6_mask->hdr.dst_addr.a[j] != 0) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + } + } + if (item->spec) { + rule->b_spec = TRUE; + ipv6_spec = item->spec; + rte_memcpy(rule->input.src_ip, + &ipv6_spec->hdr.src_addr, 16); + rte_memcpy(rule->input.dst_ip, + &ipv6_spec->hdr.dst_addr, 16); + } + + /** + * Check if the next not void item is + * TCP or UDP or SCTP or END. 
+ */ + item = next_no_fuzzy_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP && + item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - /* src MAC address should be masked. */ - for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) { - if (eth_mask->hdr.src_addr.addr_bytes[j]) { - memset(rule, 0, - sizeof(struct txgbe_fdir_rule)); + /* Get the TCP info. */ + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type |= TXGBE_ATR_L4TYPE_TCP; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P; + + /*Not supported last point for range*/ + if (item->last) { rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + /** + * Only care about src & dst ports, + * others should be masked. 
+ */ + if (!item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->b_mask = TRUE; + tcp_mask = item->mask; + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.tcp_flags || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); return -rte_errno; } + rule->mask.src_port_mask = tcp_mask->hdr.src_port; + rule->mask.dst_port_mask = tcp_mask->hdr.dst_port; + + if (item->spec) { + rule->b_spec = TRUE; + tcp_spec = item->spec; + rule->input.src_port = + tcp_spec->hdr.src_port; + rule->input.dst_port = + tcp_spec->hdr.dst_port; + } } - rule->mask.mac_addr_byte_mask = 0; - for (j = 0; j < ETH_ADDR_LEN; j++) { - /* It's a per byte mask. */ - if (eth_mask->hdr.dst_addr.addr_bytes[j] == 0xFF) { - rule->mask.mac_addr_byte_mask |= 0x1 << j; - } else if (eth_mask->hdr.dst_addr.addr_bytes[j]) { + + /* Get the UDP info */ + if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type |= TXGBE_ATR_L4TYPE_UDP; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P; + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + /** + * Only care about src & dst ports, + * others should be masked. 
+ */ + if (!item->mask) { memset(rule, 0, sizeof(struct txgbe_fdir_rule)); rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); return -rte_errno; } + rule->b_mask = TRUE; + udp_mask = item->mask; + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->mask.src_port_mask = udp_mask->hdr.src_port; + rule->mask.dst_port_mask = udp_mask->hdr.dst_port; + + if (item->spec) { + rule->b_spec = TRUE; + udp_spec = item->spec; + rule->input.src_port = + udp_spec->hdr.src_port; + rule->input.dst_port = + udp_spec->hdr.dst_port; + } } - /* When no vlan, considered as full mask. */ - rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF); + /* Get the SCTP info */ + if (item->type == RTE_FLOW_ITEM_TYPE_SCTP) { + /** + * Set the flow type even if there's no content + * as we must have a flow type. + */ + rule->input.flow_type |= TXGBE_ATR_L4TYPE_SCTP; + rule->mask.pkt_type_mask &= ~TXGBE_ATR_TYPE_MASK_L4P; - /** - * Check if the next not void item is vlan or ipv4. - * IPv6 is not supported. - */ - item = next_no_void_pattern(pattern, item); - if (item->type != RTE_FLOW_ITEM_TYPE_VLAN && - item->type != RTE_FLOW_ITEM_TYPE_IPV4) { - memset(rule, 0, sizeof(struct txgbe_fdir_rule)); - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ITEM, - item, "Not supported by fdir filter"); - return -rte_errno; + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + } + + /** + * Only care about src & dst ports, + * others should be masked. 
+ */ + if (!item->mask) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->b_mask = TRUE; + sctp_mask = item->mask; + if (sctp_mask->hdr.tag || sctp_mask->hdr.cksum) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } + rule->mask.src_port_mask = sctp_mask->hdr.src_port; + rule->mask.dst_port_mask = sctp_mask->hdr.dst_port; + + if (item->spec) { + rule->b_spec = TRUE; + sctp_spec = item->spec; + rule->input.src_port = + sctp_spec->hdr.src_port; + rule->input.dst_port = + sctp_spec->hdr.dst_port; + } + /* others even sctp port is not supported */ + sctp_mask = item->mask; + if (sctp_mask && + (sctp_mask->hdr.src_port || + sctp_mask->hdr.dst_port || + sctp_mask->hdr.tag || + sctp_mask->hdr.cksum)) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - /*Not supported last point for range*/ - if (item->last) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - item, "Not supported last point for range"); - return -rte_errno; + + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + /* check if the next not void item is END */ + item = next_no_fuzzy_pattern(pattern, item); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(rule, 0, sizeof(struct txgbe_fdir_rule)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by fdir filter"); + return -rte_errno; + } } - /** - * If the tags is 0, it means don't care about the VLAN. - * Do nothing. 
-	 */
+	txgbe_fdir_parse_flow_type(&rule->input, ptid, true);

 	return txgbe_parse_fdir_act_attr(attr, actions, rule, error);
 }

@@ -2837,11 +3221,19 @@ txgbe_flow_create(struct rte_eth_dev *dev,
 				sizeof(struct txgbe_hw_fdir_mask));
 			fdir_info->flex_bytes_offset =
 				fdir_rule.flex_bytes_offset;
+			fdir_info->flex_relative = fdir_rule.flex_relative;
+
+			if (fdir_rule.mask.flex_bytes_mask) {
+				uint16_t flex_base;

-			if (fdir_rule.mask.flex_bytes_mask)
+				flex_base = txgbe_fdir_get_flex_base(&fdir_rule);
 				txgbe_fdir_set_flexbytes_offset(dev,
-						fdir_rule.flex_bytes_offset);
+						fdir_rule.flex_bytes_offset,
+						flex_base);
+			}
+			fdir_info->mask.pkt_type_mask =
+				fdir_rule.mask.pkt_type_mask;
 			ret = txgbe_fdir_set_input_mask(dev);
 			if (ret)
 				goto out;
@@ -2862,7 +3254,9 @@ txgbe_flow_create(struct rte_eth_dev *dev,
 			}
 			if (fdir_info->flex_bytes_offset !=
-					fdir_rule.flex_bytes_offset)
+					fdir_rule.flex_bytes_offset ||
+					fdir_info->flex_relative !=
+					fdir_rule.flex_relative)
 				goto out;
 		}
 	}
@@ -3090,8 +3484,13 @@ txgbe_flow_destroy(struct rte_eth_dev *dev,
 			TAILQ_REMOVE(&filter_fdir_list,
 				fdir_rule_ptr, entries);
 			rte_free(fdir_rule_ptr);
-			if (TAILQ_EMPTY(&filter_fdir_list))
+			if (TAILQ_EMPTY(&filter_fdir_list)) {
+				memset(&fdir_info->mask, 0,
+					sizeof(struct txgbe_hw_fdir_mask));
 				fdir_info->mask_added = false;
+				fdir_info->flex_relative = false;
+				fdir_info->flex_bytes_offset = 0;
+			}
 		}
 		break;
 	case RTE_ETH_FILTER_L2_TUNNEL:

From patchwork Mon Jun 9 07:04:45 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154168
X-Patchwork-Delegate: stephen@networkplumber.org
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu, stable@dpdk.org
Subject: [PATCH v2 03/12] net/txgbe: fix reserved extra FDIR headroom
Date: Mon, 9 Jun 2025 15:04:45 +0800
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>
References: <00DEAE896AFE0D2D+20250606080117.183198-1-jiawenwu@trustnetic.com> <20250609070454.223387-1-jiawenwu@trustnetic.com>
Remove the redundant 256KB FDIR headroom reservation. The FDIR headroom
is already allocated in txgbe_fdir_configure() when FDIR is enabled, so
the second reservation resulted in 256KB less available RX packet buffer
than the theoretical size.

Fixes: 8bdc7882f376 ("net/txgbe: support DCB")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/base/txgbe_hw.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index ae2ad87c83..76b9ee3c0a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -2106,9 +2106,7 @@ void txgbe_set_pba(struct txgbe_hw *hw, int num_pb, u32 headroom,
 	u32 rxpktsize, txpktsize, txpbthresh;

 	UNREFERENCED_PARAMETER(hw);
-
-	/* Reserve headroom */
-	pbsize -= headroom;
+	UNREFERENCED_PARAMETER(headroom);

 	if (!num_pb)
 		num_pb = 1;

From patchwork Mon Jun 9 07:04:46 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154169
X-Patchwork-Delegate: stephen@networkplumber.org
From patchwork Mon Jun 9 07:04:46 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154169
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu
Subject: [PATCH v2 04/12] net/txgbe: support RSS offload for SCTP port
Date: Mon, 9 Jun 2025 15:04:46 +0800
Message-ID: <76197192193F911C+20250609070454.223387-5-jiawenwu@trustnetic.com>
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>

Add support for IPv4/IPv6 SCTP RSS offload.

Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/base/txgbe_regs.h |  4 ++++
 drivers/net/txgbe/txgbe_ethdev.h    |  2 ++
 drivers/net/txgbe/txgbe_rxtx.c      | 16 ++++++++++++++++
 3 files changed, 22 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_regs.h b/drivers/net/txgbe/base/txgbe_regs.h
index 7a9ba6976f..346c23b5da 100644
--- a/drivers/net/txgbe/base/txgbe_regs.h
+++ b/drivers/net/txgbe/base/txgbe_regs.h
@@ -580,6 +580,8 @@
 #define TXGBE_RACTL_RSSMASK     MS(16, 0xFFFF)
 #define TXGBE_RACTL_RSSIPV4TCP  MS(16, 0x1)
 #define TXGBE_RACTL_RSSIPV4     MS(17, 0x1)
+#define TXGBE_RACTL_RSSIPV4SCTP MS(18, 0x1)
+#define TXGBE_RACTL_RSSIPV6SCTP MS(19, 0x1)
 #define TXGBE_RACTL_RSSIPV6     MS(20, 0x1)
 #define TXGBE_RACTL_RSSIPV6TCP  MS(21, 0x1)
 #define TXGBE_RACTL_RSSIPV4UDP  MS(22, 0x1)
@@ -1287,6 +1289,8 @@ enum txgbe_5tuple_protocol {
 #define TXGBE_VFPLCFG_RSSMASK     MS(16, 0xFF)
 #define TXGBE_VFPLCFG_RSSIPV4TCP  MS(16, 0x1)
 #define TXGBE_VFPLCFG_RSSIPV4     MS(17, 0x1)
+#define TXGBE_VFPLCFG_RSSIPV4SCTP MS(18, 0x1)
+#define TXGBE_VFPLCFG_RSSIPV6SCTP MS(19, 0x1)
 #define TXGBE_VFPLCFG_RSSIPV6     MS(20, 0x1)
 #define TXGBE_VFPLCFG_RSSIPV6TCP  MS(21, 0x1)
 #define TXGBE_VFPLCFG_RSSIPV4UDP  MS(22, 0x1)
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index c2d0950d2c..9295d8fbd0 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -65,9 +65,11 @@
 	RTE_ETH_RSS_IPV4 | \
 	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
 	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
 	RTE_ETH_RSS_IPV6 | \
 	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 	RTE_ETH_RSS_IPV6_EX | \
 	RTE_ETH_RSS_IPV6_TCP_EX | \
 	RTE_ETH_RSS_IPV6_UDP_EX)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 4e4b78fb43..a85d417ff6 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -3090,6 +3090,10 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
 	    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+		mrqc |= TXGBE_VFPLCFG_RSSIPV4SCTP;
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+		mrqc |= TXGBE_VFPLCFG_RSSIPV6SCTP;
 	if (rss_hf)
 		mrqc |= TXGBE_VFPLCFG_RSSENA;
@@ -3120,6 +3124,10 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
 	    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= TXGBE_RACTL_RSSIPV6UDP;
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+		mrqc |= TXGBE_RACTL_RSSIPV4SCTP;
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+		mrqc |= TXGBE_RACTL_RSSIPV6SCTP;
 	if (rss_hf)
 		mrqc |= TXGBE_RACTL_RSSENA;
@@ -3173,6 +3181,10 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_IPV6_UDP_EX;
+	if (mrqc & TXGBE_VFPLCFG_RSSIPV4SCTP)
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
+	if (mrqc & TXGBE_VFPLCFG_RSSIPV6SCTP)
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 	if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
 		rss_hf = 0;
 } else {
@@ -3192,6 +3204,10 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_IPV6_UDP_EX;
+	if (mrqc & TXGBE_RACTL_RSSIPV4SCTP)
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
+	if (mrqc & TXGBE_RACTL_RSSIPV6SCTP)
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 	if (!(mrqc & TXGBE_RACTL_RSSENA))
 		rss_hf = 0;
 }
From patchwork Mon Jun 9 07:04:47 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154170
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu
Subject: [PATCH v2 05/12] net/ngbe: support RSS offload for SCTP port
Date: Mon, 9 Jun 2025 15:04:47 +0800
Message-ID: <8B79DE012B805E6A+20250609070454.223387-6-jiawenwu@trustnetic.com>
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>

Add support for IPv4/IPv6 SCTP RSS offload.

Signed-off-by: Jiawen Wu
---
 drivers/net/ngbe/base/ngbe_regs.h | 2 ++
 drivers/net/ngbe/ngbe_ethdev.h    | 2 ++
 drivers/net/ngbe/ngbe_rxtx.c      | 8 ++++++++
 3 files changed, 12 insertions(+)

diff --git a/drivers/net/ngbe/base/ngbe_regs.h b/drivers/net/ngbe/base/ngbe_regs.h
index b1295280a7..3c123049b7 100644
--- a/drivers/net/ngbe/base/ngbe_regs.h
+++ b/drivers/net/ngbe/base/ngbe_regs.h
@@ -452,6 +452,8 @@
 #define NGBE_RACTL_RSSMASK     MS(16, 0xFFFF)
 #define NGBE_RACTL_RSSIPV4TCP  MS(16, 0x1)
 #define NGBE_RACTL_RSSIPV4     MS(17, 0x1)
+#define NGBE_RACTL_RSSIPV4SCTP MS(18, 0x1)
+#define NGBE_RACTL_RSSIPV6SCTP MS(19, 0x1)
 #define NGBE_RACTL_RSSIPV6     MS(20, 0x1)
 #define NGBE_RACTL_RSSIPV6TCP  MS(21, 0x1)
 #define NGBE_RACTL_RSSIPV4UDP  MS(22, 0x1)
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 37c6459f51..faff57ef34 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -40,9 +40,11 @@
 	RTE_ETH_RSS_IPV4 | \
 	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
 	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
 	RTE_ETH_RSS_IPV6 | \
 	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 	RTE_ETH_RSS_IPV6_EX | \
 	RTE_ETH_RSS_IPV6_TCP_EX | \
 	RTE_ETH_RSS_IPV6_UDP_EX)
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index a372bf2963..3dd268e5bc 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -2652,6 +2652,10 @@ ngbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
 	    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= NGBE_RACTL_RSSIPV6UDP;
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+		mrqc |= NGBE_RACTL_RSSIPV4SCTP;
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+		mrqc |= NGBE_RACTL_RSSIPV6SCTP;
 	if (rss_hf)
 		mrqc |= NGBE_RACTL_RSSENA;
@@ -2704,6 +2708,10 @@ ngbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (mrqc & NGBE_RACTL_RSSIPV6UDP)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_IPV6_UDP_EX;
+	if (mrqc & NGBE_RACTL_RSSIPV4SCTP)
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
+	if (mrqc & NGBE_RACTL_RSSIPV6SCTP)
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 	if (!(mrqc & NGBE_RACTL_RSSENA))
 		rss_hf = 0;
From patchwork Mon Jun 9 07:04:48 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154171
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu, stable@dpdk.org
Subject: [PATCH v2 06/12] net/txgbe: fix MAC control frame forwarding
Date: Mon, 9 Jun 2025 15:04:48 +0800
Message-ID: <099358DAE8B5A930+20250609070454.223387-7-jiawenwu@trustnetic.com>
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>

The test case "test_pause_fwd_port_stop_start" fails: it expects the MAC
control frame forwarding setting to remain in effect after a port
stop/start cycle. Fix the bug to pass the test case.

Fixes: 69ce8c8a4ce3 ("net/txgbe: support flow control")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/base/txgbe_hw.c   | 9 +++++++++
 drivers/net/txgbe/base/txgbe_type.h | 1 +
 drivers/net/txgbe/txgbe_ethdev.c    | 1 +
 3 files changed, 11 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 76b9ee3c0a..42cd0e0e2c 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -226,6 +226,15 @@ s32 txgbe_setup_fc(struct txgbe_hw *hw)
 				TXGBE_MD_DEV_AUTO_NEG, reg_cu);
 	}
 
+	/*
+	 * Reconfig mac ctrl frame fwd rule to make sure it still
+	 * working after port stop/start.
+	 */
+	wr32m(hw, TXGBE_MACRXFLT, TXGBE_MACRXFLT_CTL_MASK,
+	      (hw->fc.mac_ctrl_frame_fwd ?
+	       TXGBE_MACRXFLT_CTL_NOPS : TXGBE_MACRXFLT_CTL_DROP));
+	txgbe_flush(hw);
+
 	DEBUGOUT("Set up FC; reg = 0x%08X", reg);
 out:
 	return err;
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 383438ea3c..65527a22e7 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -299,6 +299,7 @@ struct txgbe_fc_info {
 	u32 high_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl High-water */
 	u32 low_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl Low-water */
 	u16 pause_time; /* Flow Control Pause timer */
+	u8 mac_ctrl_frame_fwd; /* Forward MAC control frames */
 	bool send_xon; /* Flow control send XON */
 	bool strict_ieee; /* Strict IEEE mode */
 	bool disable_fc_autoneg; /* Do not autonegotiate FC */
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index e5736bf387..b68a0557be 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3586,6 +3586,7 @@ txgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->fc.low_water[0] = fc_conf->low_water;
 	hw->fc.send_xon = fc_conf->send_xon;
 	hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+	hw->fc.mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
 
 	err = txgbe_fc_enable(hw);
From patchwork Mon Jun 9 07:04:49 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154172
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu, stable@dpdk.org
Subject: [PATCH v2 07/12] net/ngbe: fix MAC control frame forwarding
Date: Mon, 9 Jun 2025 15:04:49 +0800
Message-ID: <6D83A31D8EF41AE5+20250609070454.223387-8-jiawenwu@trustnetic.com>
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>

The test case "test_pause_fwd_port_stop_start" fails: it expects the MAC
control frame forwarding setting to remain in effect after a port
stop/start cycle. Fix the bug to pass the test case.

Fixes: f40e9f0e2278 ("net/ngbe: support flow control")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/ngbe/base/ngbe_hw.c   | 9 +++++++++
 drivers/net/ngbe/base/ngbe_type.h | 1 +
 drivers/net/ngbe/ngbe_ethdev.c    | 1 +
 3 files changed, 11 insertions(+)

diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 6688ae6a31..bf09f8a817 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -865,6 +865,15 @@ s32 ngbe_setup_fc_em(struct ngbe_hw *hw)
 		goto out;
 	}
 
+	/*
+	 * Reconfig mac ctrl frame fwd rule to make sure it still
+	 * working after port stop/start.
+	 */
+	wr32m(hw, NGBE_MACRXFLT, NGBE_MACRXFLT_CTL_MASK,
+	      (hw->fc.mac_ctrl_frame_fwd ?
+	       NGBE_MACRXFLT_CTL_NOPS : NGBE_MACRXFLT_CTL_DROP));
+	ngbe_flush(hw);
+
 	err = hw->phy.set_pause_adv(hw, reg_cu);
 
 out:
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 7a3b52ffd4..fc571c7457 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -112,6 +112,7 @@ struct ngbe_fc_info {
 	u32 high_water; /* Flow Ctrl High-water */
 	u32 low_water; /* Flow Ctrl Low-water */
 	u16 pause_time; /* Flow Control Pause timer */
+	u8 mac_ctrl_frame_fwd; /* Forward MAC control frames */
 	bool send_xon; /* Flow control send XON */
 	bool strict_ieee; /* Strict IEEE mode */
 	bool disable_fc_autoneg; /* Do not autonegotiate FC */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 08e87471f6..a8f847de8d 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2420,6 +2420,7 @@ ngbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->fc.low_water = fc_conf->low_water;
 	hw->fc.send_xon = fc_conf->send_xon;
 	hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+	hw->fc.mac_ctrl_frame_fwd = fc_conf->mac_ctrl_frame_fwd;
 
 	err = hw->mac.fc_enable(hw);
From patchwork Mon Jun 9 07:04:50 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154173
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu, stable@dpdk.org
Subject: [PATCH v2 08/12] net/txgbe: fix incorrect device statistics
Date: Mon, 9 Jun 2025 15:04:50 +0800
Message-ID: <532DFAB41936EA66+20250609070454.223387-9-jiawenwu@trustnetic.com>
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>

The extended statistic "rx_undersize_errors" is incorrectly read from
the counter of frames received with a length error, which belongs to
"rx_length_errors". "rx_undersize_errors" is actually the counter of
shorter-than-64B frames received without any error. In addition,
"tx_broadcast_packets" should use rd64() to get the full count from the
low and high registers.

Fixes: c9bb590d4295 ("net/txgbe: support device statistics")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/txgbe/txgbe_ethdev.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b68a0557be..580579094b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2250,7 +2250,7 @@ txgbe_read_stats_registers(struct txgbe_hw *hw,
 	hw_stats->rx_total_bytes += rd64(hw, TXGBE_MACRXGBOCTL);
 	hw_stats->rx_broadcast_packets += rd64(hw, TXGBE_MACRXOCTL);
-	hw_stats->tx_broadcast_packets += rd32(hw, TXGBE_MACTXOCTL);
+	hw_stats->tx_broadcast_packets += rd64(hw, TXGBE_MACTXOCTL);
 	hw_stats->rx_size_64_packets += rd64(hw, TXGBE_MACRX1TO64L);
 	hw_stats->rx_size_65_to_127_packets += rd64(hw, TXGBE_MACRX65TO127L);
@@ -2269,7 +2269,8 @@ txgbe_read_stats_registers(struct txgbe_hw *hw,
 	hw_stats->tx_size_1024_to_max_packets +=
			rd64(hw, TXGBE_MACTX1024TOMAXL);
-	hw_stats->rx_undersize_errors += rd64(hw, TXGBE_MACRXERRLENL);
+	hw_stats->rx_length_errors += rd64(hw, TXGBE_MACRXERRLENL);
+	hw_stats->rx_undersize_errors += rd32(hw, TXGBE_MACRXUNDERSIZE);
 	hw_stats->rx_oversize_cnt += rd32(hw, TXGBE_MACRXOVERSIZE);
 	hw_stats->rx_jabber_errors += rd32(hw, TXGBE_MACRXJABBER);
From patchwork Mon Jun 9 07:04:51 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154174
From: Jiawen Wu
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, Jiawen Wu, stable@dpdk.org
Subject: [PATCH v2 09/12] net/ngbe: fix incorrect device statistics
Date: Mon, 9 Jun 2025 15:04:51 +0800
Message-ID: <63DC12F514B9F5CE+20250609070454.223387-10-jiawenwu@trustnetic.com>
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>

The extended statistic "rx_undersize_errors" is incorrectly read from
the counter of frames received with a length error, which belongs to
"rx_length_errors". "rx_undersize_errors" is actually the counter of
shorter-than-64B frames received without any error. In addition,
"tx_broadcast_packets" should use rd64() to get the full count from the
low and high registers.

Fixes: fdb1e851975a ("net/ngbe: support basic statistics")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu
---
 drivers/net/ngbe/ngbe_ethdev.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index a8f847de8d..d3ac40299f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1429,7 +1429,7 @@ ngbe_read_stats_registers(struct ngbe_hw *hw,
 	hw_stats->rx_total_bytes += rd64(hw, NGBE_MACRXGBOCTL);
 	hw_stats->rx_broadcast_packets += rd64(hw, NGBE_MACRXOCTL);
-	hw_stats->tx_broadcast_packets += rd32(hw, NGBE_MACTXOCTL);
+	hw_stats->tx_broadcast_packets += rd64(hw, NGBE_MACTXOCTL);
 	hw_stats->rx_size_64_packets += rd64(hw, NGBE_MACRX1TO64L);
 	hw_stats->rx_size_65_to_127_packets += rd64(hw, NGBE_MACRX65TO127L);
@@ -1448,7 +1448,8 @@ ngbe_read_stats_registers(struct ngbe_hw *hw,
 	hw_stats->tx_size_1024_to_max_packets +=
			rd64(hw, NGBE_MACTX1024TOMAXL);
-	hw_stats->rx_undersize_errors += rd64(hw, NGBE_MACRXERRLENL);
+	hw_stats->rx_length_errors += rd64(hw, NGBE_MACRXERRLENL);
+	hw_stats->rx_undersize_errors += rd32(hw, NGBE_MACRXUNDERSIZE);
 	hw_stats->rx_oversize_cnt += rd32(hw, NGBE_MACRXOVERSIZE);
 	hw_stats->rx_jabber_errors += rd32(hw, NGBE_MACRXJABBER);
Fix the same issue as PF in commit 66364efcf958 ("net/txgbe: restrict
configuration of VLAN strip offload").

There is a hardware limitation that the Rx ring config register is not
writable when the Rx ring is enabled, i.e. when the TXGBE_RXCFG_ENA bit
is set. But disabling the ring while there is traffic will cause the
ring to get stuck. So forbid the configuration of VLAN strip offload
while the device is started.
Fixes: aa1ae7941e71 ("net/txgbe: support VF VLAN")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev_vf.c | 31 +++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index c0d8aa15b2..847febf8c3 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -935,7 +935,7 @@ txgbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 }
 
 static void
-txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+txgbevf_vlan_strip_q_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	uint32_t ctrl;
@@ -946,20 +946,28 @@ txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 		return;
 
 	ctrl = rd32(hw, TXGBE_RXCFG(queue));
-	txgbe_dev_save_rx_queue(hw, queue);
 	if (on)
 		ctrl |= TXGBE_RXCFG_VLAN;
 	else
 		ctrl &= ~TXGBE_RXCFG_VLAN;
-	wr32(hw, TXGBE_RXCFG(queue), 0);
-	msec_delay(100);
-	txgbe_dev_store_rx_queue(hw, queue);
-	wr32m(hw, TXGBE_RXCFG(queue),
-	      TXGBE_RXCFG_VLAN | TXGBE_RXCFG_ENA, ctrl);
+	wr32(hw, TXGBE_RXCFG(queue), ctrl);
 
 	txgbe_vlan_hw_strip_bitmap_set(dev, queue, on);
 }
 
+static void
+txgbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (!hw->adapter_stopped) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return;
+	}
+
+	txgbevf_vlan_strip_q_set(dev, queue, on);
+}
+
 static int
 txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 {
@@ -972,7 +980,7 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 		on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
-		txgbevf_vlan_strip_queue_set(dev, i, on);
+		txgbevf_vlan_strip_q_set(dev, i, on);
 	}
 }
 
@@ -982,6 +990,13 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 static int
 txgbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 {
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (!hw->adapter_stopped && (mask & RTE_ETH_VLAN_STRIP_MASK)) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return -EPERM;
+	}
+
 	txgbe_config_vlan_strip_on_all_queues(dev, mask);
 	txgbevf_vlan_offload_config(dev, mask);

From patchwork Mon Jun 9 07:04:53 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154176
X-Patchwork-Delegate: stephen@networkplumber.org
Subject: [PATCH v2 11/12] net/ngbe: restrict VLAN strip configuration on VF
From: Jiawen Wu <jiawenwu@trustnetic.com>
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, stable@dpdk.org
Date: Mon, 9 Jun 2025 15:04:53 +0800
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>
Fix the same issue as PF in commit baca8ec066dc ("net/ngbe: restrict
configuration of VLAN strip offload").

There is a hardware limitation that the Rx ring config register is not
writable when the Rx ring is enabled, i.e. when the TXGBE_RXCFG_ENA bit
is set. But disabling the ring while there is traffic will cause the
ring to get stuck. So forbid the configuration of VLAN strip offload
while the device is started.
Fixes: f47dc03c706f ("net/ngbe: add VLAN ops for VF device")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev_vf.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_ethdev_vf.c b/drivers/net/ngbe/ngbe_ethdev_vf.c
index 5d68f1602d..846bc981f6 100644
--- a/drivers/net/ngbe/ngbe_ethdev_vf.c
+++ b/drivers/net/ngbe/ngbe_ethdev_vf.c
@@ -828,7 +828,7 @@ ngbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 }
 
 static void
-ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+ngbevf_vlan_strip_q_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 {
 	struct ngbe_hw *hw = ngbe_dev_hw(dev);
 	uint32_t ctrl;
@@ -848,6 +848,19 @@ ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	ngbe_vlan_hw_strip_bitmap_set(dev, queue, on);
 }
 
+static void
+ngbevf_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+	struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+	if (!hw->adapter_stopped) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return;
+	}
+
+	ngbevf_vlan_strip_q_set(dev, queue, on);
+}
+
 static int
 ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 {
@@ -860,7 +873,7 @@ ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 		on = !!(rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
-		ngbevf_vlan_strip_queue_set(dev, i, on);
+		ngbevf_vlan_strip_q_set(dev, i, on);
 	}
 }
 
@@ -870,6 +883,13 @@ ngbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 static int
 ngbevf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 {
+	struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+	if (!hw->adapter_stopped && (mask & RTE_ETH_VLAN_STRIP_MASK)) {
+		PMD_DRV_LOG(ERR, "Please stop port first");
+		return -EPERM;
+	}
+
 	ngbe_config_vlan_strip_on_all_queues(dev, mask);
 	ngbevf_vlan_offload_config(dev, mask);

From patchwork Mon Jun 9 07:04:54 2025
X-Patchwork-Submitter: Jiawen Wu
X-Patchwork-Id: 154177
X-Patchwork-Delegate: stephen@networkplumber.org
Subject: [PATCH v2 12/12] net/txgbe: add missing LRO flag in mbuf when LRO enabled
From: Jiawen Wu <jiawenwu@trustnetic.com>
To: dev@dpdk.org
Cc: zaiyuwang@trustnetic.com, stable@dpdk.org
Date: Mon, 9 Jun 2025 15:04:54 +0800
Message-ID: <39236692534275AC+20250609070454.223387-13-jiawenwu@trustnetic.com>
In-Reply-To: <20250609070454.223387-1-jiawenwu@trustnetic.com>

When LRO is enabled, the driver must set the LRO flag in received
aggregated packets to indicate LRO processing to upper-layer
applications. Add the missing RTE_MBUF_F_RX_LRO flag into the ol_flags
field of the mbuf to fix it.
Fixes: 0e484278c85f ("net/txgbe: support Rx")
Cc: stable@dpdk.org

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_rxtx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index a85d417ff6..e6f33739c4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1793,6 +1793,8 @@ txgbe_fill_cluster_head_buf(struct rte_mbuf *head, struct txgbe_rx_desc *desc,
 	pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
 	pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
 	pkt_flags |= txgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+	if (TXGBE_RXD_RSCCNT(desc->qw0.dw0))
+		pkt_flags |= RTE_MBUF_F_RX_LRO;
 	head->ol_flags = pkt_flags;
 	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
 						rxq->pkt_type_mask);