From patchwork Fri Jul 9 17:29:19 2021
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 95629
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Ferruh Yigit
To: Jerin Jacob, Xiaoyun Li, Chas Williams, "Min Hu (Connor)",
 Hemant Agrawal, Sachin Saxena, Qi Zhang, Xiao Wang, Matan Azrad,
 Shahaf Shuler, Viacheslav Ovsiienko, Harman Kalra, Maciej Czekaj,
 Ray Kinsella, Neil Horman, Bernard Iremonger, Bruce Richardson,
 Konstantin Ananyev, John McNamara, Igor Russkikh, Pavel Belous,
 Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
 Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
 Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
 Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Guy Tzalik,
 Evgeny Schemeilin, Igor
Chauskin, Gagandeep Singh, John Daley, Hyong Youb Kim, Ziyang Xuan,
 Xiaoyun Wang, Guoyang Zhou, Yisen Zhuang, Lijun Ou, Beilei Xing,
 Jingjing Wu, Qiming Yang, Andrew Boyer, Rosen Xu, Shijith Thotton,
 Srisivasubramanian Srinivasan, Zyta Szpak, Liron Himi, Heinrich Kuhn,
 Devendra Singh Rawat, Andrew Rybchenko, Keith Wiles, Jiawen Wu,
 Jian Wang, Maxime Coquelin, Chenbo Xia, Nicolas Chautru, David Hunt,
 Harry van Haaren, Cristian Dumitrescu, Radu Nicolau, Akhil Goyal,
 Tomasz Kantecki, Declan Doherty, Pavan Nikhilesh, Kirill Rybalchenko,
 Jasvinder Singh, Thomas Monjalon
Cc: Ferruh Yigit, dev@dpdk.org
Date: Fri, 9 Jul 2021 18:29:19 +0100
Message-Id: <20210709172923.3369846-1-ferruh.yigit@intel.com>
X-Mailer: git-send-email 2.31.1
Subject: [dpdk-dev] [PATCH 1/4] ethdev: fix max Rx packet length

There is confusion around setting the max Rx packet length; this patch
aims to clarify it.

The 'rte_eth_dev_configure()' API accepts the max Rx packet size via the
'uint32_t max_rx_pkt_len' field of the config struct
'struct rte_eth_conf'. The 'rte_eth_dev_set_mtu()' API can also be used
to set the MTU, with the result stored in
'(struct rte_eth_dev)->data->mtu'.

These two APIs are related but work in a disconnected way: they store
the set values in different variables, which makes it hard to figure out
which one to use, and having two different related methods is confusing
for users.

Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
  frame, while 'max_rx_pkt_len' is the size of the whole Ethernet frame.
  The difference is the Ethernet frame overhead, which may vary from
  device to device based on what the device supports, such as VLAN and
  QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
  frames, which adds further confusion, and some APIs and PMDs already
  disregard this documented behavior.
* For the jumbo frame enabled case, 'max_rx_pkt_len' is a mandatory
  field, which adds configuration complexity for the application.

As a solution, both APIs take the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For
this, 'max_rx_pkt_len' is replaced by 'mtu', which is always valid,
independent of jumbo frames.

For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the
user request; it should be used only within the configure function, and
the result should be stored in '(struct rte_eth_dev)->data->mtu'. After
that point both the application and the PMD use the MTU from this
variable.

When the application doesn't provide an MTU during
'rte_eth_dev_configure()', the default 'RTE_ETHER_MTU' value is used.

As additional clarification, the MTU is used to configure the device for
its physical Rx/Tx limitation.

Another related issue is the size of the buffer used to store Rx
packets: many PMDs use the mbuf data buffer size as the Rx buffer size
and compare the MTU against it to decide whether to enable scattered Rx,
if the PMD supports it. If the device does not support scattered Rx, an
MTU bigger than the Rx buffer size should fail.
Signed-off-by: Ferruh Yigit Reviewed-by: Rosen Xu --- app/test-eventdev/test_perf_common.c | 1 - app/test-eventdev/test_pipeline_common.c | 5 +- app/test-pmd/cmdline.c | 45 ++++----- app/test-pmd/config.c | 18 ++-- app/test-pmd/parameters.c | 4 +- app/test-pmd/testpmd.c | 94 ++++++++++-------- app/test-pmd/testpmd.h | 2 +- app/test/test_link_bonding.c | 1 - app/test/test_link_bonding_mode4.c | 1 - app/test/test_link_bonding_rssconf.c | 2 - app/test/test_pmd_perf.c | 1 - doc/guides/nics/dpaa.rst | 2 +- doc/guides/nics/dpaa2.rst | 2 +- doc/guides/nics/features.rst | 2 +- doc/guides/nics/fm10k.rst | 2 +- doc/guides/nics/mlx5.rst | 4 +- doc/guides/nics/octeontx.rst | 2 +- doc/guides/nics/thunderx.rst | 2 +- doc/guides/rel_notes/deprecation.rst | 25 ----- doc/guides/sample_app_ug/flow_classify.rst | 8 +- doc/guides/sample_app_ug/ioat.rst | 1 - doc/guides/sample_app_ug/ip_reassembly.rst | 2 +- doc/guides/sample_app_ug/skeleton.rst | 8 +- drivers/net/atlantic/atl_ethdev.c | 3 - drivers/net/avp/avp_ethdev.c | 17 ++-- drivers/net/axgbe/axgbe_ethdev.c | 7 +- drivers/net/bnx2x/bnx2x_ethdev.c | 6 +- drivers/net/bnxt/bnxt_ethdev.c | 21 ++-- drivers/net/bonding/rte_eth_bond_pmd.c | 4 +- drivers/net/cnxk/cnxk_ethdev.c | 9 +- drivers/net/cnxk/cnxk_ethdev_ops.c | 8 +- drivers/net/cxgbe/cxgbe_ethdev.c | 12 +-- drivers/net/cxgbe/cxgbe_main.c | 3 +- drivers/net/cxgbe/sge.c | 3 +- drivers/net/dpaa/dpaa_ethdev.c | 52 ++++------ drivers/net/dpaa2/dpaa2_ethdev.c | 31 +++--- drivers/net/e1000/em_ethdev.c | 4 +- drivers/net/e1000/igb_ethdev.c | 18 +--- drivers/net/e1000/igb_rxtx.c | 16 ++- drivers/net/ena/ena_ethdev.c | 27 ++--- drivers/net/enetc/enetc_ethdev.c | 24 ++--- drivers/net/enic/enic_ethdev.c | 2 +- drivers/net/enic/enic_main.c | 42 ++++---- drivers/net/fm10k/fm10k_ethdev.c | 2 +- drivers/net/hinic/hinic_pmd_ethdev.c | 20 ++-- drivers/net/hns3/hns3_ethdev.c | 28 ++---- drivers/net/hns3/hns3_ethdev_vf.c | 38 +++---- drivers/net/hns3/hns3_rxtx.c | 10 +- 
drivers/net/i40e/i40e_ethdev.c | 10 +- drivers/net/i40e/i40e_ethdev_vf.c | 14 +-- drivers/net/i40e/i40e_rxtx.c | 4 +- drivers/net/iavf/iavf_ethdev.c | 9 +- drivers/net/ice/ice_dcf_ethdev.c | 5 +- drivers/net/ice/ice_ethdev.c | 14 +-- drivers/net/ice/ice_rxtx.c | 12 +-- drivers/net/igc/igc_ethdev.c | 51 +++------- drivers/net/igc/igc_ethdev.h | 7 ++ drivers/net/igc/igc_txrx.c | 22 ++--- drivers/net/ionic/ionic_ethdev.c | 12 +-- drivers/net/ionic/ionic_rxtx.c | 6 +- drivers/net/ipn3ke/ipn3ke_representor.c | 10 +- drivers/net/ixgbe/ixgbe_ethdev.c | 35 +++---- drivers/net/ixgbe/ixgbe_pf.c | 6 +- drivers/net/ixgbe/ixgbe_rxtx.c | 15 ++- drivers/net/liquidio/lio_ethdev.c | 20 +--- drivers/net/mlx4/mlx4_rxq.c | 17 ++-- drivers/net/mlx5/mlx5_rxq.c | 25 ++--- drivers/net/mvneta/mvneta_ethdev.c | 7 -- drivers/net/mvneta/mvneta_rxtx.c | 13 ++- drivers/net/mvpp2/mrvl_ethdev.c | 34 +++---- drivers/net/nfp/nfp_net.c | 9 +- drivers/net/octeontx/octeontx_ethdev.c | 12 +-- drivers/net/octeontx2/otx2_ethdev.c | 2 +- drivers/net/octeontx2/otx2_ethdev_ops.c | 11 +-- drivers/net/pfe/pfe_ethdev.c | 7 +- drivers/net/qede/qede_ethdev.c | 16 +-- drivers/net/qede/qede_rxtx.c | 8 +- drivers/net/sfc/sfc_ethdev.c | 4 +- drivers/net/sfc/sfc_port.c | 6 +- drivers/net/tap/rte_eth_tap.c | 7 +- drivers/net/thunderx/nicvf_ethdev.c | 13 +-- drivers/net/txgbe/txgbe_ethdev.c | 7 +- drivers/net/txgbe/txgbe_ethdev.h | 4 + drivers/net/txgbe/txgbe_ethdev_vf.c | 2 - drivers/net/txgbe/txgbe_rxtx.c | 19 ++-- drivers/net/virtio/virtio_ethdev.c | 4 +- examples/bbdev_app/main.c | 1 - examples/bond/main.c | 1 - examples/distributor/main.c | 1 - .../pipeline_worker_generic.c | 1 - .../eventdev_pipeline/pipeline_worker_tx.c | 1 - examples/flow_classify/flow_classify.c | 10 +- examples/ioat/ioatfwd.c | 1 - examples/ip_fragmentation/main.c | 11 +-- examples/ip_pipeline/link.c | 2 +- examples/ip_reassembly/main.c | 11 ++- examples/ipsec-secgw/ipsec-secgw.c | 7 +- examples/ipv4_multicast/main.c | 8 +- 
examples/kni/main.c | 6 +- examples/l2fwd-cat/l2fwd-cat.c | 8 +- examples/l2fwd-crypto/main.c | 1 - examples/l2fwd-event/l2fwd_common.c | 1 - examples/l3fwd-acl/main.c | 11 +-- examples/l3fwd-graph/main.c | 4 +- examples/l3fwd-power/main.c | 11 ++- examples/l3fwd/main.c | 4 +- .../performance-thread/l3fwd-thread/main.c | 7 +- examples/pipeline/obj.c | 2 +- examples/ptpclient/ptpclient.c | 10 +- examples/qos_meter/main.c | 1 - examples/qos_sched/init.c | 1 - examples/rxtx_callbacks/main.c | 10 +- examples/skeleton/basicfwd.c | 10 +- examples/vhost/main.c | 4 +- examples/vm_power_manager/main.c | 11 +-- lib/ethdev/rte_ethdev.c | 98 +++++++++++-------- lib/ethdev/rte_ethdev.h | 2 +- lib/ethdev/rte_ethdev_trace.h | 2 +- 118 files changed, 531 insertions(+), 848 deletions(-) diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c index cc100650c21e..660d5a0364b6 100644 --- a/app/test-eventdev/test_perf_common.c +++ b/app/test-eventdev/test_perf_common.c @@ -669,7 +669,6 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt) struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, .split_hdr_size = 0, }, .rx_adv_conf = { diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c index 6ee530d4cdc9..5fcea74b4d43 100644 --- a/app/test-eventdev/test_pipeline_common.c +++ b/app/test-eventdev/test_pipeline_common.c @@ -197,8 +197,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt) return -EINVAL; } - port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz; - if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN) + port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN - + RTE_ETHER_CRC_LEN; + if (port_conf.rxmode.mtu > RTE_ETHER_MTU) port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; t->internal_port = 1; diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 8468018cf35d..8bdc042f6e8e 100644 --- 
a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -1892,43 +1892,36 @@ cmd_config_max_pkt_len_parsed(void *parsed_result, __rte_unused void *data) { struct cmd_config_max_pkt_len_result *res = parsed_result; - uint32_t max_rx_pkt_len_backup = 0; - portid_t pid; + portid_t port_id; int ret; + if (strcmp(res->name, "max-pkt-len")) { + printf("Unknown parameter\n"); + return; + } + if (!all_ports_stopped()) { printf("Please stop all ports first\n"); return; } - RTE_ETH_FOREACH_DEV(pid) { - struct rte_port *port = &ports[pid]; - - if (!strcmp(res->name, "max-pkt-len")) { - if (res->value < RTE_ETHER_MIN_LEN) { - printf("max-pkt-len can not be less than %d\n", - RTE_ETHER_MIN_LEN); - return; - } - if (res->value == port->dev_conf.rxmode.max_rx_pkt_len) - return; - - ret = eth_dev_info_get_print_err(pid, &port->dev_info); - if (ret != 0) { - printf("rte_eth_dev_info_get() failed for port %u\n", - pid); - return; - } + RTE_ETH_FOREACH_DEV(port_id) { + struct rte_port *port = &ports[port_id]; - max_rx_pkt_len_backup = port->dev_conf.rxmode.max_rx_pkt_len; + if (res->value < RTE_ETHER_MIN_LEN) { + printf("max-pkt-len can not be less than %d\n", + RTE_ETHER_MIN_LEN); + return; + } - port->dev_conf.rxmode.max_rx_pkt_len = res->value; - if (update_jumbo_frame_offload(pid) != 0) - port->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len_backup; - } else { - printf("Unknown parameter\n"); + ret = eth_dev_info_get_print_err(port_id, &port->dev_info); + if (ret != 0) { + printf("rte_eth_dev_info_get() failed for port %u\n", + port_id); return; } + + update_jumbo_frame_offload(port_id, res->value); } init_port_config(); diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 04ae0feb5852..a87265d7638b 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1139,7 +1139,6 @@ port_mtu_set(portid_t port_id, uint16_t mtu) int diag; struct rte_port *rte_port = &ports[port_id]; struct rte_eth_dev_info dev_info; - uint16_t eth_overhead; int ret; if 
(port_id_is_invalid(port_id, ENABLED_WARN)) @@ -1155,20 +1154,17 @@ port_mtu_set(portid_t port_id, uint16_t mtu) return; } diag = rte_eth_dev_set_mtu(port_id, mtu); - if (diag) + if (diag) { printf("Set MTU failed. diag=%d\n", diag); - else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) { - /* - * Ether overhead in driver is equal to the difference of - * max_rx_pktlen and max_mtu in rte_eth_dev_info when the - * device supports jumbo frame. - */ - eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu; + return; + } + + rte_port->dev_conf.rxmode.mtu = mtu; + + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) { if (mtu > RTE_ETHER_MTU) { rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; - rte_port->dev_conf.rxmode.max_rx_pkt_len = - mtu + eth_overhead; } else rte_port->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index 5e69d2aa8cfe..8e8556d74a4a 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -860,7 +860,9 @@ launch_args_parse(int argc, char** argv) if (!strcmp(lgopts[opt_idx].name, "max-pkt-len")) { n = atoi(optarg); if (n >= RTE_ETHER_MIN_LEN) - rx_mode.max_rx_pkt_len = (uint32_t) n; + rx_mode.mtu = (uint32_t) n - + (RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN); else rte_exit(EXIT_FAILURE, "Invalid max-pkt-len=%d - should be > %d\n", diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 1cdd3cdd12b6..2c79cae05664 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -445,13 +445,7 @@ lcoreid_t latencystats_lcore_id = -1; /* * Ethernet device configuration. */ -struct rte_eth_rxmode rx_mode = { - /* Default maximum frame length. - * Zero is converted to "RTE_ETHER_MTU + PMD Ethernet overhead" - * in init_config(). 
- */ - .max_rx_pkt_len = 0, -}; +struct rte_eth_rxmode rx_mode; struct rte_eth_txmode tx_mode = { .offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE, @@ -1417,6 +1411,20 @@ check_nb_hairpinq(queueid_t hairpinq) return 0; } +static int +get_eth_overhead(struct rte_eth_dev_info *dev_info) +{ + uint32_t eth_overhead; + + if (dev_info->max_mtu != UINT16_MAX && + dev_info->max_rx_pktlen > dev_info->max_mtu) + eth_overhead = dev_info->max_rx_pktlen - dev_info->max_mtu; + else + eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + + return eth_overhead; +} + static void init_config(void) { @@ -1465,7 +1473,7 @@ init_config(void) rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n"); - ret = update_jumbo_frame_offload(pid); + ret = update_jumbo_frame_offload(pid, 0); if (ret != 0) printf("Updating jumbo frame offload failed for port %u\n", pid); @@ -1512,14 +1520,19 @@ init_config(void) */ if (port->dev_info.rx_desc_lim.nb_mtu_seg_max != UINT16_MAX && port->dev_info.rx_desc_lim.nb_mtu_seg_max != 0) { - data_size = rx_mode.max_rx_pkt_len / - port->dev_info.rx_desc_lim.nb_mtu_seg_max; + uint32_t eth_overhead = get_eth_overhead(&port->dev_info); + uint16_t mtu; - if ((data_size + RTE_PKTMBUF_HEADROOM) > + if (rte_eth_dev_get_mtu(pid, &mtu) == 0) { + data_size = mtu + eth_overhead / + port->dev_info.rx_desc_lim.nb_mtu_seg_max; + + if ((data_size + RTE_PKTMBUF_HEADROOM) > mbuf_data_size[0]) { - mbuf_data_size[0] = data_size + - RTE_PKTMBUF_HEADROOM; - warning = 1; + mbuf_data_size[0] = data_size + + RTE_PKTMBUF_HEADROOM; + warning = 1; + } } } } @@ -3352,43 +3365,44 @@ rxtx_port_config(struct rte_port *port) /* * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload, - * MTU is also aligned if JUMBO_FRAME offload is not set. + * MTU is also aligned. * * port->dev_info should be set before calling this function. * + * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU + + * ETH_OVERHEAD". This is useful to update flags but not MTU value. 
+ * * return 0 on success, negative on error */ int -update_jumbo_frame_offload(portid_t portid) +update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen) { struct rte_port *port = &ports[portid]; uint32_t eth_overhead; uint64_t rx_offloads; - int ret; + uint16_t mtu, new_mtu; bool on; - /* Update the max_rx_pkt_len to have MTU as RTE_ETHER_MTU */ - if (port->dev_info.max_mtu != UINT16_MAX && - port->dev_info.max_rx_pktlen > port->dev_info.max_mtu) - eth_overhead = port->dev_info.max_rx_pktlen - - port->dev_info.max_mtu; - else - eth_overhead = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + eth_overhead = get_eth_overhead(&port->dev_info); - rx_offloads = port->dev_conf.rxmode.offloads; + if (rte_eth_dev_get_mtu(portid, &mtu) != 0) { + printf("Failed to get MTU for port %u\n", portid); + return -1; + } + + if (max_rx_pktlen == 0) + max_rx_pktlen = mtu + eth_overhead; - /* Default config value is 0 to use PMD specific overhead */ - if (port->dev_conf.rxmode.max_rx_pkt_len == 0) - port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead; + rx_offloads = port->dev_conf.rxmode.offloads; + new_mtu = max_rx_pktlen - eth_overhead; - if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) { + if (new_mtu <= RTE_ETHER_MTU) { rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; on = false; } else { if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) { printf("Frame size (%u) is not supported by port %u\n", - port->dev_conf.rxmode.max_rx_pkt_len, - portid); + max_rx_pktlen, portid); return -1; } rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; @@ -3409,18 +3423,16 @@ update_jumbo_frame_offload(portid_t portid) } } - /* If JUMBO_FRAME is set MTU conversion done by ethdev layer, - * if unset do it here - */ - if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) { - ret = rte_eth_dev_set_mtu(portid, - port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead); - if (ret) - printf("Failed to set MTU to %u for port %u\n", - 
port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead, - portid); + if (mtu == new_mtu) + return 0; + + if (rte_eth_dev_set_mtu(portid, new_mtu) != 0) { + printf("Failed to set MTU to %u for port %u\n", new_mtu, portid); + return -1; } + port->dev_conf.rxmode.mtu = new_mtu; + return 0; } diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index d61a055bdd1b..42143f85924f 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue, __rte_unused void *user_param); void add_tx_dynf_callback(portid_t portid); void remove_tx_dynf_callback(portid_t portid); -int update_jumbo_frame_offload(portid_t portid); +int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen); /* * Work-around of a compilation error with ICC on invocations of the diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c index 8a5c8310a8b4..5388d18125a6 100644 --- a/app/test/test_link_bonding.c +++ b/app/test/test_link_bonding.c @@ -136,7 +136,6 @@ static struct rte_eth_conf default_pmd_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE, .split_hdr_size = 0, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, }, .txmode = { .mq_mode = ETH_MQ_TX_NONE, diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c index 2c835fa7adc7..3e9254fe896d 100644 --- a/app/test/test_link_bonding_mode4.c +++ b/app/test/test_link_bonding_mode4.c @@ -108,7 +108,6 @@ static struct link_bonding_unittest_params test_params = { static struct rte_eth_conf default_pmd_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, .split_hdr_size = 0, }, .txmode = { diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c index 5dac60ca1edd..e7bb0497b663 100644 --- a/app/test/test_link_bonding_rssconf.c +++ b/app/test/test_link_bonding_rssconf.c @@ -81,7 +81,6 @@ static struct link_bonding_rssconf_unittest_params test_params = { 
static struct rte_eth_conf default_pmd_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, .split_hdr_size = 0, }, .txmode = { @@ -93,7 +92,6 @@ static struct rte_eth_conf default_pmd_conf = { static struct rte_eth_conf rss_pmd_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, .split_hdr_size = 0, }, .txmode = { diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c index 3a248d512c4a..a3b4f52c65e6 100644 --- a/app/test/test_pmd_perf.c +++ b/app/test/test_pmd_perf.c @@ -63,7 +63,6 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS]; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, .split_hdr_size = 0, }, .txmode = { diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst index 917482dbe2a5..b8d43aa90098 100644 --- a/doc/guides/nics/dpaa.rst +++ b/doc/guides/nics/dpaa.rst @@ -335,7 +335,7 @@ Maximum packet length ~~~~~~~~~~~~~~~~~~~~~ The DPAA SoC family support a maximum of a 10240 jumbo frame. The value -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len`` +is fixed and cannot be changed. So, even when the ``rxmode.mtu`` member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames up to 10240 bytes can still reach the host interface. diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst index 6470f1c05ac8..ce16e1047df2 100644 --- a/doc/guides/nics/dpaa2.rst +++ b/doc/guides/nics/dpaa2.rst @@ -551,7 +551,7 @@ Maximum packet length ~~~~~~~~~~~~~~~~~~~~~ The DPAA2 SoC family support a maximum of a 10240 jumbo frame. The value -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len`` +is fixed and cannot be changed. So, even when the ``rxmode.mtu`` member of ``struct rte_eth_conf`` is set to a value lower than 10240, frames up to 10240 bytes can still reach the host interface. 
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index 403c2b03a386..c98242f3b72f 100644 --- a/doc/guides/nics/features.rst +++ b/doc/guides/nics/features.rst @@ -166,7 +166,7 @@ Jumbo frame Supports Rx jumbo frames. * **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``. - ``dev_conf.rxmode.max_rx_pkt_len``. + ``dev_conf.rxmode.mtu``. * **[related] rte_eth_dev_info**: ``max_rx_pktlen``. * **[related] API**: ``rte_eth_dev_set_mtu()``. diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst index 7b8ef0e7823d..ed6afd62703d 100644 --- a/doc/guides/nics/fm10k.rst +++ b/doc/guides/nics/fm10k.rst @@ -141,7 +141,7 @@ Maximum packet length ~~~~~~~~~~~~~~~~~~~~~ The FM10000 family of NICS support a maximum of a 15K jumbo frame. The value -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len`` +is fixed and cannot be changed. So, even when the ``rxmode.mtu`` member of ``struct rte_eth_conf`` is set to a value lower than 15364, frames up to 15364 bytes can still reach the host interface. diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index 83299646ddb1..338734826a7a 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -584,9 +584,9 @@ Driver options and each stride receives one packet. MPRQ can improve throughput for small-packet traffic. - When MPRQ is enabled, max_rx_pkt_len can be larger than the size of + When MPRQ is enabled, MTU can be larger than the size of user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will - configure large stride size enough to accommodate max_rx_pkt_len as long as + configure large stride size enough to accommodate MTU as long as device allows. Note that this can waste system memory compared to enabling Rx scatter and multi-segment packet. 
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst index b1a868b054d1..8236cc3e93e0 100644 --- a/doc/guides/nics/octeontx.rst +++ b/doc/guides/nics/octeontx.rst @@ -157,7 +157,7 @@ Maximum packet length ~~~~~~~~~~~~~~~~~~~~~ The OCTEON TX SoC family NICs support a maximum of a 32K jumbo frame. The value -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len`` +is fixed and cannot be changed. So, even when the ``rxmode.mtu`` member of ``struct rte_eth_conf`` is set to a value lower than 32k, frames up to 32k bytes can still reach the host interface. diff --git a/doc/guides/nics/thunderx.rst b/doc/guides/nics/thunderx.rst index 12d43ce93e28..98f23a2b2a3d 100644 --- a/doc/guides/nics/thunderx.rst +++ b/doc/guides/nics/thunderx.rst @@ -392,7 +392,7 @@ Maximum packet length ~~~~~~~~~~~~~~~~~~~~~ The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value -is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len`` +is fixed and cannot be changed. So, even when the ``rxmode.mtu`` member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames up to 9200 bytes can still reach the host interface. diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 9584d6bfd723..86da47d8f9c6 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -56,31 +56,6 @@ Deprecation Notices In 19.11 PMDs will still update the field even when the offload is not enabled. -* ethdev: ``uint32_t max_rx_pkt_len`` field of ``struct rte_eth_rxmode``, will be - replaced by a new ``uint32_t mtu`` field of ``struct rte_eth_conf`` in v21.11. - The new ``mtu`` field will be used to configure the initial device MTU via - ``rte_eth_dev_configure()`` API. - Later MTU can be changed by ``rte_eth_dev_set_mtu()`` API as done now. 
- The existing ``(struct rte_eth_dev)->data->mtu`` variable will be used to store - the configured ``mtu`` value, - and this new ``(struct rte_eth_dev)->data->dev_conf.mtu`` variable will - be used to store the user configuration request. - Unlike ``max_rx_pkt_len``, which was valid only when ``JUMBO_FRAME`` enabled, - ``mtu`` field will be always valid. - When ``mtu`` config is not provided by the application, default ``RTE_ETHER_MTU`` - value will be used. - ``(struct rte_eth_dev)->data->mtu`` should be updated after MTU set successfully, - either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``. - - An application may need to configure device for a specific Rx packet size, like for - cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size - can't be bigger than Rx buffer size. - To cover these cases an application needs to know the device packet overhead to be - able to calculate the ``mtu`` corresponding to a Rx buffer size, for this - ``(struct rte_eth_dev_info).max_rx_pktlen`` will be kept, - the device packet overhead can be calculated as: - ``(struct rte_eth_dev_info).max_rx_pktlen - (struct rte_eth_dev_info).max_mtu`` - * ethdev: ``rx_descriptor_done`` dev_ops and ``rte_eth_rx_descriptor_done`` will be removed in 21.11. Existing ``rte_eth_rx_descriptor_status`` and ``rte_eth_tx_descriptor_status`` diff --git a/doc/guides/sample_app_ug/flow_classify.rst b/doc/guides/sample_app_ug/flow_classify.rst index 01915971ae83..2cc36a688af3 100644 --- a/doc/guides/sample_app_ug/flow_classify.rst +++ b/doc/guides/sample_app_ug/flow_classify.rst @@ -325,13 +325,7 @@ Forwarding application is shown below: } The Ethernet ports are configured with default settings using the -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct. - -.. code-block:: c - - static const struct rte_eth_conf port_conf_default = { - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN } - }; +``rte_eth_dev_configure()`` function. 
For this example the ports are set up with 1 RX and 1 TX queue using the ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions. diff --git a/doc/guides/sample_app_ug/ioat.rst b/doc/guides/sample_app_ug/ioat.rst index 7eb557f91c7a..c5c06261e395 100644 --- a/doc/guides/sample_app_ug/ioat.rst +++ b/doc/guides/sample_app_ug/ioat.rst @@ -162,7 +162,6 @@ multiple CBDMA channels per port: static const struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN }, .rx_adv_conf = { .rss_conf = { diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst index e72c8492e972..2090b23fdd1c 100644 --- a/doc/guides/sample_app_ug/ip_reassembly.rst +++ b/doc/guides/sample_app_ug/ip_reassembly.rst @@ -175,7 +175,7 @@ each RX queue uses its own mempool. .. code-block:: c nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * RTE_LIBRTE_IP_FRAG_MAX_FRAGS; - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE; + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + BUF_SIZE - 1) / BUF_SIZE; nb_mbuf *= 2; /* ipv4 and ipv6 */ nb_mbuf += RTE_TEST_RX_DESC_DEFAULT + RTE_TEST_TX_DESC_DEFAULT; nb_mbuf = RTE_MAX(nb_mbuf, (uint32_t)NB_MBUF); diff --git a/doc/guides/sample_app_ug/skeleton.rst b/doc/guides/sample_app_ug/skeleton.rst index 263d8debc81b..a88cb8f14a4b 100644 --- a/doc/guides/sample_app_ug/skeleton.rst +++ b/doc/guides/sample_app_ug/skeleton.rst @@ -157,13 +157,7 @@ Forwarding application is shown below: } The Ethernet ports are configured with default settings using the -``rte_eth_dev_configure()`` function and the ``port_conf_default`` struct: - -.. code-block:: c - - static const struct rte_eth_conf port_conf_default = { - .rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN } - }; +``rte_eth_dev_configure()`` function. 
For this example the ports are set up with 1 RX and 1 TX queue using the ``rte_eth_rx_queue_setup()`` and ``rte_eth_tx_queue_setup()`` functions. diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c index 0ce35eb519e2..3f654c071566 100644 --- a/drivers/net/atlantic/atl_ethdev.c +++ b/drivers/net/atlantic/atl_ethdev.c @@ -1636,9 +1636,6 @@ atl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) return -EINVAL; - /* update max frame size */ - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - return 0; } diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c index 623fa5e5ff5b..2554f5fdf59a 100644 --- a/drivers/net/avp/avp_ethdev.c +++ b/drivers/net/avp/avp_ethdev.c @@ -1059,17 +1059,18 @@ static int avp_dev_enable_scattered(struct rte_eth_dev *eth_dev, struct avp_dev *avp) { - unsigned int max_rx_pkt_len; + unsigned int max_rx_pktlen; - max_rx_pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len; + max_rx_pktlen = eth_dev->data->mtu + RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN; - if ((max_rx_pkt_len > avp->guest_mbuf_size) || - (max_rx_pkt_len > avp->host_mbuf_size)) { + if ((max_rx_pktlen > avp->guest_mbuf_size) || + (max_rx_pktlen > avp->host_mbuf_size)) { /* * If the guest MTU is greater than either the host or guest * buffers then chained mbufs have to be enabled in the TX * direction. It is assumed that the application will not need - * to send packets larger than their max_rx_pkt_len (MRU). + * to send packets larger than their MTU. 
*/ return 1; } @@ -1124,7 +1125,7 @@ avp_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, PMD_DRV_LOG(DEBUG, "AVP max_rx_pkt_len=(%u,%u) mbuf_size=(%u,%u)\n", avp->max_rx_pkt_len, - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len, + eth_dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN, avp->host_mbuf_size, avp->guest_mbuf_size); @@ -1889,8 +1890,8 @@ avp_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) * function; send it truncated to avoid the performance * hit of having to manage returning the already * allocated buffer to the free list. This should not - * happen since the application should have set the - * max_rx_pkt_len based on its MTU and it should be + * happen since the application should not send + * packets larger than its MTU and it should be * policing its own packet sizes. */ txq->errors++; diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c index 9cb4818af11f..76aeec077f2b 100644 --- a/drivers/net/axgbe/axgbe_ethdev.c +++ b/drivers/net/axgbe/axgbe_ethdev.c @@ -350,7 +350,7 @@ axgbe_dev_start(struct rte_eth_dev *dev) struct axgbe_port *pdata = dev->data->dev_private; int ret; struct rte_eth_dev_data *dev_data = dev->data; - uint16_t max_pkt_len = dev_data->dev_conf.rxmode.max_rx_pkt_len; + uint16_t max_pkt_len; dev->dev_ops = &axgbe_eth_dev_ops; @@ -383,6 +383,8 @@ axgbe_dev_start(struct rte_eth_dev *dev) rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state); rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state); + + max_pkt_len = dev_data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) || max_pkt_len > pdata->rx_buf_size) dev_data->scattered_rx = 1; @@ -1490,7 +1492,7 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) dev->data->port_id); return -EBUSY; } - if (frame_size > AXGBE_ETH_MAX_LEN) { + if (mtu > RTE_ETHER_MTU) { dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; val = 1; @@ -1500,7
+1502,6 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) val = 0; } AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val); - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; return 0; } diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c index 463886f17a58..009a94e9a8fa 100644 --- a/drivers/net/bnx2x/bnx2x_ethdev.c +++ b/drivers/net/bnx2x/bnx2x_ethdev.c @@ -175,16 +175,12 @@ static int bnx2x_dev_configure(struct rte_eth_dev *dev) { struct bnx2x_softc *sc = dev->data->dev_private; - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; int mp_ncpus = sysconf(_SC_NPROCESSORS_CONF); PMD_INIT_FUNC_TRACE(sc); - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { - sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len; - dev->data->mtu = sc->mtu; - } + sc->mtu = dev->data->dev_conf.rxmode.mtu; if (dev->data->nb_tx_queues > dev->data->nb_rx_queues) { PMD_DRV_LOG(ERR, sc, "The number of TX queues is greater than number of RX queues"); diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index c9536f79267d..335505a106d5 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -1128,13 +1128,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev) rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH; eth_dev->data->dev_conf.rxmode.offloads = rx_offloads; - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { - eth_dev->data->mtu = - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE * - BNXT_NUM_VLANS; - bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu); - } + bnxt_mtu_set_op(eth_dev, eth_dev->data->mtu); + return 0; resource_error: @@ -1172,6 +1167,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev) */ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev) { + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU; uint16_t buf_size; int i; @@ -1186,7 +1182,7 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev) buf_size 
= (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM); - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buf_size) + if (eth_dev->data->mtu + overhead > buf_size) return 1; } return 0; @@ -2992,6 +2988,7 @@ bnxt_tx_burst_mode_get(struct rte_eth_dev *dev, __rte_unused uint16_t queue_id, int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) { + uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU; struct bnxt *bp = eth_dev->data->dev_private; uint32_t new_pkt_size; uint32_t rc = 0; @@ -3005,8 +3002,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) if (!eth_dev->data->nb_rx_queues) return rc; - new_pkt_size = new_mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + - VLAN_TAG_SIZE * BNXT_NUM_VLANS; + new_pkt_size = new_mtu + overhead; /* * Disallow any MTU change that would require scattered receive support @@ -3033,7 +3029,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) } /* Is there a change in mtu setting? */ - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len == new_pkt_size) + if (eth_dev->data->mtu == new_mtu) return rc; for (i = 0; i < bp->nr_vnics; i++) { @@ -3055,9 +3051,6 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) } } - if (!rc) - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_pkt_size; - PMD_DRV_LOG(INFO, "New MTU is %d\n", new_mtu); return rc; diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index b01ef003e65c..b2a1833e3f91 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -1728,8 +1728,8 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev, slave_eth_dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER; - slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = - bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len; + slave_eth_dev->data->dev_conf.rxmode.mtu = + bonded_eth_dev->data->dev_conf.rxmode.mtu; if 
(bonded_eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c index 7adab4605819..da6c5e8f242f 100644 --- a/drivers/net/cnxk/cnxk_ethdev.c +++ b/drivers/net/cnxk/cnxk_ethdev.c @@ -53,7 +53,7 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq) mbp_priv = rte_mempool_get_priv(rxq->qconf.mp); buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM; - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) { + if (eth_dev->data->mtu + (uint32_t)CNXK_NIX_L2_OVERHEAD > buffsz) { dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER; dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS; } @@ -64,18 +64,13 @@ nix_recalc_mtu(struct rte_eth_dev *eth_dev) { struct rte_eth_dev_data *data = eth_dev->data; struct cnxk_eth_rxq_sp *rxq; - uint16_t mtu; int rc; rxq = ((struct cnxk_eth_rxq_sp *)data->rx_queues[0]) - 1; /* Setup scatter mode if needed by jumbo */ nix_enable_mseg_on_jumbo(rxq); - /* Setup MTU based on max_rx_pkt_len */ - mtu = data->dev_conf.rxmode.max_rx_pkt_len - CNXK_NIX_L2_OVERHEAD + - CNXK_NIX_MAX_VTAG_ACT_SIZE; - - rc = cnxk_nix_mtu_set(eth_dev, mtu); + rc = cnxk_nix_mtu_set(eth_dev, data->mtu); if (rc) plt_err("Failed to set default MTU size, rc=%d", rc); diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index b6cc5286c6d0..695d0d6fd3e2 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -440,16 +440,10 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) goto exit; } - frame_size += RTE_ETHER_CRC_LEN; - - if (frame_size > RTE_ETHER_MAX_LEN) + if (mtu > RTE_ETHER_MTU) dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - - /* Update max_rx_pkt_len */ - data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - exit: return rc; } diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c index 177eca397600..8cf61f12a8d6 
100644 --- a/drivers/net/cxgbe/cxgbe_ethdev.c +++ b/drivers/net/cxgbe/cxgbe_ethdev.c @@ -310,11 +310,11 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) return err; /* Must accommodate at least RTE_ETHER_MIN_MTU */ - if (new_mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen) + if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen) return -EINVAL; /* set to jumbo mode if needed */ - if (new_mtu > CXGBE_ETH_MAX_LEN) + if (mtu > RTE_ETHER_MTU) eth_dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else @@ -323,9 +323,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1, -1, -1, true); - if (!err) - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = new_mtu; - return err; } @@ -623,7 +620,8 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, const struct rte_eth_rxconf *rx_conf __rte_unused, struct rte_mempool *mp) { - unsigned int pkt_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len; + unsigned int pkt_len = eth_dev->data->mtu + RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN; struct port_info *pi = eth_dev->data->dev_private; struct adapter *adapter = pi->adapter; struct rte_eth_dev_info dev_info; @@ -683,7 +681,7 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, rxq->fl.size = temp_nb_desc; /* Set to jumbo mode if necessary */ - if (pkt_len > CXGBE_ETH_MAX_LEN) + if (eth_dev->data->mtu > RTE_ETHER_MTU) eth_dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c index 6dd1bf1f836e..91d6bb9bbcb0 100644 --- a/drivers/net/cxgbe/cxgbe_main.c +++ b/drivers/net/cxgbe/cxgbe_main.c @@ -1661,8 +1661,7 @@ int cxgbe_link_start(struct port_info *pi) unsigned int mtu; int ret; - mtu = pi->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len - - (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN); + mtu = pi->eth_dev->data->mtu; conf_offloads = 
pi->eth_dev->data->dev_conf.rxmode.offloads; diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c index e5f7721dc4b3..830f5192474d 100644 --- a/drivers/net/cxgbe/sge.c +++ b/drivers/net/cxgbe/sge.c @@ -1113,7 +1113,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf, u32 wr_mid; u64 cntrl, *end; bool v6; - u32 max_pkt_len = txq->data->dev_conf.rxmode.max_rx_pkt_len; + u32 max_pkt_len; /* Reject xmit if queue is stopped */ if (unlikely(txq->flags & EQ_STOPPED)) @@ -1129,6 +1129,7 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf, return 0; } + max_pkt_len = txq->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; if ((!(m->ol_flags & PKT_TX_TCP_SEG)) && (unlikely(m->pkt_len > max_pkt_len))) goto out_free; diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 27d670f843d2..56703e3a39e8 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -187,15 +187,13 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EINVAL; } - if (frame_size > DPAA_ETH_MAX_LEN) + if (mtu > RTE_ETHER_MTU) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - fman_if_set_maxfrm(dev->process_private, frame_size); return 0; @@ -213,6 +211,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) struct fman_if *fif = dev->process_private; struct __fman_if *__fif; struct rte_intr_handle *intr_handle; + uint32_t max_rx_pktlen; int speed, duplex; int ret; @@ -238,27 +237,17 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev) tx_offloads, dev_tx_offloads_nodis); } - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { - uint32_t max_len; - - DPAA_PMD_DEBUG("enabling jumbo"); - - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= - DPAA_MAX_RX_PKT_LEN) - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; - else { - DPAA_PMD_INFO("enabling jumbo override conf 
max len=%d " - "supported is %d", - dev->data->dev_conf.rxmode.max_rx_pkt_len, - DPAA_MAX_RX_PKT_LEN); - max_len = DPAA_MAX_RX_PKT_LEN; - } - - fman_if_set_maxfrm(dev->process_private, max_len); - dev->data->mtu = max_len - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE; + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE; + if (max_rx_pktlen > DPAA_MAX_RX_PKT_LEN) { + DPAA_PMD_INFO("enabling jumbo override conf max len=%d " + "supported is %d", + max_rx_pktlen, DPAA_MAX_RX_PKT_LEN); + max_rx_pktlen = DPAA_MAX_RX_PKT_LEN; } + fman_if_set_maxfrm(dev->process_private, max_rx_pktlen); + if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) { DPAA_PMD_DEBUG("enabling scatter mode"); fman_if_set_sg(dev->process_private, 1); @@ -936,6 +925,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, u32 flags = 0; int ret; u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM; + uint32_t max_rx_pktlen; PMD_INIT_FUNC_TRACE(); @@ -977,17 +967,17 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return -EINVAL; } + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + + VLAN_TAG_SIZE; /* Max packet can fit in single buffer */ - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) { + if (max_rx_pktlen <= buffsz) { ; } else if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) { - if (dev->data->dev_conf.rxmode.max_rx_pkt_len > - buffsz * DPAA_SGT_MAX_ENTRIES) { - DPAA_PMD_ERR("max RxPkt size %d too big to fit " + if (max_rx_pktlen > buffsz * DPAA_SGT_MAX_ENTRIES) { + DPAA_PMD_ERR("Maximum Rx packet size %d too big to fit " "MaxSGlist %d", - dev->data->dev_conf.rxmode.max_rx_pkt_len, - buffsz * DPAA_SGT_MAX_ENTRIES); + max_rx_pktlen, buffsz * DPAA_SGT_MAX_ENTRIES); rte_errno = EOVERFLOW; return -rte_errno; } @@ -995,8 +985,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, DPAA_PMD_WARN("The requested maximum Rx 
packet size (%u) is" " larger than a single mbuf (%u) and scattered" " mode has not been requested", - dev->data->dev_conf.rxmode.max_rx_pkt_len, - buffsz - RTE_PKTMBUF_HEADROOM); + max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM); } dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp); @@ -1034,8 +1023,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, dpaa_intf->valid = 1; DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name, - fman_if_get_sg_enable(fif), - dev->data->dev_conf.rxmode.max_rx_pkt_len); + fman_if_get_sg_enable(fif), max_rx_pktlen); /* checking if push mode only, no error check for now */ if (!rxq->is_static && dpaa_push_mode_max_queue > dpaa_push_queue_idx) { diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 8b803b8542dc..6213bcbf3a43 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -540,6 +540,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev) int tx_l3_csum_offload = false; int tx_l4_csum_offload = false; int ret, tc_index; + uint32_t max_rx_pktlen; PMD_INIT_FUNC_TRACE(); @@ -559,23 +560,17 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev) tx_offloads, dev_tx_offloads_nodis); } - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { - if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) { - ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW, - priv->token, eth_conf->rxmode.max_rx_pkt_len - - RTE_ETHER_CRC_LEN); - if (ret) { - DPAA2_PMD_ERR( - "Unable to set mtu. check config"); - return ret; - } - dev->data->mtu = - dev->data->dev_conf.rxmode.max_rx_pkt_len - - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - - VLAN_TAG_SIZE; - } else { - return -1; + max_rx_pktlen = eth_conf->rxmode.mtu + RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE; + if (max_rx_pktlen <= DPAA2_MAX_RX_PKT_LEN) { + ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW, + priv->token, max_rx_pktlen - RTE_ETHER_CRC_LEN); + if (ret) { + DPAA2_PMD_ERR("Unable to set mtu. 
check config"); + return ret; } + } else { + return -1; } if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) { @@ -1475,15 +1470,13 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN) return -EINVAL; - if (frame_size > DPAA2_ETH_MAX_LEN) + if (mtu > RTE_ETHER_MTU) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - /* Set the Max Rx frame length as 'mtu' + * Maximum Ethernet header length */ diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index a0ca371b0275..6f418a36aa04 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -1818,7 +1818,7 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) rctl = E1000_READ_REG(hw, E1000_RCTL); /* switch to jumbo mode if needed */ - if (frame_size > E1000_ETH_MAX_LEN) { + if (mtu > RTE_ETHER_MTU) { dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; rctl |= E1000_RCTL_LPE; @@ -1829,8 +1829,6 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) } E1000_WRITE_REG(hw, E1000_RCTL, rctl); - /* update max frame size */ - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; return 0; } diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index 10ee0f33415a..35b517891d67 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -2686,9 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev) E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg); /* Update maximum packet length */ - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) - E1000_WRITE_REG(hw, E1000_RLPML, - dev->data->dev_conf.rxmode.max_rx_pkt_len); + E1000_WRITE_REG(hw, E1000_RLPML, dev->data->mtu + E1000_ETH_OVERHEAD); } static void @@ -2704,10 +2702,8 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev) 
E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg); /* Update maximum packet length */ - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) - E1000_WRITE_REG(hw, E1000_RLPML, - dev->data->dev_conf.rxmode.max_rx_pkt_len + - VLAN_TAG_SIZE); + E1000_WRITE_REG(hw, E1000_RLPML, + dev->data->mtu + E1000_ETH_OVERHEAD + VLAN_TAG_SIZE); } static int @@ -4405,7 +4401,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) rctl = E1000_READ_REG(hw, E1000_RCTL); /* switch to jumbo mode if needed */ - if (frame_size > E1000_ETH_MAX_LEN) { + if (mtu > RTE_ETHER_MTU) { dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; rctl |= E1000_RCTL_LPE; @@ -4416,11 +4412,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) } E1000_WRITE_REG(hw, E1000_RCTL, rctl); - /* update max frame size */ - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - - E1000_WRITE_REG(hw, E1000_RLPML, - dev->data->dev_conf.rxmode.max_rx_pkt_len); + E1000_WRITE_REG(hw, E1000_RLPML, frame_size); return 0; } diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c index 278d5d2712af..de12997b4bdd 100644 --- a/drivers/net/e1000/igb_rxtx.c +++ b/drivers/net/e1000/igb_rxtx.c @@ -2324,6 +2324,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev) uint32_t srrctl; uint16_t buf_size; uint16_t rctl_bsize; + uint32_t max_len; uint16_t i; int ret; @@ -2342,9 +2343,8 @@ eth_igb_rx_init(struct rte_eth_dev *dev) /* * Configure support of jumbo frames, if any. 
*/ + max_len = dev->data->mtu + E1000_ETH_OVERHEAD; if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { - uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; - rctl |= E1000_RCTL_LPE; /* @@ -2422,8 +2422,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev) E1000_SRRCTL_BSIZEPKT_SHIFT); /* It adds dual VLAN length for supporting dual VLAN */ - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + - 2 * VLAN_TAG_SIZE) > buf_size){ + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){ if (!dev->data->scattered_rx) PMD_INIT_LOG(DEBUG, "forcing scatter mode"); @@ -2647,15 +2646,15 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev) uint32_t srrctl; uint16_t buf_size; uint16_t rctl_bsize; + uint32_t max_len; uint16_t i; int ret; hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); /* setup MTU */ - e1000_rlpml_set_vf(hw, - (uint16_t)(dev->data->dev_conf.rxmode.max_rx_pkt_len + - VLAN_TAG_SIZE)); + max_len = dev->data->mtu + E1000_ETH_OVERHEAD; + e1000_rlpml_set_vf(hw, (uint16_t)(max_len + VLAN_TAG_SIZE)); /* Configure and enable each RX queue. 
*/ rctl_bsize = 0; @@ -2712,8 +2711,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev) E1000_SRRCTL_BSIZEPKT_SHIFT); /* It adds dual VLAN length for supporting dual VLAN */ - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + - 2 * VLAN_TAG_SIZE) > buf_size){ + if ((max_len + 2 * VLAN_TAG_SIZE) > buf_size){ if (!dev->data->scattered_rx) PMD_INIT_LOG(DEBUG, "forcing scatter mode"); diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index dfe68279fa7b..e9b718786a39 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -850,26 +850,14 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, return rc; } -static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter) -{ - uint32_t max_frame_len = adapter->max_mtu; - - if (adapter->edev_data->dev_conf.rxmode.offloads & - DEV_RX_OFFLOAD_JUMBO_FRAME) - max_frame_len = - adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len; - - return max_frame_len; -} - static int ena_check_valid_conf(struct ena_adapter *adapter) { - uint32_t max_frame_len = ena_get_mtu_conf(adapter); + uint32_t mtu = adapter->edev_data->mtu; - if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) { + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) { PMD_INIT_LOG(ERR, "Unsupported MTU of %d. " "max mtu: %d, min mtu: %d", - max_frame_len, adapter->max_mtu, ENA_MIN_MTU); + mtu, adapter->max_mtu, ENA_MIN_MTU); return ENA_COM_UNSUPPORTED; } @@ -1042,11 +1030,11 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) ena_dev = &adapter->ena_dev; ena_assert_msg(ena_dev != NULL, "Uninitialized device\n"); - if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) { + if (mtu > adapter->max_mtu || mtu < ENA_MIN_MTU) { PMD_DRV_LOG(ERR, "Invalid MTU setting. 
new_mtu: %d " "max mtu: %d min mtu: %d\n", - mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU); + mtu, adapter->max_mtu, ENA_MIN_MTU); return -EINVAL; } @@ -2067,7 +2055,10 @@ static int ena_infos_get(struct rte_eth_dev *dev, ETH_RSS_UDP; dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN; - dev_info->max_rx_pktlen = adapter->max_mtu; + dev_info->max_rx_pktlen = adapter->max_mtu + RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN; + dev_info->min_mtu = ENA_MIN_MTU; + dev_info->max_mtu = adapter->max_mtu; dev_info->max_mac_addrs = 1; dev_info->max_rx_queues = adapter->max_num_io_queues; diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c index b496cd470045..cdb9783b5372 100644 --- a/drivers/net/enetc/enetc_ethdev.c +++ b/drivers/net/enetc/enetc_ethdev.c @@ -677,7 +677,7 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EINVAL; } - if (frame_size > ENETC_ETH_MAX_LEN) + if (mtu > RTE_ETHER_MTU) dev->data->dev_conf.rxmode.offloads &= DEV_RX_OFFLOAD_JUMBO_FRAME; else @@ -687,8 +687,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE); enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE); - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - /*setting the MTU*/ enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(frame_size) | ENETC_SET_TX_MTU(ENETC_MAC_MAXFRM_SIZE)); @@ -705,23 +703,15 @@ enetc_dev_configure(struct rte_eth_dev *dev) struct rte_eth_conf *eth_conf = &dev->data->dev_conf; uint64_t rx_offloads = eth_conf->rxmode.offloads; uint32_t checksum = L3_CKSUM | L4_CKSUM; + uint32_t max_len; PMD_INIT_FUNC_TRACE(); - if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { - uint32_t max_len; - - max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; - - enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, - ENETC_SET_MAXFRM(max_len)); - enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), - ENETC_MAC_MAXFRM_SIZE); - enetc_port_wr(enetc_hw, ENETC_PTXMBAR, - 2 * 
ENETC_MAC_MAXFRM_SIZE); - dev->data->mtu = RTE_ETHER_MAX_LEN - RTE_ETHER_HDR_LEN - - RTE_ETHER_CRC_LEN; - } + max_len = dev->data->dev_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN; + enetc_port_wr(enetc_hw, ENETC_PM0_MAXFRM, ENETC_SET_MAXFRM(max_len)); + enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE); + enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE); if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) { int config; diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c index 8d5797523b8f..6a81ceb62ba7 100644 --- a/drivers/net/enic/enic_ethdev.c +++ b/drivers/net/enic/enic_ethdev.c @@ -455,7 +455,7 @@ static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev, * max mtu regardless of the current mtu (vNIC's mtu). vNIC mtu is * a hint to the driver to size receive buffers accordingly so that * larger-than-vnic-mtu packets get truncated.. For DPDK, we let - * the user decide the buffer size via rxmode.max_rx_pkt_len, basically + * the user decide the buffer size via rxmode.mtu, basically * ignoring vNIC mtu. */ device_info->max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->max_mtu); diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c index 2affd380c6a4..dfc7f5d1f94f 100644 --- a/drivers/net/enic/enic_main.c +++ b/drivers/net/enic/enic_main.c @@ -282,7 +282,7 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq) struct rq_enet_desc *rqd = rq->ring.descs; unsigned i; dma_addr_t dma_addr; - uint32_t max_rx_pkt_len; + uint32_t max_rx_pktlen; uint16_t rq_buf_len; if (!rq->in_use) @@ -293,16 +293,16 @@ enic_alloc_rx_queue_mbufs(struct enic *enic, struct vnic_rq *rq) /* * If *not* using scatter and the mbuf size is greater than the - * requested max packet size (max_rx_pkt_len), then reduce the - * posted buffer size to max_rx_pkt_len. 
HW still receives packets - * larger than max_rx_pkt_len, but they will be truncated, which we + * requested max packet size (mtu + eth overhead), then reduce the + * posted buffer size to max packet size. HW still receives packets + * larger than max packet size, but they will be truncated, which we * drop in the rx handler. Not ideal, but better than returning * large packets when the user is not expecting them. */ - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len; + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu); rq_buf_len = rte_pktmbuf_data_room_size(rq->mp) - RTE_PKTMBUF_HEADROOM; - if (max_rx_pkt_len < rq_buf_len && !rq->data_queue_enable) - rq_buf_len = max_rx_pkt_len; + if (max_rx_pktlen < rq_buf_len && !rq->data_queue_enable) + rq_buf_len = max_rx_pktlen; for (i = 0; i < rq->ring.desc_count; i++, rqd++) { mb = rte_mbuf_raw_alloc(rq->mp); if (mb == NULL) { @@ -818,7 +818,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx, unsigned int mbuf_size, mbufs_per_pkt; unsigned int nb_sop_desc, nb_data_desc; uint16_t min_sop, max_sop, min_data, max_data; - uint32_t max_rx_pkt_len; + uint32_t max_rx_pktlen; /* * Representor uses a reserved PF queue. Translate representor @@ -854,23 +854,23 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx, mbuf_size = (uint16_t)(rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM); - /* max_rx_pkt_len includes the ethernet header and CRC. */ - max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len; + /* max_rx_pktlen includes the ethernet header and CRC. 
*/ + max_rx_pktlen = enic_mtu_to_max_rx_pktlen(enic->rte_dev->data->mtu); if (enic->rte_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) { dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx); /* ceil((max pkt len)/mbuf_size) */ - mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size; + mbufs_per_pkt = (max_rx_pktlen + mbuf_size - 1) / mbuf_size; } else { dev_info(enic, "Scatter rx mode disabled\n"); mbufs_per_pkt = 1; - if (max_rx_pkt_len > mbuf_size) { + if (max_rx_pktlen > mbuf_size) { dev_warning(enic, "The maximum Rx packet size (%u) is" " larger than the mbuf size (%u), and" " scatter is disabled. Larger packets will" " be truncated.\n", - max_rx_pkt_len, mbuf_size); + max_rx_pktlen, mbuf_size); } } @@ -879,16 +879,15 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx, rq_sop->data_queue_enable = 1; rq_data->in_use = 1; /* - * HW does not directly support rxmode.max_rx_pkt_len. HW always + * HW does not directly support MTU. HW always * receives packet sizes up to the "max" MTU. * If not using scatter, we can achieve the effect of dropping * larger packets by reducing the size of posted buffers. * See enic_alloc_rx_queue_mbufs(). 
*/ - if (max_rx_pkt_len < - enic_mtu_to_max_rx_pktlen(enic->max_mtu)) { - dev_warning(enic, "rxmode.max_rx_pkt_len is ignored" - " when scatter rx mode is in use.\n"); + if (enic->rte_dev->data->mtu < enic->max_mtu) { + dev_warning(enic, + "mtu is ignored when scatter rx mode is in use.\n"); } } else { dev_info(enic, "Rq %u Scatter rx mode not being used\n", @@ -931,7 +930,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx, if (mbufs_per_pkt > 1) { dev_info(enic, "For max packet size %u and mbuf size %u valid" " rx descriptor range is %u to %u\n", - max_rx_pkt_len, mbuf_size, min_sop + min_data, + max_rx_pktlen, mbuf_size, min_sop + min_data, max_sop + max_data); } dev_info(enic, "Using %d rx descriptors (sop %d, data %d)\n", @@ -1634,11 +1633,6 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu) "MTU (%u) is greater than value configured in NIC (%u)\n", new_mtu, config_mtu); - /* Update the MTU and maximum packet length */ - eth_dev->data->mtu = new_mtu; - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = - enic_mtu_to_max_rx_pktlen(new_mtu); - /* * If the device has not started (enic_enable), nothing to do. 
* Later, enic_enable() will set up RQs reflecting the new maximum diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c index 3236290e4021..5e4b361ca6c0 100644 --- a/drivers/net/fm10k/fm10k_ethdev.c +++ b/drivers/net/fm10k/fm10k_ethdev.c @@ -757,7 +757,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev) FM10K_SRRCTL_LOOPBACK_SUPPRESS); /* It adds dual VLAN length for supporting dual VLAN */ - if ((dev->data->dev_conf.rxmode.max_rx_pkt_len + + if ((dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 2 * FM10K_VLAN_TAG_SIZE) > buf_size || rxq->offloads & DEV_RX_OFFLOAD_SCATTER) { uint32_t reg; diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c index 946465779f2e..c737ef8d06d8 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.c +++ b/drivers/net/hinic/hinic_pmd_ethdev.c @@ -324,19 +324,19 @@ static int hinic_dev_configure(struct rte_eth_dev *dev) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH; /* mtu size is 256~9600 */ - if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE || - dev->data->dev_conf.rxmode.max_rx_pkt_len > - HINIC_MAX_JUMBO_FRAME_SIZE) { + if (HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) < + HINIC_MIN_FRAME_SIZE || + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) > + HINIC_MAX_JUMBO_FRAME_SIZE) { PMD_DRV_LOG(ERR, - "Max rx pkt len out of range, get max_rx_pkt_len:%d, " + "Packet length out of range, get packet length:%d, " "expect between %d and %d", - dev->data->dev_conf.rxmode.max_rx_pkt_len, + HINIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu), HINIC_MIN_FRAME_SIZE, HINIC_MAX_JUMBO_FRAME_SIZE); return -EINVAL; } - nic_dev->mtu_size = - HINIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.max_rx_pkt_len); + nic_dev->mtu_size = dev->data->dev_conf.rxmode.mtu; /* rss template */ err = hinic_config_mq_mode(dev, TRUE); @@ -1539,7 +1539,6 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic_dev_set_mtu(struct rte_eth_dev 
*dev, uint16_t mtu) { struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - uint32_t frame_size; int ret = 0; PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d", @@ -1557,16 +1556,13 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) return ret; } - /* update max frame size */ - frame_size = HINIC_MTU_TO_PKTLEN(mtu); - if (frame_size > HINIC_ETH_MAX_LEN) + if (mtu > RTE_ETHER_MTU) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; nic_dev->mtu_size = mtu; return ret; diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index e51512560e15..8bccdeddb2f7 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -2379,20 +2379,11 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf) { struct hns3_adapter *hns = dev->data->dev_private; struct hns3_hw *hw = &hns->hw; - uint32_t max_rx_pkt_len; - uint16_t mtu; - int ret; - - if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) - return 0; + uint32_t max_rx_pktlen; - /* - * If jumbo frames are enabled, MTU needs to be refreshed - * according to the maximum RX packet length. 
- */ - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len; - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN || - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) { + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD; + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN || + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) { hns3_err(hw, "maximum Rx packet length must be greater than %u " "and no more than %u when jumbo frame enabled.", (uint16_t)HNS3_DEFAULT_FRAME_LEN, @@ -2400,13 +2391,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf) return -EINVAL; } - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len); - ret = hns3_dev_mtu_set(dev, mtu); - if (ret) - return ret; - dev->data->mtu = mtu; - - return 0; + return hns3_dev_mtu_set(dev, conf->rxmode.mtu); } static int @@ -2622,7 +2607,7 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) } rte_spinlock_lock(&hw->lock); - is_jumbo_frame = frame_size > HNS3_DEFAULT_FRAME_LEN ? true : false; + is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false; frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN); /* @@ -2643,7 +2628,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) else dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; rte_spinlock_unlock(&hw->lock); return 0; diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index e582503f529b..ca839fa55fa0 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -784,8 +784,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev) uint16_t nb_rx_q = dev->data->nb_rx_queues; uint16_t nb_tx_q = dev->data->nb_tx_queues; struct rte_eth_rss_conf rss_conf; - uint32_t max_rx_pkt_len; - uint16_t mtu; + uint32_t max_rx_pktlen; bool gro_en; int ret; @@ -825,29 +824,21 @@ hns3vf_dev_configure(struct rte_eth_dev *dev) goto cfg_err; } - /* - * If jumbo frames are enabled, MTU needs to be refreshed - * according to the maximum RX packet length. 
- */ - if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { - max_rx_pkt_len = conf->rxmode.max_rx_pkt_len; - if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN || - max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) { - hns3_err(hw, "maximum Rx packet length must be greater " - "than %u and less than %u when jumbo frame enabled.", - (uint16_t)HNS3_DEFAULT_FRAME_LEN, - (uint16_t)HNS3_MAX_FRAME_LEN); - ret = -EINVAL; - goto cfg_err; - } - - mtu = (uint16_t)HNS3_PKTLEN_TO_MTU(max_rx_pkt_len); - ret = hns3vf_dev_mtu_set(dev, mtu); - if (ret) - goto cfg_err; - dev->data->mtu = mtu; + max_rx_pktlen = conf->rxmode.mtu + HNS3_ETH_OVERHEAD; + if (max_rx_pktlen > HNS3_MAX_FRAME_LEN || + max_rx_pktlen <= HNS3_DEFAULT_FRAME_LEN) { + hns3_err(hw, "maximum Rx packet length must be greater " + "than %u and less than %u when jumbo frame enabled.", + (uint16_t)HNS3_DEFAULT_FRAME_LEN, + (uint16_t)HNS3_MAX_FRAME_LEN); + ret = -EINVAL; + goto cfg_err; } + ret = hns3vf_dev_mtu_set(dev, conf->rxmode.mtu); + if (ret) + goto cfg_err; + ret = hns3vf_dev_configure_vlan(dev); if (ret) goto cfg_err; @@ -935,7 +926,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) else dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; rte_spinlock_unlock(&hw->lock); return 0; diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c index cb9eccf9faae..6b81688a7225 100644 --- a/drivers/net/hns3/hns3_rxtx.c +++ b/drivers/net/hns3/hns3_rxtx.c @@ -1734,18 +1734,18 @@ hns3_rxq_conf_runtime_check(struct hns3_hw *hw, uint16_t buf_size, uint16_t nb_desc) { struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id]; - struct rte_eth_rxmode *rxmode = &hw->data->dev_conf.rxmode; eth_rx_burst_t pkt_burst = dev->rx_pkt_burst; + uint32_t frame_size = dev->data->mtu + HNS3_ETH_OVERHEAD; uint16_t min_vec_bds; /* * HNS3 hardware network engine set scattered as default. 
If the driver * is not work in scattered mode and the pkts greater than buf_size - * but smaller than max_rx_pkt_len will be distributed to multiple BDs. + * but smaller than frame size will be distributed to multiple BDs. * Driver cannot handle this situation. */ - if (!hw->data->scattered_rx && rxmode->max_rx_pkt_len > buf_size) { - hns3_err(hw, "max_rx_pkt_len is not allowed to be set greater " + if (!hw->data->scattered_rx && frame_size > buf_size) { + hns3_err(hw, "frame size is not allowed to be set greater " "than rx_buf_len if scattered is off."); return -EINVAL; } @@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev) } if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER || - dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len) + dev->data->mtu + HNS3_ETH_OVERHEAD > hw->rx_buf_len) dev->data->scattered_rx = true; } diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index 7b230e2ed17a..1161f301b9ae 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -11772,14 +11772,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EBUSY; } - if (frame_size > I40E_ETH_MAX_LEN) - dev_data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; + if (mtu > RTE_ETHER_MTU) + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else - dev_data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; - - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size; + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; return ret; } diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c index 0cfe13b7b227..086a167ca672 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -1927,8 +1927,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq) rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << I40E_RXQ_CTX_DBUFF_SHIFT)); len = rxq->rx_buf_len * I40E_MAX_CHAINED_RX_BUFFERS; 
- rxq->max_pkt_len = RTE_MIN(len, - dev_data->dev_conf.rxmode.max_rx_pkt_len); + rxq->max_pkt_len = RTE_MIN(len, dev_data->mtu + I40E_ETH_OVERHEAD); /** * Check if the jumbo frame and maximum packet length are set correctly @@ -2173,7 +2172,7 @@ i40evf_dev_start(struct rte_eth_dev *dev) hw->adapter_stopped = 0; - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; + vf->max_pkt_len = dev->data->mtu + I40E_ETH_OVERHEAD; vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues); @@ -2885,13 +2884,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EBUSY; } - if (frame_size > I40E_ETH_MAX_LEN) - dev_data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; + if (mtu > RTE_ETHER_MTU) + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else - dev_data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size; + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; return ret; } diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 8d65f287f455..aa43796ef1af 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -2904,8 +2904,8 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq) } rxq->max_pkt_len = - RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len * - rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len); + RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len, + data->mtu + I40E_ETH_OVERHEAD); if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN || rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) { diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 41382c6d669b..13c2329d85a7 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -563,12 +563,13 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq) struct iavf_hw *hw = 
IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private); struct rte_eth_dev_data *dev_data = dev->data; uint16_t buf_size, max_pkt_len, len; + uint32_t frame_size = dev->data->mtu + IAVF_ETH_OVERHEAD; buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM; /* Calculate the maximum packet length allowed */ len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS; - max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len); + max_pkt_len = RTE_MIN(len, frame_size); /* Check if the jumbo frame and maximum packet length are set * correctly. @@ -815,7 +816,7 @@ iavf_dev_start(struct rte_eth_dev *dev) adapter->stopped = 0; - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; + vf->max_pkt_len = dev->data->mtu + IAVF_ETH_OVERHEAD; vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues); num_queue_pairs = vf->num_queue_pairs; @@ -1445,15 +1446,13 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EBUSY; } - if (frame_size > IAVF_ETH_MAX_LEN) + if (mtu > RTE_ETHER_MTU) dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - return ret; } diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 69fe6e63d1d3..34b6c9b2a7ed 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -59,9 +59,8 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq) buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM; rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); - max_pkt_len = RTE_MIN((uint32_t) - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, - dev->data->dev_conf.rxmode.max_rx_pkt_len); + max_pkt_len = RTE_MIN(ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, + dev->data->mtu + ICE_ETH_OVERHEAD); /* Check if the jumbo frame and maximum packet length are set * correctly. 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 63f735d1ff72..bdda6fee3f8e 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3426,8 +3426,8 @@ ice_dev_start(struct rte_eth_dev *dev) pf->adapter_stopped = false; /* Set the max frame size to default value*/ - max_frame_size = pf->dev_data->dev_conf.rxmode.max_rx_pkt_len ? - pf->dev_data->dev_conf.rxmode.max_rx_pkt_len : + max_frame_size = pf->dev_data->mtu ? + pf->dev_data->mtu + ICE_ETH_OVERHEAD : ICE_FRAME_SIZE_MAX; /* Set the max frame size to HW*/ @@ -3806,14 +3806,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EBUSY; } - if (frame_size > ICE_ETH_MAX_LEN) - dev_data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; + if (mtu > RTE_ETHER_MTU) + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else - dev_data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; - - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size; + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; return 0; } diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 3f6e7359844b..a3de4172e2bc 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -262,15 +262,16 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode; uint32_t rxdid = ICE_RXDID_COMMS_OVS; uint32_t regval; + uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD; /* Set buffer size as the head split is disabled. 
*/ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM); rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); - rxq->max_pkt_len = RTE_MIN((uint32_t) - ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, - dev_data->dev_conf.rxmode.max_rx_pkt_len); + rxq->max_pkt_len = + RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, + frame_size); if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN || @@ -361,11 +362,8 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) return -EINVAL; } - buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - - RTE_PKTMBUF_HEADROOM); - /* Check if scattered RX needs to be used. */ - if (rxq->max_pkt_len > buf_size) + if (frame_size > buf_size) dev_data->scattered_rx = 1; rxq->qrx_tail = hw->hw_addr + QRX_TAIL(rxq->reg_idx); diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index 224a0954836b..b26723064b07 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -20,13 +20,6 @@ #define IGC_INTEL_VENDOR_ID 0x8086 -/* - * The overhead from MTU to max frame size. - * Considering VLAN so tag needs to be counted. 
- */ -#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \ - RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE) - #define IGC_FC_PAUSE_TIME 0x0680 #define IGC_LINK_UPDATE_CHECK_TIMEOUT 90 /* 9s */ #define IGC_LINK_UPDATE_CHECK_INTERVAL 100 /* ms */ @@ -1602,21 +1595,15 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) /* switch to jumbo mode if needed */ if (mtu > RTE_ETHER_MTU) { - dev->data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; rctl |= IGC_RCTL_LPE; } else { - dev->data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; rctl &= ~IGC_RCTL_LPE; } IGC_WRITE_REG(hw, IGC_RCTL, rctl); - /* update max frame size */ - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - - IGC_WRITE_REG(hw, IGC_RLPML, - dev->data->dev_conf.rxmode.max_rx_pkt_len); + IGC_WRITE_REG(hw, IGC_RLPML, frame_size); return 0; } @@ -2486,6 +2473,7 @@ static int igc_vlan_hw_extend_disable(struct rte_eth_dev *dev) { struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD; uint32_t ctrl_ext; ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT); @@ -2494,23 +2482,14 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev) if ((ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) == 0) return 0; - if ((dev->data->dev_conf.rxmode.offloads & - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) - goto write_ext_vlan; - /* Update maximum packet length */ - if (dev->data->dev_conf.rxmode.max_rx_pkt_len < - RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) { + if (frame_size < RTE_ETHER_MIN_MTU + VLAN_TAG_SIZE) { PMD_DRV_LOG(ERR, "Maximum packet length %u error, min is %u", - dev->data->dev_conf.rxmode.max_rx_pkt_len, - VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU); + frame_size, VLAN_TAG_SIZE + RTE_ETHER_MIN_MTU); return -EINVAL; } - dev->data->dev_conf.rxmode.max_rx_pkt_len -= VLAN_TAG_SIZE; - IGC_WRITE_REG(hw, IGC_RLPML, - dev->data->dev_conf.rxmode.max_rx_pkt_len); + 
IGC_WRITE_REG(hw, IGC_RLPML, frame_size - VLAN_TAG_SIZE); -write_ext_vlan: IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext & ~IGC_CTRL_EXT_EXT_VLAN); return 0; } @@ -2519,6 +2498,7 @@ static int igc_vlan_hw_extend_enable(struct rte_eth_dev *dev) { struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); + uint32_t frame_size = dev->data->mtu + IGC_ETH_OVERHEAD; uint32_t ctrl_ext; ctrl_ext = IGC_READ_REG(hw, IGC_CTRL_EXT); @@ -2527,23 +2507,14 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev) if (ctrl_ext & IGC_CTRL_EXT_EXT_VLAN) return 0; - if ((dev->data->dev_conf.rxmode.offloads & - DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) - goto write_ext_vlan; - /* Update maximum packet length */ - if (dev->data->dev_conf.rxmode.max_rx_pkt_len > - MAX_RX_JUMBO_FRAME_SIZE - VLAN_TAG_SIZE) { + if (frame_size > MAX_RX_JUMBO_FRAME_SIZE) { PMD_DRV_LOG(ERR, "Maximum packet length %u error, max is %u", - dev->data->dev_conf.rxmode.max_rx_pkt_len + - VLAN_TAG_SIZE, MAX_RX_JUMBO_FRAME_SIZE); + frame_size, MAX_RX_JUMBO_FRAME_SIZE); return -EINVAL; } - dev->data->dev_conf.rxmode.max_rx_pkt_len += VLAN_TAG_SIZE; - IGC_WRITE_REG(hw, IGC_RLPML, - dev->data->dev_conf.rxmode.max_rx_pkt_len); + IGC_WRITE_REG(hw, IGC_RLPML, frame_size); -write_ext_vlan: IGC_WRITE_REG(hw, IGC_CTRL_EXT, ctrl_ext | IGC_CTRL_EXT_EXT_VLAN); return 0; } diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h index 7b6c209df3b6..b3473b5b1646 100644 --- a/drivers/net/igc/igc_ethdev.h +++ b/drivers/net/igc/igc_ethdev.h @@ -35,6 +35,13 @@ extern "C" { #define IGC_HKEY_REG_SIZE IGC_DEFAULT_REG_SIZE #define IGC_HKEY_SIZE (IGC_HKEY_REG_SIZE * IGC_HKEY_MAX_INDEX) +/* + * The overhead from MTU to max frame size. + * Considering VLAN so tag needs to be counted. + */ +#define IGC_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + \ + RTE_ETHER_CRC_LEN + VLAN_TAG_SIZE * 2) + /* * TDBA/RDBA should be aligned on 16 byte boundary. But TDLEN/RDLEN should be * multiple of 128 bytes. So we align TDBA/RDBA on 128 byte boundary. 
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c index b5489eedd220..d80808a002f5 100644 --- a/drivers/net/igc/igc_txrx.c +++ b/drivers/net/igc/igc_txrx.c @@ -1081,7 +1081,7 @@ igc_rx_init(struct rte_eth_dev *dev) struct igc_rx_queue *rxq; struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev); uint64_t offloads = dev->data->dev_conf.rxmode.offloads; - uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; + uint32_t max_rx_pktlen; uint32_t rctl; uint32_t rxcsum; uint16_t buf_size; @@ -1099,17 +1099,17 @@ igc_rx_init(struct rte_eth_dev *dev) IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN); /* Configure support of jumbo frames, if any. */ - if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) rctl |= IGC_RCTL_LPE; - - /* - * Set maximum packet length by default, and might be updated - * together with enabling/disabling dual VLAN. - */ - IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pkt_len); - } else { + else rctl &= ~IGC_RCTL_LPE; - } + + max_rx_pktlen = dev->data->mtu + IGC_ETH_OVERHEAD; + /* + * Set maximum packet length by default, and might be updated + * together with enabling/disabling dual VLAN. + */ + IGC_WRITE_REG(hw, IGC_RLPML, max_rx_pktlen); /* Configure and enable each RX queue. 
*/ rctl_bsize = 0; @@ -1168,7 +1168,7 @@ igc_rx_init(struct rte_eth_dev *dev) IGC_SRRCTL_BSIZEPKT_SHIFT); /* It adds dual VLAN length for supporting dual VLAN */ - if (max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size) + if (max_rx_pktlen > buf_size) dev->data->scattered_rx = 1; } else { /* diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c index e6207939665e..97447a10e46a 100644 --- a/drivers/net/ionic/ionic_ethdev.c +++ b/drivers/net/ionic/ionic_ethdev.c @@ -343,25 +343,15 @@ static int ionic_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) { struct ionic_lif *lif = IONIC_ETH_DEV_TO_LIF(eth_dev); - uint32_t max_frame_size; int err; IONIC_PRINT_CALL(); /* * Note: mtu check against IONIC_MIN_MTU, IONIC_MAX_MTU - * is done by the the API. + * is done by the API. */ - /* - * Max frame size is MTU + Ethernet header + VLAN + QinQ - * (plus ETHER_CRC_LEN if the adapter is able to keep CRC) - */ - max_frame_size = mtu + RTE_ETHER_HDR_LEN + 4 + 4; - - if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len < max_frame_size) - return -EINVAL; - err = ionic_lif_change_mtu(lif, mtu); if (err) return err; diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c index b83ea1bcaa6a..3f5fc66abf71 100644 --- a/drivers/net/ionic/ionic_rxtx.c +++ b/drivers/net/ionic/ionic_rxtx.c @@ -773,7 +773,7 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq, struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index]; struct rte_mbuf *rxm, *rxm_seg; uint32_t max_frame_size = - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len; + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN; uint64_t pkt_flags = 0; uint32_t pkt_type; struct ionic_rx_stats *stats = &rxq->stats; @@ -1016,7 +1016,7 @@ ionic_rx_fill(struct ionic_rx_qcq *rxq, uint32_t len) int __rte_cold ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id) { - uint32_t frame_size = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len; + uint32_t frame_size = eth_dev->data->mtu 
+ RTE_ETHER_HDR_LEN; uint8_t *rx_queue_state = eth_dev->data->rx_queue_state; struct ionic_rx_qcq *rxq; int err; @@ -1130,7 +1130,7 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, { struct ionic_rx_qcq *rxq = rx_queue; uint32_t frame_size = - rxq->qcq.lif->eth_dev->data->dev_conf.rxmode.max_rx_pkt_len; + rxq->qcq.lif->eth_dev->data->mtu + RTE_ETHER_HDR_LEN; struct ionic_rx_service service_cb_arg; service_cb_arg.rx_pkts = rx_pkts; diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c index 589d9fa5877d..3634c0c8c5f0 100644 --- a/drivers/net/ipn3ke/ipn3ke_representor.c +++ b/drivers/net/ipn3ke/ipn3ke_representor.c @@ -2801,14 +2801,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu) return -EBUSY; } - if (frame_size > IPN3KE_ETH_MAX_LEN) - dev_data->dev_conf.rxmode.offloads |= - (uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME); + if (mtu > RTE_ETHER_MTU) + dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else - dev_data->dev_conf.rxmode.offloads &= - (uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME); - - dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size; + dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; if (rpst->i40e_pf_eth) { ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth, diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index b5371568b54d..b9048ade3c35 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -5172,7 +5172,6 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) struct ixgbe_hw *hw; struct rte_eth_dev_info dev_info; uint32_t frame_size = mtu + IXGBE_ETH_OVERHEAD; - struct rte_eth_dev_data *dev_data = dev->data; int ret; ret = ixgbe_dev_info_get(dev, &dev_info); @@ -5186,9 +5185,9 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) /* If device is started, refuse mtu that requires the support of * scattered packets when this feature has not been enabled before. 
*/ - if (dev_data->dev_started && !dev_data->scattered_rx && - (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) { + if (dev->data->dev_started && !dev->data->scattered_rx && + frame_size + 2 * IXGBE_VLAN_TAG_SIZE > + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) { PMD_INIT_LOG(ERR, "Stop port first."); return -EINVAL; } @@ -5197,23 +5196,18 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0); /* switch to jumbo mode if needed */ - if (frame_size > IXGBE_ETH_MAX_LEN) { - dev->data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; + if (mtu > RTE_ETHER_MTU) { + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; hlreg0 |= IXGBE_HLREG0_JUMBOEN; } else { - dev->data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; + dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; hlreg0 &= ~IXGBE_HLREG0_JUMBOEN; } IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0); - /* update max frame size */ - dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size; - maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS); maxfrs &= 0x0000FFFF; - maxfrs |= (dev->data->dev_conf.rxmode.max_rx_pkt_len << 16); + maxfrs |= (frame_size << 16); IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs); return 0; @@ -6267,12 +6261,10 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev, * set as 0x4. 
*/ if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) && - (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE)) - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, - IXGBE_MMW_SIZE_JUMBO_FRAME); + (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)) + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME); else - IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, - IXGBE_MMW_SIZE_DEFAULT); + IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT); /* Set RTTBCNRC of queue X */ IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_idx); @@ -6556,8 +6548,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if (mtu < RTE_ETHER_MIN_MTU || - max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN) + if (mtu < RTE_ETHER_MIN_MTU || max_frame > RTE_ETHER_MAX_JUMBO_FRAME_LEN) return -EINVAL; /* If device is started, refuse mtu that requires the support of @@ -6565,7 +6556,7 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) */ if (dev_data->dev_started && !dev_data->scattered_rx && (max_frame + 2 * IXGBE_VLAN_TAG_SIZE > - dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) { + dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) { PMD_INIT_LOG(ERR, "Stop port first."); return -EINVAL; } @@ -6582,8 +6573,6 @@ ixgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) if (ixgbevf_rlpml_set_vf(hw, max_frame)) return -EINVAL; - /* update max frame size */ - dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame; return 0; } diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c index fbf2b17d160f..9bcbc445f2d0 100644 --- a/drivers/net/ixgbe/ixgbe_pf.c +++ b/drivers/net/ixgbe/ixgbe_pf.c @@ -576,8 +576,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf) * if PF has jumbo frames enabled which means legacy * VFs are disabled. 
*/ - if (dev->data->dev_conf.rxmode.max_rx_pkt_len > - IXGBE_ETH_MAX_LEN) + if (dev->data->mtu > RTE_ETHER_MTU) break; /* fall through */ default: @@ -587,8 +586,7 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf) * legacy VFs. */ if (max_frame > IXGBE_ETH_MAX_LEN || - dev->data->dev_conf.rxmode.max_rx_pkt_len > - IXGBE_ETH_MAX_LEN) + dev->data->mtu > RTE_ETHER_MTU) return -1; break; } diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index d69f36e97770..5e32a6ce6940 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -5051,6 +5051,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev) uint16_t buf_size; uint16_t i; struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode; + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD; int rc; PMD_INIT_FUNC_TRACE(); @@ -5086,7 +5087,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev) hlreg0 |= IXGBE_HLREG0_JUMBOEN; maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS); maxfrs &= 0x0000FFFF; - maxfrs |= (rx_conf->max_rx_pkt_len << 16); + maxfrs |= (frame_size << 16); IXGBE_WRITE_REG(hw, IXGBE_MAXFRS, maxfrs); } else hlreg0 &= ~IXGBE_HLREG0_JUMBOEN; @@ -5160,8 +5161,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev) IXGBE_SRRCTL_BSIZEPKT_SHIFT); /* It adds dual VLAN length for supporting dual VLAN */ - if (dev->data->dev_conf.rxmode.max_rx_pkt_len + - 2 * IXGBE_VLAN_TAG_SIZE > buf_size) + if (frame_size + 2 * IXGBE_VLAN_TAG_SIZE > buf_size) dev->data->scattered_rx = 1; if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; @@ -5641,6 +5641,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev) struct ixgbe_hw *hw; struct ixgbe_rx_queue *rxq; struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + uint32_t frame_size = dev->data->mtu + IXGBE_ETH_OVERHEAD; uint64_t bus_addr; uint32_t srrctl, psrtype = 0; uint16_t buf_size; @@ -5677,10 +5678,9 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev) * ixgbevf_rlpml_set_vf even 
if jumbo frames are not used. This way, * VF packets received can work in all cases. */ - if (ixgbevf_rlpml_set_vf(hw, - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) { + if (ixgbevf_rlpml_set_vf(hw, frame_size)) { PMD_INIT_LOG(ERR, "Set max packet length to %d failed.", - dev->data->dev_conf.rxmode.max_rx_pkt_len); + frame_size); return -EINVAL; } @@ -5739,8 +5739,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev) if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER || /* It adds dual VLAN length for supporting dual VLAN */ - (rxmode->max_rx_pkt_len + - 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) { + (frame_size + 2 * IXGBE_VLAN_TAG_SIZE) > buf_size) { if (!dev->data->scattered_rx) PMD_INIT_LOG(DEBUG, "forcing scatter mode"); dev->data->scattered_rx = 1; diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c index b72060a4499b..f0c165c89ba7 100644 --- a/drivers/net/liquidio/lio_ethdev.c +++ b/drivers/net/liquidio/lio_ethdev.c @@ -435,7 +435,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) { struct lio_device *lio_dev = LIO_DEV(eth_dev); uint16_t pf_mtu = lio_dev->linfo.link.s.mtu; - uint32_t frame_len = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; struct lio_dev_ctrl_cmd ctrl_cmd; struct lio_ctrl_pkt ctrl_pkt; @@ -481,16 +480,13 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) return -1; } - if (frame_len > LIO_ETH_MAX_LEN) + if (mtu > RTE_ETHER_MTU) eth_dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; else eth_dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len; - eth_dev->data->mtu = mtu; - return 0; } @@ -1398,8 +1394,6 @@ lio_sync_link_state_check(void *eth_dev) static int lio_dev_start(struct rte_eth_dev *eth_dev) { - uint16_t mtu; - uint32_t frame_len = eth_dev->data->dev_conf.rxmode.max_rx_pkt_len; struct lio_device *lio_dev = LIO_DEV(eth_dev); uint16_t timeout = LIO_MAX_CMD_TIMEOUT; int ret = 0; @@ -1442,15 +1436,9 
@@ lio_dev_start(struct rte_eth_dev *eth_dev) goto dev_mtu_set_error; } - mtu = (uint16_t)(frame_len - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN); - if (mtu < RTE_ETHER_MIN_MTU) - mtu = RTE_ETHER_MIN_MTU; - - if (eth_dev->data->mtu != mtu) { - ret = lio_dev_mtu_set(eth_dev, mtu); - if (ret) - goto dev_mtu_set_error; - } + ret = lio_dev_mtu_set(eth_dev, eth_dev->data->mtu); + if (ret) + goto dev_mtu_set_error; return 0; diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c index 978cbb8201ea..4a5cfd22aa71 100644 --- a/drivers/net/mlx4/mlx4_rxq.c +++ b/drivers/net/mlx4/mlx4_rxq.c @@ -753,6 +753,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, int ret; uint32_t crc_present; uint64_t offloads; + uint32_t max_rx_pktlen; offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads; @@ -828,13 +829,11 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, }; /* Enable scattered packets support for this queue if necessary. */ MLX4_ASSERT(mb_len >= RTE_PKTMBUF_HEADROOM); - if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= - (mb_len - RTE_PKTMBUF_HEADROOM)) { + max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; + if (max_rx_pktlen <= (mb_len - RTE_PKTMBUF_HEADROOM)) { ; } else if (offloads & DEV_RX_OFFLOAD_SCATTER) { - uint32_t size = - RTE_PKTMBUF_HEADROOM + - dev->data->dev_conf.rxmode.max_rx_pkt_len; + uint32_t size = RTE_PKTMBUF_HEADROOM + max_rx_pktlen; uint32_t sges_n; /* @@ -846,21 +845,19 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, /* Make sure sges_n did not overflow. 
*/ size = mb_len * (1 << rxq->sges_n); size -= RTE_PKTMBUF_HEADROOM; - if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) { + if (size < max_rx_pktlen) { rte_errno = EOVERFLOW; ERROR("%p: too many SGEs (%u) needed to handle" " requested maximum packet size %u", (void *)dev, - 1 << sges_n, - dev->data->dev_conf.rxmode.max_rx_pkt_len); + 1 << sges_n, max_rx_pktlen); goto error; } } else { WARN("%p: the requested maximum Rx packet size (%u) is" " larger than a single mbuf (%u) and scattered" " mode has not been requested", - (void *)dev, - dev->data->dev_conf.rxmode.max_rx_pkt_len, + (void *)dev, max_rx_pktlen, mb_len - RTE_PKTMBUF_HEADROOM); } DEBUG("%p: maximum number of segments per packet: %u", diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c index bb9a9080871d..bd16dde6de13 100644 --- a/drivers/net/mlx5/mlx5_rxq.c +++ b/drivers/net/mlx5/mlx5_rxq.c @@ -1336,10 +1336,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, uint64_t offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads; unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO); - unsigned int max_rx_pkt_len = lro_on_queue ? + unsigned int max_rx_pktlen = lro_on_queue ? dev->data->dev_conf.rxmode.max_lro_pkt_size : - dev->data->dev_conf.rxmode.max_rx_pkt_len; - unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len + + dev->data->mtu + (unsigned int)RTE_ETHER_HDR_LEN + + RTE_ETHER_CRC_LEN; + unsigned int non_scatter_min_mbuf_size = max_rx_pktlen + RTE_PKTMBUF_HEADROOM; unsigned int max_lro_size = 0; unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM; @@ -1378,7 +1379,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, * needed to handle max size packets, replace zero length * with the buffer length from the pool. 
 	 */
-	tail_len = max_rx_pkt_len;
+	tail_len = max_rx_pktlen;
 	do {
 		struct mlx5_eth_rxseg *hw_seg =
			&tmpl->rxq.rxseg[tmpl->rxq.rxseg_n];
@@ -1416,7 +1417,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			"port %u too many SGEs (%u) needed to handle"
 			" requested maximum packet size %u, the maximum"
 			" supported are %u", dev->data->port_id,
-			tmpl->rxq.rxseg_n, max_rx_pkt_len,
+			tmpl->rxq.rxseg_n, max_rx_pktlen,
 			MLX5_MAX_RXQ_NSEG);
 		rte_errno = ENOTSUP;
 		goto error;
@@ -1441,7 +1442,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
 			" configured and no enough mbuf space(%u) to contain "
 			"the maximum RX packet length(%u) with head-room(%u)",
-			dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+			dev->data->port_id, idx, mb_len, max_rx_pktlen,
 			RTE_PKTMBUF_HEADROOM);
 		rte_errno = ENOSPC;
 		goto error;
@@ -1460,7 +1461,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	 * following conditions are met:
 	 *  - MPRQ is enabled.
 	 *  - The number of descs is more than the number of strides.
-	 *  - max_rx_pkt_len plus overhead is less than the max size
+	 *  - max_rx_pktlen plus overhead is less than the max size
 	 *    of a stride or mprq_stride_size is specified by a user.
 	 *    Need to make sure that there are enough strides to encap
 	 *    the maximum packet size in case mprq_stride_size is set.
@@ -1484,7 +1485,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			!!(offloads & DEV_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
-		max_lro_size = RTE_MIN(max_rx_pkt_len,
+		max_lro_size = RTE_MIN(max_rx_pktlen,
 				       (1u << tmpl->rxq.strd_num_n) *
 				       (1u << tmpl->rxq.strd_sz_n));
 		DRV_LOG(DEBUG,
@@ -1493,9 +1494,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			dev->data->port_id, idx,
 			tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
 	} else if (tmpl->rxq.rxseg_n == 1) {
-		MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
+		MLX5_ASSERT(max_rx_pktlen <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
-		max_lro_size = max_rx_pkt_len;
+		max_lro_size = max_rx_pktlen;
 	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
 		unsigned int sges_n;
 
@@ -1517,13 +1518,13 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 				"port %u too many SGEs (%u) needed to handle"
 				" requested maximum packet size %u, the maximum"
 				" supported are %u", dev->data->port_id,
-				1 << sges_n, max_rx_pkt_len,
+				1 << sges_n, max_rx_pktlen,
 				1u << MLX5_MAX_LOG_RQ_SEGS);
 			rte_errno = ENOTSUP;
 			goto error;
 		}
 		tmpl->rxq.sges_n = sges_n;
-		max_lro_size = max_rx_pkt_len;
+		max_lro_size = max_rx_pktlen;
 	}
 	if (config->mprq.enabled && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
 		DRV_LOG(WARNING,
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..520c6fdb1d31 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -126,10 +126,6 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
-		dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
-				 MRVL_NETA_ETH_HDRS_LEN;
-
 	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
@@ -261,9 +257,6 @@ mvneta_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		return -EINVAL;
 	}
 
-	dev->data->mtu = mtu;
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
 	if (!priv->ppio)
 		/* It is OK. New MTU will be set later on mvneta_dev_start */
 		return 0;
 
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..2cd4fb31348b 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -708,19 +708,18 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mvneta_priv *priv = dev->data->dev_private;
 	struct mvneta_rxq *rxq;
 	uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
-	uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
 
 	frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MVNETA_PKT_EFFEC_OFFS;
-	if (frame_size < max_rx_pkt_len) {
+	if (frame_size < max_rx_pktlen) {
 		MVNETA_LOG(ERR,
 			"Mbuf size must be increased to %u bytes to hold up "
 			"to %u bytes of data.",
-			buf_size + max_rx_pkt_len - frame_size,
-			max_rx_pkt_len);
-		dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-		MVNETA_LOG(INFO, "Setting max rx pkt len to %u",
-			dev->data->dev_conf.rxmode.max_rx_pkt_len);
+			max_rx_pktlen + buf_size - frame_size,
+			max_rx_pktlen);
+		dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+		MVNETA_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
 	}
 
 	if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 63d348e27936..9d578b4ffa5d 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -496,16 +496,11 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
-		dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
-				 MRVL_PP2_ETH_HDRS_LEN;
-		if (dev->data->mtu > priv->max_mtu) {
-			MRVL_LOG(ERR, "inherit MTU %u from max_rx_pkt_len %u is larger than max_mtu %u\n",
-				 dev->data->mtu,
-				 dev->data->dev_conf.rxmode.max_rx_pkt_len,
-				 priv->max_mtu);
-			return -EINVAL;
-		}
+	if (dev->data->dev_conf.rxmode.mtu > priv->max_mtu) {
+		MRVL_LOG(ERR, "MTU %u is larger than max_mtu %u\n",
+			 dev->data->dev_conf.rxmode.mtu,
+			 priv->max_mtu);
+		return -EINVAL;
 	}
 
 	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
@@ -589,9 +584,6 @@ mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		return -EINVAL;
 	}
 
-	dev->data->mtu = mtu;
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = mru - MV_MH_SIZE;
-
 	if (!priv->ppio)
 		return 0;
 
@@ -1984,7 +1976,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mrvl_priv *priv = dev->data->dev_private;
 	struct mrvl_rxq *rxq;
 	uint32_t frame_size, buf_size = rte_pktmbuf_data_room_size(mp);
-	uint32_t max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+	uint32_t max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
 	int ret, tc, inq;
 	uint64_t offloads;
 
@@ -1999,17 +1991,15 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		return -EFAULT;
 	}
 
-	frame_size = buf_size - RTE_PKTMBUF_HEADROOM -
-		     MRVL_PKT_EFFEC_OFFS + RTE_ETHER_CRC_LEN;
-	if (frame_size < max_rx_pkt_len) {
+	frame_size = buf_size - RTE_PKTMBUF_HEADROOM - MRVL_PKT_EFFEC_OFFS;
+	if (frame_size < max_rx_pktlen) {
 		MRVL_LOG(WARNING,
			"Mbuf size must be increased to %u bytes to hold up "
			"to %u bytes of data.",
-			buf_size + max_rx_pkt_len - frame_size,
-			max_rx_pkt_len);
-		dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-		MRVL_LOG(INFO, "Setting max rx pkt len to %u",
-			dev->data->dev_conf.rxmode.max_rx_pkt_len);
+			max_rx_pktlen + buf_size - frame_size,
+			max_rx_pktlen);
+		dev->data->mtu = frame_size - RTE_ETHER_HDR_LEN;
+		MRVL_LOG(INFO, "Setting MTU to %u", dev->data->mtu);
 	}
 
 	if (dev->data->rx_queues[idx]) {
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index b18edd8c7bac..ff531fdb2354 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -644,7 +644,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
-		hw->mtu = rxmode->max_rx_pkt_len;
+		hw->mtu = dev->data->mtu;
 
 	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -1551,16 +1551,13 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	}
 
 	/* switch to jumbo mode if needed */
-	if ((uint32_t)mtu > RTE_ETHER_MTU)
+	if (mtu > RTE_ETHER_MTU)
 		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
 
-	/* update max frame size */
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
-
 	/* writing to configuration space */
-	nn_cfg_writel(hw, NFP_NET_CFG_MTU, (uint32_t)mtu);
+	nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
 
 	hw->mtu = mtu;
 
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..69c3bda12df8 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,13 +552,11 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	if (rc)
 		return rc;
 
-	if (frame_size > OCCTX_L2_MAX_LEN)
+	if (mtu > RTE_ETHER_MTU)
 		nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
 
-	/* Update max_rx_pkt_len */
-	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
 			  frame_size);
 
@@ -581,7 +579,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	/* Setup scatter mode if needed by jumbo */
-	if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+	if (data->mtu > buffsz) {
 		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
 		nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
 		nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
@@ -593,8 +591,8 @@
 		evdev_priv->rx_offload_flags = nic->rx_offload_flags;
 		evdev_priv->tx_offload_flags = nic->tx_offload_flags;
 
-	/* Setup MTU based on max_rx_pkt_len */
-	nic->mtu = data->dev_conf.rxmode.max_rx_pkt_len - OCCTX_L2_OVERHEAD;
+	/* Setup MTU */
+	nic->mtu = data->mtu;
 
 	return 0;
 }
@@ -615,7 +613,7 @@ octeontx_dev_start(struct rte_eth_dev *dev)
 		octeontx_recheck_rx_offloads(rxq);
 	}
 
-	/* Setting up the mtu based on max_rx_pkt_len */
+	/* Setting up the mtu */
 	ret = octeontx_dev_mtu_set(dev, nic->mtu);
 	if (ret) {
 		octeontx_log_err("Failed to set default MTU size %d", ret);
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 40af99a26a17..9f162475523c 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -912,7 +912,7 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
 	mbp_priv = rte_mempool_get_priv(rxq->pool);
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
-	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
+	if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
 		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
 		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5a4501208e9e..ba282762b749 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -58,14 +58,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	if (rc)
 		return rc;
 
-	if (frame_size > NIX_L2_MAX_LEN)
+	if (mtu > RTE_ETHER_MTU)
 		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
 
-	/* Update max_rx_pkt_len */
-	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
-
 	return rc;
 }
 
@@ -74,7 +71,6 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
 {
 	struct rte_eth_dev_data *data = eth_dev->data;
 	struct otx2_eth_rxq *rxq;
-	uint16_t mtu;
 	int rc;
 
 	rxq = data->rx_queues[0];
@@ -82,10 +78,7 @@ otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
 	/* Setup scatter mode if needed by jumbo */
 	otx2_nix_enable_mseg_on_jumbo(rxq);
 
-	/* Setup MTU based on max_rx_pkt_len */
-	mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD;
-
-	rc = otx2_nix_mtu_set(eth_dev, mtu);
+	rc = otx2_nix_mtu_set(eth_dev, data->mtu);
 	if (rc)
 		otx2_err("Failed to set default MTU size %d", rc);
 
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..2619bd2f2a19 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -682,16 +682,11 @@ pfe_link_up(struct rte_eth_dev *dev)
 static int
 pfe_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
-	int ret;
 	struct pfe_eth_priv_s *priv = dev->data->dev_private;
 	uint16_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
 
 	/*TODO Support VLAN*/
-	ret = gemac_set_rx(priv->EMAC_baseaddr, frame_size);
-	if (!ret)
-		dev->data->mtu = mtu;
-
-	return ret;
+	return gemac_set_rx(priv->EMAC_baseaddr, frame_size);
 }
 
 /* pfe_eth_enet_addr_byte_mac
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6ebb2..53b2c0ca10e3 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1312,12 +1312,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 			return -ENOMEM;
 	}
 
-	/* If jumbo enabled adjust MTU */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
-		eth_dev->data->mtu =
-			eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
-			RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
-
 	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 
@@ -2315,7 +2309,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct rte_eth_dev_info dev_info = {0};
 	struct qede_fastpath *fp;
-	uint32_t max_rx_pkt_len;
 	uint32_t frame_size;
 	uint16_t bufsz;
 	bool restart = false;
@@ -2327,8 +2320,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 		DP_ERR(edev, "Error during getting ethernet device info\n");
 		return rc;
 	}
-	max_rx_pkt_len = mtu + QEDE_MAX_ETHER_HDR_LEN;
-	frame_size = max_rx_pkt_len;
+
+	frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN;
 	if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) {
 		DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n",
 		       mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN -
@@ -2368,7 +2361,7 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 			fp->rxq->rx_buf_size = rc;
 		}
 	}
-	if (frame_size > QEDE_ETH_MAX_LEN)
+	if (mtu > RTE_ETHER_MTU)
 		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -2378,9 +2371,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 		dev->data->dev_started = 1;
 	}
 
-	/* update max frame size */
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = max_rx_pkt_len;
-
 	return 0;
 }
 
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 298f4e3e4273..62a126999a5c 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -224,7 +224,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
 	struct qede_rx_queue *rxq;
-	uint16_t max_rx_pkt_len;
+	uint16_t max_rx_pktlen;
 	uint16_t bufsz;
 	int rc;
 
@@ -243,21 +243,21 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 		dev->data->rx_queues[qid] = NULL;
 	}
 
-	max_rx_pkt_len = (uint16_t)rxmode->max_rx_pkt_len;
+	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
 
 	/* Fix up RX buffer size */
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
 	/* cache align the mbuf size to simplfy rx_buf_size calculation */
 	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
 	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) ||
-	    (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
+	    (max_rx_pktlen + QEDE_ETH_OVERHEAD) > bufsz) {
 		if (!dev->data->scattered_rx) {
 			DP_INFO(edev, "Forcing scatter-gather mode\n");
 			dev->data->scattered_rx = 1;
 		}
 	}
 
-	rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pkt_len);
+	rc = qede_calc_rx_buf_size(dev, bufsz, max_rx_pktlen);
 	if (rc < 0)
 		return rc;
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c50ecea0b993..2afb13b77892 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1016,15 +1016,13 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 
 	/*
 	 * The driver does not use it, but other PMDs update jumbo frame
-	 * flag and max_rx_pkt_len when MTU is set.
+	 * flag when MTU is set.
 	 */
 	if (mtu > RTE_ETHER_MTU) {
 		struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
 		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	}
 
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
-
 	sfc_adapter_unlock(sa);
 
 	sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index ac117f9c4814..ca9538fb8f2f 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -364,14 +364,10 @@ sfc_port_configure(struct sfc_adapter *sa)
 {
 	const struct rte_eth_dev_data *dev_data = sa->eth_dev->data;
 	struct sfc_port *port = &sa->port;
-	const struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
 
 	sfc_log_init(sa, "entry");
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
-		port->pdu = rxmode->max_rx_pkt_len;
-	else
-		port->pdu = EFX_MAC_PDU(dev_data->mtu);
+	port->pdu = EFX_MAC_PDU(dev_data->mtu);
 
 	return 0;
 }
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..0a8d29277aeb 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1627,13 +1627,8 @@ tap_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_mtu = mtu };
-	int err = 0;
 
-	err = tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
-	if (!err)
-		dev->data->mtu = mtu;
-
-	return err;
+	return tap_ioctl(pmd, SIOCSIFMTU, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
 static int
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfce1..1d1360faff66 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -176,7 +176,7 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 		(frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS))
 		return -EINVAL;
 
-	if (frame_size > NIC_HW_L2_MAX_LEN)
+	if (mtu > RTE_ETHER_MTU)
 		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
@@ -184,8 +184,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	if (nicvf_mbox_update_hw_max_frs(nic, mtu))
 		return -EINVAL;
 
-	/* Update max_rx_pkt_len */
-	rxmode->max_rx_pkt_len = mtu + RTE_ETHER_HDR_LEN;
 	nic->mtu = mtu;
 
 	for (i = 0; i < nic->sqs_count; i++)
@@ -1724,16 +1722,13 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Setup scatter mode if needed by jumbo */
-	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
-	    2 * VLAN_TAG_SIZE > buffsz)
+	if (dev->data->mtu + (uint32_t)NIC_HW_L2_OVERHEAD + 2 * VLAN_TAG_SIZE > buffsz)
 		dev->data->scattered_rx = 1;
 	if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
 		dev->data->scattered_rx = 1;
 
-	/* Setup MTU based on max_rx_pkt_len or default */
-	mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
-		dev->data->dev_conf.rxmode.max_rx_pkt_len
-			- RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
+	/* Setup MTU */
+	mtu = dev->data->mtu;
 
 	if (nicvf_dev_set_mtu(dev, mtu)) {
 		PMD_INIT_LOG(ERR, "Failed to set default mtu size");
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index e62675520a15..d773a81665d7 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -3482,8 +3482,11 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 		return -EINVAL;
 	}
 
-	/* update max frame size */
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+	/* switch to jumbo mode if needed */
+	if (mtu > RTE_ETHER_MTU)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
+		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
 
 	if (hw->mode)
 		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..44cfcd76bca4 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -55,6 +55,10 @@
 #define TXGBE_5TUPLE_MAX_PRI            7
 #define TXGBE_5TUPLE_MIN_PRI            1
 
+
+/* The overhead from MTU to max frame size. */
+#define TXGBE_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN)
+
 #define TXGBE_RSS_OFFLOAD_ALL ( \
	ETH_RSS_IPV4 | \
	ETH_RSS_NONFRAG_IPV4_TCP | \
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 6f577f4c80df..3362ca097ca7 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -1143,8 +1143,6 @@ txgbevf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	if (txgbevf_rlpml_set_vf(hw, max_frame))
 		return -EINVAL;
 
-	/* update max frame size */
-	dev->data->dev_conf.rxmode.max_rx_pkt_len = max_frame;
 	return 0;
 }
 
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c6cd3803c434 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -4305,13 +4305,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure jumbo frame support, if any.
 	 */
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
-		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
-			TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
-	} else {
-		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
-			TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
-	}
+	wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+		TXGBE_FRMSZ_MAX(dev->data->mtu + TXGBE_ETH_OVERHEAD));
 
 	/*
 	 * If loopback mode is configured, set LPBK bit.
@@ -4373,8 +4368,8 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
 
 		/* It adds dual VLAN length for supporting dual VLAN */
-		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
-		    2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+		if (dev->data->mtu + TXGBE_ETH_OVERHEAD +
+				2 * TXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
 		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
 			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
@@ -4826,9 +4821,9 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * VF packets received can work in all cases.
*/ if (txgbevf_rlpml_set_vf(hw, - (uint16_t)dev->data->dev_conf.rxmode.max_rx_pkt_len)) { + (uint16_t)dev->data->mtu + TXGBE_ETH_OVERHEAD)) { PMD_INIT_LOG(ERR, "Set max packet length to %d failed.", - dev->data->dev_conf.rxmode.max_rx_pkt_len); + dev->data->mtu + TXGBE_ETH_OVERHEAD); return -EINVAL; } @@ -4890,7 +4885,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev) if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER || /* It adds dual VLAN length for supporting dual VLAN */ - (rxmode->max_rx_pkt_len + + (dev->data->mtu + TXGBE_ETH_OVERHEAD + 2 * TXGBE_VLAN_TAG_SIZE) > buf_size) { if (!dev->data->scattered_rx) PMD_INIT_LOG(DEBUG, "forcing scatter mode"); diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index 05683056676c..9491cc2669f7 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -2009,8 +2009,6 @@ virtio_dev_configure(struct rte_eth_dev *dev) const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode; struct virtio_hw *hw = dev->data->dev_private; - uint32_t ether_hdr_len = RTE_ETHER_HDR_LEN + VLAN_TAG_LEN + - hw->vtnet_hdr_size; uint64_t rx_offloads = rxmode->offloads; uint64_t tx_offloads = txmode->offloads; uint64_t req_features; @@ -2039,7 +2037,7 @@ virtio_dev_configure(struct rte_eth_dev *dev) return ret; } - if (rxmode->max_rx_pkt_len > hw->max_mtu + ether_hdr_len) + if (rxmode->mtu > hw->max_mtu) req_features &= ~(1ULL << VIRTIO_NET_F_MTU); if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c index 5251db0b1674..98e47e0812d5 100644 --- a/examples/bbdev_app/main.c +++ b/examples/bbdev_app/main.c @@ -72,7 +72,6 @@ mbuf_input(struct rte_mbuf *mbuf) static const struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, .split_hdr_size = 0, }, .txmode = { diff --git a/examples/bond/main.c 
b/examples/bond/main.c index f48400e21156..70c37a7d2ba7 100644 --- a/examples/bond/main.c +++ b/examples/bond/main.c @@ -117,7 +117,6 @@ static struct rte_mempool *mbuf_pool; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_NONE, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, .split_hdr_size = 0, }, .rx_adv_conf = { diff --git a/examples/distributor/main.c b/examples/distributor/main.c index 1b1029660e77..0b973d392dc8 100644 --- a/examples/distributor/main.c +++ b/examples/distributor/main.c @@ -81,7 +81,6 @@ struct app_stats prev_app_stats; static const struct rte_eth_conf port_conf_default = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, }, .txmode = { .mq_mode = ETH_MQ_TX_NONE, diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c index f70ab0cc9e38..f5c28268d9f8 100644 --- a/examples/eventdev_pipeline/pipeline_worker_generic.c +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c @@ -284,7 +284,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool) static const struct rte_eth_conf port_conf_default = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c index ca6cd200caad..9d9f150522dd 100644 --- a/examples/eventdev_pipeline/pipeline_worker_tx.c +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c @@ -615,7 +615,6 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool) static const struct rte_eth_conf port_conf_default = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c index 94c155364842..3e1daa228316 100644 --- a/examples/flow_classify/flow_classify.c +++ b/examples/flow_classify/flow_classify.c @@ -59,12 +59,6 
@@ static struct{ } parm_config; const char cb_port_delim[] = ":"; -static const struct rte_eth_conf port_conf_default = { - .rxmode = { - .max_rx_pkt_len = RTE_ETHER_MAX_LEN, - }, -}; - struct flow_classifier { struct rte_flow_classifier *cls; }; @@ -191,7 +185,7 @@ static struct rte_flow_attr attr; static inline int port_init(uint8_t port, struct rte_mempool *mbuf_pool) { - struct rte_eth_conf port_conf = port_conf_default; + struct rte_eth_conf port_conf; struct rte_ether_addr addr; const uint16_t rx_rings = 1, tx_rings = 1; int retval; @@ -202,6 +196,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool) if (!rte_eth_dev_is_valid_port(port)) return -1; + memset(&port_conf, 0, sizeof(struct rte_eth_conf)); + retval = rte_eth_dev_info_get(port, &dev_info); if (retval != 0) { printf("Error during getting device (port %u) info: %s\n", diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c index 2e377e2d4bb6..5dbf60f7ef54 100644 --- a/examples/ioat/ioatfwd.c +++ b/examples/ioat/ioatfwd.c @@ -806,7 +806,6 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues) static const struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = RTE_ETHER_MAX_LEN }, .rx_adv_conf = { .rss_conf = { diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c index 77a6a18d1914..f97287ce2243 100644 --- a/examples/ip_fragmentation/main.c +++ b/examples/ip_fragmentation/main.c @@ -146,7 +146,7 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; static struct rte_eth_conf port_conf = { .rxmode = { - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE, + .mtu = JUMBO_FRAME_MAX_SIZE, .split_hdr_size = 0, .offloads = (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCATTER | @@ -914,9 +914,9 @@ main(int argc, char **argv) "Error during getting device (port %u) info: %s\n", portid, strerror(-ret)); - local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN( - dev_info.max_rx_pktlen, - 
local_port_conf.rxmode.max_rx_pkt_len); + local_port_conf.rxmode.mtu = RTE_MIN( + dev_info.max_mtu, + local_port_conf.rxmode.mtu); /* get the lcore_id for this port */ while (rte_lcore_is_enabled(rx_lcore_id) == 0 || @@ -959,8 +959,7 @@ main(int argc, char **argv) } /* set the mtu to the maximum received packet size */ - ret = rte_eth_dev_set_mtu(portid, - local_port_conf.rxmode.max_rx_pkt_len - MTU_OVERHEAD); + ret = rte_eth_dev_set_mtu(portid, local_port_conf.rxmode.mtu); if (ret < 0) { printf("\n"); rte_exit(EXIT_FAILURE, "Set MTU failed: " diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c index 16bcffe356bc..8628db22f56b 100644 --- a/examples/ip_pipeline/link.c +++ b/examples/ip_pipeline/link.c @@ -46,7 +46,7 @@ static struct rte_eth_conf port_conf_default = { .link_speeds = 0, .rxmode = { .mq_mode = ETH_MQ_RX_NONE, - .max_rx_pkt_len = 9000, /* Jumbo frame max packet len */ + .mtu = 9000, /* Jumbo frame MTU */ .split_hdr_size = 0, /* Header split buffer size */ }, .rx_adv_conf = { diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c index ce8882a45883..f868e5d906c7 100644 --- a/examples/ip_reassembly/main.c +++ b/examples/ip_reassembly/main.c @@ -162,7 +162,7 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE]; static struct rte_eth_conf port_conf = { .rxmode = { .mq_mode = ETH_MQ_RX_RSS, - .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE, + .mtu = JUMBO_FRAME_MAX_SIZE, .split_hdr_size = 0, .offloads = (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_JUMBO_FRAME), @@ -875,7 +875,8 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue) */ nb_mbuf = RTE_MAX(max_flow_num, 2UL * MAX_PKT_BURST) * MAX_FRAG_NUM; - nb_mbuf *= (port_conf.rxmode.max_rx_pkt_len + BUF_SIZE - 1) / BUF_SIZE; + nb_mbuf *= (port_conf.rxmode.mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + + BUF_SIZE - 1) / BUF_SIZE; nb_mbuf *= 2; /* ipv4 and ipv6 */ nb_mbuf += nb_rxd + nb_txd; @@ -1046,9 +1047,9 @@ main(int argc, char **argv) "Error during 
			 getting device (port %u) info: %s\n",
			 portid, strerror(-ret));

-		local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
-		    dev_info.max_rx_pktlen,
-		    local_port_conf.rxmode.max_rx_pkt_len);
+		local_port_conf.rxmode.mtu = RTE_MIN(
+		    dev_info.max_mtu,
+		    local_port_conf.rxmode.mtu);

 	/* get the lcore_id for this port */
 	while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985b4..f8a1f544c21d 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -235,7 +235,6 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mq_mode = ETH_MQ_RX_RSS,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
 	},
@@ -2161,7 +2160,6 @@ cryptodevs_init(uint16_t req_queue_num)
 static void
 port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 {
-	uint32_t frame_size;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf *txconf;
 	uint16_t nb_tx_queue, nb_rx_queue;
@@ -2209,10 +2207,9 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
 			nb_rx_queue, nb_tx_queue);

-	frame_size = MTU_TO_FRAMELEN(mtu_size);
-	if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
+	if (mtu_size > RTE_ETHER_MTU)
 		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	local_port_conf.rxmode.max_rx_pkt_len = frame_size;
+	local_port_conf.rxmode.mtu = mtu_size;

 	if (multi_seg_required()) {
 		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index fd6207a18b79..989d70ae257a 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -107,7 +107,7 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
+		.mtu = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
 		.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
 	},
@@ -694,9 +694,9 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-ret));

-		local_port_conf.rxmode.max_rx_pkt_len = RTE_MIN(
-		    dev_info.max_rx_pktlen,
-		    local_port_conf.rxmode.max_rx_pkt_len);
+		local_port_conf.rxmode.mtu = RTE_MIN(
+		    dev_info.max_mtu,
+		    local_port_conf.rxmode.mtu);

 	/* get the lcore_id for this port */
 	while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..c10814c6a94f 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -791,14 +791,12 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
 	memcpy(&conf, &port_conf, sizeof(conf));

 	/* Set new MTU */
-	if (new_mtu > RTE_ETHER_MAX_LEN)
+	if (new_mtu > RTE_ETHER_MTU)
 		conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;

-	/* mtu + length of header + length of FCS = max pkt length */
-	conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
-							KNI_ENET_FCS_SIZE;
+	conf.rxmode.mtu = new_mtu;

 	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
 	if (ret < 0) {
 		RTE_LOG(ERR, APP, "Fail to reconfigure port %d\n", port_id);
diff --git a/examples/l2fwd-cat/l2fwd-cat.c b/examples/l2fwd-cat/l2fwd-cat.c
index 8e7eb3248589..cef4187467f0 100644
--- a/examples/l2fwd-cat/l2fwd-cat.c
+++ b/examples/l2fwd-cat/l2fwd-cat.c
@@ -19,10 +19,6 @@
 #define MBUF_CACHE_SIZE 250
 #define BURST_SIZE 32

-static const struct rte_eth_conf port_conf_default = {
-	.rxmode = { .max_rx_pkt_len = RTE_ETHER_MAX_LEN }
-};
-
 /* l2fwd-cat.c: CAT enabled, basic DPDK skeleton forwarding example. */

 /*
@@ -32,7 +28,7 @@ static const struct rte_eth_conf port_conf_default = {
 static inline int
 port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 {
-	struct rte_eth_conf port_conf = port_conf_default;
+	struct rte_eth_conf port_conf;
 	const uint16_t rx_rings = 1, tx_rings = 1;
 	int retval;
 	uint16_t q;
@@ -42,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 	if (!rte_eth_dev_is_valid_port(port))
 		return -1;

+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0)
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 4f5161649234..b36c6123c652 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -215,7 +215,6 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mq_mode = ETH_MQ_RX_NONE,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index ab341e55b299..0d0857bf8041 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -11,7 +11,6 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 	uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 			.split_hdr_size = 0,
 		},
 		.txmode = {
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..913037d5f835 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -125,7 +125,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mq_mode = ETH_MQ_RX_RSS,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
 	},
@@ -1833,12 +1832,12 @@ parse_args(int argc, char **argv)
				print_usage(prgname);
				return -1;
			}
-			port_conf.rxmode.max_rx_pkt_len = ret;
+			port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
+					RTE_ETHER_CRC_LEN);
		}
-		printf("set jumbo frame max packet length "
-				"to %u\n",
-				(unsigned int)
-				port_conf.rxmode.max_rx_pkt_len);
+		printf("set jumbo frame max packet length to %u\n",
+				(unsigned int)port_conf.rxmode.mtu +
+				RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
		break;
	}
	case OPT_RULE_IPV4_NUM:
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index 75c2e0ef3f3f..ddcb2fbc995d 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -112,7 +112,6 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mq_mode = ETH_MQ_RX_RSS,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
@@ -510,7 +509,8 @@ parse_args(int argc, char **argv)
				print_usage(prgname);
				return -1;
			}
-			port_conf.rxmode.max_rx_pkt_len = ret;
+			port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
+					RTE_ETHER_CRC_LEN);
		}
		break;
	}
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index f8dfed163423..02221a79fabf 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,7 +250,6 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mq_mode = ETH_MQ_RX_RSS,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
 	},
@@ -1972,11 +1971,13 @@ parse_args(int argc, char **argv)
					print_usage(prgname);
					return -1;
				}
-				port_conf.rxmode.max_rx_pkt_len = ret;
+				port_conf.rxmode.mtu = ret -
+					(RTE_ETHER_HDR_LEN +
+						RTE_ETHER_CRC_LEN);
			}
-			printf("set jumbo frame "
-				"max packet length to %u\n",
-				(unsigned int)port_conf.rxmode.max_rx_pkt_len);
+			printf("set jumbo frame max packet length to %u\n",
+				(unsigned int)port_conf.rxmode.mtu +
+				RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
		}

		if (!strncmp(lgopts[option_index].name,
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4cb800aa158d..80b5b93d5f0d 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -121,7 +121,6 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mq_mode = ETH_MQ_RX_RSS,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
 	},
@@ -719,7 +718,8 @@ parse_args(int argc, char **argv)
				print_usage(prgname);
				return -1;
			}
-			port_conf.rxmode.max_rx_pkt_len = ret;
+			port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
+					RTE_ETHER_CRC_LEN);
		}
		break;
	}
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..1960f00ad28d 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -308,7 +308,6 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mq_mode = ETH_MQ_RX_RSS,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
 	},
@@ -3004,10 +3003,12 @@ parse_args(int argc, char **argv)
				print_usage(prgname);
				return -1;
			}
-			port_conf.rxmode.max_rx_pkt_len = ret;
+			port_conf.rxmode.mtu = ret - (RTE_ETHER_HDR_LEN +
+					RTE_ETHER_CRC_LEN);
		}
		printf("set jumbo frame max packet length to %u\n",
-			(unsigned int)port_conf.rxmode.max_rx_pkt_len);
+			(unsigned int)port_conf.rxmode.mtu +
+			RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN);
		break;
	}
#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..52f2a139d2c6 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -134,7 +134,7 @@ static struct rte_eth_conf port_conf_default = {
	.link_speeds = 0,
	.rxmode = {
		.mq_mode = ETH_MQ_RX_NONE,
-		.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
+		.mtu = 9000, /* Jumbo frame max MTU */
		.split_hdr_size = 0, /* Header split buffer size */
	},
	.rx_adv_conf = {
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 173451eedcbe..54148631f09e 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -47,12 +47,6 @@ uint32_t ptp_enabled_port_mask;
 uint8_t ptp_enabled_port_nb;
 static uint8_t ptp_enabled_ports[RTE_MAX_ETHPORTS];

-static const struct rte_eth_conf port_conf_default = {
-	.rxmode = {
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
-	},
-};
-
 static const struct rte_ether_addr ether_multicast = {
	.addr_bytes = {0x01, 0x1b, 0x19, 0x0, 0x0, 0x0}
 };
@@ -178,7 +172,7 @@ static inline int
 port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 {
	struct rte_eth_dev_info dev_info;
-	struct rte_eth_conf port_conf = port_conf_default;
+	struct rte_eth_conf port_conf;
	const uint16_t rx_rings = 1;
	const uint16_t tx_rings = 1;
	int retval;
@@ -189,6 +183,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
	if (!rte_eth_dev_is_valid_port(port))
		return -1;

+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
	retval = rte_eth_dev_info_get(port, &dev_info);
	if (retval != 0) {
		printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 6e724f37835a..2e9ed3cf7ef7 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -54,7 +54,6 @@ static struct rte_mempool *pool = NULL;
 static struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
		.split_hdr_size = 0,
		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
	},
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..1367569c65db 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -57,7 +57,6 @@ struct flow_conf qos_conf[MAX_DATA_STREAMS];
 static struct rte_eth_conf port_conf = {
	.rxmode = {
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
		.split_hdr_size = 0,
	},
	.txmode = {
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 192521c3c6b0..ea86c69b07ad 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -40,12 +40,6 @@ tsc_field(struct rte_mbuf *mbuf)
 static const char usage[] = "%s EAL_ARGS -- [-t]\n";

-static const struct rte_eth_conf port_conf_default = {
-	.rxmode = {
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
-	},
-};
-
 static struct {
	uint64_t total_cycles;
	uint64_t total_queue_cycles;
@@ -118,7 +112,7 @@ calc_latency(uint16_t port, uint16_t qidx __rte_unused,
 static inline int
 port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 {
-	struct rte_eth_conf port_conf = port_conf_default;
+	struct rte_eth_conf port_conf;
	const uint16_t rx_rings = 1, tx_rings = 1;
	uint16_t nb_rxd = RX_RING_SIZE;
	uint16_t nb_txd = TX_RING_SIZE;
@@ -131,6 +125,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
	if (!rte_eth_dev_is_valid_port(port))
		return -1;

+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
	retval = rte_eth_dev_info_get(port, &dev_info);
	if (retval != 0) {
		printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index 43b9d17a3c91..26c63ffed742 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -17,12 +17,6 @@
 #define MBUF_CACHE_SIZE 250
 #define BURST_SIZE 32

-static const struct rte_eth_conf port_conf_default = {
-	.rxmode = {
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
-	},
-};
-
 /* basicfwd.c: Basic DPDK skeleton forwarding example. */

 /*
@@ -32,7 +26,7 @@ static const struct rte_eth_conf port_conf_default = {
 static inline int
 port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 {
-	struct rte_eth_conf port_conf = port_conf_default;
+	struct rte_eth_conf port_conf;
	const uint16_t rx_rings = 1, tx_rings = 1;
	uint16_t nb_rxd = RX_RING_SIZE;
	uint16_t nb_txd = TX_RING_SIZE;
@@ -44,6 +38,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
	if (!rte_eth_dev_is_valid_port(port))
		return -1;

+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
	retval = rte_eth_dev_info_get(port, &dev_info);
	if (retval != 0) {
		printf("Error during getting device (port %u) info: %s\n",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index d2179eadb979..e27712727f6a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -639,8 +639,8 @@ us_vhost_parse_args(int argc, char **argv)
			if (ret) {
				vmdq_conf_default.rxmode.offloads |=
					DEV_RX_OFFLOAD_JUMBO_FRAME;
-				vmdq_conf_default.rxmode.max_rx_pkt_len
-					= JUMBO_FRAME_MAX_SIZE;
+				vmdq_conf_default.rxmode.mtu =
+					JUMBO_FRAME_MAX_SIZE;
			}
			break;
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index 7d5bf6855426..309d1a3a8444 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -51,17 +51,10 @@ static uint32_t enabled_port_mask;
 static volatile bool force_quit;

-/****************/
-static const struct rte_eth_conf port_conf_default = {
-	.rxmode = {
-		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
-	},
-};
-
 static inline int
 port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 {
-	struct rte_eth_conf port_conf = port_conf_default;
+	struct rte_eth_conf port_conf;
	const uint16_t rx_rings = 1, tx_rings = 1;
	int retval;
	uint16_t q;
@@ -71,6 +64,8 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
	if (!rte_eth_dev_is_valid_port(port))
		return -1;

+	memset(&port_conf, 0, sizeof(struct rte_eth_conf));
+
	retval = rte_eth_dev_info_get(port, &dev_info);
	if (retval != 0) {
		printf("Error during getting device (port %u) info: %s\n",
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index c607eabb5b0c..3451125639f9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1249,15 +1249,15 @@ rte_eth_dev_tx_offload_name(uint64_t offload)
 static inline int
 eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
-			   uint32_t max_rx_pkt_len, uint32_t dev_info_size)
+			   uint32_t max_rx_pktlen, uint32_t dev_info_size)
 {
	int ret = 0;

	if (dev_info_size == 0) {
-		if (config_size != max_rx_pkt_len) {
+		if (config_size != max_rx_pktlen) {
			RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size"
				       " %u != %u is not allowed\n",
-				       port_id, config_size, max_rx_pkt_len);
+				       port_id, config_size, max_rx_pktlen);
			ret = -EINVAL;
		}
	} else if (config_size > dev_info_size) {
@@ -1325,6 +1325,19 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads,
	return ret;
 }

+static uint16_t
+eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)
+{
+	uint16_t overhead_len;
+
+	if (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)
+		overhead_len = max_rx_pktlen - max_mtu;
+	else
+		overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+
+	return overhead_len;
+}
+
 int
 rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
		      const struct rte_eth_conf *dev_conf)
@@ -1332,6 +1345,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
	struct rte_eth_dev *dev;
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf orig_conf;
+	uint32_t max_rx_pktlen;
	uint16_t overhead_len;
	int diag;
	int ret;
@@ -1375,11 +1389,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
		goto rollback;

	/* Get the real Ethernet overhead length */
-	if (dev_info.max_mtu != UINT16_MAX &&
-	    dev_info.max_rx_pktlen > dev_info.max_mtu)
-		overhead_len = dev_info.max_rx_pktlen - dev_info.max_mtu;
-	else
-		overhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+	overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
			dev_info.max_mtu);

	/* If number of queues specified by application for both Rx and Tx is
	 * zero, use driver preferred values. This cannot be done individually
@@ -1448,49 +1459,45 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
	}

	/*
-	 * If jumbo frames are enabled, check that the maximum RX packet
-	 * length is supported by the configured device.
+	 * Check that the maximum RX packet length is supported by the
+	 * configured device.
	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
-		if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
-			RTE_ETHDEV_LOG(ERR,
-				"Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
-				port_id, dev_conf->rxmode.max_rx_pkt_len,
-				dev_info.max_rx_pktlen);
-			ret = -EINVAL;
-			goto rollback;
-		} else if (dev_conf->rxmode.max_rx_pkt_len < RTE_ETHER_MIN_LEN) {
-			RTE_ETHDEV_LOG(ERR,
-				"Ethdev port_id=%u max_rx_pkt_len %u < min valid value %u\n",
-				port_id, dev_conf->rxmode.max_rx_pkt_len,
-				(unsigned int)RTE_ETHER_MIN_LEN);
-			ret = -EINVAL;
-			goto rollback;
-		}
+	if (dev_conf->rxmode.mtu == 0)
+		dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
+	max_rx_pktlen = dev->data->dev_conf.rxmode.mtu + overhead_len;
+	if (max_rx_pktlen > dev_info.max_rx_pktlen) {
+		RTE_ETHDEV_LOG(ERR,
+			"Ethdev port_id=%u max_rx_pktlen %u > max valid value %u\n",
+			port_id, max_rx_pktlen, dev_info.max_rx_pktlen);
+		ret = -EINVAL;
+		goto rollback;
+	} else if (max_rx_pktlen < RTE_ETHER_MIN_LEN) {
+		RTE_ETHDEV_LOG(ERR,
+			"Ethdev port_id=%u max_rx_pktlen %u < min valid value %u\n",
+			port_id, max_rx_pktlen, RTE_ETHER_MIN_LEN);
+		ret = -EINVAL;
+		goto rollback;
+	}

-		/* Scale the MTU size to adapt max_rx_pkt_len */
-		dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
-				overhead_len;
-	} else {
-		uint16_t pktlen = dev_conf->rxmode.max_rx_pkt_len;
-		if (pktlen < RTE_ETHER_MIN_MTU + overhead_len ||
-		    pktlen > RTE_ETHER_MTU + overhead_len)
+	if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+		if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
+		    dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
			/* Use default value */
-			dev->data->dev_conf.rxmode.max_rx_pkt_len =
-						RTE_ETHER_MTU + overhead_len;
+			dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
	}

+	dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
+
	/*
	 * If LRO is enabled, check that the maximum aggregated packet
	 * size is supported by the configured device.
	 */
	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
		if (dev_conf->rxmode.max_lro_pkt_size == 0)
-			dev->data->dev_conf.rxmode.max_lro_pkt_size =
-				dev->data->dev_conf.rxmode.max_rx_pkt_len;
+			dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
		ret = eth_dev_check_lro_pkt_size(port_id,
				dev->data->dev_conf.rxmode.max_lro_pkt_size,
-				dev->data->dev_conf.rxmode.max_rx_pkt_len,
+				max_rx_pktlen,
				dev_info.max_lro_pkt_size);
		if (ret != 0)
			goto rollback;
@@ -2142,13 +2149,20 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
	 * If LRO is enabled, check that the maximum aggregated packet
	 * size is supported by the configured device.
	 */
+	/* Get the real Ethernet overhead length */
	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+		uint16_t overhead_len;
+		uint32_t max_rx_pktlen;
+		int ret;
+
+		overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen,
+				dev_info.max_mtu);
+		max_rx_pktlen = dev->data->mtu + overhead_len;
		if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
-			dev->data->dev_conf.rxmode.max_lro_pkt_size =
-				dev->data->dev_conf.rxmode.max_rx_pkt_len;
-		int ret = eth_dev_check_lro_pkt_size(port_id,
+			dev->data->dev_conf.rxmode.max_lro_pkt_size = max_rx_pktlen;
+		ret = eth_dev_check_lro_pkt_size(port_id,
				dev->data->dev_conf.rxmode.max_lro_pkt_size,
-				dev->data->dev_conf.rxmode.max_rx_pkt_len,
+				max_rx_pktlen,
				dev_info.max_lro_pkt_size);
		if (ret != 0)
			return ret;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index faf3bd901d75..9f288f98329c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -410,7 +410,7 @@ enum rte_eth_tx_mq_mode {
 struct rte_eth_rxmode {
	/** The multi-queue packet distribution mode to be used, e.g. RSS. */
	enum rte_eth_rx_mq_mode mq_mode;
-	uint32_t max_rx_pkt_len;  /**< Only used if JUMBO_FRAME enabled. */
+	uint32_t mtu;  /**< Requested MTU. */
	/** Maximum allowed size of LRO aggregated packet. */
	uint32_t max_lro_pkt_size;
	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
diff --git a/lib/ethdev/rte_ethdev_trace.h b/lib/ethdev/rte_ethdev_trace.h
index 0036bda7465c..1491c815c312 100644
--- a/lib/ethdev/rte_ethdev_trace.h
+++ b/lib/ethdev/rte_ethdev_trace.h
@@ -28,7 +28,7 @@ RTE_TRACE_POINT(
	rte_trace_point_emit_u16(nb_tx_q);
	rte_trace_point_emit_u32(dev_conf->link_speeds);
	rte_trace_point_emit_u32(dev_conf->rxmode.mq_mode);
-	rte_trace_point_emit_u32(dev_conf->rxmode.max_rx_pkt_len);
+	rte_trace_point_emit_u32(dev_conf->rxmode.mtu);
	rte_trace_point_emit_u64(dev_conf->rxmode.offloads);
	rte_trace_point_emit_u32(dev_conf->txmode.mq_mode);
	rte_trace_point_emit_u64(dev_conf->txmode.offloads);

From patchwork Fri Jul 9 17:29:20 2021
X-Patchwork-Submitter: Ferruh Yigit
X-Patchwork-Id: 95630
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Ferruh Yigit
To: Somalapuram Amaranath, Ajit Khaparde, Somnath Kotur, Nithin Dabilpuram,
 Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Rahul Lakkireddy,
 Hemant Agrawal, Sachin Saxena, Haiyue Wang, Gagandeep Singh, Ziyang Xuan,
 Xiaoyun Wang, Guoyang Zhou, "Min Hu (Connor)", Yisen Zhuang, Lijun Ou,
 Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang, Rosen Xu,
 Shijith Thotton, Srisivasubramanian Srinivasan, Heinrich Kuhn,
 Harman Kalra, Jerin Jacob, Rasesh Mody, Devendra Singh Rawat,
 Igor Russkikh, Andrew Rybchenko, Maciej Czekaj, Jiawen Wu, Jian Wang,
 Thomas Monjalon
Cc: Ferruh Yigit, dev@dpdk.org
Date: Fri, 9 Jul 2021 18:29:20 +0100
Message-Id: <20210709172923.3369846-2-ferruh.yigit@intel.com>
In-Reply-To: <20210709172923.3369846-1-ferruh.yigit@intel.com>
References: <20210709172923.3369846-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [PATCH 2/4] ethdev: move jumbo frame offload check to library

Setting an MTU larger than RTE_ETHER_MTU requires jumbo frame support,
and the application should enable the jumbo frame offload for it.

When the jumbo frame offload is not enabled by the application but an
MTU larger than RTE_ETHER_MTU is requested, there are two options:
either fail, or enable the jumbo frame offload implicitly.

Enabling the jumbo frame offload implicitly is what many drivers
already do, since setting a big MTU value already implies it, and this
increases usability.

This patch moves this logic from the drivers to the library, both to
reduce the duplicated code in the drivers and to make the behaviour
more visible.
Signed-off-by: Ferruh Yigit
Reviewed-by: Andrew Rybchenko
Reviewed-by: Rosen Xu
Acked-by: Ajit Khaparde
---
 drivers/net/axgbe/axgbe_ethdev.c        |  9 ++-------
 drivers/net/bnxt/bnxt_ethdev.c          |  9 ++-------
 drivers/net/cnxk/cnxk_ethdev_ops.c      |  5 -----
 drivers/net/cxgbe/cxgbe_ethdev.c        |  8 --------
 drivers/net/dpaa/dpaa_ethdev.c          |  7 -------
 drivers/net/dpaa2/dpaa2_ethdev.c        |  7 -------
 drivers/net/e1000/em_ethdev.c           |  9 ++-------
 drivers/net/e1000/igb_ethdev.c          |  9 ++-------
 drivers/net/enetc/enetc_ethdev.c        |  7 -------
 drivers/net/hinic/hinic_pmd_ethdev.c    |  7 -------
 drivers/net/hns3/hns3_ethdev.c          |  8 --------
 drivers/net/hns3/hns3_ethdev_vf.c       |  6 ------
 drivers/net/i40e/i40e_ethdev.c          |  5 -----
 drivers/net/i40e/i40e_ethdev_vf.c       |  5 -----
 drivers/net/iavf/iavf_ethdev.c          |  7 -------
 drivers/net/ice/ice_ethdev.c            |  5 -----
 drivers/net/igc/igc_ethdev.c            |  9 ++-------
 drivers/net/ipn3ke/ipn3ke_representor.c |  5 -----
 drivers/net/ixgbe/ixgbe_ethdev.c        |  7 ++-----
 drivers/net/liquidio/lio_ethdev.c       |  7 -------
 drivers/net/nfp/nfp_net.c               |  6 ------
 drivers/net/octeontx/octeontx_ethdev.c  |  5 -----
 drivers/net/octeontx2/otx2_ethdev_ops.c |  5 -----
 drivers/net/qede/qede_ethdev.c          |  4 ----
 drivers/net/sfc/sfc_ethdev.c            |  9 ---------
 drivers/net/thunderx/nicvf_ethdev.c     |  6 ------
 drivers/net/txgbe/txgbe_ethdev.c        |  6 ------
 lib/ethdev/rte_ethdev.c                 | 18 +++++++++++++++++-
 28 files changed, 29 insertions(+), 171 deletions(-)

diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 76aeec077f2b..2960834b4539 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1492,15 +1492,10 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
			dev->data->port_id);
		return -EBUSY;
	}
-	if (mtu > RTE_ETHER_MTU) {
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (mtu > RTE_ETHER_MTU)
		val = 1;
-	} else {
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
		val = 0;
-	}
	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
	return 0;
 }
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 335505a106d5..4344a012f06e 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3018,15 +3018,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
		return -EINVAL;
	}

-	if (new_mtu > RTE_ETHER_MTU) {
+	if (new_mtu > RTE_ETHER_MTU)
		bp->flags |= BNXT_FLAG_JUMBO;
-		bp->eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	} else {
-		bp->eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
		bp->flags &= ~BNXT_FLAG_JUMBO;
-	}

	/* Is there a change in mtu setting? */
	if (eth_dev->data->mtu == new_mtu)
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 695d0d6fd3e2..349896f6a1bf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -439,11 +439,6 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
		plt_err("Failed to max Rx frame length, rc=%d", rc);
		goto exit;
	}
-
-	if (mtu > RTE_ETHER_MTU)
-		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
exit:
	return rc;
}
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 8cf61f12a8d6..0c9cc2f5bb3f 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -313,14 +313,6 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
	if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen)
		return -EINVAL;

-	/* set to jumbo mode if needed */
-	if (mtu > RTE_ETHER_MTU)
-		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
			    -1, -1, true);
	return err;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 56703e3a39e8..a444f749bb96 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -187,13 +187,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return -EINVAL;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	fman_if_set_maxfrm(dev->process_private, frame_size);

	return 0;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 6213bcbf3a43..be2858b3adac 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1470,13 +1470,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN)
		return -EINVAL;

-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	/* Set the Max Rx frame length as 'mtu' +
	 * Maximum Ethernet header length
	 */
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 6f418a36aa04..1b41dd04df5a 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1818,15 +1818,10 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	rctl = E1000_READ_REG(hw, E1000_RCTL);

	/* switch to jumbo mode if needed */
-	if (mtu > RTE_ETHER_MTU) {
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (mtu > RTE_ETHER_MTU)
		rctl |= E1000_RCTL_LPE;
-	} else {
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
		rctl &= ~E1000_RCTL_LPE;
-	}
	E1000_WRITE_REG(hw, E1000_RCTL, rctl);

	return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 35b517891d67..f15774eae20d 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -4401,15 +4401,10 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	rctl = E1000_READ_REG(hw, E1000_RCTL);

	/* switch to jumbo mode if needed */
-	if (mtu > RTE_ETHER_MTU) {
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (mtu > RTE_ETHER_MTU)
		rctl |= E1000_RCTL_LPE;
-	} else {
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
		rctl &= ~E1000_RCTL_LPE;
-	}
	E1000_WRITE_REG(hw, E1000_RCTL, rctl);

	E1000_WRITE_REG(hw, E1000_RLPML, frame_size);
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index cdb9783b5372..fbcbbb6c0533 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -677,13 +677,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return -EINVAL;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads &=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c737ef8d06d8..c1cde811a252 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1556,13 +1556,6 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
		return ret;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	nic_dev->mtu_size = mtu;

	return ret;
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 8bccdeddb2f7..868d381a4772 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -2597,7 +2597,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	struct hns3_adapter *hns = dev->data->dev_private;
	uint32_t frame_size = mtu + HNS3_ETH_OVERHEAD;
	struct hns3_hw *hw = &hns->hw;
-	bool is_jumbo_frame;
	int ret;

	if (dev->data->dev_started) {
@@ -2607,7 +2606,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	}

	rte_spinlock_lock(&hw->lock);
-	is_jumbo_frame = mtu > RTE_ETHER_MTU ? true : false;
	frame_size = RTE_MAX(frame_size, HNS3_DEFAULT_FRAME_LEN);

	/*
@@ -2622,12 +2620,6 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return ret;
	}

-	if (is_jumbo_frame)
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
	rte_spinlock_unlock(&hw->lock);

	return 0;
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index ca839fa55fa0..ff28cad53a03 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -920,12 +920,6 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		rte_spinlock_unlock(&hw->lock);
		return ret;
	}
-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
	rte_spinlock_unlock(&hw->lock);

	return 0;
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 1161f301b9ae..c5058f26dff2 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -11772,11 +11772,6 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return -EBUSY;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	return ret;
}
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 086a167ca672..2015a86ba5ca 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -2884,11 +2884,6 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return -EBUSY;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	return ret;
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 13c2329d85a7..ba5be45e8c5e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1446,13 +1446,6 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return -EBUSY;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	return ret;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bdda6fee3f8e..502e410b5641 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3806,11 +3806,6 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return -EBUSY;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	return 0;
}
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index b26723064b07..dcbc26b8186e 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -1592,15 +1592,10 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	}

	rctl = IGC_READ_REG(hw, IGC_RCTL);
-
-	/* switch to jumbo mode if needed */
-	if (mtu > RTE_ETHER_MTU) {
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (mtu > RTE_ETHER_MTU)
		rctl |= IGC_RCTL_LPE;
-	} else {
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
		rctl &= ~IGC_RCTL_LPE;
-	}

	IGC_WRITE_REG(hw, IGC_RCTL, rctl);
	IGC_WRITE_REG(hw, IGC_RLPML, frame_size);
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 3634c0c8c5f0..e8a33f04bd69 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2801,11 +2801,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
		return -EBUSY;
	}

-	if (mtu > RTE_ETHER_MTU)
-		dev_data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev_data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	if (rpst->i40e_pf_eth) {
		ret = rpst->i40e_pf_eth->dev_ops->mtu_set(rpst->i40e_pf_eth,
							mtu);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b9048ade3c35..c4696f34a7a1 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -5196,13 +5196,10 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
	hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);

	/* switch to jumbo mode if needed */
-	if (mtu > RTE_ETHER_MTU) {
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (mtu > RTE_ETHER_MTU)
		hlreg0 |= IXGBE_HLREG0_JUMBOEN;
-	} else {
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+	else
		hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
-	}
	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);

	maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index f0c165c89ba7..5c40f16bfa24 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -480,13 +480,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
		return -1;
	}

-	if (mtu > RTE_ETHER_MTU)
-		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	return 0;
}
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index ff531fdb2354..5cea035e1465 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -1550,12 +1550,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
		return -EBUSY;
	}

-	/* switch to jumbo mode if needed */
-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	/* writing to configuration space */
	nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 69c3bda12df8..fb65be2c2dc3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -552,11 +552,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
	if (rc)
		return rc;

-	if (mtu > RTE_ETHER_MTU)
-		nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	octeontx_log_info("Received pkt beyond maxlen %d will be dropped",
			  frame_size);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index ba282762b749..0c97ef7584a0 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -58,11 +58,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
	if (rc)
		return rc;

-	if (mtu > RTE_ETHER_MTU)
-		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
-
	return rc;
}
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 53b2c0ca10e3..71065f8072ac 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -2361,10 +2361,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
			fp->rxq->rx_buf_size = rc;
		}
	}
-	if (mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;

	if (!dev->data->dev_started && restart) {
		qede_dev_start(dev);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2afb13b77892..85209b5befbd 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1014,15 +1014,6 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
		}
	}

-	/*
-	 * The driver
does not use it, but other PMDs update jumbo frame - * flag when MTU is set. - */ - if (mtu > RTE_ETHER_MTU) { - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; - } - sfc_adapter_unlock(sa); sfc_log_init(sa, "done"); diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c index 1d1360faff66..0639889b2144 100644 --- a/drivers/net/thunderx/nicvf_ethdev.c +++ b/drivers/net/thunderx/nicvf_ethdev.c @@ -151,7 +151,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) struct nicvf *nic = nicvf_pmd_priv(dev); uint32_t buffsz, frame_size = mtu + NIC_HW_L2_OVERHEAD; size_t i; - struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; PMD_INIT_FUNC_TRACE(); @@ -176,11 +175,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) (frame_size + 2 * VLAN_TAG_SIZE > buffsz * NIC_HW_MAX_SEGS)) return -EINVAL; - if (mtu > RTE_ETHER_MTU) - rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; - else - rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - if (nicvf_mbox_update_hw_max_frs(nic, mtu)) return -EINVAL; diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index d773a81665d7..b1a3f9fbb84d 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -3482,12 +3482,6 @@ txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EINVAL; } - /* switch to jumbo mode if needed */ - if (mtu > RTE_ETHER_MTU) - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; - else - dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - if (hw->mode) wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK, TXGBE_FRAME_SIZE_MAX); diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 3451125639f9..d649a5dd69a9 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -3625,6 +3625,7 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) int ret; struct rte_eth_dev_info dev_info; struct 
rte_eth_dev *dev; + int is_jumbo_frame_capable = 0; RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); dev = &rte_eth_devices[port_id]; @@ -3643,12 +3644,27 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu) return -EINVAL; + + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) + is_jumbo_frame_capable = 1; } + if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0) + return -EINVAL; + ret = (*dev->dev_ops->mtu_set)(dev, mtu); - if (!ret) + if (!ret) { dev->data->mtu = mtu; + /* switch to jumbo mode if needed */ + if (mtu > RTE_ETHER_MTU) + dev->data->dev_conf.rxmode.offloads |= + DEV_RX_OFFLOAD_JUMBO_FRAME; + else + dev->data->dev_conf.rxmode.offloads &= + ~DEV_RX_OFFLOAD_JUMBO_FRAME; + } + return eth_err(port_id, ret); } From patchwork Fri Jul 9 17:29:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ferruh Yigit X-Patchwork-Id: 95631 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 225EEA0548; Fri, 9 Jul 2021 19:30:37 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 026AC416FB; Fri, 9 Jul 2021 19:30:37 +0200 (CEST) Received: from mga06.intel.com (mga06.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id 893F4410FD for ; Fri, 9 Jul 2021 19:30:34 +0200 (CEST) X-IronPort-AV: E=McAfee;i="6200,9189,10039"; a="270854527" X-IronPort-AV: E=Sophos;i="5.84,226,1620716400"; d="scan'208";a="270854527" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jul 2021 10:30:32 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.84,226,1620716400"; d="scan'208";a="488265805" Received: from 
silpixa00399752.ir.intel.com (HELO silpixa00399752.ger.corp.intel.com) ([10.237.222.27]) by FMSMGA003.fm.intel.com with ESMTP; 09 Jul 2021 10:30:22 -0700 From: Ferruh Yigit To: Somalapuram Amaranath , Ajit Khaparde , Somnath Kotur , Rahul Lakkireddy , Hemant Agrawal , Sachin Saxena , Haiyue Wang , Gagandeep Singh , Ziyang Xuan , Xiaoyun Wang , Guoyang Zhou , Beilei Xing , Jingjing Wu , Qiming Yang , Qi Zhang , Rosen Xu , Shijith Thotton , Srisivasubramanian Srinivasan , Heinrich Kuhn , Harman Kalra , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K , Rasesh Mody , Devendra Singh Rawat , Igor Russkikh , Maciej Czekaj , Jiawen Wu , Jian Wang , Thomas Monjalon , Andrew Rybchenko Cc: Ferruh Yigit , dev@dpdk.org Date: Fri, 9 Jul 2021 18:29:21 +0100 Message-Id: <20210709172923.3369846-3-ferruh.yigit@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210709172923.3369846-1-ferruh.yigit@intel.com> References: <20210709172923.3369846-1-ferruh.yigit@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 3/4] ethdev: move check to library for MTU set Move the requested MTU value check into the API to avoid duplicating the same check in each driver.
Signed-off-by: Ferruh Yigit Reviewed-by: Andrew Rybchenko Reviewed-by: Rosen Xu --- drivers/net/axgbe/axgbe_ethdev.c | 15 ++++----------- drivers/net/bnxt/bnxt_ethdev.c | 2 +- drivers/net/cxgbe/cxgbe_ethdev.c | 13 +------------ drivers/net/dpaa/dpaa_ethdev.c | 2 -- drivers/net/dpaa2/dpaa2_ethdev.c | 4 ---- drivers/net/e1000/em_ethdev.c | 10 ---------- drivers/net/e1000/igb_ethdev.c | 11 ----------- drivers/net/enetc/enetc_ethdev.c | 4 ---- drivers/net/hinic/hinic_pmd_ethdev.c | 8 +------- drivers/net/i40e/i40e_ethdev.c | 17 ++++------------- drivers/net/i40e/i40e_ethdev_vf.c | 17 ++++------------- drivers/net/iavf/iavf_ethdev.c | 10 ++-------- drivers/net/ice/ice_ethdev.c | 14 +++----------- drivers/net/igc/igc_ethdev.c | 5 ----- drivers/net/ipn3ke/ipn3ke_representor.c | 6 ------ drivers/net/liquidio/lio_ethdev.c | 10 ---------- drivers/net/nfp/nfp_net.c | 4 ---- drivers/net/octeontx/octeontx_ethdev.c | 4 ---- drivers/net/octeontx2/otx2_ethdev_ops.c | 5 ----- drivers/net/qede/qede_ethdev.c | 12 ------------ drivers/net/thunderx/nicvf_ethdev.c | 6 ------ drivers/net/txgbe/txgbe_ethdev.c | 10 ---------- lib/ethdev/rte_ethdev.c | 9 +++++++++ 23 files changed, 29 insertions(+), 169 deletions(-) diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c index 2960834b4539..c36cd7b1d2f0 100644 --- a/drivers/net/axgbe/axgbe_ethdev.c +++ b/drivers/net/axgbe/axgbe_ethdev.c @@ -1478,25 +1478,18 @@ axgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev) static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct rte_eth_dev_info dev_info; struct axgbe_port *pdata = dev->data->dev_private; - uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; - unsigned int val = 0; - axgbe_dev_info_get(dev, &dev_info); - /* check that mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) - return -EINVAL; + unsigned int val; + /* mtu setting is forbidden if port is start */ if 
(dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", dev->data->port_id); return -EBUSY; } - if (mtu > RTE_ETHER_MTU) - val = 1; - else - val = 0; + val = mtu > RTE_ETHER_MTU ? 1 : 0; AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val); + return 0; } diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 4344a012f06e..1e7da8ba61a6 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -2991,7 +2991,7 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu) uint32_t overhead = BNXT_MAX_PKT_LEN - BNXT_MAX_MTU; struct bnxt *bp = eth_dev->data->dev_private; uint32_t new_pkt_size; - uint32_t rc = 0; + uint32_t rc; uint32_t i; rc = is_bnxt_in_error(bp); diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c index 0c9cc2f5bb3f..70b879fed100 100644 --- a/drivers/net/cxgbe/cxgbe_ethdev.c +++ b/drivers/net/cxgbe/cxgbe_ethdev.c @@ -301,21 +301,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) { struct port_info *pi = eth_dev->data->dev_private; struct adapter *adapter = pi->adapter; - struct rte_eth_dev_info dev_info; - int err; uint16_t new_mtu = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; - err = cxgbe_dev_info_get(eth_dev, &dev_info); - if (err != 0) - return err; - - /* Must accommodate at least RTE_ETHER_MIN_MTU */ - if (mtu < RTE_ETHER_MIN_MTU || new_mtu > dev_info.max_rx_pktlen) - return -EINVAL; - - err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1, + return t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1, -1, -1, true); - return err; } /* diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index a444f749bb96..60dd4f67fc26 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -167,8 +167,6 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) PMD_INIT_FUNC_TRACE(); - if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA_MAX_RX_PKT_LEN) 
- return -EINVAL; /* * Refuse mtu that requires the support of scattered packets * when this feature has not been enabled before. diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index be2858b3adac..6b44b0557e6a 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -1466,10 +1466,6 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) return -EINVAL; } - /* check that mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || frame_size > DPAA2_MAX_RX_PKT_LEN) - return -EINVAL; - /* Set the Max Rx frame length as 'mtu' + * Maximum Ethernet header length */ diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 1b41dd04df5a..6ebef55588bc 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -1788,22 +1788,12 @@ eth_em_default_mac_addr_set(struct rte_eth_dev *dev, static int eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { - struct rte_eth_dev_info dev_info; struct e1000_hw *hw; uint32_t frame_size; uint32_t rctl; - int ret; - - ret = eth_em_infos_get(dev, &dev_info); - if (ret != 0) - return ret; frame_size = mtu + E1000_ETH_OVERHEAD; - /* check that mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) - return -EINVAL; - /* * If device is started, refuse mtu that requires the support of * scattered packets when this feature has not been enabled before. 
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c index f15774eae20d..fb69210ba9f4 100644 --- a/drivers/net/e1000/igb_ethdev.c +++ b/drivers/net/e1000/igb_ethdev.c @@ -4368,9 +4368,7 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { uint32_t rctl; struct e1000_hw *hw; - struct rte_eth_dev_info dev_info; uint32_t frame_size = mtu + E1000_ETH_OVERHEAD; - int ret; hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -4379,15 +4377,6 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (hw->mac.type == e1000_82571) return -ENOTSUP; #endif - ret = eth_igb_infos_get(dev, &dev_info); - if (ret != 0) - return ret; - - /* check that mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || - frame_size > dev_info.max_rx_pktlen) - return -EINVAL; - /* * If device is started, refuse mtu that requires the support of * scattered packets when this feature has not been enabled before. diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c index fbcbbb6c0533..a7372c1787c7 100644 --- a/drivers/net/enetc/enetc_ethdev.c +++ b/drivers/net/enetc/enetc_ethdev.c @@ -662,10 +662,6 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) struct enetc_hw *enetc_hw = &hw->hw; uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; - /* check that mtu is within the allowed range */ - if (mtu < ENETC_MAC_MINFRM_SIZE || frame_size > ENETC_MAC_MAXFRM_SIZE) - return -EINVAL; - /* * Refuse mtu that requires the support of scattered packets * when this feature has not been enabled before. 
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c index c1cde811a252..ce0b52c718ab 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.c +++ b/drivers/net/hinic/hinic_pmd_ethdev.c @@ -1539,17 +1539,11 @@ static void hinic_deinit_mac_addr(struct rte_eth_dev *eth_dev) static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) { struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); - int ret = 0; + int ret; PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d", dev->data->port_id, mtu, HINIC_MTU_TO_PKTLEN(mtu)); - if (mtu < HINIC_MIN_MTU_SIZE || mtu > HINIC_MAX_MTU_SIZE) { - PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d", - mtu, HINIC_MIN_MTU_SIZE, HINIC_MAX_MTU_SIZE); - return -EINVAL; - } - ret = hinic_set_port_mtu(nic_dev->hwdev, mtu); if (ret) { PMD_DRV_LOG(ERR, "Set port mtu failed, ret: %d", ret); diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index c5058f26dff2..dad151eac5f1 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -11754,25 +11754,16 @@ static int i40e_set_default_mac_addr(struct rte_eth_dev *dev, } static int -i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) +i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused) { - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct rte_eth_dev_data *dev_data = pf->dev_data; - uint32_t frame_size = mtu + I40E_ETH_OVERHEAD; - int ret = 0; - - /* check if mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX) - return -EINVAL; - /* mtu setting is forbidden if port is start */ - if (dev_data->dev_started) { + if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", - dev_data->port_id); + dev->data->port_id); return -EBUSY; } - return ret; + return 0; } /* Restore ethertype filter */ diff --git a/drivers/net/i40e/i40e_ethdev_vf.c 
b/drivers/net/i40e/i40e_ethdev_vf.c index 2015a86ba5ca..f7f9d44ef181 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -2866,25 +2866,16 @@ i40evf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, } static int -i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) +i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused) { - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct rte_eth_dev_data *dev_data = vf->dev_data; - uint32_t frame_size = mtu + I40E_ETH_OVERHEAD; - int ret = 0; - - /* check if mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || frame_size > I40E_FRAME_SIZE_MAX) - return -EINVAL; - /* mtu setting is forbidden if port is start */ - if (dev_data->dev_started) { + if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", - dev_data->port_id); + dev->data->port_id); return -EBUSY; } - return ret; + return 0; } static int diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index ba5be45e8c5e..049671ef3da9 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -1432,21 +1432,15 @@ iavf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, } static int -iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) +iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused) { - uint32_t frame_size = mtu + IAVF_ETH_OVERHEAD; - int ret = 0; - - if (mtu < RTE_ETHER_MIN_MTU || frame_size > IAVF_FRAME_SIZE_MAX) - return -EINVAL; - /* mtu setting is forbidden if port is start */ if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port must be stopped before configuration"); return -EBUSY; } - return ret; + return 0; } static int diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 502e410b5641..c1a96d3de183 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3788,21 +3788,13 @@ ice_dev_set_link_down(struct rte_eth_dev *dev) } 
static int -ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) +ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused) { - struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct rte_eth_dev_data *dev_data = pf->dev_data; - uint32_t frame_size = mtu + ICE_ETH_OVERHEAD; - - /* check if mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || frame_size > ICE_FRAME_SIZE_MAX) - return -EINVAL; - /* mtu setting is forbidden if port is start */ - if (dev_data->dev_started) { + if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", - dev_data->port_id); + dev->data->port_id); return -EBUSY; } diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c index dcbc26b8186e..e279ae1fff1d 100644 --- a/drivers/net/igc/igc_ethdev.c +++ b/drivers/net/igc/igc_ethdev.c @@ -1576,11 +1576,6 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (IGC_READ_REG(hw, IGC_CTRL_EXT) & IGC_CTRL_EXT_EXT_VLAN) frame_size += VLAN_TAG_SIZE; - /* check that mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || - frame_size > MAX_RX_JUMBO_FRAME_SIZE) - return -EINVAL; - /* * If device is started, refuse mtu that requires the support of * scattered packets when this feature has not been enabled before. 
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c index e8a33f04bd69..377b96c0236a 100644 --- a/drivers/net/ipn3ke/ipn3ke_representor.c +++ b/drivers/net/ipn3ke/ipn3ke_representor.c @@ -2778,12 +2778,6 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu) int ret = 0; struct ipn3ke_rpst *rpst = IPN3KE_DEV_PRIVATE_TO_RPST(ethdev); struct rte_eth_dev_data *dev_data = ethdev->data; - uint32_t frame_size = mtu + IPN3KE_ETH_OVERHEAD; - - /* check if mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || - frame_size > IPN3KE_MAC_FRAME_SIZE_MAX) - return -EINVAL; /* mtu setting is forbidden if port is start */ /* make sure NIC port is stopped */ diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c index 5c40f16bfa24..0fd8b247aabf 100644 --- a/drivers/net/liquidio/lio_ethdev.c +++ b/drivers/net/liquidio/lio_ethdev.c @@ -434,7 +434,6 @@ static int lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) { struct lio_device *lio_dev = LIO_DEV(eth_dev); - uint16_t pf_mtu = lio_dev->linfo.link.s.mtu; struct lio_dev_ctrl_cmd ctrl_cmd; struct lio_ctrl_pkt ctrl_pkt; @@ -446,15 +445,6 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) return -EINVAL; } - /* check if VF MTU is within allowed range. - * New value should not exceed PF MTU. 
- */ - if (mtu < RTE_ETHER_MIN_MTU || mtu > pf_mtu) { - lio_dev_err(lio_dev, "VF MTU should be >= %d and <= %d\n", - RTE_ETHER_MIN_MTU, pf_mtu); - return -EINVAL; - } - /* flush added to prevent cmd failure * incase the queue is full */ diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c index 5cea035e1465..8efeacc03943 100644 --- a/drivers/net/nfp/nfp_net.c +++ b/drivers/net/nfp/nfp_net.c @@ -1539,10 +1539,6 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* check that mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || (uint32_t)mtu > hw->max_mtu) - return -EINVAL; - /* mtu setting is forbidden if port is started */ if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c index fb65be2c2dc3..b2355fa695bc 100644 --- a/drivers/net/octeontx/octeontx_ethdev.c +++ b/drivers/net/octeontx/octeontx_ethdev.c @@ -524,10 +524,6 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) struct rte_eth_dev_data *data = eth_dev->data; int rc = 0; - /* Check if MTU is within the allowed range */ - if (frame_size < OCCTX_MIN_FRS || frame_size > OCCTX_MAX_FRS) - return -EINVAL; - buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM; /* Refuse MTU that requires the support of scattered packets diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 0c97ef7584a0..cba03b4bb9b8 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -18,11 +18,6 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) int rc; frame_size += NIX_TIMESYNC_RX_OFFSET * otx2_ethdev_is_ptp_en(dev); - - /* Check if MTU is within the allowed range */ - if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS) - return -EINVAL; - buffsz = data->min_rx_buf_size - 
RTE_PKTMBUF_HEADROOM; /* Refuse MTU that requires the support of scattered packets diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c index 71065f8072ac..098e56e9822f 100644 --- a/drivers/net/qede/qede_ethdev.c +++ b/drivers/net/qede/qede_ethdev.c @@ -2307,7 +2307,6 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) { struct qede_dev *qdev = QEDE_INIT_QDEV(dev); struct ecore_dev *edev = QEDE_INIT_EDEV(qdev); - struct rte_eth_dev_info dev_info = {0}; struct qede_fastpath *fp; uint32_t frame_size; uint16_t bufsz; @@ -2315,19 +2314,8 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) int i, rc; PMD_INIT_FUNC_TRACE(edev); - rc = qede_dev_info_get(dev, &dev_info); - if (rc != 0) { - DP_ERR(edev, "Error during getting ethernet device info\n"); - return rc; - } frame_size = mtu + QEDE_MAX_ETHER_HDR_LEN; - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) { - DP_ERR(edev, "MTU %u out of range, %u is maximum allowable\n", - mtu, dev_info.max_rx_pktlen - RTE_ETHER_HDR_LEN - - QEDE_ETH_OVERHEAD); - return -EINVAL; - } if (!dev->data->scattered_rx && frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) { DP_INFO(edev, "MTU greater than minimum RX buffer size of %u\n", diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c index 0639889b2144..ac8477cbd7f4 100644 --- a/drivers/net/thunderx/nicvf_ethdev.c +++ b/drivers/net/thunderx/nicvf_ethdev.c @@ -154,12 +154,6 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu) PMD_INIT_FUNC_TRACE(); - if (frame_size > NIC_HW_MAX_FRS) - return -EINVAL; - - if (frame_size < NIC_HW_MIN_FRS) - return -EINVAL; - buffsz = dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM; /* diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c index b1a3f9fbb84d..41b0e63cd79e 100644 --- a/drivers/net/txgbe/txgbe_ethdev.c +++ b/drivers/net/txgbe/txgbe_ethdev.c @@ -3459,18 +3459,8 @@ static int 
txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) { struct txgbe_hw *hw = TXGBE_DEV_HW(dev); - struct rte_eth_dev_info dev_info; uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN; struct rte_eth_dev_data *dev_data = dev->data; - int ret; - - ret = txgbe_dev_info_get(dev, &dev_info); - if (ret != 0) - return ret; - - /* check that mtu is within the allowed range */ - if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) - return -EINVAL; /* If device is started, refuse mtu that requires the support of * scattered packets when this feature has not been enabled before. diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index d649a5dd69a9..41c9e630e4d4 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -3638,6 +3638,9 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) * which relies on dev->dev_ops->dev_infos_get. */ if (*dev->dev_ops->dev_infos_get != NULL) { + uint16_t overhead_len; + uint32_t frame_size; + ret = rte_eth_dev_info_get(port_id, &dev_info); if (ret != 0) return ret; @@ -3645,6 +3648,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu) return -EINVAL; + overhead_len = eth_dev_get_overhead_len(dev_info.max_rx_pktlen, + dev_info.max_mtu); + frame_size = mtu + overhead_len; + if (mtu < RTE_ETHER_MIN_MTU || frame_size > dev_info.max_rx_pktlen) + return -EINVAL; + if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) is_jumbo_frame_capable = 1; } From patchwork Fri Jul 9 17:29:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ferruh Yigit X-Patchwork-Id: 95632 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D3264A0548; Fri, 9 Jul 2021 19:30:56 +0200 (CEST) 
From: Ferruh Yigit To: Jerin Jacob , Xiaoyun Li , Ajit Khaparde , Somnath Kotur , Igor Russkikh , Pavel Belous , Somalapuram Amaranath , Rasesh Mody , Shahed Shaikh , Chas Williams , "Min Hu (Connor)" , Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Rahul Lakkireddy , Hemant Agrawal , Sachin Saxena , Haiyue Wang , Marcin Wojtas , Michal Krawczyk , Guy Tzalik , Evgeny Schemeilin , Igor Chauskin , Gagandeep Singh , John Daley , Hyong Youb Kim , Gaetan Rivet , Qi Zhang , Xiao Wang , Ziyang Xuan , Xiaoyun Wang , Guoyang Zhou , Yisen Zhuang , Lijun Ou , Beilei Xing , Jingjing Wu , Qiming Yang , Andrew Boyer , Rosen Xu , Matan Azrad , Shahaf Shuler , Viacheslav Ovsiienko , Zyta Szpak , Liron Himi , Heinrich Kuhn , Harman Kalra , Nalla Pradeep , Radha Mohan Chintakuntla , Veerasenareddy Burru , Devendra Singh Rawat , Andrew Rybchenko , Maciej Czekaj , Jiawen Wu , Jian Wang , Maxime Coquelin , Chenbo Xia , Yong Wang , Konstantin Ananyev , Radu Nicolau , Akhil Goyal , David Hunt , John McNamara , Thomas Monjalon Cc: Ferruh Yigit , dev@dpdk.org Date: Fri, 9 Jul 2021 18:29:22 +0100 Message-Id:
<20210709172923.3369846-4-ferruh.yigit@intel.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210709172923.3369846-1-ferruh.yigit@intel.com> References: <20210709172923.3369846-1-ferruh.yigit@intel.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH 4/4] ethdev: remove jumbo offload flag Remove the 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag. Instead of drivers announcing this capability, the application can deduce it by checking the reported 'dev_info.max_mtu' or 'dev_info.max_rx_pktlen'. And instead of the application explicitly setting this flag to enable jumbo frames, the driver can deduce it by comparing the requested 'mtu' against 'RTE_ETHER_MTU'. Remove this additional configuration step for simplification. Signed-off-by: Ferruh Yigit Acked-by: Andrew Rybchenko Reviewed-by: Rosen Xu --- app/test-eventdev/test_pipeline_common.c | 2 - app/test-pmd/cmdline.c | 2 +- app/test-pmd/config.c | 24 +--------- app/test-pmd/testpmd.c | 46 +------------------ app/test-pmd/testpmd.h | 2 +- doc/guides/howto/debug_troubleshoot.rst | 2 - doc/guides/nics/bnxt.rst | 1 - doc/guides/nics/features.rst | 3 +- drivers/net/atlantic/atl_ethdev.c | 1 - drivers/net/axgbe/axgbe_ethdev.c | 1 - drivers/net/bnx2x/bnx2x_ethdev.c | 1 - drivers/net/bnxt/bnxt.h | 1 - drivers/net/bnxt/bnxt_ethdev.c | 10 +--- drivers/net/bonding/rte_eth_bond_pmd.c | 8 ---- drivers/net/cnxk/cnxk_ethdev.h | 5 +- drivers/net/cnxk/cnxk_ethdev_ops.c | 1 - drivers/net/cxgbe/cxgbe.h | 1 - drivers/net/cxgbe/cxgbe_ethdev.c | 8 ---- drivers/net/cxgbe/sge.c | 5 +- drivers/net/dpaa/dpaa_ethdev.c | 2 - drivers/net/dpaa2/dpaa2_ethdev.c | 2 - drivers/net/e1000/e1000_ethdev.h | 4 +- drivers/net/e1000/em_ethdev.c | 4 +- drivers/net/e1000/em_rxtx.c | 19 +++----- drivers/net/e1000/igb_rxtx.c | 3 +-
drivers/net/ena/ena_ethdev.c | 2 - drivers/net/enetc/enetc_ethdev.c | 3 +- drivers/net/enic/enic_res.c | 1 - drivers/net/failsafe/failsafe_ops.c | 2 - drivers/net/fm10k/fm10k_ethdev.c | 1 - drivers/net/hinic/hinic_pmd_ethdev.c | 1 - drivers/net/hns3/hns3_ethdev.c | 1 - drivers/net/hns3/hns3_ethdev_vf.c | 1 - drivers/net/i40e/i40e_ethdev.c | 1 - drivers/net/i40e/i40e_ethdev_vf.c | 3 +- drivers/net/i40e/i40e_rxtx.c | 2 +- drivers/net/iavf/iavf_ethdev.c | 3 +- drivers/net/ice/ice_dcf_ethdev.c | 3 +- drivers/net/ice/ice_dcf_vf_representor.c | 1 - drivers/net/ice/ice_ethdev.c | 1 - drivers/net/ice/ice_rxtx.c | 3 +- drivers/net/igc/igc_ethdev.h | 1 - drivers/net/igc/igc_txrx.c | 2 +- drivers/net/ionic/ionic_ethdev.c | 1 - drivers/net/ipn3ke/ipn3ke_representor.c | 3 +- drivers/net/ixgbe/ixgbe_ethdev.c | 5 +- drivers/net/ixgbe/ixgbe_pf.c | 9 +--- drivers/net/ixgbe/ixgbe_rxtx.c | 3 +- drivers/net/mlx4/mlx4_rxq.c | 1 - drivers/net/mlx5/mlx5_rxq.c | 1 - drivers/net/mvneta/mvneta_ethdev.h | 3 +- drivers/net/mvpp2/mrvl_ethdev.c | 1 - drivers/net/nfp/nfp_net.c | 6 +-- drivers/net/octeontx/octeontx_ethdev.h | 1 - drivers/net/octeontx2/otx2_ethdev.h | 1 - drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +- drivers/net/octeontx_ep/otx_ep_rxtx.c | 6 --- drivers/net/qede/qede_ethdev.c | 1 - drivers/net/sfc/sfc_rx.c | 2 - drivers/net/thunderx/nicvf_ethdev.h | 1 - drivers/net/txgbe/txgbe_rxtx.c | 1 - drivers/net/virtio/virtio_ethdev.c | 1 - drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 - examples/ip_fragmentation/main.c | 3 +- examples/ip_reassembly/main.c | 3 +- examples/ipsec-secgw/ipsec-secgw.c | 2 - examples/ipv4_multicast/main.c | 1 - examples/kni/main.c | 5 -- examples/l3fwd-acl/main.c | 2 - examples/l3fwd-graph/main.c | 1 - examples/l3fwd-power/main.c | 2 - examples/l3fwd/main.c | 1 - .../performance-thread/l3fwd-thread/main.c | 2 - examples/vhost/main.c | 2 - lib/ethdev/rte_ethdev.c | 26 +---------- lib/ethdev/rte_ethdev.h | 1 - 76 files changed, 42 insertions(+), 250 deletions(-) diff 
--git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c index 5fcea74b4d43..2775e72c580d 100644 --- a/app/test-eventdev/test_pipeline_common.c +++ b/app/test-eventdev/test_pipeline_common.c @@ -199,8 +199,6 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt) port_conf.rxmode.mtu = opt->max_pkt_sz - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN; - if (port_conf.rxmode.mtu > RTE_ETHER_MTU) - port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; t->internal_port = 1; RTE_ETH_FOREACH_DEV(i) { diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 8bdc042f6e8e..c0b6132d64e8 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -1921,7 +1921,7 @@ cmd_config_max_pkt_len_parsed(void *parsed_result, return; } - update_jumbo_frame_offload(port_id, res->value); + update_mtu_from_frame_size(port_id, res->value); } init_port_config(); diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index a87265d7638b..23a48557b676 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -1136,39 +1136,19 @@ port_reg_set(portid_t port_id, uint32_t reg_off, uint32_t reg_v) void port_mtu_set(portid_t port_id, uint16_t mtu) { + struct rte_port *port = &ports[port_id]; int diag; - struct rte_port *rte_port = &ports[port_id]; - struct rte_eth_dev_info dev_info; - int ret; if (port_id_is_invalid(port_id, ENABLED_WARN)) return; - ret = eth_dev_info_get_print_err(port_id, &dev_info); - if (ret != 0) - return; - - if (mtu > dev_info.max_mtu || mtu < dev_info.min_mtu) { - printf("Set MTU failed. MTU:%u is not in valid range, min:%u - max:%u\n", - mtu, dev_info.min_mtu, dev_info.max_mtu); - return; - } diag = rte_eth_dev_set_mtu(port_id, mtu); if (diag) { printf("Set MTU failed. 
diag=%d\n", diag); return; } - rte_port->dev_conf.rxmode.mtu = mtu; - - if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) { - if (mtu > RTE_ETHER_MTU) { - rte_port->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; - } else - rte_port->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; - } + port->dev_conf.rxmode.mtu = mtu; } /* Generic flow management functions. */ diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 2c79cae05664..92feadefab59 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -1473,11 +1473,6 @@ init_config(void) rte_exit(EXIT_FAILURE, "rte_eth_dev_info_get() failed\n"); - ret = update_jumbo_frame_offload(pid, 0); - if (ret != 0) - printf("Updating jumbo frame offload failed for port %u\n", - pid); - if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)) port->dev_conf.txmode.offloads &= @@ -3364,24 +3359,18 @@ rxtx_port_config(struct rte_port *port) } /* - * Helper function to arrange max_rx_pktlen value and JUMBO_FRAME offload, - * MTU is also aligned. + * Helper function to set MTU from frame size * * port->dev_info should be set before calling this function. * - * if 'max_rx_pktlen' is zero, it is set to current device value, "MTU + - * ETH_OVERHEAD". This is useful to update flags but not MTU value. 
- * * return 0 on success, negative on error */ int -update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen) +update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen) { struct rte_port *port = &ports[portid]; uint32_t eth_overhead; - uint64_t rx_offloads; uint16_t mtu, new_mtu; - bool on; eth_overhead = get_eth_overhead(&port->dev_info); @@ -3390,39 +3379,8 @@ update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen) return -1; } - if (max_rx_pktlen == 0) - max_rx_pktlen = mtu + eth_overhead; - - rx_offloads = port->dev_conf.rxmode.offloads; new_mtu = max_rx_pktlen - eth_overhead; - if (new_mtu <= RTE_ETHER_MTU) { - rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - on = false; - } else { - if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) { - printf("Frame size (%u) is not supported by port %u\n", - max_rx_pktlen, portid); - return -1; - } - rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; - on = true; - } - - if (rx_offloads != port->dev_conf.rxmode.offloads) { - uint16_t qid; - - port->dev_conf.rxmode.offloads = rx_offloads; - - /* Apply JUMBO_FRAME offload configuration to Rx queue(s) */ - for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) { - if (on) - port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; - else - port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; - } - } - if (mtu == new_mtu) return 0; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 42143f85924f..b94bf668dc4d 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -1012,7 +1012,7 @@ uint16_t tx_pkt_set_dynf(uint16_t port_id, __rte_unused uint16_t queue, __rte_unused void *user_param); void add_tx_dynf_callback(portid_t portid); void remove_tx_dynf_callback(portid_t portid); -int update_jumbo_frame_offload(portid_t portid, uint32_t max_rx_pktlen); +int update_mtu_from_frame_size(portid_t portid, uint32_t max_rx_pktlen); /* * Work-around of a compilation error with ICC on invocations of the diff --git 
a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst index 457ac441429a..df69fa8bcc24 100644 --- a/doc/guides/howto/debug_troubleshoot.rst +++ b/doc/guides/howto/debug_troubleshoot.rst @@ -71,8 +71,6 @@ RX Port and associated core :numref:`dtg_rx_rate`. * Identify if port Speed and Duplex is matching to desired values with ``rte_eth_link_get``. - * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``. - * Check promiscuous mode if the drops do not occur for unique MAC address with ``rte_eth_promiscuous_get``. diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst index feb0c6a7657a..e6f1628402fc 100644 --- a/doc/guides/nics/bnxt.rst +++ b/doc/guides/nics/bnxt.rst @@ -886,7 +886,6 @@ processing. This improved performance is derived from a number of optimizations:   DEV_RX_OFFLOAD_VLAN_STRIP   DEV_RX_OFFLOAD_KEEP_CRC -   DEV_RX_OFFLOAD_JUMBO_FRAME   DEV_RX_OFFLOAD_IPV4_CKSUM   DEV_RX_OFFLOAD_UDP_CKSUM   DEV_RX_OFFLOAD_TCP_CKSUM diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst index c98242f3b72f..a077c30644d2 100644 --- a/doc/guides/nics/features.rst +++ b/doc/guides/nics/features.rst @@ -165,8 +165,7 @@ Jumbo frame Supports Rx jumbo frames. -* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``. - ``dev_conf.rxmode.mtu``. +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``dev_conf.rxmode.mtu``. * **[related] rte_eth_dev_info**: ``max_rx_pktlen``. * **[related] API**: ``rte_eth_dev_set_mtu()``. 
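The replacement logic this patch applies across the drivers — jumbo handling derived from the MTU rather than a dedicated offload flag — can be sketched as below. This is an illustrative stand-in, not DPDK API: both helper names are hypothetical, and `RTE_ETHER_MTU` is redefined locally instead of coming from `<rte_ether.h>`.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical local stand-in for the constant from <rte_ether.h>. */
#define RTE_ETHER_MTU 1500u

/* Driver side: with DEV_RX_OFFLOAD_JUMBO_FRAME removed, jumbo mode is
 * inferred by comparing the configured MTU against RTE_ETHER_MTU. */
bool mtu_requires_jumbo(uint16_t mtu)
{
	return mtu > RTE_ETHER_MTU;
}

/* Application side: jumbo capability is deduced from the maximum MTU
 * the device reports (dev_info.max_mtu in the real ethdev API). */
bool dev_supports_jumbo(uint16_t max_mtu)
{
	return max_mtu > RTE_ETHER_MTU;
}
```

This mirrors the per-driver hunks below, e.g. bnxt setting `BNXT_FLAG_JUMBO` and e1000 setting `E1000_RCTL_LPE` whenever `dev->data->mtu > RTE_ETHER_MTU`.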
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c index 3f654c071566..5a198f53fce7 100644 --- a/drivers/net/atlantic/atl_ethdev.c +++ b/drivers/net/atlantic/atl_ethdev.c @@ -158,7 +158,6 @@ static struct rte_pci_driver rte_atl_pmd = { | DEV_RX_OFFLOAD_IPV4_CKSUM \ | DEV_RX_OFFLOAD_UDP_CKSUM \ | DEV_RX_OFFLOAD_TCP_CKSUM \ - | DEV_RX_OFFLOAD_JUMBO_FRAME \ | DEV_RX_OFFLOAD_MACSEC_STRIP \ | DEV_RX_OFFLOAD_VLAN_FILTER) diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c index c36cd7b1d2f0..0bc9e5eeeb10 100644 --- a/drivers/net/axgbe/axgbe_ethdev.c +++ b/drivers/net/axgbe/axgbe_ethdev.c @@ -1217,7 +1217,6 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_KEEP_CRC; diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c index 009a94e9a8fa..50ff04bb2241 100644 --- a/drivers/net/bnx2x/bnx2x_ethdev.c +++ b/drivers/net/bnx2x/bnx2x_ethdev.c @@ -535,7 +535,6 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_rx_pktlen = BNX2X_MAX_RX_PKT_LEN; dev_info->max_mac_addrs = BNX2X_MAX_MAC_ADDRS; dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G; - dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME; dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL; dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA; diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index e93a7eb933b4..9ad7821b4736 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -591,7 +591,6 @@ struct bnxt_rep_info { DEV_RX_OFFLOAD_TCP_CKSUM | \ DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \ DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \ - DEV_RX_OFFLOAD_JUMBO_FRAME | \ DEV_RX_OFFLOAD_KEEP_CRC | \ DEV_RX_OFFLOAD_VLAN_EXTEND | \ DEV_RX_OFFLOAD_TCP_LRO | \ diff --git a/drivers/net/bnxt/bnxt_ethdev.c 
b/drivers/net/bnxt/bnxt_ethdev.c index 1e7da8ba61a6..c4fd27bd92de 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -728,15 +728,10 @@ static int bnxt_start_nic(struct bnxt *bp) unsigned int i, j; int rc; - if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) { - bp->eth_dev->data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; + if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) bp->flags |= BNXT_FLAG_JUMBO; - } else { - bp->eth_dev->data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; + else bp->flags &= ~BNXT_FLAG_JUMBO; - } /* THOR does not support ring groups. * But we will use the array to save RSS context IDs. @@ -1221,7 +1216,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev) if (eth_dev->data->dev_conf.rxmode.offloads & ~(DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_KEEP_CRC | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c index b2a1833e3f91..844ac1581a61 100644 --- a/drivers/net/bonding/rte_eth_bond_pmd.c +++ b/drivers/net/bonding/rte_eth_bond_pmd.c @@ -1731,14 +1731,6 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev, slave_eth_dev->data->dev_conf.rxmode.mtu = bonded_eth_dev->data->dev_conf.rxmode.mtu; - if (bonded_eth_dev->data->dev_conf.rxmode.offloads & - DEV_RX_OFFLOAD_JUMBO_FRAME) - slave_eth_dev->data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; - else - slave_eth_dev->data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; - nb_rx_queues = bonded_eth_dev->data->nb_rx_queues; nb_tx_queues = bonded_eth_dev->data->nb_tx_queues; diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h index 4eead0390532..aa147eee45c9 100644 --- a/drivers/net/cnxk/cnxk_ethdev.h +++ b/drivers/net/cnxk/cnxk_ethdev.h @@ -75,9 +75,8 @@ #define CNXK_NIX_RX_OFFLOAD_CAPA \ (DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM | \ 
DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | \ - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \ - DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP | \ - DEV_RX_OFFLOAD_VLAN_STRIP) + DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_RSS_HASH | \ + DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_VLAN_STRIP) #define RSS_IPV4_ENABLE \ (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP | \ diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c index 349896f6a1bf..d0924df76152 100644 --- a/drivers/net/cnxk/cnxk_ethdev_ops.c +++ b/drivers/net/cnxk/cnxk_ethdev_ops.c @@ -92,7 +92,6 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, {DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"}, {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"}, {DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"}, - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"}, {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}, {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"}, {DEV_RX_OFFLOAD_SECURITY, " Security,"}, diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h index 7c89a028bf16..37625c5bfb69 100644 --- a/drivers/net/cxgbe/cxgbe.h +++ b/drivers/net/cxgbe/cxgbe.h @@ -51,7 +51,6 @@ DEV_RX_OFFLOAD_IPV4_CKSUM | \ DEV_RX_OFFLOAD_UDP_CKSUM | \ DEV_RX_OFFLOAD_TCP_CKSUM | \ - DEV_RX_OFFLOAD_JUMBO_FRAME | \ DEV_RX_OFFLOAD_SCATTER | \ DEV_RX_OFFLOAD_RSS_HASH) diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c index 70b879fed100..1374f32b6826 100644 --- a/drivers/net/cxgbe/cxgbe_ethdev.c +++ b/drivers/net/cxgbe/cxgbe_ethdev.c @@ -661,14 +661,6 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev, if ((&rxq->fl) != NULL) rxq->fl.size = temp_nb_desc; - /* Set to jumbo mode if necessary */ - if (eth_dev->data->mtu > RTE_ETHER_MTU) - eth_dev->data->dev_conf.rxmode.offloads |= - DEV_RX_OFFLOAD_JUMBO_FRAME; - else - eth_dev->data->dev_conf.rxmode.offloads &= - ~DEV_RX_OFFLOAD_JUMBO_FRAME; - err = 
t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx, &rxq->fl, NULL, is_pf4(adapter) ? diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c index 830f5192474d..21b8fe61c9a7 100644 --- a/drivers/net/cxgbe/sge.c +++ b/drivers/net/cxgbe/sge.c @@ -365,13 +365,10 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q, struct rte_mbuf *buf_bulk[n]; int ret, i; struct rte_pktmbuf_pool_private *mbp_priv; - u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads & - DEV_RX_OFFLOAD_JUMBO_FRAME; /* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */ mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool); - if (jumbo_en && - ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000)) + if ((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000) buf_size_idx = RX_LARGE_MTU_BUF; ret = rte_mempool_get_bulk(rxq->rspq.mb_pool, (void *)buf_bulk, n); diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c index 60dd4f67fc26..9cc808b767ea 100644 --- a/drivers/net/dpaa/dpaa_ethdev.c +++ b/drivers/net/dpaa/dpaa_ethdev.c @@ -54,7 +54,6 @@ /* Supported Rx offloads */ static uint64_t dev_rx_offloads_sup = - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER; /* Rx offloads which cannot be disabled */ @@ -592,7 +591,6 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev, uint64_t flags; const char *output; } rx_offload_map[] = { - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"}, {DEV_RX_OFFLOAD_SCATTER, " Scattered,"}, {DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"}, {DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"}, diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c index 6b44b0557e6a..53508972a4c2 100644 --- a/drivers/net/dpaa2/dpaa2_ethdev.c +++ b/drivers/net/dpaa2/dpaa2_ethdev.c @@ -44,7 +44,6 @@ static uint64_t dev_rx_offloads_sup = DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_VLAN_FILTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | 
DEV_RX_OFFLOAD_TIMESTAMP; /* Rx offloads which cannot be disabled */ @@ -298,7 +297,6 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev, {DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"}, {DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"}, {DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"}, - {DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"}, {DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"}, {DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}, {DEV_RX_OFFLOAD_SCATTER, " Scattered,"} diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h index 3b4d9c3ee6f4..1ae78fe71f02 100644 --- a/drivers/net/e1000/e1000_ethdev.h +++ b/drivers/net/e1000/e1000_ethdev.h @@ -468,8 +468,8 @@ void eth_em_rx_queue_release(void *rxq); void em_dev_clear_queues(struct rte_eth_dev *dev); void em_dev_free_queues(struct rte_eth_dev *dev); -uint64_t em_get_rx_port_offloads_capa(struct rte_eth_dev *dev); -uint64_t em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev); +uint64_t em_get_rx_port_offloads_capa(void); +uint64_t em_get_rx_queue_offloads_capa(void); int eth_em_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, uint16_t nb_rx_desc, unsigned int socket_id, diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c index 6ebef55588bc..8a752eef52cf 100644 --- a/drivers/net/e1000/em_ethdev.c +++ b/drivers/net/e1000/em_ethdev.c @@ -1083,8 +1083,8 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_rx_queues = 1; dev_info->max_tx_queues = 1; - dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(dev); - dev_info->rx_offload_capa = em_get_rx_port_offloads_capa(dev) | + dev_info->rx_queue_offload_capa = em_get_rx_queue_offloads_capa(); + dev_info->rx_offload_capa = em_get_rx_port_offloads_capa() | dev_info->rx_queue_offload_capa; dev_info->tx_queue_offload_capa = em_get_tx_queue_offloads_capa(dev); dev_info->tx_offload_capa = em_get_tx_port_offloads_capa(dev) | diff --git a/drivers/net/e1000/em_rxtx.c 
b/drivers/net/e1000/em_rxtx.c index dfd8f2fd0074..e061f80a906a 100644 --- a/drivers/net/e1000/em_rxtx.c +++ b/drivers/net/e1000/em_rxtx.c @@ -1359,12 +1359,9 @@ em_reset_rx_queue(struct em_rx_queue *rxq) } uint64_t -em_get_rx_port_offloads_capa(struct rte_eth_dev *dev) +em_get_rx_port_offloads_capa(void) { uint64_t rx_offload_capa; - uint32_t max_rx_pktlen; - - max_rx_pktlen = em_get_max_pktlen(dev); rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP | @@ -1374,14 +1371,12 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev) DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_KEEP_CRC | DEV_RX_OFFLOAD_SCATTER; - if (max_rx_pktlen > RTE_ETHER_MAX_LEN) - rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME; return rx_offload_capa; } uint64_t -em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev) +em_get_rx_queue_offloads_capa(void) { uint64_t rx_queue_offload_capa; @@ -1390,7 +1385,7 @@ em_get_rx_queue_offloads_capa(struct rte_eth_dev *dev) * capability be same to per port queue offloading capability * for better convenience. */ - rx_queue_offload_capa = em_get_rx_port_offloads_capa(dev); + rx_queue_offload_capa = em_get_rx_port_offloads_capa(); return rx_queue_offload_capa; } @@ -1839,7 +1834,7 @@ eth_em_rx_init(struct rte_eth_dev *dev) * to avoid splitting packets that don't fit into * one buffer. 
*/ - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME || + if (dev->data->mtu > RTE_ETHER_MTU || rctl_bsize < RTE_ETHER_MAX_LEN) { if (!dev->data->scattered_rx) PMD_INIT_LOG(DEBUG, "forcing scatter mode"); @@ -1874,14 +1869,14 @@ eth_em_rx_init(struct rte_eth_dev *dev) if ((hw->mac.type == e1000_ich9lan || hw->mac.type == e1000_pch2lan || hw->mac.type == e1000_ich10lan) && - rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + dev->data->mtu > RTE_ETHER_MTU) { u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0)); E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3); E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13)); } if (hw->mac.type == e1000_pch2lan) { - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) + if (dev->data->mtu > RTE_ETHER_MTU) e1000_lv_jumbo_workaround_ich8lan(hw, TRUE); else e1000_lv_jumbo_workaround_ich8lan(hw, FALSE); @@ -1908,7 +1903,7 @@ eth_em_rx_init(struct rte_eth_dev *dev) /* * Configure support of jumbo frames, if any. */ - if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) + if (dev->data->mtu > RTE_ETHER_MTU) rctl |= E1000_RCTL_LPE; else rctl &= ~E1000_RCTL_LPE; diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c index de12997b4bdd..9998d4ea4179 100644 --- a/drivers/net/e1000/igb_rxtx.c +++ b/drivers/net/e1000/igb_rxtx.c @@ -1640,7 +1640,6 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev) DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_KEEP_CRC | DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_RSS_HASH; @@ -2344,7 +2343,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev) * Configure support of jumbo frames, if any. 
*/ max_len = dev->data->mtu + E1000_ETH_OVERHEAD; - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + if (dev->data->mtu > RTE_ETHER_MTU) { rctl |= E1000_RCTL_LPE; /* diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index e9b718786a39..4322dce260f5 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -2042,8 +2042,6 @@ static int ena_infos_get(struct rte_eth_dev *dev, DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM; - rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME; - /* Inform framework about available features */ dev_info->rx_offload_capa = rx_feat; dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH; diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c index a7372c1787c7..6457677d300a 100644 --- a/drivers/net/enetc/enetc_ethdev.c +++ b/drivers/net/enetc/enetc_ethdev.c @@ -210,8 +210,7 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused, (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | - DEV_RX_OFFLOAD_KEEP_CRC | - DEV_RX_OFFLOAD_JUMBO_FRAME); + DEV_RX_OFFLOAD_KEEP_CRC); return 0; } diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c index a8f5332a407f..6a4758ea8e8a 100644 --- a/drivers/net/enic/enic_res.c +++ b/drivers/net/enic/enic_res.c @@ -209,7 +209,6 @@ int enic_get_vnic_config(struct enic *enic) DEV_TX_OFFLOAD_TCP_TSO; enic->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c index 5ff33e03e034..47c5efe9ea77 100644 --- a/drivers/net/failsafe/failsafe_ops.c +++ b/drivers/net/failsafe/failsafe_ops.c @@ -1193,7 +1193,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev, DEV_RX_OFFLOAD_HEADER_SPLIT | DEV_RX_OFFLOAD_VLAN_FILTER | DEV_RX_OFFLOAD_VLAN_EXTEND | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER | 
DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_SECURITY | @@ -1211,7 +1210,6 @@ fs_dev_infos_get(struct rte_eth_dev *dev, DEV_RX_OFFLOAD_HEADER_SPLIT | DEV_RX_OFFLOAD_VLAN_FILTER | DEV_RX_OFFLOAD_VLAN_EXTEND | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_TIMESTAMP | DEV_RX_OFFLOAD_SECURITY | diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c index 5e4b361ca6c0..093021246286 100644 --- a/drivers/net/fm10k/fm10k_ethdev.c +++ b/drivers/net/fm10k/fm10k_ethdev.c @@ -1779,7 +1779,6 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev) DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_HEADER_SPLIT | DEV_RX_OFFLOAD_RSS_HASH); } diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c index ce0b52c718ab..b1563350ec0e 100644 --- a/drivers/net/hinic/hinic_pmd_ethdev.c +++ b/drivers/net/hinic/hinic_pmd_ethdev.c @@ -747,7 +747,6 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info) DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_VLAN_FILTER | DEV_RX_OFFLOAD_SCATTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_TCP_LRO | DEV_RX_OFFLOAD_RSS_HASH; diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c index 868d381a4772..0c58c55844b0 100644 --- a/drivers/net/hns3/hns3_ethdev.c +++ b/drivers/net/hns3/hns3_ethdev.c @@ -2717,7 +2717,6 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info) DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_VLAN_FILTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TCP_LRO); info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c index ff28cad53a03..c488e03f23a4 100644 --- a/drivers/net/hns3/hns3_ethdev_vf.c +++ b/drivers/net/hns3/hns3_ethdev_vf.c @@ -956,7 +956,6 @@ 
hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info) DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_VLAN_FILTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TCP_LRO); info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index dad151eac5f1..ad7802f63031 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -3758,7 +3758,6 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) DEV_RX_OFFLOAD_SCATTER | DEV_RX_OFFLOAD_VLAN_EXTEND | DEV_RX_OFFLOAD_VLAN_FILTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_RSS_HASH; dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE; diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c index f7f9d44ef181..1c314e2ffdd0 100644 --- a/drivers/net/i40e/i40e_ethdev_vf.c +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -1932,7 +1932,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq) /** * Check if the jumbo frame and maximum packet length are set correctly */ - if (dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + if (dev_data->mtu > RTE_ETHER_MTU) { if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN || rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) { PMD_DRV_LOG(ERR, "maximum packet length must be " @@ -2378,7 +2378,6 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_VLAN_FILTER; dev_info->tx_queue_offload_capa = 0; diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index aa43796ef1af..a421acf8f6b6 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -2906,7 +2906,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq) rxq->max_pkt_len = 
RTE_MIN(hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len, data->mtu + I40E_ETH_OVERHEAD); - if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + if (data->mtu > RTE_ETHER_MTU) { if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN || rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) { PMD_DRV_LOG(ERR, "maximum packet length must " diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c index 049671ef3da9..f156add80e0d 100644 --- a/drivers/net/iavf/iavf_ethdev.c +++ b/drivers/net/iavf/iavf_ethdev.c @@ -574,7 +574,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq) /* Check if the jumbo frame and maximum packet length are set * correctly. */ - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) { + if (dev->data->mtu > RTE_ETHER_MTU) { if (max_pkt_len <= IAVF_ETH_MAX_LEN || max_pkt_len > IAVF_FRAME_SIZE_MAX) { PMD_DRV_LOG(ERR, "maximum packet length must be " @@ -939,7 +939,6 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER | - DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_VLAN_FILTER | DEV_RX_OFFLOAD_RSS_HASH; diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 34b6c9b2a7ed..72fdcc29c28a 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -65,7 +65,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq) /* Check if the jumbo frame and maximum packet length are set * correctly. 
 	 */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev_data->mtu > RTE_ETHER_MTU) {
 		if (max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -664,7 +664,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
 		DEV_RX_OFFLOAD_VLAN_FILTER |
 		DEV_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e90a..07843c6dbc92 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -141,7 +141,6 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
 		DEV_RX_OFFLOAD_VLAN_FILTER |
 		DEV_RX_OFFLOAD_VLAN_EXTEND |
 		DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c1a96d3de183..a17c11e95e0b 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3491,7 +3491,6 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_offload_capa =
 		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
 		DEV_RX_OFFLOAD_KEEP_CRC |
 		DEV_RX_OFFLOAD_SCATTER |
 		DEV_RX_OFFLOAD_VLAN_FILTER;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index a3de4172e2bc..a7b0915dabfc 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -259,7 +259,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 	struct ice_rlan_ctx rx_ctx;
 	enum ice_status err;
 	uint16_t buf_size;
-	struct rte_eth_rxmode *rxmode = &dev_data->dev_conf.rxmode;
 	uint32_t rxdid = ICE_RXDID_COMMS_OVS;
 	uint32_t regval;
 	uint32_t frame_size = dev_data->mtu + ICE_ETH_OVERHEAD;
@@ -273,7 +272,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 		RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
 			frame_size);
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev_data->mtu > RTE_ETHER_MTU) {
 		if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must "
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index b3473b5b1646..5e6c2ff30157 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -73,7 +73,6 @@ extern "C" {
 	DEV_RX_OFFLOAD_UDP_CKSUM | \
 	DEV_RX_OFFLOAD_TCP_CKSUM | \
 	DEV_RX_OFFLOAD_SCTP_CKSUM | \
-	DEV_RX_OFFLOAD_JUMBO_FRAME | \
 	DEV_RX_OFFLOAD_KEEP_CRC | \
 	DEV_RX_OFFLOAD_SCATTER | \
 	DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index d80808a002f5..30940857eac0 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
 	/* Configure support of jumbo frames, if any.
 	 */
-	if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (dev->data->mtu > RTE_ETHER_MTU)
 		rctl |= IGC_RCTL_LPE;
 	else
 		rctl &= ~IGC_RCTL_LPE;
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 97447a10e46a..795980cb1ca5 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -414,7 +414,6 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 		DEV_RX_OFFLOAD_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_UDP_CKSUM |
 		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
 		DEV_RX_OFFLOAD_VLAN_FILTER |
 		DEV_RX_OFFLOAD_VLAN_STRIP |
 		DEV_RX_OFFLOAD_SCATTER |
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 377b96c0236a..4e5d234e8c7d 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -74,8 +74,7 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME;
+		DEV_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index c4696f34a7a1..8c180f77a04e 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -6229,7 +6229,6 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
 			   uint16_t queue_idx, uint16_t tx_rate)
 {
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct rte_eth_rxmode *rxmode;
 	uint32_t rf_dec, rf_int;
 	uint32_t bcnrc_val;
 	uint16_t link_speed = dev->data->dev_link.link_speed;
@@ -6251,14 +6250,12 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
 		bcnrc_val = 0;
 	}
-	rxmode = &dev->data->dev_conf.rxmode;
 	/*
 	 * Set global transmit compensation time to the MMW_SIZE in RTTBCNRM
 	 * register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
 	 * set as 0x4.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
-	    (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+	if (dev->data->mtu + IXGBE_ETH_OVERHEAD >= IXGBE_MAX_JUMBO_FRAME_SIZE)
 		IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_JUMBO_FRAME);
 	else
 		IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM, IXGBE_MMW_SIZE_DEFAULT);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 9bcbc445f2d0..6e64f9a0ade2 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -600,15 +600,10 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 		IXGBE_MHADD_MFS_MASK) >> IXGBE_MHADD_MFS_SHIFT;
 	if (max_frs < max_frame) {
 		hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
-		if (max_frame > IXGBE_ETH_MAX_LEN) {
-			dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
+		if (max_frame > IXGBE_ETH_MAX_LEN)
 			hlreg0 |= IXGBE_HLREG0_JUMBOEN;
-		} else {
-			dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		else
 			hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
-		}
 		IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
 		max_frs = max_frame << IXGBE_MHADD_MFS_SHIFT;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 5e32a6ce6940..1e3944127148 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -3021,7 +3021,6 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 		   DEV_RX_OFFLOAD_UDP_CKSUM |
 		   DEV_RX_OFFLOAD_TCP_CKSUM |
 		   DEV_RX_OFFLOAD_KEEP_CRC |
-		   DEV_RX_OFFLOAD_JUMBO_FRAME |
 		   DEV_RX_OFFLOAD_VLAN_FILTER |
 		   DEV_RX_OFFLOAD_SCATTER |
 		   DEV_RX_OFFLOAD_RSS_HASH;
@@ -5083,7 +5082,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure jumbo frame support, if any.
 	 */
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->mtu > RTE_ETHER_MTU) {
 		hlreg0 |= IXGBE_HLREG0_JUMBOEN;
 		maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
 		maxfrs &= 0x0000FFFF;
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 4a5cfd22aa71..e73112c44749 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -684,7 +684,6 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 {
 	uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
 			    DEV_RX_OFFLOAD_KEEP_CRC |
-			    DEV_RX_OFFLOAD_JUMBO_FRAME |
 			    DEV_RX_OFFLOAD_RSS_HASH;
 	if (priv->hw_csum)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index bd16dde6de13..b7828ef4ebb5 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -335,7 +335,6 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 	struct mlx5_dev_config *config = &priv->config;
 	uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
 			     DEV_RX_OFFLOAD_TIMESTAMP |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME |
 			     DEV_RX_OFFLOAD_RSS_HASH);
 	if (!config->mprq.enabled)
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..6428f9ff7931 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,8 +54,7 @@
 #define MRVL_NETA_MRU_TO_MTU(mru)	((mru) - MRVL_NETA_HDRS_LEN)
 /** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
-			    DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_CHECKSUM)
 /** Tx offloads capabilities */
 #define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 9d578b4ffa5d..7782b56d24d2 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -59,7 +59,6 @@
 /** Port Rx offload capabilities */
 #define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
-			  DEV_RX_OFFLOAD_JUMBO_FRAME | \
 			  DEV_RX_OFFLOAD_CHECKSUM)
 /** Port Tx offloads capabilities */
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 8efeacc03943..4e860edad12c 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -643,8 +643,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
 	}
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
-		hw->mtu = dev->data->mtu;
+	hw->mtu = dev->data->mtu;
 	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
@@ -1307,9 +1306,6 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
 	};
-	/* All NFP devices support jumbo frames */
-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
 	if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
 		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..3a02824e3948 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -60,7 +60,6 @@
 			DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
 			DEV_RX_OFFLOAD_SCATTER | \
 			DEV_RX_OFFLOAD_SCATTER | \
-			DEV_RX_OFFLOAD_JUMBO_FRAME | \
 			DEV_RX_OFFLOAD_VLAN_FILTER)
 #define OCTEONTX_TX_OFFLOADS ( \
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index e95d933a866d..25f6cbe42512 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -147,7 +147,6 @@
 	DEV_RX_OFFLOAD_SCTP_CKSUM | \
 	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
 	DEV_RX_OFFLOAD_SCATTER | \
-	DEV_RX_OFFLOAD_JUMBO_FRAME | \
 	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
 	DEV_RX_OFFLOAD_VLAN_STRIP | \
 	DEV_RX_OFFLOAD_VLAN_FILTER | \
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..c65041a16ba7 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -39,8 +39,7 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
 	devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
-	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
-	devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
 	devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
 	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..aa4dcd33cc79 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -953,12 +953,6 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
 	droq_pkt->l3_len = hdr_lens.l3_len;
 	droq_pkt->l4_len = hdr_lens.l4_len;
-	if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
-		rte_pktmbuf_free(droq_pkt);
-		goto oq_read_fail;
-	}
-
 	if (droq_pkt->nb_segs > 1 &&
 	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
 		rte_pktmbuf_free(droq_pkt);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 098e56e9822f..abd4b998bd3a 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1392,7 +1392,6 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 				     DEV_RX_OFFLOAD_TCP_LRO |
 				     DEV_RX_OFFLOAD_KEEP_CRC |
 				     DEV_RX_OFFLOAD_SCATTER |
-				     DEV_RX_OFFLOAD_JUMBO_FRAME |
 				     DEV_RX_OFFLOAD_VLAN_FILTER |
 				     DEV_RX_OFFLOAD_VLAN_STRIP |
 				     DEV_RX_OFFLOAD_RSS_HASH);
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 461afc516812..3174b9150340 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -915,8 +915,6 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
 {
 	uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
-	caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-
 	return caps & sfc_rx_get_offload_mask(sa);
 }
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..5d38750d6313 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -40,7 +40,6 @@
 #define NICVF_RX_OFFLOAD_CAPA ( \
 	DEV_RX_OFFLOAD_CHECKSUM | \
 	DEV_RX_OFFLOAD_VLAN_STRIP | \
-	DEV_RX_OFFLOAD_JUMBO_FRAME | \
 	DEV_RX_OFFLOAD_SCATTER | \
 	DEV_RX_OFFLOAD_RSS_HASH)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index c6cd3803c434..0ce754fb25b0 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1953,7 +1953,6 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 		   DEV_RX_OFFLOAD_UDP_CKSUM |
 		   DEV_RX_OFFLOAD_TCP_CKSUM |
 		   DEV_RX_OFFLOAD_KEEP_CRC |
-		   DEV_RX_OFFLOAD_JUMBO_FRAME |
 		   DEV_RX_OFFLOAD_VLAN_FILTER |
 		   DEV_RX_OFFLOAD_RSS_HASH |
 		   DEV_RX_OFFLOAD_SCATTER;
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 9491cc2669f7..efb76ccf63e6 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2442,7 +2442,6 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	host_features = VIRTIO_OPS(hw)->get_features(hw);
 	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
 		dev_info->rx_offload_capa |=
 			DEV_RX_OFFLOAD_TCP_CKSUM |
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 5bffbb8a0e03..60f83aaaedb8 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -56,7 +56,6 @@
 	 DEV_RX_OFFLOAD_UDP_CKSUM | \
 	 DEV_RX_OFFLOAD_TCP_CKSUM | \
 	 DEV_RX_OFFLOAD_TCP_LRO | \
-	 DEV_RX_OFFLOAD_JUMBO_FRAME | \
 	 DEV_RX_OFFLOAD_RSS_HASH)
 int vmxnet3_segs_dynfield_offset = -1;
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f97287ce2243..7b5632fba63a 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -149,8 +149,7 @@ static struct rte_eth_conf port_conf = {
 		.mtu = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
 		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME),
+			     DEV_RX_OFFLOAD_SCATTER),
 	},
 	.txmode = {
 		.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index f868e5d906c7..a1e5e6db6115 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -164,8 +164,7 @@ static struct rte_eth_conf port_conf = {
 		.mq_mode = ETH_MQ_RX_RSS,
 		.mtu = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME),
+		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 			.rss_conf = {
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f8a1f544c21d..bcddd30c486a 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -2207,8 +2207,6 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 	printf("Creating queues: nb_rx_queue=%d nb_tx_queue=%u...\n",
 			nb_rx_queue, nb_tx_queue);
-	if (mtu_size > RTE_ETHER_MTU)
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 	local_port_conf.rxmode.mtu = mtu_size;
 	if (multi_seg_required()) {
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 989d70ae257a..c38e310b5691 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -109,7 +109,6 @@ static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.mtu = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
 	},
 	.txmode = {
 		.mq_mode = ETH_MQ_TX_NONE,
diff --git a/examples/kni/main.c b/examples/kni/main.c
index c10814c6a94f..0fd945e7e0b2 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -790,11 +790,6 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
 	}
 	memcpy(&conf, &port_conf, sizeof(conf));
-	/* Set new MTU */
-	if (new_mtu > RTE_ETHER_MTU)
-		conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	else
-		conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
 	conf.rxmode.mtu = new_mtu;
 	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index 913037d5f835..81d1066c473b 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -1813,8 +1813,6 @@ parse_args(int argc, char **argv)
 			};
 			printf("jumbo frame is enabled\n");
-			port_conf.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
 			port_conf.txmode.offloads |=
 					DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index ddcb2fbc995d..7b197f49d992 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -493,7 +493,6 @@ parse_args(int argc, char **argv)
 			const struct option lenopts = {"max-pkt-len",
 						       required_argument, 0, 0};
-			port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 			port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
 			/*
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 02221a79fabf..a95cb9966dc8 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -1952,8 +1952,6 @@ parse_args(int argc, char **argv)
 						0, 0};
 				printf("jumbo frame is enabled \n");
-				port_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
 				port_conf.txmode.offloads |=
 						DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 80b5b93d5f0d..21c9fc73d9b8 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -702,7 +702,6 @@ parse_args(int argc, char **argv)
 				"max-pkt-len", required_argument, 0, 0
 			};
-			port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
 			port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
 			/*
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 1960f00ad28d..1d9a2d5cccbe 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -2986,8 +2986,6 @@ parse_args(int argc, char **argv)
 						required_argument, 0, 0};
 				printf("jumbo frame is enabled - disabling simple TX path\n");
-				port_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
 				port_conf.txmode.offloads |=
 						DEV_TX_OFFLOAD_MULTI_SEGS;
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index e27712727f6a..a2bdc8928fcb 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -637,8 +637,6 @@ us_vhost_parse_args(int argc, char **argv)
 			}
 			mergeable = !!ret;
 			if (ret) {
-				vmdq_conf_default.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
 				vmdq_conf_default.rxmode.mtu =
 					JUMBO_FRAME_MAX_SIZE;
 			}
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 41c9e630e4d4..a0f20a71aefe 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -118,7 +118,6 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(HEADER_SPLIT),
 	RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
 	RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
-	RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
 	RTE_RX_OFFLOAD_BIT2STR(SCATTER),
 	RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
 	RTE_RX_OFFLOAD_BIT2STR(SECURITY),
@@ -1479,13 +1478,6 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		goto rollback;
 	}
-	if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
-		if (dev->data->dev_conf.rxmode.mtu < RTE_ETHER_MIN_MTU ||
-		    dev->data->dev_conf.rxmode.mtu > RTE_ETHER_MTU)
-			/* Use default value */
-			dev->data->dev_conf.rxmode.mtu = RTE_ETHER_MTU;
-	}
-
 	dev->data->mtu = dev->data->dev_conf.rxmode.mtu;
 	/*
@@ -3625,7 +3617,6 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
 	int ret;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_dev *dev;
-	int is_jumbo_frame_capable = 0;
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -3653,27 +3644,12 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu)
 		frame_size = mtu + overhead_len;
 		if (mtu < RTE_ETHER_MIN_MTU ||
 		    frame_size > dev_info.max_rx_pktlen)
 			return -EINVAL;
-
-		if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME)
-			is_jumbo_frame_capable = 1;
 	}
-	if (mtu > RTE_ETHER_MTU && is_jumbo_frame_capable == 0)
-		return -EINVAL;
-
 	ret = (*dev->dev_ops->mtu_set)(dev, mtu);
-	if (!ret) {
+	if (!ret)
 		dev->data->mtu = mtu;
-
-		/* switch to jumbo mode if needed */
-		if (mtu > RTE_ETHER_MTU)
-			dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
-		else
-			dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
-	}
-
 	return eth_err(port_id, ret);
 }
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 9f288f98329c..b31e660de23e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1359,7 +1359,6 @@ struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
 #define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
 #define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
 #define DEV_RX_OFFLOAD_SCATTER		0x00002000
 /**
  * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME