From patchwork Sat Mar 6 15:33:49 2021
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 88626
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
Date: Sat, 6 Mar 2021 21:03:49 +0530
Message-ID: <20210306153404.10781-30-ndabilpuram@marvell.com>
In-Reply-To: <20210306153404.10781-1-ndabilpuram@marvell.com>
References: <20210306153404.10781-1-ndabilpuram@marvell.com>
Subject: [dpdk-dev] [PATCH 29/44] net/cnxk: add Rx/Tx burst mode get ops
List-Id: DPDK patches and discussions

From: Sunil Kumar Kori

Implement the ethdev operations to get Rx and Tx burst mode information.
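For context, these driver callbacks are reached through the generic ethdev
burst mode API. Below is a minimal sketch (not part of this patch) of how an
application might query and print the reported strings; the helper name and
the port/queue IDs are placeholders:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print the Rx/Tx burst mode strings reported by the PMD for one queue. */
    static void
    show_burst_mode(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_eth_burst_mode mode;

            /* Invokes the driver's .rx_burst_mode_get callback when implemented */
            if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
                    printf("port %u rxq %u: %s\n", port_id, queue_id, mode.info);

            /* Invokes the driver's .tx_burst_mode_get callback when implemented */
            if (rte_eth_tx_burst_mode_get(port_id, queue_id, &mode) == 0)
                    printf("port %u txq %u: %s\n", port_id, queue_id, mode.info);
    }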
Signed-off-by: Sunil Kumar Kori
---
 doc/guides/nics/features/cnxk.ini     |   1 +
 doc/guides/nics/features/cnxk_vec.ini |   1 +
 doc/guides/nics/features/cnxk_vf.ini  |   1 +
 drivers/net/cnxk/cnxk_ethdev.c        |   2 +
 drivers/net/cnxk/cnxk_ethdev.h        |   4 ++
 drivers/net/cnxk/cnxk_ethdev_ops.c    | 127 ++++++++++++++++++++++++++++++++++
 6 files changed, 136 insertions(+)

diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index b41af2d..298f167 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -12,6 +12,7 @@ Link status = Y
 Link status event = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info = Y
 Fast mbuf free = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index 7fe8018..a673cc1 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -12,6 +12,7 @@ Link status = Y
 Link status event = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info = Y
 Fast mbuf free = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini
index 5cc9f3f..335d082 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -11,6 +11,7 @@ Link status = Y
 Link status event = Y
 Runtime Rx queue setup = Y
 Runtime Tx queue setup = Y
+Burst mode info = Y
 Fast mbuf free = Y
 Free Tx mbuf on demand = Y
 Queue start/stop = Y
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 77a8c09..28fcf8c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1066,6 +1066,8 @@ struct eth_dev_ops cnxk_eth_dev_ops = {
 	.promiscuous_disable = cnxk_nix_promisc_disable,
 	.allmulticast_enable = cnxk_nix_allmulticast_enable,
 	.allmulticast_disable = cnxk_nix_allmulticast_disable,
+	.rx_burst_mode_get = cnxk_nix_rx_burst_mode_get,
+	.tx_burst_mode_get = cnxk_nix_tx_burst_mode_get,
 };
 
 static int
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 09031e9..481ede9 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -219,6 +219,10 @@ int cnxk_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
 int cnxk_nix_info_get(struct rte_eth_dev *eth_dev,
 		      struct rte_eth_dev_info *dev_info);
+int cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			       struct rte_eth_burst_mode *mode);
+int cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			       struct rte_eth_burst_mode *mode);
 int cnxk_nix_configure(struct rte_eth_dev *eth_dev);
 int cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			    uint16_t nb_desc, uint16_t fp_tx_q_sz,
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index 61ecbab..7ae961a 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -72,6 +72,133 @@ cnxk_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 }
 
 int
+cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			   struct rte_eth_burst_mode *mode)
+{
+	ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	const struct burst_info {
+		uint64_t flags;
+		const char *output;
+	} rx_offload_map[] = {
+		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+		{DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+		{DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+		{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+		{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+		{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+		{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+		{DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
+		{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
+		{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+		{DEV_RX_OFFLOAD_SECURITY, " Security,"},
+		{DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+		{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+		{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+	};
+	static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
+						 "Scalar, Rx Offloads:"
+	};
+	uint32_t i;
+
+	PLT_SET_USED(queue_id);
+
+	/* Update burst mode info */
+	rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
+			 str_size - bytes);
+	if (rc < 0)
+		goto done;
+
+	bytes += rc;
+
+	/* Update Rx offload info */
+	for (i = 0; i < RTE_DIM(rx_offload_map); i++) {
+		if (dev->rx_offloads & rx_offload_map[i].flags) {
+			rc = rte_strscpy(mode->info + bytes,
+					 rx_offload_map[i].output,
+					 str_size - bytes);
+			if (rc < 0)
+				goto done;
+
+			bytes += rc;
+		}
+	}
+
+done:
+	return 0;
+}
+
+int
+cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			   struct rte_eth_burst_mode *mode)
+{
+	ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	const struct burst_info {
+		uint64_t flags;
+		const char *output;
+	} tx_offload_map[] = {
+		{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+		{DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+		{DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+		{DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+		{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+		{DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+		{DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+		{DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+		{DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+		{DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+		{DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+		{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+		{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+		{DEV_TX_OFFLOAD_SECURITY, " Security,"},
+		{DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+		{DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+		{DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+	};
+	static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
+						 "Scalar, Tx Offloads:"
+	};
+	uint32_t i;
+
+	PLT_SET_USED(queue_id);
+
+	/* Update burst mode info */
+	rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
+			 str_size - bytes);
+	if (rc < 0)
+		goto done;
+
+	bytes += rc;
+
+	/* Update Tx offload info */
+	for (i = 0; i < RTE_DIM(tx_offload_map); i++) {
+		if (dev->tx_offloads & tx_offload_map[i].flags) {
+			rc = rte_strscpy(mode->info + bytes,
+					 tx_offload_map[i].output,
+					 str_size - bytes);
+			if (rc < 0)
+				goto done;
+
+			bytes += rc;
+		}
+	}
+
+done:
+	return 0;
+}
+
+int
 cnxk_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
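
For reference, the strings assembled above are what the callbacks place in
mode->info (bounded by RTE_ETH_BURST_MODE_INFO_SIZE). As a hypothetical
example, with the scalar datapath selected and only RSS hash delivery enabled
on Rx, the Rx callback would report roughly:

    Scalar, Rx Offloads: RSS,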