From patchwork Sat Jul 25 08:15:33 2020
X-Patchwork-Submitter: "Wangxiaoyun (Cloud)"
X-Patchwork-Id: 74803
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xiaoyun wang
Date: Sat, 25 Jul 2020 16:15:33 +0800
Subject: [dpdk-dev] [PATCH v2 1/4] net/hinic: modify csum offload process

Encapsulate the different types of packet checksum preprocessing into
dedicated helper functions.

Signed-off-by: Xiaoyun wang
---
 drivers/net/hinic/hinic_pmd_tx.c | 371 +++++++++++++++++++++------------------
 1 file changed, 202 insertions(+), 169 deletions(-)

diff --git a/drivers/net/hinic/hinic_pmd_tx.c b/drivers/net/hinic/hinic_pmd_tx.c
index 4d99967..d9f251a 100644
--- a/drivers/net/hinic/hinic_pmd_tx.c
+++ b/drivers/net/hinic/hinic_pmd_tx.c
@@ -38,9 +38,6 @@
 #define HINIC_TSO_PKT_MAX_SGE		127	/* tso max sge 127 */
 #define HINIC_TSO_SEG_NUM_INVALID(num)	((num) > HINIC_TSO_PKT_MAX_SGE)
 
-#define HINIC_TX_OUTER_CHECKSUM_FLAG_SET	1
-#define HINIC_TX_OUTER_CHECKSUM_FLAG_NO_SET	0
-
 /* sizeof(struct hinic_sq_bufdesc) == 16, shift 4 */
 #define HINIC_BUF_DESC_SIZE(nr_descs)	(SIZE_8BYTES(((u32)nr_descs) << 4))
 
@@ -671,7 +668,7 @@ static inline void hinic_xmit_mbuf_cleanup(struct hinic_txq *txq)
 
 static inline struct hinic_sq_wqe *
 hinic_get_sq_wqe(struct hinic_txq *txq, int wqebb_cnt,
-		struct hinic_wqe_info *wqe_info)
+		 struct hinic_wqe_info *wqe_info)
 {
 	u32 cur_pi, end_pi;
 	u16 remain_wqebbs;
@@ -758,36 +755,33 @@ static inline void hinic_xmit_mbuf_cleanup(struct hinic_txq *txq)
 	return __rte_raw_cksum_reduce(sum);
 }
 
-static inline void
-hinic_get_pld_offset(struct rte_mbuf *m, struct hinic_tx_offload_info *off_info,
-		     int outer_cs_flag)
+static inline void hinic_get_outer_cs_pld_offset(struct rte_mbuf *m,
+					struct hinic_tx_offload_info *off_info)
 {
 	uint64_t ol_flags = m->ol_flags;
 
-	if (outer_cs_flag == 1) {
-		if ((ol_flags & PKT_TX_UDP_CKSUM) == PKT_TX_UDP_CKSUM) {
-			off_info->payload_offset = m->outer_l2_len +
-				m->outer_l3_len + m->l2_len + m->l3_len;
-		} else if ((ol_flags & PKT_TX_TCP_CKSUM) ||
-			   (ol_flags & PKT_TX_TCP_SEG)) {
-			off_info->payload_offset = m->outer_l2_len +
-				m->outer_l3_len + m->l2_len +
-				m->l3_len + m->l4_len;
-		}
-	} else {
-		if ((ol_flags & PKT_TX_UDP_CKSUM) == PKT_TX_UDP_CKSUM) {
-			off_info->payload_offset = m->l2_len + m->l3_len;
-		} else if ((ol_flags & PKT_TX_TCP_CKSUM) ||
-			   (ol_flags & PKT_TX_TCP_SEG)) {
-			off_info->payload_offset = m->l2_len + m->l3_len +
-				m->l4_len;
-		}
-	}
+	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
+					   m->l2_len + m->l3_len;
+	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+		off_info->payload_offset = m->outer_l2_len + m->outer_l3_len +
+					   m->l2_len + m->l3_len + m->l4_len;
 }
 
-static inline void
-hinic_analyze_tx_info(struct rte_mbuf *mbuf,
-		      struct hinic_tx_offload_info *off_info)
+static inline void hinic_get_pld_offset(struct rte_mbuf *m,
+					struct hinic_tx_offload_info *off_info)
+{
+	uint64_t ol_flags = m->ol_flags;
+
+	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM)
+		off_info->payload_offset = m->l2_len + m->l3_len;
+	else if ((ol_flags & PKT_TX_TCP_CKSUM) || (ol_flags & PKT_TX_TCP_SEG))
+		off_info->payload_offset = m->l2_len + m->l3_len +
+					   m->l4_len;
+}
+
+static inline void hinic_analyze_tx_info(struct rte_mbuf *mbuf,
+					 struct hinic_tx_offload_info *off_info)
 {
 	struct rte_ether_hdr *eth_hdr;
 	struct rte_vlan_hdr *vlan_hdr;
@@ -817,17 +811,164 @@ static inline void hinic_xmit_mbuf_cleanup(struct hinic_txq *txq)
 	}
 }
 
-static inline int
-hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
-			     struct hinic_tx_offload_info *off_info)
+static inline void hinic_analyze_outer_ip_vxlan(struct rte_mbuf *mbuf,
+					struct hinic_tx_offload_info *off_info)
+{
+	struct rte_ether_hdr *eth_hdr;
+	struct rte_vlan_hdr *vlan_hdr;
+	struct rte_ipv4_hdr *ipv4_hdr;
+	struct rte_udp_hdr *udp_hdr;
+	u16 eth_type = 0;
+
+	eth_hdr = rte_pktmbuf_mtod(mbuf, struct rte_ether_hdr *);
+	eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
+
+	if (eth_type == RTE_ETHER_TYPE_VLAN) {
+		vlan_hdr = (struct rte_vlan_hdr *)(eth_hdr + 1);
+		eth_type = rte_be_to_cpu_16(vlan_hdr->eth_proto);
+	}
+
+	if (eth_type == RTE_ETHER_TYPE_IPV4) {
+		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
+						   mbuf->outer_l2_len);
+		off_info->outer_l3_type = IPV4_PKT_WITH_CHKSUM_OFFLOAD;
+		ipv4_hdr->hdr_checksum = 0;
+
+		udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
+						 mbuf->outer_l3_len);
+		udp_hdr->dgram_cksum = 0;
+	} else if (eth_type == RTE_ETHER_TYPE_IPV6) {
+		off_info->outer_l3_type = IPV6_PKT;
+
+		udp_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_udp_hdr *,
+						  (mbuf->outer_l2_len +
+						   mbuf->outer_l3_len));
+		udp_hdr->dgram_cksum = 0;
+	}
+}
+
+static inline uint8_t hinic_analyze_l3_type(struct rte_mbuf *mbuf)
+{
+	uint8_t l3_type;
+	uint64_t ol_flags = mbuf->ol_flags;
+
+	if (ol_flags & PKT_TX_IPV4)
+		l3_type = (ol_flags & PKT_TX_IP_CKSUM) ?
+			  IPV4_PKT_WITH_CHKSUM_OFFLOAD :
+			  IPV4_PKT_NO_CHKSUM_OFFLOAD;
+	else if (ol_flags & PKT_TX_IPV6)
+		l3_type = IPV6_PKT;
+	else
+		l3_type = UNKNOWN_L3TYPE;
+
+	return l3_type;
+}
+
+static inline void hinic_calculate_tcp_checksum(struct rte_mbuf *mbuf,
+					struct hinic_tx_offload_info *off_info,
+					uint64_t inner_l3_offset)
 {
 	struct rte_ipv4_hdr *ipv4_hdr;
 	struct rte_ipv6_hdr *ipv6_hdr;
 	struct rte_tcp_hdr *tcp_hdr;
+	uint64_t ol_flags = mbuf->ol_flags;
+
+	if (ol_flags & PKT_TX_IPV4) {
+		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
+						   inner_l3_offset);
+
+		if (ol_flags & PKT_TX_IP_CKSUM)
+			ipv4_hdr->hdr_checksum = 0;
+
+		tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
+						 mbuf->l3_len);
+		tcp_hdr->cksum = hinic_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
+	} else {
+		ipv6_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv6_hdr *,
+						   inner_l3_offset);
+		tcp_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_tcp_hdr *,
+						  (inner_l3_offset +
+						   mbuf->l3_len));
+		tcp_hdr->cksum = hinic_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
+	}
+
+	off_info->inner_l4_type = TCP_OFFLOAD_ENABLE;
+	off_info->inner_l4_tcp_udp = 1;
+}
+
+static inline void hinic_calculate_udp_checksum(struct rte_mbuf *mbuf,
+					struct hinic_tx_offload_info *off_info,
+					uint64_t inner_l3_offset)
+{
+	struct rte_ipv4_hdr *ipv4_hdr;
+	struct rte_ipv6_hdr *ipv6_hdr;
 	struct rte_udp_hdr *udp_hdr;
-	struct rte_ether_hdr *eth_hdr;
-	struct rte_vlan_hdr *vlan_hdr;
-	u16 eth_type = 0;
+	uint64_t ol_flags = mbuf->ol_flags;
+
+	if (ol_flags & PKT_TX_IPV4) {
+		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv4_hdr *,
+						   inner_l3_offset);
+
+		if (ol_flags & PKT_TX_IP_CKSUM)
+			ipv4_hdr->hdr_checksum = 0;
+
+		udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
+						 mbuf->l3_len);
+		udp_hdr->dgram_cksum = hinic_ipv4_phdr_cksum(ipv4_hdr,
+							     ol_flags);
+	} else {
+		ipv6_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_ipv6_hdr *,
+						   inner_l3_offset);
+
+		udp_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_udp_hdr *,
+						  (inner_l3_offset +
+						   mbuf->l3_len));
+		udp_hdr->dgram_cksum = hinic_ipv6_phdr_cksum(ipv6_hdr,
+							     ol_flags);
+	}
+
+	off_info->inner_l4_type = UDP_OFFLOAD_ENABLE;
+	off_info->inner_l4_tcp_udp = 1;
+}
+
+static inline void
+hinic_calculate_sctp_checksum(struct hinic_tx_offload_info *off_info)
+{
+	off_info->inner_l4_type = SCTP_OFFLOAD_ENABLE;
+	off_info->inner_l4_tcp_udp = 0;
+	off_info->inner_l4_len = sizeof(struct rte_sctp_hdr);
+}
+
+static inline void hinic_calculate_checksum(struct rte_mbuf *mbuf,
+					struct hinic_tx_offload_info *off_info,
+					uint64_t inner_l3_offset)
+{
+	uint64_t ol_flags = mbuf->ol_flags;
+
+	switch (ol_flags & PKT_TX_L4_MASK) {
+	case PKT_TX_UDP_CKSUM:
+		hinic_calculate_udp_checksum(mbuf, off_info, inner_l3_offset);
+		break;
+
+	case PKT_TX_TCP_CKSUM:
+		hinic_calculate_tcp_checksum(mbuf, off_info, inner_l3_offset);
+		break;
+
+	case PKT_TX_SCTP_CKSUM:
+		hinic_calculate_sctp_checksum(off_info);
+		break;
+
+	default:
+		if (ol_flags & PKT_TX_TCP_SEG)
+			hinic_calculate_tcp_checksum(mbuf, off_info,
+						     inner_l3_offset);
+		break;
+	}
+}
+
+static inline int hinic_tx_offload_pkt_prepare(struct rte_mbuf *m,
+					struct hinic_tx_offload_info *off_info)
+{
 	uint64_t inner_l3_offset;
 	uint64_t ol_flags = m->ol_flags;
 
@@ -836,8 +977,8 @@ static inline void hinic_xmit_mbuf_cleanup(struct hinic_txq *txq)
 		return 0;
 
 	/* Support only vxlan offload */
-	if ((ol_flags & PKT_TX_TUNNEL_MASK) &&
-	    !(ol_flags & PKT_TX_TUNNEL_VXLAN))
+	if (unlikely((ol_flags & PKT_TX_TUNNEL_MASK) &&
+		     !(ol_flags & PKT_TX_TUNNEL_VXLAN)))
 		return -ENOTSUP;
 
 #ifdef RTE_LIBRTE_ETHDEV_DEBUG
@@ -846,169 +987,61 @@ static inline void hinic_xmit_mbuf_cleanup(struct hinic_txq *txq)
 #endif
 
 	if (ol_flags & PKT_TX_TUNNEL_VXLAN) {
+		off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
+
+		/* inner_l4_tcp_udp csum should be set to calculate outer
+		 * udp checksum when vxlan packets without inner l3 and l4
+		 */
+		off_info->inner_l4_tcp_udp = 1;
+
 		if ((ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
 		    (ol_flags & PKT_TX_OUTER_IPV6) ||
 		    (ol_flags & PKT_TX_TCP_SEG)) {
 			inner_l3_offset = m->l2_len + m->outer_l2_len +
-				m->outer_l3_len;
+					  m->outer_l3_len;
 			off_info->outer_l2_len = m->outer_l2_len;
 			off_info->outer_l3_len = m->outer_l3_len;
 			/* just support vxlan tunneling pkt */
 			off_info->inner_l2_len = m->l2_len - VXLANLEN -
-				sizeof(*udp_hdr);
-			off_info->inner_l3_len = m->l3_len;
-			off_info->inner_l4_len = m->l4_len;
+						 sizeof(struct rte_udp_hdr);
 			off_info->tunnel_length = m->l2_len;
-			off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
 
-			hinic_get_pld_offset(m, off_info,
-					     HINIC_TX_OUTER_CHECKSUM_FLAG_SET);
+			hinic_analyze_outer_ip_vxlan(m, off_info);
+
+			hinic_get_outer_cs_pld_offset(m, off_info);
 		} else {
 			inner_l3_offset = m->l2_len;
 			hinic_analyze_tx_info(m, off_info);
 			/* just support vxlan tunneling pkt */
 			off_info->inner_l2_len = m->l2_len - VXLANLEN -
-				sizeof(*udp_hdr) - off_info->outer_l2_len -
-				off_info->outer_l3_len;
-			off_info->inner_l3_len = m->l3_len;
-			off_info->inner_l4_len = m->l4_len;
+						 sizeof(struct rte_udp_hdr) -
+						 off_info->outer_l2_len -
+						 off_info->outer_l3_len;
 			off_info->tunnel_length = m->l2_len -
-				off_info->outer_l2_len - off_info->outer_l3_len;
-			off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
+						  off_info->outer_l2_len -
+						  off_info->outer_l3_len;
 			off_info->outer_l3_type = IPV4_PKT_NO_CHKSUM_OFFLOAD;
 
-			hinic_get_pld_offset(m, off_info,
-					HINIC_TX_OUTER_CHECKSUM_FLAG_NO_SET);
+			hinic_get_pld_offset(m, off_info);
 		}
 	} else {
 		inner_l3_offset = m->l2_len;
 		off_info->inner_l2_len = m->l2_len;
-		off_info->inner_l3_len = m->l3_len;
-		off_info->inner_l4_len = m->l4_len;
 		off_info->tunnel_type = NOT_TUNNEL;
 
-		hinic_get_pld_offset(m, off_info,
-				     HINIC_TX_OUTER_CHECKSUM_FLAG_NO_SET);
+		hinic_get_pld_offset(m, off_info);
 	}
 
 	/* invalid udp or tcp header */
 	if (unlikely(off_info->payload_offset > MAX_PLD_OFFSET))
 		return -EINVAL;
 
-	/* Process outter udp pseudo-header checksum */
-	if ((ol_flags & PKT_TX_TUNNEL_VXLAN) && ((ol_flags & PKT_TX_TCP_SEG) ||
-	    (ol_flags & PKT_TX_OUTER_IP_CKSUM) ||
-	    (ol_flags & PKT_TX_OUTER_IPV6))) {
-
-		/* inner_l4_tcp_udp csum should be setted to calculate outter
-		 * udp checksum when vxlan packets without inner l3 and l4
-		 */
-		off_info->inner_l4_tcp_udp = 1;
-
-		eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-		eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
-
-		if (eth_type == RTE_ETHER_TYPE_VLAN) {
-			vlan_hdr = (struct rte_vlan_hdr *)(eth_hdr + 1);
-			eth_type = rte_be_to_cpu_16(vlan_hdr->eth_proto);
-		}
-
-		if (eth_type == RTE_ETHER_TYPE_IPV4) {
-			ipv4_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-						m->outer_l2_len);
-			off_info->outer_l3_type = IPV4_PKT_WITH_CHKSUM_OFFLOAD;
-			ipv4_hdr->hdr_checksum = 0;
-
-			udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
-							 m->outer_l3_len);
-			udp_hdr->dgram_cksum = 0;
-		} else if (eth_type == RTE_ETHER_TYPE_IPV6) {
-			off_info->outer_l3_type = IPV6_PKT;
-			ipv6_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
-						m->outer_l2_len);
-
-			udp_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
-						(m->outer_l2_len +
-						 m->outer_l3_len));
-			udp_hdr->dgram_cksum = 0;
-		}
-	} else if (ol_flags & PKT_TX_OUTER_IPV4) {
-		off_info->tunnel_type = TUNNEL_UDP_NO_CSUM;
-		off_info->inner_l4_tcp_udp = 1;
-		off_info->outer_l3_type = IPV4_PKT_NO_CHKSUM_OFFLOAD;
-	}
-
-	if (ol_flags & PKT_TX_IPV4)
-		off_info->inner_l3_type = (ol_flags & PKT_TX_IP_CKSUM) ?
-					  IPV4_PKT_WITH_CHKSUM_OFFLOAD :
-					  IPV4_PKT_NO_CHKSUM_OFFLOAD;
-	else if (ol_flags & PKT_TX_IPV6)
-		off_info->inner_l3_type = IPV6_PKT;
+	off_info->inner_l3_len = m->l3_len;
+	off_info->inner_l4_len = m->l4_len;
+	off_info->inner_l3_type = hinic_analyze_l3_type(m);
 
 	/* Process the pseudo-header checksum */
-	if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_UDP_CKSUM) {
-		if (ol_flags & PKT_TX_IPV4) {
-			ipv4_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-						inner_l3_offset);
-
-			if (ol_flags & PKT_TX_IP_CKSUM)
-				ipv4_hdr->hdr_checksum = 0;
-
-			udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
-							 m->l3_len);
-			udp_hdr->dgram_cksum =
-				hinic_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
-		} else {
-			ipv6_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
-						inner_l3_offset);
-
-			udp_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *,
-						(inner_l3_offset + m->l3_len));
-			udp_hdr->dgram_cksum =
-				hinic_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
-		}
-
-		off_info->inner_l4_type = UDP_OFFLOAD_ENABLE;
-		off_info->inner_l4_tcp_udp = 1;
-	} else if (((ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) ||
-		   (ol_flags & PKT_TX_TCP_SEG)) {
-		if (ol_flags & PKT_TX_IPV4) {
-			ipv4_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-						inner_l3_offset);
-
-			if (ol_flags & PKT_TX_IP_CKSUM)
-				ipv4_hdr->hdr_checksum = 0;
-
-			/* non-TSO tcp */
-			tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
-							 m->l3_len);
-			tcp_hdr->cksum =
-				hinic_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
-		} else {
-			ipv6_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
-						inner_l3_offset);
-			/* non-TSO tcp */
-			tcp_hdr =
-			rte_pktmbuf_mtod_offset(m, struct rte_tcp_hdr *,
-						(inner_l3_offset + m->l3_len));
-			tcp_hdr->cksum =
-				hinic_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
-		}
-
-		off_info->inner_l4_type = TCP_OFFLOAD_ENABLE;
-		off_info->inner_l4_tcp_udp = 1;
-	} else if ((ol_flags & PKT_TX_L4_MASK) == PKT_TX_SCTP_CKSUM) {
-		off_info->inner_l4_type = SCTP_OFFLOAD_ENABLE;
-		off_info->inner_l4_tcp_udp = 0;
-		off_info->inner_l4_len = sizeof(struct rte_sctp_hdr);
-	}
+	hinic_calculate_checksum(m, off_info, inner_l3_offset);
 
 	return 0;
 }
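For context on how the reworked path is exercised: an application requests
these offloads by tagging the mbuf before handing it to the PMD. A minimal
sketch follows, assuming the pre-20.11 PKT_TX_* flag names used throughout
this series; the helper name app_request_tcp_csum() is illustrative only,
not part of the patch.

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

/* Tag an IPv4/TCP mbuf so that hinic_tx_offload_pkt_prepare() takes the
 * PKT_TX_TCP_CKSUM branch of hinic_calculate_checksum() and fills in the
 * TCP pseudo-header checksum before the descriptor is posted.
 */
static void app_request_tcp_csum(struct rte_mbuf *m)
{
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
}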
From patchwork Sat Jul 25 08:15:34 2020
X-Patchwork-Submitter: "Wangxiaoyun (Cloud)"
X-Patchwork-Id: 74804
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xiaoyun wang
Date: Sat, 25 Jul 2020 16:15:34 +0800
Message-ID: <910965bec4b85b44a7965a1124ab20a1e6cdaf5e.1595663173.git.cloud.wangxiaoyun@huawei.com>
Subject: [dpdk-dev] [PATCH v2 2/4] net/hinic: optimize Rx performance for x86

On the x86 platform, leave the RQ CQE structure without cache-line
alignment, which improves performance in some gateway scenarios. The
cache-aligned layout is kept for ARM64.

Fixes: 361a9ccf81d6 ("net/hinic: optimize Rx performance")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyun wang
---
 drivers/net/hinic/hinic_pmd_rx.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 49fa565..8a45f2d 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -35,7 +35,11 @@ struct hinic_rq_cqe {
 	u32 rss_hash;
 	u32 rsvd[4];
+#if defined(RTE_ARCH_ARM64)
 } __rte_cache_aligned;
+#else
+};
+#endif
 
 struct hinic_rq_cqe_sect {
 	struct hinic_sge sge;
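The trade-off behind this change can be checked at compile time. A minimal
sketch, assuming a hypothetical 32-byte mirror of the CQE (the real
struct hinic_rq_cqe carries more fields): without __rte_cache_aligned the
structure keeps its natural size, so several CQEs pack into one cache line
on x86, while the padded variant occupies a whole line.

#include <stdint.h>
#include <assert.h>
#include <rte_common.h>
#include <rte_memory.h>

/* Hypothetical mirrors of the two layouts, for illustration only. */
struct cqe_packed { uint32_t words[8]; };			/* x86 */
struct cqe_padded { uint32_t words[8]; } __rte_cache_aligned;	/* ARM64 */

static_assert(sizeof(struct cqe_packed) == 32,
	      "unaligned CQE keeps its natural 32-byte size");
static_assert(sizeof(struct cqe_padded) == RTE_CACHE_LINE_SIZE,
	      "aligned CQE is padded to one full cache line");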
From patchwork Sat Jul 25 08:15:35 2020
X-Patchwork-Submitter: "Wangxiaoyun (Cloud)"
X-Patchwork-Id: 74805
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xiaoyun wang
Date: Sat, 25 Jul 2020 16:15:35 +0800
Message-ID: <46b3bc3921132d89ec8b3805419f17adc8eb852d.1595663173.git.cloud.wangxiaoyun@huawei.com>
Subject: [dpdk-dev] [PATCH v2 3/4] net/hinic/base: modify vhd type for SDI

In the OVS offload scenario, the firmware needs no offset when it
processes the virtio header, and in the standard card scenario the
firmware ignores the vhd_type parameter. To stay compatible with both
scenarios, use the 0-byte offset type instead.

Signed-off-by: Xiaoyun wang
---
 drivers/net/hinic/base/hinic_pmd_nicio.c | 2 +-
 drivers/net/hinic/base/hinic_pmd_nicio.h | 5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/hinic/base/hinic_pmd_nicio.c b/drivers/net/hinic/base/hinic_pmd_nicio.c
index 2914e99..576fe59 100644
--- a/drivers/net/hinic/base/hinic_pmd_nicio.c
+++ b/drivers/net/hinic/base/hinic_pmd_nicio.c
@@ -578,7 +578,7 @@ int hinic_init_qp_ctxts(struct hinic_hwdev *hwdev)
 	rx_buf_sz = nic_io->rq_buf_size;
 
 	/* update rx buf size to function table */
-	err = hinic_set_rx_vhd_mode(hwdev, 0, rx_buf_sz);
+	err = hinic_set_rx_vhd_mode(hwdev, HINIC_VHD_TYPE_0B, rx_buf_sz);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Set rx vhd mode failed, rc: %d", err);
 		return err;
diff --git a/drivers/net/hinic/base/hinic_pmd_nicio.h b/drivers/net/hinic/base/hinic_pmd_nicio.h
index 9a487d0..600c073 100644
--- a/drivers/net/hinic/base/hinic_pmd_nicio.h
+++ b/drivers/net/hinic/base/hinic_pmd_nicio.h
@@ -8,6 +8,11 @@
 #define RX_BUF_LEN_16K	16384
 #define RX_BUF_LEN_1_5K	1536
 
+/* vhd type */
+#define HINIC_VHD_TYPE_0B	2
+#define HINIC_VHD_TYPE_10B	1
+#define HINIC_VHD_TYPE_12B	0
+
 #define HINIC_Q_CTXT_MAX	42
 
 /* performance: ci addr RTE_CACHE_SIZE(64B) alignment */
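The encoding is worth noting because the values are not in size order: the
0-byte type is 2 while the 12-byte type is 0. A small hypothetical helper,
not part of the patch, shows how a caller could map a virtio net header
length onto the firmware encoding:

/* Hypothetical helper, for illustration only: translate a virtio net
 * header length in bytes into the firmware's vhd type encoding.
 */
static int hinic_vhd_type_from_len(u32 vhd_len)
{
	switch (vhd_len) {
	case 0:
		return HINIC_VHD_TYPE_0B;	/* no offset, as used above */
	case 10:
		return HINIC_VHD_TYPE_10B;
	case 12:
		return HINIC_VHD_TYPE_12B;
	default:
		return -EINVAL;
	}
}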
From patchwork Sat Jul 25 08:15:36 2020
X-Patchwork-Submitter: "Wangxiaoyun (Cloud)"
X-Patchwork-Id: 74806
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Xiaoyun wang
Date: Sat, 25 Jul 2020 16:15:36 +0800
Message-ID: <1833dd4a5af04cf69c9191a5329397772e5f1ae6.1595663173.git.cloud.wangxiaoyun@huawei.com>
Subject: [dpdk-dev] [PATCH v2 4/4] net/hinic/base: make timeout not affected by system time jump

Replace gettimeofday() with clock_gettime(CLOCK_MONOTONIC, &now) so that
driver timeouts are not affected by jumps in the system time, for the
same reason as the patch "make alarm not affected by system time jump".

Fixes: 81d53291a466 ("net/hinic/base: add various headers")
Cc: stable@dpdk.org

Signed-off-by: Xiaoyun wang
---
 drivers/net/hinic/base/hinic_compat.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/net/hinic/base/hinic_compat.h b/drivers/net/hinic/base/hinic_compat.h
index 2d21b7b..7036b03 100644
--- a/drivers/net/hinic/base/hinic_compat.h
+++ b/drivers/net/hinic/base/hinic_compat.h
@@ -166,16 +166,17 @@ static inline u32 readl(const volatile void *addr)
 #define spin_lock(spinlock_prt)		rte_spinlock_lock(spinlock_prt)
 #define spin_unlock(spinlock_prt)	rte_spinlock_unlock(spinlock_prt)
 
-static inline unsigned long get_timeofday_ms(void)
+static inline unsigned long clock_gettime_ms(void)
 {
-	struct timeval tv;
+	struct timespec tv;
 
-	(void)gettimeofday(&tv, NULL);
+	(void)clock_gettime(CLOCK_MONOTONIC, &tv);
 
-	return (unsigned long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
+	return (unsigned long)tv.tv_sec * 1000 +
+	       (unsigned long)tv.tv_nsec / 1000000;
 }
 
-#define jiffies	get_timeofday_ms()
+#define jiffies	clock_gettime_ms()
 #define msecs_to_jiffies(ms)	(ms)
 #define time_before(now, end)	((now) < (end))
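To see why the monotonic clock matters here: the driver's polling loops
compute a deadline from jiffies and compare against it with time_before().
A minimal sketch of such a loop, assuming the wrappers above; the function
wait_for_flag() itself is illustrative, not taken from the driver.

#include <errno.h>
#include <rte_cycles.h>

/* With the monotonic clock_gettime_ms() behind jiffies, a wall-clock
 * step (e.g. an NTP correction) can no longer expire this timeout early
 * or stall it indefinitely.
 */
static int wait_for_flag(volatile u32 *flag, unsigned long timeout_ms)
{
	unsigned long end = jiffies + msecs_to_jiffies(timeout_ms);

	do {
		if (*flag)
			return 0;
		rte_delay_ms(1);
	} while (time_before(jiffies, end));

	return *flag ? 0 : -ETIMEDOUT;
}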