From patchwork Mon Nov 10 15:59:23 2014
X-Patchwork-Submitter: Olivier Matz
X-Patchwork-Id: 1250
From: Olivier Matz
To: dev@dpdk.org
Cc: jigsaw@gmail.com
Date: Mon, 10 Nov 2014 16:59:23 +0100
Message-Id: <1415635166-1364-10-git-send-email-olivier.matz@6wind.com>
In-Reply-To: <1415635166-1364-1-git-send-email-olivier.matz@6wind.com>
References: <1415635166-1364-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH 09/12] testpmd: fix use of offload flags in testpmd

In testpmd, the rte_port->tx_ol_flags field was used in two incompatible
ways:
- sometimes with testpmd-specific flag values (0xff for checksums, and
  bit 11 for VLAN insertion)
- sometimes assigned directly to m->ol_flags, which is wrong in the case
  of the checksum flags

This commit replaces the hardcoded values with named definitions, which
are not compatible with the mbuf flags. The testpmd forward engines are
fixed to use the flags properly.
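As a rough illustration (not part of the patch, and using a hypothetical
helper name), the fix amounts to translating the testpmd-private bits into
real mbuf PKT_TX_* flags instead of copying the raw value into m->ol_flags:

    /* Illustrative sketch only: it mirrors what the fixed forward engines
     * do inline. The helper name is hypothetical; the TESTPMD_TX_OFFLOAD_*
     * names are the ones this patch introduces in testpmd.h. */
    static uint64_t
    tx_ol_flags_to_mbuf_flags(uint16_t tx_ol_flags)
    {
        uint64_t ol_flags = 0;

        if (tx_ol_flags & TESTPMD_TX_OFFLOAD_IP_CKSUM)
            ol_flags |= PKT_TX_IP_CKSUM;
        if (tx_ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM)
            ol_flags |= PKT_TX_UDP_CKSUM;
        if (tx_ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM)
            ol_flags |= PKT_TX_TCP_CKSUM;
        if (tx_ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM)
            ol_flags |= PKT_TX_SCTP_CKSUM;
        if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
            ol_flags |= PKT_TX_VLAN_PKT;
        return ol_flags;
    }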
Signed-off-by: Olivier Matz
---
 app/test-pmd/config.c   |  4 ++--
 app/test-pmd/csumonly.c | 40 +++++++++++++++++++++++-----------------
 app/test-pmd/macfwd.c   |  5 ++++-
 app/test-pmd/macswap.c  |  5 ++++-
 app/test-pmd/testpmd.h  | 28 +++++++++++++++++++++-------
 app/test-pmd/txonly.c   |  9 ++++++---
 6 files changed, 60 insertions(+), 31 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9bc08f4..4b6fb91 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1674,7 +1674,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
         return;
     if (vlan_id_is_invalid(vlan_id))
         return;
-    ports[port_id].tx_ol_flags |= PKT_TX_VLAN_PKT;
+    ports[port_id].tx_ol_flags |= TESTPMD_TX_OFFLOAD_INSERT_VLAN;
     ports[port_id].tx_vlan_id = vlan_id;
 }
 
@@ -1683,7 +1683,7 @@ tx_vlan_reset(portid_t port_id)
 {
     if (port_id_is_invalid(port_id))
         return;
-    ports[port_id].tx_ol_flags &= ~PKT_TX_VLAN_PKT;
+    ports[port_id].tx_ol_flags &= ~TESTPMD_TX_OFFLOAD_INSERT_VLAN;
 }
 
 void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 8d10bfd..743094a 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -322,7 +322,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 
             /* Do not delete, this is required by HW*/
             ipv4_hdr->hdr_checksum = 0;
-            if (tx_ol_flags & 0x1) {
+            if (tx_ol_flags & TESTPMD_TX_OFFLOAD_IP_CKSUM) {
                 /* HW checksum */
                 ol_flags |= PKT_TX_IP_CKSUM;
             }
@@ -336,7 +336,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
             if (l4_proto == IPPROTO_UDP) {
                 udp_hdr = (struct udp_hdr*) (rte_pktmbuf_mtod(mb,
                         unsigned char *) + l2_len + l3_len);
-                if (tx_ol_flags & 0x2) {
+                if (tx_ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM) {
                     /* HW Offload */
                     ol_flags |= PKT_TX_UDP_CKSUM;
                     if (ipv4_tunnel)
@@ -358,7 +358,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                     uint16_t len;
 
                     /* Check if inner L3/L4 checkum flag is set */
-                    if (tx_ol_flags & 0xF0)
+                    if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_CKSUM_MASK)
                         ol_flags |= PKT_TX_VXLAN_CKSUM;
 
                     inner_l2_len = sizeof(struct ether_hdr);
@@ -381,7 +381,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                             unsigned char *) + len);
                         inner_l4_proto = inner_ipv4_hdr->next_proto_id;
 
-                        if (tx_ol_flags & 0x10) {
+                        if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_IP_CKSUM) {
 
                             /* Do not delete, this is required by HW*/
                             inner_ipv4_hdr->hdr_checksum = 0;
@@ -394,7 +394,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                             unsigned char *) + len);
                         inner_l4_proto = inner_ipv6_hdr->proto;
                     }
-                    if ((inner_l4_proto == IPPROTO_UDP) && (tx_ol_flags & 0x20)) {
+                    if ((inner_l4_proto == IPPROTO_UDP) &&
+                        (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_UDP_CKSUM)) {
 
                         /* HW Offload */
                         ol_flags |= PKT_TX_UDP_CKSUM;
@@ -405,7 +406,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                         else if (eth_type == ETHER_TYPE_IPv6)
                             inner_udp_hdr->dgram_cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
 
-                    } else if ((inner_l4_proto == IPPROTO_TCP) && (tx_ol_flags & 0x40)) {
+                    } else if ((inner_l4_proto == IPPROTO_TCP) &&
+                        (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_TCP_CKSUM)) {
                         /* HW Offload */
                         ol_flags |= PKT_TX_TCP_CKSUM;
                         inner_tcp_hdr = (struct tcp_hdr *) (rte_pktmbuf_mtod(mb,
@@ -414,7 +416,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                             inner_tcp_hdr->cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
                         else if (eth_type == ETHER_TYPE_IPv6)
                             inner_tcp_hdr->cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
-                    } else if ((inner_l4_proto == IPPROTO_SCTP) && (tx_ol_flags & 0x80)) {
+                    } else if ((inner_l4_proto == IPPROTO_SCTP) &&
+                        (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_SCTP_CKSUM)) {
                         /* HW Offload */
                         ol_flags |= PKT_TX_SCTP_CKSUM;
                         inner_sctp_hdr = (struct sctp_hdr *) (rte_pktmbuf_mtod(mb,
@@ -427,7 +430,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
             } else if (l4_proto == IPPROTO_TCP) {
                 tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
                         unsigned char *) + l2_len + l3_len);
-                if (tx_ol_flags & 0x4) {
+                if (tx_ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM) {
                     ol_flags |= PKT_TX_TCP_CKSUM;
                     tcp_hdr->cksum = get_ipv4_psd_sum(ipv4_hdr);
                 }
@@ -440,7 +443,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                 sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
                         unsigned char *) + l2_len + l3_len);
 
-                if (tx_ol_flags & 0x8) {
+                if (tx_ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM) {
                     ol_flags |= PKT_TX_SCTP_CKSUM;
                     sctp_hdr->cksum = 0;
 
@@ -465,7 +468,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
             if (l4_proto == IPPROTO_UDP) {
                 udp_hdr = (struct udp_hdr*) (rte_pktmbuf_mtod(mb,
                         unsigned char *) + l2_len + l3_len);
-                if (tx_ol_flags & 0x2) {
+                if (tx_ol_flags & TESTPMD_TX_OFFLOAD_UDP_CKSUM) {
                     /* HW Offload */
                     ol_flags |= PKT_TX_UDP_CKSUM;
                     if (ipv6_tunnel)
@@ -487,7 +490,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                     uint16_t len;
 
                     /* Check if inner L3/L4 checksum flag is set */
-                    if (tx_ol_flags & 0xF0)
+                    if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_CKSUM_MASK)
                         ol_flags |= PKT_TX_VXLAN_CKSUM;
 
                     inner_l2_len = sizeof(struct ether_hdr);
@@ -511,7 +514,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                         inner_l4_proto = inner_ipv4_hdr->next_proto_id;
 
                         /* HW offload */
-                        if (tx_ol_flags & 0x10) {
+                        if (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_IP_CKSUM) {
 
                             /* Do not delete, this is required by HW*/
                             inner_ipv4_hdr->hdr_checksum = 0;
@@ -524,7 +527,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                             inner_l4_proto = inner_ipv6_hdr->proto;
                         }
 
-                        if ((inner_l4_proto == IPPROTO_UDP) && (tx_ol_flags & 0x20)) {
+                        if ((inner_l4_proto == IPPROTO_UDP) &&
+                            (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_UDP_CKSUM)) {
                             inner_udp_hdr = (struct udp_hdr *) (rte_pktmbuf_mtod(mb,
                                 unsigned char *) + len + inner_l3_len);
                             /* HW offload */
@@ -534,7 +538,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                                 inner_udp_hdr->dgram_cksum = get_ipv4_psd_sum(inner_ipv4_hdr);
                             else if (eth_type == ETHER_TYPE_IPv6)
                                 inner_udp_hdr->dgram_cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
-                        } else if ((inner_l4_proto == IPPROTO_TCP) && (tx_ol_flags & 0x40)) {
+                        } else if ((inner_l4_proto == IPPROTO_TCP) &&
+                            (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_TCP_CKSUM)) {
                             /* HW offload */
                             ol_flags |= PKT_TX_TCP_CKSUM;
                             inner_tcp_hdr = (struct tcp_hdr *) (rte_pktmbuf_mtod(mb,
@@ -545,7 +550,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 
                             else if (eth_type == ETHER_TYPE_IPv6)
                                 inner_tcp_hdr->cksum = get_ipv6_psd_sum(inner_ipv6_hdr);
-                        } else if ((inner_l4_proto == IPPROTO_SCTP) && (tx_ol_flags & 0x80)) {
+                        } else if ((inner_l4_proto == IPPROTO_SCTP) &&
+                            (tx_ol_flags & TESTPMD_TX_OFFLOAD_INNER_SCTP_CKSUM)) {
                             /* HW offload */
                             ol_flags |= PKT_TX_SCTP_CKSUM;
                             inner_sctp_hdr = (struct sctp_hdr *) (rte_pktmbuf_mtod(mb,
@@ -559,7 +565,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
             else if (l4_proto == IPPROTO_TCP) {
                 tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
                         unsigned char *) + l2_len + l3_len);
-                if (tx_ol_flags & 0x4) {
+                if (tx_ol_flags & TESTPMD_TX_OFFLOAD_TCP_CKSUM) {
                     ol_flags |= PKT_TX_TCP_CKSUM;
                     tcp_hdr->cksum = get_ipv6_psd_sum(ipv6_hdr);
                 }
@@ -573,7 +579,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
                 sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
                         unsigned char *) + l2_len + l3_len);
 
-                if (tx_ol_flags & 0x8) {
+                if (tx_ol_flags & TESTPMD_TX_OFFLOAD_SCTP_CKSUM) {
                     ol_flags |= PKT_TX_SCTP_CKSUM;
                     sctp_hdr->cksum = 0;
                     /* Sanity check, only number of 4 bytes supported by HW */
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 38bae23..aa3d705 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -85,6 +85,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
     uint16_t nb_rx;
     uint16_t nb_tx;
     uint16_t i;
+    uint64_t ol_flags = 0;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
     uint64_t start_tsc;
     uint64_t end_tsc;
@@ -108,6 +109,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 #endif
     fs->rx_packets += nb_rx;
     txp = &ports[fs->tx_port];
+    if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
+        ol_flags = PKT_TX_VLAN_PKT;
     for (i = 0; i < nb_rx; i++) {
         mb = pkts_burst[i];
         eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -115,7 +118,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
                 &eth_hdr->d_addr);
         ether_addr_copy(&ports[fs->tx_port].eth_addr,
                 &eth_hdr->s_addr);
-        mb->ol_flags = txp->tx_ol_flags;
+        mb->ol_flags = ol_flags;
         mb->l2_len = sizeof(struct ether_hdr);
         mb->l3_len = sizeof(struct ipv4_hdr);
         mb->vlan_tci = txp->tx_vlan_id;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 1786095..ec61657 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -85,6 +85,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
     uint16_t nb_rx;
     uint16_t nb_tx;
     uint16_t i;
+    uint64_t ol_flags = 0;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
     uint64_t start_tsc;
     uint64_t end_tsc;
@@ -108,6 +109,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
 #endif
     fs->rx_packets += nb_rx;
     txp = &ports[fs->tx_port];
+    if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
+        ol_flags = PKT_TX_VLAN_PKT;
     for (i = 0; i < nb_rx; i++) {
         mb = pkts_burst[i];
         eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
@@ -117,7 +120,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
         ether_addr_copy(&eth_hdr->s_addr, &eth_hdr->d_addr);
         ether_addr_copy(&addr, &eth_hdr->s_addr);
 
-        mb->ol_flags = txp->tx_ol_flags;
+        mb->ol_flags = ol_flags;
         mb->l2_len = sizeof(struct ether_hdr);
         mb->l3_len = sizeof(struct ipv4_hdr);
         mb->vlan_tci = txp->tx_vlan_id;
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9cbfeac..82af2bd 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -123,14 +123,28 @@ struct fwd_stream {
 #endif
 };
 
+/** Offload IP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_IP_CKSUM          0x0001
+/** Offload UDP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_UDP_CKSUM         0x0002
+/** Offload TCP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_TCP_CKSUM         0x0004
+/** Offload SCTP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_SCTP_CKSUM        0x0008
+/** Offload inner IP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_INNER_IP_CKSUM    0x0010
+/** Offload inner UDP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_INNER_UDP_CKSUM   0x0020
+/** Offload inner TCP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_INNER_TCP_CKSUM   0x0040
+/** Offload inner SCTP checksum in csum forward engine */
+#define TESTPMD_TX_OFFLOAD_INNER_SCTP_CKSUM  0x0080
+/** Offload inner IP checksum mask */
+#define TESTPMD_TX_OFFLOAD_INNER_CKSUM_MASK  0x00F0
+/** Insert VLAN header in forward engine */
+#define TESTPMD_TX_OFFLOAD_INSERT_VLAN       0x0100
 /**
  * The data structure associated with each port.
- * tx_ol_flags is slightly different from ol_flags of rte_mbuf.
- * Bit 0: Insert IP checksum
- * Bit 1: Insert UDP checksum
- * Bit 2: Insert TCP checksum
- * Bit 3: Insert SCTP checksum
- * Bit 11: Insert VLAN Label
  */
 struct rte_port {
     struct rte_eth_dev_info dev_info;   /**< PCI info + driver name */
@@ -141,7 +155,7 @@ struct rte_port {
     struct fwd_stream   *rx_stream; /**< Port RX stream, if unique */
     struct fwd_stream   *tx_stream; /**< Port TX stream, if unique */
     unsigned int        socket_id;  /**< For NUMA support */
-    uint64_t            tx_ol_flags;/**< Offload Flags of TX packets. */
+    uint16_t            tx_ol_flags;/**< TX Offload Flags (TESTPMD_TX_OFFLOAD...). */
     uint16_t            tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
     void                *fwd_ctx;   /**< Forwarding mode context */
     uint64_t            rx_bad_ip_csum; /**< rx pkts with bad ip checksum */
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 3d08005..c984670 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -196,6 +196,7 @@ static void
 pkt_burst_transmit(struct fwd_stream *fs)
 {
     struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+    struct rte_port *txp;
     struct rte_mbuf *pkt;
     struct rte_mbuf *pkt_seg;
     struct rte_mempool *mbp;
@@ -203,7 +204,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
     uint16_t nb_tx;
     uint16_t nb_pkt;
     uint16_t vlan_tci;
-    uint64_t ol_flags;
+    uint64_t ol_flags = 0;
     uint8_t i;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
     uint64_t start_tsc;
@@ -216,8 +217,10 @@ pkt_burst_transmit(struct fwd_stream *fs)
 #endif
 
     mbp = current_fwd_lcore()->mbp;
-    vlan_tci = ports[fs->tx_port].tx_vlan_id;
-    ol_flags = ports[fs->tx_port].tx_ol_flags;
+    txp = &ports[fs->tx_port];
+    vlan_tci = txp->tx_vlan_id;
+    if (txp->tx_ol_flags & TESTPMD_TX_OFFLOAD_INSERT_VLAN)
+        ol_flags = PKT_TX_VLAN_PKT;
     for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
         pkt = tx_mbuf_alloc(mbp);
         if (pkt == NULL) {
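
For reference, a hypothetical usage sketch of the new scheme (not part of the
patch, assuming direct access to the ports[] array as config.c has): setting
the testpmd bits below makes the csum forward engine emit PKT_TX_IP_CKSUM,
and, because a bit of TESTPMD_TX_OFFLOAD_INNER_CKSUM_MASK (0x00F0) is set,
also PKT_TX_VXLAN_CKSUM on tunnelled packets.

    /* Hypothetical example: request HW outer + inner IP checksum on port 0. */
    ports[0].tx_ol_flags |= TESTPMD_TX_OFFLOAD_IP_CKSUM |
                            TESTPMD_TX_OFFLOAD_INNER_IP_CKSUM;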