From patchwork Thu Aug 20 01:42:01 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wei Hu (Xavier)"
X-Patchwork-Id: 75723
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 68A1BA04AF; Thu, 20 Aug 2020 03:42:44 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 43BE01C0B6; Thu, 20 Aug 2020 03:42:44 +0200 (CEST)
Received: from mail.chinasoftinc.com (unknown [114.113.233.8]) by dpdk.org (Postfix) with ESMTP id 891591C0B4 for ; Thu, 20 Aug 2020 03:42:41 +0200 (CEST)
Received: from localhost.localdomain (65.49.108.226) by INCCAS001.ito.icss (10.168.0.60) with Microsoft SMTP Server id 14.3.487.0; Thu, 20 Aug 2020 09:42:34 +0800
From: "Wei Hu (Xavier)"
To: Wenzhuo Lu , Beilei Xing , Bernard Iremonger , Shahaf Shuler
CC: ,
Date: Thu, 20 Aug 2020 09:42:01 +0800
Message-ID: <20200820014204.25035-2-huwei013@chinasoftinc.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200820014204.25035-1-huwei013@chinasoftinc.com>
References: <20200818120254.72792-1-huwei013@chinasoftinc.com> <20200820014204.25035-1-huwei013@chinasoftinc.com>
MIME-Version: 1.0
X-Originating-IP: [65.49.108.226]
Subject: [dpdk-dev] [PATCH v2 1/4] app/testpmd: fix missing verification of port id
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Chengchang Tang

To set Tx VLAN offloads, the port has to be stopped first. But before checking whether the port is stopped, the port id entered by the user is not validated. When the port id is illegal, this leads to a segmentation fault, because the code accesses a member of a non-existent port. This patch adds port id verification to the Tx VLAN offload commands.
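
As background for the fix below, here is a minimal standalone sketch (plain C, not DPDK code; all names are illustrative) of the ordering problem: the stopped-state check indexes a per-port array, so it is only safe after the range check has passed.

#include <stdio.h>
#include <stdint.h>

#define MAX_PORTS 32                     /* assumed array size */

struct port_state { int stopped; };
static struct port_state ports[MAX_PORTS];
static uint16_t nb_ports = 2;            /* only ports 0 and 1 exist */

/* The guard the patch adds: warn and bail out on an out-of-range id. */
static int port_id_is_invalid_sketch(uint16_t port_id)
{
    if (port_id < nb_ports)
        return 0;
    printf("Invalid port %u\n", port_id);
    return 1;
}

static void tx_vlan_set_sketch(uint16_t port_id, uint16_t vlan_id)
{
    if (port_id_is_invalid_sketch(port_id))
        return;                          /* new: reject bad ids early */
    if (!ports[port_id].stopped) {       /* safe only after the check */
        printf("Please stop port %u first\n", port_id);
        return;
    }
    printf("tx vlan %u set on port %u\n", vlan_id, port_id);
}

int main(void)
{
    ports[1].stopped = 1;
    tx_vlan_set_sketch(1, 100);   /* valid, stopped port: accepted */
    tx_vlan_set_sketch(99, 100);  /* rejected instead of faulting */
    return 0;
}
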
Fixes: 597f9fafe13b ("app/testpmd: convert to new Tx offloads API")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang
Signed-off-by: Wei Hu (Xavier)
---
 app/test-pmd/cmdline.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 0a6ed85f3..8377f8401 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4268,6 +4268,9 @@ cmd_tx_vlan_set_parsed(void *parsed_result,
 {
     struct cmd_tx_vlan_set_result *res = parsed_result;
 
+    if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+        return;
+
     if (!port_is_stopped(res->port_id)) {
         printf("Please stop port %d first\n", res->port_id);
         return;
@@ -4322,6 +4325,9 @@ cmd_tx_vlan_set_qinq_parsed(void *parsed_result,
 {
     struct cmd_tx_vlan_set_qinq_result *res = parsed_result;
 
+    if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+        return;
+
     if (!port_is_stopped(res->port_id)) {
         printf("Please stop port %d first\n", res->port_id);
         return;
@@ -4435,6 +4441,9 @@ cmd_tx_vlan_reset_parsed(void *parsed_result,
 {
     struct cmd_tx_vlan_reset_result *res = parsed_result;
 
+    if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+        return;
+
     if (!port_is_stopped(res->port_id)) {
         printf("Please stop port %d first\n", res->port_id);
         return;

From patchwork Thu Aug 20 01:42:02 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wei Hu (Xavier)"
X-Patchwork-Id: 75724
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id CC65DA04AF; Thu, 20 Aug 2020 03:42:53 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D66F01C0B4; Thu, 20 Aug 2020 03:42:52 +0200 (CEST)
Received: from mail.chinasoftinc.com (unknown [114.113.233.8]) by dpdk.org (Postfix) with ESMTP id BA96CAAD5 for ; Thu, 20 Aug 2020 03:42:50 +0200 (CEST)
Received: from localhost.localdomain (65.49.108.226) by INCCAS001.ito.icss (10.168.0.60) with Microsoft SMTP Server id 14.3.487.0; Thu, 20 Aug 2020 09:42:46 +0800
From: "Wei Hu (Xavier)"
To: Wenzhuo Lu , Beilei Xing , Bernard Iremonger , Shahaf Shuler
CC: ,
Date: Thu, 20 Aug 2020 09:42:02 +0800
Message-ID: <20200820014204.25035-3-huwei013@chinasoftinc.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200820014204.25035-1-huwei013@chinasoftinc.com>
References: <20200818120254.72792-1-huwei013@chinasoftinc.com> <20200820014204.25035-1-huwei013@chinasoftinc.com>
MIME-Version: 1.0
X-Originating-IP: [65.49.108.226]
Subject: [dpdk-dev] [PATCH v2 2/4] app/testpmd: fix VLAN offload configuration when config fail
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Chengchang Tang

When configuring VLAN offloads fails after the port has been started, there is no need to update the port configuration. Currently, when a user configures an unsupported VLAN offload and the call fails, a subsequent port restart also fails, because the cached configuration has already been refreshed. This patch makes the functions return directly instead of refreshing the configuration when execution fails.
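
The pattern behind this fix, as a minimal standalone sketch (plain C with illustrative names, not the DPDK API): the cached offload mask is committed only after the device has accepted the new value, so a later restart re-applies a configuration the device is known to support.

#include <stdio.h>
#include <stdint.h>
#include <errno.h>

#define SUPPORTED_OFFLOADS 0x3ULL      /* assumed device capabilities */

struct fake_port {
    uint64_t hw_offloads;              /* what the "device" currently uses */
    uint64_t cached_offloads;          /* testpmd-style cached configuration */
};

static int dev_set_offloads(struct fake_port *p, uint64_t req)
{
    if (req & ~SUPPORTED_OFFLOADS)
        return -ENOTSUP;               /* unsupported bit requested */
    p->hw_offloads = req;
    return 0;
}

static void set_offloads(struct fake_port *p, uint64_t req)
{
    int diag = dev_set_offloads(p, req);

    if (diag < 0) {
        printf("setting offloads 0x%llx failed, diag=%d\n",
               (unsigned long long)req, diag);
        return;                        /* leave the cached config alone */
    }
    p->cached_offloads = req;          /* commit only on success */
}

int main(void)
{
    struct fake_port p = { .hw_offloads = 0x1, .cached_offloads = 0x1 };

    set_offloads(&p, 0x8);  /* rejected: cache must stay at 0x1 */
    set_offloads(&p, 0x3);  /* accepted: cache follows the device */
    printf("hw=0x%llx cached=0x%llx\n",
           (unsigned long long)p.hw_offloads,
           (unsigned long long)p.cached_offloads);
    return 0;
}
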
Fixes: 384161e00627 ("app/testpmd: adjust on the fly VLAN configuration")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang
Signed-off-by: Wei Hu (Xavier)
Reviewed-by: Ferruh Yigit
---
 app/test-pmd/config.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 30bee3324..6e8e05ab1 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -3390,9 +3390,11 @@ vlan_extend_set(portid_t port_id, int on)
     }
 
     diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
-    if (diag < 0)
+    if (diag < 0) {
         printf("rx_vlan_extend_set(port_pi=%d, on=%d) failed "
                "diag=%d\n", port_id, on, diag);
+        return;
+    }
     ports[port_id].dev_conf.rxmode.offloads = port_rx_offloads;
 }
 
@@ -3417,9 +3419,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
     }
 
     diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
-    if (diag < 0)
+    if (diag < 0) {
         printf("rx_vlan_strip_set(port_pi=%d, on=%d) failed "
                "diag=%d\n", port_id, on, diag);
+        return;
+    }
     ports[port_id].dev_conf.rxmode.offloads = port_rx_offloads;
 }
 
@@ -3458,9 +3462,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
     }
 
     diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
-    if (diag < 0)
+    if (diag < 0) {
         printf("rx_vlan_filter_set(port_pi=%d, on=%d) failed "
                "diag=%d\n", port_id, on, diag);
+        return;
+    }
     ports[port_id].dev_conf.rxmode.offloads = port_rx_offloads;
 }
 
@@ -3485,9 +3491,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
     }
 
     diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
-    if (diag < 0)
+    if (diag < 0) {
         printf("%s(port_pi=%d, on=%d) failed "
                "diag=%d\n", __func__, port_id, on, diag);
+        return;
+    }
     ports[port_id].dev_conf.rxmode.offloads = port_rx_offloads;
 }
 

From patchwork Thu Aug 20 01:42:03 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wei Hu (Xavier)"
X-Patchwork-Id: 75725
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 22512A04AF; Thu, 20 Aug 2020 03:43:09 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 08F0A1C026; Thu, 20 Aug 2020 03:43:09 +0200 (CEST)
Received: from mail.chinasoftinc.com (unknown [114.113.233.8]) by dpdk.org (Postfix) with ESMTP id E7A661BEC3 for ; Thu, 20 Aug 2020 03:43:05 +0200 (CEST)
Received: from localhost.localdomain (65.49.108.226) by INCCAS001.ito.icss (10.168.0.60) with Microsoft SMTP Server id 14.3.487.0; Thu, 20 Aug 2020 09:43:02 +0800
From: "Wei Hu (Xavier)"
To: Wenzhuo Lu , Beilei Xing , Bernard Iremonger , Yongseok Koh , Konstantin Ananyev , Pablo de Lara
CC: ,
Date: Thu, 20 Aug 2020 09:42:03 +0800
Message-ID: <20200820014204.25035-4-huwei013@chinasoftinc.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200820014204.25035-1-huwei013@chinasoftinc.com>
References: <20200818120254.72792-1-huwei013@chinasoftinc.com> <20200820014204.25035-1-huwei013@chinasoftinc.com>
MIME-Version: 1.0
X-Originating-IP: [65.49.108.226]
Subject: [dpdk-dev] [PATCH v2 3/4] app/testpmd: fix packet header in txonly mode
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Chengchang Tang

In txonly forward mode, the packet header is fixed by the initial setup, including the packet length and checksums, so when the generated packets vary, the header may become incorrect. Currently, there are two ways in txonly mode to randomly change the packets:

1. Set txsplit to rand and txpkts to x[,y]*: the number of segments of each packet is a random value between 1 and the total number of segments determined by the txpkts setting. The steps are as follows:
   a) ./testpmd -w xxx -l xx -n 4 -- -i --rxd xxxx --txd xxxx
   b) set fwd txonly
   c) set txsplit rand
   d) set txpkts 2048,2048,2048,2048
   e) start
   The nb_segs of the packets sent by testpmd will be 1~4, so the real packet length will be 2048, 4096, 6144 or 8192. But the lengths in the IP and UDP headers stay fixed at 8178 and 8158, i.e. the values for the largest 8192-byte packet after subtracting the 14-byte Ethernet header and, for UDP, the additional 20-byte IPv4 header.

2. Set --txonly-multi-flow: the IP address is varied to generate multiple flows. The steps are as follows:
   a) ./testpmd -w xxx -l xx -n 4 -- -i --txonly-multi-flow
   b) set fwd txonly
   c) start
   The IP address of each packet changes randomly, but since the header is prepared only once, the checksum may be wrong.

Therefore, this patch adds a function that updates the packet length and checksums in the packet header when txsplit is set to rand or multi-flow is enabled.
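
For reference, a minimal standalone arithmetic check (plain C, not part of the patch; the usual 14/20/8-byte Ethernet/IPv4/UDP header sizes are assumed) of the lengths update_pkt_header() has to write. Only the largest 4 x 2048-byte packet matches the 8178/8158 values baked into the fixed header, which is why shorter random splits end up with wrong header lengths.

#include <stdio.h>
#include <stdint.h>

#define ETHER_HDR_LEN 14
#define IPV4_HDR_LEN  20
#define UDP_HDR_LEN    8

/* Print the IPv4 total_length and UDP dgram_len a packet of the given
 * overall size should carry. */
static void print_header_lengths(uint32_t total_pkt_len)
{
    uint16_t data_len = (uint16_t)(total_pkt_len -
            (ETHER_HDR_LEN + IPV4_HDR_LEN + UDP_HDR_LEN));
    uint16_t udp_len = (uint16_t)(data_len + UDP_HDR_LEN);
    uint16_t ip_len = (uint16_t)(udp_len + IPV4_HDR_LEN);

    printf("pkt=%u: ip.total_length=%u udp.dgram_len=%u\n",
           total_pkt_len, ip_len, udp_len);
}

int main(void)
{
    uint32_t segs;

    /* 1 to 4 segments of 2048 bytes, as in the txpkts example above */
    for (segs = 1; segs <= 4; segs++)
        print_header_lengths(segs * 2048);
    return 0;
}
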
Fixes: 82010ef55e7c ("app/testpmd: make txonly mode generate multiple flows")
Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing packets")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang
Signed-off-by: Wei Hu (Xavier)
---
v1 -> v2: fix TYPO_SPELLING warning in the commit log.
---
 app/test-pmd/txonly.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 3bae367ee..ad3b0a765 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -156,6 +156,34 @@ setup_pkt_udp_ip_headers(struct rte_ipv4_hdr *ip_hdr,
     ip_hdr->hdr_checksum = (uint16_t) ip_cksum;
 }
 
+static inline void
+update_pkt_header(struct rte_mbuf *pkt, uint32_t total_pkt_len)
+{
+    struct rte_ipv4_hdr *ip_hdr;
+    struct rte_udp_hdr *udp_hdr;
+    uint16_t pkt_data_len;
+    uint16_t pkt_len;
+
+    pkt_data_len = (uint16_t) (total_pkt_len - (
+                    sizeof(struct rte_ether_hdr) +
+                    sizeof(struct rte_ipv4_hdr) +
+                    sizeof(struct rte_udp_hdr)));
+    /* update udp pkt length */
+    udp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *,
+                sizeof(struct rte_ether_hdr) +
+                sizeof(struct rte_ipv4_hdr));
+    pkt_len = (uint16_t) (pkt_data_len + sizeof(struct rte_udp_hdr));
+    udp_hdr->dgram_len = RTE_CPU_TO_BE_16(pkt_len);
+
+    /* update ip pkt length and csum */
+    ip_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                sizeof(struct rte_ether_hdr));
+    ip_hdr->hdr_checksum = 0;
+    pkt_len = (uint16_t) (pkt_len + sizeof(struct rte_ipv4_hdr));
+    ip_hdr->total_length = RTE_CPU_TO_BE_16(pkt_len);
+    ip_hdr->hdr_checksum = rte_ipv4_cksum(ip_hdr);
+}
+
 static inline bool
 pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
           struct rte_ether_hdr *eth_hdr, const uint16_t vlan_tci,
@@ -223,6 +251,10 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
     copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
             sizeof(struct rte_ether_hdr) +
             sizeof(struct rte_ipv4_hdr));
+
+    if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow)
+        update_pkt_header(pkt, pkt_len);
+
     if (unlikely(timestamp_enable)) {
         uint64_t skew = RTE_PER_LCORE(timestamp_qskew);
         struct {

From patchwork Thu Aug 20 01:42:04 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Wei Hu (Xavier)"
X-Patchwork-Id: 75726
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path:
X-Original-To: patchwork@inbox.dpdk.org
Delivered-To: patchwork@inbox.dpdk.org
Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 80DEFA04AF; Thu, 20 Aug 2020 03:43:23 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4552F1C069; Thu, 20 Aug 2020 03:43:23 +0200 (CEST)
Received: from mail.chinasoftinc.com (unknown [114.113.233.8]) by dpdk.org (Postfix) with ESMTP id 6D9F01C069 for ; Thu, 20 Aug 2020 03:43:20 +0200 (CEST)
Received: from localhost.localdomain (65.49.108.226) by INCCAS001.ito.icss (10.168.0.60) with Microsoft SMTP Server id 14.3.487.0; Thu, 20 Aug 2020 09:43:16 +0800
From: "Wei Hu (Xavier)"
To: Wenzhuo Lu , Beilei Xing , Bernard Iremonger , Shahaf Shuler , Ferruh Yigit , Qi Zhang
CC: ,
Date: Thu, 20 Aug 2020 09:42:04 +0800
Message-ID: <20200820014204.25035-5-huwei013@chinasoftinc.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200820014204.25035-1-huwei013@chinasoftinc.com>
References: <20200818120254.72792-1-huwei013@chinasoftinc.com> <20200820014204.25035-1-huwei013@chinasoftinc.com>
MIME-Version: 1.0
X-Originating-IP: [65.49.108.226]
Subject: [dpdk-dev] [PATCH v2 4/4] app/testpmd: fix displaying Rx Tx queues information
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Huisong Li

Currently, the Rx/Tx queue information from the PMD is not displayed accurately by the rxtx_config_display function, because "ports[pid].rx_conf" and "ports[pid].tx_conf" maintained in the testpmd application may not hold the values actually used by the PMD. For instance, when the user does not set a field, the PMD falls back to its own default value. This patch fixes rxtx_config_display so that the real Rx/Tx queue information is displayed for PMDs that implement the .rxq_info_get and .txq_info_get ops callbacks.
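
The fallback logic the patch introduces, as a minimal standalone sketch (plain C with illustrative names, not the DPDK API): prefer the values the driver reports back, and fall back to the locally cached configuration only when the query is not implemented.

#include <stdio.h>
#include <stdint.h>
#include <errno.h>

struct queue_info { uint16_t nb_desc; uint16_t free_thresh; };

/* Stand-in for a driver queue-info query: fails when the driver has no
 * .rxq_info_get callback, otherwise reports what the hardware really uses. */
static int driver_rxq_info_get(int supported, struct queue_info *qinfo)
{
    if (!supported)
        return -ENOTSUP;
    qinfo->nb_desc = 512;          /* e.g. a default chosen by the PMD */
    qinfo->free_thresh = 32;
    return 0;
}

static void show_rxq(int driver_supported)
{
    struct queue_info qinfo;
    struct queue_info cached = { 0, 0 };   /* app never set these fields */
    uint16_t nb_desc, free_thresh;

    if (driver_rxq_info_get(driver_supported, &qinfo) != 0) {
        nb_desc = cached.nb_desc;          /* old behaviour: cached values */
        free_thresh = cached.free_thresh;
    } else {
        nb_desc = qinfo.nb_desc;           /* what the PMD really uses */
        free_thresh = qinfo.free_thresh;
    }
    printf("RX desc=%u - RX free threshold=%u\n", nb_desc, free_thresh);
}

int main(void)
{
    show_rxq(0);  /* no callback: cached (possibly unset) values */
    show_rxq(1);  /* callback available: actual driver values */
    return 0;
}
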
Fixes: 75c530c1bd5351 ("app/testpmd: fix port configuration print")
Fixes: d44f8a485f5d1f ("app/testpmd: enable per queue configure")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li
Signed-off-by: Wei Hu (Xavier)
Reviewed-by: Ferruh Yigit
---
 app/test-pmd/config.c | 64 +++++++++++++++++++++++++++++++------------
 1 file changed, 47 insertions(+), 17 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 6e8e05ab1..3ab24ebd2 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2089,10 +2089,17 @@ rxtx_config_display(void)
         struct rte_eth_txconf *tx_conf = &ports[pid].tx_conf[0];
         uint16_t *nb_rx_desc = &ports[pid].nb_rx_desc[0];
         uint16_t *nb_tx_desc = &ports[pid].nb_tx_desc[0];
-        uint16_t nb_rx_desc_tmp;
-        uint16_t nb_tx_desc_tmp;
         struct rte_eth_rxq_info rx_qinfo;
         struct rte_eth_txq_info tx_qinfo;
+        uint16_t rx_free_thresh_tmp;
+        uint16_t tx_free_thresh_tmp;
+        uint16_t tx_rs_thresh_tmp;
+        uint16_t nb_rx_desc_tmp;
+        uint16_t nb_tx_desc_tmp;
+        uint64_t offloads_tmp;
+        uint8_t pthresh_tmp;
+        uint8_t hthresh_tmp;
+        uint8_t wthresh_tmp;
         int32_t rc;
 
         /* per port config */
@@ -2106,41 +2113,64 @@ rxtx_config_display(void)
         /* per rx queue config only for first queue to be less verbose */
         for (qid = 0; qid < 1; qid++) {
             rc = rte_eth_rx_queue_info_get(pid, qid, &rx_qinfo);
-            if (rc)
+            if (rc) {
                 nb_rx_desc_tmp = nb_rx_desc[qid];
-            else
+                rx_free_thresh_tmp =
+                    rx_conf[qid].rx_free_thresh;
+                pthresh_tmp = rx_conf[qid].rx_thresh.pthresh;
+                hthresh_tmp = rx_conf[qid].rx_thresh.hthresh;
+                wthresh_tmp = rx_conf[qid].rx_thresh.wthresh;
+                offloads_tmp = rx_conf[qid].offloads;
+            } else {
                 nb_rx_desc_tmp = rx_qinfo.nb_desc;
+                rx_free_thresh_tmp =
+                    rx_qinfo.conf.rx_free_thresh;
+                pthresh_tmp = rx_qinfo.conf.rx_thresh.pthresh;
+                hthresh_tmp = rx_qinfo.conf.rx_thresh.hthresh;
+                wthresh_tmp = rx_qinfo.conf.rx_thresh.wthresh;
+                offloads_tmp = rx_qinfo.conf.offloads;
+            }
 
             printf(" RX queue: %d\n", qid);
             printf(" RX desc=%d - RX free threshold=%d\n",
-                nb_rx_desc_tmp, rx_conf[qid].rx_free_thresh);
+                nb_rx_desc_tmp, rx_free_thresh_tmp);
             printf(" RX threshold registers: pthresh=%d hthresh=%d "
                 " wthresh=%d\n",
-                rx_conf[qid].rx_thresh.pthresh,
-                rx_conf[qid].rx_thresh.hthresh,
-                rx_conf[qid].rx_thresh.wthresh);
-            printf(" RX Offloads=0x%"PRIx64"\n",
-                rx_conf[qid].offloads);
+                pthresh_tmp, hthresh_tmp, wthresh_tmp);
+            printf(" RX Offloads=0x%"PRIx64"\n", offloads_tmp);
         }
 
         /* per tx queue config only for first queue to be less verbose */
         for (qid = 0; qid < 1; qid++) {
             rc = rte_eth_tx_queue_info_get(pid, qid, &tx_qinfo);
-            if (rc)
+            if (rc) {
                 nb_tx_desc_tmp = nb_tx_desc[qid];
-            else
+                tx_free_thresh_tmp =
+                    tx_conf[qid].tx_free_thresh;
+                pthresh_tmp = tx_conf[qid].tx_thresh.pthresh;
+                hthresh_tmp = tx_conf[qid].tx_thresh.hthresh;
+                wthresh_tmp = tx_conf[qid].tx_thresh.wthresh;
+                offloads_tmp = tx_conf[qid].offloads;
+                tx_rs_thresh_tmp = tx_conf[qid].tx_rs_thresh;
+            } else {
                 nb_tx_desc_tmp = tx_qinfo.nb_desc;
+                tx_free_thresh_tmp =
+                    tx_qinfo.conf.tx_free_thresh;
+                pthresh_tmp = tx_qinfo.conf.tx_thresh.pthresh;
+                hthresh_tmp = tx_qinfo.conf.tx_thresh.hthresh;
+                wthresh_tmp = tx_qinfo.conf.tx_thresh.wthresh;
+                offloads_tmp = tx_qinfo.conf.offloads;
+                tx_rs_thresh_tmp = tx_qinfo.conf.tx_rs_thresh;
+            }
 
             printf(" TX queue: %d\n", qid);
             printf(" TX desc=%d - TX free threshold=%d\n",
-                nb_tx_desc_tmp, tx_conf[qid].tx_free_thresh);
+                nb_tx_desc_tmp, tx_free_thresh_tmp);
             printf(" TX threshold registers: pthresh=%d hthresh=%d "
                 " wthresh=%d\n",
-                tx_conf[qid].tx_thresh.pthresh,
-                tx_conf[qid].tx_thresh.hthresh,
-                tx_conf[qid].tx_thresh.wthresh);
+                pthresh_tmp, hthresh_tmp, wthresh_tmp);
             printf(" TX offloads=0x%"PRIx64" - TX RS bit threshold=%d\n",
-                tx_conf[qid].offloads, tx_conf->tx_rs_thresh);
+                offloads_tmp, tx_rs_thresh_tmp);
         }
     }
 }