From patchwork Thu Dec  4 11:47:30 2014
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 1751
From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Date: Thu, 4 Dec 2014 11:47:30 +0000
Message-Id: <1417693650-9813-1-git-send-email-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH] ixgbe: fix multi-process support

When using multiple processes, the TX function used in all processes
should be the same, otherwise the secondary processes cannot transmit
more than tx-ring-size - 1 packets. To achieve this, we extract the
code that selects the ixgbe TX function into a separate function inside
the ixgbe driver, and call that function from a secondary process when
it attaches to an already-configured NIC.

Testing with the symmetric MP example app shows that we are able to RX
and TX from both primary and secondary processes once this patch is
applied.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_pmd_ixgbe/ixgbe_ethdev.c   |  7 +++-
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c     | 61 +++++++++++++++++++++--------------
 lib/librte_pmd_ixgbe/ixgbe_rxtx.h     |  7 ++++
 lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c |  3 ++
 4 files changed, 52 insertions(+), 26 deletions(-)

diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index 937fc3c..4abab25 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -68,6 +68,7 @@
 #include "ixgbe/ixgbe_common.h"
 #include "ixgbe_ethdev.h"
 #include "ixgbe_bypass.h"
+#include "ixgbe_rxtx.h"
 
 /*
  * High threshold controlling when to start sending XOFF frames. Must be at
@@ -743,8 +744,12 @@ eth_ixgbe_dev_init(__attribute__((unused)) struct eth_driver *eth_drv,
 
 	/* for secondary processes, we don't initialise any further as primary
 	 * has already done this work. Only check we don't need a different
-	 * RX function */
+	 * RX and TX function */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY){
+		struct igb_tx_queue *txq;
+		txq = eth_dev->data->tx_queues[eth_dev->data->nb_tx_queues-1];
+		set_tx_function(eth_dev, txq);
+
 		if (eth_dev->data->scattered_rx)
 			eth_dev->rx_pkt_burst = ixgbe_recv_scattered_pkts;
 		return 0;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 5c36bff..263c815 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -1771,6 +1771,40 @@ static struct ixgbe_txq_ops def_txq_ops = {
 	.reset = ixgbe_reset_tx_queue,
 };
 
+/* takes an ethdev and a queue and sets up the tx function to be used based on
+ * the queue parameters. Used in tx_queue_setup by primary process and then
+ * in dev_init by secondary process when attaching to an existing ethdev
+ */
+void
+set_tx_function(struct rte_eth_dev* dev, struct igb_tx_queue* txq)
+{
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS)
+			&& (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)) {
+		PMD_INIT_LOG(INFO, "Using simple tx code path");
+#ifdef RTE_IXGBE_INC_VECTOR
+		if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
+				ixgbe_txq_vec_setup(txq) == 0) {
+			PMD_INIT_LOG(INFO, "Vector tx enabled.");
+			dev->tx_pkt_burst = ixgbe_xmit_pkts_vec;
+		}
+		else
+#endif
+		dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
+	} else {
+		PMD_INIT_LOG(INFO, "Using full-featured tx code path");
+		PMD_INIT_LOG(INFO,
+				" - txq_flags = %lx " "[IXGBE_SIMPLE_FLAGS=%lx]",
+				(long unsigned)txq->txq_flags,
+				(long unsigned)IXGBE_SIMPLE_FLAGS);
+		PMD_INIT_LOG(INFO,
+				" - tx_rs_thresh = %lu " "[RTE_PMD_IXGBE_TX_MAX_BURST=%lu]",
+				(long unsigned)txq->tx_rs_thresh,
+				(long unsigned)RTE_PMD_IXGBE_TX_MAX_BURST);
+		dev->tx_pkt_burst = ixgbe_xmit_pkts;
+	}
+}
+
 int
 ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
@@ -1933,31 +1967,8 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
 		     txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
 
-	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
-	if (((txq->txq_flags & IXGBE_SIMPLE_FLAGS) == IXGBE_SIMPLE_FLAGS) &&
-	    (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)) {
-		PMD_INIT_LOG(INFO, "Using simple tx code path");
-#ifdef RTE_IXGBE_INC_VECTOR
-		if (txq->tx_rs_thresh <= RTE_IXGBE_TX_MAX_FREE_BUF_SZ &&
-		    ixgbe_txq_vec_setup(txq) == 0) {
-			PMD_INIT_LOG(INFO, "Vector tx enabled.");
-			dev->tx_pkt_burst = ixgbe_xmit_pkts_vec;
-		}
-		else
-#endif
-		dev->tx_pkt_burst = ixgbe_xmit_pkts_simple;
-	} else {
-		PMD_INIT_LOG(INFO, "Using full-featured tx code path");
-		PMD_INIT_LOG(INFO, " - txq_flags = %lx "
-			     "[IXGBE_SIMPLE_FLAGS=%lx]",
-			     (long unsigned)txq->txq_flags,
-			     (long unsigned)IXGBE_SIMPLE_FLAGS);
-		PMD_INIT_LOG(INFO, " - tx_rs_thresh = %lu "
-			     "[RTE_PMD_IXGBE_TX_MAX_BURST=%lu]",
-			     (long unsigned)txq->tx_rs_thresh,
-			     (long unsigned)RTE_PMD_IXGBE_TX_MAX_BURST);
-		dev->tx_pkt_burst = ixgbe_xmit_pkts;
-	}
+	/* set up vector or scalar TX function as appropriate */
+	set_tx_function(dev, txq);
 
 	txq->ops->reset(txq);
 
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 13099af..873656d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -248,6 +248,13 @@ struct ixgbe_txq_ops {
 		IXGBE_ADVTXD_DCMD_DEXT |\
 		IXGBE_ADVTXD_DCMD_EOP)
 
+
+/* takes an ethdev and a queue and sets up the tx function to be used based on
+ * the queue parameters. Used in tx_queue_setup by primary process and then
+ * in dev_init by secondary process when attaching to an existing ethdev
+ */
+void set_tx_function(struct rte_eth_dev* dev, struct igb_tx_queue* txq);
+
 #ifdef RTE_IXGBE_INC_VECTOR
 uint16_t ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
index 579bc46..6755fad 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c
@@ -748,6 +748,9 @@ int ixgbe_txq_vec_setup(struct igb_tx_queue *txq)
 	if (txq->sw_ring == NULL)
 		return -1;
 
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
 	/* leave the first one for overflow */
 	txq->sw_ring = (struct igb_tx_entry *)
 		((struct igb_tx_entry_v *)txq->sw_ring + 1);
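
For readers unfamiliar with the multi-process model being exercised here, below is a
minimal, hypothetical sketch of the kind of secondary process this patch fixes: it
attaches to a port and a mempool that the primary process has already configured, then
transmits bursts. It is illustration only, not part of the patch or of the symmetric MP
example app; the pool name "mbuf_pool", the use of port 0 / queue 0, the burst size and
the 64-byte frames are all assumptions, and error handling is kept minimal.

/* mp_tx_sketch.c: hypothetical secondary-process TX loop (illustration only,
 * not part of this patch). Assumes a primary process has already configured
 * and started port 0 and created a mempool named "mbuf_pool". */
#include <stdint.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

#define BURST 32

int
main(int argc, char **argv)
{
	struct rte_mempool *pool;
	struct rte_mbuf *bufs[BURST];
	unsigned b;
	uint16_t i, sent;

	/* run with EAL option --proc-type=secondary so that this process
	 * attaches to the primary's hugepage memory and ethdev state */
	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "EAL initialisation failed\n");
	if (rte_eal_process_type() != RTE_PROC_SECONDARY)
		rte_exit(EXIT_FAILURE, "this sketch expects a secondary process\n");

	pool = rte_mempool_lookup("mbuf_pool");	/* created by the primary */
	if (pool == NULL)
		rte_exit(EXIT_FAILURE, "cannot find mbuf pool\n");

	/* send many more packets than a typical TX ring holds; before this
	 * fix a secondary process would stall after tx-ring-size - 1 packets
	 * because its TX function did not match the primary's */
	for (b = 0; b < 1024; b++) {
		for (i = 0; i < BURST; i++) {
			bufs[i] = rte_pktmbuf_alloc(pool);
			if (bufs[i] == NULL)
				break;
			/* real code would build an Ethernet frame here */
			rte_pktmbuf_append(bufs[i], 64);
		}
		sent = rte_eth_tx_burst(0 /* port */, 0 /* queue */, bufs, i);
		while (sent < i)	/* free anything the driver did not take */
			rte_pktmbuf_free(bufs[sent++]);
	}
	return 0;
}

The symmetric MP example mentioned in the commit message exercises essentially this
pattern from both processes. With the patch applied, the secondary's transmissions keep
flowing past the ring-size boundary because set_tx_function() gives it the same TX burst
function the primary selected at queue setup time.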