From patchwork Wed Sep 30 12:12:19 2015
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 7311
From: Bruce Richardson
To: dev@dpdk.org
Date: Wed, 30 Sep 2015 13:12:19 +0100
Message-Id: <1443615142-24381-2-git-send-email-bruce.richardson@intel.com>
In-Reply-To: <1443615142-24381-1-git-send-email-bruce.richardson@intel.com>
References: <1443615142-24381-1-git-send-email-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH 1/4] ring: enhance rte_eth_from_rings
List-Id: patches and discussions about DPDK
"dev" The ring ethdev creation function creates an ethdev, but does not actually set it up for use. Even if it's just a single ring, the user still needs to create a mempool, call rte_eth_dev_configure, then call rx and tx setup functions before the ethdev can be used. This patch changes things so that the ethdev is fully set up after the call to create the ethdev. The above-mentionned functions can still be called - as will be the case, for instance, if the NIC is created via commandline parameters - but they no longer are essential. The function now also sets rte_errno appropriately on error, so the caller can get a better indication of why a call may have failed. Signed-off-by: Bruce Richardson --- drivers/net/ring/rte_eth_ring.c | 47 +++++++++++++++++++++++++++++++++++------ 1 file changed, 41 insertions(+), 6 deletions(-) diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c index 0ba36d5..bfd6f4e 100644 --- a/drivers/net/ring/rte_eth_ring.c +++ b/drivers/net/ring/rte_eth_ring.c @@ -39,6 +39,7 @@ #include #include #include +#include #define ETH_RING_NUMA_NODE_ACTION_ARG "nodeaction" #define ETH_RING_ACTION_CREATE "CREATE" @@ -276,10 +277,18 @@ rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[], unsigned i; /* do some parameter checking */ - if (rx_queues == NULL && nb_rx_queues > 0) + if (rx_queues == NULL && nb_rx_queues > 0) { + rte_errno = EINVAL; goto error; - if (tx_queues == NULL && nb_tx_queues > 0) + } + if (tx_queues == NULL && nb_tx_queues > 0) { + rte_errno = EINVAL; + goto error; + } + if (nb_rx_queues > RTE_PMD_RING_MAX_RX_RINGS) { + rte_errno = EINVAL; goto error; + } RTE_LOG(INFO, PMD, "Creating rings-backed ethdev on numa socket %u\n", numa_node); @@ -288,21 +297,43 @@ rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[], * and internal (private) data */ data = rte_zmalloc_socket(name, sizeof(*data), 0, numa_node); - if (data == NULL) + if (data == NULL) { + rte_errno = ENOMEM; 
goto error; + } + + data->rx_queues = rte_zmalloc_socket(name, sizeof(void *) * nb_rx_queues, + 0, numa_node); + if (data->rx_queues == NULL) { + rte_errno = ENOMEM; + goto error; + } + + data->tx_queues = rte_zmalloc_socket(name, sizeof(void *) * nb_tx_queues, + 0, numa_node); + if (data->tx_queues == NULL) { + rte_errno = ENOMEM; + goto error; + } pci_dev = rte_zmalloc_socket(name, sizeof(*pci_dev), 0, numa_node); - if (pci_dev == NULL) + if (pci_dev == NULL) { + rte_errno = ENOMEM; goto error; + } internals = rte_zmalloc_socket(name, sizeof(*internals), 0, numa_node); - if (internals == NULL) + if (internals == NULL) { + rte_errno = ENOMEM; goto error; + } /* reserve an ethdev entry */ eth_dev = rte_eth_dev_allocate(name, RTE_ETH_DEV_VIRTUAL); - if (eth_dev == NULL) + if (eth_dev == NULL) { + rte_errno = ENOSPC; goto error; + } /* now put it all together @@ -318,9 +349,11 @@ rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[], internals->nb_tx_queues = nb_tx_queues; for (i = 0; i < nb_rx_queues; i++) { internals->rx_ring_queues[i].rng = rx_queues[i]; + data->rx_queues[i] = &internals->rx_ring_queues[i]; } for (i = 0; i < nb_tx_queues; i++) { internals->tx_ring_queues[i].rng = tx_queues[i]; + data->tx_queues[i] = &internals->tx_ring_queues[i]; } rte_ring_pmd.pci_drv.name = ring_ethdev_driver_name; @@ -350,6 +383,8 @@ rte_eth_from_rings(const char *name, struct rte_ring *const rx_queues[], return data->port_id; error: + rte_free(data->rx_queues); + rte_free(data->tx_queues); rte_free(data); rte_free(pci_dev); rte_free(internals);