The SoftNIC PMD is intended to provide SW fall-back options for specific
ethdev APIs, in a generic way, for NICs that do not support those
features. Currently, the only implemented ethdev API is Traffic
Management (TM), but other ethdev APIs, such as rte_flow and traffic
metering & policing, can be easily implemented.
Overview:
* Generic: The SoftNIC PMD works with any "hard" PMD that implements the
ethdev API. It does not change the "hard" PMD in any way.
* Creation: For any given "hard" ethdev port, the user can decide to
create an associated "soft" ethdev port to drive the "hard" port. The
"soft" port is a virtual device that can be created at app start-up
through EAL vdev arg or later through the virtual device API.
* Configuration: The app explicitly decides which features are to be
enabled on the "soft" port and which features are still to be used from
the "hard" port. The app continues to explicitly configure both the
"hard" and the "soft" ports after the creation of the "soft" port.
* RX/TX: The app reads packets from/writes packets to the "soft" port
instead of the "hard" port. The RX and TX queues of the "soft" port are
thread safe, like those of any ethdev port.
* Execution: The "soft" port is a feature-rich NIC implemented by the CPU,
so the run function of the "soft" port has to be executed by the CPU in
order to get packets moving between the "hard" port and the app.
* Meets the NFV vision: The app should be (almost) agnostic of the NIC
implementation (different vendors/models, HW-SW mix); it should not
require changes to use different NICs and should use the same API for
all of them. If a NIC does not implement a specific feature, the HW
should be augmented with SW to provide the functionality while still
preserving the same API.
Traffic Management SW fall-back overview:
* Implements the ethdev traffic management API (rte_tm.h).
* Based on the existing librte_sched DPDK library.
Example: Create "soft" port for "hard" port "0000:04:00.1", enable the TM
feature with default settings:
--vdev 'net_softnic0,hard_name=0000:04:00.1,soft_tm=on'
Q1: Why the generic name, if only TM is supported (for now)?
A1: The intention is to have SoftNIC PMD implement many other (all?)
ethdev APIs under a single "ideal" ethdev, hence the generic name.
The initial motivation is TM API, but the mechanism is generic and can
be used for many other ethdev APIs. Somebody looking to provide SW
fall-back for other ethdev API is likely to end up inventing the same,
hence it would be good to consolidate all under a single PMD and have
the user explicitly enable/disable the features needed for each
"soft" device.
Q2: Are there any performance requirements for SoftNIC?
A2: Yes, performance should be great/decent for every feature, otherwise
the SW fall-back is unusable, thus useless.
Q3: Why not change the "hard" device (and keep a single device) instead of
creating a new "soft" device (and thus having two devices)?
A3: This is not possible with the current librte_ether ethdev
implementation. The ethdev->dev_ops are defined as a constant
structure, so they cannot be changed per device (nor per PMD). The
new ops also need memory space to store their context data
structures, which requires updating the ethdev->data->dev_private of
the existing device; at best, maybe a resize of
ethdev->data->dev_private could be done, assuming that librte_ether
introduced a way to find out its size, but this cannot be done while
the device is running. Other side effects might exist, as the changes
are very intrusive, and it likely needs more changes in librte_ether.
Q4: Why not call the SW fall-back dev_ops directly in librte_ether for
devices which do not support the specific feature? If the device
supports the capability, let's call its dev_ops, otherwise call the
SW fall-back dev_ops.
A4: First, for similar reasons to Q&A3. This removes the need to change
ethdev->dev_ops of the device, but it does not do anything to fix the
other significant issue of where to store the context data structures
needed by the SW fall-back functions (which, in this approach, are
called implicitly by librte_ether).
Second, the SW fall-back options should not be restricted arbitrarily
by the librte_ether library, the decision should belong to the app.
For example, the TM SW fall-back should not be limited to only
librte_sched, which (like any SW fall-back) is limited to a specific
hierarchy and feature set and cannot support every possible
hierarchy. If alternatives exist, the one to use should be picked by
the app, not by the ethdev layer.
Q5: Why is the app required to continue to configure both the "hard" and
the "soft" devices even after the "soft" device has been created? Why
not hide the "hard" device under the "soft" device and have the
"soft" device configure the "hard" device under the hood?
A5: This was the approach tried in the V2 of this patch set (overlay
"soft" device taking over the configuration of the underlay "hard"
device) and eventually dropped due to the increased complexity of
having to keep the configuration of two distinct devices in sync,
with a librte_ether implementation that is not friendly towards such
an approach. Basically, each ethdev API call for the overlay device
needs to configure the overlay device, invoke the same configuration
with possibly modified parameters for the underlay device, then
resume the configuration of the overlay device, turning this into a
device emulation project.
V2 minuses: increased complexity (deal with two devices at same time);
need to implement every ethdev API, even those not needed for the scope
of SW fall-back; intrusive; sometimes have to silently take decisions
that should be left to the app.
V3 pluses: lower complexity (only one device); only need to implement
those APIs that are in scope of the SW fall-back; non-intrusive (deal
with "hard" device through ethdev API); app decisions taken by the app
in an explicit way.
Q6: Why expose the SW fall-back in a PMD and not in a SW library?
A6: The SW fall-back for an ethdev API has to implement that specific
ethdev API (hence expose an ethdev object through a PMD), as opposed
to providing a different API. This approach allows the app to use the
same API (NFV vision). For example, we already have a library for TM
SW fall-back (librte_sched) that can be called directly by the apps
that need to call it outside of ethdev context (use-cases exist), but
an app that works with TM-aware NICs through the ethdev TM API would
have to be changed significantly in order to work with different
TM-agnostic NICs through the librte_sched API.
Q7: Why have all the SW fall-backs in a single PMD? Why not develop
the SW fall-back for each different ethdev API in a separate PMD, then
create a chain of "soft" devices for each "hard" device? Potentially,
this results in smaller PMDs that are easier to maintain.
A7: Arguments for single ethdev/PMD and against chain of ethdevs/PMDs:
1. All the existing PMDs for HW NICs implement a lot of features under
the same PMD, so there is no reason for the single PMD approach to break
code modularity. See the V3 code, a lot of care has been taken for
code modularity.
2. We should avoid the proliferation of SW PMDs.
3. A single device should be handled by a single PMD.
4. People are used to feature-rich PMDs, not single-feature PMDs, so
this would require a change of mindset.
5. [Configuration nightmare] A chain of "soft" devices attached to
single "hard" device requires the app to be aware that the N "soft"
devices in the chain plus the "hard" device refer to the same HW
device, and which device should be invoked to configure which
feature. Also, the length of the chain and the functionality of each
link are different for each HW device. This breaks the requirement
of preserving the same API while working with different NICs (NFV).
This most likely results in a configuration nightmare, nobody is
going to seriously use this.
6. [Feature inter-dependency] Sometimes different features need to be
configured and executed together (e.g. share the same set of
resources, are inter-dependent, etc), so it is better and more
performant to do them in the same ethdev/PMD.
7. [Code duplication] There is a lot of duplication in the
configuration code for the chain of ethdevs approach. The ethdev
dev_configure, rx_queue_setup, tx_queue_setup API functions have to
be implemented per device, and they become meaningless/inconsistent
with the chain approach.
8. [Data structure duplication] The per device data structures have to
be duplicated and read repeatedly for each "soft" ethdev. The
ethdev device, dev_private, data, per RX/TX queue data structures
have to be replicated per "soft" device. They have to be re-read for
each stage, so the same cache misses are now multiplied with the
number of stages in the chain.
9. [rte_ring proliferation] Thread safety requirements for ethdev
RX/TX queues require an rte_ring to be used for every RX/TX queue
of each "soft" ethdev. This rte_ring proliferation unnecessarily
increases the memory footprint and lowers performance, especially
when each "soft" ethdev ends up on a different CPU core (ping-pong
of cache lines).
10.[Meta-data proliferation] A chain of ethdevs is likely to result
in proliferation of meta-data that has to be passed between the
ethdevs (e.g. policing needs the output of flow classification),
which results in more cache line ping-pong between cores, hence
performance drops.
Jasvinder Singh (5):
net/softnic: add softnic PMD
net/softnic: add traffic management support
net/softnic: add TM capabilities ops
net/softnic: add TM hierarchy related ops
app/testpmd: add traffic management forwarding mode
MAINTAINERS | 5 +
app/test-pmd/Makefile | 12 +
app/test-pmd/cmdline.c | 88 +
app/test-pmd/testpmd.c | 15 +
app/test-pmd/testpmd.h | 46 +
app/test-pmd/tm.c | 865 +++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 3 +-
doc/api/doxy-api.conf | 1 +
doc/guides/rel_notes/release_17_11.rst | 8 +
drivers/net/Makefile | 5 +
drivers/net/softnic/Makefile | 57 +
drivers/net/softnic/rte_eth_softnic.c | 852 +++++
drivers/net/softnic/rte_eth_softnic.h | 83 +
drivers/net/softnic/rte_eth_softnic_internals.h | 291 ++
drivers/net/softnic/rte_eth_softnic_tm.c | 3452 ++++++++++++++++++++
.../net/softnic/rte_pmd_eth_softnic_version.map | 7 +
mk/rte.app.mk | 5 +-
18 files changed, 5798 insertions(+), 2 deletions(-)
create mode 100644 app/test-pmd/tm.c
create mode 100644 drivers/net/softnic/Makefile
create mode 100644 drivers/net/softnic/rte_eth_softnic.c
create mode 100644 drivers/net/softnic/rte_eth_softnic.h
create mode 100644 drivers/net/softnic/rte_eth_softnic_internals.h
create mode 100644 drivers/net/softnic/rte_eth_softnic_tm.c
create mode 100644 drivers/net/softnic/rte_pmd_eth_softnic_version.map
Series Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Series Acked-by: Thomas Monjalon <thomas@monjalon.net>
@@ -524,6 +524,11 @@ M: Gaetan Rivet <gaetan.rivet@6wind.com>
F: drivers/net/failsafe/
F: doc/guides/nics/fail_safe.rst
+Softnic PMD
+M: Jasvinder Singh <jasvinder.singh@intel.com>
+M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+F: drivers/net/softnic
+
Crypto Drivers
--------------
@@ -271,6 +271,11 @@ CONFIG_RTE_LIBRTE_SFC_EFX_PMD=y
CONFIG_RTE_LIBRTE_SFC_EFX_DEBUG=n
#
+# Compile SOFTNIC PMD
+#
+CONFIG_RTE_LIBRTE_PMD_SOFTNIC=y
+
+#
# Compile software PMD backed by SZEDATA2 device
#
CONFIG_RTE_LIBRTE_PMD_SZEDATA2=n
@@ -55,7 +55,8 @@ The public API headers are grouped by topics:
[KNI] (@ref rte_kni.h),
[ixgbe] (@ref rte_pmd_ixgbe.h),
[i40e] (@ref rte_pmd_i40e.h),
- [crypto_scheduler] (@ref rte_cryptodev_scheduler.h)
+ [crypto_scheduler] (@ref rte_cryptodev_scheduler.h),
+ [softnic] (@ref rte_eth_softnic.h)
- **memory**:
[memseg] (@ref rte_memory.h),
@@ -32,6 +32,7 @@ PROJECT_NAME = DPDK
INPUT = doc/api/doxy-api-index.md \
drivers/crypto/scheduler \
drivers/net/bonding \
+ drivers/net/softnic \
drivers/net/i40e \
drivers/net/ixgbe \
lib/librte_eal/common/include \
@@ -58,6 +58,11 @@ New Features
* Support for Flow API
* Support for Tx and Rx descriptor status functions
+* **Added SoftNIC PMD.**
+
+ Added new SoftNIC PMD. This virtual device offers applications a software
+ fallback support for traffic management.
+
Resolved Issues
---------------
@@ -211,6 +216,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_pmd_ixgbe.so.1
librte_pmd_ring.so.2
librte_pmd_vhost.so.1
+ + librte_pmd_softnic.so.1
librte_port.so.3
librte_power.so.1
librte_reorder.so.1
@@ -112,4 +112,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += vhost
endif # $(CONFIG_RTE_LIBRTE_VHOST)
DEPDIRS-vhost = $(core-libs) librte_vhost
+ifeq ($(CONFIG_RTE_LIBRTE_SCHED),y)
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_SOFTNIC) += softnic
+endif # $(CONFIG_RTE_LIBRTE_SCHED)
+DEPDIRS-softnic = $(core-libs) librte_sched
+
include $(RTE_SDK)/mk/rte.subdir.mk
new file mode 100644
@@ -0,0 +1,56 @@
+# BSD LICENSE
+#
+# Copyright(c) 2017 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_softnic.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_eth_softnic_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SOFTNIC) += rte_eth_softnic.c
+
+#
+# Export include files
+#
+SYMLINK-y-include += rte_eth_softnic.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
new file mode 100644
@@ -0,0 +1,591 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_vdev.h>
+#include <rte_malloc.h>
+#include <rte_vdev.h>
+#include <rte_kvargs.h>
+#include <rte_errno.h>
+#include <rte_ring.h>
+
+#include "rte_eth_softnic.h"
+#include "rte_eth_softnic_internals.h"
+
+#define DEV_HARD(p) \
+ (&rte_eth_devices[p->hard.port_id])
+
+#define PMD_PARAM_HARD_NAME "hard_name"
+#define PMD_PARAM_HARD_TX_QUEUE_ID "hard_tx_queue_id"
+
+static const char *pmd_valid_args[] = {
+ PMD_PARAM_HARD_NAME,
+ PMD_PARAM_HARD_TX_QUEUE_ID,
+ NULL
+};
+
+static const struct rte_eth_dev_info pmd_dev_info = {
+ .min_rx_bufsize = 0,
+ .max_rx_pktlen = UINT32_MAX,
+ .max_rx_queues = UINT16_MAX,
+ .max_tx_queues = UINT16_MAX,
+ .rx_desc_lim = {
+ .nb_max = UINT16_MAX,
+ .nb_min = 0,
+ .nb_align = 1,
+ },
+ .tx_desc_lim = {
+ .nb_max = UINT16_MAX,
+ .nb_min = 0,
+ .nb_align = 1,
+ },
+};
+
+static void
+pmd_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
+ struct rte_eth_dev_info *dev_info)
+{
+ memcpy(dev_info, &pmd_dev_info, sizeof(*dev_info));
+}
+
+static int
+pmd_dev_configure(struct rte_eth_dev *dev)
+{
+ struct pmd_internals *p = dev->data->dev_private;
+ struct rte_eth_dev *hard_dev = DEV_HARD(p);
+
+ if (dev->data->nb_rx_queues > hard_dev->data->nb_rx_queues)
+ return -1;
+
+ if (p->params.hard.tx_queue_id >= hard_dev->data->nb_tx_queues)
+ return -1;
+
+ return 0;
+}
+
+static int
+pmd_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ uint16_t nb_rx_desc __rte_unused,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf __rte_unused,
+ struct rte_mempool *mb_pool __rte_unused)
+{
+ struct pmd_internals *p = dev->data->dev_private;
+
+ if (p->params.soft.intrusive == 0) {
+ struct pmd_rx_queue *rxq;
+
+ rxq = rte_zmalloc_socket(p->params.soft.name,
+ sizeof(struct pmd_rx_queue), 0, socket_id);
+ if (rxq == NULL)
+ return -ENOMEM;
+
+ rxq->hard.port_id = p->hard.port_id;
+ rxq->hard.rx_queue_id = rx_queue_id;
+ dev->data->rx_queues[rx_queue_id] = rxq;
+ } else {
+ struct rte_eth_dev *hard_dev = DEV_HARD(p);
+ void *rxq = hard_dev->data->rx_queues[rx_queue_id];
+
+ if (rxq == NULL)
+ return -1;
+
+ dev->data->rx_queues[rx_queue_id] = rxq;
+ }
+ return 0;
+}
+
+static int
+pmd_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t tx_queue_id,
+ uint16_t nb_tx_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+ uint32_t size = RTE_ETH_NAME_MAX_LEN + strlen("_txq") + 4;
+ char name[size];
+ struct rte_ring *r;
+
+ snprintf(name, sizeof(name), "%s_txq%04x",
+ dev->data->name, tx_queue_id);
+ r = rte_ring_create(name, nb_tx_desc, socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (r == NULL)
+ return -1;
+
+ dev->data->tx_queues[tx_queue_id] = r;
+ return 0;
+}
+
+static int
+pmd_dev_start(struct rte_eth_dev *dev)
+{
+ struct pmd_internals *p = dev->data->dev_private;
+
+ dev->data->dev_link.link_status = ETH_LINK_UP;
+
+ if (p->params.soft.intrusive) {
+ struct rte_eth_dev *hard_dev = DEV_HARD(p);
+
+ /* The hard_dev->rx_pkt_burst should be stable by now */
+ dev->rx_pkt_burst = hard_dev->rx_pkt_burst;
+ }
+
+ return 0;
+}
+
+static void
+pmd_dev_stop(struct rte_eth_dev *dev)
+{
+ dev->data->dev_link.link_status = ETH_LINK_DOWN;
+}
+
+static void
+pmd_dev_close(struct rte_eth_dev *dev)
+{
+ uint32_t i;
+
+ /* TX queues */
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ rte_ring_free((struct rte_ring *)dev->data->tx_queues[i]);
+}
+
+static int
+pmd_link_update(struct rte_eth_dev *dev __rte_unused,
+ int wait_to_complete __rte_unused)
+{
+ return 0;
+}
+
+static const struct eth_dev_ops pmd_ops = {
+ .dev_configure = pmd_dev_configure,
+ .dev_start = pmd_dev_start,
+ .dev_stop = pmd_dev_stop,
+ .dev_close = pmd_dev_close,
+ .link_update = pmd_link_update,
+ .dev_infos_get = pmd_dev_infos_get,
+ .rx_queue_setup = pmd_rx_queue_setup,
+ .tx_queue_setup = pmd_tx_queue_setup,
+ .tm_ops_get = NULL,
+};
+
+static uint16_t
+pmd_rx_pkt_burst(void *rxq,
+ struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct pmd_rx_queue *rx_queue = rxq;
+
+ return rte_eth_rx_burst(rx_queue->hard.port_id,
+ rx_queue->hard.rx_queue_id,
+ rx_pkts,
+ nb_pkts);
+}
+
+static uint16_t
+pmd_tx_pkt_burst(void *txq,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ return (uint16_t)rte_ring_enqueue_burst(txq,
+ (void **)tx_pkts,
+ nb_pkts,
+ NULL);
+}
+
+static __rte_always_inline int
+run_default(struct rte_eth_dev *dev)
+{
+ struct pmd_internals *p = dev->data->dev_private;
+
+ /* Persistent context: Read Only (update not required) */
+ struct rte_mbuf **pkts = p->soft.def.pkts;
+ uint16_t nb_tx_queues = dev->data->nb_tx_queues;
+
+ /* Persistent context: Read - Write (update required) */
+ uint32_t txq_pos = p->soft.def.txq_pos;
+ uint32_t pkts_len = p->soft.def.pkts_len;
+ uint32_t flush_count = p->soft.def.flush_count;
+
+ /* Not part of the persistent context */
+ uint32_t pos;
+ uint16_t i;
+
+ /* Soft device TXQ read, Hard device TXQ write */
+ for (i = 0; i < nb_tx_queues; i++) {
+ struct rte_ring *txq = dev->data->tx_queues[txq_pos];
+
+ /* Read soft device TXQ burst to packet enqueue buffer */
+ pkts_len += rte_ring_sc_dequeue_burst(txq,
+ (void **)&pkts[pkts_len],
+ DEFAULT_BURST_SIZE,
+ NULL);
+
+ /* Increment soft device TXQ */
+ txq_pos++;
+ if (txq_pos >= nb_tx_queues)
+ txq_pos = 0;
+
+ /* Hard device TXQ write when complete burst is available */
+ if (pkts_len >= DEFAULT_BURST_SIZE) {
+ for (pos = 0; pos < pkts_len; )
+ pos += rte_eth_tx_burst(p->hard.port_id,
+ p->params.hard.tx_queue_id,
+ &pkts[pos],
+ (uint16_t)(pkts_len - pos));
+
+ pkts_len = 0;
+ flush_count = 0;
+ break;
+ }
+ }
+
+ if (flush_count >= FLUSH_COUNT_THRESHOLD) {
+ for (pos = 0; pos < pkts_len; )
+ pos += rte_eth_tx_burst(p->hard.port_id,
+ p->params.hard.tx_queue_id,
+ &pkts[pos],
+ (uint16_t)(pkts_len - pos));
+
+ pkts_len = 0;
+ flush_count = 0;
+ }
+
+ p->soft.def.txq_pos = txq_pos;
+ p->soft.def.pkts_len = pkts_len;
+ p->soft.def.flush_count = flush_count + 1;
+
+ return 0;
+}
+
+int
+rte_pmd_softnic_run(uint8_t port_id)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);
+#endif
+
+ return run_default(dev);
+}
+
+static struct ether_addr eth_addr = { .addr_bytes = {0} };
+
+static uint32_t
+eth_dev_speed_max_mbps(uint32_t speed_capa)
+{
+ uint32_t rate_mbps[32] = {
+ ETH_SPEED_NUM_NONE,
+ ETH_SPEED_NUM_10M,
+ ETH_SPEED_NUM_10M,
+ ETH_SPEED_NUM_100M,
+ ETH_SPEED_NUM_100M,
+ ETH_SPEED_NUM_1G,
+ ETH_SPEED_NUM_2_5G,
+ ETH_SPEED_NUM_5G,
+ ETH_SPEED_NUM_10G,
+ ETH_SPEED_NUM_20G,
+ ETH_SPEED_NUM_25G,
+ ETH_SPEED_NUM_40G,
+ ETH_SPEED_NUM_50G,
+ ETH_SPEED_NUM_56G,
+ ETH_SPEED_NUM_100G,
+ };
+
+ uint32_t pos = (speed_capa) ? (31 - __builtin_clz(speed_capa)) : 0;
+ return rate_mbps[pos];
+}
+
+static int
+default_init(struct pmd_internals *p,
+ struct pmd_params *params,
+ int numa_node)
+{
+ p->soft.def.pkts = rte_zmalloc_socket(params->soft.name,
+ 2 * DEFAULT_BURST_SIZE * sizeof(struct rte_mbuf *),
+ 0,
+ numa_node);
+
+ if (p->soft.def.pkts == NULL)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void
+default_free(struct pmd_internals *p)
+{
+ rte_free(p->soft.def.pkts);
+}
+
+static void *
+pmd_init(struct pmd_params *params, int numa_node)
+{
+ struct pmd_internals *p;
+ int status;
+
+ p = rte_zmalloc_socket(params->soft.name,
+ sizeof(struct pmd_internals),
+ 0,
+ numa_node);
+ if (p == NULL)
+ return NULL;
+
+ memcpy(&p->params, params, sizeof(p->params));
+ rte_eth_dev_get_port_by_name(params->hard.name, &p->hard.port_id);
+
+ /* Default */
+ status = default_init(p, params, numa_node);
+ if (status) {
+ free(p->params.hard.name);
+ rte_free(p);
+ return NULL;
+ }
+
+ return p;
+}
+
+static void
+pmd_free(struct pmd_internals *p)
+{
+ default_free(p);
+
+ free(p->params.hard.name);
+ rte_free(p);
+}
+
+static int
+pmd_ethdev_register(struct rte_vdev_device *vdev,
+ struct pmd_params *params,
+ void *dev_private)
+{
+ struct rte_eth_dev_info hard_info;
+ struct rte_eth_dev *soft_dev;
+ uint32_t hard_speed;
+ int numa_node;
+ uint8_t hard_port_id;
+
+ rte_eth_dev_get_port_by_name(params->hard.name, &hard_port_id);
+ rte_eth_dev_info_get(hard_port_id, &hard_info);
+ hard_speed = eth_dev_speed_max_mbps(hard_info.speed_capa);
+ numa_node = rte_eth_dev_socket_id(hard_port_id);
+
+ /* Ethdev entry allocation */
+ soft_dev = rte_eth_dev_allocate(params->soft.name);
+ if (!soft_dev)
+ return -ENOMEM;
+
+ /* dev */
+ soft_dev->rx_pkt_burst = (params->soft.intrusive) ?
+ NULL : /* set up later */
+ pmd_rx_pkt_burst;
+ soft_dev->tx_pkt_burst = pmd_tx_pkt_burst;
+ soft_dev->tx_pkt_prepare = NULL;
+ soft_dev->dev_ops = &pmd_ops;
+ soft_dev->device = &vdev->device;
+
+ /* dev->data */
+ soft_dev->data->dev_private = dev_private;
+ soft_dev->data->dev_link.link_speed = hard_speed;
+ soft_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ soft_dev->data->dev_link.link_autoneg = ETH_LINK_SPEED_FIXED;
+ soft_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+ soft_dev->data->mac_addrs = &eth_addr;
+ soft_dev->data->promiscuous = 1;
+ soft_dev->data->kdrv = RTE_KDRV_NONE;
+ soft_dev->data->numa_node = numa_node;
+ soft_dev->data->dev_flags = RTE_ETH_DEV_DETACHABLE;
+
+ return 0;
+}
+
+static int
+get_string(const char *key __rte_unused, const char *value, void *extra_args)
+{
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ *(char **)extra_args = strdup(value);
+
+ if (!*(char **)extra_args)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int
+get_uint32(const char *key __rte_unused, const char *value, void *extra_args)
+{
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ *(uint32_t *)extra_args = strtoull(value, NULL, 0);
+
+ return 0;
+}
+
+static int
+pmd_parse_args(struct pmd_params *p, const char *name, const char *params)
+{
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ kvlist = rte_kvargs_parse(params, pmd_valid_args);
+ if (kvlist == NULL)
+ return -EINVAL;
+
+ /* Set default values */
+ memset(p, 0, sizeof(*p));
+ p->soft.name = name;
+ p->soft.intrusive = INTRUSIVE;
+ p->hard.tx_queue_id = SOFTNIC_HARD_TX_QUEUE_ID;
+
+ /* HARD: name (mandatory) */
+ if (rte_kvargs_count(kvlist, PMD_PARAM_HARD_NAME) == 1) {
+ ret = rte_kvargs_process(kvlist, PMD_PARAM_HARD_NAME,
+ &get_string, &p->hard.name);
+ if (ret < 0)
+ goto out_free;
+ } else {
+ ret = -EINVAL;
+ goto out_free;
+ }
+
+ /* HARD: tx_queue_id (optional) */
+ if (rte_kvargs_count(kvlist, PMD_PARAM_HARD_TX_QUEUE_ID) == 1) {
+ ret = rte_kvargs_process(kvlist, PMD_PARAM_HARD_TX_QUEUE_ID,
+ &get_uint32, &p->hard.tx_queue_id);
+ if (ret < 0)
+ goto out_free;
+ }
+
+out_free:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static int
+pmd_probe(struct rte_vdev_device *vdev)
+{
+ struct pmd_params p;
+ const char *params;
+ int status;
+
+ struct rte_eth_dev_info hard_info;
+ uint8_t hard_port_id;
+ int numa_node;
+ void *dev_private;
+
+ RTE_LOG(INFO, PMD,
+ "Probing device \"%s\"\n",
+ rte_vdev_device_name(vdev));
+
+ /* Parse input arguments */
+ params = rte_vdev_device_args(vdev);
+ if (!params)
+ return -EINVAL;
+
+ status = pmd_parse_args(&p, rte_vdev_device_name(vdev), params);
+ if (status)
+ return status;
+
+ /* Check input arguments */
+ if (rte_eth_dev_get_port_by_name(p.hard.name, &hard_port_id))
+ return -EINVAL;
+
+ rte_eth_dev_info_get(hard_port_id, &hard_info);
+ numa_node = rte_eth_dev_socket_id(hard_port_id);
+
+ if (p.hard.tx_queue_id >= hard_info.max_tx_queues)
+ return -EINVAL;
+
+ /* Allocate and initialize soft ethdev private data */
+ dev_private = pmd_init(&p, numa_node);
+ if (dev_private == NULL)
+ return -ENOMEM;
+
+ /* Register soft ethdev */
+ RTE_LOG(INFO, PMD,
+ "Creating soft ethdev \"%s\" for hard ethdev \"%s\"\n",
+ p.soft.name, p.hard.name);
+
+ status = pmd_ethdev_register(vdev, &p, dev_private);
+ if (status) {
+ pmd_free(dev_private);
+ return status;
+ }
+
+ return 0;
+}
+
+static int
+pmd_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_eth_dev *dev = NULL;
+ struct pmd_internals *p;
+
+ if (!vdev)
+ return -EINVAL;
+
+ RTE_LOG(INFO, PMD, "Removing device \"%s\"\n",
+ rte_vdev_device_name(vdev));
+
+ /* Find the ethdev entry */
+ dev = rte_eth_dev_allocated(rte_vdev_device_name(vdev));
+ if (dev == NULL)
+ return -ENODEV;
+ p = dev->data->dev_private;
+
+ /* Free device data structures*/
+ pmd_free(p);
+ rte_free(dev->data);
+ rte_eth_dev_release_port(dev);
+
+ return 0;
+}
+
+static struct rte_vdev_driver pmd_softnic_drv = {
+ .probe = pmd_probe,
+ .remove = pmd_remove,
+};
+
+RTE_PMD_REGISTER_VDEV(net_softnic, pmd_softnic_drv);
+RTE_PMD_REGISTER_PARAM_STRING(net_softnic,
+ PMD_PARAM_HARD_NAME "=<string> "
+ PMD_PARAM_HARD_TX_QUEUE_ID "=<int>");
new file mode 100644
@@ -0,0 +1,67 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_ETH_SOFTNIC_H__
+#define __INCLUDE_RTE_ETH_SOFTNIC_H__
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifndef SOFTNIC_HARD_TX_QUEUE_ID
+#define SOFTNIC_HARD_TX_QUEUE_ID 0
+#endif
+
+/**
+ * Run the traffic management function on the softnic device.
+ *
+ * This function reads packets from the softnic input queues, inserts them
+ * into the QoS scheduler queues based on the mbuf sched field value, and
+ * transmits the scheduled packets out through the hard device interface.
+ *
+ * @param port_id
+ *   Port ID of the soft device.
+ * @return
+ *   Zero.
+ */
+
+int
+rte_pmd_softnic_run(uint8_t port_id);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_ETH_SOFTNIC_H__ */
new file mode 100644
@@ -0,0 +1,114 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_ETH_SOFTNIC_INTERNALS_H__
+#define __INCLUDE_RTE_ETH_SOFTNIC_INTERNALS_H__
+
+#include <stdint.h>
+
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+
+#include "rte_eth_softnic.h"
+
+#ifndef INTRUSIVE
+#define INTRUSIVE 0
+#endif
+
+struct pmd_params {
+ /** Parameters for the soft device (to be created) */
+ struct {
+ const char *name; /**< Name */
+ uint32_t flags; /**< Flags */
+
+ /** 0 = Access the hard device through the ethdev API only
+ * (potentially slower, but safer);
+ * 1 = Access to the hard device private data structures is
+ * allowed (potentially faster).
+ */
+ int intrusive;
+ } soft;
+
+ /** Parameters for the hard device (existing) */
+ struct {
+ char *name; /**< Name */
+ uint16_t tx_queue_id; /**< TX queue ID */
+ } hard;
+};
+
+/**
+ * Default Internals
+ */
+
+#ifndef DEFAULT_BURST_SIZE
+#define DEFAULT_BURST_SIZE 32
+#endif
+
+#ifndef FLUSH_COUNT_THRESHOLD
+#define FLUSH_COUNT_THRESHOLD (1 << 17)
+#endif
+
+struct default_internals {
+ struct rte_mbuf **pkts; /**< Buffer of packets pending transmission */
+ uint32_t pkts_len; /**< Number of packets currently in the buffer */
+ uint32_t txq_pos; /**< Current position in the soft device TX queues */
+ uint32_t flush_count; /**< Iterations since the last buffer flush */
+};
+
+/**
+ * PMD Internals
+ */
+struct pmd_internals {
+ /** Params */
+ struct pmd_params params;
+
+ /** Soft device */
+ struct {
+ struct default_internals def; /**< Default */
+ } soft;
+
+ /** Hard device */
+ struct {
+ uint8_t port_id;
+ } hard;
+};
+
+struct pmd_rx_queue {
+ /** Hard device */
+ struct {
+ uint8_t port_id;
+ uint16_t rx_queue_id;
+ } hard;
+};
+
+#endif /* __INCLUDE_RTE_ETH_SOFTNIC_INTERNALS_H__ */
new file mode 100644
@@ -0,0 +1,7 @@
+DPDK_17.11 {
+ global:
+
+ rte_pmd_softnic_run;
+
+ local: *;
+};
@@ -67,7 +67,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += -lrte_distributor
_LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG) += -lrte_ip_frag
_LDLIBS-$(CONFIG_RTE_LIBRTE_GRO) += -lrte_gro
_LDLIBS-$(CONFIG_RTE_LIBRTE_METER) += -lrte_meter
-_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
_LDLIBS-$(CONFIG_RTE_LIBRTE_LPM) += -lrte_lpm
# librte_acl needs --whole-archive because of weak functions
_LDLIBS-$(CONFIG_RTE_LIBRTE_ACL) += --whole-archive
@@ -99,6 +98,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING) += -lrte_ring
_LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal
_LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline
_LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder
+_LDLIBS-$(CONFIG_RTE_LIBRTE_SCHED) += -lrte_sched
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_KNI) += -lrte_kni
@@ -140,6 +140,9 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
+ifeq ($(CONFIG_RTE_LIBRTE_SCHED),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SOFTNIC) += -lrte_pmd_softnic
+endif
_LDLIBS-$(CONFIG_RTE_LIBRTE_SFC_EFX_PMD) += -lrte_pmd_sfc_efx
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += -lrte_pmd_szedata2 -lsze2
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_TAP) += -lrte_pmd_tap