From patchwork Mon May 4 08:53:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Andrzej Ostruszka [C]" X-Patchwork-Id: 69690 X-Patchwork-Delegate: thomas@monjalon.net
From: Andrzej Ostruszka To: , Thomas Monjalon Date: Mon, 4 May 2020 10:53:12 +0200 Message-ID: <20200504085315.7296-2-aostruszka@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200504085315.7296-1-aostruszka@marvell.com> References: <20200306164104.15528-1-aostruszka@marvell.com> <20200504085315.7296-1-aostruszka@marvell.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 1/4] lib: introduce IF Proxy library

This library allows designating ports visible to the system (such as Tun/Tap or KNI) as port representors serving as proxies for other DPDK ports. When such a proxy is configured, the library initially queries the network configuration from the system and later monitors its changes. The information gathered is passed to the application either via a set of user-registered callbacks or as events added to a configured notification queue (or a combination of these two mechanisms). This way the user can use normal network utilities (such as those from the iproute2 suite) to configure DPDK ports.
Signed-off-by: Andrzej Ostruszka --- MAINTAINERS | 3 + config/common_base | 5 + config/common_linux | 1 + lib/Makefile | 2 + lib/librte_eal/include/rte_eal_interrupts.h | 2 + lib/librte_eal/linux/eal_interrupts.c | 14 +- lib/librte_if_proxy/Makefile | 29 + lib/librte_if_proxy/if_proxy_common.c | 564 ++++++++++++++++++ lib/librte_if_proxy/if_proxy_priv.h | 97 +++ lib/librte_if_proxy/linux/Makefile | 4 + lib/librte_if_proxy/linux/if_proxy.c | 563 ++++++++++++++++++ lib/librte_if_proxy/meson.build | 19 + lib/librte_if_proxy/rte_if_proxy.h | 585 +++++++++++++++++++ lib/librte_if_proxy/rte_if_proxy_version.map | 20 + lib/meson.build | 2 +- 15 files changed, 1905 insertions(+), 5 deletions(-) create mode 100644 lib/librte_if_proxy/Makefile create mode 100644 lib/librte_if_proxy/if_proxy_common.c create mode 100644 lib/librte_if_proxy/if_proxy_priv.h create mode 100644 lib/librte_if_proxy/linux/Makefile create mode 100644 lib/librte_if_proxy/linux/if_proxy.c create mode 100644 lib/librte_if_proxy/meson.build create mode 100644 lib/librte_if_proxy/rte_if_proxy.h create mode 100644 lib/librte_if_proxy/rte_if_proxy_version.map diff --git a/MAINTAINERS b/MAINTAINERS index e05c80504..1013745ce 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1472,6 +1472,9 @@ F: examples/bpf/ F: app/test/test_bpf.c F: doc/guides/prog_guide/bpf_lib.rst +IF Proxy - EXPERIMENTAL +M: Andrzej Ostruszka +F: lib/librte_if_proxy/ Test Applications ----------------- diff --git a/config/common_base b/config/common_base index 14000ba07..95ca8dbf6 100644 --- a/config/common_base +++ b/config/common_base @@ -1087,6 +1087,11 @@ CONFIG_RTE_LIBRTE_BPF_ELF=n # CONFIG_RTE_LIBRTE_IPSEC=y +# +# Compile librte_if_proxy +# +CONFIG_RTE_LIBRTE_IF_PROXY=n + # # Compile the test application # diff --git a/config/common_linux b/config/common_linux index 816810671..1244eb0ae 100644 --- a/config/common_linux +++ b/config/common_linux @@ -16,6 +16,7 @@ CONFIG_RTE_LIBRTE_VHOST_NUMA=y CONFIG_RTE_LIBRTE_VHOST_POSTCOPY=n 
CONFIG_RTE_LIBRTE_PMD_VHOST=y CONFIG_RTE_LIBRTE_IFC_PMD=y +CONFIG_RTE_LIBRTE_IF_PROXY=y CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y CONFIG_RTE_LIBRTE_PMD_MEMIF=y CONFIG_RTE_LIBRTE_PMD_SOFTNIC=y diff --git a/lib/Makefile b/lib/Makefile index d0ec3919b..1e7d78183 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -118,6 +118,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev DIRS-$(CONFIG_RTE_LIBRTE_RCU) += librte_rcu DEPDIRS-librte_rcu := librte_eal librte_ring +DIRS-$(CONFIG_RTE_LIBRTE_IF_PROXY) += librte_if_proxy +DEPDIRS-librte_if_proxy := librte_eal librte_ethdev ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y) DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni diff --git a/lib/librte_eal/include/rte_eal_interrupts.h b/lib/librte_eal/include/rte_eal_interrupts.h index 773a34a42..296a3853d 100644 --- a/lib/librte_eal/include/rte_eal_interrupts.h +++ b/lib/librte_eal/include/rte_eal_interrupts.h @@ -36,6 +36,8 @@ enum rte_intr_handle_type { RTE_INTR_HANDLE_VDEV, /**< virtual device */ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */ + RTE_INTR_HANDLE_NETLINK, /**< netlink notification handle */ + RTE_INTR_HANDLE_MAX /**< count of elements */ }; diff --git a/lib/librte_eal/linux/eal_interrupts.c b/lib/librte_eal/linux/eal_interrupts.c index 16e7a7d51..91ddafc59 100644 --- a/lib/librte_eal/linux/eal_interrupts.c +++ b/lib/librte_eal/linux/eal_interrupts.c @@ -691,6 +691,9 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* not used at this moment */ case RTE_INTR_HANDLE_ALARM: +#if RTE_LIBRTE_IF_PROXY + case RTE_INTR_HANDLE_NETLINK: +#endif rc = -1; break; #ifdef VFIO_PRESENT @@ -818,6 +821,9 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* not used at this moment */ case RTE_INTR_HANDLE_ALARM: +#if RTE_LIBRTE_IF_PROXY + case RTE_INTR_HANDLE_NETLINK: +#endif rc = -1; break; #ifdef VFIO_PRESENT @@ -915,12 +921,12 @@ 
eal_intr_process_interrupts(struct epoll_event *events, int nfds) break; #endif #endif - case RTE_INTR_HANDLE_VDEV: case RTE_INTR_HANDLE_EXT: - bytes_read = 0; - call = true; - break; + case RTE_INTR_HANDLE_VDEV: case RTE_INTR_HANDLE_DEV_EVENT: +#if RTE_LIBRTE_IF_PROXY + case RTE_INTR_HANDLE_NETLINK: +#endif bytes_read = 0; call = true; break; diff --git a/lib/librte_if_proxy/Makefile b/lib/librte_if_proxy/Makefile new file mode 100644 index 000000000..43cb702a2 --- /dev/null +++ b/lib/librte_if_proxy/Makefile @@ -0,0 +1,29 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2020 Marvell International Ltd. + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_if_proxy.a + +CFLAGS += -DALLOW_EXPERIMENTAL_API +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) +LDLIBS += -lrte_eal -lrte_ethdev + +EXPORT_MAP := rte_if_proxy_version.map + +LIBABIVER := 1 + +# all sources are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_IF_PROXY) := if_proxy_common.c + +SYSDIR := $(patsubst "%app",%,$(CONFIG_RTE_EXEC_ENV)) +include $(SRCDIR)/$(SYSDIR)/Makefile + +SRCS-$(CONFIG_RTE_LIBRTE_IF_PROXY) += $(addprefix $(SYSDIR)/,$(SRCS)) + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_IF_PROXY)-include := rte_if_proxy.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_if_proxy/if_proxy_common.c b/lib/librte_if_proxy/if_proxy_common.c new file mode 100644 index 000000000..6f72511f4 --- /dev/null +++ b/lib/librte_if_proxy/if_proxy_common.c @@ -0,0 +1,564 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell International Ltd. + */ + +#include "if_proxy_priv.h" +#include + +/* Definitions of data mentioned in if_proxy_priv.h and local ones. */ +int ifpx_log_type; + +/* Table keeping the mapping between ports and their proxies.
*/ +static +uint16_t ifpx_ports[RTE_MAX_ETHPORTS]; + +rte_spinlock_t ifpx_lock = RTE_SPINLOCK_INITIALIZER; + +struct ifpx_proxies_head ifpx_proxies = TAILQ_HEAD_INITIALIZER(ifpx_proxies); + +struct ifpx_queue_node { + TAILQ_ENTRY(ifpx_queue_node) elem; + uint16_t state; + struct rte_ring *r; +}; +static +TAILQ_HEAD(ifpx_queues_head, ifpx_queue_node) ifpx_queues = + TAILQ_HEAD_INITIALIZER(ifpx_queues); + +/* All callbacks have a similar signature (taking a pointer to some event) so + * we'll use this f_ptr to typecast and invoke them in a generic way. There is + * one exception though - the notification about the completed initial + * configuration - and it is handled separately. + */ +union ifpx_cb_ptr { + int (*f_ptr)(void *ev); /* type for normal event notification */ + union rte_ifpx_cb_ptr cb; +} ifpx_callbacks[RTE_IFPX_NUM_EVENTS]; + +uint64_t rte_ifpx_events_available(void) +{ + if (ifpx_platform.events) + return ifpx_platform.events(); + + /* If the callback is not provided then all events are supported.
*/ + return (1ULL << RTE_IFPX_NUM_EVENTS) - 1; +} + +uint16_t rte_ifpx_proxy_create(enum rte_ifpx_proxy_type type) +{ + char devargs[16] = { '\0' }; + int dev_cnt = 0, nlen; + uint16_t port_id; + + switch (type) { + case RTE_IFPX_DEFAULT: + case RTE_IFPX_TAP: + nlen = strlcpy(devargs, "net_tap", sizeof(devargs)); + break; + case RTE_IFPX_KNI: + nlen = strlcpy(devargs, "net_kni", sizeof(devargs)); + break; + default: + IFPX_LOG(ERR, "Unknown proxy type: %d", type); + return RTE_MAX_ETHPORTS; + } + + RTE_ETH_FOREACH_DEV(port_id) { + if (strcmp(rte_eth_devices[port_id].device->driver->name, + devargs) == 0) + ++dev_cnt; + } + snprintf(devargs+nlen, sizeof(devargs)-nlen, "%d", dev_cnt); + + return rte_ifpx_proxy_create_by_devarg(devargs); +} + +uint16_t rte_ifpx_proxy_create_by_devarg(const char *devarg) +{ + uint16_t port_id = RTE_MAX_ETHPORTS; + struct rte_dev_iterator iter; + + if (rte_dev_probe(devarg) < 0) { + IFPX_LOG(ERR, "Failed to create proxy port %s\n", devarg); + return RTE_MAX_ETHPORTS; + } + + if (rte_eth_iterator_init(&iter, devarg) == 0) { + port_id = rte_eth_iterator_next(&iter); + if (port_id != RTE_MAX_ETHPORTS) + rte_eth_iterator_cleanup(&iter); + } + + return port_id; +} + +int ifpx_proxy_destroy(struct ifpx_proxy_node *px) +{ + unsigned int i; + uint16_t proxy_id = px->proxy_id; + + /* This function is expected to be called with a lock held. */ + RTE_ASSERT(rte_spinlock_trylock(&ifpx_lock) == 0); + + if (px->state & IN_USE) { + px->state |= DEL_PENDING; + return 0; + } + + TAILQ_REMOVE(&ifpx_proxies, px, elem); + free(px); + + /* Clear any bindings for this proxy. 
*/ + for (i = 0; i < RTE_DIM(ifpx_ports); ++i) { + if (ifpx_ports[i] == proxy_id) + ifpx_ports[i] = RTE_MAX_ETHPORTS; + } + + return rte_dev_remove(rte_eth_devices[proxy_id].device); +} + +int rte_ifpx_proxy_destroy(uint16_t proxy_id) +{ + struct ifpx_proxy_node *px; + int ec; + + rte_spinlock_lock(&ifpx_lock); + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->proxy_id == proxy_id) + break; + } + if (!px) { + ec = -EINVAL; + goto exit; + } + + ec = ifpx_proxy_destroy(px); +exit: + rte_spinlock_unlock(&ifpx_lock); + return ec; +} + +int rte_ifpx_queue_add(struct rte_ring *r) +{ + struct ifpx_queue_node *node; + int ec = 0; + + if (!r) + return -EINVAL; + + rte_spinlock_lock(&ifpx_lock); + TAILQ_FOREACH(node, &ifpx_queues, elem) { + if (node->r == r) { + ec = -EEXIST; + goto exit; + } + } + + node = malloc(sizeof(*node)); + if (!node) { + ec = -ENOMEM; + goto exit; + } + + node->r = r; + TAILQ_INSERT_TAIL(&ifpx_queues, node, elem); +exit: + rte_spinlock_unlock(&ifpx_lock); + + return ec; +} + +int rte_ifpx_queue_remove(struct rte_ring *r) +{ + struct ifpx_queue_node *node, *next; + int ec = -EINVAL; + + if (!r) + return ec; + + rte_spinlock_lock(&ifpx_lock); + for (node = TAILQ_FIRST(&ifpx_queues); node; node = next) { + next = TAILQ_NEXT(node, elem); + if (node->r != r) + continue; + TAILQ_REMOVE(&ifpx_queues, node, elem); + free(node); + ec = 0; + break; + } + rte_spinlock_unlock(&ifpx_lock); + + return ec; +} + +int rte_ifpx_port_bind(uint16_t port_id, uint16_t proxy_id) +{ + struct rte_eth_dev_info proxy_eth_info; + struct ifpx_proxy_node *px; + int ec; + + rte_spinlock_lock(&ifpx_lock); + + if (port_id >= RTE_MAX_ETHPORTS || proxy_id >= RTE_MAX_ETHPORTS || + /* port is a proxy */ + ifpx_ports[port_id] == port_id) { + IFPX_LOG(ERR, "Invalid port_id: %d", port_id); + ec = -EINVAL; + goto error; + } + + /* Do automatic rebinding but issue a warning since this is not + * considered to be a valid behaviour. 
+ */ + if (ifpx_ports[port_id] != RTE_MAX_ETHPORTS) { + IFPX_LOG(WARNING, "Port already bound: %d -> %d", port_id, + ifpx_ports[port_id]); + } + + /* Search for existing proxy - if not found add one to the list. */ + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->proxy_id == proxy_id) + break; + } + if (!px) { + ec = rte_eth_dev_info_get(proxy_id, &proxy_eth_info); + if (ec < 0 || proxy_eth_info.if_index == 0) { + IFPX_LOG(ERR, "Invalid proxy: %d", proxy_id); + if (ec >= 0) + ec = -EINVAL; + goto error; + } + px = malloc(sizeof(*px)); + if (!px) { + ec = -ENOMEM; + goto error; + } + px->proxy_id = proxy_id; + px->info.if_index = proxy_eth_info.if_index; + rte_eth_dev_get_mtu(proxy_id, &px->info.mtu); + rte_eth_macaddr_get(proxy_id, &px->info.mac); + memset(px->info.if_name, 0, sizeof(px->info.if_name)); + TAILQ_INSERT_TAIL(&ifpx_proxies, px, elem); + } + ifpx_ports[port_id] = proxy_id; + rte_spinlock_unlock(&ifpx_lock); + + /* Add the proxy MAC to the port - since the port will often just + * forward packets from the proxy/system, they will be sent with the + * proxy MAC as src. In order to pass communication in the other + * direction we should also accept packets with the proxy MAC as dst. + */ + rte_eth_dev_mac_addr_add(port_id, &px->info.mac, 0); + + if (ifpx_platform.get_info) + ifpx_platform.get_info(px->info.if_index); + + return 0; + +error: + rte_spinlock_unlock(&ifpx_lock); + return ec; +} + +int rte_ifpx_port_unbind(uint16_t port_id) +{ + unsigned int i, cnt; + uint16_t proxy_id; + struct ifpx_proxy_node *px; + int ec = 0; + + rte_spinlock_lock(&ifpx_lock); + if (port_id >= RTE_MAX_ETHPORTS || + ifpx_ports[port_id] == RTE_MAX_ETHPORTS || + /* port is a proxy */ + ifpx_ports[port_id] == port_id) { + ec = -EINVAL; + goto exit; + } + + proxy_id = ifpx_ports[port_id]; + ifpx_ports[port_id] = RTE_MAX_ETHPORTS; + + for (i = 0, cnt = 0; i < RTE_DIM(ifpx_ports); ++i) { + if (ifpx_ports[i] == proxy_id) + ++cnt; + } + + /* If there is no port bound to this proxy then remove it.
*/ + if (cnt == 0) { + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->proxy_id == proxy_id) + break; + } + RTE_ASSERT(px); + ec = ifpx_proxy_destroy(px); + } +exit: + rte_spinlock_unlock(&ifpx_lock); + return ec; +} + +int rte_ifpx_callbacks_register(unsigned int len, + const struct rte_ifpx_callback cbs[]) +{ + unsigned int i; + + if (!cbs || len == 0) + return -EINVAL; + + rte_spinlock_lock(&ifpx_lock); + + for (i = 0; i < len; ++i) { + if (cbs[i].type < 0 || cbs[i].type > RTE_IFPX_LAST_EVENT) { + IFPX_LOG(WARNING, "Invalid event type: %d", + cbs[i].type); + continue; + } + ifpx_callbacks[cbs[i].type].cb = cbs[i].callback; + } + + rte_spinlock_unlock(&ifpx_lock); + + return 0; +} + +void rte_ifpx_callbacks_unregister_all(void) +{ + rte_spinlock_lock(&ifpx_lock); + memset(&ifpx_callbacks, 0, sizeof(ifpx_callbacks)); + rte_spinlock_unlock(&ifpx_lock); +} + +int rte_ifpx_callbacks_unregister(enum rte_ifpx_event_type ev) +{ + if (ev < 0 || ev > RTE_IFPX_CFG_DONE) + return -EINVAL; + + rte_spinlock_lock(&ifpx_lock); + ifpx_callbacks[ev].f_ptr = NULL; + rte_spinlock_unlock(&ifpx_lock); + + return 0; +} + +uint16_t rte_ifpx_proxy_get(uint16_t port_id) +{ + uint16_t p = RTE_MAX_ETHPORTS; + + if (port_id < RTE_MAX_ETHPORTS) { + rte_spinlock_lock(&ifpx_lock); + p = ifpx_ports[port_id]; + rte_spinlock_unlock(&ifpx_lock); + } + + return p; +} + +unsigned int rte_ifpx_port_get(uint16_t proxy_id, + uint16_t *ports, unsigned int num) +{ + unsigned int p, cnt = 0; + + rte_spinlock_lock(&ifpx_lock); + for (p = 0; p < RTE_DIM(ifpx_ports); ++p) { + if (ifpx_ports[p] == proxy_id && ifpx_ports[p] != p) { + ++cnt; + if (ports && num > 0) { + *ports++ = p; + --num; + } + } + } + rte_spinlock_unlock(&ifpx_lock); + + return cnt; +} + +const struct rte_ifpx_info *rte_ifpx_info_get(uint16_t port_id) +{ + struct ifpx_proxy_node *px; + + rte_spinlock_lock(&ifpx_lock); + + if (port_id >= RTE_MAX_ETHPORTS || + ifpx_ports[port_id] == RTE_MAX_ETHPORTS) { + rte_spinlock_unlock(&ifpx_lock); + return 
NULL; + } + + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->proxy_id == ifpx_ports[port_id]) + break; + } + rte_spinlock_unlock(&ifpx_lock); + RTE_ASSERT(px && "Internal IF Proxy library error"); + + return &px->info; +} + +static +void queue_event(const struct rte_ifpx_event *ev, struct rte_ring *r) +{ + struct rte_ifpx_event *e = malloc(sizeof(*e)); + + if (!e) { + IFPX_LOG(ERR, "Failed to allocate event!"); + return; + } + RTE_ASSERT(r); + + *e = *ev; + /* Free the copy if the ring is full - otherwise it would leak. */ + if (rte_ring_sp_enqueue(r, e) != 0) { + IFPX_LOG(ERR, "Failed to enqueue event!"); + free(e); + } +} + +void ifpx_notify_event(struct rte_ifpx_event *ev, struct ifpx_proxy_node *px) +{ + struct ifpx_queue_node *q; + int done = 0; + uint16_t p, proxy_id; + + if (px) { + if (px->state & DEL_PENDING) + return; + proxy_id = px->proxy_id; + RTE_ASSERT(proxy_id != RTE_MAX_ETHPORTS); + px->state |= IN_USE; + } else + proxy_id = RTE_MAX_ETHPORTS; + + RTE_ASSERT(ev && ev->type >= 0 && ev->type <= RTE_IFPX_LAST_EVENT); + /* This function is expected to be called with a lock held. */ + RTE_ASSERT(rte_spinlock_trylock(&ifpx_lock) == 0); + + if (ifpx_callbacks[ev->type].f_ptr) { + union ifpx_cb_ptr fun = ifpx_callbacks[ev->type]; + + /* Below we drop the lock for the time of callback call to allow + * for calling of IF Proxy API. + */ + if (px) { + for (p = 0; p < RTE_DIM(ifpx_ports); ++p) { + if (ifpx_ports[p] != proxy_id || + ifpx_ports[p] == p) + continue; + ev->data.port_id = p; + rte_spinlock_unlock(&ifpx_lock); + done = fun.f_ptr(&ev->data) || done; + rte_spinlock_lock(&ifpx_lock); + } + } else { + RTE_ASSERT(ev->type == RTE_IFPX_CFG_DONE); + rte_spinlock_unlock(&ifpx_lock); + done = fun.cb.cfg_done(); + rte_spinlock_lock(&ifpx_lock); + } + } + if (done) + goto exit; + + /* Event not "consumed" yet so try to notify via queues.
*/ + TAILQ_FOREACH(q, &ifpx_queues, elem) { + if (px) { + for (p = 0; p < RTE_DIM(ifpx_ports); ++p) { + if (ifpx_ports[p] != proxy_id || + ifpx_ports[p] == p) + continue; + /* Set the port_id - the remaining params should + * be filled before calling this function. + */ + ev->data.port_id = p; + queue_event(ev, q->r); + } + } else + queue_event(ev, q->r); + } +exit: + if (px) + px->state &= ~IN_USE; +} + +void ifpx_cleanup_proxies(void) +{ + struct ifpx_proxy_node *px, *next; + for (px = TAILQ_FIRST(&ifpx_proxies); px; px = next) { + next = TAILQ_NEXT(px, elem); + if (px->state & DEL_PENDING) + ifpx_proxy_destroy(px); + } +} + +int rte_ifpx_listen(void) +{ + int ec; + + if (!ifpx_platform.listen) + return -ENOTSUP; + + ec = ifpx_platform.listen(); + if (ec == 0 && ifpx_platform.get_info) + ifpx_platform.get_info(0); + + return ec; +} + +int rte_ifpx_close(void) +{ + struct ifpx_proxy_node *px; + struct ifpx_queue_node *q; + unsigned int p; + int ec = 0; + + rte_spinlock_lock(&ifpx_lock); + + if (ifpx_platform.close) { + ec = ifpx_platform.close(); + if (ec != 0) + IFPX_LOG(ERR, "Platform 'close' callback failed."); + } + + /* Remove queues. */ + while (!TAILQ_EMPTY(&ifpx_queues)) { + q = TAILQ_FIRST(&ifpx_queues); + TAILQ_REMOVE(&ifpx_queues, q, elem); + free(q); + } + + /* Clear callbacks. */ + memset(&ifpx_callbacks, 0, sizeof(ifpx_callbacks)); + + /* Unbind ports. */ + for (p = 0; p < RTE_DIM(ifpx_ports); ++p) { + if (ifpx_ports[p] == RTE_MAX_ETHPORTS) + continue; + /* We don't need to call rte_ifpx_port_unbind() here since we + * clear proxies below anyway, just clearing the mapping is + * enough (and besides it would deadlock :)). + */ + ifpx_ports[p] = RTE_MAX_ETHPORTS; + } + + /* Clear proxies.
*/ + while (!TAILQ_EMPTY(&ifpx_proxies)) { + px = TAILQ_FIRST(&ifpx_proxies); + TAILQ_REMOVE(&ifpx_proxies, px, elem); + free(px); + } + + rte_spinlock_unlock(&ifpx_lock); + + return ec; +} + +RTE_INIT(if_proxy_init) +{ + unsigned int i; + for (i = 0; i < RTE_DIM(ifpx_ports); ++i) + ifpx_ports[i] = RTE_MAX_ETHPORTS; + + ifpx_log_type = rte_log_register("lib.if_proxy"); + if (ifpx_log_type >= 0) + rte_log_set_level(ifpx_log_type, RTE_LOG_WARNING); + + if (ifpx_platform.init) + ifpx_platform.init(); +} diff --git a/lib/librte_if_proxy/if_proxy_priv.h b/lib/librte_if_proxy/if_proxy_priv.h new file mode 100644 index 000000000..7691494be --- /dev/null +++ b/lib/librte_if_proxy/if_proxy_priv.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell International Ltd. + */ +#ifndef _IF_PROXY_PRIV_H_ +#define _IF_PROXY_PRIV_H_ + +#include "rte_if_proxy.h" +#include + +#define RTE_IFPX_LAST_EVENT RTE_IFPX_CFG_DONE +#define RTE_IFPX_NUM_EVENTS (RTE_IFPX_LAST_EVENT+1) +#define RTE_IFPX_EVENT_INVALID RTE_IFPX_NUM_EVENTS + +extern int ifpx_log_type; +#define IFPX_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, ifpx_log_type, "%s(): " fmt "\n", \ + __func__, ##args) + +/* Since this library is really a slow/config path we guard all internal data + * with a lock - and only one for all of them should be enough. + */ +extern rte_spinlock_t ifpx_lock; + +enum ifpx_node_status { + IN_USE = 1U << 0, + DEL_PENDING = 1U << 1, +}; + +/* List of configured proxies */ +struct ifpx_proxy_node { + TAILQ_ENTRY(ifpx_proxy_node) elem; + uint16_t proxy_id; + uint16_t state; + struct rte_ifpx_info info; +}; +extern +TAILQ_HEAD(ifpx_proxies_head, ifpx_proxy_node) ifpx_proxies; + +/* This function should be called by the implementation whenever it notices + * change in the network configuration. 
The arguments are: + * - ev : pointer to the filled event data structure (all fields are expected + * to be filled, with the exception of 'port_id' for all proxy/port related + * events: this function clones the event notification for each bound port + * and fills 'port_id' appropriately). + * - px : proxy node when the given event is proxy/port related, otherwise NULL + */ +void ifpx_notify_event(struct rte_ifpx_event *ev, struct ifpx_proxy_node *px); + +/* This function should be called by the implementation whenever it is done with + * a notification about a network configuration change. It is only really + * needed for the callback-based API, since from a callback the user might + * attempt to remove proxies. Only the implementation really knows when the + * notification for a given proxy is finished, so it is its duty to call this + * function to clean up all proxies that have been marked for deletion. + */ +void ifpx_cleanup_proxies(void); + +/* This is the internal function removing the proxy from the list. It is + * related to the notification function above and intended to be used by the + * platform implementation for the case of the callback-based API. + * During notification via callback the internal lock is released so that the + * operation does not deadlock on an attempt to take the lock. However the + * modification (destruction) is not really performed - instead the + * callbacks/proxies are marked as "to be deleted". + * Handling of callbacks that are "to be deleted" is done by the + * ifpx_notify_event() function itself, however it cannot delete the proxies (in + * particular the proxy passed as an argument) since they might still be + * referenced by the calling function. So it is the responsibility of the + * platform implementation to check, after calling the notification function, + * whether there are any proxies to be removed, and to use ifpx_proxy_destroy() + * to actually release them.
+ */ +int ifpx_proxy_destroy(struct ifpx_proxy_node *px); + +/* Every implementation should provide definition of this structure: + * - init : called during library initialization (NULL when not needed) + * - events : this should return bitmask of supported events (can be NULL if all + * defined events are supported by the implementation) + * - listen : this function should start service listening to the network + * configuration events/changes, + * - close : this function should close the service started by listen() + * - get_info : this function should query system for current configuration of + * interface with index 'if_index'. After successful initialization of + * listening service this function is called with 0 as an argument. In that + * case configuration of all ports should be obtained - and when this + * procedure completes a RTE_IFPX_CFG_DONE event should be signaled via + * ifpx_notify_event(). + */ +extern +struct ifpx_platform_callbacks { + void (*init)(void); + uint64_t (*events)(void); + int (*listen)(void); + int (*close)(void); + void (*get_info)(int if_index); +} ifpx_platform; + +#endif /* _IF_PROXY_PRIV_H_ */ diff --git a/lib/librte_if_proxy/linux/Makefile b/lib/librte_if_proxy/linux/Makefile new file mode 100644 index 000000000..275b7e1e3 --- /dev/null +++ b/lib/librte_if_proxy/linux/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2020 Marvell International Ltd. + +SRCS += if_proxy.c diff --git a/lib/librte_if_proxy/linux/if_proxy.c b/lib/librte_if_proxy/linux/if_proxy.c new file mode 100644 index 000000000..618631b01 --- /dev/null +++ b/lib/librte_if_proxy/linux/if_proxy.c @@ -0,0 +1,563 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell International Ltd. 
+ */ +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +static +struct rte_intr_handle ifpx_irq = { + .type = RTE_INTR_HANDLE_NETLINK, + .fd = -1, +}; + +static +unsigned int ifpx_pid; + +static +int request_info(int type, int index) +{ + static rte_spinlock_t send_lock = RTE_SPINLOCK_INITIALIZER; + struct info_get { + struct nlmsghdr h; + union { + struct ifinfomsg ifm; + struct ifaddrmsg ifa; + struct rtmsg rtm; + struct ndmsg ndm; + } __rte_aligned(NLMSG_ALIGNTO); + } info_req; + int ret; + + memset(&info_req, 0, sizeof(info_req)); + /* First byte of these messages is family, so just make sure that this + * memset is enough to get all families. + */ + RTE_ASSERT(AF_UNSPEC == 0); + + info_req.h.nlmsg_pid = ifpx_pid; + info_req.h.nlmsg_type = type; + info_req.h.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP; + info_req.h.nlmsg_len = offsetof(struct info_get, ifm); + + switch (type) { + case RTM_GETLINK: + info_req.h.nlmsg_len += sizeof(info_req.ifm); + info_req.ifm.ifi_index = index; + break; + case RTM_GETADDR: + info_req.h.nlmsg_len += sizeof(info_req.ifa); + info_req.ifa.ifa_index = index; + break; + case RTM_GETROUTE: + info_req.h.nlmsg_len += sizeof(info_req.rtm); + break; + case RTM_GETNEIGH: + info_req.h.nlmsg_len += sizeof(info_req.ndm); + break; + default: + IFPX_LOG(WARNING, "Unhandled message type: %d", type); + return -EINVAL; + } + /* Store request type (and if it is global or link specific) in 'seq'. + * Later it is used during handling of reply to continue requesting of + * information dump from system - if needed. 
+ */ + info_req.h.nlmsg_seq = index << 8 | type; + + IFPX_LOG(DEBUG, "\tRequesting msg %d for: %u", type, index); + + rte_spinlock_lock(&send_lock); +retry: + ret = send(ifpx_irq.fd, &info_req, info_req.h.nlmsg_len, 0); + if (ret < 0) { + if (errno == EINTR) { + IFPX_LOG(DEBUG, "send() interrupted"); + goto retry; + } + IFPX_LOG(ERR, "Failed to send netlink msg: %d", errno); + rte_errno = errno; + } + rte_spinlock_unlock(&send_lock); + + return ret; +} + +static +void handle_link(const struct nlmsghdr *h) +{ + const struct ifinfomsg *ifi = NLMSG_DATA(h); + int alen = h->nlmsg_len - NLMSG_LENGTH(sizeof(*ifi)); + const struct rtattr *attrs[IFLA_MAX+1] = { NULL }; + const struct rtattr *attr; + struct ifpx_proxy_node *px; + struct rte_ifpx_event ev; + + IFPX_LOG(DEBUG, "\tLink action (%u): %u, 0x%x/0x%x (flags/changed)", + ifi->ifi_index, h->nlmsg_type, ifi->ifi_flags, + ifi->ifi_change); + + rte_spinlock_lock(&ifpx_lock); + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->info.if_index == (unsigned int)ifi->ifi_index) + break; + } + + /* Drop messages that are not associated with any proxy */ + if (!px) + goto exit; + /* When message is a reply to request for specific interface then keep + * it only when it contains info for this interface. 
+ */ + if (h->nlmsg_pid == ifpx_pid && h->nlmsg_seq >> 8 && + (h->nlmsg_seq >> 8) != (unsigned int)ifi->ifi_index) + goto exit; + + for (attr = IFLA_RTA(ifi); RTA_OK(attr, alen); + attr = RTA_NEXT(attr, alen)) { + if (attr->rta_type > IFLA_MAX) + continue; + attrs[attr->rta_type] = attr; + } + + if (ifi->ifi_change & IFF_UP) { + ev.type = RTE_IFPX_LINK_CHANGE; + ev.link_change.is_up = ifi->ifi_flags & IFF_UP; + ifpx_notify_event(&ev, px); + } + if (attrs[IFLA_MTU]) { + uint16_t mtu = *(const int *)RTA_DATA(attrs[IFLA_MTU]); + if (mtu != px->info.mtu) { + px->info.mtu = mtu; + ev.type = RTE_IFPX_MTU_CHANGE; + ev.mtu_change.mtu = mtu; + ifpx_notify_event(&ev, px); + } + } + if (attrs[IFLA_ADDRESS]) { + const struct rte_ether_addr *mac = + RTA_DATA(attrs[IFLA_ADDRESS]); + + RTE_ASSERT(RTA_PAYLOAD(attrs[IFLA_ADDRESS]) == + RTE_ETHER_ADDR_LEN); + if (memcmp(mac, &px->info.mac, RTE_ETHER_ADDR_LEN) != 0) { + rte_ether_addr_copy(mac, &px->info.mac); + ev.type = RTE_IFPX_MAC_CHANGE; + rte_ether_addr_copy(mac, &ev.mac_change.mac); + ifpx_notify_event(&ev, px); + } + } + if (h->nlmsg_pid == ifpx_pid) { + RTE_ASSERT((h->nlmsg_seq & 0xFF) == RTM_GETLINK); + /* If this is reply for specific link request (not initial + * global dump) then follow up with address request, otherwise + * just store the interface name. 
+ */ + if (h->nlmsg_seq >> 8) + request_info(RTM_GETADDR, ifi->ifi_index); + else if (!px->info.if_name[0] && attrs[IFLA_IFNAME]) + strlcpy(px->info.if_name, RTA_DATA(attrs[IFLA_IFNAME]), + sizeof(px->info.if_name)); + } + + ifpx_cleanup_proxies(); +exit: + rte_spinlock_unlock(&ifpx_lock); +} + +static +void handle_addr(const struct nlmsghdr *h, bool needs_del) +{ + const struct ifaddrmsg *ifa = NLMSG_DATA(h); + int alen = h->nlmsg_len - NLMSG_LENGTH(sizeof(*ifa)); + const struct rtattr *attrs[IFA_MAX+1] = { NULL }; + const struct rtattr *attr; + struct ifpx_proxy_node *px; + struct rte_ifpx_event ev; + const uint8_t *ip; + + IFPX_LOG(DEBUG, "\tAddr action (%u): %u, family: %u", + ifa->ifa_index, h->nlmsg_type, ifa->ifa_family); + + rte_spinlock_lock(&ifpx_lock); + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->info.if_index == ifa->ifa_index) + break; + } + + /* Drop messages that are not associated with any proxy */ + if (!px) + goto exit; + /* When message is a reply to request for specific interface then keep + * it only when it contains info for this interface. + */ + if (h->nlmsg_pid == ifpx_pid && h->nlmsg_seq >> 8 && + (h->nlmsg_seq >> 8) != ifa->ifa_index) + goto exit; + + for (attr = IFA_RTA(ifa); RTA_OK(attr, alen); + attr = RTA_NEXT(attr, alen)) { + if (attr->rta_type > IFA_MAX) + continue; + attrs[attr->rta_type] = attr; + } + + if (attrs[IFA_ADDRESS]) { + ip = RTA_DATA(attrs[IFA_ADDRESS]); + if (ifa->ifa_family == AF_INET) { + ev.type = needs_del ? RTE_IFPX_ADDR_DEL + : RTE_IFPX_ADDR_ADD; + ev.addr_change.ip = + RTE_IPV4(ip[0], ip[1], ip[2], ip[3]); + } else { + ev.type = needs_del ? 
RTE_IFPX_ADDR6_DEL + : RTE_IFPX_ADDR6_ADD; + memcpy(ev.addr6_change.ip, ip, 16); + } + ifpx_notify_event(&ev, px); + ifpx_cleanup_proxies(); + } +exit: + rte_spinlock_unlock(&ifpx_lock); +} + +static +void handle_route(const struct nlmsghdr *h, bool needs_del) +{ + const struct rtmsg *r = NLMSG_DATA(h); + int alen = h->nlmsg_len - NLMSG_LENGTH(sizeof(*r)); + const struct rtattr *attrs[RTA_MAX+1] = { NULL }; + const struct rtattr *attr; + struct rte_ifpx_event ev; + struct ifpx_proxy_node *px = NULL; + const uint8_t *ip; + + IFPX_LOG(DEBUG, "\tRoute action: %u, family: %u", + h->nlmsg_type, r->rtm_family); + + for (attr = RTM_RTA(r); RTA_OK(attr, alen); + attr = RTA_NEXT(attr, alen)) { + if (attr->rta_type > RTA_MAX) + continue; + attrs[attr->rta_type] = attr; + } + + memset(&ev, 0, sizeof(ev)); + ev.type = RTE_IFPX_EVENT_INVALID; + + rte_spinlock_lock(&ifpx_lock); + if (attrs[RTA_OIF]) { + int if_index = *((int32_t *)RTA_DATA(attrs[RTA_OIF])); + + if (if_index > 0) { + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->info.if_index == (uint32_t)if_index) + break; + } + } + } + /* We are only interested in routes related to the proxy interfaces and + * we need to have dst - otherwise skip the message. + */ + if (!px || !attrs[RTA_DST]) + goto exit; + + ip = RTA_DATA(attrs[RTA_DST]); + /* This is common to both IPv4/6. */ + ev.route_change.depth = r->rtm_dst_len; + if (r->rtm_family == AF_INET) { + ev.type = needs_del ? RTE_IFPX_ROUTE_DEL + : RTE_IFPX_ROUTE_ADD; + ev.route_change.ip = RTE_IPV4(ip[0], ip[1], ip[2], ip[3]); + } else { + ev.type = needs_del ? 
RTE_IFPX_ROUTE6_DEL + : RTE_IFPX_ROUTE6_ADD; + memcpy(ev.route6_change.ip, ip, 16); + } + if (attrs[RTA_GATEWAY]) { + ip = RTA_DATA(attrs[RTA_GATEWAY]); + if (r->rtm_family == AF_INET) + ev.route_change.gateway = + RTE_IPV4(ip[0], ip[1], ip[2], ip[3]); + else + memcpy(ev.route6_change.gateway, ip, 16); + } + + ifpx_notify_event(&ev, px); + /* Let's check for proxies to remove here too - just in case somebody + * removed the non-proxy related callback. + */ + ifpx_cleanup_proxies(); +exit: + rte_spinlock_unlock(&ifpx_lock); +} + +/* Link, addr and route related messages seem to have this macro defined but not + * neighbour one. Define one if it is missing - const qualifiers added just to + * silence compiler - for some reason it is not needed in equivalent macros for + * other messages and here compiler is complaining about (char*) cast on pointer + * to const. + */ +#ifndef NDA_RTA +#define NDA_RTA(r) ((const struct rtattr *)(((const char *)(r)) + \ + NLMSG_ALIGN(sizeof(struct ndmsg)))) +#endif + +static +void handle_neigh(const struct nlmsghdr *h, bool needs_del) +{ + const struct ndmsg *n = NLMSG_DATA(h); + int alen = h->nlmsg_len - NLMSG_LENGTH(sizeof(*n)); + const struct rtattr *attrs[NDA_MAX+1] = { NULL }; + const struct rtattr *attr; + struct ifpx_proxy_node *px; + struct rte_ifpx_event ev; + const uint8_t *ip; + + IFPX_LOG(DEBUG, "\tNeighbour action: %u, family: %u, state: %u, if: %d", + h->nlmsg_type, n->ndm_family, n->ndm_state, n->ndm_ifindex); + + for (attr = NDA_RTA(n); RTA_OK(attr, alen); + attr = RTA_NEXT(attr, alen)) { + if (attr->rta_type > NDA_MAX) + continue; + attrs[attr->rta_type] = attr; + } + + memset(&ev, 0, sizeof(ev)); + ev.type = RTE_IFPX_EVENT_INVALID; + + rte_spinlock_lock(&ifpx_lock); + TAILQ_FOREACH(px, &ifpx_proxies, elem) { + if (px->info.if_index == (unsigned int)n->ndm_ifindex) + break; + } + /* We need only subset of neighbourhood related to proxy interfaces. 
+ * lladdr seems to be needed only for adding new entry - modifications + * (also reported via RTM_NEWLINK) and deletion include only dst. + */ + if (!px || !attrs[NDA_DST] || (!needs_del && !attrs[NDA_LLADDR])) + goto exit; + + ip = RTA_DATA(attrs[NDA_DST]); + if (n->ndm_family == AF_INET) { + ev.type = needs_del ? RTE_IFPX_NEIGH_DEL + : RTE_IFPX_NEIGH_ADD; + ev.neigh_change.ip = RTE_IPV4(ip[0], ip[1], ip[2], ip[3]); + } else { + ev.type = needs_del ? RTE_IFPX_NEIGH6_DEL + : RTE_IFPX_NEIGH6_ADD; + memcpy(ev.neigh6_change.ip, ip, 16); + } + if (attrs[NDA_LLADDR]) + rte_ether_addr_copy(RTA_DATA(attrs[NDA_LLADDR]), + &ev.neigh_change.mac); + + ifpx_notify_event(&ev, px); + /* Let's check for proxies to remove here too - just in case somebody + * removed the non-proxy related callback. + */ + ifpx_cleanup_proxies(); +exit: + rte_spinlock_unlock(&ifpx_lock); +} + +static +void if_proxy_intr_callback(void *arg __rte_unused) +{ + struct nlmsghdr *h; + struct sockaddr_nl addr; + socklen_t addr_len = sizeof(addr); /* must be initialized for recvfrom() */ + char buf[8192]; + ssize_t len; + +restart: + len = recvfrom(ifpx_irq.fd, buf, sizeof(buf), 0, + (struct sockaddr *)&addr, &addr_len); + if (len < 0) { + if (errno == EINTR) { + IFPX_LOG(DEBUG, "recvfrom() interrupted"); + goto restart; + } + IFPX_LOG(ERR, "Failed to read netlink msg: %zd (errno %d)", + len, errno); + return; + } + if (addr_len != sizeof(addr)) { + IFPX_LOG(ERR, "Invalid netlink addr size: %d", addr_len); + return; + } + IFPX_LOG(DEBUG, "Read %zd bytes (buf %zu) from %u/%u", len, + sizeof(buf), addr.nl_pid, addr.nl_groups); + + for (h = (struct nlmsghdr *)buf; NLMSG_OK(h, len); + h = NLMSG_NEXT(h, len)) { + IFPX_LOG(DEBUG, "Recv msg: %u (%u/%u/%u seq/flags/pid)", + h->nlmsg_type, h->nlmsg_seq, h->nlmsg_flags, + h->nlmsg_pid); + + switch (h->nlmsg_type) { + case RTM_NEWLINK: + case RTM_DELLINK: + handle_link(h); + break; + case RTM_NEWADDR: + case RTM_DELADDR: + handle_addr(h, h->nlmsg_type == RTM_DELADDR); + break; + case RTM_NEWROUTE: + case RTM_DELROUTE: +
handle_route(h, h->nlmsg_type == RTM_DELROUTE); + break; + case RTM_NEWNEIGH: + case RTM_DELNEIGH: + handle_neigh(h, h->nlmsg_type == RTM_DELNEIGH); + break; + } + + /* If this is a reply for global request then follow up with + * additional requests and notify about finish. + */ + if (h->nlmsg_pid == ifpx_pid && (h->nlmsg_seq >> 8) == 0 && + h->nlmsg_type == NLMSG_DONE) { + if ((h->nlmsg_seq & 0xFF) == RTM_GETLINK) + request_info(RTM_GETADDR, 0); + else if ((h->nlmsg_seq & 0xFF) == RTM_GETADDR) + request_info(RTM_GETROUTE, 0); + else if ((h->nlmsg_seq & 0xFF) == RTM_GETROUTE) + request_info(RTM_GETNEIGH, 0); + else { + struct rte_ifpx_event ev = { + .type = RTE_IFPX_CFG_DONE + }; + + RTE_ASSERT((h->nlmsg_seq & 0xFF) == + RTM_GETNEIGH); + rte_spinlock_lock(&ifpx_lock); + ifpx_notify_event(&ev, NULL); + rte_spinlock_unlock(&ifpx_lock); + } + } + } + IFPX_LOG(DEBUG, "Finished msg loop: %ld bytes left", len); +} + +static +int nlink_listen(void) +{ + struct sockaddr_nl addr = { + .nl_family = AF_NETLINK, + .nl_pid = 0, + }; + socklen_t addr_len = sizeof(addr); + int ret; + + if (ifpx_irq.fd != -1) { + rte_errno = EBUSY; + return -1; + } + + addr.nl_groups = 1 << (RTNLGRP_LINK-1) + | 1 << (RTNLGRP_NEIGH-1) + | 1 << (RTNLGRP_IPV4_IFADDR-1) + | 1 << (RTNLGRP_IPV6_IFADDR-1) + | 1 << (RTNLGRP_IPV4_ROUTE-1) + | 1 << (RTNLGRP_IPV6_ROUTE-1); + + ifpx_irq.fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, + NETLINK_ROUTE); + if (ifpx_irq.fd == -1) { + IFPX_LOG(ERR, "Failed to create netlink socket: %d", errno); + goto error; + } + /* Starting with kernel 4.19 you can request dump for a specific + * interface and kernel will filter out and send only relevant info. + * Otherwise NLM_F_DUMP will generate info for all interfaces and you + * need to filter them yourself. 
+ */ +#ifdef NETLINK_DUMP_STRICT_CHK + ret = 1; /* use this var also as an input param */ + ret = setsockopt(ifpx_irq.fd, SOL_NETLINK, NETLINK_DUMP_STRICT_CHK, + &ret, sizeof(ret)); + if (ret < 0) { + IFPX_LOG(ERR, "Failed to set socket option: %d", errno); + goto error; + } +#endif + + ret = bind(ifpx_irq.fd, (struct sockaddr *)&addr, addr_len); + if (ret < 0) { + IFPX_LOG(ERR, "Failed to bind socket: %d", errno); + goto error; + } + ret = getsockname(ifpx_irq.fd, (struct sockaddr *)&addr, &addr_len); + if (ret < 0) { + IFPX_LOG(ERR, "Failed to get socket addr: %d", errno); + goto error; + } else { + ifpx_pid = addr.nl_pid; + IFPX_LOG(DEBUG, "Assigned port ID: %u", addr.nl_pid); + } + + ret = rte_intr_callback_register(&ifpx_irq, if_proxy_intr_callback, + NULL); + if (ret == 0) + return 0; + +error: + rte_errno = errno; + if (ifpx_irq.fd != -1) { + close(ifpx_irq.fd); + ifpx_irq.fd = -1; + } + return -1; +} + +static +int nlink_close(void) +{ + int ec; + + if (ifpx_irq.fd < 0) + return -EBADFD; + + /* Drop the lock for the time of unregistering - otherwise we might + * deadlock: e.g. we take a lock here and try to unregister and wait for the + * interrupt lock but it is taken already because notification comes + * and executes proxy callback which will try to take a lock.
+ */ + rte_spinlock_unlock(&ifpx_lock); + do + ec = rte_intr_callback_unregister(&ifpx_irq, + if_proxy_intr_callback, NULL); + while (ec == -EAGAIN); /* unlikely but possible - at least I think so */ + rte_spinlock_lock(&ifpx_lock); + + close(ifpx_irq.fd); + ifpx_irq.fd = -1; + ifpx_pid = 0; + + return 0; +} + +static +void nlink_get_info(int if_index) +{ + if (ifpx_irq.fd != -1) + request_info(RTM_GETLINK, if_index); +} + +struct ifpx_platform_callbacks ifpx_platform = { + .init = NULL, + .events = NULL, + .listen = nlink_listen, + .close = nlink_close, + .get_info = nlink_get_info, +}; diff --git a/lib/librte_if_proxy/meson.build b/lib/librte_if_proxy/meson.build new file mode 100644 index 000000000..f0c1a6e15 --- /dev/null +++ b/lib/librte_if_proxy/meson.build @@ -0,0 +1,19 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2020 Marvell International Ltd. + +# Currently only implemented on Linux +if not is_linux + build = false + reason = 'only supported on linux' +endif + +version = 1 +allow_experimental_apis = true + +deps += ['ethdev'] +sources = files('if_proxy_common.c') +headers = files('rte_if_proxy.h') + +if is_linux + sources += files('linux/if_proxy.c') +endif diff --git a/lib/librte_if_proxy/rte_if_proxy.h b/lib/librte_if_proxy/rte_if_proxy.h new file mode 100644 index 000000000..2378b4424 --- /dev/null +++ b/lib/librte_if_proxy/rte_if_proxy.h @@ -0,0 +1,585 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell International Ltd. + */ + +#ifndef _RTE_IF_PROXY_H_ +#define _RTE_IF_PROXY_H_ + +/** + * @file + * RTE IF Proxy library + * + * The IF Proxy library allows for monitoring of system network configuration + * and configuration of DPDK ports by using usual system utilities (like the + * ones from iproute2 package). 
+ * + * It is based on the notion of a "proxy interface" which can actually be any + * DPDK port that is also visible to the system - that is, it has a non-zero + * 'if_index' field in the 'rte_eth_dev_info' structure. + * + * If the application doesn't have any such port (or doesn't want to use it as + * a proxy) it can create one by calling: + * + * proxy_id = rte_ifpx_proxy_create(RTE_IFPX_DEFAULT); + * + * This function is just a wrapper that constructs a valid 'devargs' string + * based on the proxy type chosen (currently Tap or KNI) and creates the + * interface by calling rte_ifpx_proxy_create_by_devarg(). + * + * Once one has a DPDK port capable of being a proxy, one can bind a target + * DPDK port to it by calling: + * + * rte_ifpx_port_bind(port_id, proxy_id); + * + * This binding is a logical one - there is no automatic packet forwarding + * between a port and its proxy since the library doesn't know the structure of + * the application's packet processing. It remains the application's + * responsibility to forward the packets from/to the proxy port (by calling the + * usual DPDK RX/TX burst API). However, when the library notes some change to + * the proxy interface it simply calls the appropriate callback with the + * 'port_id' of the DPDK port that is bound to this proxy interface. The + * binding can be one-to-many - that is, many ports can point to one proxy - in + * which case the registered callbacks are called for every bound port. + * + * The callbacks that are used for notifications are described by the + * 'rte_ifpx_callback' structure and they are registered by calling: + * + * rte_ifpx_callbacks_register(len, cbs); + * + * where cbs is an array of callbacks. + * @see rte_ifpx_callbacks_register() + * + * Finally the application should call: + * + * rte_ifpx_listen(); + * + * which will query the system for the present network configuration and start + * listening to its changes. + */ + +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * Enum naming the type of proxy to create.
+ * + * @see rte_ifpx_create() + */ +enum rte_ifpx_proxy_type { + RTE_IFPX_DEFAULT, /**< Use default proxy type for given arch. */ + RTE_IFPX_TAP, /**< Use Tap based port for proxy. */ + RTE_IFPX_KNI /**< Use KNI based port for proxy. */ +}; + +/** + * Create DPDK port that can serve as an interface proxy. + * + * This function is just a wrapper around rte_ifpx_create_by_devarg() that + * constructs its 'devarg' argument based on type of proxy requested. + * + * @param type + * A type of proxy to create. + * + * @return + * DPDK port id on success, RTE_MAX_ETHPORTS otherwise. + * + * @see enum rte_ifpx_type + * @see rte_ifpx_create_by_devarg() + */ +__rte_experimental +uint16_t rte_ifpx_proxy_create(enum rte_ifpx_proxy_type type); + +/** + * Create DPDK port that can serve as an interface proxy. + * + * @param devarg + * A string passed to rte_dev_probe() to create proxy port. + * + * @return + * DPDK port id on success, RTE_MAX_ETHPORTS otherwise. + */ +__rte_experimental +uint16_t rte_ifpx_proxy_create_by_devarg(const char *devarg); + +/** + * Remove DPDK proxy port. + * + * In addition to removing the proxy port the bindings (if any) are cleared. + * + * @param proxy_id + * Port id of the proxy that should be removed. + * + * @return + * 0 on success, negative on error. + */ +__rte_experimental +int rte_ifpx_proxy_destroy(uint16_t proxy_id); + +/** + * The rte_ifpx_event_type enum lists all possible event types that can be + * signaled by this library. To learn what events are supported on your + * platform call rte_ifpx_events_available(). + * + * NOTE - in order to keep ABI stable do not reorder these enums freely. 
+ */ +enum rte_ifpx_event_type { + RTE_IFPX_MAC_CHANGE, /**< @see struct rte_ifpx_mac_change */ + RTE_IFPX_MTU_CHANGE, /**< @see struct rte_ifpx_mtu_change */ + RTE_IFPX_LINK_CHANGE, /**< @see struct rte_ifpx_link_change */ + RTE_IFPX_ADDR_ADD, /**< @see struct rte_ifpx_addr_change */ + RTE_IFPX_ADDR_DEL, /**< @see struct rte_ifpx_addr_change */ + RTE_IFPX_ADDR6_ADD, /**< @see struct rte_ifpx_addr6_change */ + RTE_IFPX_ADDR6_DEL, /**< @see struct rte_ifpx_addr6_change */ + RTE_IFPX_ROUTE_ADD, /**< @see struct rte_ifpx_route_change */ + RTE_IFPX_ROUTE_DEL, /**< @see struct rte_ifpx_route_change */ + RTE_IFPX_ROUTE6_ADD, /**< @see struct rte_ifpx_route6_change */ + RTE_IFPX_ROUTE6_DEL, /**< @see struct rte_ifpx_route6_change */ + RTE_IFPX_NEIGH_ADD, /**< @see struct rte_ifpx_neigh_change */ + RTE_IFPX_NEIGH_DEL, /**< @see struct rte_ifpx_neigh_change */ + RTE_IFPX_NEIGH6_ADD, /**< @see struct rte_ifpx_neigh6_change */ + RTE_IFPX_NEIGH6_DEL, /**< @see struct rte_ifpx_neigh6_change */ + RTE_IFPX_CFG_DONE, /**< This event is a lib specific event - it is + * signaled when initial network configuration + * query is finished and has no event data. + */ +}; + +/** + * Get the bit mask of implemented events/callbacks for this platform. + * + * @return + * Bit mask of events/callbacks implemented: each event type can be tested by + * checking bit (1 << ev) where 'ev' is one of the rte_ifpx_event_type enum + * values. + * @see enum rte_ifpx_event_type + */ +__rte_experimental +uint64_t rte_ifpx_events_available(void); + +/** + * The rte_ifpx_event defines structure used to pass notification event to + * application. Each event type has its own dedicated inner structure - these + * structures are also used when using callbacks notifications. + */ +struct rte_ifpx_event { + enum rte_ifpx_event_type type; + union { + /** Structure used to pass notification about MAC change of the + * proxy interface. 
+ * @see RTE_IFPX_MAC_CHANGE + */ + struct rte_ifpx_mac_change { + uint16_t port_id; + struct rte_ether_addr mac; + } mac_change; + /** Structure used to pass notification about MTU change. + * @see RTE_IFPX_MTU_CHANGE + */ + struct rte_ifpx_mtu_change { + uint16_t port_id; + uint16_t mtu; + } mtu_change; + /** Structure used to pass notification about link going + * up/down. + * @see RTE_IFPX_LINK_CHANGE + */ + struct rte_ifpx_link_change { + uint16_t port_id; + int is_up; + } link_change; + /** Structure used to pass notification about IPv4 address being + * added/removed. All IPv4 addresses reported by this library + * are in host order. + * @see RTE_IFPX_ADDR_ADD + * @see RTE_IFPX_ADDR_DEL + */ + struct rte_ifpx_addr_change { + uint16_t port_id; + uint32_t ip; + } addr_change; + /** Structure used to pass notification about IPv6 address being + * added/removed. + * @see RTE_IFPX_ADDR6_ADD + * @see RTE_IFPX_ADDR6_DEL + */ + struct rte_ifpx_addr6_change { + uint16_t port_id; + uint8_t ip[16]; + } addr6_change; + /** Structure used to pass notification about IPv4 route being + * added/removed. + * @see RTE_IFPX_ROUTE_ADD + * @see RTE_IFPX_ROUTE_DEL + */ + struct rte_ifpx_route_change { + uint16_t port_id; + uint8_t depth; + uint32_t ip; + uint32_t gateway; + } route_change; + /** Structure used to pass notification about IPv6 route being + * added/removed. + * @see RTE_IFPX_ROUTE6_ADD + * @see RTE_IFPX_ROUTE6_DEL + */ + struct rte_ifpx_route6_change { + uint16_t port_id; + uint8_t depth; + uint8_t ip[16]; + uint8_t gateway[16]; + } route6_change; + /** Structure used to pass notification about IPv4 neighbour + * info changes. + * @see RTE_IFPX_NEIGH_ADD + * @see RTE_IFPX_NEIGH_DEL + */ + struct rte_ifpx_neigh_change { + uint16_t port_id; + struct rte_ether_addr mac; + uint32_t ip; + } neigh_change; + /** Structure used to pass notification about IPv6 neighbour + * info changes. 
+ * @see RTE_IFPX_NEIGH6_ADD + * @see RTE_IFPX_NEIGH6_DEL + */ + struct rte_ifpx_neigh6_change { + uint16_t port_id; + struct rte_ether_addr mac; + uint8_t ip[16]; + } neigh6_change; + /* This structure is used internally - to abstract common parts + * of proxy/port related events and to be able to refer to this + * union without giving it a name. + */ + struct { + uint16_t port_id; + } data; + }; +}; + +/** + * This library can deliver notification about network configuration changes + * either by the use of registered callbacks and/or by queueing change events to + * configured notification queues. The logic used is: + * 1. If there is callback registered for given event type it is called. In + * case of many ports to one proxy binding, this callback is called for every + * port bound. + * 2. If this callback returns non-zero value (for any of ports in case of + * many-1 bindings) the handling of an event is considered as complete. + * 3. Otherwise the event is added to each configured event queue. The event is + * allocated with malloc() so after dequeueing and handling the application + * should deallocate it with free(). + * + * This dual notification mechanism is meant to provide some flexibility to + * application writer. For example, if you store your data in a single writer/ + * many readers coherent data structure you could just update this structure + * from the callback. If you keep separate copy per lcore/port you could make + * some common preparations (if applicable) in the callback, return 0 and use + * notification queues to pick up the change and update data structures. Or you + * could skip the callbacks altogether and just use notification queues - and + * configure them at the level appropriate for your application design (one + * global / one per lcore / one per port ...). + */ + +/** + * Add notification queue to the list of queues. 
+ * + * @param r + * Ring used for queueing of notification events - application can assume that + * there is only one producer. + * @return + * 0 on success, negative otherwise. + */ +int rte_ifpx_queue_add(struct rte_ring *r); + +/** + * Remove notification queue from the list of queues. + * + * @param r + * Notification ring used for queueing of notification events (previously + * added via rte_ifpx_queue_add()). + * @return + * 0 on success, negative otherwise. + */ +int rte_ifpx_queue_remove(struct rte_ring *r); + +/** + * This union groups the callback types that might be called as a notification + * events for changing network configuration. Not every platform might + * implement all of them and you can query the availability with + * rte_ifpx_events_available() function. + * @see rte_ifpx_events_available() + * @see rte_ifpx_callbacks_register() + */ +union rte_ifpx_cb_ptr { + int (*mac_change)(const struct rte_ifpx_mac_change *event); + /**< Callback for notification about MAC change of the proxy interface. + * This callback (as all other port related callbacks) is called for + * each port (with its port_id as a first argument) bound to the proxy + * interface for which change has been observed. + * @see struct rte_ifpx_mac_change + * @return non-zero if event handling is finished + */ + int (*mtu_change)(const struct rte_ifpx_mtu_change *event); + /**< Callback for notification about MTU change. + * @see struct rte_ifpx_mtu_change + * @return non-zero if event handling is finished + */ + int (*link_change)(const struct rte_ifpx_link_change *event); + /**< Callback for notification about link going up/down. + * @see struct rte_ifpx_link_change + * @return non-zero if event handling is finished + */ + int (*addr_add)(const struct rte_ifpx_addr_change *event); + /**< Callback for notification about IPv4 address being added. 
+ * @see struct rte_ifpx_addr_change + * @return non-zero if event handling is finished + */ + int (*addr_del)(const struct rte_ifpx_addr_change *event); + /**< Callback for notification about IPv4 address removal. + * @see struct rte_ifpx_addr_change + * @return non-zero if event handling is finished + */ + int (*addr6_add)(const struct rte_ifpx_addr6_change *event); + /**< Callback for notification about IPv6 address being added. + * @see struct rte_ifpx_addr6_change + * @return non-zero if event handling is finished + */ + int (*addr6_del)(const struct rte_ifpx_addr6_change *event); + /**< Callback for notification about IPv6 address removal. + * @see struct rte_ifpx_addr6_change + * @return non-zero if event handling is finished + */ + /* Please note that "route" callbacks might also be called when the user + * adds an address to the interface (that is, in addition to the address + * related callbacks). + */ + int (*route_add)(const struct rte_ifpx_route_change *event); + /**< Callback for notification about IPv4 route being added. + * @see struct rte_ifpx_route_change + * @return non-zero if event handling is finished + */ + int (*route_del)(const struct rte_ifpx_route_change *event); + /**< Callback for notification about IPv4 route removal. + * @see struct rte_ifpx_route_change + * @return non-zero if event handling is finished + */ + int (*route6_add)(const struct rte_ifpx_route6_change *event); + /**< Callback for notification about IPv6 route being added. + * @see struct rte_ifpx_route6_change + * @return non-zero if event handling is finished + */ + int (*route6_del)(const struct rte_ifpx_route6_change *event); + /**< Callback for notification about IPv6 route removal. + * @see struct rte_ifpx_route6_change + * @return non-zero if event handling is finished + */ + int (*neigh_add)(const struct rte_ifpx_neigh_change *event); + /**< Callback for notification about IPv4 neighbour being added.
+ * @see struct rte_ifpx_neigh_change + * @return non-zero if event handling is finished + */ + int (*neigh_del)(const struct rte_ifpx_neigh_change *event); + /**< Callback for notification about IPv4 neighbour removal. + * @see struct rte_ifpx_neigh_change + * @return non-zero if event handling is finished + */ + int (*neigh6_add)(const struct rte_ifpx_neigh6_change *event); + /**< Callback for notification about IPv6 neighbour being added. + * @see struct rte_ifpx_neigh6_change + * @return non-zero if event handling is finished + */ + int (*neigh6_del)(const struct rte_ifpx_neigh6_change *event); + /**< Callback for notification about IPv6 neighbour removal. + * @see struct rte_ifpx_neigh6_change + * @return non-zero if event handling is finished + */ + int (*cfg_done)(void); + /**< Lib specific callback - called when the initial network configuration + * query is finished. + * @return non-zero if event handling is finished + */ +}; + +/** + * This structure is a "tagged union" used to pass the callback for + * registration. + * + * @see union rte_ifpx_cb_ptr + * @see rte_ifpx_events_available() + * @see rte_ifpx_callbacks_register() + */ +struct rte_ifpx_callback { + enum rte_ifpx_event_type type; + union rte_ifpx_cb_ptr callback; +}; + +/** + * Register proxy callbacks. + * + * This function registers callbacks to be called upon appropriate network + * event notification. + * + * @param len + * Number of elements in the 'cbs' array. + * @param cbs + * Set of callbacks that will be called. The library does not take any + * ownership of the pointer passed - the callbacks are stored internally. + * + * @return + * 0 on success, negative otherwise. + */ +__rte_experimental +int rte_ifpx_callbacks_register(unsigned int len, + const struct rte_ifpx_callback cbs[]); + +/** + * Unregister proxy callbacks. + * + * This function unregisters all callbacks previously registered with + * rte_ifpx_callbacks_register(). + */ +__rte_experimental +void rte_ifpx_callbacks_unregister_all(void); + +/** + * Unregister proxy callback.
+ * + * This function unregisters one callback previously registered with + * rte_ifpx_callbacks_register(). + * + * @param ev + * Type of event for which the callback should be removed. + * + * @return + * 0 on success, negative otherwise. + */ +__rte_experimental +int rte_ifpx_callbacks_unregister(enum rte_ifpx_event_type ev); + +/** + * Bind the port to its proxy. + * + * After calling this function all network configuration of the proxy (and its + * changes) will be passed to the given port by calling the registered + * callbacks with 'port_id' as an argument. + * + * Note: since both arguments are of the same type, the first one is kept the + * same for bind/unbind in order not to mix them up and to ease remembering the + * order. + * + * @param port_id + * Id of the port to be bound. + * @param proxy_id + * Id of the proxy the port needs to be bound to. + * @return + * 0 on success, negative on error. + */ +__rte_experimental +int rte_ifpx_port_bind(uint16_t port_id, uint16_t proxy_id); + +/** + * Unbind the port from its proxy. + * + * After calling this function the registered callbacks will no longer be + * called for this port (but they might be called for other ports in a + * one-to-many binding scenario). + * + * @param port_id + * Id of the port to unbind. + * @return + * 0 on success, negative on error. + */ +__rte_experimental +int rte_ifpx_port_unbind(uint16_t port_id); + +/** + * Get the system network configuration and start listening to its changes. + * + * @return + * 0 on success, negative otherwise. + */ +__rte_experimental +int rte_ifpx_listen(void); + +/** + * Remove all bindings/callbacks and stop listening to network configuration. + * + * @return + * 0 on success, negative otherwise. + */ +__rte_experimental +int rte_ifpx_close(void); + +/** + * Get the id of the proxy the port is bound to. + * + * @param port_id + * Id of the port for which to get the proxy. + * @return + * Port id of the proxy on success, RTE_MAX_ETHPORTS on error.
+ */ +__rte_experimental +uint16_t rte_ifpx_proxy_get(uint16_t port_id); + +/** + * Test for port acting as a proxy. + * + * @param port_id + * Id of the port. + * @return + * 1 if port acts as a proxy, 0 otherwise. + */ +static inline +int rte_ifpx_is_proxy(uint16_t port_id) +{ + return rte_ifpx_proxy_get(port_id) == port_id; +} + +/** + * Get the ids of the ports bound to the proxy. + * + * @param proxy_id + * Id of the proxy for which to get ports. + * @param ports + * Array where to store the port ids. + * @param num + * Size of the 'ports' array. + * @return + * The number of ports bound to given proxy. Note that bound ports are filled + * in 'ports' array up to its size but the return value is always the total + * number of ports bound - so you can make call first with NULL/0 to query for + * the size of the buffer to create or call it with the buffer you have and + * later check if it was large enough. + */ +__rte_experimental +unsigned int rte_ifpx_port_get(uint16_t proxy_id, + uint16_t *ports, unsigned int num); + +/** + * The structure containing some properties of the proxy interface. + */ +struct rte_ifpx_info { + unsigned int if_index; /* entry valid iff if_index != 0 */ + uint16_t mtu; + struct rte_ether_addr mac; + char if_name[RTE_ETH_NAME_MAX_LEN]; +}; + +/** + * Get the properties of the proxy interface. Argument can be either id of the + * proxy or an id of a port that is bound to it. + * + * @param port_id + * Id of the port (or proxy) for which to get proxy properties. + * @return + * Pointer to the proxy information structure. 
+ */ +__rte_experimental +const struct rte_ifpx_info *rte_ifpx_info_get(uint16_t port_id); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_IF_PROXY_H_ */ diff --git a/lib/librte_if_proxy/rte_if_proxy_version.map b/lib/librte_if_proxy/rte_if_proxy_version.map new file mode 100644 index 000000000..6da35d096 --- /dev/null +++ b/lib/librte_if_proxy/rte_if_proxy_version.map @@ -0,0 +1,20 @@ +EXPERIMENTAL { + global: + + rte_ifpx_callbacks_register; + rte_ifpx_callbacks_unregister; + rte_ifpx_callbacks_unregister_all; + rte_ifpx_close; + rte_ifpx_events_available; + rte_ifpx_info_get; + rte_ifpx_listen; + rte_ifpx_port_bind; + rte_ifpx_port_get; + rte_ifpx_port_unbind; + rte_ifpx_proxy_create; + rte_ifpx_proxy_create_by_devarg; + rte_ifpx_proxy_destroy; + rte_ifpx_proxy_get; + + local: *; +}; diff --git a/lib/meson.build b/lib/meson.build index 07a65a625..caa54f7b5 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -21,7 +21,7 @@ libraries = [ 'acl', 'bbdev', 'bitratestats', 'cfgfile', 'compressdev', 'cryptodev', 'distributor', 'efd', 'eventdev', - 'gro', 'gso', 'ip_frag', 'jobstats', + 'gro', 'gso', 'if_proxy', 'ip_frag', 'jobstats', 'kni', 'latencystats', 'lpm', 'member', 'power', 'pdump', 'rawdev', 'rib', 'reorder', 'sched', 'security', 'stack', 'vhost', From patchwork Mon May 4 08:53:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Andrzej Ostruszka [C]" X-Patchwork-Id: 69691 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6E882A04AF; Mon, 4 May 2020 10:53:47 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 909651D50C; Mon, 4 May 2020 10:53:26 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org 
(Postfix) with ESMTP id 710081D44F for ; Mon, 4 May 2020 10:53:24 +0200 (CEST) From: Andrzej Ostruszka To: , Thomas Monjalon , John McNamara , Marko Kovacevic Date: Mon, 4 May 2020 10:53:13 +0200 Message-ID: <20200504085315.7296-3-aostruszka@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200504085315.7296-1-aostruszka@marvell.com> References: <20200306164104.15528-1-aostruszka@marvell.com> <20200504085315.7296-1-aostruszka@marvell.com> MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v3 2/4] if_proxy: add library documentation

This commit adds documentation of the IF Proxy library.

Signed-off-by: Andrzej Ostruszka --- MAINTAINERS | 1 + doc/guides/prog_guide/if_proxy_lib.rst | 142 +++++++++++++++++++++++++ doc/guides/prog_guide/index.rst | 1 + 3 files changed, 144 insertions(+) create mode 100644 doc/guides/prog_guide/if_proxy_lib.rst diff --git a/MAINTAINERS b/MAINTAINERS index 1013745ce..1216366ab 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1475,6 +1475,7 @@ F: doc/guides/prog_guide/bpf_lib.rst IF Proxy - EXPERIMENTAL M: Andrzej Ostruszka F: lib/librte_if_proxy/ +F: doc/guides/prog_guide/if_proxy_lib.rst Test Applications ----------------- diff --git a/doc/guides/prog_guide/if_proxy_lib.rst b/doc/guides/prog_guide/if_proxy_lib.rst new file mode 100644 index 000000000..4ec7e65a5 --- /dev/null +++ b/doc/guides/prog_guide/if_proxy_lib.rst @@ -0,0 +1,142 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(C) 2020 Marvell International Ltd.
+
+.. _IF_Proxy_Library:
+
+IF Proxy Library
+================
+
+When a network interface is assigned to DPDK, it usually disappears from
+the system, and the user loses the ability to configure it via typical
+configuration tools.
+There are basically two options to deal with this situation:
+
+- configure it via command-line arguments and/or load the configuration
+  from a file,
+- add support for live configuration via some IPC mechanism.
+
+The first option is static, and the second one requires extra work to
+add a communication loop (e.g. a separate thread listening and
+communicating on a socket).
+
+This library makes it possible to configure DPDK ports by using normal
+configuration utilities (e.g. from the iproute2 suite).
+It requires the user to configure additional DPDK ports that are
+visible to the system (such as Tap or KNI - actually any port that has
+a valid `if_index` in ``struct rte_eth_dev_info`` will do) and to
+designate them as port representors (proxies) in the system.
+
+Let's walk through the typical intended usage with an example.
+Suppose that you have an application that handles traffic on two ports
+(in the whitelist below)::
+
+    ./app -w 00:14.0 -w 00:16.0 --vdev=net_tap0 --vdev=net_tap1
+
+So, in addition to the "regular" ports, you need to configure proxy
+ports.
+These proxy ports can be created via the command line (as above) or
+from within the application (e.g. by using the
+`rte_ifpx_proxy_create()` function).
+
+Once you have proxy ports, you need to bind them to the "regular"
+ports::
+
+    rte_ifpx_port_bind(port0, proxy0);
+    rte_ifpx_port_bind(port1, proxy1);
+
+This binding is a logical one: there is no automatic packet forwarding
+configured.
+This is because the library cannot tell upfront what portion of the
+traffic received on ports 0/1 should be redirected to the system via
+the proxies, and it also does not know how the application is
+structured (what packet processing engines it uses).
+Therefore it is the application writer's responsibility to include the
+proxy ports in the packet processing and to forward appropriate
+packets between proxies and ports.
+What the library actually does is fetch the network configuration from
+the system and listen for its changes.
+This information is then matched against the `if_index` of the
+configured proxies and passed to the application.
+
+There are two mechanisms via which the library passes notifications to
+the application.
+The first is a set of global callbacks that the user has to register
+via::
+
+    rte_ifpx_callbacks_register(len, cbs);
+
+Here `cbs` is an array of ``struct rte_ifpx_callback``, which is a
+tagged union with the following members::
+
+    int (*mac_change)(const struct rte_ifpx_mac_change *event);
+    int (*mtu_change)(const struct rte_ifpx_mtu_change *event);
+    int (*link_change)(const struct rte_ifpx_link_change *event);
+    int (*addr_add)(const struct rte_ifpx_addr_change *event);
+    int (*addr_del)(const struct rte_ifpx_addr_change *event);
+    int (*addr6_add)(const struct rte_ifpx_addr6_change *event);
+    int (*addr6_del)(const struct rte_ifpx_addr6_change *event);
+    int (*route_add)(const struct rte_ifpx_route_change *event);
+    int (*route_del)(const struct rte_ifpx_route_change *event);
+    int (*route6_add)(const struct rte_ifpx_route6_change *event);
+    int (*route6_del)(const struct rte_ifpx_route6_change *event);
+    int (*neigh_add)(const struct rte_ifpx_neigh_change *event);
+    int (*neigh_del)(const struct rte_ifpx_neigh_change *event);
+    int (*neigh6_add)(const struct rte_ifpx_neigh6_change *event);
+    int (*neigh6_del)(const struct rte_ifpx_neigh6_change *event);
+    int (*cfg_done)(void);
+
+All of them should be self-explanatory apart from the last one, which
+is a library-specific callback called when the initial network
+configuration query has finished.
+
+So, for example, when the user issues the command::
+
+    ip link set dev dtap0 mtu 1600
+
+the library will call the `mtu_change()` callback with an MTU change
+event whose `port_id` equals `port0` (the id of the port bound to this
+proxy) and whose `mtu` equals 1600 (``dtap0`` is the default interface
+name for ``net_tap0``).
+The application can simply call `rte_eth_dev_set_mtu()` in this
+callback.
+In the same way, `rte_eth_dev_default_mac_addr_set()` can be used in
+`mac_change()`, and `rte_eth_dev_set_link_up/down()` inside the
+`link_change()` callback, which dispatches based on the `is_up` member
+of its `event` argument.
+
+Please note, however, that the context in which these callbacks are
+called is most probably different from the one in which packets are
+handled, and it is the application writer's responsibility to use
+proper synchronization mechanisms, if they are needed.
+
+The second notification mechanism relies on queueing event
+notifications to the configured notification rings.
+The application can add a queue via::
+
+    int rte_ifpx_queue_add(struct rte_ring *r);
+
+This type of notification is used when there is no callback registered
+for a given type of event, or when one is registered but returns 0.
+This way the application has the following choices:
+
+- if the data structure that needs to be updated due to a notification
+  is safe to be modified by a single writer (while being used by other
+  readers), it can simply do that inside the callback and return a
+  non-zero value to signal the end of the event handling
+
+- otherwise, when there are some common preparation steps that need to
+  be done only once, the application can register a callback that
+  performs these steps and returns 0; the library will then add an
+  event to each registered notification queue
+
+- if the data structures are replicated and there are no common steps,
+  the application can simply skip registering callbacks and configure
+  notification queues (e.g. one per lcore)
+
+Once we have the bindings in place and notifications configured, the
+only essential part that remains is to get the current network
+configuration and start listening for its changes.
+This is accomplished via a call to::
+
+    int rte_ifpx_listen(void);
+
+From that moment on you should see notifications coming to your
+application: the first ones resulting from querying the current system
+configuration, and subsequent ones from configuration changes.
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 1d0cd49cd..349829bcd 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -58,6 +58,7 @@ Programmer's Guide metrics_lib bpf_lib ipsec_lib + if_proxy_lib source_org dev_kit_build_system dev_kit_root_make_help From patchwork Mon May 4 08:53:14 2020 X-Patchwork-Submitter: "Andrzej Ostruszka [C]" X-Patchwork-Id: 69692 X-Patchwork-Delegate: thomas@monjalon.net
From: Andrzej Ostruszka To: , Thomas Monjalon Date: Mon, 4 May 2020 10:53:14 +0200 Message-ID: <20200504085315.7296-4-aostruszka@marvell.com> In-Reply-To: <20200504085315.7296-1-aostruszka@marvell.com> References: <20200306164104.15528-1-aostruszka@marvell.com> <20200504085315.7296-1-aostruszka@marvell.com> Subject: [dpdk-dev] [PATCH v3 3/4] if_proxy: add simple functionality test

This commit adds a simple test of the library notifications.
Signed-off-by: Andrzej Ostruszka --- MAINTAINERS | 1 + app/test/Makefile | 5 + app/test/meson.build | 4 + app/test/test_if_proxy.c | 707 +++++++++++++++++++++++++++++++++++++++ 4 files changed, 717 insertions(+) create mode 100644 app/test/test_if_proxy.c diff --git a/MAINTAINERS b/MAINTAINERS index 1216366ab..d42cfb566 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1475,6 +1475,7 @@ F: doc/guides/prog_guide/bpf_lib.rst IF Proxy - EXPERIMENTAL M: Andrzej Ostruszka F: lib/librte_if_proxy/ +F: app/test/test_if_proxy.c F: doc/guides/prog_guide/if_proxy_lib.rst Test Applications diff --git a/app/test/Makefile b/app/test/Makefile index 4582eca6c..a13595042 100644 --- a/app/test/Makefile +++ b/app/test/Makefile @@ -240,6 +240,11 @@ SRCS-$(CONFIG_RTE_LIBRTE_RCU) += test_rcu_qsbr.c test_rcu_qsbr_perf.c SRCS-$(CONFIG_RTE_LIBRTE_SECURITY) += test_security.c +ifeq ($(CONFIG_RTE_LIBRTE_IF_PROXY),y) +SRCS-y += test_if_proxy.c +LDLIBS += -lrte_if_proxy +endif + SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec_sad.c ifeq ($(CONFIG_RTE_LIBRTE_IPSEC),y) diff --git a/app/test/meson.build b/app/test/meson.build index fc60acbe7..678f7ef62 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -369,6 +369,10 @@ endif if dpdk_conf.has('RTE_LIBRTE_PDUMP') test_deps += 'pdump' endif +if dpdk_conf.has('RTE_LIBRTE_IF_PROXY') + test_deps += 'if_proxy' + test_sources += 'test_if_proxy.c' +endif if cc.has_argument('-Wno-format-truncation') cflags += '-Wno-format-truncation' diff --git a/app/test/test_if_proxy.c b/app/test/test_if_proxy.c new file mode 100644 index 000000000..4eca049c9 --- /dev/null +++ b/app/test/test_if_proxy.c @@ -0,0 +1,707 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Marvell International Ltd. 
+ */ + +#include "test.h" + +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +/* There are two types of event notifications - one using callbacks and one + * using event queues (rings). We'll test them both and this "bool" will govern + * the type of API to use. + */ +static int use_callbacks = 1; +static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; +static pthread_cond_t cond = PTHREAD_COND_INITIALIZER; + +static struct rte_ring *ev_queue; + +enum net_event_mask { + INITIALIZED = 1U << RTE_IFPX_CFG_DONE, + LINK_CHANGED = 1U << RTE_IFPX_LINK_CHANGE, + MAC_CHANGED = 1U << RTE_IFPX_MAC_CHANGE, + MTU_CHANGED = 1U << RTE_IFPX_MTU_CHANGE, + ADDR_ADD = 1U << RTE_IFPX_ADDR_ADD, + ADDR_DEL = 1U << RTE_IFPX_ADDR_DEL, + ROUTE_ADD = 1U << RTE_IFPX_ROUTE_ADD, + ROUTE_DEL = 1U << RTE_IFPX_ROUTE_DEL, + ADDR6_ADD = 1U << RTE_IFPX_ADDR6_ADD, + ADDR6_DEL = 1U << RTE_IFPX_ADDR6_DEL, + ROUTE6_ADD = 1U << RTE_IFPX_ROUTE6_ADD, + ROUTE6_DEL = 1U << RTE_IFPX_ROUTE6_DEL, + NEIGH_ADD = 1U << RTE_IFPX_NEIGH_ADD, + NEIGH_DEL = 1U << RTE_IFPX_NEIGH_DEL, + NEIGH6_ADD = 1U << RTE_IFPX_NEIGH6_ADD, + NEIGH6_DEL = 1U << RTE_IFPX_NEIGH6_DEL, +}; + +static unsigned int state; + +static struct { + struct rte_ether_addr mac_addr; + uint16_t port_id, mtu; + struct in_addr ipv4, route4; + struct in6_addr ipv6, route6; + uint16_t depth4, depth6; + int is_up; +} net_cfg; + +static +int unlock_notify(unsigned int op) +{ + /* the mutex is expected to be locked on entry */ + RTE_VERIFY(pthread_mutex_trylock(&mutex) == EBUSY); + state |= op; + + pthread_mutex_unlock(&mutex); + return pthread_cond_signal(&cond); +} + +static +void handle_event(struct rte_ifpx_event *ev); + +static +int wait_for(unsigned int op_mask, unsigned int sec) +{ + int ec; + + if (use_callbacks) { + struct timespec time; + + ec = pthread_mutex_trylock(&mutex); + /* the mutex is expected to be locked on entry */ + RTE_VERIFY(ec == EBUSY); + + ec = 0; + 
clock_gettime(CLOCK_REALTIME, &time); + time.tv_sec += sec; + + while ((state & op_mask) != op_mask && ec == 0) + ec = pthread_cond_timedwait(&cond, &mutex, &time); + } else { + uint64_t deadline; + struct rte_ifpx_event *ev; + + ec = 0; + deadline = rte_get_timer_cycles() + sec * rte_get_timer_hz(); + + while ((state & op_mask) != op_mask) { + if (rte_get_timer_cycles() >= deadline) { + ec = ETIMEDOUT; + break; + } + if (rte_ring_dequeue(ev_queue, (void **)&ev) == 0) + handle_event(ev); + } + } + + return ec; +} + +static +int expect(unsigned int op_mask, const char *fmt, ...) +#if __GNUC__ + __attribute__((format(printf, 2, 3))); +#endif + +static +int expect(unsigned int op_mask, const char *fmt, ...) +{ + char cmd[128]; + va_list args; + int ret; + + state &= ~op_mask; + va_start(args, fmt); + vsnprintf(cmd, sizeof(cmd), fmt, args); + va_end(args); + ret = system(cmd); + if (ret == 0) + /* IPv6 address notifications seem to need that long delay. */ + return wait_for(op_mask, 2); + return ret; +} + +static +int mac_change(const struct rte_ifpx_mac_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (memcmp(ev->mac.addr_bytes, net_cfg.mac_addr.addr_bytes, + RTE_ETHER_ADDR_LEN) == 0) { + unlock_notify(MAC_CHANGED); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int mtu_change(const struct rte_ifpx_mtu_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (ev->mtu == net_cfg.mtu) { + unlock_notify(MTU_CHANGED); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int link_change(const struct rte_ifpx_link_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (ev->is_up == net_cfg.is_up) { + /* Special case for testing of callbacks modification from + * inside of callback: we catch putting link down (the last + * operation in test) and remove callbacks registered. 
+ */ + if (!ev->is_up) + rte_ifpx_callbacks_unregister_all(); + unlock_notify(LINK_CHANGED); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int addr_add(const struct rte_ifpx_addr_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (ev->ip == net_cfg.ipv4.s_addr) { + unlock_notify(ADDR_ADD); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int addr_del(const struct rte_ifpx_addr_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (ev->ip == net_cfg.ipv4.s_addr) { + unlock_notify(ADDR_DEL); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int addr6_add(const struct rte_ifpx_addr6_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (memcmp(ev->ip, net_cfg.ipv6.s6_addr, 16) == 0) { + unlock_notify(ADDR6_ADD); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int addr6_del(const struct rte_ifpx_addr6_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (memcmp(ev->ip, net_cfg.ipv6.s6_addr, 16) == 0) { + unlock_notify(ADDR6_DEL); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int route_add(const struct rte_ifpx_route_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (net_cfg.depth4 == ev->depth && net_cfg.route4.s_addr == ev->ip) { + unlock_notify(ROUTE_ADD); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int route_del(const struct rte_ifpx_route_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (net_cfg.depth4 == ev->depth && net_cfg.route4.s_addr == ev->ip) { + unlock_notify(ROUTE_DEL); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int route6_add(const struct rte_ifpx_route6_change *ev) +{ + pthread_mutex_lock(&mutex); + 
RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (net_cfg.depth6 == ev->depth && + /* don't check for trailing zeros */ + memcmp(ev->ip, net_cfg.route6.s6_addr, ev->depth/8) == 0) { + unlock_notify(ROUTE6_ADD); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int route6_del(const struct rte_ifpx_route6_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (net_cfg.depth6 == ev->depth && + /* don't check for trailing zeros */ + memcmp(ev->ip, net_cfg.route6.s6_addr, ev->depth/8) == 0) { + unlock_notify(ROUTE6_DEL); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int neigh_add(const struct rte_ifpx_neigh_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (net_cfg.ipv4.s_addr == ev->ip && + memcmp(ev->mac.addr_bytes, net_cfg.mac_addr.addr_bytes, + RTE_ETHER_ADDR_LEN) == 0) { + unlock_notify(NEIGH_ADD); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int neigh_del(const struct rte_ifpx_neigh_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (net_cfg.ipv4.s_addr == ev->ip) { + unlock_notify(NEIGH_DEL); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int neigh6_add(const struct rte_ifpx_neigh6_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (memcmp(ev->ip, net_cfg.ipv6.s6_addr, 16) == 0 && + memcmp(ev->mac.addr_bytes, net_cfg.mac_addr.addr_bytes, + RTE_ETHER_ADDR_LEN) == 0) { + unlock_notify(NEIGH6_ADD); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int neigh6_del(const struct rte_ifpx_neigh6_change *ev) +{ + pthread_mutex_lock(&mutex); + RTE_VERIFY(ev->port_id == net_cfg.port_id); + if (memcmp(ev->ip, net_cfg.ipv6.s6_addr, 16) == 0) { + unlock_notify(NEIGH6_DEL); + return 1; + } + pthread_mutex_unlock(&mutex); + return 0; +} + +static +int cfg_done(void) +{ + 
pthread_mutex_lock(&mutex); + unlock_notify(INITIALIZED); + return 1; +} + +static +void handle_event(struct rte_ifpx_event *ev) +{ + if (ev->type != RTE_IFPX_CFG_DONE) + RTE_VERIFY(ev->data.port_id == net_cfg.port_id); + + /* If params do not match what we expect just free the event. */ + switch (ev->type) { + case RTE_IFPX_MAC_CHANGE: + if (memcmp(ev->mac_change.mac.addr_bytes, + net_cfg.mac_addr.addr_bytes, + RTE_ETHER_ADDR_LEN) != 0) + goto exit; + break; + case RTE_IFPX_MTU_CHANGE: + if (ev->mtu_change.mtu != net_cfg.mtu) + goto exit; + break; + case RTE_IFPX_LINK_CHANGE: + if (ev->link_change.is_up != net_cfg.is_up) + goto exit; + break; + case RTE_IFPX_ADDR_ADD: + if (ev->addr_change.ip != net_cfg.ipv4.s_addr) + goto exit; + break; + case RTE_IFPX_ADDR_DEL: + if (ev->addr_change.ip != net_cfg.ipv4.s_addr) + goto exit; + break; + case RTE_IFPX_ADDR6_ADD: + if (memcmp(ev->addr6_change.ip, net_cfg.ipv6.s6_addr, + 16) != 0) + goto exit; + break; + case RTE_IFPX_ADDR6_DEL: + if (memcmp(ev->addr6_change.ip, net_cfg.ipv6.s6_addr, + 16) != 0) + goto exit; + break; + case RTE_IFPX_ROUTE_ADD: + if (net_cfg.depth4 != ev->route_change.depth || + net_cfg.route4.s_addr != ev->route_change.ip) + goto exit; + break; + case RTE_IFPX_ROUTE_DEL: + if (net_cfg.depth4 != ev->route_change.depth || + net_cfg.route4.s_addr != ev->route_change.ip) + goto exit; + break; + case RTE_IFPX_ROUTE6_ADD: + if (net_cfg.depth6 != ev->route6_change.depth || + /* don't check for trailing zeros */ + memcmp(ev->route6_change.ip, net_cfg.route6.s6_addr, + ev->route6_change.depth/8) != 0) + goto exit; + break; + case RTE_IFPX_ROUTE6_DEL: + if (net_cfg.depth6 != ev->route6_change.depth || + /* don't check for trailing zeros */ + memcmp(ev->route6_change.ip, net_cfg.route6.s6_addr, + ev->route6_change.depth/8) != 0) + goto exit; + break; + case RTE_IFPX_NEIGH_ADD: + if (net_cfg.ipv4.s_addr != ev->neigh_change.ip || + memcmp(ev->neigh_change.mac.addr_bytes, + net_cfg.mac_addr.addr_bytes, + 
RTE_ETHER_ADDR_LEN) != 0) + goto exit; + break; + case RTE_IFPX_NEIGH_DEL: + if (net_cfg.ipv4.s_addr != ev->neigh_change.ip) + goto exit; + break; + case RTE_IFPX_NEIGH6_ADD: + if (memcmp(ev->neigh6_change.ip, + net_cfg.ipv6.s6_addr, 16) != 0 || + memcmp(ev->neigh6_change.mac.addr_bytes, + net_cfg.mac_addr.addr_bytes, + RTE_ETHER_ADDR_LEN) != 0) + goto exit; + break; + case RTE_IFPX_NEIGH6_DEL: + if (memcmp(ev->neigh6_change.ip, net_cfg.ipv6.s6_addr, 16) != 0) + goto exit; + break; + case RTE_IFPX_CFG_DONE: + break; + default: + RTE_VERIFY(0 && "Unhandled event type"); + } + + state |= 1U << ev->type; +exit: + free(ev); +} + +static +struct rte_ifpx_callback cbs[] = { + { RTE_IFPX_MAC_CHANGE, {.mac_change = mac_change} }, + { RTE_IFPX_MTU_CHANGE, {.mtu_change = mtu_change} }, + { RTE_IFPX_LINK_CHANGE, {.link_change = link_change} }, + { RTE_IFPX_ADDR_ADD, {.addr_add = addr_add} }, + { RTE_IFPX_ADDR_DEL, {.addr_del = addr_del} }, + { RTE_IFPX_ADDR6_ADD, {.addr6_add = addr6_add} }, + { RTE_IFPX_ADDR6_DEL, {.addr6_del = addr6_del} }, + { RTE_IFPX_ROUTE_ADD, {.route_add = route_add} }, + { RTE_IFPX_ROUTE_DEL, {.route_del = route_del} }, + { RTE_IFPX_ROUTE6_ADD, {.route6_add = route6_add} }, + { RTE_IFPX_ROUTE6_DEL, {.route6_del = route6_del} }, + { RTE_IFPX_NEIGH_ADD, {.neigh_add = neigh_add} }, + { RTE_IFPX_NEIGH_DEL, {.neigh_del = neigh_del} }, + { RTE_IFPX_NEIGH6_ADD, {.neigh6_add = neigh6_add} }, + { RTE_IFPX_NEIGH6_DEL, {.neigh6_del = neigh6_del} }, + /* lib specific callback */ + { RTE_IFPX_CFG_DONE, {.cfg_done = cfg_done} }, +}; + +static +int test_notifications(const struct rte_ifpx_info *pinfo) +{ + char mac_buf[RTE_ETHER_ADDR_FMT_SIZE]; + int ec; + + /* Test link up notification. */ + net_cfg.is_up = 1; + ec = expect(LINK_CHANGED, "ip link set dev %s up", pinfo->if_name); + if (ec != 0) { + printf("Failed to notify about link going up\n"); + return ec; + } + + /* Test for MAC changes notification. 
*/ + rte_eth_random_addr(net_cfg.mac_addr.addr_bytes); + rte_ether_format_addr(mac_buf, sizeof(mac_buf), &net_cfg.mac_addr); + ec = expect(MAC_CHANGED, "ip link set dev %s address %s", + pinfo->if_name, mac_buf); + if (ec != 0) { + printf("Missing/wrong notification about mac change\n"); + return ec; + } + + /* Test for MTU changes notification. */ + net_cfg.mtu = pinfo->mtu + 100; + ec = expect(MTU_CHANGED, "ip link set dev %s mtu %d", + pinfo->if_name, net_cfg.mtu); + if (ec != 0) { + printf("Missing/wrong notification about mtu change\n"); + return ec; + } + + /* Test for adding of IPv4 address - using address from TEST-2 pool. + * This test is specific to linux netlink behaviour - after adding + * address we get both notification about address being added and new + * route. So I check both. + */ + net_cfg.ipv4.s_addr = RTE_IPV4(198, 51, 100, 14); + net_cfg.route4.s_addr = net_cfg.ipv4.s_addr; + net_cfg.depth4 = 32; + ec = expect(ADDR_ADD | ROUTE_ADD, "ip addr add 198.51.100.14 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv4 address add\n"); + return ec; + } + + /* Test for IPv4 address removal. See comment above for 'addr add'. */ + ec = expect(ADDR_DEL | ROUTE_DEL, "ip addr del 198.51.100.14/32 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv4 address del\n"); + return ec; + } + + /* Test for adding IPv4 route. */ + net_cfg.route4.s_addr = RTE_IPV4(198, 51, 100, 0); + net_cfg.depth4 = 24; + ec = expect(ROUTE_ADD, "ip route add 198.51.100.0/24 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv4 route add\n"); + return ec; + } + + /* Test for IPv4 route removal. */ + ec = expect(ROUTE_DEL, "ip route del 198.51.100.0/24 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv4 route del\n"); + return ec; + } + + /* Test for neighbour addresses notifications. 
*/ + rte_eth_random_addr(net_cfg.mac_addr.addr_bytes); + rte_ether_format_addr(mac_buf, sizeof(mac_buf), &net_cfg.mac_addr); + + ec = expect(NEIGH_ADD, + "ip neigh add 198.51.100.14 dev %s lladdr %s nud noarp", + pinfo->if_name, mac_buf); + if (ec != 0) { + printf("Missing/wrong notifications about IPv4 neighbour add\n"); + return ec; + } + + ec = expect(NEIGH_DEL, "ip neigh del 198.51.100.14 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv4 neighbour del\n"); + return ec; + } + + /* Now the same for IPv6 - with address from "documentation pool". */ + inet_pton(AF_INET6, "2001:db8::dead:beef", net_cfg.ipv6.s6_addr); + /* This is specific to linux netlink behaviour - after adding address + * we get both notification about address being added and new route. + * So I wait for both. + */ + memcpy(net_cfg.route6.s6_addr, net_cfg.ipv6.s6_addr, 16); + net_cfg.depth6 = 128; + ec = expect(ADDR6_ADD | ROUTE6_ADD, + "ip addr add 2001:db8::dead:beef dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv6 address add\n"); + return ec; + } + + /* See comment above for 'addr6 add'. 
*/ + ec = expect(ADDR6_DEL | ROUTE6_DEL, + "ip addr del 2001:db8::dead:beef/128 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv6 address del\n"); + return ec; + } + + net_cfg.depth6 = 96; + ec = expect(ROUTE6_ADD, "ip route add 2001:db8::dead:0/96 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv6 route add\n"); + return ec; + } + + ec = expect(ROUTE6_DEL, "ip route del 2001:db8::dead:0/96 dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv6 route del\n"); + return ec; + } + + ec = expect(NEIGH6_ADD, + "ip neigh add 2001:db8::dead:beef dev %s lladdr %s nud noarp", + pinfo->if_name, mac_buf); + if (ec != 0) { + printf("Missing/wrong notifications about IPv6 neighbour add\n"); + return ec; + } + + ec = expect(NEIGH6_DEL, "ip neigh del 2001:db8::dead:beef dev %s", + pinfo->if_name); + if (ec != 0) { + printf("Missing/wrong notifications about IPv6 neighbour del\n"); + return ec; + } + + /* Finally put link down and test for notification. */ + net_cfg.is_up = 0; + ec = expect(LINK_CHANGED, "ip link set dev %s down", pinfo->if_name); + if (ec != 0) { + printf("Failed to notify about link going down\n"); + return ec; + } + + return 0; +} + +static +int test_if_proxy(void) +{ + int ec; + const struct rte_ifpx_info *pinfo; + uint16_t proxy_id; + + state = 0; + memset(&net_cfg, 0, sizeof(net_cfg)); + + if (rte_eth_dev_count_avail() == 0) { + printf("Run this test with at least one port configured\n"); + return 1; + } + /* Use the first port available. */ + RTE_ETH_FOREACH_DEV(net_cfg.port_id) + break; + proxy_id = rte_ifpx_proxy_create(RTE_IFPX_DEFAULT); + RTE_VERIFY(proxy_id != RTE_MAX_ETHPORTS); + rte_ifpx_port_bind(net_cfg.port_id, proxy_id); + rte_ifpx_callbacks_register(RTE_DIM(cbs), cbs); + rte_ifpx_listen(); + + /* Let's start with the callback-based API.
*/ + use_callbacks = 1; + pthread_mutex_lock(&mutex); + ec = wait_for(INITIALIZED, 2); + if (ec != 0) { + printf("Failed to obtain network configuration\n"); + goto exit; + } + pinfo = rte_ifpx_info_get(net_cfg.port_id); + RTE_VERIFY(pinfo); + + /* Make sure the link is down. */ + net_cfg.is_up = 0; + ec = expect(LINK_CHANGED, "ip link set dev %s down", pinfo->if_name); + RTE_VERIFY(ec == ETIMEDOUT || ec == 0); + + ec = test_notifications(pinfo); + if (ec != 0) { + printf("Failed test with callback based API\n"); + goto exit; + } + /* Switch to event queue based API and repeat tests. */ + use_callbacks = 0; + ev_queue = rte_ring_create("IFPX-events", 16, SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_SC_DEQ); + ec = rte_ifpx_queue_add(ev_queue); + if (ec != 0) { + printf("Failed to add a notification queue\n"); + goto exit; + } + ec = test_notifications(pinfo); + if (ec != 0) { + printf("Failed test with event queue based API\n"); + goto exit; + } + +exit: + pthread_mutex_unlock(&mutex); + /* Proxy ports are not owned by the lib. Internal references to them + * are cleared on close, but the ports are not destroyed so we need to + * do that explicitly. + */ + rte_ifpx_proxy_destroy(proxy_id); + rte_ifpx_close(); + /* Queue is removed from the lib by rte_ifpx_close() - here we just + * free it. 
+ */ + rte_ring_free(ev_queue); + ev_queue = NULL; + + return ec; +} + +REGISTER_TEST_COMMAND(if_proxy_autotest, test_if_proxy) From patchwork Mon May 4 08:53:15 2020 X-Patchwork-Submitter: "Andrzej Ostruszka [C]" X-Patchwork-Id: 69693 X-Patchwork-Delegate: thomas@monjalon.net
(10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Mon, 4 May 2020 01:53:26 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Mon, 4 May 2020 01:53:25 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Mon, 4 May 2020 01:53:25 -0700 Received: from amok.marvell.com (unknown [10.95.131.97]) by maili.marvell.com (Postfix) with ESMTP id CF3B13F7044; Mon, 4 May 2020 01:53:23 -0700 (PDT) From: Andrzej Ostruszka To: , Thomas Monjalon Date: Mon, 4 May 2020 10:53:15 +0200 Message-ID: <20200504085315.7296-5-aostruszka@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200504085315.7296-1-aostruszka@marvell.com> References: <20200306164104.15528-1-aostruszka@marvell.com> <20200504085315.7296-1-aostruszka@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138, 18.0.676 definitions=2020-05-04_05:2020-05-01, 2020-05-04 signatures=0 Subject: [dpdk-dev] [PATCH v3 4/4] if_proxy: add example application X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add an example application showing possible library usage. This is a simplified version of l3fwd where: - many performance improvements has been removed in order to simplify logic and put focus on the proxy library usage, - the configuration of forwarding has to be done by the user (using typical system tools on proxy ports) - these changes are passed to the application via library notifications. It is meant to show how you can update some data from callbacks (routing - see note below) and how those that are replicated (e.g. 
kept per lcore) can be updated via event queueing (here the neighbour info). Note: This example assumes that LPM tables can be updated by a single writer while being used by others. To the best of the author's knowledge this is the case (based on preliminary code inspection), but DPDK does not make such a promise. Obviously, upon a change there will be a transient period (when some IPs are still directed to the old destination), but that is expected. Note also that in some cases you might need to tweak your system configuration to see the effects. For example, you send a gratuitous ARP to a DPDK port and expect the application's neighbour tables to be updated, but that does not happen. The packet will be sent to the kernel, but the kernel might drop it; please check /proc/sys/net/ipv4/conf/dtap0/arp_accept and the related configuration options ('dtap0' here is just the name of your proxy port). Signed-off-by: Andrzej Ostruszka Depends-on: series-8862 --- MAINTAINERS | 1 + examples/Makefile | 1 + examples/l3fwd-ifpx/Makefile | 60 ++ examples/l3fwd-ifpx/l3fwd.c | 1128 +++++++++++++++++++++++++++++++ examples/l3fwd-ifpx/l3fwd.h | 98 +++ examples/l3fwd-ifpx/main.c | 740 ++++++++++++++++++++ examples/l3fwd-ifpx/meson.build | 11 + examples/meson.build | 2 +- 8 files changed, 2040 insertions(+), 1 deletion(-) create mode 100644 examples/l3fwd-ifpx/Makefile create mode 100644 examples/l3fwd-ifpx/l3fwd.c create mode 100644 examples/l3fwd-ifpx/l3fwd.h create mode 100644 examples/l3fwd-ifpx/main.c create mode 100644 examples/l3fwd-ifpx/meson.build diff --git a/MAINTAINERS b/MAINTAINERS index d42cfb566..96f1b4075 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1475,6 +1475,7 @@ F: doc/guides/prog_guide/bpf_lib.rst IF Proxy - EXPERIMENTAL M: Andrzej Ostruszka F: lib/librte_if_proxy/ +F: examples/l3fwd-ifpx/ F: app/test/test_if_proxy.c F: doc/guides/prog_guide/if_proxy_lib.rst diff --git a/examples/Makefile b/examples/Makefile index feff79784..a8cb02a6c 100644 --- a/examples/Makefile +++ b/examples/Makefile @@ -81,6
+81,7 @@ else $(info vm_power_manager requires libvirt >= 0.9.3) endif endif +DIRS-$(CONFIG_RTE_LIBRTE_IF_PROXY) += l3fwd-ifpx DIRS-y += eventdev_pipeline diff --git a/examples/l3fwd-ifpx/Makefile b/examples/l3fwd-ifpx/Makefile new file mode 100644 index 000000000..68eefeb75 --- /dev/null +++ b/examples/l3fwd-ifpx/Makefile @@ -0,0 +1,60 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2020 Marvell International Ltd. + +# binary name +APP = l3fwd + +# all source are stored in SRCS-y +SRCS-y := main.c l3fwd.c + +# Build using pkg-config variables if possible +ifeq ($(shell pkg-config --exists libdpdk && echo 0),0) + +all: shared +.PHONY: shared static +shared: build/$(APP)-shared + ln -sf $(APP)-shared build/$(APP) +static: build/$(APP)-static + ln -sf $(APP)-static build/$(APP) + +PKGCONF ?= pkg-config + +PC_FILE := $(shell $(PKGCONF) --path libdpdk 2>/dev/null) +CFLAGS += -DALLOW_EXPERIMENTAL_API -O3 $(shell $(PKGCONF) --cflags libdpdk) +LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk) +LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk) + +build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build + $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED) + +build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build + $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC) + +build: + @mkdir -p $@ + +.PHONY: clean +clean: + rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared + test -d build && rmdir -p build || true + +else # Build using legacy build system + +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, detect a build directory, by looking for a path with a .config +RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config))))) + +include $(RTE_SDK)/mk/rte.vars.mk + +CFLAGS += -DALLOW_EXPERIMENTAL_API + +CFLAGS += -I$(SRCDIR) +CFLAGS += -O3 $(USER_FLAGS) +CFLAGS += $(WERROR_FLAGS) +LDLIBS += -lrte_if_proxy -lrte_ethdev -lrte_eal + 
+include $(RTE_SDK)/mk/rte.extapp.mk +endif diff --git a/examples/l3fwd-ifpx/l3fwd.c b/examples/l3fwd-ifpx/l3fwd.c new file mode 100644 index 000000000..8811aec01 --- /dev/null +++ b/examples/l3fwd-ifpx/l3fwd.c @@ -0,0 +1,1128 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Marvell International Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +#ifndef USE_HASH_CRC +#include +#else +#include +#endif + +#include +#include +#include +#include +#include + +#include "l3fwd.h" + +#define DO_RFC_1812_CHECKS + +#define IPV4_L3FWD_LPM_MAX_RULES 1024 +#define IPV4_L3FWD_LPM_NUMBER_TBL8S (1 << 8) +#define IPV6_L3FWD_LPM_MAX_RULES 1024 +#define IPV6_L3FWD_LPM_NUMBER_TBL8S (1 << 16) + +static volatile bool ifpx_ready; + +/* ethernet addresses of ports */ +static +union lladdr_t port_mac[RTE_MAX_ETHPORTS]; + +static struct rte_lpm *ipv4_routes; +static struct rte_lpm6 *ipv6_routes; + +static +struct ipv4_gateway { + uint16_t port; + union lladdr_t lladdr; + uint32_t ip; +} ipv4_gateways[128]; + +static +struct ipv6_gateway { + uint16_t port; + union lladdr_t lladdr; + uint8_t ip[16]; +} ipv6_gateways[128]; + +/* The lowest 2 bits of next hop (which is 24/21 bit for IPv4/6) are reserved to + * encode: + * 00 -> host route: higher bits of next hop are port id and dst MAC should be + * based on dst IP + * 01 -> gateway route: higher bits of next hop are index into gateway array and + * use port and MAC cached there (if no MAC cached yet then search for it + * based on gateway IP) + * 10 -> proxy entry: packet directed to us, just take higher bits as port id of + * proxy and send packet there (without any modification) + * The port id (16 bits) will always fit however this will not work if you + * need more than 2^20 gateways. 
+ */ +enum route_type { + HOST_ROUTE = 0x00, + GW_ROUTE = 0x01, + PROXY_ADDR = 0x02, +}; + +RTE_STD_C11 +_Static_assert(RTE_DIM(ipv4_gateways) <= (1 << 22) && + RTE_DIM(ipv6_gateways) <= (1 << 19), + "Gateway array index has to fit within next_hop with 2 bits reserved"); + +static +uint32_t find_add_gateway(uint16_t port, uint32_t ip) +{ + uint32_t i, idx = -1U; + + for (i = 0; i < RTE_DIM(ipv4_gateways); ++i) { + /* Remember first free slot in case GW is not present. */ + if (idx == -1U && ipv4_gateways[i].ip == 0) + idx = i; + else if (ipv4_gateways[i].ip == ip) + /* For now assume that a given GW will always be at the + * same port, so no checking for that + */ + return i; + } + if (idx != -1U) { + ipv4_gateways[idx].port = port; + ipv4_gateways[idx].ip = ip; + /* Since ARP tables are kept per lcore the MAC will be updated + * during the first lookup. + */ + } + return idx; +} + +static +void clear_gateway(uint32_t ip) +{ + uint32_t i; + + for (i = 0; i < RTE_DIM(ipv4_gateways); ++i) { + if (ipv4_gateways[i].ip == ip) { + ipv4_gateways[i].ip = 0; + ipv4_gateways[i].lladdr.val = 0; + ipv4_gateways[i].port = RTE_MAX_ETHPORTS; + break; + } + } +} + +static +uint32_t find_add_gateway6(uint16_t port, const uint8_t *ip) +{ + uint32_t i, idx = -1U; + + for (i = 0; i < RTE_DIM(ipv6_gateways); ++i) { + /* Remember first free slot in case GW is not present. */ + if (idx == -1U && ipv6_gateways[i].ip[0] == 0) + idx = i; + else if (memcmp(ipv6_gateways[i].ip, ip, 16) == 0) + /* For now assume that a given GW will always be at the + * same port, so no checking for that + */ + return i; + } + if (idx != -1U) { + ipv6_gateways[idx].port = port; + memcpy(ipv6_gateways[idx].ip, ip, 16); + /* Since ARP tables are kept per lcore the MAC will be updated + * during the first lookup.
+ */ + return idx; +} + +static +void clear_gateway6(const uint8_t *ip) +{ + uint32_t i; + + for (i = 0; i < RTE_DIM(ipv6_gateways); ++i) { + if (memcmp(ipv6_gateways[i].ip, ip, 16) == 0) { + memset(&ipv6_gateways[i].ip, 0, 16); + ipv6_gateways[i].lladdr.val = 0; + ipv6_gateways[i].port = RTE_MAX_ETHPORTS; + break; + } + } +} + +/* Assumptions: + * - Link related changes (MAC/MTU/...) need to be executed once, and it's OK + * to run them from the callback - if this is not the case (e.g. -EBUSY for + * MTU change), then event notifications need to be used and more sophisticated + * coordination with the lcore loops and stopping/starting of the ports: for + * example, lcores not receiving on this port just mark it as inactive and stop + * transmitting to it, while the one with RX stops the port, sets the MAC, + * starts it again and notifies the other lcores that it is back. + * - LPM is safe to be modified by one writer, and read by many without any + * locks (it looks to me like this is the case), however upon a routing change + * there might be a transient period during which packets are not directed + * according to the new rule. + * - Hash is unsafe to be used that way (and I don't want to turn on the + * relevant flags just to exercise queued notifications) so every lcore keeps + * its own copy of the relevant data. + * Therefore there are callbacks defined for the routing info/address changes + * and the remaining ones are handled via events on a per lcore basis.
+ */ +static +int mac_change(const struct rte_ifpx_mac_change *ev) +{ + int i; + struct rte_ether_addr mac_addr; + char buf[RTE_ETHER_ADDR_FMT_SIZE]; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + rte_ether_format_addr(buf, sizeof(buf), &ev->mac); + RTE_LOG(DEBUG, L3FWD, "MAC change for port %d: %s\n", + ev->port_id, buf); + } + /* NOTE - use copy because RTE functions don't take const args */ + rte_ether_addr_copy(&ev->mac, &mac_addr); + i = rte_eth_dev_default_mac_addr_set(ev->port_id, &mac_addr); + if (i == -EOPNOTSUPP) + i = rte_eth_dev_mac_addr_add(ev->port_id, &mac_addr, 0); + if (i < 0) + RTE_LOG(WARNING, L3FWD, "Failed to set MAC address\n"); + else { + port_mac[ev->port_id].mac.addr = ev->mac; + port_mac[ev->port_id].mac.valid = 1; + } + return 1; +} + +static +int link_change(const struct rte_ifpx_link_change *ev) +{ + uint16_t proxy_id = rte_ifpx_proxy_get(ev->port_id); + uint32_t mask; + + /* Mark the proxy too since we get only port notifications. */ + mask = 1U << ev->port_id | 1U << proxy_id; + + RTE_LOG(DEBUG, L3FWD, "Link change for port %d: %d\n", + ev->port_id, ev->is_up); + if (ev->is_up) { + rte_eth_dev_set_link_up(ev->port_id); + active_port_mask |= mask; + } else { + rte_eth_dev_set_link_down(ev->port_id); + active_port_mask &= ~mask; + } + active_port_mask &= enabled_port_mask; + return 1; +} + +static +int addr_add(const struct rte_ifpx_addr_change *ev) +{ + char buf[INET_ADDRSTRLEN]; + uint32_t ip; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + ip = rte_cpu_to_be_32(ev->ip); + inet_ntop(AF_INET, &ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv4 address for port %d: %s\n", + ev->port_id, buf); + } + rte_lpm_add(ipv4_routes, ev->ip, 32, + ev->port_id << 2 | PROXY_ADDR); + return 1; +} + +static +int route_add(const struct rte_ifpx_route_change *ev) +{ + char buf[INET_ADDRSTRLEN]; + uint32_t nh, ip; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + ip = 
rte_cpu_to_be_32(ev->ip); + inet_ntop(AF_INET, &ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv4 route for port %d: %s/%d\n", + ev->port_id, buf, ev->depth); + } + + /* On Linux upon changing of the IP we get notification for both addr + * and route, so just check if we already have addr entry and if so + * then ignore this notification. + */ + if (ev->depth == 32 && + rte_lpm_lookup(ipv4_routes, ev->ip, &nh) == 0 && nh & PROXY_ADDR) + return 1; + + if (ev->gateway) { + nh = find_add_gateway(ev->port_id, ev->gateway); + if (nh != -1U) + rte_lpm_add(ipv4_routes, ev->ip, ev->depth, + nh << 2 | GW_ROUTE); + else + RTE_LOG(WARNING, L3FWD, "No free slot in GW array\n"); + } else + rte_lpm_add(ipv4_routes, ev->ip, ev->depth, + ev->port_id << 2 | HOST_ROUTE); + return 1; +} + +static +int addr_del(const struct rte_ifpx_addr_change *ev) +{ + char buf[INET_ADDRSTRLEN]; + uint32_t ip; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + ip = rte_cpu_to_be_32(ev->ip); + inet_ntop(AF_INET, &ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv4 address removed from port %d: %s\n", + ev->port_id, buf); + } + rte_lpm_delete(ipv4_routes, ev->ip, 32); + return 1; +} + +static +int route_del(const struct rte_ifpx_route_change *ev) +{ + char buf[INET_ADDRSTRLEN]; + uint32_t ip; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + ip = rte_cpu_to_be_32(ev->ip); + inet_ntop(AF_INET, &ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv4 route removed from port %d: %s/%d\n", + ev->port_id, buf, ev->depth); + } + if (ev->gateway) + clear_gateway(ev->gateway); + rte_lpm_delete(ipv4_routes, ev->ip, ev->depth); + return 1; +} + +static +int addr6_add(const struct rte_ifpx_addr6_change *ev) +{ + char buf[INET6_ADDRSTRLEN]; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + inet_ntop(AF_INET6, ev->ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv6 address for port %d: %s\n", + ev->port_id, buf); + } + rte_lpm6_add(ipv6_routes, 
ev->ip, 128, + ev->port_id << 2 | PROXY_ADDR); + return 1; +} + +static +int route6_add(const struct rte_ifpx_route6_change *ev) +{ + char buf[INET6_ADDRSTRLEN]; + + /* See comment in route_add(). */ + uint32_t nh; + if (ev->depth == 128 && + rte_lpm6_lookup(ipv6_routes, ev->ip, &nh) == 0 && nh & PROXY_ADDR) + return 1; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + inet_ntop(AF_INET6, ev->ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv6 route for port %d: %s/%d\n", + ev->port_id, buf, ev->depth); + } + /* no valid IPv6 address starts with 0x00 */ + if (ev->gateway[0]) { + nh = find_add_gateway6(ev->port_id, ev->gateway); + if (nh != -1U) + rte_lpm6_add(ipv6_routes, ev->ip, ev->depth, + nh << 2 | GW_ROUTE); + else + RTE_LOG(WARNING, L3FWD, "No free slot in GW6 array\n"); + } else + rte_lpm6_add(ipv6_routes, ev->ip, ev->depth, + ev->port_id << 2 | HOST_ROUTE); + return 1; +} + +static +int addr6_del(const struct rte_ifpx_addr6_change *ev) +{ + char buf[INET6_ADDRSTRLEN]; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + inet_ntop(AF_INET6, ev->ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv6 address removed from port %d: %s\n", + ev->port_id, buf); + } + rte_lpm6_delete(ipv6_routes, ev->ip, 128); + return 1; +} + +static +int route6_del(const struct rte_ifpx_route6_change *ev) +{ + char buf[INET6_ADDRSTRLEN]; + + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + inet_ntop(AF_INET6, ev->ip, buf, sizeof(buf)); + RTE_LOG(DEBUG, L3FWD, "IPv6 route removed from port %d: %s/%d\n", + ev->port_id, buf, ev->depth); + } + if (ev->gateway[0]) + clear_gateway6(ev->gateway); + rte_lpm6_delete(ipv6_routes, ev->ip, ev->depth); + return 1; +} + +static +int cfg_done(void) +{ + uint16_t port_id, px; + const struct rte_ifpx_info *pinfo; + + RTE_LOG(DEBUG, L3FWD, "Proxy config finished\n"); + + /* Copy MAC addresses of the proxies - to be used as src MAC during + * forwarding.
+ */ + RTE_ETH_FOREACH_DEV(port_id) { + px = rte_ifpx_proxy_get(port_id); + if (px != RTE_MAX_ETHPORTS && px != port_id) { + pinfo = rte_ifpx_info_get(px); + rte_ether_addr_copy(&pinfo->mac, + &port_mac[port_id].mac.addr); + port_mac[port_id].mac.valid = 1; + } + } + + ifpx_ready = 1; + return 1; +} + +static +struct rte_ifpx_callback ifpx_callbacks[] = { + { RTE_IFPX_MAC_CHANGE, {.mac_change = mac_change} }, + { RTE_IFPX_LINK_CHANGE, {.link_change = link_change} }, + { RTE_IFPX_ADDR_ADD, {.addr_add = addr_add} }, + { RTE_IFPX_ADDR_DEL, {.addr_del = addr_del} }, + { RTE_IFPX_ADDR6_ADD, {.addr6_add = addr6_add} }, + { RTE_IFPX_ADDR6_DEL, {.addr6_del = addr6_del} }, + { RTE_IFPX_ROUTE_ADD, {.route_add = route_add} }, + { RTE_IFPX_ROUTE_DEL, {.route_del = route_del} }, + { RTE_IFPX_ROUTE6_ADD, {.route6_add = route6_add} }, + { RTE_IFPX_ROUTE6_DEL, {.route6_del = route6_del} }, + { RTE_IFPX_CFG_DONE, {.cfg_done = cfg_done} }, +}; + +int init_if_proxy(void) +{ + char buf[16]; + unsigned int i; + + rte_ifpx_callbacks_register(RTE_DIM(ifpx_callbacks), ifpx_callbacks); + + RTE_LCORE_FOREACH(i) { + if (lcore_conf[i].n_rx_queue == 0) + continue; + snprintf(buf, sizeof(buf), "IFPX-events_%d", i); + lcore_conf[i].ev_queue = rte_ring_create(buf, 16, SOCKET_ID_ANY, + RING_F_SP_ENQ | RING_F_SC_DEQ); + if (!lcore_conf[i].ev_queue) { + RTE_LOG(ERR, L3FWD, + "Failed to create event queue for lcore %d\n", + i); + return -1; + } + rte_ifpx_queue_add(lcore_conf[i].ev_queue); + } + + return rte_ifpx_listen(); +} + +void close_if_proxy(void) +{ + unsigned int i; + + RTE_LCORE_FOREACH(i) { + if (lcore_conf[i].n_rx_queue == 0) + continue; + rte_ring_free(lcore_conf[i].ev_queue); + } + rte_ifpx_close(); +} + +void wait_for_config_done(void) +{ + while (!ifpx_ready) + rte_delay_ms(100); +} + +#ifdef DO_RFC_1812_CHECKS +static inline +int is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len) +{ + /* From http://www.rfc-editor.org/rfc/rfc1812.txt section 5.2.2 */ + /* + * 1. 
The packet length reported by the Link Layer must be large + * enough to hold the minimum length legal IP datagram (20 bytes). + */ + if (link_len < sizeof(struct rte_ipv4_hdr)) + return -1; + + /* 2. The IP checksum must be correct. */ + /* this is checked in H/W */ + + /* + * 3. The IP version number must be 4. If the version number is not 4 + * then the packet may be another version of IP, such as IPng or + * ST-II. + */ + if (((pkt->version_ihl) >> 4) != 4) + return -3; + /* + * 4. The IP header length field must be large enough to hold the + * minimum length legal IP datagram (20 bytes = 5 words). + */ + if ((pkt->version_ihl & 0xf) < 5) + return -4; + + /* + * 5. The IP total length field must be large enough to hold the IP + * datagram header, whose length is specified in the IP header length + * field. + */ + if (rte_cpu_to_be_16(pkt->total_length) < sizeof(struct rte_ipv4_hdr)) + return -5; + + return 0; +} +#endif + +/* Send burst of packets on an output interface */ +static inline +int send_burst(struct lcore_conf *lconf, uint16_t n, uint16_t port) +{ + struct rte_mbuf **m_table; + int ret; + uint16_t queueid; + + queueid = lconf->tx_queue_id[port]; + m_table = (struct rte_mbuf **)lconf->tx_mbufs[port].m_table; + + ret = rte_eth_tx_burst(port, queueid, m_table, n); + if (unlikely(ret < n)) { + do { + rte_pktmbuf_free(m_table[ret]); + } while (++ret < n); + } + + return 0; +} + +/* Enqueue a single packet, and send burst if queue is filled */ +static inline +int send_single_packet(struct lcore_conf *lconf, + struct rte_mbuf *m, uint16_t port) +{ + uint16_t len; + + len = lconf->tx_mbufs[port].len; + lconf->tx_mbufs[port].m_table[len] = m; + len++; + + /* enough pkts to be sent */ + if (unlikely(len == MAX_PKT_BURST)) { + send_burst(lconf, MAX_PKT_BURST, port); + len = 0; + } + + lconf->tx_mbufs[port].len = len; + return 0; +} + +static inline +int ipv4_get_destination(const struct rte_ipv4_hdr *ipv4_hdr, + struct rte_lpm *lpm, uint32_t *next_hop) +{ + 
return rte_lpm_lookup(lpm, + rte_be_to_cpu_32(ipv4_hdr->dst_addr), + next_hop); +} + +static inline +int ipv6_get_destination(const struct rte_ipv6_hdr *ipv6_hdr, + struct rte_lpm6 *lpm, uint32_t *next_hop) +{ + return rte_lpm6_lookup(lpm, ipv6_hdr->dst_addr, next_hop); +} + +static +uint16_t ipv4_process_pkt(struct lcore_conf *lconf, + struct rte_ether_hdr *eth_hdr, + struct rte_ipv4_hdr *ipv4_hdr, uint16_t portid) +{ + union lladdr_t lladdr = { 0 }; + int i; + uint32_t ip, nh; + + /* Here we know that the packet is not from a proxy - this case is handled + * in the main loop - so if we fail to find the destination we direct + * it to the proxy. + */ + if (ipv4_get_destination(ipv4_hdr, ipv4_routes, &nh) < 0) + return rte_ifpx_proxy_get(portid); + + if (nh & PROXY_ADDR) + return nh >> 2; + + /* Packet not to us so update src/dst MAC. */ + if (nh & GW_ROUTE) { + /* Keep the gateway index separate from 'i', which is + * overwritten by the hash lookup below. + */ + uint32_t gw = nh >> 2; + if (ipv4_gateways[gw].lladdr.mac.valid) + lladdr = ipv4_gateways[gw].lladdr; + else { + i = rte_hash_lookup(lconf->neigh_hash, + &ipv4_gateways[gw].ip); + if (i < 0) + return rte_ifpx_proxy_get(portid); + lladdr = lconf->neigh_map[i]; + ipv4_gateways[gw].lladdr = lladdr; + } + nh = ipv4_gateways[gw].port; + } else { + nh >>= 2; + ip = rte_be_to_cpu_32(ipv4_hdr->dst_addr); + i = rte_hash_lookup(lconf->neigh_hash, &ip); + if (i < 0) + return rte_ifpx_proxy_get(portid); + lladdr = lconf->neigh_map[i]; + } + + RTE_ASSERT(lladdr.mac.valid); + RTE_ASSERT(port_mac[nh].mac.valid); + /* dst addr */ + *(uint64_t *)&eth_hdr->d_addr = lladdr.val; + /* src addr */ + rte_ether_addr_copy(&port_mac[nh].mac.addr, &eth_hdr->s_addr); + + return nh; +} + +static +uint16_t ipv6_process_pkt(struct lcore_conf *lconf, + struct rte_ether_hdr *eth_hdr, + struct rte_ipv6_hdr *ipv6_hdr, uint16_t portid) +{ + union lladdr_t lladdr = { 0 }; + int i; + uint32_t nh; + + /* Here we know that the packet is not from a proxy - this case is handled + * in the main loop - so if we fail to find the destination we direct + * it to the proxy.
+ */ + if (ipv6_get_destination(ipv6_hdr, ipv6_routes, &nh) < 0) + return rte_ifpx_proxy_get(portid); + + if (nh & PROXY_ADDR) + return nh >> 2; + + /* Packet not to us so update src/dst MAC. */ + if (nh & GW_ROUTE) { + /* Keep the gateway index separate from 'i', which is + * overwritten by the hash lookup below. + */ + uint32_t gw = nh >> 2; + if (ipv6_gateways[gw].lladdr.mac.valid) + lladdr = ipv6_gateways[gw].lladdr; + else { + i = rte_hash_lookup(lconf->neigh6_hash, + ipv6_gateways[gw].ip); + if (i < 0) + return rte_ifpx_proxy_get(portid); + lladdr = lconf->neigh6_map[i]; + ipv6_gateways[gw].lladdr = lladdr; + } + nh = ipv6_gateways[gw].port; + } else { + nh >>= 2; + i = rte_hash_lookup(lconf->neigh6_hash, ipv6_hdr->dst_addr); + if (i < 0) + return rte_ifpx_proxy_get(portid); + lladdr = lconf->neigh6_map[i]; + } + + RTE_ASSERT(lladdr.mac.valid); + /* dst addr */ + *(uint64_t *)&eth_hdr->d_addr = lladdr.val; + /* src addr */ + rte_ether_addr_copy(&port_mac[nh].mac.addr, &eth_hdr->s_addr); + + return nh; +} + +static __rte_always_inline +void l3fwd_lpm_simple_forward(struct rte_mbuf *m, uint16_t portid, + struct lcore_conf *lconf) +{ + struct rte_ether_hdr *eth_hdr; + uint32_t nh; + + eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *); + + if (RTE_ETH_IS_IPV4_HDR(m->packet_type)) { + /* Handle IPv4 headers.*/ + struct rte_ipv4_hdr *ipv4_hdr; + + ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, + sizeof(*eth_hdr)); + +#ifdef DO_RFC_1812_CHECKS + /* Check to make sure the packet is valid (RFC1812) */ + if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) { + rte_pktmbuf_free(m); + return; + } +#endif + nh = ipv4_process_pkt(lconf, eth_hdr, ipv4_hdr, portid); + +#ifdef DO_RFC_1812_CHECKS + /* Update time to live and header checksum */ + --(ipv4_hdr->time_to_live); + ++(ipv4_hdr->hdr_checksum); +#endif + } else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) { + /* Handle IPv6 headers.*/ + struct rte_ipv6_hdr *ipv6_hdr; + + ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *, + sizeof(*eth_hdr)); + + nh = ipv6_process_pkt(lconf, eth_hdr, ipv6_hdr, portid); + } else + /*
Unhandled protocol */ + nh = rte_ifpx_proxy_get(portid); + + if (nh >= RTE_MAX_ETHPORTS || (active_port_mask & 1 << nh) == 0) + rte_pktmbuf_free(m); + else + send_single_packet(lconf, m, nh); +} + +static inline +void l3fwd_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, + uint16_t portid, struct lcore_conf *lconf) +{ + int32_t j; + + /* Prefetch first packets */ + for (j = 0; j < PREFETCH_OFFSET && j < nb_rx; j++) + rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j], void *)); + + /* Prefetch and forward already prefetched packets. */ + for (j = 0; j < (nb_rx - PREFETCH_OFFSET); j++) { + rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[ + j + PREFETCH_OFFSET], void *)); + l3fwd_lpm_simple_forward(pkts_burst[j], portid, lconf); + } + + /* Forward remaining prefetched packets */ + for (; j < nb_rx; j++) + l3fwd_lpm_simple_forward(pkts_burst[j], portid, lconf); +} + +static +void handle_neigh_add(struct lcore_conf *lconf, + const struct rte_ifpx_neigh_change *ev) +{ + char mac[RTE_ETHER_ADDR_FMT_SIZE]; + char ip[INET_ADDRSTRLEN]; + int32_t i, a; + + i = rte_hash_add_key(lconf->neigh_hash, &ev->ip); + if (i < 0) { + RTE_LOG(WARNING, L3FWD, "Failed to add IPv4 neighbour entry\n"); + return; + } + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + rte_ether_format_addr(mac, sizeof(mac), &ev->mac); + a = rte_cpu_to_be_32(ev->ip); + inet_ntop(AF_INET, &a, ip, sizeof(ip)); + RTE_LOG(DEBUG, L3FWD, "Neighbour update for port %d: %s -> %s@%d\n", + ev->port_id, ip, mac, i); + } + lconf->neigh_map[i].mac.addr = ev->mac; + lconf->neigh_map[i].mac.valid = 1; +} + +static +void handle_neigh_del(struct lcore_conf *lconf, + const struct rte_ifpx_neigh_change *ev) +{ + char ip[INET_ADDRSTRLEN]; + int32_t i, a; + + i = rte_hash_del_key(lconf->neigh_hash, &ev->ip); + if (i < 0) { + RTE_LOG(WARNING, L3FWD, + "Failed to remove IPv4 neighbour entry\n"); + return; + } + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + a = rte_cpu_to_be_32(ev->ip); + 
inet_ntop(AF_INET, &a, ip, sizeof(ip)); + RTE_LOG(DEBUG, L3FWD, "Neighbour removal for port %d: %s\n", + ev->port_id, ip); + } + lconf->neigh_map[i].val = 0; +} + +static +void handle_neigh6_add(struct lcore_conf *lconf, + const struct rte_ifpx_neigh6_change *ev) +{ + char mac[RTE_ETHER_ADDR_FMT_SIZE]; + char ip[INET6_ADDRSTRLEN]; + int32_t i; + + i = rte_hash_add_key(lconf->neigh6_hash, ev->ip); + if (i < 0) { + RTE_LOG(WARNING, L3FWD, "Failed to add IPv6 neighbour entry\n"); + return; + } + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + rte_ether_format_addr(mac, sizeof(mac), &ev->mac); + inet_ntop(AF_INET6, ev->ip, ip, sizeof(ip)); + RTE_LOG(DEBUG, L3FWD, "Neighbour update for port %d: %s -> %s@%d\n", + ev->port_id, ip, mac, i); + } + lconf->neigh6_map[i].mac.addr = ev->mac; + lconf->neigh6_map[i].mac.valid = 1; +} + +static +void handle_neigh6_del(struct lcore_conf *lconf, + const struct rte_ifpx_neigh6_change *ev) +{ + char ip[INET6_ADDRSTRLEN]; + int32_t i; + + i = rte_hash_del_key(lconf->neigh6_hash, ev->ip); + if (i < 0) { + RTE_LOG(WARNING, L3FWD, "Failed to remove IPv6 neighbour entry\n"); + return; + } + if (rte_log_get_level(RTE_LOGTYPE_L3FWD) >= (int)RTE_LOG_DEBUG) { + inet_ntop(AF_INET6, ev->ip, ip, sizeof(ip)); + RTE_LOG(DEBUG, L3FWD, "Neighbour removal for port %d: %s\n", + ev->port_id, ip); + } + lconf->neigh6_map[i].val = 0; +} + +static +void handle_events(struct lcore_conf *lconf) +{ + struct rte_ifpx_event *ev; + + while (rte_ring_dequeue(lconf->ev_queue, (void **)&ev) == 0) { + switch (ev->type) { + case RTE_IFPX_NEIGH_ADD: + handle_neigh_add(lconf, &ev->neigh_change); + break; + case RTE_IFPX_NEIGH_DEL: + handle_neigh_del(lconf, &ev->neigh_change); + break; + case RTE_IFPX_NEIGH6_ADD: + handle_neigh6_add(lconf, &ev->neigh6_change); + break; + case RTE_IFPX_NEIGH6_DEL: + handle_neigh6_del(lconf, &ev->neigh6_change); + break; + default: + RTE_LOG(WARNING, L3FWD, + "Unexpected event: %d\n", ev->type); + } + free(ev); + } +}
+ +void setup_lpm(void) +{ + struct rte_lpm6_config cfg6; + struct rte_lpm_config cfg4; + + /* create the LPM table */ + cfg4.max_rules = IPV4_L3FWD_LPM_MAX_RULES; + cfg4.number_tbl8s = IPV4_L3FWD_LPM_NUMBER_TBL8S; + cfg4.flags = 0; + ipv4_routes = rte_lpm_create("IPV4_L3FWD_LPM", SOCKET_ID_ANY, &cfg4); + if (ipv4_routes == NULL) + rte_exit(EXIT_FAILURE, "Unable to create the l3fwd LPM table\n"); + + /* create the LPM6 table */ + cfg6.max_rules = IPV6_L3FWD_LPM_MAX_RULES; + cfg6.number_tbl8s = IPV6_L3FWD_LPM_NUMBER_TBL8S; + cfg6.flags = 0; + ipv6_routes = rte_lpm6_create("IPV6_L3FWD_LPM", SOCKET_ID_ANY, &cfg6); + if (ipv6_routes == NULL) + rte_exit(EXIT_FAILURE, "Unable to create the l3fwd LPM table\n"); +} + +static +uint32_t hash_ipv4(const void *key, uint32_t key_len __rte_unused, + uint32_t init_val) +{ +#ifndef USE_HASH_CRC + return rte_jhash_1word(*(const uint32_t *)key, init_val); +#else + return rte_hash_crc_4byte(*(const uint32_t *)key, init_val); +#endif +} + +static +uint32_t hash_ipv6(const void *key, uint32_t key_len __rte_unused, + uint32_t init_val) +{ +#ifndef USE_HASH_CRC + return rte_jhash_32b(key, 4, init_val); +#else + const uint64_t *pk = key; + init_val = rte_hash_crc_8byte(*pk, init_val); + return rte_hash_crc_8byte(*(pk+1), init_val); +#endif +} + +static +int setup_neigh(struct lcore_conf *lconf) +{ + char buf[16]; + struct rte_hash_parameters ipv4_hparams = { + .name = buf, + .entries = L3FWD_NEIGH_ENTRIES, + .key_len = 4, + .hash_func = hash_ipv4, + .hash_func_init_val = 0, + }; + struct rte_hash_parameters ipv6_hparams = { + .name = buf, + .entries = L3FWD_NEIGH_ENTRIES, + .key_len = 16, + .hash_func = hash_ipv6, + .hash_func_init_val = 0, + }; + + snprintf(buf, sizeof(buf), "neigh_hash-%d", rte_lcore_id()); + lconf->neigh_hash = rte_hash_create(&ipv4_hparams); + snprintf(buf, sizeof(buf), "neigh_map-%d", rte_lcore_id()); + lconf->neigh_map = rte_zmalloc(buf, + L3FWD_NEIGH_ENTRIES*sizeof(*lconf->neigh_map), + 8); + if (lconf->neigh_hash 
== NULL || lconf->neigh_map == NULL) { + RTE_LOG(ERR, L3FWD, + "Unable to create the l3fwd ARP/IPv4 table (lcore %d)\n", + rte_lcore_id()); + return -1; + } + + snprintf(buf, sizeof(buf), "neigh6_hash-%d", rte_lcore_id()); + lconf->neigh6_hash = rte_hash_create(&ipv6_hparams); + snprintf(buf, sizeof(buf), "neigh6_map-%d", rte_lcore_id()); + lconf->neigh6_map = rte_zmalloc(buf, + L3FWD_NEIGH_ENTRIES*sizeof(*lconf->neigh6_map), + 8); + if (lconf->neigh6_hash == NULL || lconf->neigh6_map == NULL) { + RTE_LOG(ERR, L3FWD, + "Unable to create the l3fwd ARP/IPv6 table (lcore %d)\n", + rte_lcore_id()); + return -1; + } + return 0; +} + +int lpm_check_ptype(int portid) +{ + int i, ret; + int ptype_l3_ipv4 = 0, ptype_l3_ipv6 = 0; + uint32_t ptype_mask = RTE_PTYPE_L3_MASK; + + ret = rte_eth_dev_get_supported_ptypes(portid, ptype_mask, NULL, 0); + if (ret <= 0) + return 0; + + uint32_t ptypes[ret]; + + ret = rte_eth_dev_get_supported_ptypes(portid, ptype_mask, ptypes, ret); + for (i = 0; i < ret; ++i) { + if (ptypes[i] & RTE_PTYPE_L3_IPV4) + ptype_l3_ipv4 = 1; + if (ptypes[i] & RTE_PTYPE_L3_IPV6) + ptype_l3_ipv6 = 1; + } + + if (ptype_l3_ipv4 == 0) + RTE_LOG(WARNING, L3FWD, + "port %d cannot parse RTE_PTYPE_L3_IPV4\n", portid); + + if (ptype_l3_ipv6 == 0) + RTE_LOG(WARNING, L3FWD, + "port %d cannot parse RTE_PTYPE_L3_IPV6\n", portid); + + if (ptype_l3_ipv4 && ptype_l3_ipv6) + return 1; + + return 0; + +} + +static inline +void lpm_parse_ptype(struct rte_mbuf *m) +{ + struct rte_ether_hdr *eth_hdr; + uint32_t packet_type = RTE_PTYPE_UNKNOWN; + uint16_t ether_type; + + eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *); + ether_type = eth_hdr->ether_type; + if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) + packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN; + else if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) + packet_type |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN; + + m->packet_type = packet_type; +} + +uint16_t lpm_cb_parse_ptype(uint16_t port __rte_unused, + uint16_t 
queue __rte_unused, + struct rte_mbuf *pkts[], uint16_t nb_pkts, + uint16_t max_pkts __rte_unused, + void *user_param __rte_unused) +{ + unsigned int i; + + if (unlikely(nb_pkts == 0)) + return nb_pkts; + rte_prefetch0(rte_pktmbuf_mtod(pkts[0], struct rte_ether_hdr *)); + for (i = 0; i < (unsigned int) (nb_pkts - 1); ++i) { + rte_prefetch0(rte_pktmbuf_mtod(pkts[i+1], + struct rte_ether_hdr *)); + lpm_parse_ptype(pkts[i]); + } + lpm_parse_ptype(pkts[i]); + + return nb_pkts; +} + +/* main processing loop */ +int lpm_main_loop(void *dummy __rte_unused) +{ + struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; + unsigned int lcore_id; + uint64_t prev_tsc, diff_tsc, cur_tsc; + int i, j, nb_rx; + uint16_t portid; + uint8_t queueid; + struct lcore_conf *lconf; + struct lcore_rx_queue *rxq; + const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / + US_PER_S * BURST_TX_DRAIN_US; + + prev_tsc = 0; + + lcore_id = rte_lcore_id(); + lconf = &lcore_conf[lcore_id]; + + if (setup_neigh(lconf) < 0) { + RTE_LOG(ERR, L3FWD, "lcore %u failed to setup its ARP tables\n", + lcore_id); + return 0; + } + + if (lconf->n_rx_queue == 0) { + RTE_LOG(INFO, L3FWD, "lcore %u has nothing to do\n", lcore_id); + return 0; + } + + RTE_LOG(INFO, L3FWD, "entering main loop on lcore %u\n", lcore_id); + + for (i = 0; i < lconf->n_rx_queue; i++) { + + portid = lconf->rx_queue_list[i].port_id; + queueid = lconf->rx_queue_list[i].queue_id; + RTE_LOG(INFO, L3FWD, + " -- lcoreid=%u portid=%u rxqueueid=%hhu\n", + lcore_id, portid, queueid); + } + + while (!force_quit) { + + cur_tsc = rte_rdtsc(); + /* + * TX burst and event queue drain + */ + diff_tsc = cur_tsc - prev_tsc; + if (unlikely(diff_tsc > drain_tsc)) { + + for (i = 0; i < lconf->n_tx_port; ++i) { + portid = lconf->tx_port_id[i]; + if (lconf->tx_mbufs[portid].len == 0) + continue; + send_burst(lconf, + lconf->tx_mbufs[portid].len, + portid); + lconf->tx_mbufs[portid].len = 0; + } + + if (diff_tsc > EV_QUEUE_DRAIN * drain_tsc) { + if (lconf->ev_queue && + 
!rte_ring_empty(lconf->ev_queue)) + handle_events(lconf); + prev_tsc = cur_tsc; + } + } + + /* + * Read packet from RX queues + */ + for (i = 0; i < lconf->n_rx_queue; ++i) { + rxq = &lconf->rx_queue_list[i]; + portid = rxq->port_id; + queueid = rxq->queue_id; + nb_rx = rte_eth_rx_burst(portid, queueid, pkts_burst, + MAX_PKT_BURST); + if (nb_rx == 0) + continue; + /* If current queue is from proxy interface then there + * is no need to figure out destination port - just + * forward it to the bound port. + */ + if (unlikely(rxq->dst_port != RTE_MAX_ETHPORTS)) { + for (j = 0; j < nb_rx; ++j) + send_single_packet(lconf, pkts_burst[j], + rxq->dst_port); + } else + l3fwd_send_packets(nb_rx, pkts_burst, portid, + lconf); + } + } + + return 0; +} diff --git a/examples/l3fwd-ifpx/l3fwd.h b/examples/l3fwd-ifpx/l3fwd.h new file mode 100644 index 000000000..fc60078c5 --- /dev/null +++ b/examples/l3fwd-ifpx/l3fwd.h @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Marvell International Ltd. + */ + +#ifndef __L3_FWD_H__ +#define __L3_FWD_H__ + +#include + +#include +#include +#include + +#define RTE_LOGTYPE_L3FWD RTE_LOGTYPE_USER1 + +#define MAX_PKT_BURST 32 +#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ +#define EV_QUEUE_DRAIN 5 /* Check event queue every 5 TX drains */ + +#define MAX_RX_QUEUE_PER_LCORE 16 + +/* + * Try to avoid TX buffering if we have at least MAX_TX_BURST packets to send. + */ +#define MAX_TX_BURST (MAX_PKT_BURST / 2) + +/* Configure how many packets ahead to prefetch, when reading packets */ +#define PREFETCH_OFFSET 3 + +/* Hash parameters. 
*/ +#ifdef RTE_ARCH_64 +/* default to 4 million hash entries (approx) */ +#define L3FWD_HASH_ENTRIES (1024*1024*4) +#else +/* 32-bit has less address-space for hugepage memory, limit to 1M entries */ +#define L3FWD_HASH_ENTRIES (1024*1024*1) +#endif +#define HASH_ENTRY_NUMBER_DEFAULT 4 +/* Default ARP table size */ +#define L3FWD_NEIGH_ENTRIES 1024 + +union lladdr_t { + uint64_t val; + struct { + struct rte_ether_addr addr; + uint16_t valid; + } mac; +}; + +struct mbuf_table { + uint16_t len; + struct rte_mbuf *m_table[MAX_PKT_BURST]; +}; + +struct lcore_rx_queue { + uint16_t port_id; + uint16_t dst_port; + uint8_t queue_id; +} __rte_cache_aligned; + +struct lcore_conf { + uint16_t n_rx_queue; + struct lcore_rx_queue rx_queue_list[MAX_RX_QUEUE_PER_LCORE]; + uint16_t n_tx_port; + uint16_t tx_port_id[RTE_MAX_ETHPORTS]; + uint16_t tx_queue_id[RTE_MAX_ETHPORTS]; + struct mbuf_table tx_mbufs[RTE_MAX_ETHPORTS]; + struct rte_ring *ev_queue; + union lladdr_t *neigh_map; + struct rte_hash *neigh_hash; + union lladdr_t *neigh6_map; + struct rte_hash *neigh6_hash; +} __rte_cache_aligned; + +extern volatile bool force_quit; + +/* mask of enabled/active ports */ +extern uint32_t enabled_port_mask; +extern uint32_t active_port_mask; + +extern struct lcore_conf lcore_conf[RTE_MAX_LCORE]; + +int init_if_proxy(void); +void close_if_proxy(void); + +void wait_for_config_done(void); + +void setup_lpm(void); + +int lpm_check_ptype(int portid); + +uint16_t +lpm_cb_parse_ptype(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[], + uint16_t nb_pkts, uint16_t max_pkts, void *user_param); + +int lpm_main_loop(__attribute__((unused)) void *dummy); + +#endif /* __L3_FWD_H__ */ diff --git a/examples/l3fwd-ifpx/main.c b/examples/l3fwd-ifpx/main.c new file mode 100644 index 000000000..7f1da5ec2 --- /dev/null +++ b/examples/l3fwd-ifpx/main.c @@ -0,0 +1,740 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Marvell International Ltd. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include "l3fwd.h" + +/* + * Configurable number of RX/TX ring descriptors + */ +#define RTE_TEST_RX_DESC_DEFAULT 1024 +#define RTE_TEST_TX_DESC_DEFAULT 1024 + +#define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS +#define MAX_RX_QUEUE_PER_PORT 128 + +#define MAX_LCORE_PARAMS 1024 + +/* Static global variables used within this file. */ +static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT; +static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT; + +/**< Ports set in promiscuous mode off by default. */ +static int promiscuous_on; + +/* Global variables. */ + +static int parse_ptype; /**< Parse packet type using rx callback, and */ + /**< disabled by default */ + +volatile bool force_quit; + +/* mask of enabled/active ports */ +uint32_t enabled_port_mask; +uint32_t active_port_mask; + +struct lcore_conf lcore_conf[RTE_MAX_LCORE]; + +struct lcore_params { + uint16_t port_id; + uint8_t queue_id; + uint8_t lcore_id; +} __rte_cache_aligned; + +static struct lcore_params lcore_params[MAX_LCORE_PARAMS]; +static struct lcore_params lcore_params_default[] = { + {0, 0, 2}, + {0, 1, 2}, + {0, 2, 2}, + {1, 0, 2}, + {1, 1, 2}, + {1, 2, 2}, + {2, 0, 2}, + {3, 0, 3}, + {3, 1, 3}, +}; + +static uint16_t nb_lcore_params; + +static struct rte_eth_conf port_conf = { + .rxmode = { + .mq_mode = ETH_MQ_RX_RSS, + .max_rx_pkt_len = RTE_ETHER_MAX_LEN, + .split_hdr_size = 0, + .offloads = DEV_RX_OFFLOAD_CHECKSUM, + }, + .rx_adv_conf = { + .rss_conf = { + .rss_key = NULL, + .rss_hf = ETH_RSS_IP, + }, + }, + .txmode = { + .mq_mode = ETH_MQ_TX_NONE, + }, +}; + +static struct rte_mempool *pktmbuf_pool; + +static int 
+check_lcore_params(void) +{ + uint8_t queue, lcore; + uint16_t i, port_id; + int socketid; + + for (i = 0; i < nb_lcore_params; ++i) { + queue = lcore_params[i].queue_id; + if (queue >= MAX_RX_QUEUE_PER_PORT) { + RTE_LOG(ERR, L3FWD, "Invalid queue number: %hhu\n", + queue); + return -1; + } + lcore = lcore_params[i].lcore_id; + if (!rte_lcore_is_enabled(lcore)) { + RTE_LOG(ERR, L3FWD, "lcore %hhu is not enabled " + "in lcore mask\n", lcore); + return -1; + } + port_id = lcore_params[i].port_id; + if ((enabled_port_mask & (1 << port_id)) == 0) { + RTE_LOG(ERR, L3FWD, "port %u is not enabled " + "in port mask\n", port_id); + return -1; + } + if (!rte_eth_dev_is_valid_port(port_id)) { + RTE_LOG(ERR, L3FWD, "port %u is not present " + "on the board\n", port_id); + return -1; + } + socketid = rte_lcore_to_socket_id(lcore); + if (socketid != 0) { + RTE_LOG(WARNING, L3FWD, + "lcore %hhu is on socket %d with numa off\n", + lcore, socketid); + } + } + return 0; +} + +static int +add_proxies(void) +{ + uint16_t i, p, port_id, proxy_id; + + for (i = 0, p = nb_lcore_params; i < nb_lcore_params; ++i) { + if (p >= RTE_DIM(lcore_params)) { + RTE_LOG(ERR, L3FWD, "Not enough room in lcore_params " + "to add proxy\n"); + return -1; + } + port_id = lcore_params[i].port_id; + if (rte_ifpx_proxy_get(port_id) != RTE_MAX_ETHPORTS) + continue; + + proxy_id = rte_ifpx_proxy_create(RTE_IFPX_DEFAULT); + if (proxy_id == RTE_MAX_ETHPORTS) { + RTE_LOG(ERR, L3FWD, "Failed to create proxy\n"); + return -1; + } + rte_ifpx_port_bind(port_id, proxy_id); + /* mark proxy as enabled - the corresponding port already is, + * since lcore_params have been validated above + */ + enabled_port_mask |= 1 << proxy_id; + lcore_params[p].port_id = proxy_id; + lcore_params[p].lcore_id = lcore_params[i].lcore_id; + lcore_params[p].queue_id = lcore_params[i].queue_id; + ++p; + } + + nb_lcore_params = p; + return 0; +} + +static uint8_t +get_port_n_rx_queues(const uint16_t port) +{ + int queue = -1; + uint16_t i; + + for (i = 
0; i < nb_lcore_params; ++i) { + if (lcore_params[i].port_id == port) { + if (lcore_params[i].queue_id == queue+1) + queue = lcore_params[i].queue_id; + else + rte_exit(EXIT_FAILURE, "queue ids of the port %d must be" + " in sequence and must start with 0\n", + lcore_params[i].port_id); + } + } + return (uint8_t)(++queue); +} + +static int +init_lcore_rx_queues(void) +{ + uint16_t i, p, nb_rx_queue; + uint8_t lcore; + struct lcore_rx_queue *rq; + + for (i = 0; i < nb_lcore_params; ++i) { + lcore = lcore_params[i].lcore_id; + nb_rx_queue = lcore_conf[lcore].n_rx_queue; + if (nb_rx_queue >= MAX_RX_QUEUE_PER_LCORE) { + RTE_LOG(ERR, L3FWD, + "too many queues (%u) for lcore: %u\n", + (unsigned int)nb_rx_queue + 1, + (unsigned int)lcore); + return -1; + } + rq = &lcore_conf[lcore].rx_queue_list[nb_rx_queue]; + rq->port_id = lcore_params[i].port_id; + rq->queue_id = lcore_params[i].queue_id; + if (rte_ifpx_is_proxy(rq->port_id)) { + if (rte_ifpx_port_get(rq->port_id, &p, 1) > 0) + rq->dst_port = p; + else + RTE_LOG(WARNING, L3FWD, + "Found proxy that has no port bound\n"); + } else + rq->dst_port = RTE_MAX_ETHPORTS; + lcore_conf[lcore].n_rx_queue++; + } + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + fprintf(stderr, "%s [EAL options] --" + " -p PORTMASK" + " [-P]" + " --config (port,queue,lcore)[,(port,queue,lcore)]" + " [--ipv6]" + " [--parse-ptype]" + + " -p PORTMASK: Hexadecimal bitmask of ports to configure\n" + " -P : Enable promiscuous mode\n" + " --config (port,queue,lcore): Rx queue configuration\n" + " --ipv6: Set if running ipv6 packets\n" + " --parse-ptype: Set to use software to analyze packet type\n", + prgname); +} + +static int +parse_portmask(const char *portmask) +{ + char *end = NULL; + unsigned long pm; + + /* parse hexadecimal string */ + pm = strtoul(portmask, &end, 16); + if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0')) + return -1; + + if (pm == 0) + return -1; + + return pm; +} + +static int 
+parse_config(const char *q_arg) +{ + char s[256]; + const char *p, *p0 = q_arg; + char *end; + enum fieldnames { + FLD_PORT = 0, + FLD_QUEUE, + FLD_LCORE, + _NUM_FLD + }; + unsigned long int_fld[_NUM_FLD]; + char *str_fld[_NUM_FLD]; + int i; + unsigned int size; + + nb_lcore_params = 0; + + while ((p = strchr(p0, '(')) != NULL) { + ++p; + p0 = strchr(p, ')'); + if (p0 == NULL) + return -1; + + size = p0 - p; + if (size >= sizeof(s)) + return -1; + + snprintf(s, sizeof(s), "%.*s", size, p); + if (rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',') != + _NUM_FLD) + return -1; + for (i = 0; i < _NUM_FLD; i++) { + errno = 0; + int_fld[i] = strtoul(str_fld[i], &end, 0); + if (errno != 0 || end == str_fld[i] || int_fld[i] > 255) + return -1; + } + if (nb_lcore_params >= MAX_LCORE_PARAMS) { + RTE_LOG(ERR, L3FWD, "exceeded max number of lcore " + "params: %hu\n", nb_lcore_params); + return -1; + } + lcore_params[nb_lcore_params].port_id = + (uint8_t)int_fld[FLD_PORT]; + lcore_params[nb_lcore_params].queue_id = + (uint8_t)int_fld[FLD_QUEUE]; + lcore_params[nb_lcore_params].lcore_id = + (uint8_t)int_fld[FLD_LCORE]; + ++nb_lcore_params; + } + return 0; +} + +#define MAX_JUMBO_PKT_LEN 9600 +#define MEMPOOL_CACHE_SIZE 256 + +static const char short_options[] = + "p:" /* portmask */ + "P" /* promiscuous */ + "L" /* enable long prefix match */ + "E" /* enable exact match */ + ; + +#define CMD_LINE_OPT_CONFIG "config" +#define CMD_LINE_OPT_IPV6 "ipv6" +#define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype" +enum { + /* long options mapped to a short option */ + + /* first long only option value must be >= 256, so that we won't + * conflict with short options + */ + CMD_LINE_OPT_MIN_NUM = 256, + CMD_LINE_OPT_CONFIG_NUM, + CMD_LINE_OPT_PARSE_PTYPE_NUM, +}; + +static const struct option lgopts[] = { + {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, + {CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM}, + {NULL, 0, 0, 0} +}; + +/* + * This expression is used to calculate the 
number of mbufs needed + * depending on user input, taking into account memory for rx and + * tx hardware rings, cache per lcore and mtable per port per lcore. + * RTE_MAX is used to ensure that NB_MBUF never goes below a minimum + * value of 8192 + */ +#define NB_MBUF(nports) RTE_MAX( \ + (nports*nb_rx_queue*nb_rxd + \ + nports*nb_lcores*MAX_PKT_BURST + \ + nports*n_tx_queue*nb_txd + \ + nb_lcores*MEMPOOL_CACHE_SIZE), \ + 8192U) + +/* Parse the argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + + argvopt = argv; + + /* Error or normal output strings. */ + while ((opt = getopt_long(argc, argvopt, short_options, + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* portmask */ + case 'p': + enabled_port_mask = parse_portmask(optarg); + if (enabled_port_mask == 0) { + RTE_LOG(ERR, L3FWD, "Invalid portmask\n"); + print_usage(prgname); + return -1; + } + break; + + case 'P': + promiscuous_on = 1; + break; + + /* long options */ + case CMD_LINE_OPT_CONFIG_NUM: + ret = parse_config(optarg); + if (ret) { + RTE_LOG(ERR, L3FWD, "Invalid config\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_PARSE_PTYPE_NUM: + RTE_LOG(INFO, L3FWD, "soft parse-ptype is enabled\n"); + parse_ptype = 1; + break; + + default: + print_usage(prgname); + return -1; + } + } + + if (nb_lcore_params == 0) { + memcpy(lcore_params, lcore_params_default, + sizeof(lcore_params_default)); + nb_lcore_params = RTE_DIM(lcore_params_default); + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + RTE_LOG(NOTICE, L3FWD, + "\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + } +} + +static int +prepare_ptype_parser(uint16_t portid, uint16_t 
queueid) +{ + if (parse_ptype) { + RTE_LOG(INFO, L3FWD, "Port %d: softly parse packet type info\n", + portid); + if (rte_eth_add_rx_callback(portid, queueid, + lpm_cb_parse_ptype, + NULL)) + return 1; + + RTE_LOG(ERR, L3FWD, "Failed to add rx callback: port=%d\n", + portid); + return 0; + } + + if (lpm_check_ptype(portid)) + return 1; + + RTE_LOG(ERR, L3FWD, + "port %d cannot parse packet type, please add --%s\n", + portid, CMD_LINE_OPT_PARSE_PTYPE); + return 0; +} + +int +main(int argc, char **argv) +{ + struct lcore_conf *lconf; + struct rte_eth_dev_info dev_info; + struct rte_eth_txconf *txconf; + int ret; + unsigned int nb_ports; + uint32_t nb_mbufs; + uint16_t queueid, portid; + unsigned int lcore_id; + uint32_t nb_tx_queue, nb_lcores; + uint8_t nb_rx_queue, queue; + + /* init EAL */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n"); + argc -= ret; + argv += ret; + + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n"); + + if (check_lcore_params() < 0) + rte_exit(EXIT_FAILURE, "check_lcore_params failed\n"); + + if (add_proxies() < 0) + rte_exit(EXIT_FAILURE, "add_proxies failed\n"); + + ret = init_lcore_rx_queues(); + if (ret < 0) + rte_exit(EXIT_FAILURE, "init_lcore_rx_queues failed\n"); + + nb_ports = rte_eth_dev_count_avail(); + + nb_lcores = rte_lcore_count(); + + /* Initial number of mbufs in pool - the amount required for hardware + * rx/tx rings will be added during configuration of ports. + */ + nb_mbufs = nb_ports * nb_lcores * MAX_PKT_BURST + /* mbuf tables */ + nb_lcores * MEMPOOL_CACHE_SIZE; /* per lcore cache */ + + /* Init the lookup structures. 
*/ + setup_lpm(); + + /* initialize all ports (including proxies) */ + RTE_ETH_FOREACH_DEV(portid) { + struct rte_eth_conf local_port_conf = port_conf; + + /* skip ports that are not enabled */ + if ((enabled_port_mask & (1 << portid)) == 0) { + RTE_LOG(INFO, L3FWD, "Skipping disabled port %d\n", + portid); + continue; + } + + /* init port */ + RTE_LOG(INFO, L3FWD, "Initializing port %d ...\n", portid); + + nb_rx_queue = get_port_n_rx_queues(portid); + nb_tx_queue = nb_lcores; + + ret = rte_eth_dev_info_get(portid, &dev_info); + if (ret != 0) + rte_exit(EXIT_FAILURE, + "Error during getting device (port %u) info: %s\n", + portid, strerror(-ret)); + if (nb_rx_queue > dev_info.max_rx_queues || + nb_tx_queue > dev_info.max_tx_queues) + rte_exit(EXIT_FAILURE, + "Port %d cannot configure enough queues\n", + portid); + + RTE_LOG(INFO, L3FWD, "Creating queues: nb_rxq=%d nb_txq=%u...\n", + nb_rx_queue, nb_tx_queue); + + if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE) + local_port_conf.txmode.offloads |= + DEV_TX_OFFLOAD_MBUF_FAST_FREE; + + local_port_conf.rx_adv_conf.rss_conf.rss_hf &= + dev_info.flow_type_rss_offloads; + if (local_port_conf.rx_adv_conf.rss_conf.rss_hf != + port_conf.rx_adv_conf.rss_conf.rss_hf) { + RTE_LOG(INFO, L3FWD, + "Port %u modified RSS hash function based on hardware support," + "requested:%#"PRIx64" configured:%#"PRIx64"\n", + portid, port_conf.rx_adv_conf.rss_conf.rss_hf, + local_port_conf.rx_adv_conf.rss_conf.rss_hf); + } + + ret = rte_eth_dev_configure(portid, nb_rx_queue, + (uint16_t)nb_tx_queue, + &local_port_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot configure device: err=%d, port=%d\n", + ret, portid); + + ret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd, + &nb_txd); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "Cannot adjust number of descriptors: err=%d, " + "port=%d\n", ret, portid); + + nb_mbufs += nb_rx_queue * nb_rxd + nb_tx_queue * nb_txd; + /* init one TX queue per couple (lcore,port) */ + queueid = 0; 
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + RTE_LOG(INFO, L3FWD, "\ttxq=%u,%d\n", lcore_id, + queueid); + + txconf = &dev_info.default_txconf; + txconf->offloads = local_port_conf.txmode.offloads; + ret = rte_eth_tx_queue_setup(portid, queueid, nb_txd, + SOCKET_ID_ANY, txconf); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_tx_queue_setup: err=%d, " + "port=%d\n", ret, portid); + + lconf = &lcore_conf[lcore_id]; + lconf->tx_queue_id[portid] = queueid; + queueid++; + + lconf->tx_port_id[lconf->n_tx_port] = portid; + lconf->n_tx_port++; + } + RTE_LOG(INFO, L3FWD, "\n"); + } + + /* Init pkt pool. */ + pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", + rte_align32prevpow2(nb_mbufs), MEMPOOL_CACHE_SIZE, + 0, RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY); + if (pktmbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n"); + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + lconf = &lcore_conf[lcore_id]; + RTE_LOG(INFO, L3FWD, "Initializing rx queues on lcore %u ...\n", + lcore_id); + /* init RX queues */ + for (queue = 0; queue < lconf->n_rx_queue; ++queue) { + struct rte_eth_rxconf rxq_conf; + + portid = lconf->rx_queue_list[queue].port_id; + queueid = lconf->rx_queue_list[queue].queue_id; + + RTE_LOG(INFO, L3FWD, "\trxq=%d,%d\n", portid, queueid); + + ret = rte_eth_dev_info_get(portid, &dev_info); + if (ret != 0) + rte_exit(EXIT_FAILURE, + "Error during getting device (port %u) info: %s\n", + portid, strerror(-ret)); + + rxq_conf = dev_info.default_rxconf; + rxq_conf.offloads = port_conf.rxmode.offloads; + ret = rte_eth_rx_queue_setup(portid, queueid, + nb_rxd, SOCKET_ID_ANY, + &rxq_conf, + pktmbuf_pool); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_rx_queue_setup: err=%d, port=%d\n", + ret, portid); + } + } + + RTE_LOG(INFO, L3FWD, "\n"); + + /* start ports */ + RTE_ETH_FOREACH_DEV(portid) { + if 
((enabled_port_mask & (1 << portid)) == 0) + continue; + + /* Start device */ + ret = rte_eth_dev_start(portid); + if (ret < 0) + rte_exit(EXIT_FAILURE, + "rte_eth_dev_start: err=%d, port=%d\n", + ret, portid); + + /* + * If enabled, put device in promiscuous mode. + * This allows IO forwarding mode to forward packets + * to itself through 2 cross-connected ports of the + * target machine. + */ + if (promiscuous_on) { + ret = rte_eth_promiscuous_enable(portid); + if (ret != 0) + rte_exit(EXIT_FAILURE, + "rte_eth_promiscuous_enable: err=%s, port=%u\n", + rte_strerror(-ret), portid); + } + } + /* we've managed to start all enabled ports so active == enabled */ + active_port_mask = enabled_port_mask; + + RTE_LOG(INFO, L3FWD, "\n"); + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + lconf = &lcore_conf[lcore_id]; + for (queue = 0; queue < lconf->n_rx_queue; ++queue) { + portid = lconf->rx_queue_list[queue].port_id; + queueid = lconf->rx_queue_list[queue].queue_id; + if (prepare_ptype_parser(portid, queueid) == 0) + rte_exit(EXIT_FAILURE, "ptype check fails\n"); + } + } + + if (init_if_proxy() < 0) + rte_exit(EXIT_FAILURE, "Failed to configure proxy lib\n"); + wait_for_config_done(); + + ret = 0; + /* launch per-lcore init on every lcore */ + rte_eal_mp_remote_launch(lpm_main_loop, NULL, CALL_MASTER); + RTE_LCORE_FOREACH_SLAVE(lcore_id) { + if (rte_eal_wait_lcore(lcore_id) < 0) { + ret = -1; + break; + } + } + + /* stop ports */ + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + RTE_LOG(INFO, L3FWD, "Closing port %d...", portid); + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + rte_log(RTE_LOG_INFO, RTE_LOGTYPE_L3FWD, " Done\n"); + } + + close_if_proxy(); + RTE_LOG(INFO, L3FWD, "Bye...\n"); + + return ret; +} diff --git a/examples/l3fwd-ifpx/meson.build b/examples/l3fwd-ifpx/meson.build new file mode 100644 index 000000000..f0c0920b8 --- /dev/null 
+++ b/examples/l3fwd-ifpx/meson.build @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2020 Marvell International Ltd. + +# meson file, for building this example as part of a main DPDK build. +# +# To build this example as a standalone application with an already-installed +# DPDK instance, use 'make' + +allow_experimental_apis = true +deps += ['hash', 'lpm', 'if_proxy'] +sources = files('l3fwd.c', 'main.c') diff --git a/examples/meson.build b/examples/meson.build index 1f2b6f516..319d765eb 100644 --- a/examples/meson.build +++ b/examples/meson.build @@ -23,7 +23,7 @@ all_examples = [ 'l2fwd', 'l2fwd-cat', 'l2fwd-event', 'l2fwd-crypto', 'l2fwd-jobstats', 'l2fwd-keepalive', 'l3fwd', - 'l3fwd-acl', 'l3fwd-power', + 'l3fwd-acl', 'l3fwd-ifpx', 'l3fwd-power', 'link_status_interrupt', 'multi_process/client_server_mp/mp_client', 'multi_process/client_server_mp/mp_server',