From patchwork Fri Sep 22 08:19:10 2023
X-Patchwork-Submitter: Bruce Richardson
X-Patchwork-Id: 131824
X-Patchwork-Delegate: thomas@monjalon.net
From: Bruce Richardson
To: dev@dpdk.org
Cc: Bruce Richardson
Subject: [RFC PATCH 3/5] net: new ethdev driver to communicate using shared mem
Date: Fri, 22 Sep 2023 09:19:10 +0100
Message-Id: <20230922081912.7090-4-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230922081912.7090-1-bruce.richardson@intel.com>
References: <20230922081912.7090-1-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

This ethdev builds on the previous shared_mem bus driver and shared_mem
mempool driver to provide an ethdev interface which can allow zero-copy
I/O from one process to another.

Signed-off-by: Bruce Richardson
---
 drivers/net/meson.build                 |   1 +
 drivers/net/shared_mem/meson.build      |  11 +
 drivers/net/shared_mem/shared_mem_eth.c | 295 ++++++++++++++++++++++++
 3 files changed, 307 insertions(+)
 create mode 100644 drivers/net/shared_mem/meson.build
 create mode 100644 drivers/net/shared_mem/shared_mem_eth.c

diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index bd38b533c5..505d208497 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -53,6 +53,7 @@ drivers = [
         'qede',
         'ring',
         'sfc',
+        'shared_mem',
         'softnic',
         'tap',
         'thunderx',
diff --git a/drivers/net/shared_mem/meson.build b/drivers/net/shared_mem/meson.build
new file mode 100644
index 0000000000..17d1b84454
--- /dev/null
+++ b/drivers/net/shared_mem/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+endif
+
+sources = files('shared_mem_eth.c')
+deps += 'bus_shared_mem'
+require_iova_in_mbuf = false
diff --git a/drivers/net/shared_mem/shared_mem_eth.c b/drivers/net/shared_mem/shared_mem_eth.c
new file mode 100644
index 0000000000..564bfdb907
--- /dev/null
+++ b/drivers/net/shared_mem/shared_mem_eth.c
@@ -0,0 +1,295 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+#include
+#include
+#include
+
+RTE_LOG_REGISTER_DEFAULT(shared_mem_eth_logtype, DEBUG);
+#define SHM_ETH_LOG(level, fmt, args...) rte_log(RTE_LOG_ ## level, \
+		shared_mem_eth_logtype, "## SHARED MEM ETH: %s(): " fmt "\n", __func__, ##args)
+#define SHM_ETH_ERR(fmt, args...) SHM_ETH_LOG(ERR, fmt, ## args)
+#define SHM_ETH_INFO(fmt, args...) SHM_ETH_LOG(INFO, fmt, ## args)
+#define SHM_ETH_DEBUG(fmt, args...) SHM_ETH_LOG(DEBUG, fmt, ## args)
+
+struct shm_eth_stats {
+	uint64_t rx_pkts;
+	uint64_t tx_pkts;
+	uint64_t rx_bytes;
+	uint64_t tx_bytes;
+};
+
+struct shm_eth_private {
+	struct rte_ether_addr addr;
+	struct rte_ring *rx;
+	struct rte_ring *tx;
+	struct shm_eth_stats stats;
+};
+
+static struct rte_mempool *rx_mp; /* TODO: use one per queue */
+
+static int
+shm_eth_configure(struct rte_eth_dev *dev __rte_unused)
+{
+	return 0;
+}
+
+static int
+shm_eth_start(struct rte_eth_dev *dev)
+{
+	struct shm_eth_private *priv = dev->data->dev_private;
+
+	struct eth_shared_mem_msg msg = (struct eth_shared_mem_msg){
+		.type = MSG_TYPE_START,
+	};
+	rte_shm_bus_send_message(&msg, sizeof(msg));
+
+	rte_shm_bus_recv_message(&msg, sizeof(msg));
+	if (msg.type != MSG_TYPE_ACK) {
+		SHM_ETH_ERR("Didn't get ack from host");
+		return -1;
+	}
+
+	memset(&priv->stats, 0, sizeof(priv->stats));
+	return 0;
+}
+
+static int
+shm_eth_stop(struct rte_eth_dev *dev __rte_unused)
+{
+	return 0;
+}
+
+static int
+shm_eth_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
+{
+	*info = (struct rte_eth_dev_info){
+		.driver_name = dev->device->driver->name,
+		.max_rx_queues = 1,
+		.max_tx_queues = 1,
+		.max_mac_addrs = 1,
+		.min_mtu = 64,
+		.max_mtu = UINT16_MAX,
+		.max_rx_pktlen = UINT16_MAX,
+		.nb_rx_queues = 1,
+		.nb_tx_queues = 1,
+		.tx_desc_lim = { .nb_max = 8192, .nb_min = 128, .nb_align = 64 },
+		.rx_desc_lim = { .nb_max = 8192, .nb_min = 128, .nb_align = 64 },
+	};
+	return 0;
+}
+
+static int
+shm_eth_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	dev->data->mtu = mtu;
+	return 0;
+}
+
+static int
+shm_eth_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
+{
+	dev->data->dev_link = (struct rte_eth_link){
+		.link_speed = RTE_ETH_SPEED_NUM_100G,
+		.link_duplex = 1,
+		.link_autoneg = 1,
+		.link_status = 1,
+	};
+	return 0;
+}
+
+static int
+shm_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mb_pool)
+{
+	RTE_SET_USED(rx_conf);
+
+	struct shm_eth_private *priv = dev->data->dev_private;
+	char ring_name[32];
+
+	if (rte_shm_bus_get_mem_offset(mb_pool) == (uintptr_t)-1) {
+		SHM_ETH_ERR("Mempool not in shared memory");
+		return -1;
+	}
+	snprintf(ring_name, sizeof(ring_name), "shm_eth_rxr%u", rx_queue_id);
+	priv->rx = rte_ring_create(ring_name, nb_rx_desc, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+	if (priv->rx == NULL)
+		return -1;
+	SHM_ETH_INFO("RX ring @ %p", priv->rx);
+	if (rte_shm_bus_get_mem_offset(priv->rx) == (uintptr_t)-1) {
+		SHM_ETH_ERR("Ring not created on shared memory");
+		return -1;
+	}
+	dev->data->rx_queues[rx_queue_id] = priv;
+
+	SHM_ETH_INFO("Mempool offset is: %p", (void *)rte_shm_bus_get_mem_offset(mb_pool));
+	SHM_ETH_INFO("Rx queue offset is: %p", (void *)rte_shm_bus_get_mem_offset(priv->rx));
+
+	struct eth_shared_mem_msg msg = (struct eth_shared_mem_msg){
+		.type = MSG_TYPE_MEMPOOL_OFFSET,
+		.offset = rte_shm_bus_get_mem_offset(mb_pool),
+	};
+	rte_shm_bus_send_message(&msg, sizeof(msg));
+	msg = (struct eth_shared_mem_msg){
+		.type = MSG_TYPE_RX_RING_OFFSET,
+		.offset = rte_shm_bus_get_mem_offset(priv->rx),
+	};
+	rte_shm_bus_send_message(&msg, sizeof(msg));
+	rx_mp = mb_pool;
+	return 0;
+}
+
+static int
+shm_eth_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+		uint16_t nb_tx_desc, unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf)
+{
+	RTE_SET_USED(tx_conf);
+
+	struct shm_eth_private *priv = dev->data->dev_private;
+	char ring_name[32];
+
+	snprintf(ring_name, sizeof(ring_name), "shm_eth_txr%u", tx_queue_id);
+	priv->tx = rte_ring_create(ring_name, nb_tx_desc, socket_id,
+			RING_F_SP_ENQ | RING_F_SC_DEQ);
+	if (priv->tx == NULL)
+		return -1;
+	SHM_ETH_DEBUG("TX ring @ %p", priv->tx);
+	if (rte_shm_bus_get_mem_offset(priv->tx) == (uintptr_t)-1) {
+		SHM_ETH_ERR("TX ring not on shared memory");
+		return -1;
+	}
+	dev->data->tx_queues[tx_queue_id] = priv;
+
+	struct eth_shared_mem_msg msg = (struct eth_shared_mem_msg){
+		.type = MSG_TYPE_TX_RING_OFFSET,
+		.offset = rte_shm_bus_get_mem_offset(priv->tx),
+	};
+	rte_shm_bus_send_message(&msg, sizeof(msg));
+
+	return 0;
+}
+
+static int
+shm_eth_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct shm_eth_private *priv = dev->data->dev_private;
+
+	stats->ibytes = priv->stats.rx_bytes;
+	stats->ipackets = priv->stats.rx_pkts;
+	stats->obytes = priv->stats.tx_bytes;
+	stats->opackets = priv->stats.tx_pkts;
+	return 0;
+}
+
+static const struct eth_dev_ops ops = {
+	.dev_configure = shm_eth_configure,
+	.dev_start = shm_eth_start,
+	.dev_stop = shm_eth_stop,
+	.dev_infos_get = shm_eth_infos_get,
+	.mtu_set = shm_eth_mtu_set,
+	.rx_queue_setup = shm_eth_rx_queue_setup,
+	.tx_queue_setup = shm_eth_tx_queue_setup,
+	.link_update = shm_eth_link_update,
+	.stats_get = shm_eth_stats_get,
+};
+
+static uint16_t
+shm_eth_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	void *deq_vals[nb_bufs];
+	struct shm_eth_private *priv = queue;
+	struct rte_ring *rxr = priv->rx;
+	uintptr_t offset = (uintptr_t)rte_shm_bus_get_mem_ptr(0);
+
+	int nb_rx = rte_ring_dequeue_burst(rxr, deq_vals, nb_bufs, NULL);
+	if (nb_rx == 0)
+		return 0;
+
+	uint64_t bytes = 0;
+	for (int i = 0; i < nb_rx; i++) {
+		bufs[i] = RTE_PTR_ADD(deq_vals[i], offset);
+		bufs[i]->pool = rx_mp;
+		bufs[i]->buf_addr = RTE_PTR_ADD(bufs[i]->buf_addr, offset);
+		bytes += bufs[i]->pkt_len;
+	}
+	priv->stats.rx_pkts += nb_rx;
+	priv->stats.rx_bytes += bytes;
+	return nb_rx;
+}
+
+static uint16_t
+shm_eth_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs)
+{
+	void *enq_vals[nb_bufs];
+	struct shm_eth_private *priv = queue;
+	struct rte_ring *txr = priv->tx;
+	uintptr_t offset = (uintptr_t)rte_shm_bus_get_mem_ptr(0);
+	uint64_t bytes = 0;
+
+	for (int i = 0; i < nb_bufs; i++) {
+		bufs[i]->buf_addr = RTE_PTR_SUB(bufs[i]->buf_addr, offset);
+		bytes += bufs[i]->pkt_len;
+		rte_cldemote(bufs[i]);
+		enq_vals[i] = RTE_PTR_SUB(bufs[i], offset);
+	}
+	uint16_t nb_enq = rte_ring_enqueue_burst(txr, enq_vals, nb_bufs, NULL);
+	if (nb_enq != nb_bufs) {
+		/* restore original buffer settings */
+		for (int i = nb_enq; i < nb_bufs; i++) {
+			bufs[i]->buf_addr = RTE_PTR_ADD(bufs[i]->buf_addr, offset);
+			bytes -= bufs[i]->pkt_len;
+		}
+	}
+	priv->stats.tx_pkts += nb_enq;
+	priv->stats.tx_bytes += bytes;
+	return nb_enq;
+}
+
+static int
+ethdev_init(struct rte_eth_dev *ethdev, void *init_params __rte_unused)
+{
+	struct shm_eth_private *priv = ethdev->data->dev_private;
+
+	ethdev->dev_ops = &ops;
+	ethdev->data->mac_addrs = &priv->addr;
+	ethdev->rx_pkt_burst = shm_eth_rx_burst;
+	ethdev->tx_pkt_burst = shm_eth_tx_burst;
+
+	struct eth_shared_mem_msg msg = (struct eth_shared_mem_msg){
+		.type = MSG_TYPE_GET_MAC,
+	};
+	rte_shm_bus_send_message(&msg, sizeof(msg));
+
+	rte_shm_bus_recv_message(&msg, sizeof(msg));
+	if (msg.type != MSG_TYPE_REPORT_MAC) {
+		SHM_ETH_ERR("Didn't get mac address from host");
+		return -1;
+	}
+	rte_ether_addr_copy(&msg.ethaddr, &priv->addr);
+
+	return 0;
+}
+
+static int
+shm_eth_probe(struct shared_mem_drv *drv, struct rte_device *dev)
+{
+	SHM_ETH_INFO("Probing device %p on driver %s", dev, drv->driver.name);
+	int ret = rte_eth_dev_create(dev, "shared_mem_ethdev",
+			sizeof(struct shm_eth_private),
+			NULL, NULL,
+			ethdev_init, NULL);
+	if (ret != 0)
+		goto out;
+
+	SHM_ETH_DEBUG("Ethdev created ok");
+out:
+	return ret;
+}
+
+struct shared_mem_drv shm_drv = {
+	.probe = shm_eth_probe,
+};
+
+RTE_PMD_REGISTER_SHMEM_DRV(shm_eth, shm_drv);