Patch Detail
get: Show a patch.
patch: Update a patch.
put: Update a patch.
GET /api/patches/126857/?format=api
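The request above can be reproduced from the command line. A minimal shell sketch (`curl` and `jq` are assumptions, not part of this page; the fetch itself is shown commented out since it needs network access):

```shell
# Build the Patchwork REST endpoint URL for patch id 126857.
patch_id=126857
url="https://patches.dpdk.org/api/patches/${patch_id}/"
echo "$url"

# Fetching and extracting a few fields would then look like:
#   curl -s "$url" | jq -r '.name, .state, .submitter.name'
# The raw patch is available from the "mbox" URL in the response and
# can be piped straight into `git am`.
```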
{
    "id": 126857,
    "url": "http://patches.dpdk.org/api/patches/126857/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20230515162907.8456-1-bruce.richardson@intel.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20230515162907.8456-1-bruce.richardson@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20230515162907.8456-1-bruce.richardson@intel.com",
    "date": "2023-05-15T16:29:07",
    "name": "dma/idxd: add support for multi-process when using VFIO",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "e241c205b7f0f58ad9c45277efda570208c74993",
    "submitter": {
        "id": 20,
        "url": "http://patches.dpdk.org/api/people/20/?format=api",
        "name": "Bruce Richardson",
        "email": "bruce.richardson@intel.com"
    },
    "delegate": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/users/1/?format=api",
        "username": "tmonjalo",
        "first_name": "Thomas",
        "last_name": "Monjalon",
        "email": "thomas@monjalon.net"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20230515162907.8456-1-bruce.richardson@intel.com/mbox/",
    "series": [
        {
            "id": 27997,
            "url": "http://patches.dpdk.org/api/series/27997/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=27997",
            "date": "2023-05-15T16:29:07",
            "name": "dma/idxd: add support for multi-process when using VFIO",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/27997/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/126857/comments/",
    "check": "warning",
    "checks": "http://patches.dpdk.org/api/patches/126857/checks/",
    "tags": {},
    "related": [],
    "prefixes": []
}

Message headers:

From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Cc: Bruce Richardson <bruce.richardson@intel.com>,
    Kevin Laatz <kevin.laatz@intel.com>,
    Anatoly Burakov <anatoly.burakov@intel.com>
Subject: [PATCH] dma/idxd: add support for multi-process when using VFIO
Date: Mon, 15 May 2023 17:29:07 +0100
Message-Id: <20230515162907.8456-1-bruce.richardson@intel.com>
X-Mailer: git-send-email 2.39.2

Commit message:

When using vfio-pci/uio for hardware access, we need to avoid
reinitializing the hardware when mapping from a secondary process.
Instead, just configure the function pointers and reuse the data
mappings from the primary process.

Along with the code change, update the driver doc to note that
vfio-pci can be used for multi-process support, and explicitly state
that multi-process support is unavailable when using the idxd kernel
driver.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 doc/guides/dmadevs/idxd.rst    |  5 +++++
 drivers/dma/idxd/idxd_common.c |  6 ++++--
 drivers/dma/idxd/idxd_pci.c    | 30 ++++++++++++++++++++++++++++++
 3 files changed, 39 insertions(+), 2 deletions(-)

Diff:

diff --git a/doc/guides/dmadevs/idxd.rst b/doc/guides/dmadevs/idxd.rst
index bdfd3e78ad..f75d1d0a85 100644
--- a/doc/guides/dmadevs/idxd.rst
+++ b/doc/guides/dmadevs/idxd.rst
@@ -35,6 +35,11 @@ Device Setup
 Intel\ |reg| DSA devices can use the IDXD kernel driver or DPDK-supported drivers,
 such as ``vfio-pci``. Both are supported by the IDXD PMD.
 
+.. note::
+   To use Intel\ |reg| DSA devices in DPDK multi-process applications,
+   the devices should be bound to the vfio-pci driver.
+   Multi-process is not supported when using the kernel IDXD driver.
+
 Intel\ |reg| DSA devices using IDXD kernel driver
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 6fe8ad4884..83d53942eb 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -599,6 +599,10 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
 	dmadev->fp_obj->completed = idxd_completed;
 	dmadev->fp_obj->completed_status = idxd_completed_status;
 	dmadev->fp_obj->burst_capacity = idxd_burst_capacity;
+	dmadev->fp_obj->dev_private = dmadev->data->dev_private;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
 
 	idxd = dmadev->data->dev_private;
 	*idxd = *base_idxd; /* copy over the main fields already passed in */
@@ -619,8 +623,6 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
 	idxd->batch_idx_ring = (void *)&idxd->batch_comp_ring[idxd->max_batches+1];
 	idxd->batch_iova = rte_mem_virt2iova(idxd->batch_comp_ring);
 
-	dmadev->fp_obj->dev_private = idxd;
-
 	idxd->dmadev->state = RTE_DMA_DEV_READY;
 
 	return 0;
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 781fa02db3..5fe9314d01 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -309,6 +309,36 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 	IDXD_PMD_INFO("Init %s on NUMA node %d", name, dev->device.numa_node);
 	dev->device.driver = &drv->driver;
 
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		char qname[32];
+		int max_qid;
+
+		/* look up queue 0 to get the pci structure */
+		snprintf(qname, sizeof(qname), "%s-q0", name);
+		IDXD_PMD_INFO("Looking up %s\n", qname);
+		ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+		if (ret != 0) {
+			IDXD_PMD_ERR("Failed to create dmadev %s", name);
+			return ret;
+		}
+		qid = rte_dma_get_dev_id_by_name(qname);
+		max_qid = rte_atomic16_read(
+			&((struct idxd_dmadev *)rte_dma_fp_objs[qid].dev_private)->u.pci->ref_count);
+
+		/* we have queue 0 done, now configure the rest of the queues */
+		for (qid = 1; qid < max_qid; qid++) {
+			/* add the queue number to each device name */
+			snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
+			IDXD_PMD_INFO("Looking up %s\n", qname);
+			ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
+			if (ret != 0) {
+				IDXD_PMD_ERR("Failed to create dmadev %s", name);
+				return ret;
+			}
+		}
+		return 0;
+	}
+
 	if (dev->device.devargs && dev->device.devargs->args[0] != '\0') {
 		/* if the number of devargs grows beyond just 1, use rte_kvargs */
 		if (sscanf(dev->device.devargs->args,
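The shape of the change is: fast-path function pointers are configured unconditionally, hardware initialization runs only in the primary process, and a secondary process discovers how many queues the primary created (via the shared ref_count) and attaches to each by name. A minimal, self-contained sketch of that pattern follows; everything here (mock_idxd, mock_dev_create, mock_probe_secondary, proc_type) is a hypothetical mock, not the DPDK API, standing in for rte_eal_process_type()/RTE_PROC_PRIMARY and the driver's create/probe functions:

```c
/* Mocked stand-in for the EAL process model. */
enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

struct mock_idxd {
	int hw_inited;  /* times the hardware-init path ran */
	int fp_set;     /* fast-path function pointers configured */
	int ref_count;  /* queues created by the primary (cf. u.pci->ref_count) */
};

/* Mirrors the reworked idxd_dmadev_create(): fast-path setup happens
 * in every process, hardware init only in the primary. */
static int mock_dev_create(struct mock_idxd *d, enum proc_type proc)
{
	d->fp_set = 1;            /* needed by every process */
	if (proc != PROC_PRIMARY)
		return 0;         /* secondary: reuse primary's mappings */
	d->hw_inited++;           /* primary: full hardware init */
	d->ref_count++;           /* one more queue now usable */
	return 0;
}

/* Mirrors the secondary-process branch of idxd_dmadev_probe_pci():
 * attach queue 0 first, learn how many queues the primary created,
 * then attach the rest. Returns queues attached, or -1 on error. */
static int mock_probe_secondary(struct mock_idxd *queues, int primary_ref_count)
{
	int qid, max_qid;

	if (mock_dev_create(&queues[0], PROC_SECONDARY) != 0)
		return -1;
	/* the real code reads this through rte_atomic16_read() on
	 * shared memory owned by the primary */
	max_qid = primary_ref_count;

	for (qid = 1; qid < max_qid; qid++)
		if (mock_dev_create(&queues[qid], PROC_SECONDARY) != 0)
			return -1;
	return max_qid;
}
```

The key invariant the patch establishes is visible in the mock: after a secondary attaches, every queue has its fast path configured, but the hardware-init counter never moves in the secondary, so the device is only ever initialized once, by the primary.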