Patch Detail

GET:   Show a patch.
PATCH: Update a patch.
PUT:   Update a patch.

GET /api/patches/115952/?format=api
URL:        http://patches.dpdk.org/api/patches/115952/?format=api
Web URL:    http://patches.dpdk.org/project/dpdk/patch/20220906052149.21033-2-xuan.ding@intel.com/
Project:    DPDK (list: dev@dpdk.org / dev.dpdk.org; SCM: git://dpdk.org/dpdk)
Message-ID: <20220906052149.21033-2-xuan.ding@intel.com>
Archive:    https://inbox.dpdk.org/dev/20220906052149.21033-2-xuan.ding@intel.com
Date:       2022-09-06T05:21:48
Name:       [v2,1/2] vhost: introduce DMA vchannel unconfiguration
Prefixes:   v2, 1/2
State:      superseded (archived)
Hash:       6f35774390dbdcc174f3dbc37491908512e247b5
Submitter:  Ding, Xuan <xuan.ding@intel.com>
Delegate:   Maxime Coquelin <maxime.coquelin@redhat.com>
Mbox:       http://patches.dpdk.org/project/dpdk/patch/20220906052149.21033-2-xuan.ding@intel.com/mbox/
Series:     "vhost: introduce DMA vchannel unconfiguration", version 2
            (http://patches.dpdk.org/project/dpdk/list/?series=24549)
Comments:   http://patches.dpdk.org/api/patches/115952/comments/
Checks:     success (http://patches.dpdk.org/api/patches/115952/checks/)

Key mail headers:

From:        xuan.ding@intel.com
To:          maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc:          dev@dpdk.org, jiayu.hu@intel.com, xingguang.he@intel.com,
             yvonnex.yang@intel.com, cheng1.jiang@intel.com, yuanx.wang@intel.com,
             wenwux.ma@intel.com, Xuan Ding <xuan.ding@intel.com>
Subject:     [PATCH v2 1/2] vhost: introduce DMA vchannel unconfiguration
Date:        Tue, 6 Sep 2022 05:21:48 +0000
Message-Id:  <20220906052149.21033-2-xuan.ding@intel.com>
In-Reply-To: <20220906052149.21033-1-xuan.ding@intel.com>
References:  <20220814140442.82525-1-xuan.ding@intel.com>
             <20220906052149.21033-1-xuan.ding@intel.com>

Commit message:

From: Xuan Ding <xuan.ding@intel.com>

This patch adds a new API rte_vhost_async_dma_unconfigure() to unconfigure
DMA vchannels in the vhost async data path.

Lock protection is also added to protect DMA vchannel configuration and
unconfiguration from concurrent calls.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  5 ++
 doc/guides/rel_notes/release_22_11.rst |  2 +
 lib/vhost/rte_vhost_async.h            | 17 +++++++
 lib/vhost/version.map                  |  3 ++
 lib/vhost/vhost.c                      | 69 ++++++++++++++++++++++++--
 5 files changed, 91 insertions(+), 5 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index bad4d819e1..22764cbeaa 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -323,6 +323,11 @@ The following is an overview of some key Vhost API functions:
 Get device type of vDPA device, such as VDPA_DEVICE_TYPE_NET,
 VDPA_DEVICE_TYPE_BLK.
 
+* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
+
+ Clear DMA vChannels finished to use. This function needs to
+ be called after the deregisterration of async path has been finished.
+
 Vhost-user Implementations
 --------------------------
 
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 8c021cf050..e94c006e39 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -55,6 +55,8 @@ New Features
 Also, make sure to start the actual text at the margin.
 =======================================================
 
+* **Added vhost API to unconfigure DMA vchannels.**
+ Added an API which helps to unconfigure DMA vchannels.
 
 Removed Items
 -------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 1db2a10124..0442e027fd 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -266,6 +266,23 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
 	int *nr_inflight, int16_t dma_id, uint16_t vchan_id);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Unconfigure DMA vChannels in asynchronous data path.
+ *
+ * @param dma_id
+ * the identifier of DMA device
+ * @param vchan_id
+ * the identifier of virtual DMA channel
+ * @return
+ * 0 on success, and -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 18574346d5..013a6bcc42 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -96,6 +96,9 @@ EXPERIMENTAL {
 	rte_vhost_async_try_dequeue_burst;
 	rte_vhost_driver_get_vdpa_dev_type;
 	rte_vhost_clear_queue;
+
+	# added in 22.11
+	rte_vhost_async_dma_unconfigure;
 };
 
 INTERNAL {
diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 60cb05a0ff..273616da11 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -23,6 +23,7 @@
 
 struct virtio_net *vhost_devices[RTE_MAX_VHOST_DEVICE];
 pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
+static rte_spinlock_t vhost_dma_lock = RTE_SPINLOCK_INITIALIZER;
 
 struct vhost_vq_stats_name_off {
 	char name[RTE_VHOST_STATS_NAME_SIZE];
@@ -1870,19 +1871,20 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	void *pkts_cmpl_flag_addr;
 	uint16_t max_desc;
 
+	rte_spinlock_lock(&vhost_dma_lock);
 	if (!rte_dma_is_valid(dma_id)) {
 		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (rte_dma_info_get(dma_id, &info) != 0) {
 		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
-		return -1;
+		goto error;
 	}
 
 	if (vchan_id >= info.max_vchans) {
 		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
-		return -1;
+		goto error;
 	}
 
 	if (!dma_copy_track[dma_id].vchans) {
@@ -1894,7 +1896,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			VHOST_LOG_CONFIG("dma", ERR,
 				"Failed to allocate vchans for DMA %d vChannel %u.\n",
 				dma_id, vchan_id);
-			return -1;
+			goto error;
 		}
 
 		dma_copy_track[dma_id].vchans = vchans;
@@ -1903,6 +1905,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) {
 		VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n",
 			dma_id, vchan_id);
+		rte_spinlock_unlock(&vhost_dma_lock);
 		return 0;
 	}
 
@@ -1920,7 +1923,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 			rte_free(dma_copy_track[dma_id].vchans);
 			dma_copy_track[dma_id].vchans = NULL;
 		}
-		return -1;
+		goto error;
 	}
 
 	dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = pkts_cmpl_flag_addr;
@@ -1928,7 +1931,12 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id)
 	dma_copy_track[dma_id].vchans[vchan_id].ring_mask = max_desc - 1;
 	dma_copy_track[dma_id].nr_vchans++;
 
+	rte_spinlock_unlock(&vhost_dma_lock);
 	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
 }
 
 int
@@ -2117,5 +2125,56 @@ int rte_vhost_vring_stats_reset(int vid, uint16_t queue_id)
 	return 0;
 }
 
+int
+rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id)
+{
+	struct rte_dma_info info;
+	uint16_t max_desc;
+	int i;
+
+	rte_spinlock_lock(&vhost_dma_lock);
+	if (!rte_dma_is_valid(dma_id)) {
+		VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id);
+		goto error;
+	}
+
+	if (rte_dma_info_get(dma_id, &info) != 0) {
+		VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id);
+		goto error;
+	}
+
+	if (vchan_id >= info.max_vchans) {
+		VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id);
+		goto error;
+	}
+
+	max_desc = info.max_desc;
+	for (i = 0; i < max_desc; i++) {
+		if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] != NULL) {
+			rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i]);
+			dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr[i] = NULL;
+		}
+	}
+
+	if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr);
+		dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr = NULL;
+	}
+
+	if (dma_copy_track[dma_id].vchans != NULL) {
+		rte_free(dma_copy_track[dma_id].vchans);
+		dma_copy_track[dma_id].vchans = NULL;
+	}
+
+	dma_copy_track[dma_id].nr_vchans--;
+
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return 0;
+
+error:
+	rte_spinlock_unlock(&vhost_dma_lock);
+	return -1;
+}
+
 RTE_LOG_REGISTER_SUFFIX(vhost_config_log_level, config, INFO);
 RTE_LOG_REGISTER_SUFFIX(vhost_data_log_level, data, WARNING);