From patchwork Mon Oct 21 08:03:21 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 61559
From: Vamsi Attunuru <vattunuru@marvell.com>
Date: Mon, 21 Oct 2019 13:33:21 +0530
Message-ID: <20191021080324.10659-2-vattunuru@marvell.com>
In-Reply-To: <20191021080324.10659-1-vattunuru@marvell.com>
References: <20190816061252.17214-1-vattunuru@marvell.com>
 <20191021080324.10659-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v11 1/4] mempool: populate mempool with page
 sized chunks

From: Vamsi Attunuru <vattunuru@marvell.com>

This patch adds a routine that populates a mempool from page-aligned,
page-sized chunks of memory, ensuring that mempool objects do not span
page boundaries. It is useful for applications that require physically
contiguous mbuf memory while running in IOVA=VA mode.
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
Acked-by: Olivier Matz
---
 lib/librte_mempool/rte_mempool.c           | 69 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool.h           | 20 +++++++++
 lib/librte_mempool/rte_mempool_version.map |  2 +
 3 files changed, 91 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0f29e87..ef1298d 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,75 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Function to populate the mempool from page sized memory chunks: allocate
+ * page sized memzones and populate from them. Return the number of objects
+ * added, or a negative value on error.
+ */
+int
+rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	size_t align, pg_sz, pg_shift;
+	const struct rte_memzone *mz;
+	unsigned int mz_id, n;
+	size_t min_chunk_size;
+	int ret;
+
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
+	if (mp->nb_mem_chunks != 0)
+		return -EEXIST;
+
+	pg_sz = get_min_page_size(mp->socket_id);
+	pg_shift = rte_bsf32(pg_sz);
+
+	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+
+		ret = rte_mempool_ops_calc_mem_size(mp, n,
+				pg_shift, &min_chunk_size, &align);
+
+		if (ret < 0)
+			goto fail;
+
+		if (min_chunk_size > pg_sz) {
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		ret = snprintf(mz_name, sizeof(mz_name),
+			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
+		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+			ret = -ENAMETOOLONG;
+			goto fail;
+		}
+
+		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
+				mp->socket_id, 0, align);
+
+		if (mz == NULL) {
+			ret = -rte_errno;
+			goto fail;
+		}
+
+		ret = rte_mempool_populate_iova(mp, mz->addr,
+				mz->iova, mz->len,
+				rte_mempool_memchunk_mz_free,
+				(void *)(uintptr_t)mz);
+		if (ret < 0) {
+			rte_memzone_free(mz);
+			goto fail;
+		}
+	}
+
+	return mp->size;
+
+fail:
+	rte_mempool_free_memchunks(mp);
+	return ret;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..2f5126e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1062,6 +1062,26 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	void *opaque);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Add memory from page sized memzones for objects in the pool at init.
+ *
+ * This function populates the mempool with memzone memory that is both page
+ * aligned and page sized, so that object memory is never spread across two
+ * pages and all mempool objects reside within page boundaries.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+__rte_experimental
+int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
+
+/**
  * Add memory for objects in the pool at init
  *
  * This is the default function used by rte_mempool_create() to populate
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4..d6fe5a5 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,4 +57,6 @@ EXPERIMENTAL {
 	global:
 
 	rte_mempool_ops_get_info;
+	# added in 19.11
+	rte_mempool_populate_from_pg_sz_chunks;
 };
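For illustration, a minimal sketch of how an application might use the new
routine when building a pktmbuf pool by hand. The pool name, sizing, and error
handling below are illustrative, not part of the patch; the sequence mirrors
what rte_kni_pktmbuf_pool_create() in patch 3/4 does internally:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Build a pktmbuf pool whose objects never cross a page boundary. */
    static struct rte_mempool *
    make_page_safe_pool(unsigned int n, int socket_id)
    {
        struct rte_pktmbuf_pool_private priv = {
            .mbuf_data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE,
            .mbuf_priv_size = 0,
        };
        unsigned int elt_size = sizeof(struct rte_mbuf) +
            RTE_MBUF_DEFAULT_BUF_SIZE;
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty("my_pool", n, elt_size, 0,
                sizeof(priv), socket_id, 0);
        if (mp == NULL)
            return NULL;
        rte_pktmbuf_pool_init(mp, &priv);

        /* New routine: every memzone chunk is page sized and page
         * aligned, so no mbuf straddles a page boundary. */
        if (rte_mempool_populate_from_pg_sz_chunks(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }
        rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
        return mp;
    }

The trade-off is that each chunk is at most one page, so pools with many
objects are built from many small memzones rather than a few large ones.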
"--legacy-kni" command-line option. It will be used to run existing KNI applications with DPDK 19.11 and later. Signed-off-by: Vamsi Attunuru Suggested-by: Ferruh Yigit --- doc/guides/rel_notes/release_19_11.rst | 4 ++++ lib/librte_eal/common/eal_common_options.c | 5 +++++ lib/librte_eal/common/eal_internal_cfg.h | 2 ++ lib/librte_eal/common/eal_options.h | 2 ++ 4 files changed, 13 insertions(+) diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst index 85953b9..ab2c381 100644 --- a/doc/guides/rel_notes/release_19_11.rst +++ b/doc/guides/rel_notes/release_19_11.rst @@ -115,6 +115,10 @@ New Features Added eBPF JIT support for arm64 architecture to improve the eBPF program performance. +* **Added EAL option to operate KNI in legacy mode.** + + Added EAL option ``--legacy-kni`` to make existing KNI applications work + with DPDK 19.11 and later. Removed Items ------------- diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index 05cae5f..8f5174e 100644 --- a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -81,6 +81,7 @@ eal_long_options[] = { {OPT_LEGACY_MEM, 0, NULL, OPT_LEGACY_MEM_NUM }, {OPT_SINGLE_FILE_SEGMENTS, 0, NULL, OPT_SINGLE_FILE_SEGMENTS_NUM}, {OPT_MATCH_ALLOCATIONS, 0, NULL, OPT_MATCH_ALLOCATIONS_NUM}, + {OPT_LEGACY_KNI, 0, NULL, OPT_LEGACY_KNI_NUM }, {0, 0, NULL, 0 } }; @@ -1408,6 +1409,9 @@ eal_parse_common_option(int opt, const char *optarg, return -1; } break; + case OPT_LEGACY_KNI_NUM: + conf->legacy_kni = 1; + break; /* don't know what to do, leave this to caller */ default: @@ -1636,6 +1640,7 @@ eal_common_usage(void) " (ex: --vdev=net_pcap0,iface=eth2).\n" " --"OPT_IOVA_MODE" Set IOVA mode. 'pa' for IOVA_PA\n" " 'va' for IOVA_VA\n" + " --"OPT_LEGACY_KNI" Run KNI in IOVA_PA mode (legacy mode)\n" " -d LIB.so|DIR Add a driver or driver directory\n" " (can be used multiple times)\n" " --"OPT_VMWARE_TSC_MAP" Use VMware TSC map instead of native RDTSC\n" diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h index a42f349..eee71ec 100644 --- a/lib/librte_eal/common/eal_internal_cfg.h +++ b/lib/librte_eal/common/eal_internal_cfg.h @@ -82,6 +82,8 @@ struct internal_config { rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */ volatile unsigned int init_complete; /**< indicates whether EAL has completed initialization */ + volatile unsigned legacy_kni; + /**< true to enable legacy kni behavior */ }; extern struct internal_config internal_config; /**< Global EAL configuration. 
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index 9855429..1010ed3 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -69,6 +69,8 @@ enum {
 	OPT_IOVA_MODE_NUM,
 #define OPT_MATCH_ALLOCATIONS "match-allocations"
 	OPT_MATCH_ALLOCATIONS_NUM,
+#define OPT_LEGACY_KNI "legacy-kni"
+	OPT_LEGACY_KNI_NUM,
 	OPT_LONG_MAX_NUM
 };
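As a usage sketch, an unmodified KNI application opts back into the old
IOVA=PA behavior with only a command-line change. The flags after the ``--``
separator are the usual ``examples/kni`` sample-application parameters and are
shown purely for illustration:

    ./build/kni -l 4-7 -n 4 --legacy-kni -- -P -p 0x3 --config="(0,4,6),(1,5,7)"

No recompilation is needed; the option simply forces the EAL into IOVA=PA
during initialization (see patch 3/4 for the enforcement logic).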
From patchwork Mon Oct 21 08:03:23 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 61561
From: Vamsi Attunuru <vattunuru@marvell.com>
Date: Mon, 21 Oct 2019 13:33:23 +0530
Message-ID: <20191021080324.10659-4-vattunuru@marvell.com>
In-Reply-To: <20191021080324.10659-1-vattunuru@marvell.com>
References: <20190816061252.17214-1-vattunuru@marvell.com>
 <20191021080324.10659-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v11 3/4] kni: add IOVA=VA support

From: Vamsi Attunuru <vattunuru@marvell.com>

The current KNI implementation operates only in IOVA=PA mode. This patch
adds the functionality required to enable KNI in IOVA=VA mode.

The packet pool's mbuf memory must be physically contiguous for the KNI
kernel module to work in IOVA=VA mode, so new KNI packet pool create/free
APIs are introduced to take care of this memory requirement. examples/kni/
is updated to use these APIs and works in both IOVA=VA and IOVA=PA mode.

Existing KNI applications can use the ``--legacy-kni`` EAL option to work
with DPDK 19.11 and later. When this option is selected, the IOVA mode is
forced to PA so that old KNI applications work with the latest DPDK without
code changes.

Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
Suggested-by: Ferruh Yigit
---
 doc/guides/prog_guide/kernel_nic_interface.rst    | 26 +++++++++
 doc/guides/rel_notes/release_19_11.rst            | 13 +++++
 examples/kni/main.c                               |  6 +-
 lib/librte_eal/linux/eal/eal.c                    | 39 +++++++++----
 lib/librte_eal/linux/eal/include/rte_kni_common.h |  1 +
 lib/librte_kni/Makefile                           |  1 +
 lib/librte_kni/meson.build                        |  1 +
 lib/librte_kni/rte_kni.c                          | 67 +++++++++++++++++++++--
 lib/librte_kni/rte_kni.h                          | 48 ++++++++++++++++
 lib/librte_kni/rte_kni_version.map                |  3 +
 10 files changed, 187 insertions(+), 18 deletions(-)

diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 2fd58e1..80f731c 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -300,6 +300,32 @@ The sk_buff is then freed and the mbuf sent in the tx_q FIFO.
 The DPDK TX thread dequeues the mbuf and sends it to the PMD via ``rte_eth_tx_burst()``.
 It then puts the mbuf back in the cache.
 
+IOVA = VA support
+-----------------
+
+KNI can be operated in the IOVA=VA scheme when
+
+- LINUX_VERSION_CODE >= KERNEL_VERSION(4, 8, 0), and
+- the EAL option ``--iova-mode=va`` is passed, or the bus IOVA scheme
+  selected by DPDK is RTE_IOVA_VA.
+
+Packet pool APIs for IOVA=VA mode
+---------------------------------
+
+The ``rte_kni_pktmbuf_pool_create`` and ``rte_kni_pktmbuf_pool_free`` APIs
+must be used to create packet pools for KNI applications running in IOVA=VA
+mode. The packet pool's mbuf memory must be physically contiguous for the
+KNI kernel module to work in IOVA=VA mode; this memory requirement is taken
+care of inside these pool create APIs.
+
+Command-line option for legacy KNI
+----------------------------------
+
+Existing KNI applications can use the ``--legacy-kni`` EAL command-line
+option to work with DPDK 19.11 and later. When this option is selected, the
+IOVA mode is forced to PA so that old KNI applications work with the latest
+DPDK without code changes.
+
 Ethtool
 -------
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index ab2c381..e4296a0 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -120,6 +120,19 @@ New Features
   Added EAL option ``--legacy-kni`` to make existing KNI applications work
   with DPDK 19.11 and later.
 
+* **Added IOVA as VA support for KNI.**
+
+  Added IOVA as VA support for KNI. When KNI operates in IOVA=VA mode, the
+  packet pool's mbuf memory must be physically contiguous. This memory
+  requirement is taken care of by the new ``rte_kni_pktmbuf_pool_create``
+  and ``rte_kni_pktmbuf_pool_free`` routines.
+
+  ``examples/kni/`` is updated to use these APIs and works in both IOVA=VA
+  and IOVA=PA mode.
+
+  When ``--legacy-kni`` is selected, the IOVA mode is forced to PA so that
+  old KNI applications work with the latest DPDK without code changes.
+
 
 Removed Items
 -------------
diff --git a/examples/kni/main.c b/examples/kni/main.c
index c576fc7..d2f3b46 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -1017,8 +1017,9 @@ main(int argc, char** argv)
 		rte_exit(EXIT_FAILURE, "Could not parse input parameters\n");
 
 	/* Create the mbuf pool */
-	pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+	pktmbuf_pool = rte_kni_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
 		MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id());
+
 	if (pktmbuf_pool == NULL) {
 		rte_exit(EXIT_FAILURE, "Could not initialise mbuf pool\n");
 		return -1;
@@ -1085,6 +1086,9 @@ main(int argc, char** argv)
 			continue;
 		kni_free_kni(port);
 	}
+
+	rte_kni_pktmbuf_pool_free(pktmbuf_pool);
+
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++)
 		if (kni_port_params_array[i]) {
 			rte_free(kni_port_params_array[i]);
diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c
index f397206..f807044 100644
--- a/lib/librte_eal/linux/eal/eal.c
+++ b/lib/librte_eal/linux/eal/eal.c
@@ -947,6 +947,29 @@ static int rte_eal_vfio_setup(void)
 }
 #endif
 
+static enum rte_iova_mode
+rte_eal_kni_get_iova_mode(enum rte_iova_mode iova_mode)
+{
+	if (iova_mode == RTE_IOVA_PA)
+		goto exit;
+
+	if (internal_config.legacy_kni) {
+		iova_mode = RTE_IOVA_PA;
+		RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because legacy KNI is enabled\n");
+		goto exit;
+	}
+
+	if (iova_mode == RTE_IOVA_VA) {
+#if KERNEL_VERSION(4, 8, 0) > LINUX_VERSION_CODE
+		iova_mode = RTE_IOVA_PA;
+		RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module does not support VA\n");
+#endif
+	}
+
+exit:
+	return iova_mode;
+}
+
 static void rte_eal_init_alert(const char *msg)
 {
 	fprintf(stderr, "EAL: FATAL: %s\n", msg);
@@ -1110,24 +1133,16 @@ rte_eal_init(int argc, char **argv)
 				RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n");
 			}
 		}
-#ifdef RTE_LIBRTE_KNI
-		/* Workaround for KNI which requires physical address to work */
-		if (iova_mode == RTE_IOVA_VA &&
-				rte_eal_check_module("rte_kni") == 1) {
-			if (phys_addrs) {
-				iova_mode = RTE_IOVA_PA;
-				RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module is loaded\n");
-			} else {
-				RTE_LOG(DEBUG, EAL, "KNI can not work since physical addresses are unavailable\n");
-			}
-		}
-#endif
 		rte_eal_get_configuration()->iova_mode = iova_mode;
 	} else {
 		rte_eal_get_configuration()->iova_mode =
 			internal_config.iova_mode;
 	}
 
+	if (rte_eal_check_module("rte_kni") == 1)
+		rte_eal_get_configuration()->iova_mode =
+			rte_eal_kni_get_iova_mode(rte_eal_iova_mode());
+
 	if (rte_eal_iova_mode() == RTE_IOVA_PA && !phys_addrs) {
 		rte_eal_init_alert("Cannot use IOVA as 'PA' since physical addresses are not available");
 		rte_errno = EINVAL;
diff --git a/lib/librte_eal/linux/eal/include/rte_kni_common.h b/lib/librte_eal/linux/eal/include/rte_kni_common.h
index b51fe27..1b96cf6 100644
--- a/lib/librte_eal/linux/eal/include/rte_kni_common.h
+++ b/lib/librte_eal/linux/eal/include/rte_kni_common.h
@@ -123,6 +123,7 @@ struct rte_kni_device_info {
 	unsigned mbuf_size;
 	unsigned int mtu;
 	uint8_t mac_addr[6];
+	uint8_t iova_mode;
 };
 
 #define KNI_DEVICE "kni"
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index cbd6599..6405524 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -6,6 +6,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 # library name
 LIB = librte_kni.a
 
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -fno-strict-aliasing
 LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf -lrte_ethdev
diff --git a/lib/librte_kni/meson.build b/lib/librte_kni/meson.build
index 41fa2e3..dd4c8da 100644
--- a/lib/librte_kni/meson.build
+++ b/lib/librte_kni/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+allow_experimental_apis = true
 if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
 	build = false
 	reason = 'only supported on 64-bit linux'
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 0f36485..1e53f05 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -21,6 +21,7 @@
 #include <rte_memzone.h>
 #include <rte_tailq.h>
 #include <rte_rwlock.h>
+#include <rte_mbuf_pool_ops.h>
 #include <rte_eal_memconfig.h>
 #include <rte_kni_common.h>
 #include "rte_kni_fifo.h"
@@ -97,11 +98,6 @@ static volatile int kni_fd = -1;
 int
 rte_kni_init(unsigned int max_kni_ifaces __rte_unused)
 {
-	if (rte_eal_iova_mode() != RTE_IOVA_PA) {
-		RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n");
-		return -1;
-	}
-
 	/* Check FD and open */
 	if (kni_fd < 0) {
 		kni_fd = open("/dev/" KNI_DEVICE, O_RDWR);
@@ -300,6 +296,8 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	kni->group_id = conf->group_id;
 	kni->mbuf_size = conf->mbuf_size;
 
+	dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
+
 	ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
 	if (ret < 0)
 		goto ioctl_fail;
@@ -687,6 +685,65 @@ kni_allocate_mbufs(struct rte_kni *kni)
 	}
 }
 
+struct rte_mempool *
+rte_kni_pktmbuf_pool_create(const char *name, unsigned int n,
+	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
+	int socket_id)
+{
+	struct rte_pktmbuf_pool_private mbp_priv;
+	const char *mp_ops_name;
+	struct rte_mempool *mp;
+	unsigned int elt_size;
+	int ret;
+
+	if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) {
+		RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n",
+			priv_size);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	elt_size = sizeof(struct rte_mbuf) + (unsigned int)priv_size +
+		(unsigned int)data_room_size;
+	mbp_priv.mbuf_data_room_size = data_room_size;
+	mbp_priv.mbuf_priv_size = priv_size;
+
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	mp_ops_name = rte_mbuf_best_mempool_ops();
+	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
+	if (ret != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA)
+		ret = rte_mempool_populate_from_pg_sz_chunks(mp);
+	else
+		ret = rte_mempool_populate_default(mp);
+
+	if (ret < 0) {
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
+}
+
+void
+rte_kni_pktmbuf_pool_free(struct rte_mempool *mp)
+{
+	rte_mempool_free(mp);
+}
+
 struct rte_kni *
 rte_kni_get(const char *name)
 {
diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index f6b66c3..2cfdc38 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -187,6 +187,54 @@ unsigned rte_kni_tx_burst(struct rte_kni *kni,
 	struct rte_mbuf **mbufs, unsigned num);
 
 /**
+ * Create a KNI packet mbuf pool.
+ *
+ * This function creates and initializes a packet mbuf pool for KNI
+ * applications. It calls the required mempool populate routine based on
+ * the IOVA mode.
+ *
+ * @param name
+ *   The name of the mbuf pool.
+ * @param n
+ *   The number of elements in the mbuf pool. The optimum size (in terms
+ *   of memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param cache_size
+ *   Size of the per-core object cache. See rte_mempool_create() for
+ *   details.
+ * @param priv_size
+ *   Size of the application private area between the rte_mbuf structure
+ *   and the data buffer. This value must be aligned to RTE_MBUF_PRIV_ALIGN.
+ * @param data_room_size
+ *   Size of data buffer in each mbuf, including RTE_PKTMBUF_HEADROOM.
+ * @param socket_id
+ *   The socket identifier where the memory should be allocated. The
+ *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
+ *   reserved zone.
+ * @return
+ *   The pointer to the newly allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+__rte_experimental
+struct rte_mempool *rte_kni_pktmbuf_pool_create(const char *name,
+	unsigned int n, unsigned int cache_size, uint16_t priv_size,
+	uint16_t data_room_size, int socket_id);
+
+/**
+ * Free the given packet mempool.
+ *
+ * @param mp
+ *   The mempool pointer.
+ */
+__rte_experimental
+void rte_kni_pktmbuf_pool_free(struct rte_mempool *mp);
+
+/**
  * Get the KNI context of its name.
  *
  * @param name
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index c877dc6..5937bff 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -19,5 +19,8 @@ DPDK_2.0 {
 EXPERIMENTAL {
 	global:
 
+	# added in 19.11
 	rte_kni_update_link;
+	rte_kni_pktmbuf_pool_create;
+	rte_kni_pktmbuf_pool_free;
 };
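For illustration, a minimal application-side sketch of the two new calls.
The pool name, NB_MBUF, MBUF_DATA_SZ, and the cache size are illustrative
values, not mandated by the patch:

    #include <stdlib.h>
    #include <rte_debug.h>
    #include <rte_kni.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    #define NB_MBUF 8192                                /* illustrative */
    #define MBUF_DATA_SZ (1500 + RTE_PKTMBUF_HEADROOM)  /* illustrative */

    static struct rte_mempool *
    kni_pool_setup(void)
    {
        /* In IOVA=VA mode this populates the pool from page sized
         * chunks (patch 1/4); in IOVA=PA mode it falls back to the
         * default populate path. */
        struct rte_mempool *pool = rte_kni_pktmbuf_pool_create(
            "kni_mbuf_pool", NB_MBUF, 256, 0, MBUF_DATA_SZ,
            rte_socket_id());

        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "Could not create mbuf pool\n");
        return pool;
    }

At shutdown the pool is released with rte_kni_pktmbuf_pool_free(), as the
updated examples/kni/main.c above does.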
From patchwork Mon Oct 21 08:03:24 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 61562
From: Vamsi Attunuru <vattunuru@marvell.com>
Date: Mon, 21 Oct 2019 13:33:24 +0530
Message-ID: <20191021080324.10659-5-vattunuru@marvell.com>
In-Reply-To: <20191021080324.10659-1-vattunuru@marvell.com>
References: <20190816061252.17214-1-vattunuru@marvell.com>
 <20191021080324.10659-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v11 4/4] kni: add IOVA=VA support in kernel module

From: Vamsi Attunuru <vattunuru@marvell.com>

This patch adds support in the kernel module for IOVA=VA mode by providing
address translation routines that convert an IOVA (i.e., a user-space VA)
to a kernel virtual address.

Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
---
 kernel/linux/kni/compat.h   |  4 ++++
 kernel/linux/kni/kni_dev.h  | 31 ++++++++++++++++++++++++
 kernel/linux/kni/kni_misc.c | 39 +++++++++++++++++++++++-------
 kernel/linux/kni/kni_net.c  | 58 +++++++++++++++++++++++++++++++++++----------
 4 files changed, 110 insertions(+), 22 deletions(-)

diff --git a/kernel/linux/kni/compat.h b/kernel/linux/kni/compat.h
index 562d8bf..b5e8914 100644
--- a/kernel/linux/kni/compat.h
+++ b/kernel/linux/kni/compat.h
@@ -121,3 +121,7 @@
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
 #define HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
 #endif
+
+#if KERNEL_VERSION(4, 8, 0) <= LINUX_VERSION_CODE
+#define HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+#endif
diff --git a/kernel/linux/kni/kni_dev.h b/kernel/linux/kni/kni_dev.h
index c1ca678..abe9b14 100644
--- a/kernel/linux/kni/kni_dev.h
+++ b/kernel/linux/kni/kni_dev.h
@@ -41,6 +41,8 @@ struct kni_dev {
 	/* kni list */
 	struct list_head list;
 
+	uint8_t iova_mode;
+
 	uint32_t core_id;            /* Core ID to bind */
 	char name[RTE_KNI_NAMESIZE]; /* Network device name */
 	struct task_struct *pthread;
@@ -84,8 +86,37 @@ struct kni_dev {
 	void *va[MBUF_BURST_SZ];
 	void *alloc_pa[MBUF_BURST_SZ];
 	void *alloc_va[MBUF_BURST_SZ];
+
+	struct task_struct *usr_tsk;
 };
 
+static inline phys_addr_t iova_to_phys(struct task_struct *tsk,
+				       unsigned long iova)
+{
+	unsigned int flags = FOLL_TOUCH;
+	phys_addr_t offset, phys_addr;
+	struct page *page = NULL;
+	int ret;
+
+	offset = iova & (PAGE_SIZE - 1);
+
+	/* Read one page struct info */
+	ret = get_user_pages_remote(tsk, tsk->mm, iova, 1,
+				    flags, &page, 0, 0);
+	if (ret < 0)
+		return 0;
+
+	phys_addr = page_to_phys(page) | offset;
+	put_page(page);
+
+	return phys_addr;
+}
+
+static inline void *iova_to_kva(struct task_struct *tsk, unsigned long iova)
+{
+	return phys_to_virt(iova_to_phys(tsk, iova));
+}
+
 void kni_net_release_fifo_phy(struct kni_dev *kni);
 void kni_net_rx(struct kni_dev *kni);
 void kni_net_init(struct net_device *dev);
diff --git a/kernel/linux/kni/kni_misc.c b/kernel/linux/kni/kni_misc.c
index 2b75502..7af7ab4 100644
--- a/kernel/linux/kni/kni_misc.c
+++ b/kernel/linux/kni/kni_misc.c
@@ -348,15 +348,36 @@ kni_ioctl_create(struct net *net, uint32_t ioctl_num,
 	strncpy(kni->name, dev_info.name, RTE_KNI_NAMESIZE);
 
 	/* Translate user space info into kernel space info */
-	kni->tx_q = phys_to_virt(dev_info.tx_phys);
-	kni->rx_q = phys_to_virt(dev_info.rx_phys);
-	kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
-	kni->free_q = phys_to_virt(dev_info.free_phys);
-
-	kni->req_q = phys_to_virt(dev_info.req_phys);
-	kni->resp_q = phys_to_virt(dev_info.resp_phys);
-	kni->sync_va = dev_info.sync_va;
-	kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+	if (dev_info.iova_mode) {
+#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
+		kni->tx_q = iova_to_kva(current, dev_info.tx_phys);
+		kni->rx_q = iova_to_kva(current, dev_info.rx_phys);
+		kni->alloc_q = iova_to_kva(current, dev_info.alloc_phys);
+		kni->free_q = iova_to_kva(current, dev_info.free_phys);
+
+		kni->req_q = iova_to_kva(current, dev_info.req_phys);
+		kni->resp_q = iova_to_kva(current, dev_info.resp_phys);
+		kni->sync_va = dev_info.sync_va;
+		kni->sync_kva = iova_to_kva(current, dev_info.sync_phys);
+		kni->usr_tsk = current;
+		kni->iova_mode = 1;
+#else
+		pr_err("KNI module does not support IOVA to VA translation\n");
+		return -EINVAL;
+#endif
+	} else {
+
+		kni->tx_q = phys_to_virt(dev_info.tx_phys);
+		kni->rx_q = phys_to_virt(dev_info.rx_phys);
+		kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
+		kni->free_q = phys_to_virt(dev_info.free_phys);
+
+		kni->req_q = phys_to_virt(dev_info.req_phys);
+		kni->resp_q = phys_to_virt(dev_info.resp_phys);
+		kni->sync_va = dev_info.sync_va;
+		kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+		kni->iova_mode = 0;
+	}
 
 	kni->mbuf_size = dev_info.mbuf_size;
diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index f25b127..e95207b 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -36,6 +36,20 @@ static void kni_net_rx_normal(struct kni_dev *kni);
 /* kni rx function pointer, with default to normal rx */
 static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
 
+/* iova to kernel virtual address */
+static inline void *
+iova2kva(struct kni_dev *kni, void *iova)
+{
+	return phys_to_virt(iova_to_phys(kni->usr_tsk, (unsigned long)iova));
+}
+
+static inline void *
+iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
+{
+	return phys_to_virt(iova_to_phys(kni->usr_tsk, m->buf_physaddr) +
+			    m->data_off);
+}
+
 /* physical address to kernel virtual address */
 static void *
 pa2kva(void *pa)
@@ -62,6 +76,24 @@ kva2data_kva(struct rte_kni_mbuf *m)
 	return phys_to_virt(m->buf_physaddr + m->data_off);
 }
 
+static inline void *
+get_kva(struct kni_dev *kni, void *pa)
+{
+	if (kni->iova_mode == 1)
+		return iova2kva(kni, pa);
+
+	return pa2kva(pa);
+}
+
+static inline void *
+get_data_kva(struct kni_dev *kni, void *pkt_kva)
+{
+	if (kni->iova_mode == 1)
+		return iova2data_kva(kni, pkt_kva);
+
+	return kva2data_kva(pkt_kva);
+}
+
 /*
  * It can be called to process the request.
 
 */
@@ -178,7 +210,7 @@ kni_fifo_trans_pa2va(struct kni_dev *kni,
 		return;
 
 	for (i = 0; i < num_rx; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		kva_nb_segs = kva->nb_segs;
@@ -266,8 +298,8 @@ kni_net_tx(struct sk_buff *skb, struct net_device *dev)
 	if (likely(ret == 1)) {
 		void *data_kva;
 
-		pkt_kva = pa2kva(pkt_pa);
-		data_kva = kva2data_kva(pkt_kva);
+		pkt_kva = get_kva(kni, pkt_pa);
+		data_kva = get_data_kva(kni, pkt_kva);
 		pkt_va = pa2va(pkt_pa, pkt_kva);
 
 		len = skb->len;
@@ -338,9 +370,9 @@ kni_net_rx_normal(struct kni_dev *kni)
 
 	/* Transfer received packets to netif */
 	for (i = 0; i < num_rx; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		len = kva->pkt_len;
-		data_kva = kva2data_kva(kva);
+		data_kva = get_data_kva(kni, kva);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		skb = netdev_alloc_skb(dev, len);
@@ -437,9 +469,9 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 		num = ret;
 		/* Copy mbufs */
 		for (i = 0; i < num; i++) {
-			kva = pa2kva(kni->pa[i]);
+			kva = get_kva(kni, kni->pa[i]);
 			len = kva->data_len;
-			data_kva = kva2data_kva(kva);
+			data_kva = get_data_kva(kni, kva);
 			kni->va[i] = pa2va(kni->pa[i], kva);
 
 			while (kva->next) {
@@ -449,8 +481,8 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 				kva = next_kva;
 			}
 
-			alloc_kva = pa2kva(kni->alloc_pa[i]);
-			alloc_data_kva = kva2data_kva(alloc_kva);
+			alloc_kva = get_kva(kni, kni->alloc_pa[i]);
+			alloc_data_kva = get_data_kva(kni, alloc_kva);
 			kni->alloc_va[i] = pa2va(kni->alloc_pa[i], alloc_kva);
 
 			memcpy(alloc_data_kva, data_kva, len);
@@ -517,9 +549,9 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 
 	/* Copy mbufs to sk buffer and then call tx interface */
 	for (i = 0; i < num; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		len = kva->pkt_len;
-		data_kva = kva2data_kva(kva);
+		data_kva = get_data_kva(kni, kva);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		skb = netdev_alloc_skb(dev, len);
@@ -550,8 +582,8 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 				break;
 
 			prev_kva = kva;
-			kva = pa2kva(kva->next);
-			data_kva = kva2data_kva(kva);
+			kva = get_kva(kni, kva->next);
+			data_kva = get_data_kva(kni, kva);
 			/* Convert physical address to virtual address */
 			prev_kva->next = pa2va(prev_kva->next, kva);
 		}
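Putting the series together, a hypothetical end-to-end application flow in
IOVA=VA mode. The interface name, pool sizing, and the omitted datapath are
illustrative only; the point is the ordering of EAL init, pool creation with
the new API, and rte_kni_alloc(), which passes the iova_mode flag to the
kernel module as shown in patch 3/4:

    #include <stdio.h>
    #include <string.h>
    #include <rte_eal.h>
    #include <rte_kni.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    int
    main(int argc, char **argv)
    {
        struct rte_kni_conf conf;
        struct rte_mempool *mp;
        struct rte_kni *kni;

        /* e.g. started as: ./app -l 0-1 --iova-mode=va ... */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* Pool whose mbufs never straddle a page in IOVA=VA mode. */
        mp = rte_kni_pktmbuf_pool_create("mbuf_pool", 4096, 256, 0,
                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (mp == NULL)
            return -1;

        memset(&conf, 0, sizeof(conf));
        snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth0");
        conf.mbuf_size = RTE_MBUF_DEFAULT_BUF_SIZE;

        if (rte_kni_init(1) < 0)
            return -1;
        /* iova_mode is communicated to the module in this call */
        kni = rte_kni_alloc(mp, &conf, NULL);

        /* ... rx/tx burst loop ... */

        if (kni != NULL)
            rte_kni_release(kni);
        rte_kni_pktmbuf_pool_free(mp);
        return 0;
    }

On kernels older than 4.8, or when --legacy-kni is given, the EAL forces
IOVA=PA and the module keeps using the original phys_to_virt() path.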