From patchwork Fri Aug 16 06:12:48 2019
From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Date: Fri, 16 Aug 2019 11:42:48 +0530
Subject: [dpdk-dev] [PATCH v10 1/5] mempool: populate mempool with the page sized chunks
Message-ID: <20190816061252.17214-2-vattunuru@marvell.com>
In-Reply-To: <20190816061252.17214-1-vattunuru@marvell.com>
X-Patchwork-Id: 57717

From: Vamsi Attunuru <vattunuru@marvell.com>

Patch adds a routine to populate a mempool from page-aligned and
page-sized chunks of memory, ensuring that no memory object spans a
page boundary. It is useful for applications that require physically
contiguous mbuf memory while running in IOVA=VA mode.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K
Acked-by: Olivier Matz
---
 lib/librte_mempool/rte_mempool.c           | 69 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool.h           | 20 +++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 3 files changed, 90 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7260ce0..0683750 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,75 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Function to populate mempool from page sized mem chunks: allocate
+ * page-sized chunks of memory in memzones and populate the mempool with
+ * them. Return the number of objects added, or a negative value on error.
+ */
+int
+rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	size_t align, pg_sz, pg_shift;
+	const struct rte_memzone *mz;
+	unsigned int mz_id, n;
+	size_t min_chunk_size;
+	int ret;
+
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
+	if (mp->nb_mem_chunks != 0)
+		return -EEXIST;
+
+	pg_sz = get_min_page_size(mp->socket_id);
+	pg_shift = rte_bsf32(pg_sz);
+
+	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+
+		ret = rte_mempool_ops_calc_mem_size(mp, n,
+				pg_shift, &min_chunk_size, &align);
+
+		if (ret < 0)
+			goto fail;
+
+		if (min_chunk_size > pg_sz) {
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		ret = snprintf(mz_name, sizeof(mz_name),
+			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
+		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+			ret = -ENAMETOOLONG;
+			goto fail;
+		}
+
+		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
+				mp->socket_id, 0, align);
+
+		if (mz == NULL) {
+			ret = -rte_errno;
+			goto fail;
+		}
+
+		ret = rte_mempool_populate_iova(mp, mz->addr,
+				mz->iova, mz->len,
+				rte_mempool_memchunk_mz_free,
+				(void *)(uintptr_t)mz);
+		if (ret < 0) {
+			rte_memzone_free(mz);
+			goto fail;
+		}
+	}
+
+	return mp->size;
+
+fail:
+	rte_mempool_free_memchunks(mp);
+	return ret;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..2f5126e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1062,6 +1062,26 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	void *opaque);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Add memory from page sized memzones for objects in the pool at init
+ *
+ * This is the function used to populate the mempool with page aligned and
+ * page sized memzone memory, to avoid spreading object memory across two
+ * pages and to ensure all mempool objects reside within a single page.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+__rte_experimental
+int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
+
+/**
  * Add memory for objects in the pool at init
  *
  * This is the default function used by rte_mempool_create() to populate
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4..9a6fe65 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,4 +57,5 @@ EXPERIMENTAL {
 	global:
 
 	rte_mempool_ops_get_info;
+	rte_mempool_populate_from_pg_sz_chunks;
 };
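
[Editor's usage sketch, not part of the patch: how an application might feed
the new populate routine, assuming default ring mempool ops; the function
name, pool name and error handling here are illustrative only.]

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_errno.h>
#include <rte_mempool.h>

/* Build an empty pool, attach default ops, then fill it from page-sized
 * memzone chunks so that no object straddles a page boundary. */
static struct rte_mempool *
make_page_bounded_pool(const char *name, unsigned int n, unsigned int elt_size)
{
	struct rte_mempool *mp;
	int ret;

	mp = rte_mempool_create_empty(name, n, elt_size, 0, 0,
			rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	ret = rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);
	if (ret == 0)
		/* New routine from this patch; returns mp->size on success. */
		ret = rte_mempool_populate_from_pg_sz_chunks(mp);
	if (ret < 0) {
		rte_mempool_free(mp);
		rte_errno = -ret;
		return NULL;
	}
	return mp;
}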
From patchwork Fri Aug 16 06:12:49 2019
From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Date: Fri, 16 Aug 2019 11:42:49 +0530
Subject: [dpdk-dev] [PATCH v10 2/5] kni: add IOVA=VA support in KNI lib
Message-ID: <20190816061252.17214-3-vattunuru@marvell.com>
In-Reply-To: <20190816061252.17214-1-vattunuru@marvell.com>
X-Patchwork-Id: 57718

From: Vamsi Attunuru <vattunuru@marvell.com>

The current KNI implementation operates only in IOVA=PA mode. This
patch adds the functionality required in the KNI library to support
IOVA=VA mode. The KNI kernel module needs device info to look up the
IOMMU domain used for IOVA address translations, so the patch adds the
device details to the rte_kni_device_info structure and passes them to
the kernel KNI module when IOVA=VA mode is enabled.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K
---
 lib/librte_eal/linux/eal/include/rte_kni_common.h |  8 ++++++
 lib/librte_kni/Makefile                           |  1 +
 lib/librte_kni/meson.build                        |  1 +
 lib/librte_kni/rte_kni.c                          | 30 +++++++++++++++++++++++
 4 files changed, 40 insertions(+)

diff --git a/lib/librte_eal/linux/eal/include/rte_kni_common.h b/lib/librte_eal/linux/eal/include/rte_kni_common.h
index 37d9ee8..4fd8a90 100644
--- a/lib/librte_eal/linux/eal/include/rte_kni_common.h
+++ b/lib/librte_eal/linux/eal/include/rte_kni_common.h
@@ -111,6 +111,13 @@ struct rte_kni_device_info {
 	void * mbuf_va;
 	phys_addr_t mbuf_phys;
 
+	/* PCI info */
+	uint16_t vendor_id;     /**< Vendor ID or PCI_ANY_ID. */
+	uint16_t device_id;     /**< Device ID or PCI_ANY_ID. */
+	uint8_t bus;            /**< Device bus */
+	uint8_t devid;          /**< Device ID */
+	uint8_t function;       /**< Device function. */
+
 	uint16_t group_id;      /**< Group ID */
 	uint32_t core_id;       /**< core ID to bind for kernel thread */
 
@@ -121,6 +128,7 @@ struct rte_kni_device_info {
 	unsigned mbuf_size;
 	unsigned int mtu;
 	uint8_t mac_addr[6];
+	uint8_t iova_mode;
 };
 
 #define KNI_DEVICE "kni"
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index cbd6599..ab15d10 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -7,6 +7,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_kni.a
 
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -fno-strict-aliasing
+CFLAGS += -I$(RTE_SDK)/drivers/bus/pci
 LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf -lrte_ethdev
 
 EXPORT_MAP := rte_kni_version.map
diff --git a/lib/librte_kni/meson.build b/lib/librte_kni/meson.build
index 41fa2e3..fd46f87 100644
--- a/lib/librte_kni/meson.build
+++ b/lib/librte_kni/meson.build
@@ -9,3 +9,4 @@ version = 2
 sources = files('rte_kni.c')
 headers = files('rte_kni.h')
 deps += ['ethdev', 'pci']
+includes += include_directories('../../drivers/bus/pci')
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 4b51fb4..2aaaeaa 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include <rte_bus_pci.h>
 #include
 #include
 #include
@@ -199,6 +200,27 @@ kni_release_mz(struct rte_kni *kni)
 	rte_memzone_free(kni->m_sync_addr);
 }
 
+static void
+kni_dev_pci_addr_get(uint16_t port_id,
+		     struct rte_kni_device_info *kni_dev_info)
+{
+	const struct rte_pci_device *pci_dev;
+	struct rte_eth_dev_info dev_info;
+	const struct rte_bus *bus = NULL;
+
+	rte_eth_dev_info_get(port_id, &dev_info);
+
+	if (dev_info.device)
+		bus = rte_bus_find_by_device(dev_info.device);
+	if (bus && !strcmp(bus->name, "pci")) {
+		pci_dev = RTE_DEV_TO_PCI(dev_info.device);
+		kni_dev_info->bus = pci_dev->addr.bus;
+		kni_dev_info->devid = pci_dev->addr.devid;
+		kni_dev_info->function = pci_dev->addr.function;
+		kni_dev_info->vendor_id = pci_dev->id.vendor_id;
+		kni_dev_info->device_id = pci_dev->id.device_id;
+	}
+}
+
 struct rte_kni *
 rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	const struct rte_kni_conf *conf,
@@ -247,6 +269,12 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	kni->ops.port_id = UINT16_MAX;
 
 	memset(&dev_info, 0, sizeof(dev_info));
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		uint16_t port_id = conf->group_id;
+
+		kni_dev_pci_addr_get(port_id, &dev_info);
+	}
 	dev_info.core_id = conf->core_id;
 	dev_info.force_bind = conf->force_bind;
 	dev_info.group_id = conf->group_id;
@@ -300,6 +328,8 @@ rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
 	kni->group_id = conf->group_id;
 	kni->mbuf_size = conf->mbuf_size;
 
+	dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
+
 	ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
 	if (ret < 0)
 		goto ioctl_fail;
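
[Editor's usage sketch, not part of the patch: note in the hunk above that
the library takes the PCI identity from conf->group_id, so in IOVA=VA mode
the application is expected to set group_id to the ethdev port id. Names
and the mbuf size below are illustrative.]

#include <stdio.h>
#include <string.h>
#include <rte_kni.h>

/* Create a KNI interface for 'port_id'; in IOVA=VA mode the library
 * resolves the PCI bus/devid/function from this port and forwards them
 * to the kernel module via RTE_KNI_IOCTL_CREATE. */
static struct rte_kni *
kni_create_for_port(uint16_t port_id, struct rte_mempool *pool)
{
	struct rte_kni_conf conf;

	memset(&conf, 0, sizeof(conf));
	snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);
	conf.group_id = port_id;   /* used for the PCI lookup above */
	conf.mbuf_size = 2048;     /* illustrative */

	return rte_kni_alloc(pool, &conf, NULL);
}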
From patchwork Fri Aug 16 06:12:50 2019
From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Date: Fri, 16 Aug 2019 11:42:50 +0530
Subject: [dpdk-dev] [PATCH v10 3/5] kni: add app specific mempool create and free routines
Message-ID: <20190816061252.17214-4-vattunuru@marvell.com>
In-Reply-To: <20190816061252.17214-1-vattunuru@marvell.com>
X-Patchwork-Id: 57719

From: Vamsi Attunuru <vattunuru@marvell.com>

When KNI operates in IOVA=VA mode, mbuf memory must be physically
contiguous so that the KNI kernel module can translate IOVA addresses
properly. This patch adds a KNI-specific mempool create routine that
populates the KNI packet mbuf pool with memory objects that reside
within a single page. KNI applications need to use these mempool create
and free routines so that the mbuf requirements of IOVA=VA mode are
handled inside them, based on the enabled mode.

Updated the release notes with details of these new routines.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
 doc/guides/rel_notes/release_19_11.rst |  5 +++
 examples/kni/main.c                    |  5 ++-
 lib/librte_kni/Makefile                |  1 +
 lib/librte_kni/meson.build             |  1 +
 lib/librte_kni/rte_kni.c               | 60 ++++++++++++++++++++++++++++++++++
 lib/librte_kni/rte_kni.h               | 48 +++++++++++++++++++++++++++
 lib/librte_kni/rte_kni_version.map     |  2 ++
 7 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 8490d89..8813a10 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -85,6 +85,11 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* kni: ``rte_kni_pktmbuf_pool_create`` and ``rte_kni_pktmbuf_pool_free``
+  functions were introduced for KNI applications for creating and freeing
+  packet pools. Since IOVA=VA mode was added in KNI, a packet pool's mbuf
+  memory should be physically contiguous for the KNI kernel module to work
+  in IOVA=VA mode; this requirement is handled in the KNI packet pool
+  creation functions.
 
 ABI Changes
 -----------
diff --git a/examples/kni/main.c b/examples/kni/main.c
index 4710d71..fdfeed2 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -975,7 +975,7 @@ main(int argc, char** argv)
 		rte_exit(EXIT_FAILURE, "Could not parse input parameters\n");
 
 	/* Create the mbuf pool */
-	pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+	pktmbuf_pool = rte_kni_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
 		MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id());
 	if (pktmbuf_pool == NULL) {
 		rte_exit(EXIT_FAILURE, "Could not initialise mbuf pool\n");
@@ -1043,6 +1043,9 @@ main(int argc, char** argv)
 			continue;
 		kni_free_kni(port);
 	}
+
+	rte_kni_pktmbuf_pool_free(pktmbuf_pool);
+
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++)
 		if (kni_port_params_array[i]) {
 			rte_free(kni_port_params_array[i]);
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index ab15d10..5e3dd01 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -6,6 +6,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 # library name
 LIB = librte_kni.a
 
+CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -fno-strict-aliasing
 CFLAGS += -I$(RTE_SDK)/drivers/bus/pci
 LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf -lrte_ethdev
diff --git a/lib/librte_kni/meson.build b/lib/librte_kni/meson.build
index fd46f87..e357445 100644
--- a/lib/librte_kni/meson.build
+++ b/lib/librte_kni/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+allow_experimental_apis = true
 if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
 	build = false
 	reason = 'only supported on 64-bit linux'
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 2aaaeaa..15dda45 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <rte_mbuf_pool_ops.h>
 #include
 #include
 #include "rte_kni_fifo.h"
@@ -681,6 +682,65 @@ kni_allocate_mbufs(struct rte_kni *kni)
 	}
 }
 
+struct rte_mempool *
+rte_kni_pktmbuf_pool_create(const char *name, unsigned int n,
+	unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
+	int socket_id)
+{
+	struct rte_pktmbuf_pool_private mbp_priv;
+	const char *mp_ops_name;
+	struct rte_mempool *mp;
+	unsigned int elt_size;
+	int ret;
+
+	if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) {
+		RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n",
+			priv_size);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+	elt_size = sizeof(struct rte_mbuf) + (unsigned int)priv_size +
+		(unsigned int)data_room_size;
+	mbp_priv.mbuf_data_room_size = data_room_size;
+	mbp_priv.mbuf_priv_size = priv_size;
+
+	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
+		sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
+	if (mp == NULL)
+		return NULL;
+
+	mp_ops_name = rte_mbuf_best_mempool_ops();
+	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
+	if (ret != 0) {
+		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+	rte_pktmbuf_pool_init(mp, &mbp_priv);
+
+	if (rte_eal_iova_mode() == RTE_IOVA_VA)
+		ret = rte_mempool_populate_from_pg_sz_chunks(mp);
+	else
+		ret = rte_mempool_populate_default(mp);
+
+	if (ret < 0) {
+		rte_mempool_free(mp);
+		rte_errno = -ret;
+		return NULL;
+	}
+
+	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
+
+	return mp;
+}
+
+void
+rte_kni_pktmbuf_pool_free(struct rte_mempool *mp)
+{
+	rte_mempool_free(mp);
+}
+
 struct rte_kni *
 rte_kni_get(const char *name)
 {
diff --git a/lib/librte_kni/rte_kni.h b/lib/librte_kni/rte_kni.h
index 5699a64..99d263d 100644
--- a/lib/librte_kni/rte_kni.h
+++ b/lib/librte_kni/rte_kni.h
@@ -184,6 +184,54 @@ unsigned rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
 		unsigned num);
 
 /**
+ * Create a kni packet mbuf pool.
+ *
+ * This function creates and initializes a packet mbuf pool for KNI
+ * applications. It calls the required mempool populate routine based on
+ * the IOVA mode.
+ *
+ * @param name
+ *   The name of the mbuf pool.
+ * @param n
+ *   The number of elements in the mbuf pool. The optimum size (in terms
+ *   of memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param cache_size
+ *   Size of the per-core object cache. See rte_mempool_create() for
+ *   details.
+ * @param priv_size
+ *   Size of application private area between the rte_mbuf structure
+ *   and the data buffer. This value must be aligned to RTE_MBUF_PRIV_ALIGN.
+ * @param data_room_size
+ *   Size of data buffer in each mbuf, including RTE_PKTMBUF_HEADROOM.
+ * @param socket_id
+ *   The socket identifier where the memory should be allocated. The
+ *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
+ *   reserved zone.
+ * @return
+ *   The pointer to the new allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+__rte_experimental
+struct rte_mempool *rte_kni_pktmbuf_pool_create(const char *name,
+		unsigned int n, unsigned int cache_size, uint16_t priv_size,
+		uint16_t data_room_size, int socket_id);
+
+/**
+ * Free the given packet mempool.
+ *
+ * @param mp
+ *   The mempool pointer.
+ */
+__rte_experimental
+void rte_kni_pktmbuf_pool_free(struct rte_mempool *mp);
+
+/**
  * Get the KNI context of its name.
  *
  * @param name
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index c877dc6..aba9728 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -20,4 +20,6 @@ EXPERIMENTAL {
 	global:
 
 	rte_kni_update_link;
+	rte_kni_pktmbuf_pool_create;
+	rte_kni_pktmbuf_pool_free;
 };
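
[Editor's usage sketch, not part of the patch: a drop-in replacement for an
application's rte_pktmbuf_pool_create() call, mirroring the examples/kni
change above; the constants and the USER1 logtype are illustrative.]

#include <rte_kni.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>
#include <rte_errno.h>
#include <rte_log.h>

#define NB_MBUF          (4 * 1024)
#define MEMPOOL_CACHE_SZ 250
#define MBUF_DATA_SZ     (2048 + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *
create_kni_pktmbuf_pool(void)
{
	struct rte_mempool *pool;

	/* Uses rte_mempool_populate_from_pg_sz_chunks() internally when
	 * the EAL runs in IOVA=VA mode, the default populate otherwise. */
	pool = rte_kni_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
			MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ, rte_socket_id());
	if (pool == NULL)
		RTE_LOG(ERR, USER1, "pool create failed: %s\n",
				rte_strerror(rte_errno));
	return pool;
}

/* On teardown, release it with: rte_kni_pktmbuf_pool_free(pool); */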
From patchwork Fri Aug 16 06:12:51 2019
From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Date: Fri, 16 Aug 2019 11:42:51 +0530
Subject: [dpdk-dev] [PATCH v10 4/5] kni: add IOVA=VA support in KNI module
Message-ID: <20190816061252.17214-5-vattunuru@marvell.com>
In-Reply-To: <20190816061252.17214-1-vattunuru@marvell.com>
X-Patchwork-Id: 57720

From: Kiran Kumar K

Patch adds support for the kernel module to work in IOVA=VA mode. The
idea is to get the physical address from the IOVA address using the
iommu_iova_to_phys API, and then convert that physical address to a
kernel virtual address using phys_to_virt. When compared with IOVA=PA
mode, there is no performance drop with this approach.

This approach does not work with kernel versions older than 4.4.0
because of API compatibility issues.

Patch also updates these support details in the KNI documentation.

Signed-off-by: Kiran Kumar K
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
 kernel/linux/kni/compat.h   |  4 +++
 kernel/linux/kni/kni_dev.h  |  4 +++
 kernel/linux/kni/kni_misc.c | 71 +++++++++++++++++++++++++++++++++++++++------
 kernel/linux/kni/kni_net.c  | 59 ++++++++++++++++++++++++++++---------
 4 files changed, 116 insertions(+), 22 deletions(-)

diff --git a/kernel/linux/kni/compat.h b/kernel/linux/kni/compat.h
index 562d8bf..ee997a6 100644
--- a/kernel/linux/kni/compat.h
+++ b/kernel/linux/kni/compat.h
@@ -121,3 +121,7 @@
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
 #define HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
 #endif
+
+#if KERNEL_VERSION(4, 4, 0) <= LINUX_VERSION_CODE
+#define HAVE_IOVA_AS_VA_SUPPORT
+#endif
diff --git a/kernel/linux/kni/kni_dev.h b/kernel/linux/kni/kni_dev.h
index c1ca678..d5898f3 100644
--- a/kernel/linux/kni/kni_dev.h
+++ b/kernel/linux/kni/kni_dev.h
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <linux/iommu.h>
 #include
 
 #define KNI_KTHREAD_RESCHEDULE_INTERVAL 5 /* us */
@@ -41,6 +42,9 @@ struct kni_dev {
 	/* kni list */
 	struct list_head list;
 
+	uint8_t iova_mode;
+	struct iommu_domain *domain;
+
 	uint32_t core_id;            /* Core ID to bind */
 	char name[RTE_KNI_NAMESIZE]; /* Network device name */
 	struct task_struct *pthread;
diff --git a/kernel/linux/kni/kni_misc.c b/kernel/linux/kni/kni_misc.c
index 2b75502..8660205 100644
--- a/kernel/linux/kni/kni_misc.c
+++ b/kernel/linux/kni/kni_misc.c
@@ -295,6 +295,9 @@ kni_ioctl_create(struct net *net, uint32_t ioctl_num,
 	struct rte_kni_device_info dev_info;
 	struct net_device *net_dev = NULL;
 	struct kni_dev *kni, *dev, *n;
+	struct pci_dev *pci = NULL;
+	struct iommu_domain *domain = NULL;
+	phys_addr_t phys_addr;
 
 	pr_info("Creating kni...\n");
 	/* Check the buffer size, to avoid warning */
@@ -348,15 +351,65 @@ kni_ioctl_create(struct net *net, uint32_t ioctl_num,
 	strncpy(kni->name, dev_info.name, RTE_KNI_NAMESIZE);
 
 	/* Translate user space info into kernel space info */
-	kni->tx_q = phys_to_virt(dev_info.tx_phys);
-	kni->rx_q = phys_to_virt(dev_info.rx_phys);
-	kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
-	kni->free_q = phys_to_virt(dev_info.free_phys);
-
-	kni->req_q = phys_to_virt(dev_info.req_phys);
-	kni->resp_q = phys_to_virt(dev_info.resp_phys);
-	kni->sync_va = dev_info.sync_va;
-	kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+	if (dev_info.iova_mode) {
+#ifdef HAVE_IOVA_AS_VA_SUPPORT
+		pci = pci_get_device(dev_info.vendor_id,
+				     dev_info.device_id, NULL);
+		if (pci == NULL) {
+			pr_err("pci dev does not exist\n");
+			return -ENODEV;
+		}
+
+		while (pci) {
+			if ((pci->bus->number == dev_info.bus) &&
+			    (PCI_SLOT(pci->devfn) == dev_info.devid) &&
+			    (PCI_FUNC(pci->devfn) == dev_info.function)) {
+				domain = iommu_get_domain_for_dev(&pci->dev);
+				break;
+			}
+			pci = pci_get_device(dev_info.vendor_id,
+					     dev_info.device_id, pci);
+		}
+
+		if (domain == NULL) {
+			pr_err("Failed to get pci dev domain info\n");
+			return -ENODEV;
+		}
+#else
+		pr_err("Kernel version does not support IOVA as VA\n");
+		return -EINVAL;
+#endif
+		kni->domain = domain;
+		phys_addr = iommu_iova_to_phys(domain, dev_info.tx_phys);
+		kni->tx_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.rx_phys);
+		kni->rx_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.alloc_phys);
+		kni->alloc_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.free_phys);
+		kni->free_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.req_phys);
+		kni->req_q = phys_to_virt(phys_addr);
+		phys_addr = iommu_iova_to_phys(domain, dev_info.resp_phys);
+		kni->resp_q = phys_to_virt(phys_addr);
+		kni->sync_va = dev_info.sync_va;
+		phys_addr = iommu_iova_to_phys(domain, dev_info.sync_phys);
+		kni->sync_kva = phys_to_virt(phys_addr);
+		kni->iova_mode = 1;
+
+	} else {
+
+		kni->tx_q = phys_to_virt(dev_info.tx_phys);
+		kni->rx_q = phys_to_virt(dev_info.rx_phys);
+		kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
+		kni->free_q = phys_to_virt(dev_info.free_phys);
+
+		kni->req_q = phys_to_virt(dev_info.req_phys);
+		kni->resp_q = phys_to_virt(dev_info.resp_phys);
+		kni->sync_va = dev_info.sync_va;
+		kni->sync_kva = phys_to_virt(dev_info.sync_phys);
+		kni->iova_mode = 0;
+	}
 
 	kni->mbuf_size = dev_info.mbuf_size;
diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index 7bd3a9f..8382859 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -36,6 +36,21 @@ static void kni_net_rx_normal(struct kni_dev *kni);
 /* kni rx function pointer, with default to normal rx */
 static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
 
+/* iova to kernel virtual address */
+static inline void *
+iova2kva(struct kni_dev *kni, void *pa)
+{
+	return phys_to_virt(iommu_iova_to_phys(kni->domain,
+				(uintptr_t)pa));
+}
+
+static inline void *
+iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
+{
+	return phys_to_virt(iommu_iova_to_phys(kni->domain,
+				(uintptr_t)m->buf_physaddr) + m->data_off);
+}
+
 /* physical address to kernel virtual address */
 static void *
 pa2kva(void *pa)
@@ -62,6 +77,24 @@ kva2data_kva(struct rte_kni_mbuf *m)
 	return phys_to_virt(m->buf_physaddr + m->data_off);
 }
 
+static inline void *
+get_kva(struct kni_dev *kni, void *pa)
+{
+	if (kni->iova_mode == 1)
+		return iova2kva(kni, pa);
+
+	return pa2kva(pa);
+}
+
+static inline void *
+get_data_kva(struct kni_dev *kni, void *pkt_kva)
+{
+	if (kni->iova_mode == 1)
+		return iova2data_kva(kni, pkt_kva);
+
+	return kva2data_kva(pkt_kva);
+}
+
 /*
  * It can be called to process the request.
  */
@@ -178,7 +211,7 @@ kni_fifo_trans_pa2va(struct kni_dev *kni,
 		return;
 
 	for (i = 0; i < num_rx; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		kva_nb_segs = kva->nb_segs;
@@ -266,8 +299,8 @@ kni_net_tx(struct sk_buff *skb, struct net_device *dev)
 	if (likely(ret == 1)) {
 		void *data_kva;
 
-		pkt_kva = pa2kva(pkt_pa);
-		data_kva = kva2data_kva(pkt_kva);
+		pkt_kva = get_kva(kni, pkt_pa);
+		data_kva = get_data_kva(kni, pkt_kva);
 		pkt_va = pa2va(pkt_pa, pkt_kva);
 
 		len = skb->len;
@@ -338,9 +371,9 @@ kni_net_rx_normal(struct kni_dev *kni)
 
 	/* Transfer received packets to netif */
 	for (i = 0; i < num_rx; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		len = kva->pkt_len;
-		data_kva = kva2data_kva(kva);
+		data_kva = get_data_kva(kni, kva);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		skb = netdev_alloc_skb(dev, len);
@@ -437,9 +470,9 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 		num = ret;
 		/* Copy mbufs */
 		for (i = 0; i < num; i++) {
-			kva = pa2kva(kni->pa[i]);
+			kva = get_kva(kni, kni->pa[i]);
 			len = kva->data_len;
-			data_kva = kva2data_kva(kva);
+			data_kva = get_data_kva(kni, kva);
 			kni->va[i] = pa2va(kni->pa[i], kva);
 
 			while (kva->next) {
@@ -449,8 +482,8 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 				kva = next_kva;
 			}
 
-			alloc_kva = pa2kva(kni->alloc_pa[i]);
-			alloc_data_kva = kva2data_kva(alloc_kva);
+			alloc_kva = get_kva(kni, kni->alloc_pa[i]);
+			alloc_data_kva = get_data_kva(kni, alloc_kva);
 			kni->alloc_va[i] = pa2va(kni->alloc_pa[i], alloc_kva);
 
 			memcpy(alloc_data_kva, data_kva, len);
@@ -517,9 +550,9 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 
 	/* Copy mbufs to sk buffer and then call tx interface */
 	for (i = 0; i < num; i++) {
-		kva = pa2kva(kni->pa[i]);
+		kva = get_kva(kni, kni->pa[i]);
 		len = kva->pkt_len;
-		data_kva = kva2data_kva(kva);
+		data_kva = get_data_kva(kni, kva);
 		kni->va[i] = pa2va(kni->pa[i], kva);
 
 		skb = netdev_alloc_skb(dev, len);
@@ -550,8 +583,8 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 				break;
 
 			prev_kva = kva;
-			kva = pa2kva(kva->next);
-			data_kva = kva2data_kva(kva);
+			kva = get_kva(kni, kva->next);
+			data_kva = get_data_kva(kni, kva);
 			/* Convert physical address to virtual address */
 			prev_kva->next = pa2va(prev_kva->next, kva);
 		}
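
[Editor's note: the core of the kernel-side change is the two-step
translation shown condensed below — not a drop-in function. It assumes the
device's IOMMU domain was captured at create time (kni->domain above) and
that the translated pages live in the kernel's linear mapping, which is
what phys_to_virt() relies on.]

#include <linux/iommu.h>
#include <linux/io.h>

/* IOVA=VA mode: the address handed over by the application is an IOVA
 * programmed into the device's IOMMU domain, not a physical address.
 * Walk the IOMMU page tables to get the PA, then map that PA to a
 * kernel virtual address through the linear mapping. */
static inline void *
kni_iova_to_kva(struct iommu_domain *domain, dma_addr_t iova)
{
	phys_addr_t pa = iommu_iova_to_phys(domain, iova);

	return phys_to_virt(pa);
}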
From patchwork Fri Aug 16 06:12:52 2019
From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Date: Fri, 16 Aug 2019 11:42:52 +0530
Subject: [dpdk-dev] [PATCH v10 5/5] kni: modify IOVA mode checks to support VA
Message-ID: <20190816061252.17214-6-vattunuru@marvell.com>
In-Reply-To: <20190816061252.17214-1-vattunuru@marvell.com>
X-Patchwork-Id: 57721

From: Vamsi Attunuru <vattunuru@marvell.com>

Patch adjusts the checks in KNI and EAL that enforce IOVA=PA when
IOVA=VA mode is enabled, since the KNI kernel module now supports VA
mode on kernel versions >= 4.4.0.

Updated the KNI documentation with the above details.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K
---
 doc/guides/prog_guide/kernel_nic_interface.rst | 8 ++++++++
 lib/librte_eal/linux/eal/eal.c                 | 4 +++-
 lib/librte_kni/rte_kni.c                       | 5 -----
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
index 38369b3..fd2ce63 100644
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ b/doc/guides/prog_guide/kernel_nic_interface.rst
@@ -291,6 +291,14 @@ The sk_buff is then freed and the mbuf sent in the tx_q FIFO.
 The DPDK TX thread dequeues the mbuf and sends it to the PMD via ``rte_eth_tx_burst()``.
 It then puts the mbuf back in the cache.
 
+IOVA = VA: Support
+------------------
+
+KNI can be operated in IOVA as VA scheme when the following criteria are fulfilled:
+
+- LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0)
+- EAL parameter ``--iova-mode va`` is passed or the bus IOVA scheme is set to RTE_IOVA_VA
+
 Ethtool
 -------
diff --git a/lib/librte_eal/linux/eal/eal.c b/lib/librte_eal/linux/eal/eal.c
index 946222c..73d64c8 100644
--- a/lib/librte_eal/linux/eal/eal.c
+++ b/lib/librte_eal/linux/eal/eal.c
@@ -1114,12 +1114,14 @@ rte_eal_init(int argc, char **argv)
 		/* Workaround for KNI which requires physical address to work */
 		if (iova_mode == RTE_IOVA_VA &&
 				rte_eal_check_module("rte_kni") == 1) {
+#if KERNEL_VERSION(4, 4, 0) > LINUX_VERSION_CODE
 			if (phys_addrs) {
 				iova_mode = RTE_IOVA_PA;
-				RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module is loaded\n");
+				RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module does not support VA\n");
 			} else {
 				RTE_LOG(DEBUG, EAL, "KNI can not work since physical addresses are unavailable\n");
 			}
+#endif
 		}
 #endif
 		rte_eal_get_configuration()->iova_mode = iova_mode;
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 15dda45..c77d76f 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -99,11 +99,6 @@ static volatile int kni_fd = -1;
 int
 rte_kni_init(unsigned int max_kni_ifaces __rte_unused)
 {
-	if (rte_eal_iova_mode() != RTE_IOVA_PA) {
-		RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n");
-		return -1;
-	}
-
 	/* Check FD and open */
 	if (kni_fd < 0) {
 		kni_fd = open("/dev/" KNI_DEVICE, O_RDWR);
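
[Editor's usage sketch, not part of the patch: with the EAL check above
relaxed, the mode actually selected can still differ from what was
requested — e.g. on kernels older than 4.4.0 with the KNI module loaded —
so an application may want to verify it after rte_eal_init(). The USER1
logtype is illustrative.]

#include <rte_eal.h>
#include <rte_log.h>

/* Log which IOVA mode EAL settled on; KNI with IOVA=VA only works on
 * kernel >= 4.4.0, otherwise EAL falls back to PA when possible. */
static void
log_iova_mode(void)
{
	enum rte_iova_mode mode = rte_eal_iova_mode();

	RTE_LOG(INFO, USER1, "EAL selected IOVA mode: %s\n",
		mode == RTE_IOVA_VA ? "VA" :
		mode == RTE_IOVA_PA ? "PA" : "DC");
}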