[dpdk-dev] eal: add request to map reserved physical memory

Message ID 20180328045120.40098-1-ajit.khaparde@broadcom.com (mailing list archive)
State Rejected, archived
Delegated to: Thomas Monjalon
Headers

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues

Commit Message

Ajit Khaparde March 28, 2018, 4:51 a.m. UTC
  From: Srinath Mannam <srinath.mannam@broadcom.com>

Reserved physical memory is requested from the kernel
and mapped to user space.
This memory is then mapped to IOVA using VFIO
and provided to SPDK to allocate NVMe CQs.

Signed-off-by: Srinath Mannam <srinath.mannam@broadcom.com>
Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 lib/librte_eal/common/eal_common_options.c |  5 ++
 lib/librte_eal/common/eal_internal_cfg.h   |  1 +
 lib/librte_eal/common/eal_options.h        |  2 +
 lib/librte_eal/common/include/rte_eal.h    |  8 ++++
 lib/librte_eal/common/include/rte_malloc.h |  7 +++
 lib/librte_eal/common/rte_malloc.c         | 17 +++++++
 lib/librte_eal/linuxapp/eal/eal.c          | 75 ++++++++++++++++++++++++++++++
 7 files changed, 115 insertions(+)
  

Comments

Anatoly Burakov April 12, 2018, 2:35 p.m. UTC | #1

Hi Srinath,

I've seen this kind of approach implemented before to add additional 
memory types to DPDK (redefining "unused" socket ids to mean something 
else), and I don't like it.

What would be better is to design a new API to support different memory 
types. Some groundwork for this was already laid for this release 
(switching to memseg lists), but more changes will be needed down the 
line. My ideal approach would be to have pluggable memory allocators. 
I've outlined some of my thoughts on this before [1]; you're welcome to 
join/continue that discussion, and make sure whatever comes out of it is 
going to be useful for all of us :) I was planning to (attempt to) 
restart that discussion, and this seems like as good an opportunity to 
do that as any other.

Now that the memory hotplug stuff is merged, I'll hopefully get more 
time for prototyping.

So, as it is, it's a NACK from me, but let's work together on something 
better :)

[1] http://dpdk.org/ml/archives/dev/2018-February/090937.html
  
Srinath Mannam April 23, 2018, 9:23 a.m. UTC | #2
Hi Anatoly,

Our requirement is that a separate memory segment (a speed memory window)
needs to be allocated outside the huge-page segment.
The ideas you discussed in the link (dynamic memory allocation in
DPDK) exactly match our requirement.
We tried to fit our requirement into the existing memory model with
minimal changes, so we followed this approach.
The memory model in DPDK is managed using socket ids, so I attached the
new memory segment to an unused socket, which allows memory to be
allocated from it using rte_malloc.

Please add me to your discussions. I am very interested in joining
them and contributing to the development.

Please point me to the sources in DPDK related to this part of the
implementation.


Thank you.


Regards,

Srinath.


  
Scott Branden April 27, 2018, 4:30 p.m. UTC | #3
Hi Anatoly,

We'd appreciate your input so we can come to a solution that supports
the necessary memory allocations.

  
Anatoly Burakov April 27, 2018, 4:49 p.m. UTC | #4
On 27-Apr-18 5:30 PM, Scott Branden wrote:
> Hi Anatoly,
> 
> We'd appreciate your input so we can come to a solution of supporting 
> the necessary memory allocations?
> 

Hi Scott,

I'm currently starting to work on a prototype that will be at least 
RFC'd (if not v1'd) during the 18.08 timeframe. Basically, the idea is 
to create/destroy named malloc heaps dynamically, and allow the user to 
request memory from them. You may then mmap() whatever you want and 
create a malloc heap out of it.

Does that sound reasonable?
  
Scott Branden April 27, 2018, 5:09 p.m. UTC | #5
Hi Anatoly,

We have a solution right now that requires a kernel driver to allocate 
large contiguous memory and the DPDK changes in the patch. We would like 
to come to a common upstreamed solution as soon as possible.  So please 
send out the patch as soon as you can and we can test/comment on it.  I 
would hope this is a simple addition and shouldn't take long to come to 
an acceptable implementation.  We need this change for DPDK to operate 
on our devices.

Thanks,
  Scott
  
Scott Branden June 6, 2018, 12:18 a.m. UTC | #6
Hi Anatoly,


Is the plan still to have a patch for 18.08?

Thanks,
  Scott
  
Anatoly Burakov June 7, 2018, 12:15 p.m. UTC | #7
Hi Scott,

The plan is still to submit an RFC during 18.08 timeframe, but since it 
will be an ABI break, it will only be integrated in the next (18.11) 
release.
  
Anatoly Burakov July 9, 2018, 4:02 p.m. UTC | #8
Hi Scott,

You're welcome to offer feedback on the proposal :)

http://patches.dpdk.org/project/dpdk/list/?series=453
  
Srinath Mannam July 9, 2018, 4:12 p.m. UTC | #9
Hi Anatoly,

Thank you for your inputs.

We will port all your patches to our platform and update you.

Regards,
Srinath.

  
Scott Branden July 9, 2018, 8:44 p.m. UTC | #10
Thanks, Srinath is looking into it.

Scott
  

Patch

diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c
index 8a51adee6..7b929fde3 100644
--- a/lib/librte_eal/common/eal_common_options.c
+++ b/lib/librte_eal/common/eal_common_options.c
@@ -73,6 +73,7 @@  eal_long_options[] = {
 	{OPT_VDEV,              1, NULL, OPT_VDEV_NUM             },
 	{OPT_VFIO_INTR,         1, NULL, OPT_VFIO_INTR_NUM        },
 	{OPT_VMWARE_TSC_MAP,    0, NULL, OPT_VMWARE_TSC_MAP_NUM   },
+	{OPT_ISO_CMEM,          0, NULL, OPT_ISO_CMEM_NUM         },
 	{0,                     0, NULL, 0                        }
 };
 
@@ -1119,6 +1120,10 @@  eal_parse_common_option(int opt, const char *optarg,
 		conf->no_pci = 1;
 		break;
 
+	case OPT_ISO_CMEM_NUM:
+		conf->iso_cmem = 1;
+		break;
+
 	case OPT_NO_HPET_NUM:
 		conf->no_hpet = 1;
 		break;
diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h
index a0082d121..7c06b9918 100644
--- a/lib/librte_eal/common/eal_internal_cfg.h
+++ b/lib/librte_eal/common/eal_internal_cfg.h
@@ -37,6 +37,7 @@  struct internal_config {
 	volatile unsigned no_hugetlbfs;   /**< true to disable hugetlbfs */
 	unsigned hugepage_unlink;         /**< true to unlink backing files */
 	volatile unsigned no_pci;         /**< true to disable PCI */
+	unsigned int iso_cmem;            /**< true to enable isolated cmem */
 	volatile unsigned no_hpet;        /**< true to disable HPET */
 	volatile unsigned vmware_tsc_map; /**< true to use VMware TSC mapping
 										* instead of native TSC */
diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h
index e86c71142..d6fe9ca97 100644
--- a/lib/librte_eal/common/eal_options.h
+++ b/lib/librte_eal/common/eal_options.h
@@ -55,6 +55,8 @@  enum {
 	OPT_VFIO_INTR_NUM,
 #define OPT_VMWARE_TSC_MAP    "vmware-tsc-map"
 	OPT_VMWARE_TSC_MAP_NUM,
+#define OPT_ISO_CMEM          "iso-cmem"
+	OPT_ISO_CMEM_NUM,
 	OPT_LONG_MAX_NUM
 };
 
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index 044474e6c..322e2e3c2 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -73,6 +73,14 @@  struct rte_config {
 	struct rte_mem_config *mem_config;
 } __attribute__((__packed__));
 
+/**
+ * Get the global custom (cmem) memory segment structure.
+ *
+ * @return
+ *   A pointer to the global cmem memseg, or NULL if --iso-cmem
+ *   was not specified on the command line.
+ */
+struct rte_memseg *rte_eal_get_iso_cmemseg(void);
+
 /**
  * Get the global configuration structure.
  *
diff --git a/lib/librte_eal/common/include/rte_malloc.h b/lib/librte_eal/common/include/rte_malloc.h
index f02a8ba1d..a2ba8be29 100644
--- a/lib/librte_eal/common/include/rte_malloc.h
+++ b/lib/librte_eal/common/include/rte_malloc.h
@@ -156,6 +156,13 @@  rte_realloc(void *ptr, size_t size, unsigned align);
 void *
 rte_malloc_socket(const char *type, size_t size, unsigned align, int socket);
 
+/**
+ * Allocate memory from the reserved (cmem) segment when one is
+ * available, otherwise from the huge-page area of memory.
+ */
+void *
+rte_malloc_cmem(const char *type, size_t size, size_t align, int socket);
+
 /**
  * Allocate zero'ed memory from the heap.
  *
diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
index e0e0d0b3e..75085be1f 100644
--- a/lib/librte_eal/common/rte_malloc.c
+++ b/lib/librte_eal/common/rte_malloc.c
@@ -33,6 +33,23 @@  void rte_free(void *addr)
 		rte_panic("Fatal error: Invalid memory\n");
 }
 
+/*
+ * Allocate memory from the cmem heap if a cmem segment is available,
+ * otherwise allocate from the normal heap.
+ */
+void *
+rte_malloc_cmem(const char *type, size_t size, size_t align, int socket_id)
+{
+	struct rte_memseg *cmemseg = rte_eal_get_iso_cmemseg();
+
+	if (cmemseg)
+		socket_id = cmemseg->socket_id;
+
+	/* Pass the caller's type tag through instead of dropping it. */
+	return rte_malloc_socket(type, size, align, socket_id);
+}
+
 /*
  * Allocate memory on specified heap.
  */
diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c
index 2ecd07b95..e8cb0b0f9 100644
--- a/lib/librte_eal/linuxapp/eal/eal.c
+++ b/lib/librte_eal/linuxapp/eal/eal.c
@@ -66,6 +66,7 @@  static rte_usage_hook_t	rte_application_usage_hook = NULL;
 
 /* early configuration structure, when memory config is not mmapped */
 static struct rte_mem_config early_mem_config;
+static struct rte_memseg *iso_cmemseg;
 
 /* define fd variable here, because file needs to be kept open for the
  * duration of the program, as we hold a write lock on it in the primary proc */
@@ -110,12 +111,83 @@  rte_eal_mbuf_default_mempool_ops(void)
 }
 
-/* Return a pointer to the configuration structure */
+/* Return a pointer to the isolated cmem memseg, or NULL if disabled */
+struct rte_memseg *
+rte_eal_get_iso_cmemseg(void)
+{
+	if (internal_config.iso_cmem == 1)
+		return iso_cmemseg;
+
+	return NULL;
+}
+
+/* Return a pointer to the configuration structure */
 struct rte_config *
 rte_eal_get_configuration(void)
 {
 	return &rte_config;
 }
 
+static struct rte_memseg *map_cmem_virtual_area(void)
+{
+	void *addr = NULL;
+	int fd;
+	off_t filesize;
+	struct rte_memseg *cmemseg = NULL;
+	struct rte_mem_config *mcfg;
+	unsigned int i;
+	unsigned int socket = 0;
+
+	mcfg = rte_eal_get_configuration()->mem_config;
+	if (mcfg == NULL)
+		return NULL;
+
+	/* Record every socket id in use; remember the first free slot. */
+	for (i = 0; i < RTE_MAX_MEMSEG; i++) {
+		if (mcfg->memseg[i].addr == NULL) {
+			if (cmemseg == NULL)
+				cmemseg = &mcfg->memseg[i];
+			continue;
+		}
+		socket |= (1 << mcfg->memseg[i].socket_id);
+	}
+
+	if (!cmemseg)
+		return NULL;
+
+	for (i = 0; i < RTE_MAX_NUMA_NODES; i++) {
+		if (!(socket & (1 << i))) {
+			cmemseg->socket_id = i;
+			break;
+		}
+	}
+	if (i == RTE_MAX_NUMA_NODES)
+		goto error;
+
+	fd = open("/dev/cmem", O_RDWR);
+	if (fd < 0) {
+		RTE_LOG(ERR, EAL, "Cannot open /dev/cmem\n");
+		goto error;
+	}
+
+	filesize = lseek(fd, 0, SEEK_END);
+	if (filesize < 0) {
+		close(fd);
+		goto error;
+	}
+
+	addr = mmap(NULL, filesize, (PROT_READ | PROT_WRITE),
+			MAP_SHARED, fd, 0);
+	close(fd);
+	if (addr == MAP_FAILED)
+		goto error;
+
+	memset(addr, 0, filesize);
+	cmemseg->phys_addr = rte_mem_virt2phy(addr);
+	cmemseg->addr = addr;
+	cmemseg->len = filesize;
+
+	return cmemseg;
+error:
+	return NULL;
+}
+
 enum rte_iova_mode
 rte_eal_iova_mode(void)
 {
@@ -862,6 +934,9 @@  rte_eal_init(int argc, char **argv)
 	/* the directories are locked during eal_hugepage_info_init */
 	eal_hugedirs_unlock();
 
+	if (internal_config.iso_cmem == 1)
+		iso_cmemseg = map_cmem_virtual_area();
+
 	if (rte_eal_memzone_init() < 0) {
 		rte_eal_init_alert("Cannot init memzone\n");
 		rte_errno = ENODEV;