[dpdk-dev,v1] crypto/ccp: use contiguous allocation for DMA memory

Message ID 1523961144-79110-1-git-send-email-Ravi1.kumar@amd.com (mailing list archive)
State Accepted, archived
Delegated to: Pablo de Lara Guarch
Checks

Context               Check     Description
ci/checkpatch         success   coding style OK
ci/Intel-compilation  success   Compilation OK

Commit Message

Kumar, Ravi1 April 17, 2018, 10:32 a.m. UTC
rte_eal_get_physmem_layout() is obsolete now.
This patch fixes the broken API usage and allocates
DMA memory with the RTE_MEMZONE_IOVA_CONTIG memzone flag.

Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>
---
 drivers/crypto/ccp/ccp_dev.c | 45 +++++++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 26 deletions(-)
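
For context, RTE_MEMZONE_IOVA_CONTIG asks the allocator for a zone whose
device-visible (IOVA) addresses are contiguous, which is what the CCP DMA
engine requires. A minimal sketch of the new allocation pattern follows;
the helper name and parameters are illustrative and not part of the patch:

#include <stddef.h>
#include <rte_memzone.h>

/*
 * Illustrative helper (not from the patch): reserve an IOVA-contiguous
 * memzone, aligned to its own size as the patch does for the CCP queue
 * areas. len is expected to be a power of two when used as alignment.
 */
static const struct rte_memzone *
reserve_dma_zone(const char *name, size_t len, int socket_id)
{
	return rte_memzone_reserve_aligned(name, len, socket_id,
					   RTE_MEMZONE_IOVA_CONTIG, len);
}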
  

Comments

Anatoly Burakov April 17, 2018, 10:35 a.m. UTC | #1
On 17-Apr-18 11:32 AM, Ravi Kumar wrote:
> rte_eal_get_physmem_layout() is obsolete now.
> This patch fixes the broken API usage and allocates
> DMA memory with the RTE_MEMZONE_IOVA_CONTIG memzone flag.
> 
> Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>
> ---

FWIW,

Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
  
De Lara Guarch, Pablo April 17, 2018, 1:02 p.m. UTC | #2
> -----Original Message-----
> From: Ravi Kumar [mailto:Ravi1.kumar@amd.com]
> Sent: Tuesday, April 17, 2018 11:32 AM
> To: dev@dpdk.org
> Cc: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>;
> hemant.agrawal@nxp.com
> Subject: [PATCH v1] crypto/ccp: use contiguous allocation for DMA memory
> 
> rte_eal_get_physmem_layout() is obsolete now.
> This patch fixes the broken API usage and allocates DMA memory with
> the RTE_MEMZONE_IOVA_CONTIG memzone flag.
> 
> Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>

Squashed into second commit of the AMD CCP patchset and applied to dpdk-next-crypto.

Thanks for the quick patch!
Pablo
  

Patch

diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 55cfcdd..80fe6a4 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -88,36 +88,29 @@  ccp_queue_dma_zone_reserve(const char *queue_name,
 			   int socket_id)
 {
 	const struct rte_memzone *mz;
-	unsigned int memzone_flags = 0;
-	const struct rte_memseg *ms;
 
 	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0)
-		return mz;
-
-	ms = rte_eal_get_physmem_layout();
-	switch (ms[0].hugepage_sz) {
-	case(RTE_PGSIZE_2M):
-		memzone_flags = RTE_MEMZONE_2MB;
-		break;
-	case(RTE_PGSIZE_1G):
-		memzone_flags = RTE_MEMZONE_1GB;
-		break;
-	case(RTE_PGSIZE_16M):
-		memzone_flags = RTE_MEMZONE_16MB;
-		break;
-	case(RTE_PGSIZE_16G):
-		memzone_flags = RTE_MEMZONE_16GB;
-		break;
-	default:
-		memzone_flags = RTE_MEMZONE_SIZE_HINT_ONLY;
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+		    ((socket_id == SOCKET_ID_ANY) ||
+		     (socket_id == mz->socket_id))) {
+			CCP_LOG_INFO("re-use memzone already "
+				     "allocated for %s", queue_name);
+			return mz;
+		}
+		CCP_LOG_ERR("Incompatible memzone already "
+			    "allocated %s, size %u, socket %d. "
+			    "Requested size %u, socket %u",
+			    queue_name, (uint32_t)mz->len,
+			    mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
 
-	return rte_memzone_reserve_aligned(queue_name,
-					   queue_size,
-					   socket_id,
-					   memzone_flags,
-					   queue_size);
+	CCP_LOG_INFO("Allocate memzone for %s, size %u on socket %u",
+		     queue_name, queue_size, socket_id);
+
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+			socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }
 
 /* bitmap support apis */
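
Usage note (hypothetical, not part of the patch): once a zone is reserved
with RTE_MEMZONE_IOVA_CONTIG, mz->addr is the CPU-visible base and mz->iova
is the device-visible base, and contiguity means the whole length can be
handed to the hardware as a single region. A small sketch with an
illustrative zone name and size:

#include <inttypes.h>
#include <stdio.h>
#include <rte_memzone.h>

/* Hypothetical caller: the zone name, size and alignment are illustrative. */
static int
ccp_dma_zone_example(void)
{
	const struct rte_memzone *mz;

	mz = rte_memzone_reserve_aligned("ccp_example_q", 4096, SOCKET_ID_ANY,
					 RTE_MEMZONE_IOVA_CONTIG, 4096);
	if (mz == NULL)
		return -1;

	/* CPU accesses use mz->addr; the device is programmed with mz->iova */
	printf("va=%p iova=0x%" PRIx64 " len=%zu\n",
	       mz->addr, (uint64_t)mz->iova, mz->len);
	return 0;
}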