From patchwork Thu Aug 25 11:30:34 2016
X-Patchwork-Submitter: Ferruh Yigit <ferruh.yigit@intel.com>
X-Patchwork-Id: 15312
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: dev@dpdk.org
Date: Thu, 25 Aug 2016 12:30:34 +0100
Message-Id: <1472124636-27227-1-git-send-email-ferruh.yigit@intel.com>
X-Mailer: git-send-email 1.7.4.1
Subject: [dpdk-dev] [PATCH 1/3] kni: remove single mempool, single mem_chunk restriction

Use the mbuf buf_addr and buf_physaddr fields for address translation.

Since each mbuf address is now calculated separately, the restriction
that all mbufs must come from one contiguous memory chunk is no longer
valid.

The content of the mbuf-related FIFOs changes: rx_q and alloc_q now
carry physical addresses of mbufs. The content of tx_q and free_q is
unchanged; they still carry virtual addresses of mbufs.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 .../linuxapp/eal/include/exec-env/rte_kni_common.h |   3 +-
 lib/librte_eal/linuxapp/kni/kni_net.c              | 123 ++++++++++++++-------
 lib/librte_kni/rte_kni.c                           |  39 ++++++-
 3 files changed, 118 insertions(+), 47 deletions(-)
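
Note: the crux of the change is that each mbuf now records its own
virtual/physical address pair. Because the mbuf header and its data
buffer live in the same physically contiguous mempool object, the
per-mbuf delta (buf_addr - buf_physaddr) also translates the header
pointer itself, so no global mbuf_va/mbuf_kva offset (and hence no
single mem_chunk) is required. Below is a minimal standalone sketch of
that arithmetic; the toy struct and the addresses are made-up
illustrations, only the +/- delta logic mirrors the va2pa()/pa2va()
helpers added by this patch:

	#include <stdint.h>
	#include <stdio.h>

	/* stand-in for the buf_addr/buf_physaddr pair kept in each mbuf */
	struct toy_mbuf {
		void *buf_addr;        /* VA of the data buffer */
		uint64_t buf_physaddr; /* PA of the same buffer */
	};

	int main(void)
	{
		/* pretend the mbuf header sits 0x80 bytes before its buffer */
		struct toy_mbuf m = {
			.buf_addr     = (void *)0x7f2a00001080UL, /* stand-in VA */
			.buf_physaddr = 0x41001080UL,             /* stand-in PA */
		};
		unsigned long delta = (unsigned long)m.buf_addr - m.buf_physaddr;

		void *mbuf_va = (void *)((unsigned long)m.buf_addr - 0x80);
		void *mbuf_pa = (void *)((unsigned long)mbuf_va - delta); /* va2pa() */
		void *back    = (void *)((unsigned long)mbuf_pa + delta); /* pa2va() */

		printf("header va=%p -> pa=%p -> va=%p\n", mbuf_va, mbuf_pa, back);
		return 0;
	}

In the kernel module the translation additionally goes through
phys_to_virt() (pa2kva()/kva2data_kva()), which maps a physical address
into the kernel's linear mapping.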
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index 2acdfd9..ea1cd0b 100644
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -111,7 +111,8 @@ struct rte_kni_fifo {
  */
 struct rte_kni_mbuf {
 	void *buf_addr __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
-	char pad0[10];
+	uint64_t buf_physaddr;
+	char pad0[2];
 	uint16_t data_off;      /**< Start address of data in segment buffer. */
 	char pad1[2];
 	uint8_t nb_segs;        /**< Number of segments. */
diff --git a/lib/librte_eal/linuxapp/kni/kni_net.c b/lib/librte_eal/linuxapp/kni/kni_net.c
index fc82193..7d411b4 100644
--- a/lib/librte_eal/linuxapp/kni/kni_net.c
+++ b/lib/librte_eal/linuxapp/kni/kni_net.c
@@ -61,6 +61,44 @@ static int kni_net_process_request(struct kni_dev *kni,
 /* kni rx function pointer, with default to normal rx */
 static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
 
+/* physical address to kernel virtual address */
+static void *
+pa2kva(void *pa)
+{
+	return phys_to_virt((unsigned long)pa);
+}
+
+/* physical address to virtual address */
+static void *
+pa2va(void *pa, struct rte_kni_mbuf *m)
+{
+	void *va;
+
+	va = (void *)((unsigned long)pa +
+			(unsigned long)m->buf_addr -
+			(unsigned long)m->buf_physaddr);
+	return va;
+}
+
+/* mbuf data kernel virtual address from mbuf kernel virtual address */
+static void *
+kva2data_kva(struct rte_kni_mbuf *m)
+{
+	return phys_to_virt(m->buf_physaddr + m->data_off);
+}
+
+/* virtual address to physical address */
+static void *
+va2pa(void *va, struct rte_kni_mbuf *m)
+{
+	void *pa;
+
+	pa = (void *)((unsigned long)va -
+			((unsigned long)m->buf_addr -
+			 (unsigned long)m->buf_physaddr));
+	return pa;
+}
+
 /*
  * Open and close
  */
@@ -125,8 +163,9 @@ kni_net_rx_normal(struct kni_dev *kni)
 	uint32_t len;
 	unsigned i, num_rx, num_fq;
 	struct rte_kni_mbuf *kva;
-	struct rte_kni_mbuf *va[MBUF_BURST_SZ];
-	void * data_kva;
+	void *pa[MBUF_BURST_SZ];
+	void *va[MBUF_BURST_SZ];
+	void *data_kva;
 
 	struct sk_buff *skb;
 	struct net_device *dev = kni->net_dev;
@@ -142,17 +181,16 @@ kni_net_rx_normal(struct kni_dev *kni)
 	num_rx = min(num_fq, (unsigned)MBUF_BURST_SZ);
 
 	/* Burst dequeue from rx_q */
-	num_rx = kni_fifo_get(kni->rx_q, (void **)va, num_rx);
+	num_rx = kni_fifo_get(kni->rx_q, pa, num_rx);
 	if (num_rx == 0)
 		return;
 
 	/* Transfer received packets to netif */
 	for (i = 0; i < num_rx; i++) {
-		kva = (void *)va[i] - kni->mbuf_va + kni->mbuf_kva;
+		kva = pa2kva(pa[i]);
 		len = kva->pkt_len;
-
-		data_kva = kva->buf_addr + kva->data_off - kni->mbuf_va
-				+ kni->mbuf_kva;
+		data_kva = kva2data_kva(kva);
+		va[i] = pa2va(pa[i], kva);
 
 		skb = dev_alloc_skb(len + 2);
 		if (!skb) {
@@ -178,9 +216,8 @@ kni_net_rx_normal(struct kni_dev *kni)
 			if (!kva->next)
 				break;
 
-			kva = kva->next - kni->mbuf_va + kni->mbuf_kva;
-			data_kva = kva->buf_addr + kva->data_off
-					- kni->mbuf_va + kni->mbuf_kva;
+			kva = pa2kva(va2pa(kva->next, kva));
+			data_kva = kva2data_kva(kva);
 		}
 	}
 
@@ -197,7 +234,7 @@ kni_net_rx_normal(struct kni_dev *kni)
 	}
 
 	/* Burst enqueue mbufs into free_q */
-	ret = kni_fifo_put(kni->free_q, (void **)va, num_rx);
+	ret = kni_fifo_put(kni->free_q, va, num_rx);
 	if (ret != num_rx)
 		/* Failing should not happen */
 		KNI_ERR("Fail to enqueue entries into free_q\n");
@@ -213,11 +250,13 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 	uint32_t len;
 	unsigned i, num, num_rq, num_tq, num_aq, num_fq;
 	struct rte_kni_mbuf *kva;
-	struct rte_kni_mbuf *va[MBUF_BURST_SZ];
+	void *pa[MBUF_BURST_SZ];
+	void *va[MBUF_BURST_SZ];
 	void * data_kva;
 
 	struct rte_kni_mbuf *alloc_kva;
-	struct rte_kni_mbuf *alloc_va[MBUF_BURST_SZ];
+	void *alloc_pa[MBUF_BURST_SZ];
+	void *alloc_va[MBUF_BURST_SZ];
 	void *alloc_data_kva;
 
 	/* Get the number of entries in rx_q */
@@ -243,26 +282,25 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 		return;
 
 	/* Burst dequeue from rx_q */
-	ret = kni_fifo_get(kni->rx_q, (void **)va, num);
+	ret = kni_fifo_get(kni->rx_q, pa, num);
 	if (ret == 0)
 		return; /* Failing should not happen */
 
 	/* Dequeue entries from alloc_q */
-	ret = kni_fifo_get(kni->alloc_q, (void **)alloc_va, num);
+	ret = kni_fifo_get(kni->alloc_q, alloc_pa, num);
 	if (ret) {
 		num = ret;
 		/* Copy mbufs */
 		for (i = 0; i < num; i++) {
-			kva = (void *)va[i] - kni->mbuf_va + kni->mbuf_kva;
+			kva = pa2kva(pa[i]);
 			len = kva->pkt_len;
-			data_kva = kva->buf_addr + kva->data_off -
-				kni->mbuf_va + kni->mbuf_kva;
-
-			alloc_kva = (void *)alloc_va[i] - kni->mbuf_va +
-				kni->mbuf_kva;
-			alloc_data_kva = alloc_kva->buf_addr +
-				alloc_kva->data_off - kni->mbuf_va +
-				kni->mbuf_kva;
+			data_kva = kva2data_kva(kva);
+			va[i] = pa2va(pa[i], kva);
+
+			alloc_kva = pa2kva(alloc_pa[i]);
+			alloc_data_kva = kva2data_kva(alloc_kva);
+			alloc_va[i] = pa2va(alloc_pa[i], alloc_kva);
+
 			memcpy(alloc_data_kva, data_kva, len);
 			alloc_kva->pkt_len = len;
 			alloc_kva->data_len = len;
@@ -272,14 +310,14 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
 		}
 
 		/* Burst enqueue mbufs into tx_q */
-		ret = kni_fifo_put(kni->tx_q, (void **)alloc_va, num);
+		ret = kni_fifo_put(kni->tx_q, alloc_va, num);
 		if (ret != num)
 			/* Failing should not happen */
 			KNI_ERR("Fail to enqueue mbufs into tx_q\n");
 	}
 
 	/* Burst enqueue mbufs into free_q */
-	ret = kni_fifo_put(kni->free_q, (void **)va, num);
+	ret = kni_fifo_put(kni->free_q, va, num);
 	if (ret != num)
 		/* Failing should not happen */
 		KNI_ERR("Fail to enqueue mbufs into free_q\n");
@@ -302,8 +340,9 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 	uint32_t len;
 	unsigned i, num_rq, num_fq, num;
 	struct rte_kni_mbuf *kva;
-	struct rte_kni_mbuf *va[MBUF_BURST_SZ];
-	void * data_kva;
+	void *pa[MBUF_BURST_SZ];
+	void *va[MBUF_BURST_SZ];
+	void *data_kva;
 
 	struct sk_buff *skb;
 	struct net_device *dev = kni->net_dev;
@@ -323,16 +362,16 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 		return;
 
 	/* Burst dequeue mbufs from rx_q */
-	ret = kni_fifo_get(kni->rx_q, (void **)va, num);
+	ret = kni_fifo_get(kni->rx_q, pa, num);
 	if (ret == 0)
 		return;
 
 	/* Copy mbufs to sk buffer and then call tx interface */
 	for (i = 0; i < num; i++) {
-		kva = (void *)va[i] - kni->mbuf_va + kni->mbuf_kva;
+		kva = pa2kva(pa[i]);
 		len = kva->pkt_len;
-		data_kva = kva->buf_addr + kva->data_off - kni->mbuf_va +
-				kni->mbuf_kva;
+		data_kva = kva2data_kva(kva);
+		va[i] = pa2va(pa[i], kva);
 
 		skb = dev_alloc_skb(len + 2);
 		if (skb == NULL)
@@ -370,9 +409,8 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 			if (!kva->next)
 				break;
 
-			kva = kva->next - kni->mbuf_va + kni->mbuf_kva;
-			data_kva = kva->buf_addr + kva->data_off
-					- kni->mbuf_va + kni->mbuf_kva;
+			kva = pa2kva(va2pa(kva->next, kva));
+			data_kva = kva2data_kva(kva);
 		}
 	}
 
@@ -387,7 +425,7 @@ kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
 	}
 
 	/* enqueue all the mbufs from rx_q into free_q */
-	ret = kni_fifo_put(kni->free_q, (void **)&va, num);
+	ret = kni_fifo_put(kni->free_q, va, num);
 	if (ret != num)
 		/* Failing should not happen */
 		KNI_ERR("Fail to enqueue mbufs into free_q\n");
@@ -426,7 +464,8 @@ kni_net_tx(struct sk_buff *skb, struct net_device *dev)
 	unsigned ret;
 	struct kni_dev *kni = netdev_priv(dev);
 	struct rte_kni_mbuf *pkt_kva = NULL;
-	struct rte_kni_mbuf *pkt_va = NULL;
+	void *pkt_pa = NULL;
+	void *pkt_va = NULL;
 
 	/* save the timestamp */
 #ifdef HAVE_TRANS_START_HELPER
@@ -453,13 +492,13 @@ kni_net_tx(struct sk_buff *skb, struct net_device *dev)
 	}
 
 	/* dequeue a mbuf from alloc_q */
-	ret = kni_fifo_get(kni->alloc_q, (void **)&pkt_va, 1);
+	ret = kni_fifo_get(kni->alloc_q, &pkt_pa, 1);
 	if (likely(ret == 1)) {
 		void *data_kva;
 
-		pkt_kva = (void *)pkt_va - kni->mbuf_va + kni->mbuf_kva;
-		data_kva = pkt_kva->buf_addr + pkt_kva->data_off - kni->mbuf_va
-				+ kni->mbuf_kva;
+		pkt_kva = pa2kva(pkt_pa);
+		data_kva = kva2data_kva(pkt_kva);
+		pkt_va = pa2va(pkt_pa, pkt_kva);
 
 		len = skb->len;
 		memcpy(data_kva, skb->data, len);
@@ -471,7 +510,7 @@ kni_net_tx(struct sk_buff *skb, struct net_device *dev)
 		pkt_kva->data_len = len;
 
 		/* enqueue mbuf into tx_q */
-		ret = kni_fifo_put(kni->tx_q, (void **)&pkt_va, 1);
+		ret = kni_fifo_put(kni->tx_q, &pkt_va, 1);
 		if (unlikely(ret != 1)) {
 			/* Failing should not happen */
 			KNI_ERR("Fail to enqueue mbuf into tx_q\n");
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index f48b72b..0f7c9e5 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -455,6 +455,20 @@ kni_free_fifo(struct rte_kni_fifo *fifo)
 	} while (ret);
 }
 
+static void
+kni_free_fifo_phy(struct rte_kni_fifo *fifo)
+{
+	void *mbuf_phys;
+	int ret;
+
+	do {
+		ret = kni_fifo_get(fifo, &mbuf_phys, 1);
+		/*
+		 * TODO: free mbufs
+		 */
+	} while (ret);
+}
+
 int
 rte_kni_release(struct rte_kni *kni)
 {
@@ -472,8 +486,8 @@ rte_kni_release(struct rte_kni *kni)
 
 	/* mbufs in all fifo should be released, except request/response */
 	kni_free_fifo(kni->tx_q);
-	kni_free_fifo(kni->rx_q);
-	kni_free_fifo(kni->alloc_q);
+	kni_free_fifo_phy(kni->rx_q);
+	kni_free_fifo_phy(kni->alloc_q);
 	kni_free_fifo(kni->free_q);
 
 	slot_id = kni->slot_id;
@@ -537,10 +551,25 @@ rte_kni_handle_request(struct rte_kni *kni)
 	return 0;
 }
 
+static void *
+va2pa(struct rte_mbuf *m)
+{
+	return (void *)((unsigned long)m -
+			((unsigned long)m->buf_addr -
+			 (unsigned long)m->buf_physaddr));
+}
+
 unsigned
 rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
 {
-	unsigned ret = kni_fifo_put(kni->rx_q, (void **)mbufs, num);
+	void *phy_mbufs[num];
+	unsigned int ret;
+	unsigned int i;
+
+	for (i = 0; i < num; i++)
+		phy_mbufs[i] = va2pa(mbufs[i]);
+
+	ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);
 
 	/* Get mbufs from free_q and then free them */
 	kni_free_mbufs(kni);
@@ -578,6 +607,7 @@ kni_allocate_mbufs(struct rte_kni *kni)
 {
 	int i, ret;
 	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
+	void *phys[MAX_MBUF_BURST_NUM];
 
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pool) !=
 			 offsetof(struct rte_kni_mbuf, pool));
@@ -607,13 +637,14 @@ kni_allocate_mbufs(struct rte_kni *kni)
 			RTE_LOG(ERR, KNI, "Out of memory\n");
 			break;
 		}
+		phys[i] = va2pa(pkts[i]);
 	}
 
 	/* No pkt mbuf alocated */
 	if (i <= 0)
 		return;
 
-	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);
+	ret = kni_fifo_put(kni->alloc_q, phys, i);
 
 	/* Check if any mbufs not put into alloc_q, and then free them */
 	if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
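
Note: the application-facing API is unchanged by this patch; the
VA-to-PA conversion happens inside rte_kni_tx_burst() and
kni_allocate_mbufs(). A short usage sketch, modelled loosely on the
kni sample application (the function name, port/queue ids and burst
size are placeholders):

	#include <rte_ethdev.h>
	#include <rte_kni.h>
	#include <rte_mbuf.h>

	#define PKT_BURST_SZ 32

	static void
	eth_to_kni(uint8_t port_id, struct rte_kni *kni)
	{
		struct rte_mbuf *pkts[PKT_BURST_SZ];
		unsigned nb_rx, nb_tx, i;

		nb_rx = rte_eth_rx_burst(port_id, 0, pkts, PKT_BURST_SZ);
		/* rx_q now carries PAs, but callers still hand in mbuf VAs */
		nb_tx = rte_kni_tx_burst(kni, pkts, nb_rx);
		/* free whatever the fifo did not accept */
		for (i = nb_tx; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]);
	}

Also note that rx_q and alloc_q hold physical addresses at release
time, which is why kni_free_fifo_phy() above cannot simply call
rte_pktmbuf_free() on the dequeued pointers (hence the TODO).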