Message ID | 14248648813214-git-send-email-Hemant@freescale.com (mailing list archive) |
---|---|
State | Changes Requested, archived |
Headers |
From: Hemant Agrawal <Hemant@freescale.com>
To: dev@dpdk.org
Date: Wed, 25 Feb 2015 17:18:01 +0530
Message-ID: <14248648813214-git-send-email-Hemant@freescale.com>
Subject: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
List-Id: patches and discussions about DPDK <dev.dpdk.org> |
Commit Message
Hemant Agrawal
Feb. 25, 2015, 11:48 a.m. UTC
From: Hemant Agrawal <hemant@freescale.com>

If any buffer is read from the tx_q, MAX_BURST buffers will be allocated and an attempt made to add them to the alloc_q. This is inefficient, and the alloc_q quickly fills to its maximum capacity; if the system is low on buffers, it reaches an "out of memory" situation.

This patch allocates only as many buffers as were dequeued from the tx_q.

Signed-off-by: Hemant Agrawal <hemant@freescale.com>
---
 lib/librte_kni/rte_kni.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
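Stripped of the surrounding KNI plumbing, the allocation-count rule the patch introduces can be sketched as a standalone helper (a minimal sketch; the constant's value is taken as the usual DPDK burst size and is illustrative here):

```c
#include <assert.h>

#define MAX_MBUF_BURST_NUM 32  /* illustrative; matches DPDK's default burst size */

/* Sketch of the clamping rule kni_allocate_mbufs() gains with this patch:
 * allocate exactly as many mbufs as were dequeued from tx_q, falling back
 * to a full burst when the caller passes 0 or an oversized count. */
static int clamp_alloc_count(int num)
{
	if (num == 0 || num > MAX_MBUF_BURST_NUM)
		num = MAX_MBUF_BURST_NUM;
	return num;
}
```

With this rule, `kni_allocate_mbufs(kni, ret)` refills only what the kernel consumed, while the legacy call pattern (`num == 0`) still fills a whole burst.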
Comments
Thank you Hemant, I think there might be one issue left with the patch though.

The alloc_q must initially be filled with mbufs before getting mbufs back on the tx_q. So the patch should allow rte_kni_rx_burst to check if alloc_q is empty. If so, it should invoke kni_allocate_mbufs(kni, 0) (to fill the alloc_q with MAX_MBUF_BURST_NUM mbufs).

The patch for rte_kni_rx_burst would then look like:

@@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)

 	/* If buffers removed, allocate mbufs and then put them into alloc_q */
 	if (ret)
-		kni_allocate_mbufs(kni);
+		kni_allocate_mbufs(kni, ret);
+	else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
+		kni_allocate_mbufs(kni, 0);

Olivier.

On 25/02/15 11:48, Hemant Agrawal wrote:
> From: Hemant Agrawal <hemant@freescale.com>
>
> if any buffer is read from the tx_q, MAX_BURST buffers will be allocated and attempted to be added to the alloc_q.
> This seems terribly inefficient and it also looks like the alloc_q will quickly fill to its maximum capacity. If the system buffers are low in number, it will reach "out of memory" situation.
>
> This patch allocates the number of buffers as many dequeued from tx_q.
>
> Signed-off-by: Hemant Agrawal <hemant@freescale.com>
> ---
>  lib/librte_kni/rte_kni.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
> index 4e70fa0..4cf8e30 100644
> --- a/lib/librte_kni/rte_kni.c
> +++ b/lib/librte_kni/rte_kni.c
> @@ -128,7 +128,7 @@ struct rte_kni_memzone_pool {
>
>
>  static void kni_free_mbufs(struct rte_kni *kni);
> -static void kni_allocate_mbufs(struct rte_kni *kni);
> +static void kni_allocate_mbufs(struct rte_kni *kni, int num);
>
>  static volatile int kni_fd = -1;
>  static struct rte_kni_memzone_pool kni_memzone_pool = {
> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
>
>  	/* If buffers removed, allocate mbufs and then put them into alloc_q */
>  	if (ret)
> -		kni_allocate_mbufs(kni);
> +		kni_allocate_mbufs(kni, ret);
>
>  	return ret;
>  }
> @@ -594,7 +594,7 @@ kni_free_mbufs(struct rte_kni *kni)
>  }
>
>  static void
> -kni_allocate_mbufs(struct rte_kni *kni)
> +kni_allocate_mbufs(struct rte_kni *kni, int num)
>  {
>  	int i, ret;
>  	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
> @@ -620,7 +620,10 @@ kni_allocate_mbufs(struct rte_kni *kni)
>  		return;
>  	}
>
> -	for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
> +	if (num == 0 || num > MAX_MBUF_BURST_NUM)
> +		num = MAX_MBUF_BURST_NUM;
> +
> +	for (i = 0; i < num; i++) {
>  		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
>  		if (unlikely(pkts[i] == NULL)) {
>  			/* Out of memory */
> @@ -636,7 +639,7 @@ kni_allocate_mbufs(struct rte_kni *kni)
>  	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);
>
>  	/* Check if any mbufs not put into alloc_q, and then free them */
> -	if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
> +	if (ret >= 0 && ret < i && ret < num) {
>  		int j;
>
>  		for (j = ret; j < i; j++)
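Olivier's proposed refill policy (replace exactly what the kernel consumed; top up only when the alloc_q has drained) can be sketched as a pure decision function. This is a hedged sketch: the names are illustrative, not part of the rte_kni API, and the empty check stands in for the `write == read` FIFO comparison in the proposed diff.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_MBUF_BURST_NUM 32  /* illustrative default burst size */

/* How many mbufs rte_kni_rx_burst should push back into alloc_q,
 * given how many it just dequeued from tx_q and whether alloc_q
 * is currently empty (write == read on the FIFO). */
static int refill_count(int nb_dequeued, bool alloc_q_empty)
{
	if (nb_dequeued > 0)
		return nb_dequeued;        /* replace what the kernel consumed */
	if (alloc_q_empty)
		return MAX_MBUF_BURST_NUM; /* prime a drained alloc_q with a full burst */
	return 0;                          /* alloc_q still stocked; nothing to do */
}
```

Note that in the actual patch the "full burst" case is expressed by calling `kni_allocate_mbufs(kni, 0)`, since a `num` of 0 is clamped up to MAX_MBUF_BURST_NUM inside the callee.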
Hi Olivier,
Comments inline.
Regards,
Hemant

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Deme
> Sent: 25/Feb/2015 5:44 PM
> To: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
>
> Thank you Hemant, I think there might be one issue left with the patch though.
> The alloc_q must initially be filled with mbufs before getting mbufs back on the
> tx_q.
>
> So the patch should allow rte_kni_rx_burst to check if alloc_q is empty.
> If so, it should invoke kni_allocate_mbufs(kni, 0) (to fill the alloc_q with
> MAX_MBUF_BURST_NUM mbufs)
>
> The patch for rte_kni_rx_burst would then look like:
>
> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf
> **mbufs, unsigned num)
>
>  	/* If buffers removed, allocate mbufs and then put them into alloc_q */
>  	if (ret)
> -		kni_allocate_mbufs(kni);
> +		kni_allocate_mbufs(kni, ret);
> +	else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
> +		kni_allocate_mbufs(kni, 0);
>
[hemant] This will introduce a run-time check.

I missed including the other change in the patch. I am doing it in kni_alloc, i.e. initializing the alloc_q with the default burst size:
	kni_allocate_mbufs(ctx, 0);

In a way, we are now suggesting to reduce the size of alloc_q to only the default burst size.

Can we reach a situation where the kernel is adding packets to the tx_q faster than the application is able to dequeue them? alloc_q can be empty in this case and the kernel will be starving.

> Olivier.
>
> On 25/02/15 11:48, Hemant Agrawal wrote:
>> [...original patch quoted in full; snipped...]
>
> --
> *Olivier Demé*
> *Druid Software Ltd.*
> *Tel: +353 1 202 1831*
> *Email: odeme@druidsoftware.com <mailto:odeme@druidsoftware.com>*
> *URL: http://www.druidsoftware.com*
> *Hall 7, stand 7F70.*
> Druid Software: Monetising enterprise small cells solutions.
I guess it would be unusual but possible for the kernel to enqueue faster to tx_q than the application dequeues. But that would also be possible with a real NIC, so I think it is acceptable for the kernel to have to drop egress packets in that case.

On 25/02/15 12:24, Hemant@freescale.com wrote:
> Hi Olivier
> Comments inline.
> Regards,
> Hemant
>
>> [...proposed patch for rte_kni_rx_burst quoted; snipped...]
>>
> [hemant] This will introduce a run-time check.
>
> I missed to include the other change in the patch.
> I am doing it in kni_alloc i.e. initiate the alloc_q with default burst size.
> 	kni_allocate_mbufs(ctx, 0);
>
> In a way, we are now suggesting to reduce the size of alloc_q to only default burst size.
>
> Can we reach a situation when the kernel is adding packets faster in tx_q than the application is able to dequeue?
> alloc_q can be empty in this case and kernel will be starving.
>
>> [...original patch and signature quoted; snipped...]
On 25/02/15 13:24, Hemant@freescale.com wrote:
> Hi Olivier
> Comments inline.
> Regards,
> Hemant
>
>> [...proposed patch for rte_kni_rx_burst quoted; snipped...]
>>
> [hemant] This will introduce a run-time check.
>
> I missed to include the other change in the patch.
> I am doing it in kni_alloc i.e. initiate the alloc_q with default burst size.
> 	kni_allocate_mbufs(ctx, 0);
>
> In a way, we are now suggesting to reduce the size of alloc_q to only default burst size.

As an aside comment here, I think that we should allow tweaking the userspace <-> kernel queue sizes (rx_q, tx_q, free_q and alloc_q). Whether this should be a build configuration option or a parameter to rte_kni_init() is not completely clear to me, but I guess rte_kni_init() is the better option.

Having said that, the original mail from Hemant was describing that KNI was giving an out-of-memory error. This to me indicates that the pool is incorrectly dimensioned. Even if KNI does not pre-allocate the alloc_q, or not completely, in the event of high load you will get this same "out of memory".

We can reduce the usage of buffers by the KNI subsystem in kernel space and in userspace, but the kernel will always need a small cache of pre-allocated buffers (coming from user space), since the KNI kernel module does not know which pool to grab packets from. So my guess is that the dimensioning problem experienced by Hemant would be the same, even with the proposed changes.

> Can we reach a situation when the kernel is adding packets faster in tx_q than the application is able to dequeue?

I think so. We cannot control much how the kernel will schedule the KNI thread(s), especially if the number of threads in relation to the cores is incorrect (not enough), hence we need at least a reasonable amount of buffering to prevent early dropping due to those "internal" burst side effects.

Marc

> alloc_q can be empty in this case and kernel will be starving.
>
>> [...original patch and signature quoted; snipped...]
Hi Marc,

I think one of the observations is that currently the alloc_q grows very quickly to the maximum FIFO size (1024). The patch suggests fixing the alloc_q to a fixed size, and maybe making that size configurable in rte_kni_alloc or rte_kni_init. It should then be up to the application to provision the mempool accordingly.

Currently the out-of-memory problem shows up if the mempool doesn't have 1024 buffers per KNI.

Olivier.

On 25/02/15 12:38, Marc Sune wrote:
> [...full reply quoted; snipped...]
On Wed, Feb 25, 2015 at 6:38 AM, Marc Sune <marc.sune@bisdn.de> wrote:
>
> On 25/02/15 13:24, Hemant@freescale.com wrote:
>> [...earlier discussion quoted; snipped...]
>
> As an aside comment here, I think that we should allow to tweak the
> userspace <-> kernel queue sizes (rx_q, tx_q, free_q and alloc_q). Whether
> this should be a build configuration option or a parameter to
> rte_kni_init(), it is not completely clear to me, but I guess
> rte_kni_init() is a better option.

rte_kni_init() is definitely a better option. It allows things to be tuned based on individual system config rather than requiring different builds.

> Having said that, the original mail from Hemant was describing that KNI
> was giving an out-of-memory. This to me indicates that the pool is
> incorrectly dimensioned. Even if KNI will not pre-allocate in the alloc_q,
> or not completely, in the event of high load, you will get this same "out
> of memory".
>
> We can reduce the usage of buffers by the KNI subsystem in kernel space
> and in userspace, but the kernel will always need a small cache of
> pre-allocated buffers (coming from user-space), since the KNI kernel module
> does not know where to grab the packets from (which pool). So my guess is
> that the dimensioning problem experienced by Hemant would be the same, even
> with the proposed changes.
>
>> Can we reach a situation when the kernel is adding packets faster in
>> tx_q than the application is able to dequeue?
>
> I think so. We cannot control much how the kernel will schedule the KNI
> thread(s), specially if the # of threads in relation to the cores is
> incorrect (not enough), hence we need at least a reasonable amount of
> buffering to prevent early dropping due to those "internal" burst side effects.
>
> Marc

Strongly agree with Marc here. We *really* don't want just a single burst worth of mbufs available to the kernel in alloc_q. That's just asking for congestion when there's no need for it.

The original problem reported by Olivier is more of a resource tuning problem than anything else. The number of mbufs you need in the system has to take into account internal queue depths.

Jay
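Jay's tuning point can be made concrete with back-of-envelope arithmetic. All numbers below are assumptions for illustration (only the 1024 FIFO depth is taken from the thread), not figures from the discussion:

```c
#include <assert.h>

/* Illustrative mempool sizing for a KNI deployment: the pool backing a
 * set of KNI interfaces must cover the alloc_q depth of every interface
 * plus the application's in-flight burst and some slack for mbufs
 * sitting in the other internal queues, or allocation will hit
 * "out of memory" under load. */
enum {
	KNI_FIFO_DEPTH = 1024, /* alloc_q depth mentioned in the thread */
	NUM_KNI_IFACES = 4,    /* assumed number of KNI interfaces */
	APP_BURST      = 32,   /* assumed mbufs the app holds per rx burst */
	HEADROOM       = 512   /* assumed slack for tx_q/free_q in flight */
};

static unsigned min_pool_size(void)
{
	return NUM_KNI_IFACES * KNI_FIFO_DEPTH + APP_BURST + HEADROOM;
}
```

With these assumed numbers the pool needs at least 4 * 1024 + 32 + 512 = 4640 mbufs; undersizing relative to this kind of total is exactly the "1024 buffers per KNI" provisioning issue Olivier describes.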
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jay Rolette
> Sent: 25/Feb/2015 7:00 PM
> To: Marc Sune
> Cc: DPDK
> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
>
> On Wed, Feb 25, 2015 at 6:38 AM, Marc Sune <marc.sune@bisdn.de> wrote:
>
> > On 25/02/15 13:24, Hemant@freescale.com wrote:
> >
> >> Hi Olivier,
> >> Comments inline.
> >> Regards,
> >> Hemant
> >>
> >> -----Original Message-----
> >>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Deme
> >>> Sent: 25/Feb/2015 5:44 PM
> >>> To: dev@dpdk.org
> >>> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
> >>>
> >>> Thank you Hemant, I think there might be one issue left with the
> >>> patch though.
> >>> The alloc_q must initially be filled with mbufs before getting mbufs
> >>> back on the tx_q.
> >>>
> >>> So the patch should allow rte_kni_rx_burst to check if alloc_q is empty.
> >>> If so, it should invoke kni_allocate_mbufs(kni, 0) (to fill the
> >>> alloc_q with MAX_MBUF_BURST_NUM mbufs).
> >>>
> >>> The patch for rte_kni_rx_burst would then look like:
> >>>
> >>> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct
> >>> rte_mbuf **mbufs, unsigned num)
> >>>
> >>>       /* If buffers removed, allocate mbufs and then put them into alloc_q */
> >>>       if (ret)
> >>> -             kni_allocate_mbufs(kni);
> >>> +             kni_allocate_mbufs(kni, ret);
> >>> +     else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
> >>> +             kni_allocate_mbufs(kni, 0);
> >>>
> >>> [hemant] This will introduce a run-time check.
> >>
> >> I missed including the other change in the patch.
> >> I am doing it in kni_alloc, i.e. initializing the alloc_q with the
> >> default burst size:
> >> kni_allocate_mbufs(ctx, 0);
> >>
> >> In a way, we are now suggesting to reduce the size of alloc_q to only
> >> the default burst size.
> >>
> >
> > As an aside comment here, I think that we should allow tweaking the
> > userspace <-> kernel queue sizes (rx_q, tx_q, free_q and alloc_q).
> > Whether this should be a build configuration option or a parameter to
> > rte_kni_init() is not completely clear to me, but I guess
> > rte_kni_init() is the better option.
>
> rte_kni_init() is definitely the better option. It allows things to be
> tuned based on individual system config rather than requiring different
> builds.
>
> > Having said that, the original mail from Hemant was describing that
> > KNI was giving an out-of-memory error. To me this indicates that the
> > pool is incorrectly dimensioned. Even if KNI does not pre-allocate the
> > alloc_q, or not completely, under high load you will hit the same "out
> > of memory".
> >
> > We can reduce the usage of buffers by the KNI subsystem in kernel
> > space and in userspace, but the kernel will always need a small cache
> > of pre-allocated buffers (coming from user space), since the KNI
> > kernel module does not know where to grab the packets from (which
> > pool). So my guess is that the dimensioning problem experienced by
> > Hemant would be the same, even with the proposed changes.
> >
> >> Can we reach a situation where the kernel is adding packets to tx_q
> >> faster than the application is able to dequeue them?
> >
> > I think so. We cannot control much how the kernel will schedule the
> > KNI thread(s), especially if the # of threads in relation to the cores
> > is incorrect (not enough), hence we need at least a reasonable amount
> > of buffering to prevent early dropping due to those "internal" burst
> > side effects.
> >
> > Marc
>
> Strongly agree with Marc here. We *really* don't want just a single burst
> worth of mbufs available to the kernel in alloc_q. That's just asking for
> congestion when there's no need for it.
>
> The original problem reported by Olivier is more of a resource tuning
> problem than anything else.
> The number of mbufs you need in the system has to take into account
> internal queue depths.

[hemant] Following are my suggestions for the time being:

1. The existing code allocates X buffers and tries to add them to alloc_q.
   If alloc_q has no space, it frees them. This is not optimized at all.
   In rx_burst, we should only add as many packets as were removed from
   tx_q.
2. During kni_alloc, we can have kni_allocate_mbufs put X*Y buffers
   initially into alloc_q. We can further improve this by making it
   configurable in a future enhancement. For now, the value of Y can be 2.
3. kni_allocate_mbufs will allocate as many buffers as requested by its
   function parameter.

>
> Jay
On 26/02/15 08:00, Hemant@freescale.com wrote:
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jay Rolette
>> Sent: 25/Feb/2015 7:00 PM
>> To: Marc Sune
>> Cc: DPDK
>> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
>>
>> On Wed, Feb 25, 2015 at 6:38 AM, Marc Sune <marc.sune@bisdn.de> wrote:
>>
>>> On 25/02/15 13:24, Hemant@freescale.com wrote:
>>>
>>>> Hi Olivier,
>>>> Comments inline.
>>>> Regards,
>>>> Hemant
>>>>
>>>> -----Original Message-----
>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Deme
>>>>> Sent: 25/Feb/2015 5:44 PM
>>>>> To: dev@dpdk.org
>>>>> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
>>>>>
>>>>> Thank you Hemant, I think there might be one issue left with the
>>>>> patch though.
>>>>> The alloc_q must initially be filled with mbufs before getting mbufs
>>>>> back on the tx_q.
>>>>>
>>>>> So the patch should allow rte_kni_rx_burst to check if alloc_q is empty.
>>>>> If so, it should invoke kni_allocate_mbufs(kni, 0) (to fill the
>>>>> alloc_q with MAX_MBUF_BURST_NUM mbufs).
>>>>>
>>>>> The patch for rte_kni_rx_burst would then look like:
>>>>>
>>>>> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct
>>>>> rte_mbuf **mbufs, unsigned num)
>>>>>
>>>>>       /* If buffers removed, allocate mbufs and then put them into alloc_q */
>>>>>       if (ret)
>>>>> -             kni_allocate_mbufs(kni);
>>>>> +             kni_allocate_mbufs(kni, ret);
>>>>> +     else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
>>>>> +             kni_allocate_mbufs(kni, 0);
>>>>>
>>>>> [hemant] This will introduce a run-time check.
>>>>
>>>> I missed including the other change in the patch.
>>>> I am doing it in kni_alloc, i.e. initializing the alloc_q with the
>>>> default burst size:
>>>> kni_allocate_mbufs(ctx, 0);
>>>>
>>>> In a way, we are now suggesting to reduce the size of alloc_q to only
>>>> the default burst size.
>>>>
>>> As an aside comment here, I think that we should allow tweaking the
>>> userspace <-> kernel queue sizes (rx_q, tx_q, free_q and alloc_q).
>>> Whether this should be a build configuration option or a parameter to
>>> rte_kni_init() is not completely clear to me, but I guess
>>> rte_kni_init() is the better option.
>>>
>> rte_kni_init() is definitely the better option. It allows things to be
>> tuned based on individual system config rather than requiring different
>> builds.
>>
>>> Having said that, the original mail from Hemant was describing that
>>> KNI was giving an out-of-memory error. To me this indicates that the
>>> pool is incorrectly dimensioned. Even if KNI does not pre-allocate the
>>> alloc_q, or not completely, under high load you will hit the same "out
>>> of memory".
>>>
>>> We can reduce the usage of buffers by the KNI subsystem in kernel
>>> space and in userspace, but the kernel will always need a small cache
>>> of pre-allocated buffers (coming from user space), since the KNI
>>> kernel module does not know where to grab the packets from (which
>>> pool). So my guess is that the dimensioning problem experienced by
>>> Hemant would be the same, even with the proposed changes.
>>>
>>>> Can we reach a situation where the kernel is adding packets to tx_q
>>>> faster than the application is able to dequeue them?
>>>>
>>> I think so. We cannot control much how the kernel will schedule the
>>> KNI thread(s), especially if the # of threads in relation to the cores
>>> is incorrect (not enough), hence we need at least a reasonable amount
>>> of buffering to prevent early dropping due to those "internal" burst
>>> side effects.
>>>
>>> Marc
>>
>> Strongly agree with Marc here. We *really* don't want just a single burst
>> worth of mbufs available to the kernel in alloc_q. That's just asking for
>> congestion when there's no need for it.
>>
>> The original problem reported by Olivier is more of a resource tuning
>> problem than anything else.
>> The number of mbufs you need in the system has to take into account
>> internal queue depths.
>
> [hemant] Following are my suggestions for the time being.
> 1. The existing code allocates X buffers and tries to add them to alloc_q.
> If alloc_q has no space, it frees them. This is not optimized at all. In
> rx_burst, we should only add as many packets as were removed from tx_q.

Agree.

> 2. During kni_alloc, we can have kni_allocate_mbufs put X*Y buffers
> initially into alloc_q. We can further improve this by making it
> configurable in a future enhancement. For now, the value of Y can be 2.

Provided that the dimensioning (X*Y), if defined at runtime, is set during
rte_kni_init(), in principle I agree. However, it is not clear to me whether
you want to call kni_allocate_mbufs(X*Y) for every kni_alloc() or just for
the first one (in other words, whether X*Y == size of alloc_q). Since
alloc_q is shared, and assuming X*Y == size of alloc_q, I think doing it in
the first kni_alloc() would be sufficient; it will then get refilled as
RX/TX events happen.

A different approach, which would require more refactoring since it slightly
changes the current strategy, would be to pre-allocate the alloc_q based on
the number of KNI interfaces created (kni_alloc). In this sense,
rte_kni_init() would then take 2 parameters: the length of the entire shared
alloc_q (actually all the queues in the KNI subsystem, with the current
implementation) and the number of buffers per KNI interface. This approach
could lower the mbuf consumption in certain configurations.

> 3. kni_allocate_mbufs will allocate as many buffers as requested by its
> function parameter.

Agree.

Marc

>> Jay
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index 4e70fa0..4cf8e30 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -128,7 +128,7 @@ struct rte_kni_memzone_pool {
 
 static void kni_free_mbufs(struct rte_kni *kni);
 
-static void kni_allocate_mbufs(struct rte_kni *kni);
+static void kni_allocate_mbufs(struct rte_kni *kni, int num);
 
 static volatile int kni_fd = -1;
 static struct rte_kni_memzone_pool kni_memzone_pool = {
@@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned num)
 
 	/* If buffers removed, allocate mbufs and then put them into alloc_q */
 	if (ret)
-		kni_allocate_mbufs(kni);
+		kni_allocate_mbufs(kni, ret);
 
 	return ret;
 }
@@ -594,7 +594,7 @@ kni_free_mbufs(struct rte_kni *kni)
 }
 
 static void
-kni_allocate_mbufs(struct rte_kni *kni)
+kni_allocate_mbufs(struct rte_kni *kni, int num)
 {
 	int i, ret;
 	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
@@ -620,7 +620,10 @@ kni_allocate_mbufs(struct rte_kni *kni)
 		return;
 	}
 
-	for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
+	if (num == 0 || num > MAX_MBUF_BURST_NUM)
+		num = MAX_MBUF_BURST_NUM;
+
+	for (i = 0; i < num; i++) {
 		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
 		if (unlikely(pkts[i] == NULL)) {
 			/* Out of memory */
@@ -636,7 +639,7 @@ kni_allocate_mbufs(struct rte_kni *kni)
 	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);
 
 	/* Check if any mbufs not put into alloc_q, and then free them */
-	if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
+	if (ret >= 0 && ret < i && ret < num) {
 		int j;
 
 		for (j = ret; j < i; j++)