From patchwork Mon Oct 15 12:52:41 2018
X-Patchwork-Submitter: Ed Czeck <ed.czeck@atomicrules.com>
X-Patchwork-Id: 46831
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Ed Czeck <ed.czeck@atomicrules.com>
To: dev@dpdk.org
Cc: john.miller@atomicrules.com, shepard.siegel@atomicrules.com,
	ferruh.yigit@intel.com, Ed Czeck <ed.czeck@atomicrules.com>
Date: Mon, 15 Oct 2018 08:52:41 -0400
Message-Id: <1539607961-20851-1-git-send-email-ed.czeck@atomicrules.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1539033410-21422-1-git-send-email-ed.czeck@atomicrules.com>
References: <1539033410-21422-1-git-send-email-ed.czeck@atomicrules.com>
Subject: [dpdk-dev] [PATCH v2 1/3] net/ark: add recovery code for lack of mbufs during runtime

Attempt to allocate a smaller chunk of mbufs when the larger amount is not
available. Report an error when even the smaller chunk is not available.
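For context, the recovery path amounts to a bulk-allocation fallback: ask the
mempool for the full number of mbufs, and if that fails, retry with a smaller
fixed-size chunk. The sketch below is a minimal standalone illustration of that
pattern, not the driver's actual code (that is eth_ark_rx_seed_mbufs() and
eth_ark_rx_seed_recovery() in the diff); the helper name seed_with_fallback is
hypothetical, and the 64-mbuf chunk size simply mirrors the patch.

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustrative helper (not part of the patch): try the full bulk
 * allocation first, then fall back to a 64-mbuf chunk when the pool
 * is nearly exhausted, updating *nb to the count actually requested.
 */
static int
seed_with_fallback(struct rte_mempool *mb_pool,
		   struct rte_mbuf **mbufs, uint32_t *nb)
{
	if (rte_pktmbuf_alloc_bulk(mb_pool, mbufs, *nb) == 0)
		return 0;

	/* The full request failed; a smaller request is only worth
	 * trying when the original one exceeded the fallback chunk.
	 */
	if (*nb <= 64)
		return -1;

	*nb = 64;
	return rte_pktmbuf_alloc_bulk(mb_pool, mbufs, *nb);
}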
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
---
 drivers/net/ark/ark_ethdev_rx.c | 48 +++++++++++++++++++++++++++++++++++------
 1 file changed, 42 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c
index 16f0d11..5751585 100644
--- a/drivers/net/ark/ark_ethdev_rx.c
+++ b/drivers/net/ark/ark_ethdev_rx.c
@@ -25,6 +25,9 @@ static uint32_t eth_ark_rx_jumbo(struct ark_rx_queue *queue,
 				 struct rte_mbuf *mbuf0,
 				 uint32_t cons_index);
 static inline int eth_ark_rx_seed_mbufs(struct ark_rx_queue *queue);
+static int eth_ark_rx_seed_recovery(struct ark_rx_queue *queue,
+				    uint32_t *pnb,
+				    struct rte_mbuf **mbufs);
 
 /* ************************************************************************* */
 struct ark_rx_queue {
@@ -196,20 +199,25 @@ eth_ark_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	/* populate mbuf reserve */
 	status = eth_ark_rx_seed_mbufs(queue);
 
+	if (queue->seed_index != nb_desc) {
+		PMD_DRV_LOG(ERR, "ARK: Failed to allocate %u mbufs for RX queue %d\n",
+			    nb_desc, qidx);
+		status = -1;
+	}
 	/* MPU Setup */
 	if (status == 0)
 		status = eth_ark_rx_hw_setup(dev, queue, qidx, queue_idx);
 
 	if (unlikely(status != 0)) {
-		struct rte_mbuf *mbuf;
+		struct rte_mbuf **mbuf;
 
 		PMD_DRV_LOG(ERR, "Failed to initialize RX queue %d %s\n",
			    qidx,
			    __func__);
 		/* Free the mbufs allocated */
-		for (i = 0, mbuf = queue->reserve_q[0];
-		     i < nb_desc; ++i, mbuf++) {
-			rte_pktmbuf_free(mbuf);
+		for (i = 0, mbuf = queue->reserve_q;
+		     i < queue->seed_index; ++i, mbuf++) {
+			rte_pktmbuf_free(*mbuf);
 		}
 		rte_free(queue->reserve_q);
 		rte_free(queue->paddress_q);
@@ -446,8 +454,13 @@ eth_ark_rx_seed_mbufs(struct ark_rx_queue *queue)
 	struct rte_mbuf **mbufs = &queue->reserve_q[seed_m];
 	int status = rte_pktmbuf_alloc_bulk(queue->mb_pool, mbufs, nb);
 
-	if (unlikely(status != 0))
-		return -1;
+	if (unlikely(status != 0)) {
+		/* Try to recover from lack of mbufs in pool */
+		status = eth_ark_rx_seed_recovery(queue, &nb, mbufs);
+		if (unlikely(status != 0)) {
+			return -1;
+		}
+	}
 
 	if (ARK_RX_DEBUG) {	/* DEBUG */
 		while (count != nb) {
@@ -495,6 +508,29 @@ eth_ark_rx_seed_mbufs(struct ark_rx_queue *queue)
 	return 0;
 }
 
+int
+eth_ark_rx_seed_recovery(struct ark_rx_queue *queue,
+			 uint32_t *pnb,
+			 struct rte_mbuf **mbufs)
+{
+	int status = -1;
+
+	/* Ignore small allocation failures */
+	if (*pnb <= 64)
+		return -1;
+
+	*pnb = 64U;
+	status = rte_pktmbuf_alloc_bulk(queue->mb_pool, mbufs, *pnb);
+	if (status != 0) {
+		PMD_DRV_LOG(ERR,
+			    "ARK: Could not allocate %u mbufs from pool for RX queue %u;"
+			    " %u free buffers remaining in queue\n",
+			    *pnb, queue->queue_index,
+			    queue->seed_index - queue->cons_index);
+	}
+	return status;
+}
+
 void
 eth_ark_rx_dump_queue(struct rte_eth_dev *dev, uint16_t queue_id,
 		      const char *msg)