From patchwork Wed Aug 23 08:15:05 2017
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 27742
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Nelio Laranjeiro
To: dev@dpdk.org, Ferruh Yigit
Cc: Shahaf Shuler
Date: Wed, 23 Aug 2017 10:15:05 +0200
Message-Id: <06164fc7b495f9bf8f6f58199f23bc5d8927d363.1503475999.git.nelio.laranjeiro@6wind.com>
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH v2 1/8] net/mlx5: avoid reusing old queue's mbuf on reconfigure

This patch prepares for merging the fake mbuf allocation needed by the
vector code into rxq_alloc_elts(), where all mbufs of the queue should be
allocated.  To that end, rxq_alloc_elts() no longer accepts a pool of
buffers recycled from the previous queue configuration; every element is
now allocated fresh from the queue's mempool.
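
For context, a minimal sketch of the allocation loop as it behaves after
this patch (simplified from the diff below; error logging and cleanup of
already-allocated elements are elided, names as in mlx5_rxq.c):

    /* Sketch: after this patch every element is a freshly allocated
     * mbuf; buffers from a previous queue configuration are never
     * reused. */
    static int
    rxq_alloc_elts_sketch(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n)
    {
            unsigned int i;

            for (i = 0; i != elts_n; ++i) {
                    struct rte_mbuf *buf =
                            rte_pktmbuf_alloc(rxq_ctrl->rxq.mp);

                    if (buf == NULL)
                            return ENOMEM; /* real code logs and frees */
                    (*rxq_ctrl->rxq.elts)[i] = buf;
            }
            return 0;
    }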
Signed-off-by: Nelio Laranjeiro
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_rxq.c | 23 +++--------------------
 1 file changed, 3 insertions(+), 20 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 74387a7..550e648 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -666,16 +666,12 @@ rxq_trim_elts(struct rxq *rxq)
  *   Pointer to RX queue structure.
  * @param elts_n
  *   Number of elements to allocate.
- * @param[in] pool
- *   If not NULL, fetch buffers from this array instead of allocating them
- *   with rte_pktmbuf_alloc().
  *
  * @return
  *   0 on success, errno value on failure.
  */
 static int
-rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
-               struct rte_mbuf *(*pool)[])
+rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n)
 {
         const unsigned int sges_n = 1 << rxq_ctrl->rxq.sges_n;
         unsigned int i;
@@ -687,12 +683,7 @@ rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
                 volatile struct mlx5_wqe_data_seg *scat =
                         &(*rxq_ctrl->rxq.wqes)[i];
 
-                buf = (pool != NULL) ? (*pool)[i] : NULL;
-                if (buf != NULL) {
-                        rte_pktmbuf_reset(buf);
-                        rte_pktmbuf_refcnt_update(buf, 1);
-                } else
-                        buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp);
+                buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp);
                 if (buf == NULL) {
                         ERROR("%p: empty mbuf pool", (void *)rxq_ctrl);
                         ret = ENOMEM;
@@ -725,7 +716,6 @@ rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
         assert(ret == 0);
         return 0;
 error:
-        assert(pool == NULL);
         elts_n = i;
         for (i = 0; (i != elts_n); ++i) {
                 if ((*rxq_ctrl->rxq.elts)[i] != NULL)
@@ -1064,14 +1054,7 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
                       (void *)dev, strerror(ret));
                 goto error;
         }
-        /* Reuse buffers from original queue if possible. */
-        if (rxq_ctrl->rxq.elts_n) {
-                assert(1 << rxq_ctrl->rxq.elts_n == desc);
-                assert(rxq_ctrl->rxq.elts != tmpl.rxq.elts);
-                rxq_trim_elts(&rxq_ctrl->rxq);
-                ret = rxq_alloc_elts(&tmpl, desc, rxq_ctrl->rxq.elts);
-        } else
-                ret = rxq_alloc_elts(&tmpl, desc, NULL);
+        ret = rxq_alloc_elts(&tmpl, desc);
         if (ret) {
                 ERROR("%p: RXQ allocation failed: %s",
                       (void *)dev, strerror(ret));