From patchwork Tue Jun 21 07:23:37 2016
X-Patchwork-Submitter: Nélio Laranjeiro
X-Patchwork-Id: 14136
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Nelio Laranjeiro
To: dev@dpdk.org
Cc: Ferruh Yigit, Adrien Mazarguil, Vasily Philipov
Date: Tue, 21 Jun 2016 09:23:37 +0200
Message-Id: <1466493818-1877-25-git-send-email-nelio.laranjeiro@6wind.com>
Subject: [dpdk-dev] [PATCH v3 24/25] mlx5: make Rx queue reinitialization safer
List-Id: patches and discussions about DPDK

From: Adrien Mazarguil

The primary purpose of the rxq_rehash() function is to stop and restart
reception on a queue after re-posting buffers. This may fail if the array
that temporarily stores existing buffers for reuse cannot be allocated.

Update rxq_rehash() to work on the target queue directly (not through a
template copy) and avoid this allocation.

rxq_alloc_elts() is modified accordingly to take buffers from an existing
queue directly and update their refcount.

Unlike rxq_rehash(), rxq_setup() must work on a temporary structure, but
should not allocate new mbufs from the pool while reinitializing an
existing queue.
This is achieved by using the refcount-aware rxq_alloc_elts() before
overwriting queue data.

Signed-off-by: Adrien Mazarguil
Signed-off-by: Vasily Philipov
---
 drivers/net/mlx5/mlx5_rxq.c | 83 ++++++++++++++++++++++-----------------------
 1 file changed, 41 insertions(+), 42 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 707296c..0a3225e 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -642,7 +642,7 @@ priv_rehash_flows(struct priv *priv)
  */
 static int
 rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
-	       struct rte_mbuf **pool)
+	       struct rte_mbuf *(*pool)[])
 {
 	unsigned int i;
 	int ret = 0;
@@ -654,9 +654,10 @@ rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
 			&(*rxq_ctrl->rxq.wqes)[i];
 
 		if (pool != NULL) {
-			buf = *(pool++);
+			buf = (*pool)[i];
 			assert(buf != NULL);
 			rte_pktmbuf_reset(buf);
+			rte_pktmbuf_refcnt_update(buf, 1);
 		} else
 			buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp);
 		if (buf == NULL) {
@@ -781,7 +782,7 @@ rxq_cleanup(struct rxq_ctrl *rxq_ctrl)
 }
 
 /**
- * Reconfigure a RX queue with new parameters.
+ * Reconfigure RX queue buffers.
  *
  * rxq_rehash() does not allocate mbufs, which, if not done from the right
  * thread (such as a control thread), may corrupt the pool.
@@ -798,67 +799,48 @@ rxq_cleanup(struct rxq_ctrl *rxq_ctrl)
 int
 rxq_rehash(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl)
 {
-	struct rxq_ctrl tmpl = *rxq_ctrl;
-	unsigned int mbuf_n;
-	unsigned int desc_n;
-	struct rte_mbuf **pool;
-	unsigned int i, k;
+	unsigned int elts_n = rxq_ctrl->rxq.elts_n;
+	unsigned int i;
 	struct ibv_exp_wq_attr mod;
 	int err;
 
 	DEBUG("%p: rehashing queue %p", (void *)dev, (void *)rxq_ctrl);
-	/* Number of descriptors and mbufs currently allocated. */
-	desc_n = tmpl.rxq.elts_n;
-	mbuf_n = desc_n;
 	/* From now on, any failure will render the queue unusable.
 	 * Reinitialize WQ. */
 	mod = (struct ibv_exp_wq_attr){
 		.attr_mask = IBV_EXP_WQ_ATTR_STATE,
 		.wq_state = IBV_EXP_WQS_RESET,
 	};
-	err = ibv_exp_modify_wq(tmpl.wq, &mod);
+	err = ibv_exp_modify_wq(rxq_ctrl->wq, &mod);
 	if (err) {
 		ERROR("%p: cannot reset WQ: %s", (void *)dev, strerror(err));
 		assert(err > 0);
 		return err;
 	}
-	/* Allocate pool. */
-	pool = rte_malloc(__func__, (mbuf_n * sizeof(*pool)), 0);
-	if (pool == NULL) {
-		ERROR("%p: cannot allocate memory", (void *)dev);
-		return ENOBUFS;
-	}
 	/* Snatch mbufs from original queue. */
-	k = 0;
-	for (i = 0; (i != desc_n); ++i)
-		pool[k++] = (*rxq_ctrl->rxq.elts)[i];
-	assert(k == mbuf_n);
-	rte_free(pool);
+	claim_zero(rxq_alloc_elts(rxq_ctrl, elts_n, rxq_ctrl->rxq.elts));
+	for (i = 0; i != elts_n; ++i) {
+		struct rte_mbuf *buf = (*rxq_ctrl->rxq.elts)[i];
+
+		assert(rte_mbuf_refcnt_read(buf) == 2);
+		rte_pktmbuf_free_seg(buf);
+	}
 	/* Change queue state to ready. */
 	mod = (struct ibv_exp_wq_attr){
 		.attr_mask = IBV_EXP_WQ_ATTR_STATE,
 		.wq_state = IBV_EXP_WQS_RDY,
 	};
-	err = ibv_exp_modify_wq(tmpl.wq, &mod);
+	err = ibv_exp_modify_wq(rxq_ctrl->wq, &mod);
 	if (err) {
 		ERROR("%p: WQ state to IBV_EXP_WQS_RDY failed: %s",
 		      (void *)dev, strerror(err));
 		goto error;
 	}
-	/* Post SGEs. */
-	err = rxq_alloc_elts(&tmpl, desc_n, pool);
-	if (err) {
-		ERROR("%p: cannot reallocate WRs, aborting", (void *)dev);
-		rte_free(pool);
-		assert(err > 0);
-		return err;
-	}
 	/* Update doorbell counter. */
-	rxq_ctrl->rxq.rq_ci = desc_n;
+	rxq_ctrl->rxq.rq_ci = elts_n;
 	rte_wmb();
 	*rxq_ctrl->rxq.rq_db = htonl(rxq_ctrl->rxq.rq_ci);
 error:
-	*rxq_ctrl = tmpl;
 	assert(err >= 0);
 	return err;
 }
@@ -868,24 +850,26 @@ error:
  *
  * @param tmpl
  *   Pointer to RX queue control template.
- * @param rxq_ctrl
- *   Pointer to RX queue control.
  *
  * @return
  *   0 on success, errno value on failure.
  */
 static inline int
-rxq_setup(struct rxq_ctrl *tmpl, struct rxq_ctrl *rxq_ctrl)
+rxq_setup(struct rxq_ctrl *tmpl)
 {
 	struct ibv_cq *ibcq = tmpl->cq;
 	struct mlx5_cq *cq = to_mxxx(cq, cq);
 	struct mlx5_rwq *rwq = container_of(tmpl->wq, struct mlx5_rwq, wq);
+	struct rte_mbuf *(*elts)[tmpl->rxq.elts_n] =
+		rte_calloc_socket("RXQ", 1, sizeof(*elts), 0, tmpl->socket);
 
 	if (cq->cqe_sz != RTE_CACHE_LINE_SIZE) {
 		ERROR("Wrong MLX5_CQE_SIZE environment variable value: "
 		      "it should be set to %u", RTE_CACHE_LINE_SIZE);
 		return EINVAL;
 	}
+	if (elts == NULL)
+		return ENOMEM;
 	tmpl->rxq.rq_db = rwq->rq.db;
 	tmpl->rxq.cqe_n = ibcq->cqe + 1;
 	tmpl->rxq.cq_ci = 0;
@@ -897,9 +881,7 @@ rxq_setup(struct rxq_ctrl *tmpl, struct rxq_ctrl *rxq_ctrl)
 	tmpl->rxq.cqes =
 		(volatile struct mlx5_cqe (*)[])
 		(uintptr_t)cq->active_buf->buf;
-	tmpl->rxq.elts =
-		(struct rte_mbuf *(*)[tmpl->rxq.elts_n])
-		((uintptr_t)rxq_ctrl + sizeof(*rxq_ctrl));
+	tmpl->rxq.elts = elts;
 	return 0;
 }
 
@@ -947,6 +929,7 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
 	enum ibv_exp_query_intf_status status;
 	unsigned int mb_len = rte_pktmbuf_data_room_size(mp);
 	unsigned int cqe_n = desc - 1;
+	struct rte_mbuf *(*elts)[desc] = NULL;
 	int ret = 0;
 
 	(void)conf; /* Thresholds configuration (ignored). */
@@ -1103,13 +1086,19 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
 		      (void *)dev, strerror(ret));
 		goto error;
 	}
-	ret = rxq_setup(&tmpl, rxq_ctrl);
+	ret = rxq_setup(&tmpl);
 	if (ret) {
 		ERROR("%p: cannot initialize RX queue structure: %s",
 		      (void *)dev, strerror(ret));
 		goto error;
 	}
-	ret = rxq_alloc_elts(&tmpl, desc, NULL);
+	/* Reuse buffers from original queue if possible. */
+	if (rxq_ctrl->rxq.elts_n) {
+		assert(rxq_ctrl->rxq.elts_n == desc);
+		assert(rxq_ctrl->rxq.elts != tmpl.rxq.elts);
+		ret = rxq_alloc_elts(&tmpl, desc, rxq_ctrl->rxq.elts);
+	} else
+		ret = rxq_alloc_elts(&tmpl, desc, NULL);
 	if (ret) {
 		ERROR("%p: RXQ allocation failed: %s",
 		      (void *)dev, strerror(ret));
@@ -1118,6 +1107,14 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
 	/* Clean up rxq in case we're reinitializing it. */
 	DEBUG("%p: cleaning-up old rxq just in case", (void *)rxq_ctrl);
 	rxq_cleanup(rxq_ctrl);
+	/* Move mbuf pointers to dedicated storage area in RX queue. */
+	elts = (void *)(rxq_ctrl + 1);
+	rte_memcpy(elts, tmpl.rxq.elts, sizeof(*elts));
+#ifndef NDEBUG
+	memset(tmpl.rxq.elts, 0x55, sizeof(*elts));
+#endif
+	rte_free(tmpl.rxq.elts);
+	tmpl.rxq.elts = elts;
 	*rxq_ctrl = tmpl;
 	/* Update doorbell counter. */
 	rxq_ctrl->rxq.rq_ci = desc;
@@ -1127,7 +1124,9 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
 	assert(ret == 0);
 	return 0;
 error:
+	elts = tmpl.rxq.elts;
 	rxq_cleanup(&tmpl);
+	rte_free(elts);
 	assert(ret > 0);
 	return ret;
 }