From patchwork Fri Jun 29 09:29:37 2018
X-Patchwork-Submitter: "John Daley (johndale)"
X-Patchwork-Id: 41921
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: John Daley
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Hyong Youb Kim
Date: Fri, 29 Jun 2018 02:29:37 -0700
Message-Id: <20180629092944.15576-9-johndale@cisco.com>
In-Reply-To: <20180629092944.15576-1-johndale@cisco.com>
References: <20180628031940.17397-1-johndale@cisco.com>
 <20180629092944.15576-1-johndale@cisco.com>
Subject: [dpdk-dev] [PATCH v2 08/15] net/enic: use mbuf pointer array for
 inflight Tx packets

From: Hyong Youb Kim

The WQ currently uses struct vnic_wq_buf to store mbuf pointers for Tx
packets, but that struct contains an unused mempool pointer and forces
the mbuf to be cast to and from a void pointer. Remove vnic_wq_buf
entirely and use a plain mbuf pointer array instead.
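To make the diff below easier to follow, here is a minimal before/after
sketch of the per-slot Tx bookkeeping. It is illustrative only: the struct
fields mirror the hunks below, while the helper name alloc_tx_mbuf_ring()
and its parameters are hypothetical stand-ins for the allocation performed
in vnic_wq_alloc_bufs().

#include <rte_malloc.h>
#include <rte_mbuf.h>

/* Before: each Tx slot held a 16-byte wrapper around the mbuf pointer. */
struct vnic_wq_buf {
	struct rte_mempool *pool;	/* unused by the Tx path */
	void *mb;			/* the mbuf, stored as void * */
};

/* After: the WQ keeps a flat array of mbuf pointers, one per descriptor.
 * Hypothetical helper mirroring the vnic_wq_alloc_bufs() hunk below.
 */
static struct rte_mbuf **
alloc_tx_mbuf_ring(unsigned int desc_count, int socket_id)
{
	return rte_zmalloc_socket("wq->bufs",
				  sizeof(struct rte_mbuf *) * desc_count,
				  RTE_CACHE_LINE_SIZE, socket_id);
}

Dropping the wrapper shrinks the per-slot metadata from 16 bytes to a
single pointer and removes the void * casts on both the transmit and
completion paths.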
Signed-off-by: Hyong Youb Kim
Reviewed-by: John Daley
---
 drivers/net/enic/base/vnic_wq.c |  8 ++++----
 drivers/net/enic/base/vnic_wq.h | 10 ++--------
 drivers/net/enic/enic_main.c    |  6 +++---
 drivers/net/enic/enic_rxtx.c    | 18 ++++++------------
 4 files changed, 15 insertions(+), 27 deletions(-)

diff --git a/drivers/net/enic/base/vnic_wq.c b/drivers/net/enic/base/vnic_wq.c
index d61c4c6e2..a4c08a769 100644
--- a/drivers/net/enic/base/vnic_wq.c
+++ b/drivers/net/enic/base/vnic_wq.c
@@ -32,8 +32,8 @@ static int vnic_wq_alloc_bufs(struct vnic_wq *wq)
 {
 	unsigned int count = wq->ring.desc_count;
 	/* Allocate the mbuf ring */
-	wq->bufs = (struct vnic_wq_buf *)rte_zmalloc_socket("wq->bufs",
-			sizeof(struct vnic_wq_buf) * count,
+	wq->bufs = (struct rte_mbuf **)rte_zmalloc_socket("wq->bufs",
+			sizeof(struct rte_mbuf *) * count,
 			RTE_CACHE_LINE_SIZE, wq->socket_id);
 	wq->head_idx = 0;
 	wq->tail_idx = 0;
@@ -145,9 +145,9 @@ int vnic_wq_disable(struct vnic_wq *wq)
 }
 
 void vnic_wq_clean(struct vnic_wq *wq,
-		   void (*buf_clean)(struct vnic_wq_buf *buf))
+		   void (*buf_clean)(struct rte_mbuf **buf))
 {
-	struct vnic_wq_buf *buf;
+	struct rte_mbuf **buf;
 	unsigned int to_clean = wq->tail_idx;
 
 	buf = &wq->bufs[to_clean];
diff --git a/drivers/net/enic/base/vnic_wq.h b/drivers/net/enic/base/vnic_wq.h
index 0135bffc5..86ac10e28 100644
--- a/drivers/net/enic/base/vnic_wq.h
+++ b/drivers/net/enic/base/vnic_wq.h
@@ -36,19 +36,13 @@ struct vnic_wq_ctrl {
 	u32 pad9;
 };
 
-/* 16 bytes */
-struct vnic_wq_buf {
-	struct rte_mempool *pool;
-	void *mb;
-};
-
 struct vnic_wq {
 	unsigned int index;
 	uint64_t tx_offload_notsup_mask;
 	struct vnic_dev *vdev;
 	struct vnic_wq_ctrl __iomem *ctrl;	/* memory-mapped */
 	struct vnic_dev_ring ring;
-	struct vnic_wq_buf *bufs;
+	struct rte_mbuf **bufs;
 	unsigned int head_idx;
 	unsigned int tail_idx;
 	unsigned int socket_id;
@@ -164,5 +158,5 @@ unsigned int vnic_wq_error_status(struct vnic_wq *wq);
 void vnic_wq_enable(struct vnic_wq *wq);
 int vnic_wq_disable(struct vnic_wq *wq);
 void vnic_wq_clean(struct vnic_wq *wq,
-		   void (*buf_clean)(struct vnic_wq_buf *buf));
+		   void (*buf_clean)(struct rte_mbuf **buf));
 #endif /* _VNIC_WQ_H_ */
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index e20256986..c03ec247a 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -69,12 +69,12 @@ enic_rxmbuf_queue_release(__rte_unused struct enic *enic, struct vnic_rq *rq)
 	}
 }
 
-static void enic_free_wq_buf(struct vnic_wq_buf *buf)
+static void enic_free_wq_buf(struct rte_mbuf **buf)
 {
-	struct rte_mbuf *mbuf = (struct rte_mbuf *)buf->mb;
+	struct rte_mbuf *mbuf = *buf;
 
 	rte_pktmbuf_free_seg(mbuf);
-	buf->mb = NULL;
+	*buf = NULL;
 }
 
 static void enic_log_q_error(struct enic *enic)
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index bbb0444ad..549288c20 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -473,7 +473,7 @@ enic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 static inline void enic_free_wq_bufs(struct vnic_wq *wq, u16 completed_index)
 {
-	struct vnic_wq_buf *buf;
+	struct rte_mbuf *buf;
 	struct rte_mbuf *m, *free[ENIC_MAX_WQ_DESCS];
 	unsigned int nb_to_free, nb_free = 0, i;
 	struct rte_mempool *pool;
@@ -483,13 +483,10 @@ static inline void enic_free_wq_bufs(struct vnic_wq *wq, u16 completed_index)
 	nb_to_free = enic_ring_sub(desc_count, wq->tail_idx, completed_index)
 				   + 1;
 	tail_idx = wq->tail_idx;
-	buf = &wq->bufs[tail_idx];
-	pool = ((struct rte_mbuf *)buf->mb)->pool;
+	pool = wq->bufs[tail_idx]->pool;
 	for (i = 0; i < nb_to_free; i++) {
-		buf = &wq->bufs[tail_idx];
-		m = rte_pktmbuf_prefree_seg((struct rte_mbuf *)(buf->mb));
-		buf->mb = NULL;
-
+		buf = wq->bufs[tail_idx];
+		m = rte_pktmbuf_prefree_seg(buf);
 		if (unlikely(m == NULL)) {
 			tail_idx = enic_ring_incr(desc_count, tail_idx);
 			continue;
@@ -574,7 +571,6 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64_t ol_flags_mask;
 	unsigned int wq_desc_avail;
 	int head_idx;
-	struct vnic_wq_buf *buf;
 	unsigned int desc_count;
 	struct wq_enet_desc *descs, *desc_p, desc_tmp;
 	uint16_t mss;
@@ -669,8 +665,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				 vlan_id, 0);
 
 		*desc_p = desc_tmp;
-		buf = &wq->bufs[head_idx];
-		buf->mb = (void *)tx_pkt;
+		wq->bufs[head_idx] = tx_pkt;
 		head_idx = enic_ring_incr(desc_count, head_idx);
 		wq_desc_avail--;
 
@@ -691,8 +686,7 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 					 0);
 
 			*desc_p = desc_tmp;
-			buf = &wq->bufs[head_idx];
-			buf->mb = (void *)tx_pkt;
+			wq->bufs[head_idx] = tx_pkt;
 			head_idx = enic_ring_incr(desc_count, head_idx);
 			wq_desc_avail--;
 		}
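
For completeness, the effect on the Tx completion path is sketched below:
each completed slot now yields the mbuf directly, with no intermediate
struct and no cast. This is a simplified, hypothetical reclaim loop; the
real enic_free_wq_bufs() above additionally uses rte_pktmbuf_prefree_seg()
and the free[] array to batch mbufs back to their mempool, and wraps the
index with enic_ring_incr().

#include <rte_mbuf.h>

/* Hypothetical, simplified Tx reclaim over the flat mbuf pointer ring.
 * Frees 'nb_to_free' completed segments starting at 'tail_idx' and
 * returns the new tail index.
 */
static unsigned int
reclaim_tx_mbufs(struct rte_mbuf **bufs, unsigned int desc_count,
		 unsigned int tail_idx, unsigned int nb_to_free)
{
	unsigned int i;

	for (i = 0; i < nb_to_free; i++) {
		/* The slot is the mbuf pointer itself: no wrapper, no cast. */
		rte_pktmbuf_free_seg(bufs[tail_idx]);
		bufs[tail_idx] = NULL;
		if (++tail_idx == desc_count)	/* wrap around the ring */
			tail_idx = 0;
	}
	return tail_idx;
}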