From patchwork Wed Nov 5 01:49:40 2014
X-Patchwork-Submitter: Yong Wang <yongwang@vmware.com>
X-Patchwork-Id: 1128
From: Yong Wang <yongwang@vmware.com>
To: dev@dpdk.org
Date: Tue, 4 Nov 2014 17:49:40 -0800
Message-Id: <1415152183-119796-4-git-send-email-yongwang@vmware.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1415152183-119796-1-git-send-email-yongwang@vmware.com>
References: <1415152183-119796-1-git-send-email-yongwang@vmware.com>
Subject: [dpdk-dev] [PATCH v2 3/6] vmxnet3: Fix dev stop/restart bug

This change makes vmxnet3 consistent with the other PMDs in its dev_stop
behavior: rather than releasing the tx/rx rings, dev_stop now only resets
the ring structures and releases the pending mbufs.

Verified with various tests (testpmd and pktgen) over vmxnet3 that dev
stop/restart works fine.

Signed-off-by: Yong Wang <yongwang@vmware.com>
---
 lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 78 ++++++++++++++++++++++++++++++++---
 1 file changed, 73 insertions(+), 5 deletions(-)

diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 0b6363f..2017d4b 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -157,7 +157,7 @@ vmxnet3_txq_dump(struct vmxnet3_tx_queue *txq)
 #endif
 
 static inline void
-vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
+vmxnet3_cmd_ring_release_mbufs(vmxnet3_cmd_ring_t *ring)
 {
 	while (ring->next2comp != ring->next2fill) {
 		/* No need to worry about tx desc ownership, device is quiesced by now. */
@@ -171,16 +171,23 @@ vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
 		}
 		vmxnet3_cmd_ring_adv_next2comp(ring);
 	}
+}
+
+static void
+vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
+{
+	vmxnet3_cmd_ring_release_mbufs(ring);
 	rte_free(ring->buf_info);
 	ring->buf_info = NULL;
 }
 
+
 void
 vmxnet3_dev_tx_queue_release(void *txq)
 {
 	vmxnet3_tx_queue_t *tq = txq;
 
-	if (txq != NULL) {
+	if (tq != NULL) {
 		/* Release the cmd_ring */
 		vmxnet3_cmd_ring_release(&tq->cmd_ring);
 	}
@@ -192,13 +199,74 @@ vmxnet3_dev_rx_queue_release(void *rxq)
 	int i;
 	vmxnet3_rx_queue_t *rq = rxq;
 
-	if (rxq != NULL) {
+	if (rq != NULL) {
 		/* Release both the cmd_rings */
 		for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++)
 			vmxnet3_cmd_ring_release(&rq->cmd_ring[i]);
 	}
 }
 
+static void
+vmxnet3_dev_tx_queue_reset(void *txq)
+{
+	vmxnet3_tx_queue_t *tq = txq;
+	struct vmxnet3_cmd_ring *ring = &tq->cmd_ring;
+	struct vmxnet3_comp_ring *comp_ring = &tq->comp_ring;
+	int size;
+
+	if (tq != NULL) {
+		/* Release the cmd_ring mbufs */
+		vmxnet3_cmd_ring_release_mbufs(&tq->cmd_ring);
+	}
+
+	/* Tx vmxnet rings structure initialization*/
+	ring->next2fill = 0;
+	ring->next2comp = 0;
+	ring->gen = VMXNET3_INIT_GEN;
+	comp_ring->next2proc = 0;
+	comp_ring->gen = VMXNET3_INIT_GEN;
+
+	size = sizeof(struct Vmxnet3_TxDesc) * ring->size;
+	size += sizeof(struct Vmxnet3_TxCompDesc) * comp_ring->size;
+
+	memset(ring->base, 0, size);
+}
+
+static void
+vmxnet3_dev_rx_queue_reset(void *rxq)
+{
+	int i;
+	vmxnet3_rx_queue_t *rq = rxq;
+	struct vmxnet3_cmd_ring *ring0, *ring1;
+	struct vmxnet3_comp_ring *comp_ring;
+	int size;
+
+	if (rq != NULL) {
+		/* Release both the cmd_rings mbufs */
+		for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++)
+			vmxnet3_cmd_ring_release_mbufs(&rq->cmd_ring[i]);
+	}
+
+	ring0 = &rq->cmd_ring[0];
+	ring1 = &rq->cmd_ring[1];
+	comp_ring = &rq->comp_ring;
+
+	/* Rx vmxnet rings structure initialization */
+	ring0->next2fill = 0;
+	ring1->next2fill = 0;
+	ring0->next2comp = 0;
+	ring1->next2comp = 0;
+	ring0->gen = VMXNET3_INIT_GEN;
+	ring1->gen = VMXNET3_INIT_GEN;
+	comp_ring->next2proc = 0;
+	comp_ring->gen = VMXNET3_INIT_GEN;
+
+	size = sizeof(struct Vmxnet3_RxDesc) * (ring0->size + ring1->size);
+	size += sizeof(struct Vmxnet3_RxCompDesc) * comp_ring->size;
+
+	memset(ring0->base, 0, size);
+}
+
 void
 vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 {
@@ -211,7 +279,7 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 
 		if (txq != NULL) {
 			txq->stopped = TRUE;
-			vmxnet3_dev_tx_queue_release(txq);
+			vmxnet3_dev_tx_queue_reset(txq);
 		}
 	}
 
@@ -220,7 +288,7 @@ vmxnet3_dev_clear_queues(struct rte_eth_dev *dev)
 
 		if (rxq != NULL) {
 			rxq->stopped = TRUE;
-			vmxnet3_dev_rx_queue_release(rxq);
+			vmxnet3_dev_rx_queue_reset(rxq);
 		}
 	}
 }
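
For anyone exercising the stop/restart path this patch fixes, the
application-level sequence is simply a stop followed by a start on an
already-configured port. Below is a minimal sketch under stated
assumptions: restart_port() is a hypothetical helper (not part of the
patch), the port and its rx/tx queues were set up before the first
start, and the exact rte_eth_dev_stop()/rte_eth_dev_start() signatures
vary slightly across DPDK versions (shown here as in the 2014-era API).

#include <rte_ethdev.h>

/*
 * Hypothetical helper (not part of the patch): restart a port that has
 * already been configured and started. With this fix, stopping a
 * vmxnet3 port only resets the rings and frees pending mbufs, so the
 * queues set up before the first start remain valid for the restart.
 */
static int
restart_port(uint8_t port_id)
{
	/* Quiesce the device; on vmxnet3 this reaches
	 * vmxnet3_dev_clear_queues(), which now resets the rings
	 * instead of releasing them.
	 */
	rte_eth_dev_stop(port_id);

	/* Restart without re-running rx/tx queue setup. */
	return rte_eth_dev_start(port_id);
}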