From patchwork Wed May 9 12:47:03 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 39586
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com
Date: Wed, 9 May 2018 14:47:03 +0200
Message-Id: <20180509124714.23305-4-mk@semihalf.com>
In-Reply-To: <20180509124714.23305-1-mk@semihalf.com>
References: <20180509124714.23305-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v1 13/24] net/ena: add RX out of order completion
List-Id: DPDK patches and discussions

This feature allows RX packets to be cleaned up out of order.
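As a hedged illustration of the mechanism (every name below is a hypothetical miniature written for this explanation, not the driver's actual structures or API): instead of tying a buffer to its ring position, each posted buffer is keyed by a req_id taken from a free list; the device may then report req_ids back in any order, and each completed req_id is recycled into the free list:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical miniature of a req_id free-list for out-of-order RX
 * completion. Buffers are tracked per req_id, not per ring slot, so
 * the device may complete them in any order. */
#define RING_SIZE 8 /* must be a power of two */

struct mini_rx_ring {
	void *buf_info[RING_SIZE];          /* buffer stored under its req_id */
	uint16_t empty_rx_reqs[RING_SIZE];  /* free req_ids, consumed in ring order */
	uint16_t next_to_use;
	uint16_t next_to_clean;
};

static void ring_init(struct mini_rx_ring *r)
{
	/* Initially every req_id is free. */
	for (uint16_t i = 0; i < RING_SIZE; i++)
		r->empty_rx_reqs[i] = i;
	r->next_to_use = 0;
	r->next_to_clean = 0;
}

/* Post a buffer: draw the next free req_id and remember the buffer under it. */
static uint16_t ring_post(struct mini_rx_ring *r, void *buf)
{
	uint16_t req_id = r->empty_rx_reqs[r->next_to_use & (RING_SIZE - 1)];

	r->buf_info[req_id] = buf;
	r->next_to_use++;
	return req_id;
}

/* Complete a descriptor by req_id, possibly out of posting order:
 * validate the id, fetch the buffer, recycle the id into the free list. */
static void *ring_complete(struct mini_rx_ring *r, uint16_t req_id)
{
	void *buf;

	if (req_id >= RING_SIZE) /* plays the role of validate_rx_req_id() */
		return NULL;

	buf = r->buf_info[req_id];
	r->buf_info[req_id] = NULL;
	r->empty_rx_reqs[r->next_to_clean & (RING_SIZE - 1)] = req_id;
	r->next_to_clean++;
	return buf;
}
```

The key design point the sketch captures: because a completed req_id is pushed back into `empty_rx_reqs` at the clean pointer, the free list naturally re-sequences ids that the hardware returned out of order.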
Signed-off-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 48 ++++++++++++++++++++++++++++++++++++++++----
 drivers/net/ena/ena_ethdev.h |  8 ++++++--
 2 files changed, 50 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3383ba059..075621905 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -372,6 +372,19 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	}
 }
 
+static inline int validate_rx_req_id(struct ena_ring *rx_ring, uint16_t req_id)
+{
+	if (likely(req_id < rx_ring->ring_size))
+		return 0;
+
+	RTE_LOG(ERR, PMD, "Invalid rx req_id: %hu\n", req_id);
+
+	rx_ring->adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID;
+	rx_ring->adapter->trigger_reset = true;
+
+	return -EFAULT;
+}
+
 static void ena_config_host_info(struct ena_com_dev *ena_dev)
 {
 	struct ena_admin_host_info *host_info;
@@ -728,6 +741,10 @@ static void ena_rx_queue_release(void *queue)
 	rte_free(ring->rx_buffer_info);
 	ring->rx_buffer_info = NULL;
 
+	if (ring->empty_rx_reqs)
+		rte_free(ring->empty_rx_reqs);
+	ring->empty_rx_reqs = NULL;
+
 	ring->configured = 0;
 
 	RTE_LOG(NOTICE, PMD, "RX Queue %d:%d released\n",
@@ -1187,7 +1204,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		(struct ena_adapter *)(dev->data->dev_private);
 	struct ena_ring *rxq = NULL;
 	uint16_t ena_qid = 0;
-	int rc = 0;
+	int i, rc = 0;
 	struct ena_com_dev *ena_dev = &adapter->ena_dev;
 
 	rxq = &adapter->rx_ring[queue_idx];
@@ -1261,6 +1278,19 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
+	rxq->empty_rx_reqs = rte_zmalloc("rxq->empty_rx_reqs",
+					 sizeof(uint16_t) * nb_desc,
+					 RTE_CACHE_LINE_SIZE);
+	if (!rxq->empty_rx_reqs) {
+		RTE_LOG(ERR, PMD, "failed to alloc mem for empty rx reqs\n");
+		rte_free(rxq->rx_buffer_info);
+		rxq->rx_buffer_info = NULL;
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < nb_desc; i++)
+		rxq->empty_rx_reqs[i] = i;
+
 	/* Store pointer to this queue in upper layer */
 	rxq->configured = 1;
 	dev->data->rx_queues[queue_idx] = rxq;
@@ -1275,7 +1305,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	uint16_t ring_size = rxq->ring_size;
 	uint16_t ring_mask = ring_size - 1;
 	uint16_t next_to_use = rxq->next_to_use;
-	uint16_t in_use;
+	uint16_t in_use, req_id;
 	struct rte_mbuf **mbufs = &rxq->rx_buffer_info[0];
 
 	if (unlikely(!count))
@@ -1303,12 +1333,14 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		struct ena_com_buf ebuf;
 
 		rte_prefetch0(mbufs[((next_to_use + 4) & ring_mask)]);
+
+		req_id = rxq->empty_rx_reqs[next_to_use_masked];
 		/* prepare physical address for DMA transaction */
 		ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM;
 		ebuf.len = mbuf->buf_len - RTE_PKTMBUF_HEADROOM;
 		/* pass resource to device */
 		rc = ena_com_add_single_rx_desc(rxq->ena_com_io_sq,
-						&ebuf, next_to_use_masked);
+						&ebuf, req_id);
 		if (unlikely(rc)) {
 			rte_mempool_put_bulk(rxq->mb_pool, (void **)(&mbuf),
 					     count - i);
@@ -1771,6 +1803,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	unsigned int ring_mask = ring_size - 1;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
 	uint16_t desc_in_use = 0;
+	uint16_t req_id;
 	unsigned int recv_idx = 0;
 	struct rte_mbuf *mbuf = NULL;
 	struct rte_mbuf *mbuf_head = NULL;
@@ -1811,7 +1844,12 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			break;
 
 		while (segments < ena_rx_ctx.descs) {
-			mbuf = rx_buff_info[next_to_clean & ring_mask];
+			req_id = ena_rx_ctx.ena_bufs[segments].req_id;
+			rc = validate_rx_req_id(rx_ring, req_id);
+			if (unlikely(rc))
+				break;
+
+			mbuf = rx_buff_info[req_id];
 			mbuf->data_len = ena_rx_ctx.ena_bufs[segments].len;
 			mbuf->data_off = RTE_PKTMBUF_HEADROOM;
 			mbuf->refcnt = 1;
@@ -1828,6 +1866,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			mbuf_head->pkt_len += mbuf->data_len;
 
 			mbuf_prev = mbuf;
+			rx_ring->empty_rx_reqs[next_to_clean & ring_mask] =
+				req_id;
 			segments++;
 			next_to_clean++;
 		}
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 594e643e2..bba5ad53a 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -75,8 +75,12 @@ struct ena_ring {
 	enum ena_ring_type type;
 	enum ena_admin_placement_policy_type tx_mem_queue_type;
 
-	/* Holds the empty requests for TX OOO completions */
-	uint16_t *empty_tx_reqs;
+	/* Holds the empty requests for TX/RX OOO completions */
+	union {
+		uint16_t *empty_tx_reqs;
+		uint16_t *empty_rx_reqs;
+	};
+
 	union {
 		struct ena_tx_buffer *tx_buffer_info; /* contex of tx packet */
 		struct rte_mbuf **rx_buffer_info; /* contex of rx packet */
-- 
2.14.1
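A note on the header change above, with a hedged standalone sketch (`mini_ring` is a hypothetical cut-down stand-in, not the driver's `struct ena_ring`): since a ring is only ever TX or RX, the anonymous union lets TX code keep using the name `empty_tx_reqs` while RX code uses `empty_rx_reqs`, with both names referring to the same pointer storage:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical miniature of the ring descriptor, illustration only.
 * The anonymous union gives one free-request pointer two names; only
 * the name matching the ring's type (TX or RX) is ever used, and both
 * occupy the same storage, so no extra memory is spent. */
struct mini_ring {
	union {
		uint16_t *empty_tx_reqs; /* used when the ring carries TX */
		uint16_t *empty_rx_reqs; /* used when the ring carries RX */
	};
};
```

Because both members have the same type, reading through either name after writing the other is well defined; the union exists purely for readability at the call sites.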