From patchwork Thu Jun  7 09:43:10 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 40728
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com, Rafal Kozik
Date: Thu, 7 Jun 2018 11:43:10 +0200
Message-Id: <20180607094322.14312-15-mk@semihalf.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180607094322.14312-1-mk@semihalf.com>
References: <20180607094322.14312-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 15/27] net/ena: linearize Tx mbuf
List-Id: DPDK patches and discussions

From: Rafal Kozik

The function ena_check_and_linearize_mbuf() checks the number of segments
in a Tx mbuf and linearizes (defragments) it if necessary. It is called
before sending each packet.

The maximum number of segments is stored per ring. The maximum number of
segments supported by the NIC is taken from ENA COM in the
ena_calc_queue_size() function and stored in the adapter structure.

(An illustrative application-level sketch of the same check is appended
after the diff.)
Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 31 ++++++++++++++++++++++++++++++-
 drivers/net/ena/ena_ethdev.h |  2 ++
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f0e95ef58..cdefcd325 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -894,6 +894,7 @@ static int ena_check_valid_conf(struct ena_adapter *adapter)
 
 static int
 ena_calc_queue_size(struct ena_com_dev *ena_dev,
+                    u16 *max_tx_sgl_size,
                     struct ena_com_dev_get_features_ctx *get_feat_ctx)
 {
         uint32_t queue_size = ENA_DEFAULT_RING_SIZE;
@@ -916,6 +917,9 @@ ena_calc_queue_size(struct ena_com_dev *ena_dev,
                 return -EFAULT;
         }
 
+        *max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
+                get_feat_ctx->max_queues.max_packet_tx_descs);
+
         return queue_size;
 }
 
@@ -1491,6 +1495,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
         struct ena_com_dev *ena_dev = &adapter->ena_dev;
         struct ena_com_dev_get_features_ctx get_feat_ctx;
         int queue_size, rc;
+        u16 tx_sgl_size = 0;
         static int adapters_found;
         bool wd_state;
 
@@ -1547,13 +1552,15 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
         ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
         adapter->num_queues = get_feat_ctx.max_queues.max_sq_num;
 
-        queue_size = ena_calc_queue_size(ena_dev, &get_feat_ctx);
+        queue_size = ena_calc_queue_size(ena_dev, &tx_sgl_size, &get_feat_ctx);
         if ((queue_size <= 0) || (adapter->num_queues <= 0))
                 return -EFAULT;
 
         adapter->tx_ring_size = queue_size;
         adapter->rx_ring_size = queue_size;
 
+        adapter->max_tx_sgl_size = tx_sgl_size;
+
         /* prepare ring structures */
         ena_init_rings(adapter);
 
@@ -1652,6 +1659,7 @@ static void ena_init_rings(struct ena_adapter *adapter)
                 ring->id = i;
                 ring->tx_mem_queue_type = adapter->ena_dev.tx_mem_queue_type;
                 ring->tx_max_header_size = adapter->ena_dev.tx_max_header_size;
+                ring->sgl_size = adapter->max_tx_sgl_size;
         }
 
         for (i = 0; i < adapter->num_queues; i++) {
@@ -1923,6 +1931,23 @@ static void ena_update_hints(struct ena_adapter *adapter,
         }
 }
 
+static int ena_check_and_linearize_mbuf(struct ena_ring *tx_ring,
+                                        struct rte_mbuf *mbuf)
+{
+        int num_segments, rc;
+
+        num_segments = mbuf->nb_segs;
+
+        if (likely(num_segments < tx_ring->sgl_size))
+                return 0;
+
+        rc = rte_pktmbuf_linearize(mbuf);
+        if (unlikely(rc))
+                RTE_LOG(WARNING, PMD, "Mbuf linearize failed\n");
+
+        return rc;
+}
+
 static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
                                   uint16_t nb_pkts)
 {
@@ -1953,6 +1978,10 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
         for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
                 mbuf = tx_pkts[sent_idx];
 
+                rc = ena_check_and_linearize_mbuf(tx_ring, mbuf);
+                if (unlikely(rc))
+                        break;
+
                 req_id = tx_ring->empty_tx_reqs[next_to_use & ring_mask];
                 tx_info = &tx_ring->tx_buffer_info[req_id];
                 tx_info->mbuf = mbuf;
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index bba5ad53a..73c110ab9 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -101,6 +101,7 @@ struct ena_ring {
         int configured;
         struct ena_adapter *adapter;
         uint64_t offloads;
+        u16 sgl_size;
 } __rte_cache_aligned;
 
 enum ena_adapter_state {
@@ -167,6 +168,7 @@ struct ena_adapter {
         /* TX */
         struct ena_ring tx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned;
         int tx_ring_size;
+        u16 max_tx_sgl_size;
 
         /* RX */
         struct ena_ring rx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned;
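
---

Editor's note: the patch handles linearization inside the PMD, keyed off the
driver-internal ring->sgl_size. For comparison, here is a minimal sketch of
how an application could apply the same check-and-linearize pattern against
the generic segment limit the PMD reports through tx_desc_lim.nb_seg_max.
This is not part of the patch: the wrapper name tx_burst_linearized(), the
port/queue parameters, and the "send what fits" error policy are assumptions,
and the sketch targets the DPDK 18.05-era API where rte_eth_dev_info_get()
returns void.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
tx_burst_linearized(uint16_t port_id, uint16_t queue_id,
                    struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        struct rte_eth_dev_info dev_info;
        uint16_t i;

        /* Per-packet segment limit advertised by the PMD; real code would
         * cache this at setup time instead of querying it every burst. */
        rte_eth_dev_info_get(port_id, &dev_info);

        for (i = 0; i < nb_pkts; i++) {
                /* Chains within the limit need no defragmentation. */
                if (pkts[i]->nb_segs <= dev_info.tx_desc_lim.nb_seg_max)
                        continue;

                /* Copy the whole chain into the first segment; this fails
                 * if the first segment lacks sufficient tailroom. */
                if (rte_pktmbuf_linearize(pkts[i]) != 0)
                        break;
        }

        /* Transmit only the packets that fit (all of them when the loop
         * completed without a linearization failure). */
        return rte_eth_tx_burst(port_id, queue_id, pkts, i);
}

The design choice mirrors the patch: an oversized chain is defragmented in
place rather than dropped, and transmission stops at the first mbuf that
cannot be made to fit.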