Patch Detail
GET /api/patches/67993/?format=api
[v3,27/30] net/ena: refactor Tx path

Project:    DPDK (dev@dpdk.org), https://patches.dpdk.org/api/projects/1/?format=api
Message-ID: <20200408082921.31000-28-mk@semihalf.com>
Date:       2020-04-08T08:29:18
Submitter:  Michal Krawczyk <mk@semihalf.com>
Delegate:   Ferruh Yigit <ferruh.yigit@amd.com>
State:      accepted (archived)
Hash:       5e0fc91c2493adcf74fc7788037af7ead47b34a1
Series:     Update ENA driver to v2.1.0, v3 (2020-04-08), https://patches.dpdk.org/project/dpdk/list/?series=9246
Checks:     success, https://patches.dpdk.org/api/patches/67993/checks/
Patch:      https://patches.dpdk.org/project/dpdk/patch/20200408082921.31000-28-mk@semihalf.com/
Mbox:       https://patches.dpdk.org/project/dpdk/patch/20200408082921.31000-28-mk@semihalf.com/mbox/
Archive:    https://inbox.dpdk.org/dev/20200408082921.31000-28-mk@semihalf.com
From: Michal Krawczyk <mk@semihalf.com>
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 Michal Krawczyk <mk@semihalf.com>
Date: Wed, 8 Apr 2020 10:29:18 +0200
Message-Id: <20200408082921.31000-28-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 27/30] net/ena: refactor Tx path
X-Mailer: git-send-email 2.20.1
List-Id: DPDK patches and discussions <dev.dpdk.org>

The original Tx function was very long, containing both the cleanup and the sending sections.
Because of that it had a lot of local variables, deep indentation, and was hard to read.

The function was split into two sections:
 * Sending, which is responsible for preparing the mbuf, mapping it to
   the device descriptors and, finally, sending the packet to the HW.
 * Cleanup, which releases the packets already sent by the HW. The loop
   releasing the packets was reworked a bit to make the intention more
   visible and aligned with other parts of the driver.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
---
v2:
 * Fix compilation error on icc by adding braces around 0

 drivers/net/ena/ena_ethdev.c | 323 +++++++++++++++++++----------------
 1 file changed, 179 insertions(+), 144 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f6d0a75819..1a7cc686f5 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -169,6 +169,13 @@ static int ena_device_init(struct ena_com_dev *ena_dev,
			 struct ena_com_dev_get_features_ctx *get_feat_ctx,
			 bool *wd_state);
 static int ena_dev_configure(struct rte_eth_dev *dev);
+static void ena_tx_map_mbuf(struct ena_ring *tx_ring,
+	struct ena_tx_buffer *tx_info,
+	struct rte_mbuf *mbuf,
+	void **push_header,
+	uint16_t *header_len);
+static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf);
+static void ena_tx_cleanup(struct ena_ring *tx_ring);
 static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
				 uint16_t nb_pkts);
 static uint16_t eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
@@ -2343,193 +2350,221 @@ static int ena_check_and_linearize_mbuf(struct ena_ring *tx_ring,
	return rc;
 }
 
-static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-				 uint16_t nb_pkts)
+static void ena_tx_map_mbuf(struct ena_ring *tx_ring,
+	struct ena_tx_buffer *tx_info,
+	struct rte_mbuf *mbuf,
+	void **push_header,
+	uint16_t *header_len)
 {
-	struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue);
-	uint16_t next_to_use = tx_ring->next_to_use;
-	uint16_t next_to_clean = tx_ring->next_to_clean;
-	struct rte_mbuf *mbuf;
-	uint16_t seg_len;
-	unsigned int cleanup_budget;
-	struct ena_com_tx_ctx ena_tx_ctx;
-	struct ena_tx_buffer *tx_info;
-	struct ena_com_buf *ebuf;
-	uint16_t rc, req_id, total_tx_descs = 0;
-	uint16_t sent_idx = 0;
-	uint16_t push_len = 0;
-	uint16_t delta = 0;
-	int nb_hw_desc;
-	uint32_t total_length;
-
-	/* Check adapter state */
-	if (unlikely(tx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) {
-		PMD_DRV_LOG(ALERT,
-			"Trying to xmit pkts while device is NOT running\n");
-		return 0;
-	}
+	struct ena_com_buf *ena_buf;
+	uint16_t delta, seg_len, push_len;
 
-	nb_pkts = RTE_MIN(ena_com_free_q_entries(tx_ring->ena_com_io_sq),
-		nb_pkts);
+	delta = 0;
+	seg_len = mbuf->data_len;
 
-	for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
-		mbuf = tx_pkts[sent_idx];
-		total_length = 0;
+	tx_info->mbuf = mbuf;
+	ena_buf = tx_info->bufs;
 
-		rc = ena_check_and_linearize_mbuf(tx_ring, mbuf);
-		if (unlikely(rc))
-			break;
+	if (tx_ring->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+		/*
+		 * Tx header might be (and will be in most cases) smaller than
+		 * tx_max_header_size. But it's not an issue to send more data
+		 * to the device, than actually needed if the mbuf size is
+		 * greater than tx_max_header_size.
+		 */
+		push_len = RTE_MIN(mbuf->pkt_len, tx_ring->tx_max_header_size);
+		*header_len = push_len;
 
-		req_id = tx_ring->empty_tx_reqs[next_to_use];
-		tx_info = &tx_ring->tx_buffer_info[req_id];
-		tx_info->mbuf = mbuf;
-		tx_info->num_of_bufs = 0;
-		ebuf = tx_info->bufs;
+		if (likely(push_len <= seg_len)) {
+			/* If the push header is in the single segment, then
+			 * just point it to the 1st mbuf data.
+			 */
+			*push_header = rte_pktmbuf_mtod(mbuf, uint8_t *);
+		} else {
+			/* If the push header lays in the several segments, copy
+			 * it to the intermediate buffer.
+			 */
+			rte_pktmbuf_read(mbuf, 0, push_len,
+				tx_ring->push_buf_intermediate_buf);
+			*push_header = tx_ring->push_buf_intermediate_buf;
+			delta = push_len - seg_len;
+		}
+	} else {
+		*push_header = NULL;
+		*header_len = 0;
+		push_len = 0;
+	}
 
-		/* Prepare TX context */
-		memset(&ena_tx_ctx, 0x0, sizeof(struct ena_com_tx_ctx));
-		memset(&ena_tx_ctx.ena_meta, 0x0,
-		 sizeof(struct ena_com_tx_meta));
-		ena_tx_ctx.ena_bufs = ebuf;
-		ena_tx_ctx.req_id = req_id;
+	/* Process first segment taking into consideration pushed header */
+	if (seg_len > push_len) {
+		ena_buf->paddr = mbuf->buf_iova +
+				mbuf->data_off +
+				push_len;
+		ena_buf->len = seg_len - push_len;
+		ena_buf++;
+		tx_info->num_of_bufs++;
+	}
 
-		delta = 0;
+	while ((mbuf = mbuf->next) != NULL) {
		seg_len = mbuf->data_len;
 
-		if (tx_ring->tx_mem_queue_type ==
-				ENA_ADMIN_PLACEMENT_POLICY_DEV) {
-			push_len = RTE_MIN(mbuf->pkt_len,
-					 tx_ring->tx_max_header_size);
-			ena_tx_ctx.header_len = push_len;
-
-			if (likely(push_len <= seg_len)) {
-				/* If the push header is in the single segment,
-				 * then just point it to the 1st mbuf data.
-				 */
-				ena_tx_ctx.push_header =
-					rte_pktmbuf_mtod(mbuf, uint8_t *);
-			} else {
-				/* If the push header lays in the several
-				 * segments, copy it to the intermediate buffer.
-				 */
-				rte_pktmbuf_read(mbuf, 0, push_len,
-					tx_ring->push_buf_intermediate_buf);
-				ena_tx_ctx.push_header =
-					tx_ring->push_buf_intermediate_buf;
-				delta = push_len - seg_len;
-			}
-		} /* there's no else as we take advantage of memset zeroing */
+		/* Skip mbufs if whole data is pushed as a header */
+		if (unlikely(delta > seg_len)) {
+			delta -= seg_len;
+			continue;
+		}
 
-		/* Set TX offloads flags, if applicable */
-		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads,
-			tx_ring->disable_meta_caching);
+		ena_buf->paddr = mbuf->buf_iova + mbuf->data_off + delta;
+		ena_buf->len = seg_len - delta;
+		ena_buf++;
+		tx_info->num_of_bufs++;
 
-		rte_prefetch0(tx_pkts[ENA_IDX_ADD_MASKED(
-			sent_idx, 4, tx_ring->size_mask)]);
+		delta = 0;
+	}
+}
 
-		/* Process first segment taking into
-		 * consideration pushed header
-		 */
-		if (seg_len > push_len) {
-			ebuf->paddr = mbuf->buf_iova +
-				 mbuf->data_off +
-				 push_len;
-			ebuf->len = seg_len - push_len;
-			ebuf++;
-			tx_info->num_of_bufs++;
-		}
-		total_length += mbuf->data_len;
+static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)
+{
+	struct ena_tx_buffer *tx_info;
+	struct ena_com_tx_ctx ena_tx_ctx = { { 0 } };
+	uint16_t next_to_use;
+	uint16_t header_len;
+	uint16_t req_id;
+	void *push_header;
+	int nb_hw_desc;
+	int rc;
 
-		while ((mbuf = mbuf->next) != NULL) {
-			seg_len = mbuf->data_len;
+	rc = ena_check_and_linearize_mbuf(tx_ring, mbuf);
+	if (unlikely(rc))
+		return rc;
 
-			/* Skip mbufs if whole data is pushed as a header */
-			if (unlikely(delta > seg_len)) {
-				delta -= seg_len;
-				continue;
-			}
+	next_to_use = tx_ring->next_to_use;
 
-			ebuf->paddr = mbuf->buf_iova + mbuf->data_off + delta;
-			ebuf->len = seg_len - delta;
-			total_length += ebuf->len;
-			ebuf++;
-			tx_info->num_of_bufs++;
+	req_id = tx_ring->empty_tx_reqs[next_to_use];
+	tx_info = &tx_ring->tx_buffer_info[req_id];
+	tx_info->num_of_bufs = 0;
 
-			delta = 0;
-		}
+	ena_tx_map_mbuf(tx_ring, tx_info, mbuf, &push_header, &header_len);
 
-		ena_tx_ctx.num_bufs = tx_info->num_of_bufs;
+	ena_tx_ctx.ena_bufs = tx_info->bufs;
+	ena_tx_ctx.push_header = push_header;
+	ena_tx_ctx.num_bufs = tx_info->num_of_bufs;
+	ena_tx_ctx.req_id = req_id;
+	ena_tx_ctx.header_len = header_len;
 
-		if (ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq,
-					 &ena_tx_ctx)) {
-			PMD_DRV_LOG(DEBUG, "llq tx max burst size of queue %d"
-				" achieved, writing doorbell to send burst\n",
-				tx_ring->id);
-			ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
-		}
-
-		/* prepare the packet's descriptors to dma engine */
-		rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq,
-					&ena_tx_ctx, &nb_hw_desc);
-		if (unlikely(rc)) {
-			++tx_ring->tx_stats.prepare_ctx_err;
-			break;
-		}
-		tx_info->tx_descs = nb_hw_desc;
+	/* Set Tx offloads flags, if applicable */
+	ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads,
+		tx_ring->disable_meta_caching);
 
-		next_to_use = ENA_IDX_NEXT_MASKED(next_to_use,
-			tx_ring->size_mask);
-		tx_ring->tx_stats.cnt++;
-		tx_ring->tx_stats.bytes += total_length;
+	if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq,
+			&ena_tx_ctx))) {
+		PMD_DRV_LOG(DEBUG,
+			"llq tx max burst size of queue %d achieved, writing doorbell to send burst\n",
+			tx_ring->id);
+		ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
	}
-	tx_ring->tx_stats.available_desc =
-		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
 
-	/* If there are ready packets to be xmitted... */
-	if (sent_idx > 0) {
-		/* ...let HW do its best :-) */
-		ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
-		tx_ring->tx_stats.doorbells++;
-		tx_ring->next_to_use = next_to_use;
+	/* prepare the packet's descriptors to dma engine */
+	rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq,	&ena_tx_ctx,
+		&nb_hw_desc);
+	if (unlikely(rc)) {
+		++tx_ring->tx_stats.prepare_ctx_err;
+		return rc;
	}
 
-	/* Clear complete packets */
-	while (ena_com_tx_comp_req_id_get(tx_ring->ena_com_io_cq, &req_id) >= 0) {
-		rc = validate_tx_req_id(tx_ring, req_id);
-		if (rc)
+	tx_info->tx_descs = nb_hw_desc;
+
+	tx_ring->tx_stats.cnt++;
+	tx_ring->tx_stats.bytes += mbuf->pkt_len;
+
+	tx_ring->next_to_use = ENA_IDX_NEXT_MASKED(next_to_use,
+		tx_ring->size_mask);
+
+	return 0;
+}
+
+static void ena_tx_cleanup(struct ena_ring *tx_ring)
+{
+	unsigned int cleanup_budget;
+	unsigned int total_tx_descs = 0;
+	uint16_t next_to_clean = tx_ring->next_to_clean;
+
+	cleanup_budget = RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
+		(unsigned int)ENA_REFILL_THRESH_PACKET);
+
+	while (likely(total_tx_descs < cleanup_budget)) {
+		struct rte_mbuf *mbuf;
+		struct ena_tx_buffer *tx_info;
+		uint16_t req_id;
+
+		if (ena_com_tx_comp_req_id_get(tx_ring->ena_com_io_cq, &req_id) != 0)
+			break;
+
+		if (unlikely(validate_tx_req_id(tx_ring, req_id) != 0))
			break;
 
		/* Get Tx info & store how many descs were processed */
		tx_info = &tx_ring->tx_buffer_info[req_id];
-		total_tx_descs += tx_info->tx_descs;
 
-		/* Free whole mbuf chain */
		mbuf = tx_info->mbuf;
		rte_pktmbuf_free(mbuf);
+
		tx_info->mbuf = NULL;
+		tx_ring->empty_tx_reqs[next_to_clean] = req_id;
+
+		total_tx_descs += tx_info->tx_descs;
 
		/* Put back descriptor to the ring for reuse */
-		tx_ring->empty_tx_reqs[next_to_clean] = req_id;
		next_to_clean = ENA_IDX_NEXT_MASKED(next_to_clean,
			tx_ring->size_mask);
-		cleanup_budget =
-			RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
-			(unsigned int)ENA_REFILL_THRESH_PACKET);
-
-		/* If too many descs to clean, leave it for another run */
-		if (unlikely(total_tx_descs > cleanup_budget))
-			break;
	}
-	tx_ring->tx_stats.available_desc =
-		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
 
-	if (total_tx_descs > 0) {
+	if (likely(total_tx_descs > 0)) {
		/* acknowledge completion of sent packets */
		tx_ring->next_to_clean = next_to_clean;
		ena_com_comp_ack(tx_ring->ena_com_io_sq, total_tx_descs);
		ena_com_update_dev_comp_head(tx_ring->ena_com_io_cq);
	}
+}
+
+static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+				 uint16_t nb_pkts)
+{
+	struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue);
+	uint16_t sent_idx = 0;
+
+	/* Check adapter state */
+	if (unlikely(tx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) {
+		PMD_DRV_LOG(ALERT,
+			"Trying to xmit pkts while device is NOT running\n");
+		return 0;
+	}
+
+	nb_pkts = RTE_MIN(ena_com_free_q_entries(tx_ring->ena_com_io_sq),
+		nb_pkts);
+
+	for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
+		if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx]))
+			break;
 
+		rte_prefetch0(tx_pkts[ENA_IDX_ADD_MASKED(sent_idx, 4,
+			tx_ring->size_mask)]);
+	}
+
+	tx_ring->tx_stats.available_desc =
+		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+
+	/* If there are ready packets to be xmitted... */
+	if (sent_idx > 0) {
+		/* ...let HW do its best :-) */
+		ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq);
+		tx_ring->tx_stats.doorbells++;
+	}
+
+	ena_tx_cleanup(tx_ring);
+
+	tx_ring->tx_stats.available_desc =
+		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
	tx_ring->tx_stats.tx_poll++;
 
	return sent_idx;