Patch Detail
get: Show a patch.
patch: Update a patch.
put: Update a patch.
GET /api/patches/57444/?format=api
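The endpoint above follows the Patchwork REST API's `/api/patches/{id}/` pattern. A minimal offline sketch of building that URL and pulling the headline fields out of the payload shown below — the `sample` dict is a hand-trimmed copy of the response, and the helper names are illustrative, not part of any client library:

```python
# Sketch: build a Patchwork patch-detail URL and summarize its JSON payload.
# No network access is performed; `sample` mirrors a few fields of the
# response body shown on this page.

BASE_URL = "https://patches.dpdk.org/api"

def patch_detail_url(patch_id: int) -> str:
    """Return the detail endpoint for a single patch."""
    return f"{BASE_URL}/patches/{patch_id}/"

def summarize(patch: dict) -> str:
    """One-line summary from fields every patch object carries."""
    return f'#{patch["id"]} {patch["name"]} [{patch["state"]}]'

sample = {
    "id": 57444,
    "name": "[v2,3/6] net/mlx5: fix completion queue drain loop",
    "state": "accepted",
}

print(patch_detail_url(57444))
print(summarize(sample))
```

Appending `?format=api` (as in the request line above) asks the server for the browsable HTML rendering; omitting it returns plain JSON suitable for scripting.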
{ "id": 57444, "url": "https://patches.dpdk.org/api/patches/57444/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/patch/1565010234-21769-4-git-send-email-viacheslavo@mellanox.com/", "project": { "id": 1, "url": "https://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<1565010234-21769-4-git-send-email-viacheslavo@mellanox.com>", "list_archive_url": "https://inbox.dpdk.org/dev/1565010234-21769-4-git-send-email-viacheslavo@mellanox.com", "date": "2019-08-05T13:03:51", "name": "[v2,3/6] net/mlx5: fix completion queue drain loop", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": true, "hash": "b33217a442b7ab1674438b69bd9bc5c813293b84", "submitter": { "id": 1102, "url": "https://patches.dpdk.org/api/people/1102/?format=api", "name": "Slava Ovsiienko", "email": "viacheslavo@mellanox.com" }, "delegate": { "id": 3268, "url": "https://patches.dpdk.org/api/users/3268/?format=api", "username": "rasland", "first_name": "Raslan", "last_name": "Darawsheh", "email": "rasland@nvidia.com" }, "mbox": "https://patches.dpdk.org/project/dpdk/patch/1565010234-21769-4-git-send-email-viacheslavo@mellanox.com/mbox/", "series": [ { "id": 5927, "url": "https://patches.dpdk.org/api/series/5927/?format=api", "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=5927", "date": "2019-08-05T13:03:48", "name": "fix transmit datapath cumulative series", "version": 2, "mbox": "https://patches.dpdk.org/series/5927/mbox/" } ], "comments": "https://patches.dpdk.org/api/patches/57444/comments/", "check": "success", "checks": "https://patches.dpdk.org/api/patches/57444/checks/", "tags": {}, "related": [], "headers": { "Return-Path": 
"<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@dpdk.org", "Delivered-To": "patchwork@dpdk.org", "Received": [ "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 2A9771BE16;\n\tMon, 5 Aug 2019 15:04:26 +0200 (CEST)", "from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])\n\tby dpdk.org (Postfix) with ESMTP id 299D01BE0C\n\tfor <dev@dpdk.org>; Mon, 5 Aug 2019 15:04:24 +0200 (CEST)", "from Internal Mail-Server by MTLPINE1 (envelope-from\n\tviacheslavo@mellanox.com)\n\twith ESMTPS (AES256-SHA encrypted); 5 Aug 2019 16:04:19 +0300", "from pegasus12.mtr.labs.mlnx (pegasus12.mtr.labs.mlnx\n\t[10.210.17.40])\n\tby labmailer.mlnx (8.13.8/8.13.8) with ESMTP id x75D4JOH006533;\n\tMon, 5 Aug 2019 16:04:19 +0300", "from pegasus12.mtr.labs.mlnx (localhost [127.0.0.1])\n\tby pegasus12.mtr.labs.mlnx (8.14.7/8.14.7) with ESMTP id\n\tx75D4J9j022369; Mon, 5 Aug 2019 13:04:19 GMT", "(from viacheslavo@localhost)\n\tby pegasus12.mtr.labs.mlnx (8.14.7/8.14.7/Submit) id x75D4JFp022368; \n\tMon, 5 Aug 2019 13:04:19 GMT" ], "X-Authentication-Warning": "pegasus12.mtr.labs.mlnx: viacheslavo set sender to\n\tviacheslavo@mellanox.com using -f", "From": "Viacheslav Ovsiienko <viacheslavo@mellanox.com>", "To": "dev@dpdk.org", "Cc": "yskoh@mellanox.com, matan@mellanox.com", "Date": "Mon, 5 Aug 2019 13:03:51 +0000", "Message-Id": "<1565010234-21769-4-git-send-email-viacheslavo@mellanox.com>", "X-Mailer": "git-send-email 1.8.3.1", "In-Reply-To": "<1565010234-21769-1-git-send-email-viacheslavo@mellanox.com>", "References": "<1565010234-21769-1-git-send-email-viacheslavo@mellanox.com>", "Subject": "[dpdk-dev] [PATCH v2 3/6] net/mlx5: fix completion queue drain loop", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": 
"<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "The completion loop speed optimizations for error-free\noperations are done - no CQE field fetch on each loop\niteration. Also, code size is oprimized - the flush\nbuffers routine is invoked once.\n\nFixes: 318ea4cfa1b1 (\"net/mlx5: fix Tx completion descriptors fetching loop\")\n\nSigned-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>\nAcked-by: Matan Azrad <matan@mellanox.com>\n---\n drivers/net/mlx5/mlx5_rxtx.c | 98 +++++++++++++++++++++++++++++---------------\n drivers/net/mlx5/mlx5_rxtx.h | 8 ++--\n 2 files changed, 68 insertions(+), 38 deletions(-)", "diff": "diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c\nindex 1ec3793..a890f41 100644\n--- a/drivers/net/mlx5/mlx5_rxtx.c\n+++ b/drivers/net/mlx5/mlx5_rxtx.c\n@@ -654,9 +654,10 @@ enum mlx5_txcmp_code {\n * Pointer to the error CQE.\n *\n * @return\n- * The last Tx buffer element to free.\n+ * Negative value if queue recovery failed,\n+ * the last Tx buffer element to free otherwise.\n */\n-uint16_t\n+int\n mlx5_tx_error_cqe_handle(struct mlx5_txq_data *restrict txq,\n \t\t\t volatile struct mlx5_err_cqe *err_cqe)\n {\n@@ -706,6 +707,7 @@ enum mlx5_txcmp_code {\n \t\t\treturn txq->elts_head;\n \t\t}\n \t\t/* Recovering failed - try again later on the same WQE. 
*/\n+\t\treturn -1;\n \t} else {\n \t\ttxq->cq_ci++;\n \t}\n@@ -2010,6 +2012,45 @@ enum mlx5_txcmp_code {\n }\n \n /**\n+ * Update completion queue consuming index via doorbell\n+ * and flush the completed data buffers.\n+ *\n+ * @param txq\n+ * Pointer to TX queue structure.\n+ * @param valid CQE pointer\n+ * if not NULL update txq->wqe_pi and flush the buffers\n+ * @param itail\n+ * if not negative - flush the buffers till this index.\n+ * @param olx\n+ * Configured Tx offloads mask. It is fully defined at\n+ * compile time and may be used for optimization.\n+ */\n+static __rte_always_inline void\n+mlx5_tx_comp_flush(struct mlx5_txq_data *restrict txq,\n+\t\t volatile struct mlx5_cqe *last_cqe,\n+\t\t int itail,\n+\t\t unsigned int olx __rte_unused)\n+{\n+\tuint16_t tail;\n+\n+\tif (likely(last_cqe != NULL)) {\n+\t\ttxq->wqe_pi = rte_be_to_cpu_16(last_cqe->wqe_counter);\n+\t\ttail = ((volatile struct mlx5_wqe_cseg *)\n+\t\t\t(txq->wqes + (txq->wqe_pi & txq->wqe_m)))->misc;\n+\t} else if (itail >= 0) {\n+\t\ttail = (uint16_t)itail;\n+\t} else {\n+\t\treturn;\n+\t}\n+\trte_compiler_barrier();\n+\t*txq->cq_db = rte_cpu_to_be_32(txq->cq_ci);\n+\tif (likely(tail != txq->elts_tail)) {\n+\t\tmlx5_tx_free_elts(txq, tail, olx);\n+\t\tassert(tail == txq->elts_tail);\n+\t}\n+}\n+\n+/**\n * Manage TX completions. 
This routine checks the CQ for\n * arrived CQEs, deduces the last accomplished WQE in SQ,\n * updates SQ producing index and frees all completed mbufs.\n@@ -2028,10 +2069,11 @@ enum mlx5_txcmp_code {\n \t\t\t unsigned int olx __rte_unused)\n {\n \tunsigned int count = MLX5_TX_COMP_MAX_CQE;\n-\tbool update = false;\n-\tuint16_t tail = txq->elts_tail;\n+\tvolatile struct mlx5_cqe *last_cqe = NULL;\n \tint ret;\n \n+\tstatic_assert(MLX5_CQE_STATUS_HW_OWN < 0, \"Must be negative value\");\n+\tstatic_assert(MLX5_CQE_STATUS_SW_OWN < 0, \"Must be negative value\");\n \tdo {\n \t\tvolatile struct mlx5_cqe *cqe;\n \n@@ -2043,32 +2085,30 @@ enum mlx5_txcmp_code {\n \t\t\t\tassert(ret == MLX5_CQE_STATUS_HW_OWN);\n \t\t\t\tbreak;\n \t\t\t}\n-\t\t\t/* Some error occurred, try to restart. */\n+\t\t\t/*\n+\t\t\t * Some error occurred, try to restart.\n+\t\t\t * We have no barrier after WQE related Doorbell\n+\t\t\t * written, make sure all writes are completed\n+\t\t\t * here, before we might perform SQ reset.\n+\t\t\t */\n \t\t\trte_wmb();\n-\t\t\ttail = mlx5_tx_error_cqe_handle\n+\t\t\tret = mlx5_tx_error_cqe_handle\n \t\t\t\t(txq, (volatile struct mlx5_err_cqe *)cqe);\n-\t\t\tif (likely(tail != txq->elts_tail)) {\n-\t\t\t\tmlx5_tx_free_elts(txq, tail, olx);\n-\t\t\t\tassert(tail == txq->elts_tail);\n-\t\t\t}\n-\t\t\t/* Allow flushing all CQEs from the queue. */\n-\t\t\tcount = txq->cqe_s;\n-\t\t} else {\n-\t\t\tvolatile struct mlx5_wqe_cseg *cseg;\n-\n-\t\t\t/* Normal transmit completion. */\n-\t\t\t++txq->cq_ci;\n-\t\t\trte_cio_rmb();\n-\t\t\ttxq->wqe_pi = rte_be_to_cpu_16(cqe->wqe_counter);\n-\t\t\tcseg = (volatile struct mlx5_wqe_cseg *)\n-\t\t\t\t(txq->wqes + (txq->wqe_pi & txq->wqe_m));\n-\t\t\ttail = cseg->misc;\n+\t\t\t/*\n+\t\t\t * Flush buffers, update consuming index\n+\t\t\t * if recovery succeeded. Otherwise\n+\t\t\t * just try to recover later.\n+\t\t\t */\n+\t\t\tlast_cqe = NULL;\n+\t\t\tbreak;\n \t\t}\n+\t\t/* Normal transmit completion. 
*/\n+\t\t++txq->cq_ci;\n+\t\tlast_cqe = cqe;\n #ifndef NDEBUG\n \t\tif (txq->cq_pi)\n \t\t\t--txq->cq_pi;\n #endif\n-\t\tupdate = true;\n \t/*\n \t * We have to restrict the amount of processed CQEs\n \t * in one tx_burst routine call. The CQ may be large\n@@ -2078,17 +2118,7 @@ enum mlx5_txcmp_code {\n \t * latency.\n \t */\n \t} while (--count);\n-\tif (likely(tail != txq->elts_tail)) {\n-\t\t/* Free data buffers from elts. */\n-\t\tmlx5_tx_free_elts(txq, tail, olx);\n-\t\tassert(tail == txq->elts_tail);\n-\t}\n-\tif (likely(update)) {\n-\t\t/* Update the consumer index. */\n-\t\trte_compiler_barrier();\n-\t\t*txq->cq_db =\n-\t\trte_cpu_to_be_32(txq->cq_ci);\n-\t}\n+\tmlx5_tx_comp_flush(txq, last_cqe, ret, olx);\n }\n \n /**\ndiff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h\nindex c209d99..aaa02a2 100644\n--- a/drivers/net/mlx5/mlx5_rxtx.h\n+++ b/drivers/net/mlx5/mlx5_rxtx.h\n@@ -400,7 +400,7 @@ struct mlx5_txq_ctrl *mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx,\n void mlx5_set_ptype_table(void);\n void mlx5_set_cksum_table(void);\n void mlx5_set_swp_types_table(void);\n-__rte_noinline uint16_t mlx5_tx_error_cqe_handle\n+__rte_noinline int mlx5_tx_error_cqe_handle\n \t\t\t\t(struct mlx5_txq_data *restrict txq,\n \t\t\t\t volatile struct mlx5_err_cqe *err_cqe);\n uint16_t mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);\n@@ -499,9 +499,9 @@ int mlx5_dma_unmap(struct rte_pci_device *pdev, void *addr, uint64_t iova,\n \n /* CQE status. */\n enum mlx5_cqe_status {\n-\tMLX5_CQE_STATUS_SW_OWN,\n-\tMLX5_CQE_STATUS_HW_OWN,\n-\tMLX5_CQE_STATUS_ERR,\n+\tMLX5_CQE_STATUS_SW_OWN = -1,\n+\tMLX5_CQE_STATUS_HW_OWN = -2,\n+\tMLX5_CQE_STATUS_ERR = -3,\n };\n \n /**\n", "prefixes": [ "v2", "3/6" ] }
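The diff in this payload changes `mlx5_tx_error_cqe_handle` to return `int` instead of `uint16_t` and gives the `mlx5_cqe_status` enum explicitly negative values, so a single return value can carry either a status code (negative) or a valid ring index (non-negative). A rough Python sketch of that control flow — the names and simplified logic are illustrative, not the driver's:

```python
# Status sentinels are negative on purpose: they can never collide with
# a valid (non-negative) Tx ring index, so one int covers both cases.
STATUS_SW_OWN = -1
STATUS_HW_OWN = -2
STATUS_ERR = -3

def error_cqe_handle(recovered: bool, elts_head: int) -> int:
    """Return the tail index to free up to, or -1 if queue recovery failed
    (mirroring the patched mlx5_tx_error_cqe_handle contract)."""
    return elts_head if recovered else -1

def comp_flush(last_tail, itail):
    """Pick the flush target the way mlx5_tx_comp_flush prioritizes its
    arguments: a tail taken from a valid CQE wins, then a non-negative
    error-path index; otherwise there is nothing to flush."""
    if last_tail is not None:
        return last_tail
    if itail >= 0:
        return itail
    return None

print(comp_flush(None, error_cqe_handle(True, 42)))   # recovery ok: flush to 42
print(comp_flush(None, error_cqe_handle(False, 42)))  # recovery failed: None
```

This is the benefit the commit message describes: the fast path records only `last_cqe` per iteration, and a single flush call after the loop handles both the normal and error cases.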