Patch Detail

get: Show a patch.
patch: Update a patch.
put: Update a patch.

GET /api/patches/22180/?format=api
http://patches.dpdk.org/api/patches/22180/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/patch/1490274162-85053-2-git-send-email-roy.fan.zhang@intel.com/", "project": { "id": 1, "url": "http://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<1490274162-85053-2-git-send-email-roy.fan.zhang@intel.com>", "list_archive_url": "https://inbox.dpdk.org/dev/1490274162-85053-2-git-send-email-roy.fan.zhang@intel.com", "date": "2017-03-23T13:02:40", "name": "[dpdk-dev,v2,1/3] crypto/scheduler: add fail-over scheduling mode file", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "8242d1cc3a0311accb0fd0bfc688146c6cd865a2", "submitter": { "id": 304, "url": "http://patches.dpdk.org/api/people/304/?format=api", "name": "Fan Zhang", "email": "roy.fan.zhang@intel.com" }, "delegate": { "id": 22, "url": "http://patches.dpdk.org/api/users/22/?format=api", "username": "pdelarag", "first_name": "Pablo", "last_name": "de Lara Guarch", "email": "pablo.de.lara.guarch@intel.com" }, "mbox": "http://patches.dpdk.org/project/dpdk/patch/1490274162-85053-2-git-send-email-roy.fan.zhang@intel.com/mbox/", "series": [], "comments": "http://patches.dpdk.org/api/patches/22180/comments/", "check": "success", "checks": "http://patches.dpdk.org/api/patches/22180/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@dpdk.org", "Delivered-To": "patchwork@dpdk.org", "Received": [ "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id DC849695C;\n\tThu, 23 Mar 2017 14:01:18 +0100 (CET)", "from mga04.intel.com 
(mga04.intel.com [192.55.52.120])\n\tby dpdk.org (Postfix) with ESMTP id 62FC211C5\n\tfor <dev@dpdk.org>; Thu, 23 Mar 2017 14:01:08 +0100 (CET)", "from orsmga001.jf.intel.com ([10.7.209.18])\n\tby fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t23 Mar 2017 06:01:07 -0700", "from silpixa00381633.ir.intel.com (HELO\n\tsilpixa00381633.ger.corp.intel.com) ([10.237.222.114])\n\tby orsmga001.jf.intel.com with ESMTP; 23 Mar 2017 06:01:06 -0700" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=simple/simple;\n\td=intel.com; i=@intel.com; q=dns/txt; s=intel;\n\tt=1490274068; x=1521810068;\n\th=from:to:cc:subject:date:message-id:in-reply-to: references;\n\tbh=GZjwpNyUyZRTQAhwVYicP6K64yidi2/8wF8Q4Qbk4hs=;\n\tb=Hl94rwylr0prpQzAfkauEVIwRMkktm4TByFy7aEl0y7r2LDhYdWi6TVf\n\tyxhqyZeuBzkdd26mcNPHH0aHbyjSHQ==;", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos; i=\"5.36,209,1486454400\"; d=\"scan'208\";\n\ta=\"1111418596\"", "From": "Fan Zhang <roy.fan.zhang@intel.com>", "To": "dev@dpdk.org", "Cc": "pablo.de.lara.guarch@intel.com, sergio.gonzalez.monroy@intel.com,\n\tdeclan.doherty@intel.com", "Date": "Thu, 23 Mar 2017 13:02:40 +0000", "Message-Id": "<1490274162-85053-2-git-send-email-roy.fan.zhang@intel.com>", "X-Mailer": "git-send-email 2.7.4", "In-Reply-To": "<1490274162-85053-1-git-send-email-roy.fan.zhang@intel.com>", "References": "<1490274162-85053-1-git-send-email-roy.fan.zhang@intel.com>", "Subject": "[dpdk-dev] [PATCH v2 1/3] crypto/scheduler: add fail-over\n\tscheduling mode file", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://dpdk.org/ml/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": 
"<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "This patch adds the fail-over scheduling mode main source file.\n\nSigned-off-by: Fan Zhang <roy.fan.zhang@intel.com>\n---\n drivers/crypto/scheduler/scheduler_failover.c | 324 ++++++++++++++++++++++++++\n 1 file changed, 324 insertions(+)\n create mode 100644 drivers/crypto/scheduler/scheduler_failover.c", "diff": "diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c\nnew file mode 100644\nindex 0000000..58c8302\n--- /dev/null\n+++ b/drivers/crypto/scheduler/scheduler_failover.c\n@@ -0,0 +1,324 @@\n+/*-\n+ * BSD LICENSE\n+ *\n+ * Copyright(c) 2017 Intel Corporation. All rights reserved.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions\n+ * are met:\n+ *\n+ * * Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * * Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in\n+ * the documentation and/or other materials provided with the\n+ * distribution.\n+ * * Neither the name of Intel Corporation nor the names of its\n+ * contributors may be used to endorse or promote products derived\n+ * from this software without specific prior written permission.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ * \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <rte_cryptodev.h>\n+#include <rte_malloc.h>\n+\n+#include \"rte_cryptodev_scheduler_operations.h\"\n+#include \"scheduler_pmd_private.h\"\n+\n+#define PRIMARY_SLAVE_IDX\t0\n+#define SECONDARY_SLAVE_IDX\t1\n+#define NB_FAILOVER_SLAVES\t2\n+#define SLAVE_SWITCH_MASK\t(0x01)\n+\n+struct fo_scheduler_qp_ctx {\n+\tstruct scheduler_slave primary_slave;\n+\tstruct scheduler_slave secondary_slave;\n+\n+\tuint8_t deq_idx;\n+};\n+\n+static inline uint16_t __attribute__((always_inline))\n+failover_slave_enqueue(struct scheduler_slave *slave, uint8_t slave_idx,\n+\t\tstruct rte_crypto_op **ops, uint16_t nb_ops)\n+{\n+\tuint16_t i, processed_ops;\n+\tstruct rte_cryptodev_sym_session *sessions[nb_ops];\n+\tstruct scheduler_session *sess0, *sess1, *sess2, *sess3;\n+\n+\tfor (i = 0; i < nb_ops && i < 4; i++)\n+\t\trte_prefetch0(ops[i]->sym->session);\n+\n+\tfor (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {\n+\t\trte_prefetch0(ops[i + 4]->sym->session);\n+\t\trte_prefetch0(ops[i + 5]->sym->session);\n+\t\trte_prefetch0(ops[i + 6]->sym->session);\n+\t\trte_prefetch0(ops[i + 7]->sym->session);\n+\n+\t\tsess0 = (struct scheduler_session *)\n+\t\t\t\tops[i]->sym->session->_private;\n+\t\tsess1 = (struct scheduler_session *)\n+\t\t\t\tops[i+1]->sym->session->_private;\n+\t\tsess2 = (struct scheduler_session *)\n+\t\t\t\tops[i+2]->sym->session->_private;\n+\t\tsess3 = (struct scheduler_session 
*)\n+\t\t\t\tops[i+3]->sym->session->_private;\n+\n+\t\tsessions[i] = ops[i]->sym->session;\n+\t\tsessions[i + 1] = ops[i + 1]->sym->session;\n+\t\tsessions[i + 2] = ops[i + 2]->sym->session;\n+\t\tsessions[i + 3] = ops[i + 3]->sym->session;\n+\n+\t\tops[i]->sym->session = sess0->sessions[slave_idx];\n+\t\tops[i + 1]->sym->session = sess1->sessions[slave_idx];\n+\t\tops[i + 2]->sym->session = sess2->sessions[slave_idx];\n+\t\tops[i + 3]->sym->session = sess3->sessions[slave_idx];\n+\t}\n+\n+\tfor (; i < nb_ops; i++) {\n+\t\tsess0 = (struct scheduler_session *)\n+\t\t\t\tops[i]->sym->session->_private;\n+\t\tsessions[i] = ops[i]->sym->session;\n+\t\tops[i]->sym->session = sess0->sessions[slave_idx];\n+\t}\n+\n+\tprocessed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,\n+\t\t\tslave->qp_id, ops, nb_ops);\n+\tslave->nb_inflight_cops += processed_ops;\n+\n+\tif (unlikely(processed_ops < nb_ops))\n+\t\tfor (i = processed_ops; i < nb_ops; i++)\n+\t\t\tops[i]->sym->session = sessions[i];\n+\n+\treturn processed_ops;\n+}\n+\n+static uint16_t\n+schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)\n+{\n+\tstruct fo_scheduler_qp_ctx *qp_ctx =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->private_qp_ctx;\n+\tuint16_t enqueued_ops;\n+\n+\tif (unlikely(nb_ops == 0))\n+\t\treturn 0;\n+\n+\tenqueued_ops = failover_slave_enqueue(&qp_ctx->primary_slave,\n+\t\t\tPRIMARY_SLAVE_IDX, ops, nb_ops);\n+\n+\tif (enqueued_ops < nb_ops)\n+\t\tenqueued_ops += failover_slave_enqueue(&qp_ctx->secondary_slave,\n+\t\t\t\tSECONDARY_SLAVE_IDX, &ops[enqueued_ops],\n+\t\t\t\tnb_ops - enqueued_ops);\n+\n+\treturn enqueued_ops;\n+}\n+\n+\n+static uint16_t\n+schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,\n+\t\tuint16_t nb_ops)\n+{\n+\tstruct rte_ring *order_ring =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->order_ring;\n+\tuint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,\n+\t\t\tnb_ops);\n+\tuint16_t nb_ops_enqd = schedule_enqueue(qp, 
ops,\n+\t\t\tnb_ops_to_enq);\n+\n+\tscheduler_order_insert(order_ring, ops, nb_ops_enqd);\n+\n+\treturn nb_ops_enqd;\n+}\n+\n+static uint16_t\n+schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)\n+{\n+\tstruct fo_scheduler_qp_ctx *qp_ctx =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->private_qp_ctx;\n+\tstruct scheduler_slave *slaves[NB_FAILOVER_SLAVES] = {\n+\t\t\t&qp_ctx->primary_slave, &qp_ctx->secondary_slave};\n+\tstruct scheduler_slave *slave = slaves[qp_ctx->deq_idx];\n+\tuint16_t nb_deq_ops = 0, nb_deq_ops2 = 0;\n+\n+\tif (slave->nb_inflight_cops) {\n+\t\tnb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,\n+\t\t\tslave->qp_id, ops, nb_ops);\n+\t\tslave->nb_inflight_cops -= nb_deq_ops;\n+\n+\t\t/* force a flush */\n+\t\tif (unlikely(nb_deq_ops == 0))\n+\t\t\trte_cryptodev_enqueue_burst(slave->dev_id, slave->qp_id,\n+\t\t\t\t\tNULL, 0);\n+\t}\n+\n+\tqp_ctx->deq_idx = (~qp_ctx->deq_idx) & SLAVE_SWITCH_MASK;\n+\n+\tif (nb_deq_ops == nb_ops)\n+\t\treturn nb_deq_ops;\n+\n+\tslave = slaves[qp_ctx->deq_idx];\n+\n+\tif (slave->nb_inflight_cops) {\n+\t\tnb_deq_ops2 = rte_cryptodev_dequeue_burst(slave->dev_id,\n+\t\t\tslave->qp_id, &ops[nb_deq_ops], nb_ops - nb_deq_ops);\n+\t\tslave->nb_inflight_cops -= nb_deq_ops2;\n+\n+\t\t/* force a flush */\n+\t\tif (unlikely(nb_deq_ops == 0))\n+\t\t\trte_cryptodev_enqueue_burst(slave->dev_id, slave->qp_id,\n+\t\t\t\t\tNULL, 0);\n+\t}\n+\n+\treturn nb_deq_ops + nb_deq_ops2;\n+}\n+\n+static uint16_t\n+schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,\n+\t\tuint16_t nb_ops)\n+{\n+\tstruct rte_ring *order_ring =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->order_ring;\n+\tstruct fo_scheduler_qp_ctx *qp_ctx =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->private_qp_ctx;\n+\tuint16_t nb_deq_ops = 0;\n+\n+\tif (qp_ctx->primary_slave.nb_inflight_cops) {\n+\t\tnb_deq_ops = rte_cryptodev_dequeue_burst(\n+\t\t\t\tqp_ctx->primary_slave.dev_id,\n+\t\t\t\tqp_ctx->primary_slave.qp_id, ops, 
nb_ops);\n+\t\tqp_ctx->primary_slave.nb_inflight_cops -= nb_deq_ops;\n+\n+\t\t/* force a flush */\n+\t\tif (unlikely(nb_deq_ops == 0))\n+\t\t\trte_cryptodev_enqueue_burst(\n+\t\t\t\t\tqp_ctx->primary_slave.dev_id,\n+\t\t\t\t\tqp_ctx->primary_slave.qp_id,\n+\t\t\t\t\tNULL, 0);\n+\t}\n+\n+\tif (qp_ctx->secondary_slave.nb_inflight_cops) {\n+\t\tnb_deq_ops = rte_cryptodev_dequeue_burst(\n+\t\t\t\tqp_ctx->secondary_slave.dev_id,\n+\t\t\t\tqp_ctx->secondary_slave.qp_id, ops, nb_ops);\n+\t\tqp_ctx->secondary_slave.nb_inflight_cops -= nb_deq_ops;\n+\n+\t\t/* force a flush */\n+\t\tif (unlikely(nb_deq_ops == 0))\n+\t\t\trte_cryptodev_enqueue_burst(\n+\t\t\t\t\tqp_ctx->secondary_slave.dev_id,\n+\t\t\t\t\tqp_ctx->secondary_slave.qp_id,\n+\t\t\t\t\tNULL, 0);\n+\t}\n+\n+\treturn scheduler_order_drain(order_ring, ops, nb_ops);\n+}\n+\n+static int\n+slave_attach(__rte_unused struct rte_cryptodev *dev,\n+\t\t__rte_unused uint8_t slave_id)\n+{\n+\treturn 0;\n+}\n+\n+static int\n+slave_detach(__rte_unused struct rte_cryptodev *dev,\n+\t\t__rte_unused uint8_t slave_id)\n+{\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_start(struct rte_cryptodev *dev)\n+{\n+\tstruct scheduler_ctx *sched_ctx = dev->data->dev_private;\n+\tuint16_t i;\n+\n+\tif (sched_ctx->nb_slaves < 2) {\n+\t\tCS_LOG_ERR(\"Number of slaves shall no less than 2\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tif (sched_ctx->reordering_enabled) {\n+\t\tdev->enqueue_burst = schedule_enqueue_ordering;\n+\t\tdev->dequeue_burst = schedule_dequeue_ordering;\n+\t} else {\n+\t\tdev->enqueue_burst = schedule_enqueue;\n+\t\tdev->dequeue_burst = schedule_dequeue;\n+\t}\n+\n+\tfor (i = 0; i < dev->data->nb_queue_pairs; i++) {\n+\t\tstruct fo_scheduler_qp_ctx *qp_ctx =\n+\t\t\t((struct scheduler_qp_ctx *)\n+\t\t\t\tdev->data->queue_pairs[i])->private_qp_ctx;\n+\n+\t\trte_memcpy(&qp_ctx->primary_slave,\n+\t\t\t\t&sched_ctx->slaves[PRIMARY_SLAVE_IDX],\n+\t\t\t\tsizeof(struct 
scheduler_slave));\n+\t\trte_memcpy(&qp_ctx->secondary_slave,\n+\t\t\t\t&sched_ctx->slaves[SECONDARY_SLAVE_IDX],\n+\t\t\t\tsizeof(struct scheduler_slave));\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_stop(__rte_unused struct rte_cryptodev *dev)\n+{\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)\n+{\n+\tstruct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];\n+\tstruct fo_scheduler_qp_ctx *fo_qp_ctx;\n+\n+\tfo_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*fo_qp_ctx), 0,\n+\t\t\trte_socket_id());\n+\tif (!fo_qp_ctx) {\n+\t\tCS_LOG_ERR(\"failed allocate memory for private queue pair\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tqp_ctx->private_qp_ctx = (void *)fo_qp_ctx;\n+\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)\n+{\n+\treturn 0;\n+}\n+\n+struct rte_cryptodev_scheduler_ops scheduler_fo_ops = {\n+\tslave_attach,\n+\tslave_detach,\n+\tscheduler_start,\n+\tscheduler_stop,\n+\tscheduler_config_qp,\n+\tscheduler_create_private_ctx,\n+};\n+\n+struct rte_cryptodev_scheduler fo_scheduler = {\n+\t\t.name = \"failover-scheduler\",\n+\t\t.description = \"scheduler which enqueues to the primary slave, \"\n+\t\t\t\t\"and only then enqueues to the secondary slave \"\n+\t\t\t\t\"upon failing on enqueuing to primary\",\n+\t\t.mode = CDEV_SCHED_MODE_FAILOVER,\n+\t\t.ops = &scheduler_fo_ops\n+};\n+\n+struct rte_cryptodev_scheduler *failover_scheduler = &fo_scheduler;\n", "prefixes": [ "dpdk-dev", "v2", "1/3" ] }