Patch Detail
get: Show a patch.
patch: Update a patch.
put: Update a patch.
GET /api/patches/22742/?format=api
{ "id": 22742, "url": "
http://patches.dpdk.org/api/patches/22742/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/patch/1490804784-64350-2-git-send-email-roy.fan.zhang@intel.com/", "project": { "id": 1, "url": "http://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<1490804784-64350-2-git-send-email-roy.fan.zhang@intel.com>", "list_archive_url": "https://inbox.dpdk.org/dev/1490804784-64350-2-git-send-email-roy.fan.zhang@intel.com", "date": "2017-03-29T16:26:22", "name": "[dpdk-dev,v5,1/3] crypto/scheduler: add packet size based mode code", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "e1e606a8d31a50c429feb8204fff7408a1b4e7c0", "submitter": { "id": 304, "url": "http://patches.dpdk.org/api/people/304/?format=api", "name": "Fan Zhang", "email": "roy.fan.zhang@intel.com" }, "delegate": { "id": 22, "url": "http://patches.dpdk.org/api/users/22/?format=api", "username": "pdelarag", "first_name": "Pablo", "last_name": "de Lara Guarch", "email": "pablo.de.lara.guarch@intel.com" }, "mbox": "http://patches.dpdk.org/project/dpdk/patch/1490804784-64350-2-git-send-email-roy.fan.zhang@intel.com/mbox/", "series": [], "comments": "http://patches.dpdk.org/api/patches/22742/comments/", "check": "success", "checks": "http://patches.dpdk.org/api/patches/22742/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@dpdk.org", "Delivered-To": "patchwork@dpdk.org", "Received": [ "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 522B4532C;\n\tWed, 29 Mar 2017 20:23:02 +0200 (CEST)", "from mga04.intel.com 
(mga04.intel.com [192.55.52.120])\n\tby dpdk.org (Postfix) with ESMTP id 12C6AF966\n\tfor <dev@dpdk.org>; Wed, 29 Mar 2017 18:24:59 +0200 (CEST)", "from fmsmga001.fm.intel.com ([10.253.24.23])\n\tby fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t29 Mar 2017 09:24:48 -0700", "from silpixa00381633.ir.intel.com (HELO\n\tsilpixa00381633.ger.corp.intel.com) ([10.237.222.114])\n\tby fmsmga001.fm.intel.com with ESMTP; 29 Mar 2017 09:24:47 -0700" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=simple/simple;\n\td=intel.com; i=@intel.com; q=dns/txt; s=intel;\n\tt=1490804700; x=1522340700;\n\th=from:to:cc:subject:date:message-id:in-reply-to: references;\n\tbh=ScZttNs3JuGmdcUK6GPht9UoJT1Un9WiopJ2fOV8ZvE=;\n\tb=XtShaab5NxXEVZGw3HIjPJbMSi7nkdUmhVsRK+RoTQXiAvKpmpI7QDQg\n\tHSmZHFRyumyyVBabZI/XlpPNjGqa4Q==;", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos; i=\"5.36,242,1486454400\"; d=\"scan'208\";\n\ta=\"1128542328\"", "From": "Fan Zhang <roy.fan.zhang@intel.com>", "To": "dev@dpdk.org", "Cc": "pablo.de.lara.guarch@intel.com, sergio.gonzalez.monroy@intel.com,\n\tdeclan.doherty@intel.com", "Date": "Wed, 29 Mar 2017 17:26:22 +0100", "Message-Id": "<1490804784-64350-2-git-send-email-roy.fan.zhang@intel.com>", "X-Mailer": "git-send-email 2.7.4", "In-Reply-To": "<1490804784-64350-1-git-send-email-roy.fan.zhang@intel.com>", "References": "<1490775959-65295-1-git-send-email-roy.fan.zhang@intel.com>\n\t<1490804784-64350-1-git-send-email-roy.fan.zhang@intel.com>", "Subject": "[dpdk-dev] [PATCH v5 1/3] crypto/scheduler: add packet size based\n\tmode code", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://dpdk.org/ml/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": 
"<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "This patch adds the main source file for the packet size based\ndistribution mode.\n\nThe packet-size based distribution mode is a scheduling mode that works\nwith two slaves, a primary slave and a secondary slave, and distributes\nthe enqueued crypto ops between them based on their data lengths. A\ncrypto op is distributed to the primary slave if its data length is\nequal to or greater than the designated threshold; otherwise it is\nhandled by the secondary slave.\n\nSigned-off-by: Fan Zhang <roy.fan.zhang@intel.com>\nSeries-acked-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>\n---\n .../crypto/scheduler/scheduler_pkt_size_distr.c | 410 +++++++++++++++++++++\n 1 file changed, 410 insertions(+)\n create mode 100644 drivers/crypto/scheduler/scheduler_pkt_size_distr.c", "diff": "diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c\nnew file mode 100644\nindex 0000000..8da10c8\n--- /dev/null\n+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c\n@@ -0,0 +1,410 @@\n+/*-\n+ * BSD LICENSE\n+ *\n+ * Copyright(c) 2017 Intel Corporation. 
All rights reserved.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions\n+ * are met:\n+ *\n+ * * Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * * Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in\n+ * the documentation and/or other materials provided with the\n+ * distribution.\n+ * * Neither the name of Intel Corporation nor the names of its\n+ * contributors may be used to endorse or promote products derived\n+ * from this software without specific prior written permission.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ * \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <rte_cryptodev.h>\n+#include <rte_malloc.h>\n+\n+#include \"rte_cryptodev_scheduler_operations.h\"\n+#include \"scheduler_pmd_private.h\"\n+\n+#define DEF_PKT_SIZE_THRESHOLD\t\t\t(0xffffff80)\n+#define SLAVE_IDX_SWITCH_MASK\t\t\t(0x01)\n+#define PRIMARY_SLAVE_IDX\t\t\t0\n+#define SECONDARY_SLAVE_IDX\t\t\t1\n+#define NB_PKT_SIZE_SLAVES\t\t\t2\n+\n+/** pkt size based scheduler context */\n+struct psd_scheduler_ctx {\n+\tuint32_t threshold;\n+};\n+\n+/** pkt size based scheduler queue pair context */\n+struct psd_scheduler_qp_ctx {\n+\tstruct scheduler_slave primary_slave;\n+\tstruct scheduler_slave secondary_slave;\n+\tuint32_t threshold;\n+\tuint32_t max_nb_objs;\n+\tuint8_t deq_idx;\n+} __rte_cache_aligned;\n+\n+/** scheduling operation variables' wrapping */\n+struct psd_schedule_op {\n+\tuint8_t slave_idx;\n+\tuint16_t pos;\n+};\n+\n+static uint16_t\n+schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)\n+{\n+\tstruct psd_scheduler_qp_ctx *qp_ctx =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->private_qp_ctx;\n+\tstruct rte_crypto_op *sched_ops[NB_PKT_SIZE_SLAVES][nb_ops];\n+\tstruct scheduler_session *sess;\n+\tuint32_t in_flight_ops[NB_PKT_SIZE_SLAVES] = {\n+\t\t\tqp_ctx->primary_slave.nb_inflight_cops,\n+\t\t\tqp_ctx->secondary_slave.nb_inflight_cops\n+\t};\n+\tstruct psd_schedule_op enq_ops[NB_PKT_SIZE_SLAVES] = {\n+\t\t{PRIMARY_SLAVE_IDX, 0}, {SECONDARY_SLAVE_IDX, 0}\n+\t};\n+\tstruct 
psd_schedule_op *p_enq_op;\n+\tuint16_t i, processed_ops_pri = 0, processed_ops_sec = 0;\n+\tuint32_t job_len;\n+\n+\tif (unlikely(nb_ops == 0))\n+\t\treturn 0;\n+\n+\tfor (i = 0; i < nb_ops && i < 4; i++) {\n+\t\trte_prefetch0(ops[i]->sym);\n+\t\trte_prefetch0(ops[i]->sym->session);\n+\t}\n+\n+\tfor (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {\n+\t\trte_prefetch0(ops[i + 4]->sym);\n+\t\trte_prefetch0(ops[i + 4]->sym->session);\n+\t\trte_prefetch0(ops[i + 5]->sym);\n+\t\trte_prefetch0(ops[i + 5]->sym->session);\n+\t\trte_prefetch0(ops[i + 6]->sym);\n+\t\trte_prefetch0(ops[i + 6]->sym->session);\n+\t\trte_prefetch0(ops[i + 7]->sym);\n+\t\trte_prefetch0(ops[i + 7]->sym->session);\n+\n+\t\tsess = (struct scheduler_session *)\n+\t\t\t\tops[i]->sym->session->_private;\n+\t\t/* job_len is initialized as cipher data length, once\n+\t\t * it is 0, equals to auth data length\n+\t\t */\n+\t\tjob_len = ops[i]->sym->cipher.data.length;\n+\t\tjob_len += (ops[i]->sym->cipher.data.length == 0) *\n+\t\t\t\tops[i]->sym->auth.data.length;\n+\t\t/* decide the target op based on the job length */\n+\t\tp_enq_op = &enq_ops[!(job_len & qp_ctx->threshold)];\n+\t\tsched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i];\n+\t\tops[i]->sym->session = sess->sessions[p_enq_op->slave_idx];\n+\t\tp_enq_op->pos++;\n+\n+\t\t/* stop schedule cops before the queue is full, this shall\n+\t\t * prevent the failed enqueue\n+\t\t */\n+\t\tif (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] >=\n+\t\t\t\tqp_ctx->max_nb_objs) {\n+\t\t\ti = nb_ops;\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tsess = (struct scheduler_session *)\n+\t\t\t\tops[i+1]->sym->session->_private;\n+\t\tjob_len = ops[i+1]->sym->cipher.data.length;\n+\t\tjob_len += (ops[i+1]->sym->cipher.data.length == 0) *\n+\t\t\t\tops[i+1]->sym->auth.data.length;\n+\t\tp_enq_op = &enq_ops[!(job_len & qp_ctx->threshold)];\n+\t\tsched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+1];\n+\t\tops[i+1]->sym->session = 
sess->sessions[p_enq_op->slave_idx];\n+\t\tp_enq_op->pos++;\n+\n+\t\tif (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] >=\n+\t\t\t\tqp_ctx->max_nb_objs) {\n+\t\t\ti = nb_ops;\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tsess = (struct scheduler_session *)\n+\t\t\t\tops[i+2]->sym->session->_private;\n+\t\tjob_len = ops[i+2]->sym->cipher.data.length;\n+\t\tjob_len += (ops[i+2]->sym->cipher.data.length == 0) *\n+\t\t\t\tops[i+2]->sym->auth.data.length;\n+\t\tp_enq_op = &enq_ops[!(job_len & qp_ctx->threshold)];\n+\t\tsched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+2];\n+\t\tops[i+2]->sym->session = sess->sessions[p_enq_op->slave_idx];\n+\t\tp_enq_op->pos++;\n+\n+\t\tif (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] >=\n+\t\t\t\tqp_ctx->max_nb_objs) {\n+\t\t\ti = nb_ops;\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\tsess = (struct scheduler_session *)\n+\t\t\t\tops[i+3]->sym->session->_private;\n+\n+\t\tjob_len = ops[i+3]->sym->cipher.data.length;\n+\t\tjob_len += (ops[i+3]->sym->cipher.data.length == 0) *\n+\t\t\t\tops[i+3]->sym->auth.data.length;\n+\t\tp_enq_op = &enq_ops[!(job_len & qp_ctx->threshold)];\n+\t\tsched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+3];\n+\t\tops[i+3]->sym->session = sess->sessions[p_enq_op->slave_idx];\n+\t\tp_enq_op->pos++;\n+\n+\t\tif (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] >=\n+\t\t\t\tqp_ctx->max_nb_objs) {\n+\t\t\ti = nb_ops;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\tfor (; i < nb_ops; i++) {\n+\t\tsess = (struct scheduler_session *)\n+\t\t\t\tops[i]->sym->session->_private;\n+\n+\t\tjob_len = ops[i]->sym->cipher.data.length;\n+\t\tjob_len += (ops[i]->sym->cipher.data.length == 0) *\n+\t\t\t\tops[i]->sym->auth.data.length;\n+\t\tp_enq_op = &enq_ops[!(job_len & qp_ctx->threshold)];\n+\t\tsched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i];\n+\t\tops[i]->sym->session = sess->sessions[p_enq_op->slave_idx];\n+\t\tp_enq_op->pos++;\n+\n+\t\tif (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] >=\n+\t\t\t\tqp_ctx->max_nb_objs) {\n+\t\t\ti 
= nb_ops;\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\tprocessed_ops_pri = rte_cryptodev_enqueue_burst(\n+\t\t\tqp_ctx->primary_slave.dev_id,\n+\t\t\tqp_ctx->primary_slave.qp_id,\n+\t\t\tsched_ops[PRIMARY_SLAVE_IDX],\n+\t\t\tenq_ops[PRIMARY_SLAVE_IDX].pos);\n+\tqp_ctx->primary_slave.nb_inflight_cops += processed_ops_pri;\n+\n+\tprocessed_ops_sec = rte_cryptodev_enqueue_burst(\n+\t\t\tqp_ctx->secondary_slave.dev_id,\n+\t\t\tqp_ctx->secondary_slave.qp_id,\n+\t\t\tsched_ops[SECONDARY_SLAVE_IDX],\n+\t\t\tenq_ops[SECONDARY_SLAVE_IDX].pos);\n+\tqp_ctx->secondary_slave.nb_inflight_cops += processed_ops_sec;\n+\n+\treturn processed_ops_pri + processed_ops_sec;\n+}\n+\n+static uint16_t\n+schedule_enqueue_ordering(void *qp, struct rte_crypto_op **ops,\n+\t\tuint16_t nb_ops)\n+{\n+\tstruct rte_ring *order_ring =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->order_ring;\n+\tuint16_t nb_ops_to_enq = get_max_enqueue_order_count(order_ring,\n+\t\t\tnb_ops);\n+\tuint16_t nb_ops_enqd = schedule_enqueue(qp, ops,\n+\t\t\tnb_ops_to_enq);\n+\n+\tscheduler_order_insert(order_ring, ops, nb_ops_enqd);\n+\n+\treturn nb_ops_enqd;\n+}\n+\n+static uint16_t\n+schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)\n+{\n+\tstruct psd_scheduler_qp_ctx *qp_ctx =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->private_qp_ctx;\n+\tstruct scheduler_slave *slaves[NB_PKT_SIZE_SLAVES] = {\n+\t\t\t&qp_ctx->primary_slave, &qp_ctx->secondary_slave};\n+\tstruct scheduler_slave *slave = slaves[qp_ctx->deq_idx];\n+\tuint16_t nb_deq_ops_pri = 0, nb_deq_ops_sec = 0;\n+\n+\tif (slave->nb_inflight_cops) {\n+\t\tnb_deq_ops_pri = rte_cryptodev_dequeue_burst(slave->dev_id,\n+\t\t\tslave->qp_id, ops, nb_ops);\n+\t\tslave->nb_inflight_cops -= nb_deq_ops_pri;\n+\t}\n+\n+\tqp_ctx->deq_idx = (~qp_ctx->deq_idx) & SLAVE_IDX_SWITCH_MASK;\n+\n+\tif (nb_deq_ops_pri == nb_ops)\n+\t\treturn nb_deq_ops_pri;\n+\n+\tslave = slaves[qp_ctx->deq_idx];\n+\n+\tif (slave->nb_inflight_cops) {\n+\t\tnb_deq_ops_sec = 
rte_cryptodev_dequeue_burst(slave->dev_id,\n+\t\t\t\tslave->qp_id, &ops[nb_deq_ops_pri],\n+\t\t\t\tnb_ops - nb_deq_ops_pri);\n+\t\tslave->nb_inflight_cops -= nb_deq_ops_sec;\n+\n+\t\tif (!slave->nb_inflight_cops)\n+\t\t\tqp_ctx->deq_idx = (~qp_ctx->deq_idx) &\n+\t\t\t\t\tSLAVE_IDX_SWITCH_MASK;\n+\t}\n+\n+\treturn nb_deq_ops_pri + nb_deq_ops_sec;\n+}\n+\n+static uint16_t\n+schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,\n+\t\tuint16_t nb_ops)\n+{\n+\tstruct rte_ring *order_ring =\n+\t\t\t((struct scheduler_qp_ctx *)qp)->order_ring;\n+\n+\tschedule_dequeue(qp, ops, nb_ops);\n+\n+\treturn scheduler_order_drain(order_ring, ops, nb_ops);\n+}\n+\n+static int\n+slave_attach(__rte_unused struct rte_cryptodev *dev,\n+\t\t__rte_unused uint8_t slave_id)\n+{\n+\treturn 0;\n+}\n+\n+static int\n+slave_detach(__rte_unused struct rte_cryptodev *dev,\n+\t\t__rte_unused uint8_t slave_id)\n+{\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_start(struct rte_cryptodev *dev)\n+{\n+\tstruct scheduler_ctx *sched_ctx = dev->data->dev_private;\n+\tstruct psd_scheduler_ctx *psd_ctx = sched_ctx->private_ctx;\n+\tuint16_t i;\n+\n+\t/* for packet size based scheduler, nb_slaves have to >= 2 */\n+\tif (sched_ctx->nb_slaves < NB_PKT_SIZE_SLAVES) {\n+\t\tCS_LOG_ERR(\"not enough slaves to start\");\n+\t\treturn -1;\n+\t}\n+\n+\tfor (i = 0; i < dev->data->nb_queue_pairs; i++) {\n+\t\tstruct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];\n+\t\tstruct psd_scheduler_qp_ctx *ps_qp_ctx =\n+\t\t\t\tqp_ctx->private_qp_ctx;\n+\n+\t\tps_qp_ctx->primary_slave.dev_id =\n+\t\t\t\tsched_ctx->slaves[PRIMARY_SLAVE_IDX].dev_id;\n+\t\tps_qp_ctx->primary_slave.qp_id = i;\n+\t\tps_qp_ctx->primary_slave.nb_inflight_cops = 0;\n+\n+\t\tps_qp_ctx->secondary_slave.dev_id =\n+\t\t\t\tsched_ctx->slaves[SECONDARY_SLAVE_IDX].dev_id;\n+\t\tps_qp_ctx->secondary_slave.qp_id = i;\n+\t\tps_qp_ctx->secondary_slave.nb_inflight_cops = 0;\n+\n+\t\tps_qp_ctx->threshold = 
psd_ctx->threshold;\n+\n+\t\tps_qp_ctx->max_nb_objs = sched_ctx->qp_conf.nb_descriptors;\n+\t}\n+\n+\tif (sched_ctx->reordering_enabled) {\n+\t\tdev->enqueue_burst = &schedule_enqueue_ordering;\n+\t\tdev->dequeue_burst = &schedule_dequeue_ordering;\n+\t} else {\n+\t\tdev->enqueue_burst = &schedule_enqueue;\n+\t\tdev->dequeue_burst = &schedule_dequeue;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_stop(struct rte_cryptodev *dev)\n+{\n+\tuint16_t i;\n+\n+\tfor (i = 0; i < dev->data->nb_queue_pairs; i++) {\n+\t\tstruct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];\n+\t\tstruct psd_scheduler_qp_ctx *ps_qp_ctx = qp_ctx->private_qp_ctx;\n+\n+\t\tif (ps_qp_ctx->primary_slave.nb_inflight_cops +\n+\t\t\t\tps_qp_ctx->secondary_slave.nb_inflight_cops) {\n+\t\t\tCS_LOG_ERR(\"Some crypto ops left in slave queue\");\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_config_qp(struct rte_cryptodev *dev, uint16_t qp_id)\n+{\n+\tstruct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[qp_id];\n+\tstruct psd_scheduler_qp_ctx *ps_qp_ctx;\n+\n+\tps_qp_ctx = rte_zmalloc_socket(NULL, sizeof(*ps_qp_ctx), 0,\n+\t\t\trte_socket_id());\n+\tif (!ps_qp_ctx) {\n+\t\tCS_LOG_ERR(\"failed allocate memory for private queue pair\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tqp_ctx->private_qp_ctx = (void *)ps_qp_ctx;\n+\n+\treturn 0;\n+}\n+\n+static int\n+scheduler_create_private_ctx(struct rte_cryptodev *dev)\n+{\n+\tstruct scheduler_ctx *sched_ctx = dev->data->dev_private;\n+\tstruct psd_scheduler_ctx *psd_ctx;\n+\n+\tif (sched_ctx->private_ctx)\n+\t\trte_free(sched_ctx->private_ctx);\n+\n+\tpsd_ctx = rte_zmalloc_socket(NULL, sizeof(struct psd_scheduler_ctx), 0,\n+\t\t\trte_socket_id());\n+\tif (!psd_ctx) {\n+\t\tCS_LOG_ERR(\"failed allocate memory\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tpsd_ctx->threshold = DEF_PKT_SIZE_THRESHOLD;\n+\n+\tsched_ctx->private_ctx = (void *)psd_ctx;\n+\n+\treturn 0;\n+}\n+\n+struct rte_cryptodev_scheduler_ops scheduler_ps_ops 
= {\n+\tslave_attach,\n+\tslave_detach,\n+\tscheduler_start,\n+\tscheduler_stop,\n+\tscheduler_config_qp,\n+\tscheduler_create_private_ctx,\n+};\n+\n+struct rte_cryptodev_scheduler psd_scheduler = {\n+\t\t.name = \"packet-size-based-scheduler\",\n+\t\t.description = \"scheduler which will distribute crypto op \"\n+\t\t\t\t\"burst based on the packet size\",\n+\t\t.mode = CDEV_SCHED_MODE_PKT_SIZE_DISTR,\n+\t\t.ops = &scheduler_ps_ops\n+};\n+\n+struct rte_cryptodev_scheduler *pkt_size_based_distr_scheduler = &psd_scheduler;\n", "prefixes": [ "dpdk-dev", "v5", "1/3" ] }