Patch Detail

GET: Show a patch.
PATCH: Update a patch.
PUT: Update a patch.

GET /api/patches/59420/?format=api
http://patches.dpdk.org/api/patches/59420/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/patch/20190919092603.5485-6-pbhagavatula@marvell.com/", "project": { "id": 1, "url": "http://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<20190919092603.5485-6-pbhagavatula@marvell.com>", "list_archive_url": "https://inbox.dpdk.org/dev/20190919092603.5485-6-pbhagavatula@marvell.com", "date": "2019-09-19T09:25:59", "name": "[v2,05/10] examples/l2fwd-event: add eventdev queue and port setup", "commit_ref": null, "pull_url": null, "state": "superseded", "archived": true, "hash": "ffb4cad179ad7d46d006d7aa5e0401705e62a210", "submitter": { "id": 1183, "url": "http://patches.dpdk.org/api/people/1183/?format=api", "name": "Pavan Nikhilesh Bhagavatula", "email": "pbhagavatula@marvell.com" }, "delegate": null, "mbox": "http://patches.dpdk.org/project/dpdk/patch/20190919092603.5485-6-pbhagavatula@marvell.com/mbox/", "series": [ { "id": 6446, "url": "http://patches.dpdk.org/api/series/6446/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=6446", "date": "2019-09-19T09:25:54", "name": "example/l2fwd-event: introduce l2fwd-event example", "version": 2, "mbox": "http://patches.dpdk.org/series/6446/mbox/" } ], "comments": "http://patches.dpdk.org/api/patches/59420/comments/", "check": "success", "checks": "http://patches.dpdk.org/api/patches/59420/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@dpdk.org", "Delivered-To": "patchwork@dpdk.org", "Received": [ "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with 
ESMTP id 296791ED25;\n\tThu, 19 Sep 2019 11:26:31 +0200 (CEST)", "from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com\n\t[67.231.148.174]) by dpdk.org (Postfix) with ESMTP id BE69E1ED0E\n\tfor <dev@dpdk.org>; Thu, 19 Sep 2019 11:26:29 +0200 (CEST)", "from pps.filterd (m0045849.ppops.net [127.0.0.1])\n\tby mx0a-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id\n\tx8J9PUUW012561; Thu, 19 Sep 2019 02:26:29 -0700", "from sc-exch04.marvell.com ([199.233.58.184])\n\tby mx0a-0016f401.pphosted.com with ESMTP id 2v3vcdt76h-1\n\t(version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); \n\tThu, 19 Sep 2019 02:26:28 -0700", "from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com\n\t(10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3;\n\tThu, 19 Sep 2019 02:26:27 -0700", "from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com\n\t(10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend\n\tTransport; Thu, 19 Sep 2019 02:26:27 -0700", "from BG-LT7430.marvell.com (unknown [10.28.17.12])\n\tby maili.marvell.com (Postfix) with ESMTP id 824F43F703F;\n\tThu, 19 Sep 2019 02:26:24 -0700 (PDT)" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;\n\th=from : to : cc :\n\tsubject : date : message-id : in-reply-to : references : mime-version\n\t: content-transfer-encoding : content-type; s=pfpt0818;\n\tbh=HzLEdFTWEycRngqpt28p0rLkOQWWac0aJzQ4LsNEaU8=;\n\tb=XwO2FMDYIlMINPnMSQxMNnsyEy2fK2VTKWIAy/c7/AOu/bqmscE4IpEQDkj/2VzQtF92\n\tSlE+BUmsJBvkFmkrPx7XoaxORZgnLGf8prXDlqWG0Xxs1jwXpPJfbo3oK9AF2GFa6qga\n\tvzVWuuq+/QFeTH+eCIHB6oAjHzLZkLCQUTg+/A/13rRn40/Wlm36zBKidScVJOt5gIo4\n\tek7sJt2o+ASQ1Mxsj0nmfAO5IRJh12pHPuFj8G7xfxzpYt9GP9ZhOilq5IyA1cztC3+g\n\tvTmJELYa52FQ7WNHkiFMGmxX5r4NzKre0wk9kTiYTKhXPgynKXBDIGwvpkVrcUIYz1UO\n\teg== ", "From": "<pbhagavatula@marvell.com>", "To": "<jerinj@marvell.com>, <bruce.richardson@intel.com>, <akhil.goyal@nxp.com>,\n\tMarko Kovacevic <marko.kovacevic@intel.com>,\n\tOri Kam 
<orika@mellanox.com>, Radu Nicolau <radu.nicolau@intel.com>, \n\tTomasz Kantecki <tomasz.kantecki@intel.com>,\n\tSunil Kumar Kori <skori@marvell.com>, \"Pavan\n\tNikhilesh\" <pbhagavatula@marvell.com>", "CC": "<dev@dpdk.org>", "Date": "Thu, 19 Sep 2019 14:55:59 +0530", "Message-ID": "<20190919092603.5485-6-pbhagavatula@marvell.com>", "X-Mailer": "git-send-email 2.17.1", "In-Reply-To": "<20190919092603.5485-1-pbhagavatula@marvell.com>", "References": "<20190919092603.5485-1-pbhagavatula@marvell.com>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit", "Content-Type": "text/plain", "X-Proofpoint-Virus-Version": "vendor=fsecure engine=2.50.10434:6.0.70,1.0.8\n\tdefinitions=2019-09-19_03:2019-09-18,2019-09-19 signatures=0", "Subject": "[dpdk-dev] [PATCH v2 05/10] examples/l2fwd-event: add eventdev\n\tqueue and port setup", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "From: Pavan Nikhilesh <pbhagavatula@marvell.com>\n\nAdd event device queue and port setup based on event eth Tx adapter\ncapabilities.\n\nSigned-off-by: Sunil Kumar Kori <skori@marvell.com>\nSigned-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>\n---\n examples/l2fwd-event/l2fwd_eventdev.c | 10 +\n examples/l2fwd-event/l2fwd_eventdev.h | 18 ++\n examples/l2fwd-event/l2fwd_eventdev_generic.c | 179 +++++++++++++++++-\n .../l2fwd_eventdev_internal_port.c | 173 ++++++++++++++++-\n 4 files changed, 378 insertions(+), 2 deletions(-)", 
"diff": "diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c\nindex 0d0d3b8b9..7a3d077ae 100644\n--- a/examples/l2fwd-event/l2fwd_eventdev.c\n+++ b/examples/l2fwd-event/l2fwd_eventdev.c\n@@ -216,6 +216,7 @@ eventdev_resource_setup(void)\n {\n \tstruct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();\n \tuint16_t ethdev_count = rte_eth_dev_count_avail();\n+\tuint32_t event_queue_cfg = 0;\n \tuint32_t service_id;\n \tint32_t ret;\n \n@@ -233,6 +234,15 @@ eventdev_resource_setup(void)\n \t/* Ethernet device configuration */\n \teth_dev_port_setup(ethdev_count);\n \n+\t/* Event device configuration */\n+\tevent_queue_cfg = eventdev_rsrc->ops.eventdev_setup(ethdev_count);\n+\n+\t/* Event queue configuration */\n+\teventdev_rsrc->ops.event_queue_setup(ethdev_count, event_queue_cfg);\n+\n+\t/* Event port configuration */\n+\teventdev_rsrc->ops.event_port_setup();\n+\n \t/* Start event device service */\n \tret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,\n \t\t\t&service_id);\ndiff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h\nindex cc0bdd1ad..7646ef29f 100644\n--- a/examples/l2fwd-event/l2fwd_eventdev.h\n+++ b/examples/l2fwd-event/l2fwd_eventdev.h\n@@ -26,6 +26,17 @@ typedef void (*event_port_setup_cb)(void);\n typedef void (*service_setup_cb)(void);\n typedef void (*event_loop_cb)(void);\n \n+struct eventdev_queues {\n+\tuint8_t *event_q_id;\n+\tuint8_t\tnb_queues;\n+};\n+\n+struct eventdev_ports {\n+\tuint8_t *event_p_id;\n+\tuint8_t\tnb_ports;\n+\trte_spinlock_t lock;\n+};\n+\n struct eventdev_setup_ops {\n \tevent_queue_setup_cb event_queue_setup;\n \tevent_port_setup_cb event_port_setup;\n@@ -36,9 +47,14 @@ struct eventdev_setup_ops {\n };\n \n struct eventdev_resources {\n+\tstruct rte_event_port_conf def_p_conf;\n \tstruct l2fwd_port_statistics *stats;\n+\t/* Default port config. 
*/\n+\tuint8_t disable_implicit_release;\n \tstruct eventdev_setup_ops ops;\n \tstruct rte_mempool *pkt_pool;\n+\tstruct eventdev_queues evq;\n+\tstruct eventdev_ports evp;\n \tuint64_t timer_period;\n \tuint32_t *dst_ports;\n \tuint32_t service_id;\n@@ -47,6 +63,8 @@ struct eventdev_resources {\n \tuint8_t event_d_id;\n \tuint8_t sync_mode;\n \tuint8_t tx_mode_q;\n+\tuint8_t deq_depth;\n+\tuint8_t has_burst;\n \tuint8_t mac_updt;\n \tuint8_t enabled;\n \tuint8_t nb_args;\ndiff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c\nindex e3990f8b0..65166fded 100644\n--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c\n+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c\n@@ -17,8 +17,185 @@\n #include \"l2fwd_common.h\"\n #include \"l2fwd_eventdev.h\"\n \n+static uint32_t\n+eventdev_setup_generic(uint16_t ethdev_count)\n+{\n+\tstruct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();\n+\tstruct rte_event_dev_config event_d_conf = {\n+\t\t.nb_events_limit = 4096,\n+\t\t.nb_event_queue_flows = 1024,\n+\t\t.nb_event_port_dequeue_depth = 128,\n+\t\t.nb_event_port_enqueue_depth = 128\n+\t};\n+\tstruct rte_event_dev_info dev_info;\n+\tconst uint8_t event_d_id = 0; /* Always use first event device only */\n+\tuint32_t event_queue_cfg = 0;\n+\tuint16_t num_workers = 0;\n+\tint ret;\n+\n+\t/* Event device configurtion */\n+\trte_event_dev_info_get(event_d_id, &dev_info);\n+\teventdev_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &\n+\t\t\t\t RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);\n+\n+\tif (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)\n+\t\tevent_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;\n+\n+\t/* One queue for each ethdev port + one Tx adapter Single link queue. 
*/\n+\tevent_d_conf.nb_event_queues = ethdev_count + 1;\n+\tif (dev_info.max_event_queues < event_d_conf.nb_event_queues)\n+\t\tevent_d_conf.nb_event_queues = dev_info.max_event_queues;\n+\n+\tif (dev_info.max_num_events < event_d_conf.nb_events_limit)\n+\t\tevent_d_conf.nb_events_limit = dev_info.max_num_events;\n+\n+\tif (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)\n+\t\tevent_d_conf.nb_event_queue_flows =\n+\t\t\t\t\t\tdev_info.max_event_queue_flows;\n+\n+\tif (dev_info.max_event_port_dequeue_depth <\n+\t\t\t\tevent_d_conf.nb_event_port_dequeue_depth)\n+\t\tevent_d_conf.nb_event_port_dequeue_depth =\n+\t\t\t\tdev_info.max_event_port_dequeue_depth;\n+\n+\tif (dev_info.max_event_port_enqueue_depth <\n+\t\t\t\tevent_d_conf.nb_event_port_enqueue_depth)\n+\t\tevent_d_conf.nb_event_port_enqueue_depth =\n+\t\t\t\tdev_info.max_event_port_enqueue_depth;\n+\n+\tnum_workers = rte_lcore_count() - rte_service_lcore_count();\n+\tif (dev_info.max_event_ports < num_workers)\n+\t\tnum_workers = dev_info.max_event_ports;\n+\n+\tevent_d_conf.nb_event_ports = num_workers;\n+\teventdev_rsrc->evp.nb_ports = num_workers;\n+\n+\teventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &\n+\t\t\t\t RTE_EVENT_DEV_CAP_BURST_MODE);\n+\n+\tret = rte_event_dev_configure(event_d_id, &event_d_conf);\n+\tif (ret < 0)\n+\t\trte_exit(EXIT_FAILURE, \"Error in configuring event device\");\n+\n+\teventdev_rsrc->event_d_id = event_d_id;\n+\treturn event_queue_cfg;\n+}\n+\n+static void\n+event_port_setup_generic(void)\n+{\n+\tstruct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();\n+\tuint8_t event_d_id = eventdev_rsrc->event_d_id;\n+\tstruct rte_event_port_conf event_p_conf = {\n+\t\t.dequeue_depth = 32,\n+\t\t.enqueue_depth = 32,\n+\t\t.new_event_threshold = 4096\n+\t};\n+\tstruct rte_event_port_conf def_p_conf;\n+\tuint8_t event_p_id;\n+\tint32_t ret;\n+\n+\t/* Service cores are not used to run worker thread */\n+\teventdev_rsrc->evp.nb_ports = 
eventdev_rsrc->evp.nb_ports;\n+\teventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *\n+\t\t\t\t\teventdev_rsrc->evp.nb_ports);\n+\tif (!eventdev_rsrc->evp.event_p_id)\n+\t\trte_exit(EXIT_FAILURE, \" No space is available\");\n+\n+\tmemset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));\n+\trte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);\n+\n+\tif (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)\n+\t\tevent_p_conf.new_event_threshold =\n+\t\t\tdef_p_conf.new_event_threshold;\n+\n+\tif (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)\n+\t\tevent_p_conf.dequeue_depth = def_p_conf.dequeue_depth;\n+\n+\tif (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)\n+\t\tevent_p_conf.enqueue_depth = def_p_conf.enqueue_depth;\n+\n+\tevent_p_conf.disable_implicit_release =\n+\t\teventdev_rsrc->disable_implicit_release;\n+\teventdev_rsrc->deq_depth = def_p_conf.dequeue_depth;\n+\n+\tfor (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;\n+\t\t\t\t\t\t\t\tevent_p_id++) {\n+\t\tret = rte_event_port_setup(event_d_id, event_p_id,\n+\t\t\t\t\t &event_p_conf);\n+\t\tif (ret < 0) {\n+\t\t\trte_exit(EXIT_FAILURE,\n+\t\t\t\t \"Error in configuring event port %d\\n\",\n+\t\t\t\t event_p_id);\n+\t\t}\n+\n+\t\tret = rte_event_port_link(event_d_id, event_p_id,\n+\t\t\t\t\t eventdev_rsrc->evq.event_q_id,\n+\t\t\t\t\t NULL,\n+\t\t\t\t\t eventdev_rsrc->evq.nb_queues - 1);\n+\t\tif (ret != (eventdev_rsrc->evq.nb_queues - 1)) {\n+\t\t\trte_exit(EXIT_FAILURE, \"Error in linking event port %d \"\n+\t\t\t\t \"to event queue\", event_p_id);\n+\t\t}\n+\t\teventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;\n+\t}\n+\t/* init spinlock */\n+\trte_spinlock_init(&eventdev_rsrc->evp.lock);\n+\n+\teventdev_rsrc->def_p_conf = event_p_conf;\n+}\n+\n+static void\n+event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg)\n+{\n+\tstruct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();\n+\tuint8_t event_d_id = 
eventdev_rsrc->event_d_id;\n+\tstruct rte_event_queue_conf event_q_conf = {\n+\t\t.nb_atomic_flows = 1024,\n+\t\t.nb_atomic_order_sequences = 1024,\n+\t\t.event_queue_cfg = event_queue_cfg,\n+\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL\n+\t};\n+\tstruct rte_event_queue_conf def_q_conf;\n+\tuint8_t event_q_id;\n+\tint32_t ret;\n+\n+\tevent_q_conf.schedule_type = eventdev_rsrc->sync_mode;\n+\teventdev_rsrc->evq.nb_queues = ethdev_count + 1;\n+\teventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *\n+\t\t\t\t\teventdev_rsrc->evq.nb_queues);\n+\tif (!eventdev_rsrc->evq.event_q_id)\n+\t\trte_exit(EXIT_FAILURE, \"Memory allocation failure\");\n+\n+\trte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);\n+\tif (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)\n+\t\tevent_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;\n+\n+\tfor (event_q_id = 0; event_q_id < (eventdev_rsrc->evq.nb_queues - 1);\n+\t\t\t\t\t\t\t\tevent_q_id++) {\n+\t\tret = rte_event_queue_setup(event_d_id, event_q_id,\n+\t\t\t\t\t &event_q_conf);\n+\t\tif (ret < 0) {\n+\t\t\trte_exit(EXIT_FAILURE,\n+\t\t\t\t \"Error in configuring event queue\");\n+\t\t}\n+\t\teventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;\n+\t}\n+\n+\tevent_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;\n+\tevent_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,\n+\tret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);\n+\tif (ret < 0) {\n+\t\trte_exit(EXIT_FAILURE,\n+\t\t\t \"Error in configuring event queue for Tx adapter\");\n+\t}\n+\teventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;\n+}\n+\n void\n eventdev_set_generic_ops(struct eventdev_setup_ops *ops)\n {\n-\tRTE_SET_USED(ops);\n+\tops->eventdev_setup = eventdev_setup_generic;\n+\tops->event_queue_setup = event_queue_setup_generic;\n+\tops->event_port_setup = event_port_setup_generic;\n }\ndiff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c 
b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c\nindex a0d2111f9..52cb07707 100644\n--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c\n+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c\n@@ -17,8 +17,179 @@\n #include \"l2fwd_common.h\"\n #include \"l2fwd_eventdev.h\"\n \n+static uint32_t\n+eventdev_setup_internal_port(uint16_t ethdev_count)\n+{\n+\tstruct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();\n+\tstruct rte_event_dev_config event_d_conf = {\n+\t\t.nb_events_limit = 4096,\n+\t\t.nb_event_queue_flows = 1024,\n+\t\t.nb_event_port_dequeue_depth = 128,\n+\t\t.nb_event_port_enqueue_depth = 128\n+\t};\n+\tstruct rte_event_dev_info dev_info;\n+\tuint8_t disable_implicit_release;\n+\tconst uint8_t event_d_id = 0; /* Always use first event device only */\n+\tuint32_t event_queue_cfg = 0;\n+\tuint16_t num_workers = 0;\n+\tint ret;\n+\n+\t/* Event device configurtion */\n+\trte_event_dev_info_get(event_d_id, &dev_info);\n+\n+\tdisable_implicit_release = !!(dev_info.event_dev_cap &\n+\t\t\t\t RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);\n+\teventdev_rsrc->disable_implicit_release =\n+\t\t\t\t\t\tdisable_implicit_release;\n+\n+\tif (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)\n+\t\tevent_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;\n+\n+\tevent_d_conf.nb_event_queues = ethdev_count;\n+\tif (dev_info.max_event_queues < event_d_conf.nb_event_queues)\n+\t\tevent_d_conf.nb_event_queues = dev_info.max_event_queues;\n+\n+\tif (dev_info.max_num_events < event_d_conf.nb_events_limit)\n+\t\tevent_d_conf.nb_events_limit = dev_info.max_num_events;\n+\n+\tif (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)\n+\t\tevent_d_conf.nb_event_queue_flows =\n+\t\t\t\t\t\tdev_info.max_event_queue_flows;\n+\n+\tif (dev_info.max_event_port_dequeue_depth <\n+\t\t\t\tevent_d_conf.nb_event_port_dequeue_depth)\n+\t\tevent_d_conf.nb_event_port_dequeue_depth =\n+\t\t\t\tdev_info.max_event_port_dequeue_depth;\n+\n+\tif 
(dev_info.max_event_port_enqueue_depth <\n+\t\t\t\tevent_d_conf.nb_event_port_enqueue_depth)\n+\t\tevent_d_conf.nb_event_port_enqueue_depth =\n+\t\t\t\tdev_info.max_event_port_enqueue_depth;\n+\n+\tnum_workers = rte_lcore_count();\n+\tif (dev_info.max_event_ports < num_workers)\n+\t\tnum_workers = dev_info.max_event_ports;\n+\n+\tevent_d_conf.nb_event_ports = num_workers;\n+\teventdev_rsrc->evp.nb_ports = num_workers;\n+\teventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &\n+\t\t\t\t RTE_EVENT_DEV_CAP_BURST_MODE);\n+\n+\tret = rte_event_dev_configure(event_d_id, &event_d_conf);\n+\tif (ret < 0)\n+\t\trte_exit(EXIT_FAILURE, \"Error in configuring event device\");\n+\n+\teventdev_rsrc->event_d_id = event_d_id;\n+\treturn event_queue_cfg;\n+}\n+\n+static void\n+event_port_setup_internal_port(void)\n+{\n+\tstruct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();\n+\tuint8_t event_d_id = eventdev_rsrc->event_d_id;\n+\tstruct rte_event_port_conf event_p_conf = {\n+\t\t.dequeue_depth = 32,\n+\t\t.enqueue_depth = 32,\n+\t\t.new_event_threshold = 4096\n+\t};\n+\tstruct rte_event_port_conf def_p_conf;\n+\tuint8_t event_p_id;\n+\tint32_t ret;\n+\n+\teventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *\n+\t\t\t\t\teventdev_rsrc->evp.nb_ports);\n+\tif (!eventdev_rsrc->evp.event_p_id)\n+\t\trte_exit(EXIT_FAILURE,\n+\t\t\t \"Failed to allocate memory for Event Ports\");\n+\n+\trte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);\n+\tif (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)\n+\t\tevent_p_conf.new_event_threshold =\n+\t\t\t\t\t\tdef_p_conf.new_event_threshold;\n+\n+\tif (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)\n+\t\tevent_p_conf.dequeue_depth = def_p_conf.dequeue_depth;\n+\n+\tif (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)\n+\t\tevent_p_conf.enqueue_depth = def_p_conf.enqueue_depth;\n+\n+\tevent_p_conf.disable_implicit_release =\n+\t\teventdev_rsrc->disable_implicit_release;\n+\n+\tfor 
(event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;\n+\t\t\t\t\t\t\t\tevent_p_id++) {\n+\t\tret = rte_event_port_setup(event_d_id, event_p_id,\n+\t\t\t\t\t &event_p_conf);\n+\t\tif (ret < 0) {\n+\t\t\trte_exit(EXIT_FAILURE,\n+\t\t\t\t \"Error in configuring event port %d\\n\",\n+\t\t\t\t event_p_id);\n+\t\t}\n+\n+\t\tret = rte_event_port_link(event_d_id, event_p_id, NULL,\n+\t\t\t\t\t NULL, 0);\n+\t\tif (ret < 0) {\n+\t\t\trte_exit(EXIT_FAILURE, \"Error in linking event port %d \"\n+\t\t\t\t \"to event queue\", event_p_id);\n+\t\t}\n+\t\teventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;\n+\n+\t\t/* init spinlock */\n+\t\trte_spinlock_init(&eventdev_rsrc->evp.lock);\n+\t}\n+\n+\teventdev_rsrc->def_p_conf = event_p_conf;\n+}\n+\n+static void\n+event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg)\n+{\n+\tstruct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();\n+\tuint8_t event_d_id = eventdev_rsrc->event_d_id;\n+\tstruct rte_event_queue_conf event_q_conf = {\n+\t\t.nb_atomic_flows = 1024,\n+\t\t.nb_atomic_order_sequences = 1024,\n+\t\t.event_queue_cfg = event_queue_cfg,\n+\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL\n+\t};\n+\tstruct rte_event_queue_conf def_q_conf;\n+\tuint8_t event_q_id = 0;\n+\tint32_t ret;\n+\n+\trte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);\n+\n+\tif (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)\n+\t\tevent_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;\n+\n+\tif (def_q_conf.nb_atomic_order_sequences <\n+\t\t\t\t\tevent_q_conf.nb_atomic_order_sequences)\n+\t\tevent_q_conf.nb_atomic_order_sequences =\n+\t\t\t\t\tdef_q_conf.nb_atomic_order_sequences;\n+\n+\tevent_q_conf.event_queue_cfg = event_queue_cfg;\n+\tevent_q_conf.schedule_type = eventdev_rsrc->sync_mode;\n+\teventdev_rsrc->evq.nb_queues = ethdev_count;\n+\teventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *\n+\t\t\t\t\teventdev_rsrc->evq.nb_queues);\n+\tif 
(!eventdev_rsrc->evq.event_q_id)\n+\t\trte_exit(EXIT_FAILURE, \"Memory allocation failure\");\n+\n+\tfor (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {\n+\t\tret = rte_event_queue_setup(event_d_id, event_q_id,\n+\t\t\t\t\t    &event_q_conf);\n+\t\tif (ret < 0) {\n+\t\t\trte_exit(EXIT_FAILURE,\n+\t\t\t\t \"Error in configuring event queue\");\n+\t\t}\n+\t\teventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;\n+\t}\n+}\n+\n void\n eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)\n {\n-\tRTE_SET_USED(ops);\n+\tops->eventdev_setup = eventdev_setup_internal_port;\n+\tops->event_queue_setup = event_queue_setup_internal_port;\n+\tops->event_port_setup = event_port_setup_internal_port;\n }\n", "prefixes": [ "v2", "05/10" ] }