From patchwork Wed Apr 27 11:32:21 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 110343
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: Ray Kinsella
CC: Pavan Nikhilesh
Subject: [PATCH 1/3] eventdev: add function to quiesce an event port
Date: Wed, 27 Apr 2022 17:02:21 +0530
Message-ID: <20220427113223.13948-1-pbhagavatula@marvell.com>

Add a function to quiesce any core-specific resources consumed by an
event port. When the application decides to migrate the event port to
another lcore, or to tear down the current lcore, it can call
`rte_event_port_quiesce` to make sure that all the data associated with
the event port is released from the lcore; this may also include any
prefetched events. While releasing the event port from the lcore, this
function invokes the user-provided flush callback once per event.
Signed-off-by: Pavan Nikhilesh
Acked-by: Ray Kinsella
---
 lib/eventdev/eventdev_pmd.h | 19 +++++++++++++++++++
 lib/eventdev/rte_eventdev.c | 19 +++++++++++++++++++
 lib/eventdev/rte_eventdev.h | 33 +++++++++++++++++++++++++++++++++
 lib/eventdev/version.map    |  3 +++
 4 files changed, 74 insertions(+)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..cf9f2146a1 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -381,6 +381,23 @@ typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
  */
 typedef void (*eventdev_port_release_t)(void *port);
 
+/**
+ * Quiesce any core specific resources consumed by the event port.
+ *
+ * @param dev
+ *   Event device pointer.
+ * @param port
+ *   Event port pointer.
+ * @param flush_cb
+ *   User-provided event flush function.
+ * @param args
+ *   Arguments to be passed to the user-provided event flush function.
+ */
+typedef void (*eventdev_port_quiesce_t)(struct rte_eventdev *dev, void *port,
+					eventdev_port_flush_t flush_cb,
+					void *args);
+
 /**
  * Link multiple source event queues to destination event port.
  *
@@ -1218,6 +1235,8 @@ struct eventdev_ops {
 	/**< Set up an event port. */
 	eventdev_port_release_t port_release;
 	/**< Release an event port. */
+	eventdev_port_quiesce_t port_quiesce;
+	/**< Quiesce an event port. */
 	eventdev_port_link_t port_link;
 	/**< Link event queues to an event port.
	 */
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..541fa5dc61 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -730,6 +730,25 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 	return 0;
 }
 
+void
+rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
+		       eventdev_port_flush_t release_cb, void *args)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return;
+	}
+
+	if (dev->dev_ops->port_quiesce)
+		(*dev->dev_ops->port_quiesce)(dev, dev->data->ports[port_id],
+					      release_cb, args);
+}
+
 int
 rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 		       uint32_t *attr_value)
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..c86d8a5576 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -830,6 +830,39 @@ int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		     const struct rte_event_port_conf *port_conf);
 
+typedef void (*eventdev_port_flush_t)(uint8_t dev_id, struct rte_event event,
+				      void *arg);
+/**< Callback function prototype that can be passed during
+ * rte_event_port_release(), invoked once per released event.
+ */
+
+/**
+ * Quiesce any core specific resources consumed by the event port.
+ *
+ * Event ports are generally coupled with lcores, and a given hardware
+ * implementation might require the PMD to store port-specific data in the
+ * lcore.
+ * When the application decides to migrate the event port to another lcore
+ * or tear down the current lcore, it can call `rte_event_port_quiesce`
+ * to make sure that all the data associated with the event port is released
+ * from the lcore; this may also include any prefetched events.
+ * While releasing the event port from the lcore, this function calls the
+ * user-provided flush callback once per event.
+ *
+ * The event port specific config is not reset.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to quiesce. The value must be in the range
+ *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ * @param release_cb
+ *   Callback function invoked once per flushed event.
+ * @param args
+ *   Argument supplied to the callback function.
+ */
+__rte_experimental
+void rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
+			    eventdev_port_flush_t release_cb, void *args);
+
 /**
  * The queue depth of the port on the enqueue side
  */
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..1907093539 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_port_quiesce;
 };
 
 INTERNAL {

From patchwork Wed Apr 27 11:32:22 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 110344
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: Harry van Haaren, Radu Nicolau, Akhil Goyal, Sunil Kumar Kori, Pavan Nikhilesh
Subject: [PATCH 2/3] eventdev: update examples to use port quiesce
Date: Wed, 27 Apr 2022 17:02:22 +0530
Message-ID: <20220427113223.13948-2-pbhagavatula@marvell.com>
In-Reply-To: <20220427113223.13948-1-pbhagavatula@marvell.com>
References: <20220427113223.13948-1-pbhagavatula@marvell.com>
Quiesce event ports used by the worker cores on exit to free up any
outstanding resources.

Signed-off-by: Pavan Nikhilesh
Change-Id: Iea1f933d4f4926630d82a9883fbe3f1e75876097
---
Depends-on: Series-22677

 app/test-eventdev/test_perf_common.c         |  8 ++++++++
 app/test-eventdev/test_pipeline_common.c     | 12 ++++++++++++
 examples/eventdev_pipeline/pipeline_common.h |  9 +++++++++
 examples/ipsec-secgw/ipsec_worker.c          | 13 +++++++++++++
 examples/l2fwd-event/l2fwd_common.c          | 13 +++++++++++++
 examples/l3fwd/l3fwd_event.c                 | 13 +++++++++++++
 6 files changed, 68 insertions(+)

--
2.35.1

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index f673a9fddd..2016583979 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -985,6 +985,13 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump("prod_enq_burst_sz", "%d", opt->prod_enq_burst_sz);
 }
 
+static void
+perf_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		      void *args)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 void
 perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 		    uint8_t port_id, struct rte_event events[], uint16_t nb_enq,
@@ -1000,6 +1007,7 @@ perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+	rte_event_port_quiesce(dev_id, port_id, perf_event_port_flush, pool);
 }
 
 void
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index a8dd070000..82e5745071 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -518,6 +518,16 @@ pipeline_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
+static void
+pipeline_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+			  void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		pipeline_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 			uint16_t enq, uint16_t deq)
@@ -542,6 +552,8 @@ pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 
 		rte_event_enqueue_burst(dev, port, ev, deq);
 	}
+
+	rte_event_port_quiesce(dev, port, pipeline_event_port_flush, NULL);
 }
 
 void
diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 9899b257b0..28b6ab85ff 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -140,6 +140,13 @@ schedule_devices(unsigned int lcore_id)
 	}
 }
 
+static void
+event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		 void *args __rte_unused)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 static inline void
 worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
	       uint16_t nb_enq, uint16_t nb_deq)
@@ -160,6 +167,8 @@ worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(dev_id, port_id, event_port_flush, NULL);
 }
 
 void set_worker_generic_setup_data(struct setup_data *caps, bool burst);
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 3df5acf384..7f259e4cf3 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -737,6 +737,13 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
 * selected.
 */
+static void
+ipsec_event_port_flush(uint8_t eventdev_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	rte_pktmbuf_free(ev.mbuf);
+}
+
 /* Workers registered */
 #define IPSEC_EVENTMODE_WORKERS 2
 
@@ -861,6 +868,9 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 /*
@@ -974,6 +984,9 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 static uint8_t
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 15bfe790a0..41a0d3f22f 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -128,6 +128,16 @@ l2fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+l2fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l2fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
			   struct rte_event events[], uint16_t nb_enq,
@@ -147,4 +157,7 @@ l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, port_id, l2fwd_event_port_flush,
+			       NULL);
 }
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index a14a21b414..0b58475c85 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -301,6 +301,16 @@ l3fwd_event_vector_array_free(struct rte_event events[], uint16_t
num)
 	}
 }
 
+static void
+l3fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l3fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
			   struct rte_event events[], uint16_t nb_enq,
@@ -320,4 +330,7 @@ l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
 		events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, event_p_id, l3fwd_event_port_flush,
+			       NULL);
 }

From patchwork Wed Apr 27 11:32:23 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 110345
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: Pavan Nikhilesh, Shijith Thotton
Subject: [PATCH 3/3] event/cnxk: implement event port quiesce function
Date: Wed, 27 Apr 2022 17:02:23 +0530
Message-ID: <20220427113223.13948-3-pbhagavatula@marvell.com>
In-Reply-To: <20220427113223.13948-1-pbhagavatula@marvell.com>
References: <20220427113223.13948-1-pbhagavatula@marvell.com>

Implement event port quiesce function to clean up
any lcore resources used.

Signed-off-by: Pavan Nikhilesh
Change-Id: I7dda3d54dc698645d25ebbfbabd81760940fe649
---
 drivers/event/cnxk/cn10k_eventdev.c | 78 ++++++++++++++++++++++++++---
 drivers/event/cnxk/cn9k_eventdev.c  | 60 +++++++++++++++++++++-
 2 files changed, 130 insertions(+), 8 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 94829e789c..d84c5d2d1e 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -167,15 +167,23 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 		uint64_t u64[2];
 	} gw;
 	uint8_t pend_tt;
+	bool is_pend;
 
 	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
 	/* Wait till getwork/swtp/waitw/desched completes. */
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless its a SWTAG. */
+	pend_state = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+	    ws->swtag_req)
+		is_pend = true;
+
 	do {
 		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 	} while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
			       BIT_ULL(56) | BIT_ULL(54)));
 	pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
-	if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+	if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 		if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
 			cnxk_sso_hws_swtag_untag(base +
						 SSOW_LF_GWS_OP_SWTAG_UNTAG);
@@ -189,15 +197,10 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 
 	switch (dev->gw_mode) {
 	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
 		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
 			;
 		break;
-	case CN10K_GW_MODE_PREF_WFE:
-		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
-		       SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
-			continue;
-		plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
-		break;
	case CN10K_GW_MODE_NONE:
	default:
		break;
@@ -533,6 +536,66 @@ cn10k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }
 
+static void
+cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		       eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn10k_sso_hws *ws = port;
+	struct rte_event ev;
+	uint64_t ptag;
+	bool is_pend;
+
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless its a SWTAG. */
+	ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (ptag & (BIT_ULL(62) | BIT_ULL(54)) || ws->swtag_req)
+		is_pend = true;
+	do {
+		ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	} while (ptag &
+		 (BIT_ULL(62) | BIT_ULL(58) | BIT_ULL(56) | BIT_ULL(54)));
+
+	cn10k_sso_hws_get_work_empty(ws, &ev,
+				     (NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+					     NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+	if (is_pend && ev.u64) {
+		if (flush_cb)
+			flush_cb(event_dev->data->dev_id, ev, args);
+		cnxk_sso_hws_swtag_flush(ws->base);
+	}
+
+	/* Check if we have work in PRF_WQE0, if so extract it. */
+	switch (dev->gw_mode) {
+	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
+		while (plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0) &
+		       BIT_ULL(63))
+			;
+		break;
+	case CN10K_GW_MODE_NONE:
+	default:
+		break;
+	}
+
+	if (CNXK_TT_FROM_TAG(plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0)) !=
+	    SSO_TT_EMPTY) {
+		plt_write64(BIT_ULL(16) | 1,
+			    ws->base + SSOW_LF_GWS_OP_GET_WORK0);
+		cn10k_sso_hws_get_work_empty(
+			ws, &ev,
+			(NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+				NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+		if (ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			cnxk_sso_hws_swtag_flush(ws->base);
+		}
+	}
+	ws->swtag_req = 0;
+	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
 static int
 cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
		    const uint8_t queues[], const uint8_t priorities[],
@@ -852,6 +915,7 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
+	.port_quiesce =
cn10k_sso_port_quiesce,
 	.port_link = cn10k_sso_port_link,
 	.port_unlink = cn10k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 987888d3db..46885c5f92 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -186,6 +186,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	uint64_t pend_state;
 	uint8_t pend_tt;
 	uintptr_t base;
+	bool is_pend;
 	uint64_t tag;
 	uint8_t i;
 
@@ -193,6 +194,13 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	ws = hws;
 	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
 		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless its a SWTAG. */
+		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ? (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
 		/* Wait till getwork/swtp/waitw/desched completes.
 */
		do {
			pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
@@ -201,7 +209,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 
 		tag = plt_read64(base + SSOW_LF_GWS_TAG);
 		pend_tt = (tag >> 32) & 0x3;
-		if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+		if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 			if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
 				cnxk_sso_hws_swtag_untag(
@@ -213,7 +221,14 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 		do {
 			pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 		} while (pend_state & BIT_ULL(58));
+
+		plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
 	}
+
+	if (dev->dual_ws)
+		dws->swtag_req = 0;
+	else
+		ws->swtag_req = 0;
 }
 
 void
@@ -789,6 +804,48 @@ cn9k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }
 
+static void
+cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		      eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn9k_sso_hws_dual *dws;
+	struct cn9k_sso_hws *ws;
+	struct rte_event ev;
+	uintptr_t base;
+	uint64_t ptag;
+	bool is_pend;
+	uint8_t i;
+
+	dws = port;
+	ws = port;
+	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless its a SWTAG. */
+		ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ? (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
+		/* Wait till getwork/swtp/waitw/desched completes. */
+		do {
+			ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		} while (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
				 BIT_ULL(56)));
+
+		cn9k_sso_hws_get_work_empty(
+			base, &ev, dev->rx_offloads,
+			dev->dual_ws ? dws->lookup_mem : ws->lookup_mem,
+			dev->dual_ws ?
dws->tstamp : ws->tstamp);
+		if (is_pend && ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			cnxk_sso_hws_swtag_flush(ws->base);
+		}
+	}
+}
+
 static int
 cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
		   const uint8_t queues[], const uint8_t priorities[],
@@ -1090,6 +1147,7 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
+	.port_quiesce = cn9k_sso_port_quiesce,
 	.port_link = cn9k_sso_port_link,
 	.port_unlink = cn9k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,