From patchwork Fri May 13 17:58:39 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 111138
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Ray Kinsella
Cc: dev@dpdk.org
Subject: [PATCH v3 1/3] eventdev: add function to quiesce an event port
Date: Fri, 13 May 2022 23:28:39 +0530
Message-ID: <20220513175841.11853-1-pbhagavatula@marvell.com>
In-Reply-To: <20220427113715.15509-1-pbhagavatula@marvell.com>
References: <20220427113715.15509-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add a function to quiesce any core-specific resources consumed by the
event port.
When the application decides to migrate the event port to another lcore,
or to tear down the current lcore, it may call `rte_event_port_quiesce()`
to make sure that all the data associated with the event port is released
from the lcore; this may also include any prefetched events.
While releasing the event port from the lcore, this function calls the
user-provided flush callback once per event.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
v3 Changes:
- Add `rte_` prefix to the callback function.
- Fix API documentation issues.
- Update eventdev documentation.

v2 Changes:
- Remove internal Change-Id tag from commit messages.

 doc/guides/prog_guide/eventdev.rst | 35 ++++++++++++++++++++++++++++
 lib/eventdev/eventdev_pmd.h        | 19 +++++++++++++++
 lib/eventdev/rte_eventdev.c        | 19 +++++++++++++++
 lib/eventdev/rte_eventdev.h        | 37 ++++++++++++++++++++++++++++++
 lib/eventdev/version.map           |  3 +++
 5 files changed, 113 insertions(+)

diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index a49e486a30..afee674ee1 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -412,6 +412,41 @@ An event driven worker thread has following typical workflow on fastpath:
        rte_event_enqueue_burst(...);
    }
 
+Quiescing Event Ports
+~~~~~~~~~~~~~~~~~~~~~
+
+To migrate an event port to another lcore, or while tearing down a worker
+core that uses an event port, ``rte_event_port_quiesce()`` can be invoked
+to make sure that all the data associated with the event port is released
+from the worker core; this may also include any prefetched events.
+
+A flush callback can be passed to the function to handle any outstanding
+events.
+
+.. code-block:: c
+
+        rte_event_port_quiesce(dev_id, port_id, release_cb, NULL);
+
+.. Note::
+
+        The event port specific config shall not be reset when this API is
+        invoked.
+
+Stopping the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to stop processing
+events. A flush callback can be registered to free any inflight events,
+using the ``rte_event_dev_stop_flush_callback_register()`` function.
+
+.. code-block:: c
+
+        int err = rte_event_dev_stop(dev_id);
+
+.. Note::
+
+        The event producers such as event_eth_rx_adapter, event_timer_adapter
+        and event_crypto_adapter need to be stopped before stopping the event
+        device.
+
 Summary
 -------

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index ce469d47a6..6173f22b9b 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -381,6 +381,23 @@ typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
  */
 typedef void (*eventdev_port_release_t)(void *port);
 
+/**
+ * Quiesce any core specific resources consumed by the event port.
+ *
+ * @param dev
+ *   Event device pointer.
+ * @param port
+ *   Event port pointer.
+ * @param flush_cb
+ *   User-provided event flush function.
+ * @param args
+ *   Arguments to be passed to the user-provided event flush function.
+ */
+typedef void (*eventdev_port_quiesce_t)(struct rte_eventdev *dev, void *port,
+					rte_eventdev_port_flush_t flush_cb,
+					void *args);
+
 /**
  * Link multiple source event queues to destination event port.
  *
@@ -1218,6 +1235,8 @@ struct eventdev_ops {
 	/**< Set up an event port. */
 	eventdev_port_release_t port_release;
 	/**< Release an event port. */
+	eventdev_port_quiesce_t port_quiesce;
+	/**< Quiesce an event port. */
 	eventdev_port_link_t port_link;
 	/**< Link event queues to an event port. */
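To illustrate the intended application-side flow, here is a minimal sketch;
the callback body, the `app_*` names and the `worker_pool` mempool are
illustrative assumptions, not part of this patch:

.. code-block:: c

        #include <rte_common.h>
        #include <rte_eventdev.h>
        #include <rte_mempool.h>

        /* Illustrative flush callback: return each flushed event's buffer
         * to the mempool supplied through the args pointer. */
        static void
        app_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
                       void *args)
        {
                rte_mempool_put(args, ev.event_ptr);
        }

        /* Before the worker lcore exits, or before the port is migrated to
         * another lcore, flush everything the port still holds. */
        static void
        app_worker_teardown(uint8_t dev_id, uint8_t port_id,
                            struct rte_mempool *worker_pool)
        {
                rte_event_port_quiesce(dev_id, port_id, app_port_flush,
                                       worker_pool);
        }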
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 532a253553..0250e57f24 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -730,6 +730,25 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 	return 0;
 }
 
+void
+rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
+		       rte_eventdev_port_flush_t release_cb, void *args)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return;
+	}
+
+	if (dev->dev_ops->port_quiesce)
+		(*dev->dev_ops->port_quiesce)(dev, dev->data->ports[port_id],
+					      release_cb, args);
+}
+
 int
 rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id,
 		       uint32_t *attr_value)

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 42a5660169..1a46d289a9 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -830,6 +830,43 @@ int
 rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
 		     const struct rte_event_port_conf *port_conf);
 
+typedef void (*rte_eventdev_port_flush_t)(uint8_t dev_id,
+					  struct rte_event event, void *arg);
+/**< Callback function prototype that can be passed to
+ * rte_event_port_quiesce(), invoked once per flushed event.
+ */
+
+/**
+ * Quiesce any core specific resources consumed by the event port.
+ *
+ * Event ports are generally coupled with lcores, and a given hardware
+ * implementation might require the PMD to store port-specific data in the
+ * lcore.
+ * When the application decides to migrate the event port to another lcore
+ * or to tear down the current lcore, it may call rte_event_port_quiesce()
+ * to make sure that all the data associated with the event port is released
+ * from the lcore; this may also include any prefetched events.
+ * While releasing the event port from the lcore, this function calls the
+ * user-provided flush callback once per event.
+ *
+ * @note The event port specific config shall not be reset when this API is
+ * called.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to quiesce. The value must be in the range
+ *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ * @param release_cb
+ *   Callback function invoked once per flushed event.
+ * @param args
+ *   Argument supplied to callback.
+ */
+__rte_experimental
+void
+rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
+		       rte_eventdev_port_flush_t release_cb, void *args);
+
 /**
  * The queue depth of the port on the enqueue side
  */
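One detail visible in the driver implementation later in this series
(patch 3/3): the flush callback is optional, and the PMD guards each
invocation with a NULL check. A caller with no per-event cleanup to do may
therefore pass NULL, with the understanding that any outstanding events are
then released by the driver without application involvement:

.. code-block:: c

        /* Flush the port; outstanding events are dropped by the PMD
         * without notifying the application. */
        rte_event_port_quiesce(dev_id, port_id, NULL, NULL);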
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index cd5dada07f..1907093539 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -108,6 +108,9 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_event_eth_rx_adapter_event_port_get;
+
+	# added in 22.07
+	rte_event_port_quiesce;
 };
 
 INTERNAL {

From patchwork Fri May 13 17:58:40 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 111136
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Harry van Haaren, Radu Nicolau, Akhil Goyal, Sunil Kumar Kori,
 Pavan Nikhilesh, Shijith Thotton
Cc: dev@dpdk.org
Subject: [PATCH v3 2/3] eventdev: update examples to use port quiesce
Date: Fri, 13 May 2022 23:28:40 +0530
Message-ID: <20220513175841.11853-2-pbhagavatula@marvell.com>
In-Reply-To: <20220513175841.11853-1-pbhagavatula@marvell.com>
References: <20220427113715.15509-1-pbhagavatula@marvell.com>
 <20220513175841.11853-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Quiesce the event ports used by the worker cores on exit, to free up any
outstanding resources.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 app/test-eventdev/test_perf_common.c         |  8 ++++++++
 app/test-eventdev/test_pipeline_common.c     | 12 ++++++++++++
 examples/eventdev_pipeline/pipeline_common.h |  9 +++++++++
 examples/ipsec-secgw/ipsec_worker.c          | 13 +++++++++++++
 examples/l2fwd-event/l2fwd_common.c          | 13 +++++++++++++
 examples/l3fwd/l3fwd_event.c                 | 13 +++++++++++++
 6 files changed, 68 insertions(+)

diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index b51a100425..8e3836280d 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -985,6 +985,13 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump("prod_enq_burst_sz", "%d", opt->prod_enq_burst_sz);
 }
 
+static void
+perf_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		      void *args)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 void
 perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 		    uint8_t port_id, struct rte_event events[], uint16_t nb_enq,
@@ -1000,6 +1007,7 @@ perf_worker_cleanup(struct rte_mempool *const pool, uint8_t dev_id,
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+	rte_event_port_quiesce(dev_id, port_id, perf_event_port_flush, pool);
 }
 
 void

diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index d8e80903b2..c66656cd39 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -518,6 +518,16 @@ pipeline_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+pipeline_event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+			  void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		pipeline_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 			uint16_t enq, uint16_t deq)
@@ -542,6 +552,8 @@ pipeline_worker_cleanup(uint8_t dev, uint8_t port, struct rte_event ev[],
 
 		rte_event_enqueue_burst(dev, port, ev, deq);
 	}
+
+	rte_event_port_quiesce(dev, port, pipeline_event_port_flush, NULL);
 }
 
 void

diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 9899b257b0..28b6ab85ff 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -140,6 +140,13 @@ schedule_devices(unsigned int lcore_id)
 	}
 }
 
+static void
+event_port_flush(uint8_t dev_id __rte_unused, struct rte_event ev,
+		 void *args)
+{
+	rte_mempool_put(args, ev.event_ptr);
+}
+
 static inline void
 worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
 	       uint16_t nb_enq, uint16_t nb_deq)
@@ -160,6 +167,8 @@ worker_cleanup(uint8_t dev_id, uint8_t port_id, struct rte_event events[],
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(dev_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(dev_id, port_id, event_port_flush, NULL);
 }
 
 void set_worker_generic_setup_data(struct setup_data *caps, bool burst);
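The cleanup helpers in this patch all follow the same two-step order:
events already dequeued by the lcore are first handed back with
RTE_EVENT_OP_RELEASE, and only then is the port quiesced so that
prefetched, not-yet-delivered events are flushed as well. A generalized
sketch of that order (the helper name is illustrative, not part of this
patch):

.. code-block:: c

        static void
        drain_then_quiesce(uint8_t dev_id, uint8_t port_id,
                           struct rte_event evs[], uint16_t nb_deq,
                           rte_eventdev_port_flush_t cb, void *args)
        {
                uint16_t i;

                /* Step 1: release events this lcore dequeued but will not
                 * forward. */
                for (i = 0; i < nb_deq; i++)
                        evs[i].op = RTE_EVENT_OP_RELEASE;
                rte_event_enqueue_burst(dev_id, port_id, evs, nb_deq);

                /* Step 2: flush anything the port still holds internally,
                 * such as prefetched events. */
                rte_event_port_quiesce(dev_id, port_id, cb, args);
        }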
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 3df5acf384..7f259e4cf3 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -737,6 +737,13 @@ ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
  * selected.
  */
 
+static void
+ipsec_event_port_flush(uint8_t eventdev_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	rte_pktmbuf_free(ev.mbuf);
+}
+
 /* Workers registered */
 #define IPSEC_EVENTMODE_WORKERS 2
 
@@ -861,6 +868,9 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 /*
@@ -974,6 +984,9 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		rte_event_enqueue_burst(links[0].eventdev_id,
 					links[0].event_port_id, &ev, 1);
 	}
+
+	rte_event_port_quiesce(links[0].eventdev_id, links[0].event_port_id,
+			       ipsec_event_port_flush, NULL);
 }
 
 static uint8_t

diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 15bfe790a0..41a0d3f22f 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -128,6 +128,16 @@ l2fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+l2fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l2fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
 			   struct rte_event events[], uint16_t nb_enq,
@@ -147,4 +157,7 @@ l2fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t port_id,
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, port_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, port_id, l2fwd_event_port_flush,
+			       NULL);
 }

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index a14a21b414..0b58475c85 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -301,6 +301,16 @@ l3fwd_event_vector_array_free(struct rte_event events[], uint16_t num)
 	}
 }
 
+static void
+l3fwd_event_port_flush(uint8_t event_d_id __rte_unused, struct rte_event ev,
+		       void *args __rte_unused)
+{
+	if (ev.event_type & RTE_EVENT_TYPE_VECTOR)
+		l3fwd_event_vector_array_free(&ev, 1);
+	else
+		rte_pktmbuf_free(ev.mbuf);
+}
+
 void
 l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
 			   struct rte_event events[], uint16_t nb_enq,
@@ -320,4 +330,7 @@ l3fwd_event_worker_cleanup(uint8_t event_d_id, uint8_t event_p_id,
 			events[i].op = RTE_EVENT_OP_RELEASE;
 		rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_deq);
 	}
+
+	rte_event_port_quiesce(event_d_id, event_p_id, l3fwd_event_port_flush,
+			       NULL);
 }
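Patch 3/3 below implements the new driver hook for the cnxk PMDs. For
readers unfamiliar with the driver side, the generic shape of such a hookup
looks roughly like the sketch below; `my_pmd_port`, `my_pmd_port_quiesce`
and `my_pmd_dequeue_pending` are hypothetical names, while the
`port_quiesce` ops field is the one added in patch 1/3:

.. code-block:: c

        #include <rte_eventdev.h>
        #include <eventdev_pmd.h>

        struct my_pmd_port {        /* hypothetical per-port state */
                uintptr_t hw_base;
        };

        /* Hypothetical helper: pop one pending event from the hardware
         * work slot, returning nonzero while events remain. */
        static int my_pmd_dequeue_pending(struct my_pmd_port *p,
                                          struct rte_event *ev);

        /* Hypothetical driver callback: drain any event still pending in
         * the port and hand it to the application's flush callback. */
        static void
        my_pmd_port_quiesce(struct rte_eventdev *dev, void *port,
                            rte_eventdev_port_flush_t flush_cb, void *args)
        {
                struct my_pmd_port *p = port;
                struct rte_event ev;

                while (my_pmd_dequeue_pending(p, &ev)) {
                        if (flush_cb != NULL)
                                flush_cb(dev->data->dev_id, ev, args);
                }
        }

        /* Wired into the driver's ops table alongside the existing
         * port ops: */
        static struct eventdev_ops my_pmd_ops = {
                .port_quiesce = my_pmd_port_quiesce,
                /* ... other ops elided ... */
        };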
From patchwork Fri May 13 17:58:41 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 111137
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Pavan Nikhilesh, Shijith Thotton
Cc: dev@dpdk.org
Subject: [PATCH v3 3/3] event/cnxk: implement event port quiesce function
Date: Fri, 13 May 2022 23:28:41 +0530
Message-ID: <20220513175841.11853-3-pbhagavatula@marvell.com>
In-Reply-To: <20220513175841.11853-1-pbhagavatula@marvell.com>
References: <20220427113715.15509-1-pbhagavatula@marvell.com>
 <20220513175841.11853-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Implement the event port quiesce function to clean up any lcore
resources used.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 drivers/event/cnxk/cn10k_eventdev.c | 78 ++++++++++++++++++++++++++---
 drivers/event/cnxk/cn9k_eventdev.c  | 60 +++++++++++++++++++++-
 2 files changed, 130 insertions(+), 8 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9b4d2895ec..409eb892a7 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -166,15 +166,23 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 		uint64_t u64[2];
 	} gw;
 	uint8_t pend_tt;
+	bool is_pend;
 
 	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
 
 	/* Wait till getwork/swtp/waitw/desched completes. */
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+	pend_state = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+	    ws->swtag_req)
+		is_pend = true;
+
 	do {
 		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 	} while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
 			       BIT_ULL(56) | BIT_ULL(54)));
 
 	pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
-	if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+	if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 		if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
 			cnxk_sso_hws_swtag_untag(base +
 						 SSOW_LF_GWS_OP_SWTAG_UNTAG);
@@ -188,15 +196,10 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 
 	switch (dev->gw_mode) {
 	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
 		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
 			;
 		break;
-	case CN10K_GW_MODE_PREF_WFE:
-		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) &
-		       SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT)
-			continue;
-		plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
-		break;
 	case CN10K_GW_MODE_NONE:
 	default:
 		break;
@@ -532,6 +535,66 @@ cn10k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }
 
+static void
+cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		       rte_eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn10k_sso_hws *ws = port;
+	struct rte_event ev;
+	uint64_t ptag;
+	bool is_pend;
+
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+	ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	if (ptag & (BIT_ULL(62) | BIT_ULL(54)) || ws->swtag_req)
+		is_pend = true;
+	do {
+		ptag = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+	} while (ptag &
+		 (BIT_ULL(62) | BIT_ULL(58) | BIT_ULL(56) | BIT_ULL(54)));
+
+	cn10k_sso_hws_get_work_empty(ws, &ev,
+				     (NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+					     NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+	if (is_pend && ev.u64) {
+		if (flush_cb)
+			flush_cb(event_dev->data->dev_id, ev, args);
+		cnxk_sso_hws_swtag_flush(ws->base);
+	}
+
+	/* Check if we have work in PRF_WQE0, if so extract it. */
+	switch (dev->gw_mode) {
+	case CN10K_GW_MODE_PREF:
+	case CN10K_GW_MODE_PREF_WFE:
+		while (plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0) &
+		       BIT_ULL(63))
+			;
+		break;
+	case CN10K_GW_MODE_NONE:
+	default:
+		break;
+	}
+
+	if (CNXK_TT_FROM_TAG(plt_read64(ws->base + SSOW_LF_GWS_PRF_WQE0)) !=
+	    SSO_TT_EMPTY) {
+		plt_write64(BIT_ULL(16) | 1,
+			    ws->base + SSOW_LF_GWS_OP_GET_WORK0);
+		cn10k_sso_hws_get_work_empty(
+			ws, &ev,
+			(NIX_RX_OFFLOAD_MAX - 1) | NIX_RX_REAS_F |
+				NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F);
+		if (ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			cnxk_sso_hws_swtag_flush(ws->base);
+		}
+	}
+	ws->swtag_req = 0;
+	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+}
+
 static int
 cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
 		    const uint8_t queues[], const uint8_t priorities[],
@@ -851,6 +914,7 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn10k_sso_port_setup,
 	.port_release = cn10k_sso_port_release,
+	.port_quiesce = cn10k_sso_port_quiesce,
 	.port_link = cn10k_sso_port_link,
 	.port_unlink = cn10k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 4bba477dd1..dde8497895 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -181,6 +181,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	uint64_t pend_state;
 	uint8_t pend_tt;
 	uintptr_t base;
+	bool is_pend;
 	uint64_t tag;
 	uint8_t i;
 
@@ -188,6 +189,13 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 	ws = hws;
 	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
 		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ? (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
 		/* Wait till getwork/swtp/waitw/desched completes. */
 		do {
 			pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
@@ -196,7 +204,7 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 
 		tag = plt_read64(base + SSOW_LF_GWS_TAG);
 		pend_tt = (tag >> 32) & 0x3;
-		if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+		if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
 			if (pend_tt == SSO_TT_ATOMIC ||
 			    pend_tt == SSO_TT_ORDERED)
 				cnxk_sso_hws_swtag_untag(
@@ -208,7 +216,14 @@ cn9k_sso_hws_reset(void *arg, void *hws)
 		do {
 			pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
 		} while (pend_state & BIT_ULL(58));
+
+		plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
 	}
+
+	if (dev->dual_ws)
+		dws->swtag_req = 0;
+	else
+		ws->swtag_req = 0;
 }
 
 void
@@ -784,6 +799,48 @@ cn9k_sso_port_release(void *port)
 	rte_free(gws_cookie);
 }
 
+static void
+cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
+		      rte_eventdev_port_flush_t flush_cb, void *args)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct cn9k_sso_hws_dual *dws;
+	struct cn9k_sso_hws *ws;
+	struct rte_event ev;
+	uintptr_t base;
+	uint64_t ptag;
+	bool is_pend;
+	uint8_t i;
+
+	dws = port;
+	ws = port;
+	for (i = 0; i < (dev->dual_ws ? CN9K_DUAL_WS_NB_WS : 1); i++) {
+		base = dev->dual_ws ? dws->base[i] : ws->base;
+		is_pend = false;
+		/* Work in WQE0 is always consumed, unless it's a SWTAG. */
+		ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		if (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
+		    (dev->dual_ws ?
+				    (dws->swtag_req && i == !dws->vws) :
+				    ws->swtag_req))
+			is_pend = true;
+		/* Wait till getwork/swtp/waitw/desched completes. */
+		do {
+			ptag = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+		} while (ptag & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
+				 BIT_ULL(56)));
+
+		cn9k_sso_hws_get_work_empty(
+			base, &ev, dev->rx_offloads,
+			dev->dual_ws ? dws->lookup_mem : ws->lookup_mem,
+			dev->dual_ws ? dws->tstamp : ws->tstamp);
+		if (is_pend && ev.u64) {
+			if (flush_cb)
+				flush_cb(event_dev->data->dev_id, ev, args);
+			/* Flush against the per-iteration workslot base, not
+			 * ws->base, so the dual-workslot case is handled. */
+			cnxk_sso_hws_swtag_flush(base);
+		}
+	}
+}
+
 static int
 cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
 		   const uint8_t queues[], const uint8_t priorities[],
@@ -1085,6 +1142,7 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
 	.port_def_conf = cnxk_sso_port_def_conf,
 	.port_setup = cn9k_sso_port_setup,
 	.port_release = cn9k_sso_port_release,
+	.port_quiesce = cn9k_sso_port_quiesce,
 	.port_link = cn9k_sso_port_link,
 	.port_unlink = cn9k_sso_port_unlink,
 	.timeout_ticks = cnxk_sso_timeout_ticks,