From patchwork Tue Sep 5 16:39:06 2023
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 131180
X-Patchwork-Delegate: jerinj@marvell.com
From:
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Pavan Nikhilesh, Shijith Thotton
CC:
Subject: [PATCH 1/3] cnxk/event: invalidate GWC on port reset
Date: Tue, 5 Sep 2023 22:09:06 +0530
Message-ID: <20230905163908.19946-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Invalidate the GWC on event port (i.e., HWS) reset to prevent invalid
responses from the SSO.
Signed-off-by: Pavan Nikhilesh
---
 drivers/common/cnxk/roc_sso.c       | 31 +++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_sso.h       |  2 ++
 drivers/common/cnxk/version.map     |  1 +
 drivers/event/cnxk/cn10k_eventdev.c | 19 +++++++++++++++++-
 4 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index a5f48d5bbc..1ea0761531 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -357,6 +357,37 @@ roc_sso_hws_stats_get(struct roc_sso *roc_sso, uint8_t hws,
 	return rc;
 }
 
+void
+roc_sso_hws_gwc_invalidate(struct roc_sso *roc_sso, uint8_t *hws,
+			   uint8_t nb_hws)
+{
+	struct sso *sso = roc_sso_to_sso_priv(roc_sso);
+	struct ssow_lf_inv_req *req;
+	struct dev *dev = &sso->dev;
+	struct mbox *mbox;
+	int i;
+
+	if (!nb_hws)
+		return;
+
+	mbox = mbox_get(dev->mbox);
+	req = mbox_alloc_msg_sso_ws_cache_inv(mbox);
+	if (req == NULL) {
+		mbox_process(mbox);
+		req = mbox_alloc_msg_sso_ws_cache_inv(mbox);
+		if (req == NULL) {
+			mbox_put(mbox);
+			return;
+		}
+	}
+	req->hdr.ver = SSOW_INVAL_SELECTIVE_VER;
+	req->nb_hws = nb_hws;
+	for (i = 0; i < nb_hws; i++)
+		req->hws[i] = hws[i];
+	mbox_process(mbox);
+	mbox_put(mbox);
+}
+
 int
 roc_sso_hwgrp_stats_get(struct roc_sso *roc_sso, uint8_t hwgrp,
 			struct roc_sso_hwgrp_stats *stats)
diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h
index a2bb6fcb22..8ee62afb9a 100644
--- a/drivers/common/cnxk/roc_sso.h
+++ b/drivers/common/cnxk/roc_sso.h
@@ -100,6 +100,8 @@ int __roc_api roc_sso_hwgrp_free_xaq_aura(struct roc_sso *roc_sso,
 int __roc_api roc_sso_hwgrp_stash_config(struct roc_sso *roc_sso,
 					 struct roc_sso_hwgrp_stash *stash,
 					 uint16_t nb_stash);
+void __roc_api roc_sso_hws_gwc_invalidate(struct roc_sso *roc_sso, uint8_t *hws,
+					  uint8_t nb_hws);
 
 /* Debug */
 void __roc_api roc_sso_dump(struct roc_sso *roc_sso, uint8_t nb_hws,
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 8c71497df8..cfb7efbdc7 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -475,6 +475,7 @@ INTERNAL {
 	roc_sso_hws_base_get;
 	roc_sso_hws_link;
 	roc_sso_hws_stats_get;
+	roc_sso_hws_gwc_invalidate;
 	roc_sso_hws_unlink;
 	roc_sso_ns_to_gw;
 	roc_sso_rsrc_fini;
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 499a3aace7..56482c20a1 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -118,6 +118,7 @@ static int
 cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
 			   cnxk_handle_event_t fn, void *arg)
 {
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg);
 	uint64_t retry = CNXK_SSO_FLUSH_RETRY_MAX;
 	struct cn10k_sso_hws *ws = hws;
 	uint64_t cq_ds_cnt = 1;
@@ -128,6 +129,7 @@ cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
 
 	plt_write64(0, base + SSO_LF_GGRP_QCTL);
 
+	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
 	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
 	req = queue_id;	    /* GGRP ID */
 	req |= BIT_ULL(18); /* Grouped */
@@ -162,6 +164,7 @@ cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
 		return -EAGAIN;
 
 	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
 	rte_mb();
 
 	return 0;
@@ -181,6 +184,7 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 	uint8_t pend_tt;
 	bool is_pend;
 
+	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
 	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
 	/* Wait till getwork/swtp/waitw/desched completes. */
 	is_pend = false;
@@ -237,6 +241,7 @@ cn10k_sso_hws_reset(void *arg, void *hws)
 	}
 
 	plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
 	rte_mb();
 }
 
@@ -670,7 +675,9 @@ cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
 static int
 cn10k_sso_start(struct rte_eventdev *event_dev)
 {
-	int rc;
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t hws[RTE_EVENT_MAX_PORTS_PER_DEV];
+	int rc, i;
 
 	rc = cn10k_sso_updt_tx_adptr_data(event_dev);
 	if (rc < 0)
@@ -682,6 +689,9 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
 	if (rc < 0)
 		return rc;
 	cn10k_sso_fp_fns_set(event_dev);
+	for (i = 0; i < event_dev->data->nb_ports; i++)
+		hws[i] = i;
+	roc_sso_hws_gwc_invalidate(&dev->sso, hws, event_dev->data->nb_ports);
 
 	return rc;
 }
@@ -689,6 +699,13 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
 static void
 cn10k_sso_stop(struct rte_eventdev *event_dev)
 {
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t hws[RTE_EVENT_MAX_PORTS_PER_DEV];
+	int i;
+
+	for (i = 0; i < event_dev->data->nb_ports; i++)
+		hws[i] = i;
+	roc_sso_hws_gwc_invalidate(&dev->sso, hws, event_dev->data->nb_ports);
 	cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
 		      cn10k_sso_hws_flush_events);
 }