From patchwork Sun Jun 13 00:06:49 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 94142
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Kishore Padmanabha, Venkat Duvvuru, Randy Schacher
Date: Sat, 12 Jun 2021 17:06:49 -0700
Message-Id: <20210613000652.28191-56-ajit.khaparde@broadcom.com>
In-Reply-To: <20210613000652.28191-1-ajit.khaparde@broadcom.com>
References: <20210530085929.29695-1-venkatkumar.duvvuru@broadcom.com>
 <20210613000652.28191-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 55/58] net/bnxt: add context list for timers

From: Kishore Padmanabha

Add a context list in the ULP layer for timers. The alarm callback needs
a valid context pointer when it is invoked, but the registered pointer
can become stale if the port goes down and the callback fires before it
is cancelled. Instead of passing the context as the callback argument,
the callback now looks up a live context from a global, lock-protected
list.
Signed-off-by: Kishore Padmanabha
Signed-off-by: Venkat Duvvuru
Reviewed-by: Randy Schacher
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c   | 96 ++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h   | 12 ++++
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c | 30 ++++++---
 drivers/net/bnxt/tf_ulp/ulp_ha_mgr.c | 37 +++++------
 4 files changed, 145 insertions(+), 30 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 972bf8b992..5f1540027c 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include

 #include "bnxt.h"
 #include "bnxt_ulp.h"
@@ -32,6 +33,17 @@ STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
 /* Mutex to synchronize bnxt_ulp_session_list operations. */
 static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
 
+/* Spin lock to protect context global list */
+rte_spinlock_t bnxt_ulp_ctxt_lock;
+TAILQ_HEAD(cntx_list_entry_list, ulp_context_list_entry);
+static struct cntx_list_entry_list ulp_cntx_list =
+	TAILQ_HEAD_INITIALIZER(ulp_cntx_list);
+
+/* Static function declarations */
+static int32_t bnxt_ulp_cntxt_list_init(void);
+static int32_t bnxt_ulp_cntxt_list_add(struct bnxt_ulp_context *ulp_ctx);
+static void bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx);
+
 /*
  * Allow the deletion of context only for the bnxt device that
  * created the session.
@@ -743,6 +755,16 @@ ulp_ctx_init(struct bnxt *bp,
 	int32_t rc = 0;
 	enum bnxt_ulp_device_id devid;
 
+	/* Initialize the context entries list */
+	bnxt_ulp_cntxt_list_init();
+
+	/* Add the context to the context entries list */
+	rc = bnxt_ulp_cntxt_list_add(bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to add the context list entry\n");
+		return -ENOMEM;
+	}
+
 	/* Allocate memory to hold ulp context data. */
 	ulp_data = rte_zmalloc("bnxt_ulp_data",
 			       sizeof(struct bnxt_ulp_data), 0);
@@ -878,6 +900,13 @@ ulp_ctx_attach(struct bnxt *bp,
 	/* update the session details in bnxt tfp */
 	bp->tfp.session = session->g_tfp->session;
 
+	/* Add the context to the context entries list */
+	rc = bnxt_ulp_cntxt_list_add(bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to add the context list entry\n");
+		return -EINVAL;
+	}
+
 	/*
 	 * The supported flag will be set during the init. Use it now to
 	 * know if we should go through the attach.
@@ -1448,6 +1477,9 @@ bnxt_ulp_port_deinit(struct bnxt *bp)
 	BNXT_TF_DBG(DEBUG, "BNXT Port:%d ULP port deinit\n",
 		    bp->eth_dev->data->port_id);
 
+	/* Free the ulp context in the context entry list */
+	bnxt_ulp_cntxt_list_del(bp->ulp_ctx);
+
 	/* Get the session details */
 	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
 	pci_addr = &pci_dev->addr;
@@ -1887,3 +1919,67 @@ bnxt_ulp_cntxt_ha_enabled(struct bnxt_ulp_context *ulp_ctx)
 		return false;
 	return !!ULP_HIGH_AVAIL_IS_ENABLED(ulp_ctx->cfg_data->ulp_flags);
 }
+
+static int32_t
+bnxt_ulp_cntxt_list_init(void)
+{
+	/* Create the cntxt spin lock */
+	rte_spinlock_init(&bnxt_ulp_ctxt_lock);
+
+	return 0;
+}
+
+static int32_t
+bnxt_ulp_cntxt_list_add(struct bnxt_ulp_context *ulp_ctx)
+{
+	struct ulp_context_list_entry *entry;
+
+	entry = rte_zmalloc(NULL, sizeof(struct ulp_context_list_entry), 0);
+	if (entry == NULL) {
+		BNXT_TF_DBG(ERR, "unable to allocate memory\n");
+		return -ENOMEM;
+	}
+
+	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
+	entry->ulp_ctx = ulp_ctx;
+	TAILQ_INSERT_TAIL(&ulp_cntx_list, entry, next);
+	rte_spinlock_unlock(&bnxt_ulp_ctxt_lock);
+	return 0;
+}
+
+static void
+bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
+{
+	struct ulp_context_list_entry *entry, *temp;
+
+	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
+	TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+		if (entry->ulp_ctx == ulp_ctx) {
+			TAILQ_REMOVE(&ulp_cntx_list, entry, next);
+			rte_free(entry);
+			break;
+		}
+	}
+	rte_spinlock_unlock(&bnxt_ulp_ctxt_lock);
+}
+
+struct bnxt_ulp_context *
+bnxt_ulp_cntxt_entry_acquire(void)
+{
+	struct ulp_context_list_entry *entry;
+
+	/* take a lock and get the first ulp context available */
+	if (rte_spinlock_trylock(&bnxt_ulp_ctxt_lock)) {
+		TAILQ_FOREACH(entry, &ulp_cntx_list, next)
+			if (entry->ulp_ctx)
+				return entry->ulp_ctx;
+		/* no live context: drop the lock before returning */
+		rte_spinlock_unlock(&bnxt_ulp_ctxt_lock);
+	}
+	return NULL;
+}
+
+void
+bnxt_ulp_cntxt_entry_release(void)
+{
+	rte_spinlock_unlock(&bnxt_ulp_ctxt_lock);
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index b1f090a5cb..ea38dc0d9f 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -112,6 +112,11 @@ struct ulp_tlv_param {
 	uint8_t value[16];
 };
 
+struct ulp_context_list_entry {
+	TAILQ_ENTRY(ulp_context_list_entry)	next;
+	struct bnxt_ulp_context			*ulp_ctx;
+};
+
 /*
  * Allow the deletion of context only for the bnxt device that
  * created the session
@@ -285,4 +290,11 @@ bnxt_ulp_cntxt_ptr2_ha_info_get(struct bnxt_ulp_context *ulp_ctx);
 
 bool
 bnxt_ulp_cntxt_ha_enabled(struct bnxt_ulp_context *ulp_ctx);
+
+struct bnxt_ulp_context *
+bnxt_ulp_cntxt_entry_acquire(void);
+
+void
+bnxt_ulp_cntxt_entry_release(void);
+
 #endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index 7c83cb2054..9a77132385 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -197,8 +197,7 @@ ulp_fc_mgr_thread_start(struct bnxt_ulp_context *ctxt)
 
 	if (ulp_fc_info && !(ulp_fc_info->flags & ULP_FLAG_FC_THREAD)) {
 		rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
-				  ulp_fc_mgr_alarm_cb,
-				  (void *)ctxt);
+				  ulp_fc_mgr_alarm_cb, NULL);
 		ulp_fc_info->flags |= ULP_FLAG_FC_THREAD;
 	}
 
@@ -220,7 +219,7 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
 		return;
 
 	ulp_fc_info->flags &= ~ULP_FLAG_FC_THREAD;
-	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, (void *)ctxt);
+	rte_eal_alarm_cancel(ulp_fc_mgr_alarm_cb, NULL);
 }
 
 /*
@@ -363,35 +362,48 @@ static int ulp_get_single_flow_stat(struct bnxt_ulp_context *ctxt,
  */
 
 void
-ulp_fc_mgr_alarm_cb(void *arg)
+ulp_fc_mgr_alarm_cb(void *arg __rte_unused)
 {
 	int rc = 0;
 	unsigned int j;
 	enum tf_dir i;
-	struct bnxt_ulp_context *ctxt = arg;
+	struct bnxt_ulp_context *ctxt;
 	struct bnxt_ulp_fc_info *ulp_fc_info;
 	struct bnxt_ulp_device_params *dparms;
 	struct tf *tfp;
 	uint32_t dev_id, hw_cntr_id = 0, num_entries = 0;
 
+	ctxt = bnxt_ulp_cntxt_entry_acquire();
+	if (ctxt == NULL) {
+		BNXT_TF_DBG(INFO, "could not get the ulp context lock\n");
+		rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
+				  ulp_fc_mgr_alarm_cb, NULL);
+		return;
+	}
+
 	ulp_fc_info = bnxt_ulp_cntxt_ptr2_fc_info_get(ctxt);
-	if (!ulp_fc_info)
+	if (!ulp_fc_info) {
+		bnxt_ulp_cntxt_entry_release();
 		return;
+	}
 
 	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
 		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		bnxt_ulp_cntxt_entry_release();
 		return;
 	}
 
 	dparms = bnxt_ulp_device_params_get(dev_id);
 	if (!dparms) {
 		BNXT_TF_DBG(DEBUG, "Failed to device parms\n");
+		bnxt_ulp_cntxt_entry_release();
 		return;
 	}
 
 	tfp = bnxt_ulp_cntxt_tfp_get(ctxt, BNXT_ULP_SHARED_SESSION_NO);
 	if (!tfp) {
 		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		bnxt_ulp_cntxt_entry_release();
 		return;
 	}
 
@@ -405,6 +417,7 @@ ulp_fc_mgr_alarm_cb(void *arg)
 	if (!ulp_fc_info->num_entries) {
 		pthread_mutex_unlock(&ulp_fc_info->fc_lock);
 		ulp_fc_mgr_thread_cancel(ctxt);
+		bnxt_ulp_cntxt_entry_release();
 		return;
 	}
 	/*
@@ -443,12 +456,13 @@ ulp_fc_mgr_alarm_cb(void *arg)
 
 	if (rc) {
 		ulp_fc_mgr_thread_cancel(ctxt);
+		bnxt_ulp_cntxt_entry_release();
 		return;
 	}
 out:
+	bnxt_ulp_cntxt_entry_release();
 	rte_eal_alarm_set(US_PER_S * ULP_FC_TIMER,
-			  ulp_fc_mgr_alarm_cb,
-			  (void *)ctxt);
+			  ulp_fc_mgr_alarm_cb, NULL);
 }
 
 /*
diff --git a/drivers/net/bnxt/tf_ulp/ulp_ha_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_ha_mgr.c
index dc71054f46..1cfe5cd0a7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_ha_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_ha_mgr.c
@@ -26,7 +26,7 @@
 #define ULP_HA_IF_TBL_IDX 10
 
 static void ulp_ha_mgr_timer_cancel(struct bnxt_ulp_context *ulp_ctx);
-static int32_t ulp_ha_mgr_timer_start(struct bnxt_ulp_context *ulp_ctx);
+static int32_t ulp_ha_mgr_timer_start(void);
 static void ulp_ha_mgr_timer_cb(void *arg);
 static int32_t ulp_ha_mgr_app_type_set(struct bnxt_ulp_context *ulp_ctx,
 				       enum ulp_ha_mgr_app_type app_type);
@@ -126,7 +126,7 @@ ulp_ha_mgr_app_type_set(struct bnxt_ulp_context *ulp_ctx,
  * - Release the flow db lock for flows to continue
  */
 static void
-ulp_ha_mgr_timer_cb(void *arg)
+ulp_ha_mgr_timer_cb(void *arg __rte_unused)
 {
 	struct tf_move_tcam_shared_entries_parms mparms = { 0 };
 	struct bnxt_ulp_context *ulp_ctx;
@@ -134,7 +134,13 @@
 	struct tf *tfp;
 	int32_t rc;
 
-	ulp_ctx = (struct bnxt_ulp_context *)arg;
+	ulp_ctx = bnxt_ulp_cntxt_entry_acquire();
+	if (ulp_ctx == NULL) {
+		BNXT_TF_DBG(INFO, "could not get the ulp context lock\n");
+		ulp_ha_mgr_timer_start();
+		return;
+	}
+
 	rc = ulp_ha_mgr_state_get(ulp_ctx, &curr_state);
 	if (rc) {
 		/*
@@ -180,31 +186,18 @@
 	BNXT_TF_DBG(INFO, "On HA CB: SEC[SEC_TIMER_COPY] => PRIM[PRIM_RUN]\n");
 unlock:
 	bnxt_ulp_cntxt_release_fdb_lock(ulp_ctx);
+	bnxt_ulp_cntxt_entry_release();
 	return;
cb_restart:
-	ulp_ha_mgr_timer_start(ulp_ctx);
+	bnxt_ulp_cntxt_entry_release();
+	ulp_ha_mgr_timer_start();
 }
 
 static int32_t
-ulp_ha_mgr_timer_start(struct bnxt_ulp_context *ulp_ctx)
+ulp_ha_mgr_timer_start(void)
 {
-	struct bnxt_ulp_ha_mgr_info *ha_info;
-
-	if (ulp_ctx == NULL) {
-		BNXT_TF_DBG(ERR, "Invalid parmsi for ha timer start.\n");
-		return -EINVAL;
-	}
-
-	ha_info = bnxt_ulp_cntxt_ptr2_ha_info_get(ulp_ctx);
-
-	if (ha_info == NULL) {
-		BNXT_TF_DBG(ERR, "Unable to get HA Info in timer start.\n");
-		return -EINVAL;
-	}
-	ha_info->flags |= ULP_HA_TIMER_THREAD;
 	rte_eal_alarm_set(US_PER_S * ULP_HA_TIMER_SEC,
-			  ulp_ha_mgr_timer_cb,
-			  (void *)ulp_ctx);
+			  ulp_ha_mgr_timer_cb, NULL);
 	return 0;
 }
 
@@ -374,7 +367,7 @@ ulp_ha_mgr_open(struct bnxt_ulp_context *ulp_ctx)
 	 * Clear the high region so the secondary can begin overriding
 	 * the current entries.
 	 */
-	rc = ulp_ha_mgr_timer_start(ulp_ctx);
+	rc = ulp_ha_mgr_timer_start();
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Unable to start timer on HA Open.\n");
 		return -EINVAL;
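
For readers unfamiliar with the pattern, the acquire/release discipline the
alarm callbacks adopt can be sketched in plain C. This is an illustrative
stand-alone sketch, not driver code: `ctx_registry`, `ctx_acquire`,
`ctx_release`, and `timer_cb` are hypothetical names, and a pthread mutex
stands in for `rte_spinlock_t`:

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-in for the driver's global context list: the timer
 * callback looks up a live context under a lock instead of trusting the
 * (possibly stale) pointer it was registered with. */
struct ctx { int port_live; };

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static struct ctx *ctx_registry; /* NULL once the port goes down */

static struct ctx *ctx_acquire(void)
{
	if (pthread_mutex_trylock(&ctx_lock) != 0)
		return NULL;		/* lock busy: caller re-arms the timer */
	if (ctx_registry == NULL) {
		pthread_mutex_unlock(&ctx_lock); /* nothing live: drop the lock */
		return NULL;
	}
	return ctx_registry;		/* returned with the lock held */
}

static void ctx_release(void)
{
	pthread_mutex_unlock(&ctx_lock);
}

/* Timer callback: the registered argument is ignored, as in
 * ulp_fc_mgr_alarm_cb(); returns 0 if work was done, -1 otherwise. */
static int timer_cb(void *arg)
{
	struct ctx *c = ctx_acquire();

	(void)arg;
	if (c == NULL)
		return -1;		/* context gone or lock contended */
	/* ... do per-context work under the lock ... */
	ctx_release();
	return 0;
}
```

Note that every early-return path must pair the acquire with a release,
which is why the patch adds `bnxt_ulp_cntxt_entry_release()` before each
`return` in the callbacks.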
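
The context list itself is ordinary BSD `TAILQ` machinery. A minimal
standalone sketch of the add/delete operations (illustrative only:
`malloc`/`free` replace `rte_zmalloc`/`rte_free`, `struct ulp_ctx` is a
placeholder, locking is omitted, and glibc's `<sys/queue.h>` lacks
`TAILQ_FOREACH_SAFE`, so it is defined inline as in the BSD header):

```c
#include <sys/queue.h>
#include <stdlib.h>

/* glibc's sys/queue.h has no TAILQ_FOREACH_SAFE; supply the BSD macro. */
#ifndef TAILQ_FOREACH_SAFE
#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
	for ((var) = TAILQ_FIRST(head); \
	     (var) && ((tvar) = TAILQ_NEXT(var, field), 1); \
	     (var) = (tvar))
#endif

struct ulp_ctx { int id; };		/* placeholder for the driver's context */

struct ctx_entry {
	TAILQ_ENTRY(ctx_entry) next;
	struct ulp_ctx *ulp_ctx;
};

static TAILQ_HEAD(ctx_head, ctx_entry) ctx_list =
	TAILQ_HEAD_INITIALIZER(ctx_list);

/* Mirrors bnxt_ulp_cntxt_list_add(): one entry per live context. */
static int ctx_list_add(struct ulp_ctx *ctx)
{
	struct ctx_entry *e = calloc(1, sizeof(*e));

	if (e == NULL)
		return -1;
	e->ulp_ctx = ctx;
	TAILQ_INSERT_TAIL(&ctx_list, e, next);
	return 0;
}

/* Mirrors bnxt_ulp_cntxt_list_del(): unlink and free the matching entry;
 * the SAFE iterator allows removal while walking the list. */
static void ctx_list_del(struct ulp_ctx *ctx)
{
	struct ctx_entry *e, *tmp;

	TAILQ_FOREACH_SAFE(e, &ctx_list, next, tmp) {
		if (e->ulp_ctx == ctx) {
			TAILQ_REMOVE(&ctx_list, e, next);
			free(e);
			break;
		}
	}
}

static int ctx_list_len(void)
{
	struct ctx_entry *e;
	int n = 0;

	TAILQ_FOREACH(e, &ctx_list, next)
		n++;
	return n;
}
```

Deleting a context that is not on the list is a harmless no-op, which is
what lets bnxt_ulp_port_deinit() call the delete unconditionally.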