From patchwork Thu Aug 26 14:57:21 2021
X-Patchwork-Submitter: Harman Kalra <hkalra@marvell.com>
X-Patchwork-Id: 97369
X-Patchwork-Delegate: david.marchand@redhat.com
From: Harman Kalra <hkalra@marvell.com>
To: dev@dpdk.org, Harman Kalra, Ray Kinsella
Date: Thu, 26 Aug 2021 20:27:21 +0530
Message-ID: <20210826145726.102081-3-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20210826145726.102081-1-hkalra@marvell.com>
References: <20210826145726.102081-1-hkalra@marvell.com>
Subject: [dpdk-dev] [RFC 2/7] eal/interrupts: implement get set APIs
List-Id: DPDK patches and discussions

Implement get/set APIs for the interrupt handle fields. Any change to an
interrupt handle field should be made through these APIs instead of by
accessing the structure members directly.
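As a rough illustration of the intended usage (not part of the patch itself),
a driver that today writes the rte_intr_handle fields directly would instead
go through the new accessors along the lines below. The function name
example_setup_intr and the dev_fd argument are invented for the example, and
it assumes the new declarations are visible through rte_interrupts.h; the
rte_intr_handle_* calls are the ones added in this patch.

    #include <stdbool.h>

    #include <rte_interrupts.h>
    #include <rte_errno.h>
    #include <rte_log.h>

    /* Illustrative only: allocate a single interrupt handle instance on the
     * heap (not from hugepage memory) and drive it through the new accessors.
     */
    static int
    example_setup_intr(int dev_fd)
    {
            struct rte_intr_handle *handle;

            handle = rte_intr_handle_instance_alloc(1, false);
            if (handle == NULL)
                    return -rte_errno;

            /* Setters return 0 on success, a non-zero rte_errno value on failure. */
            if (rte_intr_handle_fd_set(handle, dev_fd) ||
                rte_intr_handle_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX)) {
                    rte_intr_handle_instance_free(handle);
                    return -rte_errno;
            }

            /* Readers likewise use the getters rather than the struct fields. */
            RTE_LOG(DEBUG, EAL, "intr fd=%d type=%d\n",
                    rte_intr_handle_fd_get(handle),
                    rte_intr_handle_type_get(handle));

            rte_intr_handle_instance_free(handle);
            return 0;
    }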
Signed-off-by: Harman Kalra Acked-by: Ray Kinsella --- lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++ lib/eal/common/meson.build | 2 + lib/eal/include/rte_eal_interrupts.h | 6 +- lib/eal/version.map | 30 ++ 4 files changed, 543 insertions(+), 1 deletion(-) create mode 100644 lib/eal/common/eal_common_interrupts.c diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c new file mode 100644 index 0000000000..2e4fed96f0 --- /dev/null +++ b/lib/eal/common/eal_common_interrupts.c @@ -0,0 +1,506 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2021 Marvell. + */ + +#include +#include + +#include +#include +#include + +#include + + +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size, + bool from_hugepage) +{ + struct rte_intr_handle *intr_handle; + int i; + + if (from_hugepage) + intr_handle = rte_zmalloc(NULL, + size * sizeof(struct rte_intr_handle), + 0); + else + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle)); + if (!intr_handle) { + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + rte_errno = ENOMEM; + return NULL; + } + + for (i = 0; i < size; i++) { + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID; + intr_handle[i].alloc_from_hugepage = from_hugepage; + } + + return intr_handle; +} + +struct rte_intr_handle *rte_intr_handle_instance_index_get( + struct rte_intr_handle *intr_handle, int index) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOMEM; + return NULL; + } + + return &intr_handle[index]; +} + +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle, + const struct rte_intr_handle *src, + int index) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (src == NULL) { + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n"); + rte_errno = EINVAL; + goto fail; + } + + if (index < 0) { + RTE_LOG(ERR, EAL, "Index cany be negative"); + rte_errno = EINVAL; + goto fail; + } + + intr_handle[index].fd = src->fd; + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd; + intr_handle[index].type = src->type; + intr_handle[index].max_intr = src->max_intr; + intr_handle[index].nb_efd = src->nb_efd; + intr_handle[index].efd_counter_size = src->efd_counter_size; + + memcpy(intr_handle[index].efds, src->efds, src->nb_intr); + memcpy(intr_handle[index].elist, src->elist, src->nb_intr); + + return 0; +fail: + return rte_errno; +} + +void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + } + + if (intr_handle->alloc_from_hugepage) + rte_free(intr_handle); + else + free(intr_handle); +} + +int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + intr_handle->fd = fd; + + return 0; +fail: + return rte_errno; +} + +int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + return intr_handle->fd; +fail: + return rte_errno; +} + +int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle, + enum rte_intr_handle_type type) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + 
rte_errno = ENOTSUP; + goto fail; + } + + intr_handle->type = type; + + return 0; +fail: + return rte_errno; +} + +enum rte_intr_handle_type rte_intr_handle_type_get( + const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + return RTE_INTR_HANDLE_UNKNOWN; + } + + return intr_handle->type; +} + +int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + intr_handle->vfio_dev_fd = fd; + + return 0; +fail: + return rte_errno; +} + +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + return intr_handle->vfio_dev_fd; +fail: + return rte_errno; +} + +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle, + int max_intr) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (max_intr > intr_handle->nb_intr) { + RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d", + max_intr, intr_handle->nb_intr); + rte_errno = ERANGE; + goto fail; + } + + intr_handle->max_intr = max_intr; + + return 0; +fail: + return rte_errno; +} + +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + return intr_handle->max_intr; +fail: + return rte_errno; +} + +int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle, + int nb_efd) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + intr_handle->nb_efd = nb_efd; + + return 0; +fail: + return rte_errno; +} + +int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + return intr_handle->nb_efd; +fail: + return rte_errno; +} + +int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + return intr_handle->nb_intr; +fail: + return rte_errno; +} + +int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle, + uint8_t efd_counter_size) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + intr_handle->efd_counter_size = efd_counter_size; + + return 0; +fail: + return rte_errno; +} + +int rte_intr_handle_efd_counter_size_get( + const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + return intr_handle->efd_counter_size; +fail: + return rte_errno; +} + +int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + return intr_handle->efds; +fail: + return NULL; +} + +int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle, + int index) +{ + if (intr_handle == 
NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (index >= intr_handle->nb_intr) { + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, + intr_handle->nb_intr); + rte_errno = EINVAL; + goto fail; + } + + return intr_handle->efds[index]; +fail: + return rte_errno; +} + +int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle, + int index, int fd) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (index >= intr_handle->nb_intr) { + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, + intr_handle->nb_intr); + rte_errno = ERANGE; + goto fail; + } + + intr_handle->efds[index] = fd; + + return 0; +fail: + return rte_errno; +} + +struct rte_epoll_event *rte_intr_handle_elist_index_get( + struct rte_intr_handle *intr_handle, int index) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (index >= intr_handle->nb_intr) { + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, + intr_handle->nb_intr); + rte_errno = ERANGE; + goto fail; + } + + return &intr_handle->elist[index]; +fail: + return NULL; +} + +int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle, + int index, struct rte_epoll_event elist) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (index >= intr_handle->nb_intr) { + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index, + intr_handle->nb_intr); + rte_errno = ERANGE; + goto fail; + } + + intr_handle->elist[index] = elist; + + return 0; +fail: + return rte_errno; +} + +int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + return NULL; + } + + return intr_handle->intr_vec; +} + +int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle, + const char *name, int size) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + /* Vector list already allocated */ + if (intr_handle->intr_vec) + return 0; + + if (size > intr_handle->nb_intr) { + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size, + intr_handle->nb_intr); + rte_errno = ERANGE; + goto fail; + } + + intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0); + if (!intr_handle->intr_vec) { + RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size); + rte_errno = ENOMEM; + goto fail; + } + + intr_handle->vec_list_size = size; + + return 0; +fail: + return rte_errno; +} + +int rte_intr_handle_vec_list_index_get( + const struct rte_intr_handle *intr_handle, int index) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (!intr_handle->intr_vec) { + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (index > intr_handle->vec_list_size) { + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n", + index, intr_handle->vec_list_size); + rte_errno = ERANGE; + goto fail; + } + + return intr_handle->intr_vec[index]; +fail: + return rte_errno; +} + +int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle, + int index, int vec) +{ + if (intr_handle == NULL) { + 
RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (!intr_handle->intr_vec) { + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n"); + rte_errno = ENOTSUP; + goto fail; + } + + if (index > intr_handle->vec_list_size) { + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n", + index, intr_handle->vec_list_size); + rte_errno = ERANGE; + goto fail; + } + + intr_handle->intr_vec[index] = vec; + + return 0; +fail: + return rte_errno; +} + +void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle) +{ + if (intr_handle == NULL) { + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); + rte_errno = ENOTSUP; + } + + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; +} diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build index edfca77779..47f2977539 100644 --- a/lib/eal/common/meson.build +++ b/lib/eal/common/meson.build @@ -17,6 +17,7 @@ if is_windows 'eal_common_errno.c', 'eal_common_fbarray.c', 'eal_common_hexdump.c', + 'eal_common_interrupts.c', 'eal_common_launch.c', 'eal_common_lcore.c', 'eal_common_log.c', @@ -53,6 +54,7 @@ sources += files( 'eal_common_fbarray.c', 'eal_common_hexdump.c', 'eal_common_hypervisor.c', + 'eal_common_interrupts.c', 'eal_common_launch.c', 'eal_common_lcore.c', 'eal_common_log.c', diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h index 68ca3a042d..216aece61b 100644 --- a/lib/eal/include/rte_eal_interrupts.h +++ b/lib/eal/include/rte_eal_interrupts.h @@ -55,13 +55,17 @@ struct rte_intr_handle { }; void *handle; /**< device driver handle (Windows) */ }; + bool alloc_from_hugepage; enum rte_intr_handle_type type; /**< handle type */ uint32_t max_intr; /**< max interrupt requested */ uint32_t nb_efd; /**< number of available efd(event fd) */ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */ + uint16_t nb_intr; + /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */ int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */ struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID]; - /**< intr vector epoll event */ + /**< intr vector epoll event */ + uint16_t vec_list_size; int *intr_vec; /**< intr vector number array */ }; diff --git a/lib/eal/version.map b/lib/eal/version.map index beeb986adc..56108d0998 100644 --- a/lib/eal/version.map +++ b/lib/eal/version.map @@ -426,6 +426,36 @@ EXPERIMENTAL { # added in 21.08 rte_power_monitor_multi; # WINDOWS_NO_EXPORT + + # added in 21.11 + rte_intr_handle_fd_set; + rte_intr_handle_fd_get; + rte_intr_handle_dev_fd_set; + rte_intr_handle_dev_fd_get; + rte_intr_handle_type_set; + rte_intr_handle_type_get; + rte_intr_handle_instance_alloc; + rte_intr_handle_instance_index_get; + rte_intr_handle_instance_free; + rte_intr_handle_instance_index_set; + rte_intr_handle_event_list_update; + rte_intr_handle_max_intr_set; + rte_intr_handle_max_intr_get; + rte_intr_handle_nb_efd_set; + rte_intr_handle_nb_efd_get; + rte_intr_handle_nb_intr_get; + rte_intr_handle_efds_index_set; + rte_intr_handle_efds_index_get; + rte_intr_handle_efds_base; + rte_intr_handle_elist_index_set; + rte_intr_handle_elist_index_get; + rte_intr_handle_efd_counter_size_set; + rte_intr_handle_efd_counter_size_get; + rte_intr_handle_vec_list_alloc; + rte_intr_handle_vec_list_index_set; + rte_intr_handle_vec_list_index_get; + rte_intr_handle_vec_list_free; + rte_intr_handle_vec_list_base; }; INTERNAL {
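
For completeness, an illustrative sketch (again, not part of the patch) of how
the new intr_vec accessors might be used when mapping Rx queues to interrupt
vectors. The function name, the vector list name string and the queue count
argument are invented for the example; the calls are the ones exported above.

    #include <rte_interrupts.h>
    #include <rte_errno.h>

    /* Illustrative only: allocate the vector list and map each Rx queue to an
     * interrupt vector through the new accessors instead of touching
     * intr_handle->intr_vec directly.
     */
    static int
    example_map_rx_queues(struct rte_intr_handle *handle, int nb_rx_queues)
    {
            int i;

            /* Allocates handle->intr_vec with nb_rx_queues entries (rte_zmalloc). */
            if (rte_intr_handle_vec_list_alloc(handle, "example-intr-vec",
                                               nb_rx_queues))
                    return -rte_errno;

            for (i = 0; i < nb_rx_queues; i++) {
                    /* Data queues conventionally start at RTE_INTR_VEC_RXTX_OFFSET;
                     * vector 0 is kept for non-Rx/Tx interrupts.
                     */
                    if (rte_intr_handle_vec_list_index_set(handle, i,
                                    RTE_INTR_VEC_RXTX_OFFSET + i)) {
                            rte_intr_handle_vec_list_free(handle);
                            return -rte_errno;
                    }
            }

            return 0;
    }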