From patchwork Mon Jun 3 17:32:31 2019
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 54255
X-Patchwork-Delegate: jerinj@marvell.com
From: Anoob Joseph
To: Jerin Jacob, Nikhil Rao, Erik Gabriel Carrillo, Abhinandan Gujjar,
 Bruce Richardson, Pablo de Lara
CC: Anoob Joseph, Narayana Prasad, Lukasz Bartosik, Pavan Nikhilesh,
 Hemant Agrawal, Nipun Gupta, Harry van Haaren, Mattias Rönnblom, Liang Ma
Date: Mon, 3 Jun 2019 23:02:31 +0530
Message-ID: <1559583160-13944-32-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1559583160-13944-1-git-send-email-anoobj@marvell.com>
References: <1559583160-13944-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH 31/39] eventdev: add routine to access event queue
 for eth Tx
List-Id: DPDK patches and discussions

When the application is written for single-stage eventmode, it is more
efficient to have the event loop in the application itself, rather than
passing it on to the helper. When the application's stage is in the ORDERED
sched mode, the application will have to change the sched type of the event
to ATOMIC before sending it, to ensure that ingress ordering is maintained.
Since it is the application that performs the Tx, the Tx queue ID must be
available in its space.
Signed-off-by: Anoob Joseph
Signed-off-by: Lukasz Bartosik
---
 lib/librte_eventdev/rte_eventdev_version.map |  1 +
 lib/librte_eventdev/rte_eventmode_helper.c   | 53 ++++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventmode_helper.h   | 21 +++++++++++
 3 files changed, 75 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 8137cb5..3cf926a 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -134,4 +134,5 @@ EXPERIMENTAL {
 	rte_eventmode_helper_initialize_devs;
 	rte_eventmode_helper_display_conf;
 	rte_eventmode_helper_get_event_lcore_links;
+	rte_eventmode_helper_get_tx_queue;
 };
diff --git a/lib/librte_eventdev/rte_eventmode_helper.c b/lib/librte_eventdev/rte_eventmode_helper.c
index 6c853f6..e7670e0 100644
--- a/lib/librte_eventdev/rte_eventmode_helper.c
+++ b/lib/librte_eventdev/rte_eventmode_helper.c
@@ -93,6 +93,24 @@ internal_get_next_active_core(struct eventmode_conf *em_conf,
 	return next_core;
 }
 
+static struct eventdev_params *
+internal_get_eventdev_params(struct eventmode_conf *em_conf,
+		uint8_t eventdev_id)
+{
+	int i;
+
+	for (i = 0; i < em_conf->nb_eventdev; i++) {
+		if (em_conf->eventdev_config[i].eventdev_id == eventdev_id)
+			break;
+	}
+
+	/* No match */
+	if (i == em_conf->nb_eventdev)
+		return NULL;
+
+	return &(em_conf->eventdev_config[i]);
+}
+
 /* Global functions */
 
 void __rte_experimental
@@ -927,3 +945,38 @@ rte_eventmode_helper_get_event_lcore_links(uint32_t lcore_id,
 
 	return lcore_nb_link;
 }
+uint8_t __rte_experimental
+rte_eventmode_helper_get_tx_queue(struct rte_eventmode_helper_conf *mode_conf,
+		uint8_t eventdev_id)
+{
+	struct eventdev_params *eventdev_config;
+	struct eventmode_conf *em_conf;
+
+	if (mode_conf == NULL) {
+		RTE_EM_HLPR_LOG_ERR("Invalid conf");
+		return (uint8_t)(-1);
+	}
+
+	if (mode_conf->mode_params == NULL) {
+		RTE_EM_HLPR_LOG_ERR("Invalid mode params");
+		return (uint8_t)(-1);
+	}
+
+	/* Get eventmode conf */
+	em_conf = (struct eventmode_conf *)(mode_conf->mode_params);
+
+	/* Get event device conf */
+	eventdev_config = internal_get_eventdev_params(em_conf, eventdev_id);
+
+	if (eventdev_config == NULL) {
+		RTE_EM_HLPR_LOG_ERR("Error reading eventdev conf");
+		return (uint8_t)(-1);
+	}
+
+	/*
+	 * The last queue is reserved to be used as the atomic queue for
+	 * the last stage (eth packet Tx stage)
+	 */
+	return eventdev_config->nb_eventqueue - 1;
+}
+
diff --git a/lib/librte_eventdev/rte_eventmode_helper.h b/lib/librte_eventdev/rte_eventmode_helper.h
index 925b660..cd6d708 100644
--- a/lib/librte_eventdev/rte_eventmode_helper.h
+++ b/lib/librte_eventdev/rte_eventmode_helper.h
@@ -136,6 +136,27 @@ rte_eventmode_helper_get_event_lcore_links(uint32_t lcore_id,
 		struct rte_eventmode_helper_conf *mode_conf,
 		struct rte_eventmode_helper_event_link_info **links);
 
+/**
+ * Get eventdev Tx queue
+ *
+ * If the application uses an event device which does not support an internal
+ * port, it needs to submit events to an atomic Tx queue before the final
+ * transmission. The Tx queue is atomic to make sure that the ingress order of
+ * the packets is maintained. This Tx queue is created internally by the
+ * eventmode helper subsystem, and the application needs its queue ID when
+ * running the execution loop.
+ *
+ * @param mode_conf
+ *   Configuration of the mode in which the app is doing packet handling
+ * @param eventdev_id
+ *   Event device ID
+ * @return
+ *   Tx queue ID
+ */
+uint8_t __rte_experimental
+rte_eventmode_helper_get_tx_queue(struct rte_eventmode_helper_conf *mode_conf,
+		uint8_t eventdev_id);
+
 #ifdef __cplusplus
 }
 #endif