From patchwork Fri Dec 15 12:59:30 2017
X-Patchwork-Submitter: Sunil Kumar Kori
X-Patchwork-Id: 32325
X-Patchwork-Delegate: jerinj@marvell.com
From: Sunil Kumar Kori
Date: Fri, 15 Dec 2017 18:29:30 +0530
Message-ID: <20171215125933.14302-4-sunil.kori@nxp.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20171215125933.14302-1-sunil.kori@nxp.com>
References: <20171215125933.14302-1-sunil.kori@nxp.com>
Subject: [dpdk-dev] [PATCH 3/6] net/dpaa: ethdev Rx queue configurations with eventdev
List-Id: DPDK patches and discussions
Sender: "dev"

A given Ethernet Rx queue can be attached to an event queue in either
parallel or atomic mode. This patch implements the Rx queue configuration
and the corresponding callbacks to handle events from the respective
queues.
Signed-off-by: Sunil Kumar Kori
---
 drivers/net/dpaa/Makefile                 |   2 +
 drivers/net/dpaa/dpaa_ethdev.c            | 110 ++++++++++++++++++++++++++++--
 drivers/net/dpaa/dpaa_ethdev.h            |  29 ++++++++
 drivers/net/dpaa/dpaa_rxtx.c              |  80 +++++++++++++++++++++-
 drivers/net/dpaa/rte_pmd_dpaa_version.map |   2 +
 5 files changed, 214 insertions(+), 9 deletions(-)

diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index a99d1ee..c644353 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -43,7 +43,9 @@ CFLAGS += -I$(RTE_SDK_DPAA)/
 CFLAGS += -I$(RTE_SDK_DPAA)/include
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/base/qbman
 CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/event/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 7798994..457e421 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -121,6 +121,21 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {

 static struct rte_dpaa_driver rte_dpaa_pmd;

+static inline void
+dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
+{
+	memset(opts, 0, sizeof(struct qm_mcc_initfq));
+	opts->we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
+	opts->fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+			    QM_FQCTRL_PREFERINCACHE;
+	opts->fqd.context_a.stashing.exclusive = 0;
+	if (dpaa_svr_family != SVR_LS1046A_FAMILY)
+		opts->fqd.context_a.stashing.annotation_cl =
+			DPAA_IF_RX_ANNOTATION_STASH;
+	opts->fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+	opts->fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+}
+
 static int
 dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 {
@@ -561,6 +576,92 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }

+int dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
+		int eth_rx_queue_id,
+		u16 ch_id,
+		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+	int ret;
+	u32 flags = 0;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct qman_fq *rxq = &dpaa_intf->rx_queues[eth_rx_queue_id];
+	struct qm_mcc_initfq opts = {0};
+
+	dpaa_poll_queue_default_config(&opts);
+
+	switch (queue_conf->ev.sched_type) {
+	case RTE_SCHED_TYPE_ATOMIC:
+		opts.fqd.fq_ctrl |= QM_FQCTRL_HOLDACTIVE;
+		/* Reset FQCTRL_AVOIDBLOCK bit as it is unnecessary
+		 * configuration with HOLD_ACTIVE setting
+		 */
+		opts.fqd.fq_ctrl &= (~QM_FQCTRL_AVOIDBLOCK);
+		rxq->cb.dqrr_dpdk_cb = dpaa_rx_cb_atomic;
+		break;
+	case RTE_SCHED_TYPE_ORDERED:
+		DPAA_PMD_ERR("Ordered queue schedule type is not supported\n");
+		return -1;
+	default:
+		opts.fqd.fq_ctrl |= QM_FQCTRL_AVOIDBLOCK;
+		rxq->cb.dqrr_dpdk_cb = dpaa_rx_cb_parallel;
+		break;
+	}
+
+	opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
+	opts.fqd.dest.channel = ch_id;
+	opts.fqd.dest.wq = queue_conf->ev.priority;
+
+	if (dpaa_intf->cgr_rx) {
+		opts.we_mask |= QM_INITFQ_WE_CGID;
+		opts.fqd.cgid = dpaa_intf->cgr_rx[eth_rx_queue_id].cgrid;
+		opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+	}
+
+	flags = QMAN_INITFQ_FLAG_SCHED;
+
+	ret = qman_init_fq(rxq, flags, &opts);
+	if (ret) {
+		DPAA_PMD_ERR("Channel/Queue association failed. fqid %d ret:%d",
+			     rxq->fqid, ret);
+		return ret;
+	}
+
+	/* copy configuration which needs to be filled during dequeue */
+	memcpy(&rxq->ev, &queue_conf->ev, sizeof(struct rte_event));
+	dev->data->rx_queues[eth_rx_queue_id] = rxq;
+
+	return ret;
+}
+
+int dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
+		int eth_rx_queue_id)
+{
+	struct qm_mcc_initfq opts;
+	int ret;
+	u32 flags = 0;
+	struct dpaa_if *dpaa_intf = dev->data->dev_private;
+	struct qman_fq *rxq = &dpaa_intf->rx_queues[eth_rx_queue_id];
+
+	dpaa_poll_queue_default_config(&opts);
+
+	if (dpaa_intf->cgr_rx) {
+		opts.we_mask |= QM_INITFQ_WE_CGID;
+		opts.fqd.cgid = dpaa_intf->cgr_rx[eth_rx_queue_id].cgrid;
+		opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+	}
+
+	ret = qman_init_fq(rxq, flags, &opts);
+	if (ret) {
+		DPAA_PMD_ERR("init rx fqid %d failed with ret: %d",
+			     rxq->fqid, ret);
+	}
+
+	rxq->cb.dqrr_dpdk_cb = NULL;
+	dev->data->rx_queues[eth_rx_queue_id] = NULL;
+
+	return 0;
+}
+
 static void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
 {
@@ -881,13 +982,8 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
 		return ret;
 	}
 	fq->is_static = false;
-	opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
-	opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
-			   QM_FQCTRL_PREFERINCACHE;
-	opts.fqd.context_a.stashing.exclusive = 0;
-	opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
-	opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
-	opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+	dpaa_poll_queue_default_config(&opts);

 	if (cgr_rx) {
 		/* Enable tail drop with cgr on this queue */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index c0a8430..b81522a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -36,6 +36,7 @@
 /* System headers */
 #include
 #include
+#include
 #include
 #include
@@ -50,6 +51,13 @@
 #error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
 #endif

+/* mbuf->seqn will be used to store event entry index for
+ * driver specific usage. For parallel mode queues, invalid
+ * index will be set and for atomic mode queues, valid value
+ * ranging from 1 to 16.
+ */
+#define DPAA_INVALID_MBUF_SEQN  0
+
 /* we will re-use the HEADROOM for annotation in RX */
 #define DPAA_HW_BUF_RESERVE	0
 #define DPAA_PACKET_LAYOUT_ALIGN	64
@@ -178,4 +186,25 @@ struct dpaa_if_stats {
 	uint64_t tund; /**
+
+int dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
+		int eth_rx_queue_id,
+		u16 ch_id,
+		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
+
+int dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
+		int eth_rx_queue_id);
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
 #include
 #include
 #include
+#include

 #include "dpaa_ethdev.h"
 #include "dpaa_rxtx.h"
 #include
 #include
+#include

 #include
 #include
 #include
@@ -451,6 +453,67 @@ dpaa_eth_queue_portal_rx(struct qman_fq *fq,
 	return qman_portal_poll_rx(nb_bufs, (void **)bufs, fq->qp);
 }

+enum qman_cb_dqrr_result
+dpaa_rx_cb_parallel(void *event,
+		    struct qman_portal *qm __always_unused,
+		    struct qman_fq *fq,
+		    const struct qm_dqrr_entry *dqrr,
+		    void **bufs)
+{
+	u32 ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	struct rte_mbuf *mbuf;
+	struct rte_event *ev = (struct rte_event *)event;
+
+	mbuf = dpaa_eth_fd_to_mbuf(&dqrr->fd, ifid);
+	ev->event_ptr = (void *)mbuf;
+	ev->flow_id = fq->ev.flow_id;
+	ev->sub_event_type = fq->ev.sub_event_type;
+	ev->event_type = RTE_EVENT_TYPE_ETHDEV;
+	ev->op = RTE_EVENT_OP_NEW;
+	ev->sched_type = fq->ev.sched_type;
+	ev->queue_id = fq->ev.queue_id;
+	ev->priority = fq->ev.priority;
+	ev->impl_opaque = (uint8_t)DPAA_INVALID_MBUF_SEQN;
+	mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+	*bufs = mbuf;
+
+	return qman_cb_dqrr_consume;
+}
+
+enum qman_cb_dqrr_result
+dpaa_rx_cb_atomic(void *event,
+		  struct qman_portal *qm __always_unused,
+		  struct qman_fq *fq,
+		  const struct qm_dqrr_entry *dqrr,
+		  void **bufs)
+{
+	u8 index;
+	u32 ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+	struct rte_mbuf *mbuf;
+	struct rte_event *ev = (struct rte_event *)event;
+
+	mbuf = dpaa_eth_fd_to_mbuf(&dqrr->fd, ifid);
+	ev->event_ptr = (void *)mbuf;
+	ev->flow_id = fq->ev.flow_id;
+	ev->sub_event_type = fq->ev.sub_event_type;
+	ev->event_type = RTE_EVENT_TYPE_ETHDEV;
+	ev->op = RTE_EVENT_OP_NEW;
+	ev->sched_type = fq->ev.sched_type;
+	ev->queue_id = fq->ev.queue_id;
+	ev->priority = fq->ev.priority;
+
+	/* Save active dqrr entries */
+	index = DQRR_PTR2IDX(dqrr);
+	DPAA_PER_LCORE_DQRR_SIZE++;
+	DPAA_PER_LCORE_DQRR_HELD |= 1 << index;
+	DPAA_PER_LCORE_DQRR_MBUF(index) = mbuf;
+	ev->impl_opaque = index + 1;
+	mbuf->seqn = (uint32_t)index + 1;
+	*bufs = mbuf;
+
+	return qman_cb_dqrr_defer;
+}
+
 uint16_t dpaa_eth_queue_rx(void *q,
 			   struct rte_mbuf **bufs,
 			   uint16_t nb_bufs)
@@ -734,6 +797,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	uint32_t frames_to_send, loop, sent = 0;
 	uint16_t state;
 	int ret;
+	uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};

 	ret = rte_dpaa_portal_init((void *)0);
 	if (ret) {
@@ -794,14 +858,26 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 					goto send_pkts;
 				}
 			}
+			seqn = mbuf->seqn;
+			if (seqn != DPAA_INVALID_MBUF_SEQN) {
+				index = seqn - 1;
+				if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) {
+					flags[loop] =
+					   ((index & QM_EQCR_DCA_IDXMASK) << 8);
+					flags[loop] |= QMAN_ENQUEUE_FLAG_DCA;
+					DPAA_PER_LCORE_DQRR_SIZE--;
+					DPAA_PER_LCORE_DQRR_HELD &=
+								~(1 << index);
+				}
+			}
 		}

 send_pkts:
 		loop = 0;
 		while (loop < frames_to_send) {
 			loop += qman_enqueue_multi(q, &fd_arr[loop],
-						   NULL,
-						   frames_to_send - loop);
+						   &flags[loop],
+						   frames_to_send - loop);
 		}
 		nb_bufs -= frames_to_send;
 		sent += frames_to_send;
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index d76acbd..888f203 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -6,6 +6,8 @@ DPDK_17.11 {
 DPDK_18.02 {
 	global:

+	dpaa_eth_eventq_attach;
+	dpaa_eth_eventq_detach;
 	rte_pmd_dpaa_set_tx_loopback;

 	local: *;