From patchwork Fri Feb 22 11:16:04 2019
X-Patchwork-Submitter: Hemant Agrawal
X-Patchwork-Id: 50446
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: "dev@dpdk.org"
CC: "ferruh.yigit@intel.com", Shreyansh Jain
Date: Fri, 22 Feb 2019 11:16:04 +0000
Message-ID: <20190222111440.30530-5-hemant.agrawal@nxp.com>
References: <20190222111440.30530-1-hemant.agrawal@nxp.com>
In-Reply-To: <20190222111440.30530-1-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH 5/6] net/dpaa2: support low level loopback tester

Signed-off-by: Hemant Agrawal
---
 doc/guides/nics/dpaa2.rst                          |   5 +
 .../fslmc/qbman/include/fsl_qbman_portal.h         |  18 ++
 drivers/bus/fslmc/qbman/qbman_portal.c             | 161 ++++++++++++++++++
 drivers/bus/fslmc/rte_bus_fslmc_version.map        |   1 +
 drivers/net/dpaa2/dpaa2_ethdev.c                   |  57 ++++++-
 drivers/net/dpaa2/dpaa2_ethdev.h                   |   3 +
 drivers/net/dpaa2/dpaa2_rxtx.c                     | 161 ++++++++++++++++++
 7 files changed, 402 insertions(+), 4 deletions(-)

diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 769dc4e12..392ab0580 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -499,6 +499,11 @@ for details.
       Done
       testpmd>
 
+
+* Use dev arg option ``drv_loopback=1`` to loopback packets at
+  driver level. Any packet received will be reflected back by the
+  driver on same port.
+
 Enabling logs
 -------------
 
diff --git a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
index a9192d3cb..07b8a4372 100644
--- a/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
+++ b/drivers/bus/fslmc/qbman/include/fsl_qbman_portal.h
@@ -1039,6 +1039,24 @@ int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 			       const struct qbman_fd *fd,
 			       uint32_t *flags,
 			       int num_frames);
+
+/**
+ * qbman_swp_enqueue_multiple_fd() - Enqueue multiple frames with same
+				     eq descriptor
+ * @s: the software portal used for enqueue.
+ * @d: the enqueue descriptor.
+ * @fd: the frame descriptor to be enqueued.
+ * @flags: bit-mask of QBMAN_ENQUEUE_FLAG_*** options
+ * @num_frames: the number of the frames to be enqueued.
+ *
+ * Return the number of enqueued frames, -EBUSY if the EQCR is not ready.
+ */
+int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
+				  const struct qbman_eq_desc *d,
+				  struct qbman_fd **fd,
+				  uint32_t *flags,
+				  int num_frames);
+
 /**
  * qbman_swp_enqueue_multiple_desc() - Enqueue multiple frames with
  *				       individual eq descriptor.
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index f49b18097..20da8b921 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -93,6 +93,20 @@ qbman_swp_enqueue_multiple_mem_back(struct qbman_swp *s,
 		uint32_t *flags,
 		int num_frames);
 
+static int
+qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames);
+
+static int
+qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames);
+
 static int
 qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
@@ -139,6 +153,13 @@ static int (*qbman_swp_enqueue_multiple_ptr)(struct qbman_swp *s,
 		int num_frames)
 	= qbman_swp_enqueue_multiple_direct;
 
+static int (*qbman_swp_enqueue_multiple_fd_ptr)(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames)
+	= qbman_swp_enqueue_multiple_fd_direct;
+
 static int (*qbman_swp_enqueue_multiple_desc_ptr)(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
@@ -243,6 +264,8 @@ struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
 			qbman_swp_enqueue_ring_mode_mem_back;
 		qbman_swp_enqueue_multiple_ptr =
 			qbman_swp_enqueue_multiple_mem_back;
+		qbman_swp_enqueue_multiple_fd_ptr =
+			qbman_swp_enqueue_multiple_fd_mem_back;
 		qbman_swp_enqueue_multiple_desc_ptr =
 			qbman_swp_enqueue_multiple_desc_mem_back;
 		qbman_swp_pull_ptr = qbman_swp_pull_mem_back;
@@ -862,6 +885,144 @@ inline int qbman_swp_enqueue_multiple(struct qbman_swp *s,
 	return qbman_swp_enqueue_multiple_ptr(s, d, fd, flags, num_frames);
 }
 
+static int qbman_swp_enqueue_multiple_fd_direct(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames)
+{
+	uint32_t *p = NULL;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+	uint64_t addr_cena;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cena_read_reg(&s->sys,
+				QBMAN_CENA_SWP_EQCR_CI) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+					eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+		s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cena_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		memcpy(&p[1], &cl[1], 28);
+		memcpy(&p[8], fd[i], sizeof(struct qbman_fd));
+		eqcr_pi++;
+	}
+
+	lwsync();
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cena_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+		}
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+
+	/* Flush all the cacheline without load/store in between */
+	eqcr_pi = s->eqcr.pi;
+	addr_cena = (size_t)s->sys.addr_cena;
+	for (i = 0; i < num_enqueued; i++) {
+		dcbf(addr_cena +
+			QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		eqcr_pi++;
+	}
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	return num_enqueued;
+}
+
+static int qbman_swp_enqueue_multiple_fd_mem_back(struct qbman_swp *s,
+		const struct qbman_eq_desc *d,
+		struct qbman_fd **fd,
+		uint32_t *flags,
+		int num_frames)
+{
+	uint32_t *p = NULL;
+	const uint32_t *cl = qb_cl(d);
+	uint32_t eqcr_ci, eqcr_pi, half_mask, full_mask;
+	int i, num_enqueued = 0;
+
+	half_mask = (s->eqcr.pi_ci_mask>>1);
+	full_mask = s->eqcr.pi_ci_mask;
+	if (!s->eqcr.available) {
+		eqcr_ci = s->eqcr.ci;
+		s->eqcr.ci = qbman_cena_read_reg(&s->sys,
+				QBMAN_CENA_SWP_EQCR_CI_MEMBACK) & full_mask;
+		s->eqcr.available = qm_cyc_diff(s->eqcr.pi_ring_size,
+					eqcr_ci, s->eqcr.ci);
+		if (!s->eqcr.available)
+			return 0;
+	}
+
+	eqcr_pi = s->eqcr.pi;
+	num_enqueued = (s->eqcr.available < num_frames) ?
+		s->eqcr.available : num_frames;
+	s->eqcr.available -= num_enqueued;
+	/* Fill in the EQCR ring */
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cena_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		memcpy(&p[1], &cl[1], 28);
+		memcpy(&p[8], fd[i], sizeof(struct qbman_fd));
+		eqcr_pi++;
+	}
+
+	/* Set the verb byte, have to substitute in the valid-bit */
+	eqcr_pi = s->eqcr.pi;
+	for (i = 0; i < num_enqueued; i++) {
+		p = qbman_cena_write_start_wo_shadow(&s->sys,
+				QBMAN_CENA_SWP_EQCR(eqcr_pi & half_mask));
+		p[0] = cl[0] | s->eqcr.pi_vb;
+		if (flags && (flags[i] & QBMAN_ENQUEUE_FLAG_DCA)) {
+			struct qbman_eq_desc *d = (struct qbman_eq_desc *)p;
+
+			d->eq.dca = (1 << QB_ENQUEUE_CMD_DCA_EN_SHIFT) |
+				((flags[i]) & QBMAN_EQCR_DCA_IDXMASK);
+		}
+		eqcr_pi++;
+		if (!(eqcr_pi & half_mask))
+			s->eqcr.pi_vb ^= QB_VALID_BIT;
+	}
+	s->eqcr.pi = eqcr_pi & full_mask;
+
+	dma_wmb();
+	qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_EQCR_PI,
+				(QB_RT_BIT)|(s->eqcr.pi)|s->eqcr.pi_vb);
+	return num_enqueued;
+}
+
+inline int qbman_swp_enqueue_multiple_fd(struct qbman_swp *s,
+				const struct qbman_eq_desc *d,
+				struct qbman_fd **fd,
+				uint32_t *flags,
+				int num_frames)
+{
+	return qbman_swp_enqueue_multiple_fd_ptr(s, d, fd, flags, num_frames);
+}
+
 static int qbman_swp_enqueue_multiple_desc_direct(struct qbman_swp *s,
 		const struct qbman_eq_desc *d,
 		const struct qbman_fd *fd,
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 811a2e7b9..aa844dd80 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -141,5 +141,6 @@ DPDK_19.05 {
	qbman_result_eqresp_rc;
	qbman_result_eqresp_rspid;
	qbman_result_eqresp_set_rspid;
+	qbman_swp_enqueue_multiple_fd;
 
 } DPDK_18.11;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index f8c2983b9..08a95a14a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -27,6 +27,8 @@
 #include "dpaa2_ethdev.h"
 #include <fsl_qbman_debug.h>
 
+#define DRIVER_LOOPBACK_MODE "drv_loopback"
+
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
 		DEV_RX_OFFLOAD_VLAN_STRIP |
@@ -732,7 +734,8 @@ dpaa2_supported_ptypes_get(struct rte_eth_dev *dev)
 		RTE_PTYPE_UNKNOWN
 	};
 
-	if (dev->rx_pkt_burst == dpaa2_dev_prefetch_rx)
+	if (dev->rx_pkt_burst == dpaa2_dev_prefetch_rx ||
+		dev->rx_pkt_burst == dpaa2_dev_loopback_rx)
 		return ptypes;
 	return NULL;
 }
@@ -1997,6 +2000,43 @@ populate_mac_addr(struct fsl_mc_io *dpni_dev, struct dpaa2_dev_priv *priv,
 	return -1;
 }
 
+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+		      __rte_unused void *opaque)
+{
+	if (strcmp(value, "1"))
+		return -1;
+
+	return 0;
+}
+
+static int
+dpaa2_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+	struct rte_kvargs *kvlist;
+
+	if (!devargs)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (!kvlist)
+		return 0;
+
+	if (!rte_kvargs_count(kvlist, key)) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+
+	if (rte_kvargs_process(kvlist, key,
+			       check_devargs_handler, NULL) < 0) {
+		rte_kvargs_free(kvlist);
+		return 0;
+	}
+	rte_kvargs_free(kvlist);
+
+	return 1;
+}
+
 static int
 dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 {
@@ -2016,7 +2056,10 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
	 * plugged.
	 */
 	eth_dev->dev_ops = &dpaa2_ethdev_ops;
-	eth_dev->rx_pkt_burst = dpaa2_dev_prefetch_rx;
+	if (dpaa2_get_devargs(dev->devargs, DRIVER_LOOPBACK_MODE))
+		eth_dev->rx_pkt_burst = dpaa2_dev_loopback_rx;
+	else
+		eth_dev->rx_pkt_burst = dpaa2_dev_prefetch_rx;
 	eth_dev->tx_pkt_burst = dpaa2_dev_tx;
 	return 0;
 }
@@ -2133,7 +2176,12 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->dev_ops = &dpaa2_ethdev_ops;
 
-	eth_dev->rx_pkt_burst = dpaa2_dev_prefetch_rx;
+	if (dpaa2_get_devargs(dev->devargs, DRIVER_LOOPBACK_MODE)) {
+		eth_dev->rx_pkt_burst = dpaa2_dev_loopback_rx;
+		DPAA2_PMD_INFO("Loopback mode");
+	} else {
+		eth_dev->rx_pkt_burst = dpaa2_dev_prefetch_rx;
+	}
 	eth_dev->tx_pkt_burst = dpaa2_dev_tx;
 
 	RTE_LOG(INFO, PMD, "%s: netdev created\n", eth_dev->data->name);
@@ -2251,7 +2299,8 @@ static struct rte_dpaa2_driver rte_dpaa2_pmd = {
 };
 
 RTE_PMD_REGISTER_DPAA2(net_dpaa2, rte_dpaa2_pmd);
-
+RTE_PMD_REGISTER_PARAM_STRING(net_dpaa2,
+		DRIVER_LOOPBACK_MODE "=<int>");
 RTE_INIT(dpaa2_pmd_init_log)
 {
 	dpaa2_logtype_pmd = rte_log_register("pmd.net.dpaa2");
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 13259be7d..7148104ec 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -125,6 +125,9 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 int dpaa2_eth_eventq_detach(const struct rte_eth_dev *dev,
 		int eth_rx_queue_id);
 
+uint16_t dpaa2_dev_loopback_rx(void *queue, struct rte_mbuf **bufs,
+				uint16_t nb_pkts);
+
 uint16_t dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs,
 			       uint16_t nb_pkts);
 void dpaa2_dev_process_parallel_event(struct qbman_swp *swp,
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 1aa184730..c6e50123c 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1143,3 +1143,164 @@ dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	(void)nb_pkts;
 	return 0;
 }
+
+#if defined(RTE_TOOLCHAIN_GCC)
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#elif defined(RTE_TOOLCHAIN_CLANG)
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wcast-qual"
+#endif
+
+/* This function loopbacks all the received packets.*/
+uint16_t
+dpaa2_dev_loopback_rx(void *queue,
+		      struct rte_mbuf **bufs __rte_unused,
+		      uint16_t nb_pkts)
+{
+	/* Function receive frames for a given device and VQ*/
+	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)queue;
+	struct qbman_result *dq_storage, *dq_storage1 = NULL;
+	uint32_t fqid = dpaa2_q->fqid;
+	int ret, num_rx = 0, num_tx = 0, pull_size;
+	uint8_t pending, status;
+	struct qbman_swp *swp;
+	struct qbman_fd *fd[DPAA2_LX2_DQRR_RING_SIZE];
+	struct qbman_pull_desc pulldesc;
+	struct qbman_eq_desc eqdesc;
+	struct queue_storage_info_t *q_storage = dpaa2_q->q_storage;
+	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
+	struct dpaa2_dev_priv *priv = eth_data->dev_private;
+	struct dpaa2_queue *tx_q = priv->tx_vq[0];
+	/* todo - currently we are using 1st TX queue only for loopback*/
+
+	if (unlikely(!DPAA2_PER_LCORE_ETHRX_DPIO)) {
+		ret = dpaa2_affine_qbman_ethrx_swp();
+		if (ret) {
+			DPAA2_PMD_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_ETHRX_PORTAL;
+	pull_size = (nb_pkts > dpaa2_dqrr_size) ?
+			dpaa2_dqrr_size : nb_pkts;
+	if (unlikely(!q_storage->active_dqs)) {
+		q_storage->toggle = 0;
+		dq_storage = q_storage->dq_storage[q_storage->toggle];
+		q_storage->last_num_pkts = pull_size;
+		qbman_pull_desc_clear(&pulldesc);
+		qbman_pull_desc_set_numframes(&pulldesc,
+					      q_storage->last_num_pkts);
+		qbman_pull_desc_set_fq(&pulldesc, fqid);
+		qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+			(size_t)(DPAA2_VADDR_TO_IOVA(dq_storage)), 1);
+		if (check_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index)) {
+			while (!qbman_check_command_complete(
+			       get_swp_active_dqs(
+			       DPAA2_PER_LCORE_ETHRX_DPIO->index)))
+				;
+			clear_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index);
+		}
+		while (1) {
+			if (qbman_swp_pull(swp, &pulldesc)) {
+				DPAA2_PMD_DP_DEBUG(
+					"VDQ command not issued.QBMAN busy\n");
+				/* Portal was busy, try again */
+				continue;
+			}
+			break;
+		}
+		q_storage->active_dqs = dq_storage;
+		q_storage->active_dpio_id = DPAA2_PER_LCORE_ETHRX_DPIO->index;
+		set_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index,
+				   dq_storage);
+	}
+
+	dq_storage = q_storage->active_dqs;
+	rte_prefetch0((void *)(size_t)(dq_storage));
+	rte_prefetch0((void *)(size_t)(dq_storage + 1));
+
+	/* Prepare next pull descriptor. This will give space for the
+	 * prefething done on DQRR entries
+	 */
+	q_storage->toggle ^= 1;
+	dq_storage1 = q_storage->dq_storage[q_storage->toggle];
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc, pull_size);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage1,
+		(size_t)(DPAA2_VADDR_TO_IOVA(dq_storage1)), 1);
+
+	/*Prepare enqueue descriptor*/
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, tx_q->fqid);
+
+	/* Check if the previous issued command is completed.
+	 * Also seems like the SWP is shared between the Ethernet Driver
+	 * and the SEC driver.
+	 */
+	while (!qbman_check_command_complete(dq_storage))
+		;
+	if (dq_storage == get_swp_active_dqs(q_storage->active_dpio_id))
+		clear_swp_active_dqs(q_storage->active_dpio_id);
+
+	pending = 1;
+
+	do {
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_check_new_result(dq_storage))
+			;
+		rte_prefetch0((void *)((size_t)(dq_storage + 2)));
+		/* Check whether Last Pull command is Expired and
+		 * setting Condition for Loop termination
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			pending = 0;
+			/* Check for valid frame. */
+			status = qbman_result_DQ_flags(dq_storage);
+			if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0))
+				continue;
+		}
+		fd[num_rx] = (struct qbman_fd *)qbman_result_DQ_fd(dq_storage);
+
+		dq_storage++;
+		num_rx++;
+	} while (pending);
+
+	while (num_tx < num_rx) {
+		num_tx += qbman_swp_enqueue_multiple_fd(swp, &eqdesc,
+				&fd[num_tx], 0, num_rx - num_tx);
+	}
+
+	if (check_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index)) {
+		while (!qbman_check_command_complete(
+		       get_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index)))
+			;
+		clear_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index);
+	}
+	/* issue a volatile dequeue command for next pull */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			DPAA2_PMD_DP_DEBUG("VDQ command is not issued."
+ "QBMAN is busy (2)\n"); + continue; + } + break; + } + q_storage->active_dqs = dq_storage1; + q_storage->active_dpio_id = DPAA2_PER_LCORE_ETHRX_DPIO->index; + set_swp_active_dqs(DPAA2_PER_LCORE_ETHRX_DPIO->index, dq_storage1); + + dpaa2_q->rx_pkts += num_rx; + dpaa2_q->tx_pkts += num_tx; + + return 0; +} +#if defined(RTE_TOOLCHAIN_GCC) +#pragma GCC diagnostic pop +#elif defined(RTE_TOOLCHAIN_CLANG) +#pragma clang diagnostic pop +#endif