From patchwork Mon Apr 8 11:22:44 2019
X-Patchwork-Submitter: Gagandeep Singh
X-Patchwork-Id: 52415
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Gagandeep Singh <g.singh@nxp.com>
To: dev@dpdk.org, ferruh.yigit@intel.com
CC: Gagandeep Singh <g.singh@nxp.com>
Date: Mon, 8 Apr 2019 11:22:44 +0000
Message-ID: <1554745507-15089-11-git-send-email-g.singh@nxp.com>
References: <1554745507-15089-1-git-send-email-g.singh@nxp.com>
In-Reply-To: <1554745507-15089-1-git-send-email-g.singh@nxp.com>
Subject: [dpdk-dev] [PATCH 10/13] net/enetc: enable Rx-Tx queue start/stop feature

Rx and Tx queue start-stop and deferred queue start features enabled.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 doc/guides/nics/enetc.rst          |   2 +
 doc/guides/nics/features/enetc.ini |   1 +
 drivers/net/enetc/enetc_ethdev.c   | 185 ++++++++++++++++++++++++++-----------
 3 files changed, 134 insertions(+), 54 deletions(-)

diff --git a/doc/guides/nics/enetc.rst b/doc/guides/nics/enetc.rst
index eeb0752..26d61f6 100644
--- a/doc/guides/nics/enetc.rst
+++ b/doc/guides/nics/enetc.rst
@@ -50,6 +50,8 @@ ENETC Features
 - Promiscuous
 - Multicast
 - Jumbo packets
+- Queue Start/Stop
+- Deferred Queue Start

 NIC Driver (PMD)
 ~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/enetc.ini b/doc/guides/nics/features/enetc.ini
index 0eed2cb..bd901fa 100644
--- a/doc/guides/nics/features/enetc.ini
+++ b/doc/guides/nics/features/enetc.ini
@@ -11,6 +11,7 @@ Promiscuous mode     = Y
 Allmulticast mode    = Y
 MTU update           = Y
 Jumbo frame          = Y
+Queue start/stop     = Y
 Linux VFIO           = Y
 ARMv8                = Y
 Usage doc            = Y
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index 4428678..db23276 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -203,7 +203,6 @@
 enetc_setup_txbdr(struct enetc_hw *hw, struct enetc_bdr *tx_ring)
 {
 	int idx = tx_ring->index;
-	uint32_t tbmr;
 	phys_addr_t bd_address;

 	bd_address = (phys_addr_t)
@@ -215,9 +214,6 @@
 	enetc_txbdr_wr(hw, idx, ENETC_TBLENR,
 		       ENETC_RTBLENR_LEN(tx_ring->bd_count));

-	tbmr = ENETC_TBMR_EN;
-	/* enable ring */
-	enetc_txbdr_wr(hw, idx, ENETC_TBMR, tbmr);
 	enetc_txbdr_wr(hw, idx, ENETC_TBCIR, 0);
 	enetc_txbdr_wr(hw, idx, ENETC_TBCISR, 0);
 	tx_ring->tcir = (void *)((size_t)hw->reg +
@@ -227,16 +223,22 @@
 }

 static int
-enetc_alloc_tx_resources(struct rte_eth_dev *dev,
-			 uint16_t queue_idx,
-			 uint16_t nb_desc)
+enetc_tx_queue_setup(struct rte_eth_dev *dev,
+		     uint16_t queue_idx,
+		     uint16_t nb_desc,
+		     unsigned int socket_id __rte_unused,
+		     const struct rte_eth_txconf *tx_conf)
 {
-	int err;
+	int err = 0;
 	struct enetc_bdr *tx_ring;
 	struct rte_eth_dev_data *data = dev->data;
 	struct enetc_eth_adapter *priv =
 			ENETC_DEV_PRIVATE(data->dev_private);

+	PMD_INIT_FUNC_TRACE();
+	if (nb_desc > MAX_BD_COUNT)
+		return -1;
+
 	tx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
 	if (tx_ring == NULL) {
 		ENETC_PMD_ERR("Failed to allocate TX ring memory");
@@ -253,6 +255,17 @@
 	enetc_setup_txbdr(&priv->hw.hw, tx_ring);
 	data->tx_queues[queue_idx] = tx_ring;

+	if (!tx_conf->tx_deferred_start) {
+		/* enable ring */
+		enetc_txbdr_wr(&priv->hw.hw, tx_ring->index,
+			       ENETC_TBMR, ENETC_TBMR_EN);
+		dev->data->tx_queue_state[tx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STARTED;
+	} else {
+		dev->data->tx_queue_state[tx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
 	return 0;
 fail:
 	rte_free(tx_ring);
@@ -260,24 +273,6 @@
 	return err;
 }

-static int
-enetc_tx_queue_setup(struct rte_eth_dev *dev,
-		     uint16_t queue_idx,
-		     uint16_t nb_desc,
-		     unsigned int socket_id __rte_unused,
-		     const struct rte_eth_txconf *tx_conf __rte_unused)
-{
-	int err = 0;
-
-	PMD_INIT_FUNC_TRACE();
-	if (nb_desc > MAX_BD_COUNT)
-		return -1;
-
-	err = enetc_alloc_tx_resources(dev, queue_idx, nb_desc);
-
-	return err;
-}
-
 static void
 enetc_tx_queue_release(void *txq)
 {
@@ -367,23 +362,27 @@
 	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rx_ring->mb_pool) -
 			      RTE_PKTMBUF_HEADROOM);
 	enetc_rxbdr_wr(hw, idx, ENETC_RBBSR, buf_size);
-	/* enable ring */
-	enetc_rxbdr_wr(hw, idx, ENETC_RBMR, ENETC_RBMR_EN);
 	enetc_rxbdr_wr(hw, idx, ENETC_RBPIR, 0);
 }

 static int
-enetc_alloc_rx_resources(struct rte_eth_dev *dev,
-			 uint16_t rx_queue_id,
-			 uint16_t nb_rx_desc,
-			 struct rte_mempool *mb_pool)
+enetc_rx_queue_setup(struct rte_eth_dev *dev,
+		     uint16_t rx_queue_id,
+		     uint16_t nb_rx_desc,
+		     unsigned int socket_id __rte_unused,
+		     const struct rte_eth_rxconf *rx_conf,
+		     struct rte_mempool *mb_pool)
 {
-	int err;
+	int err = 0;
 	struct enetc_bdr *rx_ring;
 	struct rte_eth_dev_data *data = dev->data;
 	struct enetc_eth_adapter *adapter =
 			ENETC_DEV_PRIVATE(data->dev_private);

+	PMD_INIT_FUNC_TRACE();
+	if (nb_rx_desc > MAX_BD_COUNT)
+		return -1;
+
 	rx_ring = rte_zmalloc(NULL, sizeof(struct enetc_bdr), 0);
 	if (rx_ring == NULL) {
 		ENETC_PMD_ERR("Failed to allocate RX ring memory");
@@ -400,6 +399,17 @@
 	enetc_setup_rxbdr(&adapter->hw.hw, rx_ring, mb_pool);
 	data->rx_queues[rx_queue_id] = rx_ring;

+	if (!rx_conf->rx_deferred_start) {
+		/* enable ring */
+		enetc_rxbdr_wr(&adapter->hw.hw, rx_ring->index, ENETC_RBMR,
+			       ENETC_RBMR_EN);
+		dev->data->rx_queue_state[rx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STARTED;
+	} else {
+		dev->data->rx_queue_state[rx_ring->index] =
+				RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
 	return 0;
 fail:
 	rte_free(rx_ring);
@@ -407,27 +417,6 @@
 	return err;
 }

-static int
-enetc_rx_queue_setup(struct rte_eth_dev *dev,
-		     uint16_t rx_queue_id,
-		     uint16_t nb_rx_desc,
-		     unsigned int socket_id __rte_unused,
-		     const struct rte_eth_rxconf *rx_conf __rte_unused,
-		     struct rte_mempool *mb_pool)
-{
-	int err = 0;
-
-	PMD_INIT_FUNC_TRACE();
-	if (nb_rx_desc > MAX_BD_COUNT)
-		return -1;
-
-	err = enetc_alloc_rx_resources(dev, rx_queue_id,
-				       nb_rx_desc,
-				       mb_pool);
-
-	return err;
-}
-
 static void
 enetc_rx_queue_release(void *rxq)
 {
@@ -666,6 +655,90 @@ int enetc_stats_get(struct rte_eth_dev *dev,
 	return 0;
 }

+static int
+enetc_rx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *rx_ring;
+	uint32_t rx_data;
+
+	rx_ring = dev->data->rx_queues[qidx];
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+		rx_data = enetc_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+					 ENETC_RBMR);
+		rx_data = rx_data | ENETC_RBMR_EN;
+		enetc_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+			       rx_data);
+		dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return 0;
+}
+
+static int
+enetc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *rx_ring;
+	uint32_t rx_data;
+
+	rx_ring = dev->data->rx_queues[qidx];
+	if (dev->data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+		rx_data = enetc_rxbdr_rd(&priv->hw.hw, rx_ring->index,
+					 ENETC_RBMR);
+		rx_data = rx_data & (~ENETC_RBMR_EN);
+		enetc_rxbdr_wr(&priv->hw.hw, rx_ring->index, ENETC_RBMR,
+			       rx_data);
+		dev->data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+static int
+enetc_tx_queue_start(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *tx_ring;
+	uint32_t tx_data;
+
+	tx_ring = dev->data->tx_queues[qidx];
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+		tx_data = enetc_txbdr_rd(&priv->hw.hw, tx_ring->index,
+					 ENETC_TBMR);
+		tx_data = tx_data | ENETC_TBMR_EN;
+		enetc_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+			       tx_data);
+		dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return 0;
+}
+
+static int
+enetc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t qidx)
+{
+	struct enetc_eth_adapter *priv =
+			ENETC_DEV_PRIVATE(dev->data->dev_private);
+	struct enetc_bdr *tx_ring;
+	uint32_t tx_data;
+
+	tx_ring = dev->data->tx_queues[qidx];
+	if (dev->data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+		tx_data = enetc_txbdr_rd(&priv->hw.hw, tx_ring->index,
+					 ENETC_TBMR);
+		tx_data = tx_data & (~ENETC_TBMR_EN);
+		enetc_txbdr_wr(&priv->hw.hw, tx_ring->index, ENETC_TBMR,
+			       tx_data);
+		dev->data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
 /*
  * The set of PCI devices this driver supports
  */
@@ -691,8 +764,12 @@ int enetc_stats_get(struct rte_eth_dev *dev,
 	.dev_infos_get        = enetc_dev_infos_get,
 	.mtu_set              = enetc_mtu_set,
 	.rx_queue_setup       = enetc_rx_queue_setup,
+	.rx_queue_start       = enetc_rx_queue_start,
+	.rx_queue_stop        = enetc_rx_queue_stop,
 	.rx_queue_release     = enetc_rx_queue_release,
 	.tx_queue_setup       = enetc_tx_queue_setup,
+	.tx_queue_start       = enetc_tx_queue_start,
+	.tx_queue_stop        = enetc_tx_queue_stop,
 	.tx_queue_release     = enetc_tx_queue_release,
 	.dev_supported_ptypes_get = enetc_supported_ptypes_get,
 };
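
For context on how an application would exercise the queue start/stop and deferred start paths wired up above, the sketch below shows a possible ethdev-level usage flow. It is only an illustration, not part of the patch: the function name, port_id, queue index 0, the 512-descriptor ring size, and the mbuf_pool argument are placeholder assumptions, and the port is assumed to have already been configured with rte_eth_dev_configure().

/*
 * Usage sketch only (not part of the patch): set up one Rx and one Tx
 * queue with deferred start, start the port, then start and stop the
 * queues individually.  IDs, descriptor counts and mbuf_pool are
 * arbitrary placeholders.
 */
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
deferred_queue_example(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
	/*
	 * Request deferred start; real code would typically start from
	 * dev_info.default_rxconf/default_txconf instead of zeroed structs.
	 */
	struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };
	struct rte_eth_txconf txconf = { .tx_deferred_start = 1 };
	int ret;

	/* Queues are created, but the PMD leaves their rings disabled. */
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     &rxconf, mbuf_pool);
	if (ret < 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     &txconf);
	if (ret < 0)
		return ret;

	/* Deferred queues are not started by rte_eth_dev_start(). */
	ret = rte_eth_dev_start(port_id);
	if (ret < 0)
		return ret;

	/* Explicit per-queue start ends up in enetc_rx/tx_queue_start(). */
	ret = rte_eth_dev_rx_queue_start(port_id, 0);
	if (ret < 0)
		return ret;
	ret = rte_eth_dev_tx_queue_start(port_id, 0);
	if (ret < 0)
		return ret;

	/* ... run traffic ... */

	/* Queues can be stopped again without stopping the whole port. */
	rte_eth_dev_rx_queue_stop(port_id, 0);
	rte_eth_dev_tx_queue_stop(port_id, 0);

	return 0;
}

With deferred start requested at setup time, the new enetc handlers simply set or clear the ring-enable bit in ENETC_RBMR/ENETC_TBMR and track the queue in dev->data->rx/tx_queue_state, which is why a start or stop request on a queue already in the requested state is a harmless no-op.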