From patchwork Mon Jul 2 16:54:33 2018
X-Patchwork-Submitter: Shally Verma
X-Patchwork-Id: 42109
X-Patchwork-Delegate: pablo.de.lara.guarch@intel.com
From: Shally Verma
To: pablo.de.lara.guarch@intel.com
Cc: dev@dpdk.org, pathreya@caviumnetworks.com, mchalla@caviumnetworks.com,
 Sunila Sahu, Ashish Gupta
Date: Mon, 2 Jul 2018 22:24:33 +0530
Message-Id: <1530550477-22444-3-git-send-email-shally.verma@caviumnetworks.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1530550477-22444-1-git-send-email-shally.verma@caviumnetworks.com>
References: <1530550477-22444-1-git-send-email-shally.verma@caviumnetworks.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v2 2/6] compress/octeontx: add device setup PMD ops
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

From: Sunila Sahu

Implement the device configure and close PMD ops:
- set up the stream resource memory pool
- set up and enable the hardware queue

Signed-off-by: Ashish Gupta
Signed-off-by: Shally Verma
Signed-off-by: Sunila Sahu
---
 drivers/compress/octeontx/zip_pmd.c | 251 ++++++++++++++++++++++++++++++++++++
 drivers/compress/octeontx/zipvf.c   |  73 +++++++++++
 drivers/compress/octeontx/zipvf.h   |  56 +++++++-
 3 files changed, 378 insertions(+), 2 deletions(-)

diff --git a/drivers/compress/octeontx/zip_pmd.c b/drivers/compress/octeontx/zip_pmd.c
index 2011db37e..44c271e1a 100644
--- a/drivers/compress/octeontx/zip_pmd.c
+++ b/drivers/compress/octeontx/zip_pmd.c
@@ -9,8 +9,259 @@
 #include
 #include
 
+static const struct rte_compressdev_capabilities
+				octtx_zip_pmd_capabilities[] = {
+	{	.algo = RTE_COMP_ALGO_DEFLATE,
+		/* Deflate */
+		.comp_feature_flags = 0,
+		/* Non-shareable priv XFORM and stateless */
+		.window_size = {
+			.min = 1,
+			.max = 14,
+			.increment = 1
+			/* size supported 2^1 to 2^14 */
+		},
+	},
+	RTE_COMP_END_OF_CAPABILITIES_LIST()
+};
+
+/** Configure device */
+static int
+zip_pmd_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	int nb_streams;
+	char res_pool[RTE_MEMZONE_NAMESIZE];
+	struct zip_vf *vf;
+	struct rte_mempool *zip_buf_mp;
+
+	if (!config || !dev)
+		return -EIO;
+
+	vf = (struct zip_vf *)(dev->data->dev_private);
+
+	/* create pool with maximum
+	 * numbers of resources required by streams
+	 */
+
+	/* use common pool for non-shareable priv_xform and stream */
+	nb_streams = config->max_nb_priv_xforms + config->max_nb_streams;
+
+	snprintf(res_pool, RTE_MEMZONE_NAMESIZE, "octtx_zip_res_pool%u",
+		 dev->data->dev_id);
+
+	/* TBD: should we use the per-core object cache for stream resources? */
+	zip_buf_mp = rte_mempool_create(
+			res_pool,
+			nb_streams * MAX_BUFS_PER_STREAM,
+			ZIP_BUF_SIZE,
+			0,
+			0,
+			NULL,
+			NULL,
+			NULL,
+			NULL,
+			SOCKET_ID_ANY,
+			0);
+
+	if (zip_buf_mp == NULL) {
+		ZIP_PMD_ERR(
+			"Failed to create buf mempool octtx_zip_res_pool%u",
+			dev->data->dev_id);
+		return -1;
+	}
+
+	vf->zip_mp = zip_buf_mp;
+
+	return 0;
+}
+
+/** Start device */
+static int
+zip_pmd_start(__rte_unused struct rte_compressdev *dev)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+zip_pmd_stop(__rte_unused struct rte_compressdev *dev)
+{
+
+}
+
+/** Close device */
+static int
+zip_pmd_close(struct rte_compressdev *dev)
+{
+	if (dev == NULL)
+		return -1;
+
+	struct zip_vf *vf = (struct zip_vf *)dev->data->dev_private;
+	rte_mempool_free(vf->zip_mp);
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+zip_pmd_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct zipvf_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+zip_pmd_stats_reset(struct rte_compressdev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct zipvf_qp *qp = dev->data->queue_pairs[qp_id];
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+/** Get device info */
+static void
+zip_pmd_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *dev_info)
+{
+	struct zip_vf *vf = (struct zip_vf *)dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->driver_name = dev->device->driver->name;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = octtx_zip_pmd_capabilities;
+		dev_info->max_nb_queue_pairs = vf->max_nb_queue_pairs;
+	}
+}
+
+/** Release queue pair */
+static int
+zip_pmd_qp_release(struct rte_compressdev *dev, uint16_t qp_id)
+{
+	struct zipvf_qp *qp = dev->data->queue_pairs[qp_id];
+	struct rte_ring *r = NULL;
+
+	if (qp != NULL) {
+		zipvf_q_term(qp);
+		r = rte_ring_lookup(qp->name);
+		if (r)
+			rte_ring_free(r);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Create a ring to place processed packets on */
+static struct rte_ring *
+zip_pmd_qp_create_processed_pkts_ring(struct zipvf_qp *qp,
+		unsigned int ring_size, int socket_id)
+{
+	struct rte_ring *r;
+
+	r = rte_ring_lookup(qp->name);
+	if (r) {
+		if (rte_ring_get_size(r) >= ring_size) {
+			ZIP_PMD_INFO("Reusing existing ring %s for processed"
+					" packets", qp->name);
+			return r;
+		}
+
+		ZIP_PMD_ERR("Unable to reuse existing ring %s for processed"
+				" packets", qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			RING_F_EXACT_SZ);
+}
+
+/** Setup a queue pair */
+static int
+zip_pmd_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id)
+{
+	struct zipvf_qp *qp = NULL;
+	struct zip_vf *vf;
+	int ret;
+
+	if (!dev)
+		return -1;
+
+	vf = (struct zip_vf *)dev->data->dev_private;
+
+	/* Reuse the existing queue pair if it is already set up. */
+	if (dev->data->queue_pairs[qp_id] != NULL) {
+		ZIP_PMD_INFO("Using existing queue pair %d ", qp_id);
+		return 0;
+	}
+
+	/* Allocate the queue pair data structure.
+	 */
+	qp = rte_zmalloc_socket("ZIP PMD Queue Pair", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return -ENOMEM;
+
+	snprintf(qp->name, sizeof(qp->name),
+		 "zip_pmd_%u_qp_%u",
+		 dev->data->dev_id, qp_id);
+
+	/* Create completion queue up to max_inflight_ops */
+	qp->processed_pkts = zip_pmd_qp_create_processed_pkts_ring(qp,
+			max_inflight_ops, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	qp->id = qp_id;
+	qp->vf = vf;
+
+	ret = zipvf_q_init(qp);
+	if (ret < 0)
+		goto qp_setup_cleanup;
+
+	dev->data->queue_pairs[qp_id] = qp;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	return 0;
+
+qp_setup_cleanup:
+	/* rte_ring_free() and rte_free() both tolerate NULL */
+	rte_ring_free(qp->processed_pkts);
+	rte_free(qp);
+	return -1;
+}
+
 struct rte_compressdev_ops octtx_zip_pmd_ops = {
+	.dev_configure		= zip_pmd_config,
+	.dev_start		= zip_pmd_start,
+	.dev_stop		= zip_pmd_stop,
+	.dev_close		= zip_pmd_close,
+
+	.stats_get		= zip_pmd_stats_get,
+	.stats_reset		= zip_pmd_stats_reset,
+
+	.dev_infos_get		= zip_pmd_info_get,
+
+	.queue_pair_setup	= zip_pmd_qp_setup,
+	.queue_pair_release	= zip_pmd_qp_release,
 };

diff --git a/drivers/compress/octeontx/zipvf.c b/drivers/compress/octeontx/zipvf.c
index a85d7f323..2a74e8bbb 100644
--- a/drivers/compress/octeontx/zipvf.c
+++ b/drivers/compress/octeontx/zipvf.c
@@ -18,6 +18,79 @@ zip_reg_write64(uint8_t *hw_addr, uint64_t offset, uint64_t val)
 	*(uint64_t *)(base + offset) = val;
 }
 
+static void
+zip_q_enable(struct zipvf_qp *qp)
+{
+	zip_vqx_ena_t que_ena;
+
+	/* ZIP VFx command queue enable */
+	que_ena.u = 0ull;
+	que_ena.s.ena = 1;
+
+	zip_reg_write64(qp->vf->vbar0, ZIP_VQ_ENA, que_ena.u);
+	rte_wmb();
+}
+
+/* initialize given qp on zip device */
+int
+zipvf_q_init(struct zipvf_qp *qp)
+{
+	zip_vqx_sbuf_addr_t que_sbuf_addr;
+	uint64_t size;
+	void *cmdq_addr;
+	uint64_t iova;
+	struct zipvf_cmdq *cmdq = &qp->cmdq;
+	struct zip_vf *vf = qp->vf;
+
+	/* allocate and set up the instruction queue */
+	size = ZIP_MAX_CMDQ_SIZE;
+	size = ZIP_ALIGN_ROUNDUP(size, ZIP_CMDQ_ALIGN);
+
+	cmdq_addr = rte_zmalloc(qp->name, size, ZIP_CMDQ_ALIGN);
+	if (cmdq_addr == NULL)
+		return -1;
+
+	cmdq->sw_head = (uint64_t *)cmdq_addr;
+	cmdq->va = (uint8_t *)cmdq_addr;
+	iova = rte_mem_virt2iova(cmdq_addr);
+
+	/* Check for 128-byte alignment; if not aligned, round up */
+	iova = (uint64_t)ZIP_ALIGN_ROUNDUP(iova, 128);
+	cmdq->iova = iova;
+
+	que_sbuf_addr.u = 0ull;
+	que_sbuf_addr.s.ptr = (cmdq->iova >> 7);
+	zip_reg_write64(vf->vbar0, ZIP_VQ_SBUF_ADDR, que_sbuf_addr.u);
+
+	zip_q_enable(qp);
+
+	memset(cmdq->va, 0, ZIP_MAX_CMDQ_SIZE);
+	rte_spinlock_init(&cmdq->qlock);
+
+	return 0;
+}
+
+int
+zipvf_q_term(struct zipvf_qp *qp)
+{
+	struct zipvf_cmdq *cmdq = &qp->cmdq;
+	zip_vqx_ena_t que_ena;
+	struct zip_vf *vf = qp->vf;
+
+	/* Disable the ZIP queue before freeing its command buffer */
+	que_ena.u = 0ull;
+	zip_reg_write64(vf->vbar0, ZIP_VQ_ENA, que_ena.u);
+
+	if (cmdq->va != NULL) {
+		memset(cmdq->va, 0, ZIP_MAX_CMDQ_SIZE);
+		rte_free(cmdq->va);
+	}
+
+	return 0;
+}
+
 int
 zipvf_create(struct rte_compressdev *compressdev)
 {

diff --git a/drivers/compress/octeontx/zipvf.h b/drivers/compress/octeontx/zipvf.h
index 36c44c8c5..4c7eb8862 100644
--- a/drivers/compress/octeontx/zipvf.h
+++ b/drivers/compress/octeontx/zipvf.h
@@ -75,8 +75,53 @@ int octtx_zip_logtype_driver;
 	ZIP_PMD_LOG(INFO, fmt, ## args)
 #define ZIP_PMD_ERR(fmt, args...) \
 	ZIP_PMD_LOG(ERR, fmt, ## args)
-#define ZIP_PMD_WARN(fmt, args...)	\
-	ZIP_PMD_LOG(WARNING, fmt, ## args)
+
+/* resources required to process a stream */
+enum {
+	RES_BUF = 0,
+	CMD_BUF,
+	HASH_CTX_BUF,
+	DECOMP_CTX_BUF,
+	IN_DATA_BUF,
+	OUT_DATA_BUF,
+	HISTORY_DATA_BUF,
+	MAX_BUFS_PER_STREAM
+};
+
+struct zipvf_qp;
+
+/**
+ * ZIP instruction queue
+ */
+struct zipvf_cmdq {
+	rte_spinlock_t qlock;
+	/* queue lock */
+	uint64_t *sw_head;
+	/* 64-bit word pointer to queue head */
+	uint8_t *va;
+	/* pointer to instruction queue virtual address */
+	rte_iova_t iova;
+	/* iova addr of cmdq head */
+};
+
+/**
+ * ZIP device queue structure
+ */
+struct zipvf_qp {
+	struct zipvf_cmdq cmdq;
+	/* Hardware queue handle */
+	struct rte_ring *processed_pkts;
+	/* Ring for placing processed packets */
+	struct rte_compressdev_stats qp_stats;
+	/* Queue pair statistics */
+	uint16_t id;
+	/* Queue pair identifier */
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+	/* Unique queue pair name */
+	struct zip_vf *vf;
+	/* pointer to parent VF */
+} __rte_cache_aligned;

 /**
  * ZIP VF device structure.
@@ -102,6 +147,13 @@ zipvf_create(struct rte_compressdev *compressdev);
 int
 zipvf_destroy(struct rte_compressdev *compressdev);

+int
+zipvf_q_init(struct zipvf_qp *qp);
+
+int
+zipvf_q_term(struct zipvf_qp *qp);
+
 uint64_t
 zip_reg_read64(uint8_t *hw_addr, uint64_t offset);