From patchwork Tue Feb 21 09:26:40 2017
X-Patchwork-Submitter: Shijith Thotton
X-Patchwork-Id: 20598
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Shijith Thotton
To: dev@dpdk.org
Cc: Jerin Jacob, Derek Chickles, Venkat Koppula, Mallesham Jatharakonda
Date: Tue, 21 Feb 2017 14:56:40 +0530
Message-Id: <1487669225-30091-26-git-send-email-shijith.thotton@caviumnetworks.com>
In-Reply-To: <1487669225-30091-1-git-send-email-shijith.thotton@caviumnetworks.com>
References: <1487669225-30091-1-git-send-email-shijith.thotton@caviumnetworks.com>
Subject: [dpdk-dev] [PATCH 25/50] net/liquidio: add Rx data path
List-Id: DPDK patches and discussions

Add APIs to receive packets and re-fill ring buffers.

Signed-off-by: Shijith Thotton
Signed-off-by: Jerin Jacob
Signed-off-by: Derek Chickles
Signed-off-by: Venkat Koppula
Signed-off-by: Mallesham Jatharakonda
---
 drivers/net/liquidio/base/lio_hw_defs.h |  12 +
 drivers/net/liquidio/lio_ethdev.c       |   5 +
 drivers/net/liquidio/lio_rxtx.c         | 380 ++++++++++++++++++++++++++++++++
 drivers/net/liquidio/lio_rxtx.h         |  13 ++
 4 files changed, 410 insertions(+)

diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
index 912b8b9..94a21ad 100644
--- a/drivers/net/liquidio/base/lio_hw_defs.h
+++ b/drivers/net/liquidio/base/lio_hw_defs.h
@@ -112,12 +112,24 @@ enum octeon_tag_type {
 /* used for NIC operations */
 #define LIO_OPCODE	1
 
+/* Subcodes are used by host driver/apps to identify the sub-operation
+ * for the core. They only need to be unique for a given subsystem.
+ */
+#define LIO_OPCODE_SUBCODE(op, sub)	\
+		((((op) & 0x0f) << 8) | ((sub) & 0x7f))
+
 /** LIO_OPCODE subcodes */
 /* This subcode is sent by core PCI driver to indicate cores are ready. */
+#define LIO_OPCODE_NW_DATA	0x02 /* network packet data */
 #define LIO_OPCODE_IF_CFG	0x09
 
 #define LIO_MAX_RX_PKTLEN	(64 * 1024)
 
+/* RX (packets coming from wire) Checksum verification flags */
+/* TCP/UDP csum */
+#define LIO_L4_CSUM_VERIFIED	0x1
+#define LIO_IP_CSUM_VERIFIED	0x2
+
 /* Interface flags communicated between host driver and core app. */
 enum lio_ifflags {
 	LIO_IFFLAG_UNICAST	= 0x10
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index 29424c6..300baee 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -404,6 +404,8 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->mac_addrs);
 	eth_dev->data->mac_addrs = NULL;
 
+	eth_dev->rx_pkt_burst = NULL;
+
 	return 0;
 }
 
@@ -416,6 +418,8 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
+	eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
+
 	/* Primary does the initialization. */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -451,6 +455,7 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 		lio_dev_err(lio_dev,
 			    "MAC addresses memory allocation failed\n");
 		eth_dev->dev_ops = NULL;
+		eth_dev->rx_pkt_burst = NULL;
 		return -ENOMEM;
 	}
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
index 506a1db..7c6b446 100644
--- a/drivers/net/liquidio/lio_rxtx.c
+++ b/drivers/net/liquidio/lio_rxtx.c
@@ -326,6 +326,386 @@
 	return 0;
 }
 
+static inline uint32_t
+lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
+{
+	uint32_t buf_cnt = 0;
+
+	while (total_len > (buf_size * buf_cnt))
+		buf_cnt++;
+
+	return buf_cnt;
+}
+
+/* If we were not able to refill all buffers, try to move around
+ * the buffers that were not dispatched.
+ */
+static inline uint32_t
+lio_droq_refill_pullup_descs(struct lio_droq *droq,
+			     struct lio_droq_desc *desc_ring)
+{
+	uint32_t refill_index = droq->refill_idx;
+	uint32_t desc_refilled = 0;
+
+	while (refill_index != droq->read_idx) {
+		if (droq->recv_buf_list[refill_index].buffer) {
+			droq->recv_buf_list[droq->refill_idx].buffer =
+				droq->recv_buf_list[refill_index].buffer;
+			desc_ring[droq->refill_idx].buffer_ptr =
+				desc_ring[refill_index].buffer_ptr;
+			droq->recv_buf_list[refill_index].buffer = NULL;
+			desc_ring[refill_index].buffer_ptr = 0;
+			do {
+				droq->refill_idx = lio_incr_index(
+							droq->refill_idx, 1,
+							droq->max_count);
+				desc_refilled++;
+				droq->refill_count--;
+			} while (droq->recv_buf_list[droq->refill_idx].buffer);
+		}
+		refill_index = lio_incr_index(refill_index, 1,
+					      droq->max_count);
+	} /* while */
+
+	return desc_refilled;
+}
+
+/* lio_droq_refill
+ *
+ * @param lio_dev	- pointer to the lio device structure
+ * @param droq		- droq in which descriptors require new buffers.
+ *
+ * Description:
+ *  Called during normal DROQ processing in interrupt mode or by the poll
+ *  thread to refill the descriptors from which buffers were dispatched
+ *  to upper layers. Attempts to allocate new buffers. If that fails, moves
+ *  up buffers (that were not dispatched) to form a contiguous ring.
+ *
+ * Returns:
+ *  No. of descriptors refilled.
+ *
+ * Locks:
+ *  This routine is called with droq->lock held.
+ */
+static uint32_t
+lio_droq_refill(struct lio_device *lio_dev, struct lio_droq *droq)
+{
+	struct lio_droq_desc *desc_ring;
+	uint32_t desc_refilled = 0;
+	void *buf = NULL;
+
+	desc_ring = droq->desc_ring;
+
+	while (droq->refill_count && (desc_refilled < droq->max_count)) {
+		/* If a valid buffer exists (happens if there is no dispatch),
+		 * reuse the buffer, else allocate.
+		 */
+		if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
+			buf = lio_recv_buffer_alloc(lio_dev, droq->q_no);
+			/* If a buffer could not be allocated, no point in
+			 * continuing
+			 */
+			if (buf == NULL)
+				break;
+
+			droq->recv_buf_list[droq->refill_idx].buffer = buf;
+		}
+
+		desc_ring[droq->refill_idx].buffer_ptr =
+			lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
+		/* Reset any previous values in the length field. */
+		droq->info_list[droq->refill_idx].length = 0;
+
+		droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
+						  droq->max_count);
+		desc_refilled++;
+		droq->refill_count--;
+	}
+
+	if (droq->refill_count)
+		desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
+
+	/* if droq->refill_count
+	 * The refill count would not change in pass two. We only moved buffers
+	 * to close the gap in the ring, but we would still have the same no. of
+	 * buffers to refill.
+	 */
+	return desc_refilled;
+}
+
+static int
+lio_droq_fast_process_packet(struct lio_device *lio_dev,
+			     struct lio_droq *droq,
+			     struct rte_mbuf **rx_pkts)
+{
+	struct rte_mbuf *nicbuf = NULL;
+	struct lio_droq_info *info;
+	uint32_t total_len = 0;
+	int data_total_len = 0;
+	uint32_t pkt_len = 0;
+	union octeon_rh *rh;
+	int data_pkts = 0;
+
+	info = &droq->info_list[droq->read_idx];
+	lio_swap_8B_data((uint64_t *)info, 2);
+
+	if (!info->length)
+		return -1;
+
+	/* Len of resp hdr is included in the received data len.
+	 */
+	info->length -= OCTEON_RH_SIZE;
+	rh = &info->rh;
+
+	total_len += (uint32_t)info->length;
+
+	if (lio_opcode_slow_path(rh)) {
+		uint32_t buf_cnt;
+
+		buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
+						(uint32_t)info->length);
+		droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
+						droq->max_count);
+		droq->refill_count += buf_cnt;
+	} else {
+		if (info->length <= droq->buffer_size) {
+			if (rh->r_dh.has_hash)
+				pkt_len = (uint32_t)(info->length - 8);
+			else
+				pkt_len = (uint32_t)info->length;
+
+			nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
+			droq->recv_buf_list[droq->read_idx].buffer = NULL;
+			droq->read_idx = lio_incr_index(
+						droq->read_idx, 1,
+						droq->max_count);
+			droq->refill_count++;
+
+			if (likely(nicbuf != NULL)) {
+				nicbuf->data_off = RTE_PKTMBUF_HEADROOM;
+				nicbuf->nb_segs = 1;
+				nicbuf->next = NULL;
+				/* We don't have a way to pass flags yet */
+				nicbuf->ol_flags = 0;
+				if (rh->r_dh.has_hash) {
+					uint64_t *hash_ptr;
+
+					nicbuf->ol_flags |= PKT_RX_RSS_HASH;
+					hash_ptr = rte_pktmbuf_mtod(nicbuf,
+								    uint64_t *);
+					lio_swap_8B_data(hash_ptr, 1);
+					nicbuf->hash.rss = (uint32_t)*hash_ptr;
+					nicbuf->data_off += 8;
+				}
+
+				nicbuf->pkt_len = pkt_len;
+				nicbuf->data_len = pkt_len;
+				nicbuf->port = lio_dev->port_id;
+				/* Store the mbuf */
+				rx_pkts[data_pkts++] = nicbuf;
+				data_total_len += pkt_len;
+			}
+
+			/* Prefetch buffer pointers when on a cache line
+			 * boundary
+			 */
+			if ((droq->read_idx & 3) == 0) {
+				rte_prefetch0(
+				    &droq->recv_buf_list[droq->read_idx]);
+				rte_prefetch0(
+				    &droq->info_list[droq->read_idx]);
+			}
+		} else {
+			struct rte_mbuf *first_buf = NULL;
+			struct rte_mbuf *last_buf = NULL;
+
+			while (pkt_len < info->length) {
+				int cpy_len = 0;
+
+				cpy_len = ((pkt_len + droq->buffer_size) >
+						info->length)
+						?
+						((uint32_t)info->length -
+						 pkt_len)
+						: droq->buffer_size;
+
+				nicbuf =
+				    droq->recv_buf_list[droq->read_idx].buffer;
+				droq->recv_buf_list[droq->read_idx].buffer =
+				    NULL;
+
+				if (likely(nicbuf != NULL)) {
+					/* Note the first seg */
+					if (!pkt_len)
+						first_buf = nicbuf;
+
+					nicbuf->data_off = RTE_PKTMBUF_HEADROOM;
+					nicbuf->nb_segs = 1;
+					nicbuf->next = NULL;
+					nicbuf->port = lio_dev->port_id;
+					/* We don't have a way to pass
+					 * flags yet
+					 */
+					nicbuf->ol_flags = 0;
+					if ((!pkt_len) && (rh->r_dh.has_hash)) {
+						uint64_t *hash_ptr;
+
+						nicbuf->ol_flags |=
+						    PKT_RX_RSS_HASH;
+						hash_ptr = rte_pktmbuf_mtod(
+						    nicbuf, uint64_t *);
+						lio_swap_8B_data(hash_ptr, 1);
+						nicbuf->hash.rss =
+						    (uint32_t)*hash_ptr;
+						nicbuf->data_off += 8;
+						nicbuf->pkt_len = cpy_len - 8;
+						nicbuf->data_len = cpy_len - 8;
+					} else {
+						nicbuf->pkt_len = cpy_len;
+						nicbuf->data_len = cpy_len;
+					}
+
+					if (pkt_len)
+						first_buf->nb_segs++;
+
+					if (last_buf)
+						last_buf->next = nicbuf;
+
+					last_buf = nicbuf;
+				} else {
+					PMD_RX_LOG(lio_dev, ERR, "no buf\n");
+				}
+
+				pkt_len += cpy_len;
+				droq->read_idx = lio_incr_index(
+							droq->read_idx,
+							1, droq->max_count);
+				droq->refill_count++;
+
+				/* Prefetch buffer pointers when on a
+				 * cache line boundary
+				 */
+				if ((droq->read_idx & 3) == 0) {
+					rte_prefetch0(&droq->recv_buf_list
+						      [droq->read_idx]);
+
+					rte_prefetch0(
+					    &droq->info_list[droq->read_idx]);
+				}
+			}
+			rx_pkts[data_pkts++] = first_buf;
+			if (rh->r_dh.has_hash)
+				data_total_len += (pkt_len - 8);
+			else
+				data_total_len += pkt_len;
+		}
+
+		/* Inform upper layer about packet checksum verification */
+		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
+
+		if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
+			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
+			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+	}
+
+	if (droq->refill_count >= droq->refill_threshold) {
+		int desc_refilled = lio_droq_refill(lio_dev, droq);
+
+		/* Flush the droq descriptor data to memory to be
+		 * sure that when we update the credits the data in memory
+		 * is accurate.
+		 */
+		rte_wmb();
+		rte_write32(desc_refilled, droq->pkts_credit_reg);
+		/* make sure mmio write completes */
+		rte_wmb();
+	}
+
+	info->length = 0;
+	info->rh.rh64 = 0;
+
+	return data_pkts;
+}
+
+static uint32_t
+lio_droq_fast_process_packets(struct lio_device *lio_dev,
+			      struct lio_droq *droq,
+			      struct rte_mbuf **rx_pkts,
+			      uint32_t pkts_to_process)
+{
+	int ret, data_pkts = 0;
+	uint32_t pkt;
+
+	for (pkt = 0; pkt < pkts_to_process; pkt++) {
+		ret = lio_droq_fast_process_packet(lio_dev, droq,
+						   &rx_pkts[data_pkts]);
+		if (ret < 0) {
+			lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
+				    lio_dev->port_id, droq->q_no,
+				    droq->read_idx, pkts_to_process);
+			break;
+		}
+		data_pkts += ret;
+	}
+
+	rte_atomic64_sub(&droq->pkts_pending, pkt);
+
+	return data_pkts;
+}
+
+static inline uint32_t
+lio_droq_check_hw_for_pkts(struct lio_droq *droq)
+{
+	uint32_t last_count;
+	uint32_t pkt_count;
+
+	pkt_count = rte_read32(droq->pkts_sent_reg);
+
+	last_count = pkt_count - droq->pkt_count;
+	droq->pkt_count = pkt_count;
+
+	if (last_count)
+		rte_atomic64_add(&droq->pkts_pending, last_count);
+
+	return last_count;
+}
+
+uint16_t
+lio_dev_recv_pkts(void *rx_queue,
+		  struct rte_mbuf **rx_pkts,
+		  uint16_t budget)
+{
+	struct lio_droq *droq = rx_queue;
+	struct lio_device *lio_dev = droq->lio_dev;
+	uint32_t pkts_processed = 0;
+	uint32_t pkt_count = 0;
+
+	lio_droq_check_hw_for_pkts(droq);
+
+	pkt_count = rte_atomic64_read(&droq->pkts_pending);
+	if (!pkt_count)
+		return 0;
+
+	if (pkt_count > budget)
+		pkt_count = budget;
+
+	/* Grab the lock */
+	rte_spinlock_lock(&droq->lock);
+	pkts_processed = lio_droq_fast_process_packets(lio_dev,
+						       droq, rx_pkts,
+						       pkt_count);
+
+	if (droq->pkt_count) {
+		rte_write32(droq->pkt_count, droq->pkts_sent_reg);
+		droq->pkt_count = 0;
+	}
+
+	/* Release the spin lock */
+	rte_spinlock_unlock(&droq->lock);
+
+	return pkts_processed;
+}
+
 /**
 * lio_init_instr_queue()
 * @param lio_dev	- pointer to the lio device structure.
diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
index 4d742de..ccf9ca3 100644
--- a/drivers/net/liquidio/lio_rxtx.h
+++ b/drivers/net/liquidio/lio_rxtx.h
@@ -515,6 +515,17 @@ enum {
 	return (uint64_t)dma_addr;
 }
 
+static inline int
+lio_opcode_slow_path(union octeon_rh *rh)
+{
+	uint16_t subcode1, subcode2;
+
+	subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
+	subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
+
+	return subcode2 != subcode1;
+}
+
 /* Macro to increment index.
  * Index is incremented by count; if the sum exceeds
  * max, index is wrapped-around to the start.
  */
@@ -533,6 +544,8 @@ enum {
 int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
 		   int desc_size, struct rte_mempool *mpool,
 		   unsigned int socket_id);
+uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t budget);
 
 /** Setup instruction queue zero for the device
  * @param lio_dev which lio device to setup